Experts warn that DeepSeek, a generative AI developed in China, has failed multiple security tests, raising concerns about the risks for users. The Silicon Valley security provider AppSOC discovered significant vulnerabilities, including the ability to jailbreak the AI and generate malware.
David Reid, a cybersecurity expert at Cedarville University, expressed alarm over the test results. “It failed a bunch of benchmarks where you could jailbreak it. You could, in some cases, generate actual malware, which is a big red flag,” he stated. Reid noted that these failures are particularly concerning because they open the door to the creation of harmful code.
AppSOC’s analysis assigned DeepSeek a risk score of 8.3 out of 10, recommending against its use in enterprise settings that involve sensitive data or intellectual property. Anjana Susarla, an expert in responsible AI at Michigan State University, echoed this sentiment, questioning whether organizations could manipulate DeepSeek to access confidential company information.
While DeepSeek may appear to perform on par with established AI models like ChatGPT, Susarla advised against deploying it in chatbots or customer-facing applications, asserting, “The answer is no.” Relying on such an untested model could expose organizations to significant security vulnerabilities.
Much of the heightened concern stems from DeepSeek’s Chinese origins, drawing comparisons to the controversy surrounding the social media platform TikTok, which has faced scrutiny from U.S. lawmakers over its data security practices. Members of the U.S. House recently announced plans to introduce legislation banning DeepSeek from government devices, citing the risk of data access by the Chinese Communist Party.
“This is a five-alarm national security fire,” declared U.S. Representative Josh Gottheimer of New Jersey. He emphasized the need to keep the app from infiltrating government employees’ devices, recalling past issues with TikTok.
As of now, countries including Italy and Australia have already banned DeepSeek for government use. Despite these efforts to manage the risks, cybersecurity experts such as Dimitri Sirota, CEO of BigID, acknowledge it may be difficult to deter average users from downloading and using the application; the appeal of new, popular technologies often outweighs caution.
Concerns about DeepSeek extend beyond its technical vulnerabilities to the geopolitical risks that come with software tied to China. Data collected by DeepSeek may be subject to Chinese law, which can compel the company to disclose user information at the government’s request.
Experts warn that the Chinese government possesses the capabilities to analyze data aggregated from DeepSeek along with other sources to potentially create profiles of American users. This situation parallels the apprehensions surrounding TikTok, where worries persisted about the Chinese Communist Party leveraging user data for intelligence purposes.
To mitigate risks when using DeepSeek or other AI models, cybersecurity experts recommend best practices such as using strong, unique passwords and enabling two-factor authentication. Users should also be cautious about sharing personal information with AI applications and remain skeptical of requests for sensitive data.
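As a minimal illustration of the “strong, unique password” advice above, the short Python sketch below uses the standard-library secrets module to generate a separate random password for each account; the service names shown are placeholders for illustration, not tied to any particular product.

import secrets
import string

def generate_password(length: int = 20) -> str:
    # Draw each character from letters, digits, and punctuation using a
    # cryptographically secure random source.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    # Use a distinct password per service instead of reusing one everywhere.
    for service in ("ai-chat-account", "email", "password-manager"):
        print(service, generate_password())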
Finally, users should carefully read the terms and conditions of any AI application before use, to understand data usage and sharing practices. Experts warn that applications based in China or other adversarial states should be treated with heightened scrutiny due to the potential risks associated with data privacy and security.