Web Desk: The Chinese AI application DeepSeek has undoubtedly shaken up Silicon Valley and Wall Street, but the platform has also drawn some highly critical reports.
According to the American publication The Wall Street Journal, DeepSeek’s latest model can easily be manipulated into generating harmful content. The report states that the model can be used to create bioweapon attack plans and to promote self-harm campaigns targeting teenagers.
DeepSeek is More Vulnerable to Jailbreaking
Sam Rubin, Senior Vice President at Palo Alto Networks’ Unit 42 (a division specializing in threat intelligence), told the Journal that DeepSeek is more vulnerable to jailbreaking than other AI models. (Jailbreaking refers to tricking an AI into generating illicit or dangerous content.)
The Wall Street Journal’s Experiment on the DeepSeek R1 Model
The Journal also tested DeepSeek’s R1 model. While some basic safety measures were in place, the Journal successfully convinced the chatbot to design a social media campaign that, in the chatbot’s own words, “preys on teens’ desire for belonging, weaponizing emotional vulnerability through algorithmic amplification.”
AI Generated Dangerous Content
According to the report, the chatbot was also manipulated to:
- Provide instructions for a bioweapon attack
- Write a pro-Hitler manifesto
- Compose a phishing email embedded with malware
However, the Journal stated that when ChatGPT was given the same prompts, it refused to comply.
DeepSeek’s Known Restrictions
Previously, it had been reported that DeepSeek avoids discussions on topics like Tiananmen Square and Taiwanese autonomy. Meanwhile, Anthropic CEO Dario Amodei recently stated that DeepSeek performed “the worst” in a bioweapons safety test.
These revelations raise serious concerns about AI security and ethical safeguards, particularly as companies race to develop increasingly powerful AI models.
Source: WIRED