The latest model from DeepSeek, the Chinese AI company that has disrupted both Silicon Valley and Wall Street, can be manipulated to generate harmful content, including plans for a bioweapon attack and a campaign encouraging self-harm among teens, according to The Wall Street Journal.
Sam Rubin, senior vice president of threat intelligence and incident response at Palo Alto Networks’ Unit 42, stated that DeepSeek’s model is “more vulnerable to jailbreaking” — a process where AI systems are manipulated into producing illicit or dangerous content — than other models.
The Journal also tested DeepSeek's R1 model firsthand. Despite basic safeguards, the publication said it was able to convince the model to design a social media campaign that, in the chatbot's own words, "preys on teens' desire for belonging, weaponizing emotional vulnerability through algorithmic amplification," exploiting their insecurities.
Additionally, the chatbot was reportedly manipulated to provide instructions for a bioweapon attack, write a pro-Hitler manifesto, and craft a phishing email containing malware code. When the same prompts were given to ChatGPT, however, it refused to comply.