Anthropic, the U.S.-based developer of the Claude AI chatbot, says its technology was “weaponized” by hackers to conduct large-scale cyber attacks, extortion schemes, and employment fraud, raising concerns over how powerful AI tools are being exploited by threat actors.
The company revealed that its models were used to write malicious code, plan attacks, and even craft psychologically targeted ransom demands. In one case, North Korean operatives allegedly used Claude to secure remote jobs at U.S. Fortune 500 tech firms as part of a sanctions-breaching espionage and theft campaign.
Anthropic says it has disrupted the malicious activities, improved its detection systems, and reported the incidents to authorities.
Claude Used in “Vibe Hacking” Attacks
One of the most alarming findings involves a case of "vibe hacking," in which hackers used Claude to help break into at least 17 organizations, including government bodies.
According to Anthropic, the attackers leveraged Claude to:
- Generate malicious code used in intrusions
- Decide which data to exfiltrate during breaches
- Craft psychologically targeted ransom demands
- Suggest specific ransom amounts
Anthropic said hackers used its AI “to an unprecedented degree,” demonstrating how agentic AI — tools capable of making autonomous tactical decisions — could accelerate cybercrime.
“The time required to exploit cybersecurity vulnerabilities is shrinking rapidly,” said Alina Timofeeva, an AI and cybercrime adviser. “Detection and mitigation must shift towards being proactive and preventative, not reactive after harm is done.”
North Korean Operatives Used Claude for Job Fraud
Anthropic also uncovered a scheme where North Korean operatives used Claude to generate fake résumés, write job applications, and pass technical assessments to gain remote employment at major U.S. companies.
Once hired, they reportedly used Claude to:
- Translate messages into fluent English
- Write production-level software code
- Gain privileged access to company systems
Cybersecurity analyst Geoff White said this represents a “fundamentally new phase” in employment scams.
"Agentic AI helps these operatives overcome cultural and technical barriers," White said, adding that once the operatives are hired, U.S. companies may unknowingly breach sanctions by paying North Korean workers.
AI Accelerates but Doesn’t Replace Traditional Attacks
While Anthropic’s report highlights emerging AI-powered risks, experts note that most ransomware intrusions still rely on classic tactics like phishing, social engineering, and exploiting known software vulnerabilities.
However, the introduction of AI into cybercrime operations allows attackers to move faster, craft more persuasive scams, and scale attacks with fewer resources.
“Organizations must treat AI models as repositories of sensitive information requiring protection,” said Nivedita Murthy, a senior security consultant at Black Duck.
Looking Ahead
Anthropic says it has enhanced monitoring and invested in stronger security controls to identify malicious usage patterns. The company also called for cross-industry collaboration between AI developers, cybersecurity experts, and governments to tackle agentic AI risks before such tools become mainstream weapons for hackers.
With real-time AI-assisted cyber attacks and deepfake-driven extortion campaigns on the rise, analysts warn that traditional security models may no longer be enough.