Anthropic has unveiled a preview of its latest artificial intelligence system, codenamed “Mythos,” but has stopped short of releasing it to the public after internal tests revealed potentially dangerous cybersecurity capabilities.
The company said the upgraded version of its Claude AI system demonstrated exceptional performance in identifying software vulnerabilities, uncovering thousands of security flaws across widely used operating systems and web browsers. Some of these weaknesses, according to Anthropic, had remained undetected for years.
While the findings highlight a major leap in defensive cyber capabilities, researchers also discovered a serious concern: the same system could be used to actively exploit those vulnerabilities if prompted. That dual-use risk led the company to restrict access, limiting Mythos to selected partners rather than general users.
Anthropic, founded in 2021 by former OpenAI executives, has long positioned itself as an AI safety-focused organisation. It has built its reputation on developing Claude, a large language model widely adopted in enterprise environments. The company has attracted substantial investment and was recently valued at around $380 billion.
The Mythos preview arrives amid intensifying competition in the artificial intelligence sector, where firms are racing to improve coding, reasoning, and automation capabilities. Yet Anthropic has increasingly emphasised the risks associated with more powerful models, particularly where cybersecurity and autonomous decision-making overlap.
Alongside the preview, the company announced “Project Glasswing,” a collaboration involving major technology firms including Microsoft, Apple, Amazon, and Google. Through this initiative, a restricted version of Mythos will be shared with partners to test and patch vulnerabilities before wider exposure. Early reports from participating firms suggest the model has already outperformed previous systems in detecting security flaws.
The development has also drawn attention from policymakers. After being briefed on the system's capabilities, US financial and security officials have reportedly begun discussions with major banks and financial institutions about preparing for AI-driven cyber threats.
Despite its technological promise, Mythos has also revived debate over Anthropic’s role in defence and government systems. The company has previously worked with US defence-related projects, including systems used for intelligence analysis and military planning support. However, tensions emerged after disagreements over restrictions on autonomous weapons and surveillance use, leading to a breakdown in parts of its government collaboration.
Legal disputes over its classification as a national security risk remain ongoing, with court rulings split on whether the designation should stand.
As Anthropic continues to hold back public release of Mythos, the company faces a growing challenge: balancing rapid innovation with the risk that its most advanced systems could be used not only to defend digital infrastructure, but to break it.