- Anthropic backs Project Glasswing with $100m after its Claude model found thousands of high-severity software flaws.
- The multi-vendor approach signals a shift towards industry-wide AI security standards.
What happened
Project Glasswing brings leading tech firms together to test advanced AI security tools, after models exposed thousands of critical vulnerabilities.
AI company Anthropic has strengthened partnerships with Nvidia and Cisco through its new cybersecurity initiative, Project Glasswing.
The programme brings together major technology players, including Amazon Web Services and Microsoft (cloud), Google and Apple (platforms), Nvidia (compute), and Cisco (networking). It provides controlled access to Anthropic’s Claude Mythos Preview model for defensive security testing.
The model focuses on vulnerability detection and system hardening. Anthropic said the model has already identified thousands of high-severity flaws across widely used software and infrastructure.
Cisco contributes networking and security capabilities, while Nvidia supports the compute layer needed to run large-scale AI models. Together, the partners are testing how AI can secure complex enterprise environments.
Anthropic is backing the initiative with up to $100m in usage credits and additional funding for open-source security work. It also plans to extend participation to dozens of organisations managing critical infrastructure.
The effort goes beyond bilateral partnerships. It signals a coordinated, multi-vendor approach to embedding security directly into AI development and deployment.
Why it’s important
AI security is entering a coalition phase, where leading vendors align to define standards for enterprise deployment and risk control.
Project Glasswing points to the formation of an “AI security alliance” in practice. Leading vendors are no longer acting alone; they are coordinating across the compute, networking and software layers.
This shift reflects rising systemic risk from generative AI. Vulnerabilities now span models, data pipelines and network infrastructure. No single vendor can manage this alone.
By aligning Nvidia’s compute, Cisco’s networking and Anthropic’s models, the initiative moves towards integrated security standards. These standards will likely shape how enterprises deploy and govern AI systems.
The approach also helps balance capability and control. Advanced models can detect vulnerabilities at scale, but require safeguards to prevent misuse.
In the long term, such alliances may define the competitive landscape. Companies embedded in these ecosystems will influence not just technology, but the rules governing AI security.
Also read: Broadcom and Google strike long-term deal on custom AI chips
Also read: Australia taps Anthropic for AI safety, economic data push
