- Big Sleep, introduced in late 2024, has now added five new critical findings to its track record of pre-empting software vulnerabilities.
- The discoveries emphasise the growing role of AI in safeguarding open-source ecosystems and reducing exposure windows before exploits arise.
What happened: Big Sleep discovered five new vulnerabilities
As of November 2025, Google’s Big Sleep system has revealed five new vulnerabilities in open-source software libraries, flaws that had previously gone undetected by human analysts. The tool, which works by analysing extensive codebases and recognising patterns typical of security defects, flagged issues such as buffer overflows and improper input validation in widely used components.
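To give a sense of the defect classes involved, the sketch below shows a generic buffer-overflow pattern caused by missing input-length validation, alongside a checked variant. It is purely illustrative and does not come from SQLite or any of the libraries the tool examined.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative only: a generic unsafe-copy pattern of the kind that
 * vulnerability scanners look for. Not code from any affected library. */
void store_username_unsafe(const char *input) {
    char buf[16];
    /* Flaw: no check that input fits in buf, so any string of 16 or more
     * characters overwrites adjacent stack memory (a buffer overflow). */
    strcpy(buf, input);
    printf("stored: %s\n", buf);
}

/* Safer variant: validate the input length before copying. */
int store_username_checked(const char *input) {
    char buf[16];
    if (strlen(input) >= sizeof(buf)) {
        return -1; /* reject oversized input instead of overflowing */
    }
    strcpy(buf, input);
    printf("stored: %s\n", buf);
    return 0;
}

int main(void) {
    store_username_checked("alice");                      /* accepted */
    if (store_username_checked("a-very-long-user-name-here") != 0) {
        printf("rejected oversized input\n");
    }
    return 0;
}
```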
Big Sleep originally surfaced in 2024 and has since been credited with identifying more than 25 vulnerabilities in total. For example, it previously flagged a major flaw in SQLite (CVE-2025-6965) in real time, according to secondary sources.
Why it’s important
The significance of this development lies in AI’s transition from support-tool to active threat hunter in cybersecurity. By autonomously finding latent vulnerabilities before they are exploited, Big Sleep is helping to shift the balance away from reactive patching. For open-source software — which underpins much of the internet’s infrastructure — reducing the “window of exposure” is vital.
Moreover, the discovery of five fresh flaws further validates the notion that AI can scale security coverage much faster than traditional methods. Given the ever-increasing volume and complexity of code, human teams alone may struggle to keep pace. That said, reliance on AI also raises questions of transparency, potential false positives, and how organisations integrate such tools into their ecosystems securely.
In a broader sense, Google’s deployment of Big Sleep underlines how major tech firms are embedding AI deeper into their security fabric — not just for feature innovation, but to protect the digital infrastructure itself. As cyber-threats evolve, the combination of human expertise and AI vigilance may become the standard defence model rather than the exception.

