Close Menu
    Facebook LinkedIn YouTube Instagram X (Twitter)
    Blue Tech Wave Media
    Facebook LinkedIn YouTube Instagram X (Twitter)
    • Home
    • Leadership Alliance
    • Exclusives
    • Internet Governance
      • Regulation
      • Governance Bodies
      • Emerging Tech
    • IT Infrastructure
      • Networking
      • Cloud
      • Data Centres
    • Company Stories
      • Profiles
      • Startups
      • Tech Titans
      • Partner Content
    • Others
      • Fintech
        • Blockchain
        • Payments
        • Regulation
      • Tech Trends
        • AI
        • AR/VR
        • IoT
      • Video / Podcast
    Blue Tech Wave Media
    Home » AI ‘worm’ raises alarm on cybersecurity vulnerabilities
    ai worm
    ai worm
    AI

    AI ‘worm’ raises alarm on cybersecurity vulnerabilities

    By cherry qiuMarch 2, 2024No Comments3 Mins Read
    Share
    Facebook Twitter LinkedIn Pinterest Email
    • Researchers have created an AI ‘worm’ that can infiltrate generative AI email assistants like ChatGPT and Gemini, breaching security measures to extract sensitive information and distribute spam messages.
    • Researchers showcased how these worms could compromise data integrity and propagate themselves to unsuspecting users.

    A group of researchers unveiled what they claim to be one of the first generative AI worms, capable of spreading across systems to potentially pilfer data or propagate malware. This breakthrough underscores the vulnerabilities inherent in connected AI ecosystems and serves as a stark warning for tech companies and developers harnessing these technologies.

    Exploiting vulnerabilities in AI systems

    The brainchild of researchers Ben Nassi, Stav Cohen, and Ron Bitton, the AI worm named Morris II pays homage to the disruptive Morris computer worm of 1988. By leveraging an “adversarial self-replicating prompt”, the researchers demonstrated how this malicious entity could infiltrate generative AI email assistants like ChatGPT and Gemini, breaching security measures to extract sensitive information and distribute spam messages. This revelation sheds light on the nascent threat posed by generative AI worms, a menace that experts believe could have far-reaching implications if left unchecked.

    The researchers’ methodology involved exploiting vulnerabilities in the AI systems by injecting self-replicating prompts, both in text form and embedded within image files. Through a series of simulated attacks on an email system integrated with various generative AI models, the team showcased how these worms could compromise data integrity and propagate themselves to unsuspecting users. By coercing the AI to generate further instructions in its responses, akin to traditional cyberattacks like SQL injection, the researchers highlighted the potential magnitude of this new breed of threat.

    Also read: Cyberattack on Change Healthcare sparks concerns over security

    Alerting major players and calls for action

    While the research was conducted in controlled environments and not against publicly available platforms, the implications are profound. As large language models continue to evolve and diversify into multimodal capabilities encompassing images and videos, the scope for exploitation widens. The emergence of generative AI worms underscores the imperative for robust cybersecurity measures within the AI ecosystem, urging industry players to fortify their defenses against novel threats.

    Also read: ChatGPT went down due to DDoS attack, not its popularity

    Urgency for robust cybersecurity measures

    In response to the findings, major players in the AI domain such as Google and OpenAI have been alerted. While OpenAI acknowledges the vulnerability and vows to enhance resilience against such attacks, Google has remained tight-lipped on the matter. The research serves as a clarion call for vigilance and proactive security measures within the AI landscape, emphasizing the critical need for secure application design and vigilant monitoring to thwart potential breaches.

    As the specter of generative AI worms looms on the horizon, experts caution that the future risk posed by these entities is a tangible concern. With the proliferation of AI applications entrusted to execute tasks autonomously, the potential for malicious actors to exploit loopholes is a pressing reality. The onus lies on developers and industry stakeholders to stay ahead of the curve, implementing stringent security protocols and safeguards to mitigate the looming threat of generative AI worms infiltrating the wild in the coming years.

    AI AI worm OpenAi
    cherry qiu

    Cherry Qiu was an intern reporter at BTW media covering AI. She majored in journalism and has various working experiences.

    Related Posts

    CoreWeave acquires Core Scientific in $9bn AI infrastructure deal

    July 9, 2025

    OpenAI tightens security amid DeepSeek ‘copy’ allegations

    July 9, 2025

    Comcast moves more data with less energy used

    July 9, 2025
    Add A Comment
    Leave A Reply Cancel Reply

    CATEGORIES
    Archives
    • July 2025
    • June 2025
    • May 2025
    • April 2025
    • March 2025
    • February 2025
    • January 2025
    • December 2024
    • November 2024
    • October 2024
    • September 2024
    • August 2024
    • July 2024
    • June 2024
    • May 2024
    • April 2024
    • March 2024
    • February 2024
    • January 2024
    • December 2023
    • November 2023
    • October 2023
    • September 2023
    • August 2023
    • July 2023

    Blue Tech Wave (BTW.Media) is a future-facing tech media brand delivering sharp insights, trendspotting, and bold storytelling across digital, social, and video. We translate complexity into clarity—so you’re always ahead of the curve.

    BTW
    • About BTW
    • Contact Us
    • Join Our Team
    TERMS
    • Privacy Policy
    • Cookie Policy
    • Terms of Use
    Facebook X (Twitter) Instagram YouTube LinkedIn

    Type above and press Enter to search. Press Esc to cancel.