
    Google launches new Gemma 2 models, smaller and safer

By Vicky Wu | August 1, 2024
    • Google has unveiled three new Gemma 2 models, including the lightweight Gemma 2 2B, which outperforms much larger models despite its small parameter count.
    • Alongside it, ShieldGemma acts as a safety classifier and Gemma Scope offers enhanced interpretability. Together the models are designed to advance safer, more efficient, and more transparent artificial intelligence.

    OUR TAKE
    Google’s new Gemma 2 models exemplify a shift towards more efficient, safe, and transparent AI. By demonstrating that smaller models can outperform larger ones, Google challenges the status quo and promotes responsible AI development. The focus on safety and interpretability is commendable, fostering trust and broader adoption.
    –Vicky Wu, BTW reporter

    What happened

    Google has released three new models in its Gemma 2 family of generative AI, touted as smaller, safer, and more transparent than their peers. The new models, Gemma 2 2B, ShieldGemma, and Gemma Scope, are designed to cater to various applications while prioritising safety and interpretability.

    Gemma 2 2B is a lightweight large language model (LLM) optimised to run on local devices and licensed for both research and commercial use. Despite having only 2.6 billion parameters, Gemma 2 2B outperforms larger models like OpenAI’s GPT-3.5 and Mistral AI’s Mixtral 8x7B, as evidenced by independent evaluations. Rather than relying on sheer scale, Gemma 2 2B showcases the effectiveness of advanced training techniques, superior architectures, and high-quality data. Google hopes this will encourage a shift towards refining models rather than increasing their size, highlighting the importance of model compression and distillation for more accessible AI with lower computational demands.
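
    For developers who want to experiment, below is a minimal sketch of running Gemma 2 2B locally with the Hugging Face Transformers library. The checkpoint name google/gemma-2-2b-it and the generation settings are assumptions on our part rather than details from Google's announcement; check the official model card for the exact identifier and licence terms.

```python
# Minimal sketch: running a Gemma 2 2B instruction-tuned checkpoint locally.
# Assumes the `transformers` and `accelerate` packages are installed and that
# the checkpoint name below matches the one published on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b-it"  # assumed model identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain model distillation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```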

    ShieldGemma is a collection of safety classifiers that detect toxic content, such as hate speech and sexually explicit material, and can be used to filter both prompts and generated output. Gemma Scope improves transparency by letting developers examine specific aspects of the Gemma 2 models, making their inner workings more interpretable.
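
    As an illustration of how a safety classifier like ShieldGemma might sit in front of a generative model, here is a rough sketch of the pattern. The checkpoint name google/shieldgemma-2b, the policy wording, and the Yes/No scoring convention are assumptions rather than details from the announcement, so consult the model card for the exact prompt template.

```python
# Rough sketch: gating user prompts with a ShieldGemma-style safety classifier
# before they reach the main model. Checkpoint name and prompt format are assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

clf_id = "google/shieldgemma-2b"  # assumed classifier checkpoint
tokenizer = AutoTokenizer.from_pretrained(clf_id)
classifier = AutoModelForCausalLM.from_pretrained(clf_id, device_map="auto")

def is_unsafe(user_prompt: str) -> bool:
    """Return True if the classifier judges the prompt to violate the policy."""
    question = (
        "You are a policy expert. Does the following request violate a policy "
        "against hate speech or sexually explicit content?\n\n"
        f"{user_prompt}\n\nAnswer Yes or No:"
    )
    inputs = tokenizer(question, return_tensors="pt").to(classifier.device)
    with torch.no_grad():
        logits = classifier(**inputs).logits[0, -1]
    # Compare the scores of the "Yes" and "No" tokens; the exact token strings
    # may need adjusting for the real tokenizer's vocabulary.
    yes_id = tokenizer.convert_tokens_to_ids("Yes")
    no_id = tokenizer.convert_tokens_to_ids("No")
    return bool(logits[yes_id] > logits[no_id])

if is_unsafe("example user prompt"):
    print("Prompt blocked by safety classifier.")
else:
    print("Prompt allowed through to the main model.")
```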

    Also read: Apple employs Google’s chips for AI model training

    Also read: UK antitrust body examines Google’s partnership with Anthropic

    Why it’s important

    These releases come shortly after the U.S. Commerce Department endorsed open AI models, highlighting the benefits of broadening generative AI’s accessibility. The new models demonstrate that smaller parameter sizes can achieve competitive performance through advanced training techniques and high-quality data, challenging the notion that larger models always perform better.

    Google’s Gemma 2 models foster goodwill within the AI community, much as Meta’s Llama models have, by making model weights openly available. The emphasis on safety and interpretability is crucial as organisations increasingly adopt AI technologies, ensuring responsible and ethical deployment.

    The availability of these models marks a significant step towards more accessible and accountable AI, potentially leading to a shift in industry focus from sheer size to refinement and optimisation.

    Vicky Wu

    Vicky is an intern reporter at Blue Tech Wave specialising in AI and Blockchain. She graduated from Dalian University of Foreign Languages. Send tips to v.wu@btw.media.
