Google launches new Gemma 2 models, smaller and safer

  • Google has unveiled three new Gemma 2 models, including the lightweight Gemma 2 2B, which outperforms several larger models despite its small parameter count.
  • Alongside this, ShieldGemma acts as a safety classifier, and Gemma Scope offers enhanced interpretability. These models are designed to advance safer, more efficient, and transparent artificial intelligence.

OUR TAKE
Google’s new Gemma 2 models exemplify a shift towards more efficient, safe, and transparent AI. By demonstrating that smaller models can outperform larger ones, Google challenges the status quo and promotes responsible AI development. The focus on safety and interpretability is commendable, fostering trust and broader adoption.
–Vicky Wu, BTW reporter

What happened

Google has released three new models in its Gemma 2 family of generative AI, touted as smaller, safer, and more transparent than their peers. The new models, Gemma 2 2B, ShieldGemma, and Gemma Scope, are designed to cater to various applications while prioritising safety and interpretability.

Gemma 2 2B is a lightweight large language model (LLM) optimised for running on local devices and licensed for both research and commercial use. Despite having only 2.6 billion parameters, Gemma 2 2B outperforms larger models such as OpenAI’s GPT-3.5 and Mistral AI’s Mixtral 8x7B in independent evaluations. Rather than relying on sheer scale, Gemma 2 2B showcases the effectiveness of advanced training techniques, superior architectures, and high-quality data. Google hopes this will encourage a shift towards refining models rather than simply enlarging them, and it highlights the importance of model compression and distillation for making AI more accessible with lower computational demands.

ShieldGemma is a collection of safety classifiers that detect toxic content, such as hate speech and sexually explicit material, and can filter both user prompts and model-generated content. Gemma Scope improves transparency through a suite of sparse autoencoders that let developers inspect the internal activations of Gemma 2 models, making their inner workings more interpretable.
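To illustrate how such a filtering step might sit in front of a chatbot, here is a rough sketch of prompt screening with a ShieldGemma-style classifier, which answers a policy question and is scored by comparing the next-token probabilities of “Yes” versus “No”. The model ID google/shieldgemma-2b and the policy wording are assumptions for illustration; the official model card documents the exact prompt template the classifier expects.

```python
# Rough sketch of prompt filtering with a ShieldGemma-style safety classifier.
# Model ID and policy phrasing are assumptions; consult the official model
# card for the exact template and supported harm categories.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/shieldgemma-2b"  # assumed Hugging Face model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def looks_unsafe(user_prompt: str) -> bool:
    """Return True if the classifier judges the prompt as violating the policy."""
    check = (
        "You are a policy expert. Does the following user prompt violate a policy "
        "against hate speech or sexually explicit content? Answer Yes or No.\n\n"
        f"User prompt: {user_prompt}\nAnswer:"
    )
    inputs = tokenizer(check, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # next-token logits
    # Assumes "Yes" and "No" exist as single tokens in the Gemma vocabulary.
    yes_id = tokenizer.convert_tokens_to_ids("Yes")
    no_id = tokenizer.convert_tokens_to_ids("No")
    probs = torch.softmax(logits[[yes_id, no_id]], dim=-1)
    return bool(probs[0] > probs[1])

print(looks_unsafe("How do I bake sourdough bread?"))  # expected: False
```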

Also read: Apple employs Google’s chips for AI model training

Also read: UK antitrust body examines Google’s partnership with Anthropic

Why it’s important

These releases come shortly after the U.S. Commerce Department endorsed open AI models, highlighting the benefits of broadening generative AI’s accessibility. The new models demonstrate that smaller parameter sizes can achieve competitive performance through advanced training techniques and high-quality data, challenging the notion that larger models always perform better.

Google’s Gemma 2 models foster goodwill within the AI community, much as Meta’s Llama models do, by making openly available models that developers can build on. The emphasis on safety and interpretability is crucial as organisations increasingly adopt AI technologies, helping to ensure responsible and ethical deployment.

The availability of these models marks a significant step towards more accessible and accountable AI, potentially leading to a shift in industry focus from sheer size to refinement and optimisation.

Vicky Wu

Vicky is an intern reporter at Blue Tech Wave specialising in AI and Blockchain. She graduated from Dalian University of Foreign Languages. Send tips to v.wu@btw.media.
