Google launches safety-focused ‘open’ AI models

  • Google has unveiled a trio of generative AI models that prioritise safety, transparency, and ease of use, marking a significant step in the development of open model technologies.  
  • These models aim to foster collaboration within the developer community while addressing the growing concerns surrounding AI safety and ethical use.

OUR TAKE
These new models signal Google’s commitment to building goodwill with the developer community, offering tools that are accessible for research and commercial applications while addressing safety concerns in AI.

- Lilith Chen, BTW reporter

What happened  

Google, in its latest effort to enhance the safety of generative AI, has introduced the new Gemma 2 models: Gemma 2 2B, ShieldGemma, and Gemma Scope. These models expand on the Gemma 2 family launched in May and are designed for diverse applications while focusing on user safety and transparency.

Gemma 2 2B is a lightweight model capable of running on a range of hardware, including laptops and edge devices. It can be downloaded from platforms such as Google’s Vertex AI model library and Kaggle, putting it within reach of a wide range of developers. ShieldGemma is a suite of safety classifiers that detect and filter harmful content, including hate speech, harassment, and sexually explicit material, helping developers build safer AI interactions. Gemma Scope, meanwhile, gives developers visibility into the inner workings of the Gemma 2 models, showing how they process information and arrive at predictions, in support of more responsible AI development.
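
To make the “accessible to developers” claim concrete, here is a minimal sketch of how one might load and query Gemma 2 2B with the Hugging Face transformers library. The checkpoint name google/gemma-2-2b-it, the dtype choice, and the prompt are illustrative assumptions rather than details from this article; Google’s model cards describe a similar load-and-prompt pattern for the ShieldGemma classifiers.

```python
# Minimal sketch: running Gemma 2 2B locally via Hugging Face transformers.
# The checkpoint id "google/gemma-2-2b-it" is an assumed instruction-tuned
# release; downloading it typically requires accepting Google's Gemma licence.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b-it"  # assumption, not confirmed by the article
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 2B parameters keeps laptop-scale inference plausible
    device_map="auto",           # falls back to CPU when no GPU is available
)

# Build a chat-formatted prompt and generate a short completion.
messages = [{"role": "user", "content": "Explain open-weight AI models in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The small parameter count is the design choice that matters here: it is what lets the same script run on edge devices rather than requiring data-centre hardware.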

Also read: Apple employs Google’s chips for AI model training

Also read: Google’s Olympics AI ad sparks debate over authenticity

Why it’s important  

The release of these models aligns with a recent U.S. Commerce Department report advocating for open AI technologies, emphasising their potential benefits for smaller companies and researchers. The report argues that making advanced AI tools broadly accessible can spur innovation and sharpen competitiveness across sectors, while also calling for monitoring of AI models to mitigate the risks associated with their use and ensure they are employed responsibly.

By making these generative AI models accessible, Google aims to support innovation within the developer community while addressing critical safety concerns in AI applications. The emphasis on safety and transparency reflects a growing awareness of the ethical implications of AI technology. Google’s initiative not only facilitates broader participation in AI development but also encourages responsible practices that prioritise user safety. As generative AI continues to evolve, the company’s efforts will likely play a crucial role in shaping a safer, more equitable technological landscape.

Lilith Chen

Lilith Chen is an intern reporter at BTW Media covering artificial intelligence and fintech. She graduated from Zhejiang University of Technology. Send tips to l.chen@btw.media.
