Google launches safety-focused ‘open’ AI models

By Lilith Chen | August 1, 2024 | Updated: December 20, 2024
  • Google has unveiled a trio of generative AI models that prioritise safety, transparency, and ease of use, marking a significant step in the development of open model technologies.  
  • These models aim to foster collaboration within the developer community while addressing the growing concerns surrounding AI safety and ethical use.

OUR TAKE
These new models represent Google’s commitment to fostering goodwill in the developer community, offering tools that are accessible for research and commercial applications while addressing safety concerns in AI.  

-Lilith Chen, BTW reporter

What happened  

Google, in its latest effort to enhance the safety of generative AI, has introduced the new Gemma 2 models: Gemma 2 2B, ShieldGemma, and Gemma Scope. These models expand on the Gemma 2 family launched in May and are designed for diverse applications while focusing on user safety and transparency.

Gemma 2 2B is a lightweight model capable of running on a wide range of hardware, including laptops and edge devices. It can be downloaded from platforms such as Google's Vertex AI model library and Kaggle, making it accessible to a broad range of developers. ShieldGemma is a suite of safety classifiers that identify and filter harmful content, including hate speech, harassment, and sexually explicit material, helping to create safer AI interactions. Gemma Scope gives developers detailed insight into how the Gemma 2 models process data and form predictions, supporting more responsible AI development.
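The article names Google's Vertex AI model library and Kaggle as download sources. As a rough illustration of what running the 2-billion-parameter model locally could look like, the sketch below assumes the weights are also published on Hugging Face under the ID google/gemma-2-2b and loaded with the open-source transformers library; the model ID, library choice, and prompt are illustrative assumptions rather than details from the article, and access to Gemma weights typically requires accepting Google's usage terms on the hosting platform.

# Minimal sketch: load an assumed Hugging Face mirror of Gemma 2 2B and generate text.
# Assumed dependencies: pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b"  # assumed hub ID; the article cites Vertex AI and Kaggle
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenise a prompt, generate a short completion, and decode it back to text.
inputs = tokenizer("Explain what an 'open' AI model is.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Because the model is small enough to run on a laptop or edge device, the same few lines work without specialised accelerators, which is the accessibility point the article highlights.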

Also read: What is OpenAI?

Also read: Apple employs Google’s chips for AI model training

Also read: Google’s Olympics AI ad sparks debate over authenticity

Why it’s important  

The release of these models aligns with a recent U.S. Commerce Department report advocating for open AI technologies, emphasising their potential benefits for smaller companies and researchers. This report underscores the importance of making advanced AI tools accessible, as they can empower innovation and enhance competitiveness in various sectors. Additionally, it highlights the need for monitoring AI models to mitigate potential risks associated with their use, ensuring they are employed responsibly.

By making these generative AI models accessible, Google aims to support innovation within the developer community while addressing critical safety concerns in AI applications. The emphasis on safety and transparency reflects a growing awareness of the ethical implications of AI technology. Google’s initiative not only facilitates broader participation in AI development but also encourages responsible practices that prioritise user safety. As generative AI continues to evolve, the company’s efforts will likely play a crucial role in shaping a safer, more equitable technological landscape.

Lilith Chen

Lilith Chen is an intern reporter at BTW Media covering artificial intelligence and fintech. She graduated from Zhejiang University of Technology. Send tips to l.chen@btw.media.
