Google launches innovative AI models to improve safety

  • Google has introduced three new generative AI models, Gemma 2 2B, ShieldGemma, and Gemma Scope, which are designed to be safer, smaller, and more transparent than most existing models. 
  • These models are part of Google’s Gemma 2 family and are intended to foster goodwill within the developer community and address safety concerns in AI applications.

OUR TAKE
Google recently introduced three new generative AI models: Gemma 2 2B, ShieldGemma and Gemma Scope. These models are part of Google’s Gemma 2 family, which aims to build trust within the developer community and address safety in AI applications by providing safer, easier-to-understand AI technologies. The launch follows the US Department of Commerce’s preliminary report endorsing open AI models, which highlights the importance of open models for small companies, researchers, non-profits and individual developers, as well as the need to monitor these models for potential risks.

-Rae Li, BTW reporter 

What happened

Google has released three new generative AI models: Gemma 2 2B, ShieldGemma and Gemma Scope. Gemma 2 2B is a lightweight model for generating and analysing text that can run on a wide range of hardware, including laptops and edge devices. It is licensed for use in certain research and commercial applications and can be downloaded from Google’s Vertex AI model library, the data science platform Kaggle, and Google’s AI Studio toolkit (a brief sketch of running a model of this size locally appears after this section).

ShieldGemma is a series of “safety classifiers” built on top of Gemma 2 that are designed to detect harmful content such as hate speech, harassment and pornography. It can be used to filter both the prompts sent to a generative model and the content the model produces.

Finally, Gemma Scope allows developers to drill down into specific parts of the Gemma 2 model, making its inner workings more interpretable. In a blog post, Google describes Gemma Scope as consisting of specialised neural networks that help unravel the dense information Gemma 2 processes, expanding it into forms that are easier to analyse and understand. By examining these expanded views, researchers can gain valuable insight into how Gemma 2 recognises patterns, processes information and ultimately makes predictions.
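To give a sense of what “runs on a laptop” means in practice, below is a minimal, illustrative sketch of loading a 2B-parameter Gemma-class model with the Hugging Face transformers library. The model ID “google/gemma-2-2b-it”, the use of Hugging Face as a download channel and the licence-acceptance step are assumptions made for illustration; the article itself only names Vertex AI, Kaggle and AI Studio as distribution channels.

# Minimal sketch: running a 2B-parameter Gemma-class model locally.
# Assumptions (not from the article): the checkpoint is hosted on Hugging Face
# as "google/gemma-2-2b-it" and the Gemma licence has already been accepted
# (e.g. after running `huggingface-cli login`).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/gemma-2-2b-it"  # assumed instruction-tuned 2B checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)  # ~2B parameters: small enough for a laptop CPU or GPU

prompt = "In one sentence, what does a text safety classifier do?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))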

The release of these new models follows the US Department of Commerce’s affirmation of open AI models in a preliminary report. The report notes that open models make generative AI more accessible to small companies, researchers, non-profits and individual developers, while highlighting the importance of monitoring these models for potential risks.

Also read: Apple employs Google’s chips for AI model training

Also read: UK antitrust body examines Google’s partnership with Anthropic

Why it’s important 

The three new generative AI models released by Google represent important advances in safety and transparency for AI technology. Gemma 2 2B’s lightweight design and ability to run on a wide range of hardware make advanced AI accessible to a broader range of developers and small businesses, helping to bridge the technology divide and foster innovation. Meanwhile, ShieldGemma’s safety classifiers and Gemma Scope’s interpretability features offer new ways to address ethical and safety issues in AI applications, which are critical to building user trust and ensuring the responsible use of AI technologies.

The release of these models echoes the US Department of Commerce’s affirmation of open AI models, highlighting the role of open models in fostering the democratisation of technology and innovation. With the launch of these models, Google has demonstrated not only its leadership in the AI space, but also its commitment to promoting the development of open and responsible AI.


Rae Li

Rae Li is an intern reporter at BTW Media covering IT infrastructure and Internet governance. She graduated from the University of Washington in Seattle. Send tips to rae.li@btw.media.
