• Google CEO Sundar Pichai has acknowledged controversy over the output of the company’s Gemini AI, after the model was found to produce historically inaccurate depictions.
  • Google took action last week, temporarily halting Gemini’s ability to generate images.
  • Sundar Pichai promised to continue working to improve the accuracy and fairness of Gemini’s AI.

Gemini AI Sparks Controversy with Historically Inaccurate Outputs

Google CEO Sundar Pichai has acknowledged the controversial outputs of the company’s Gemini AI, stating in an internal memo that the generated images and text have “offended our users and shown bias.” The acknowledgment comes in response to the discovery that the AI model produced historically inaccurate depictions, including racially diverse Nazi-era German soldiers, non-white portrayals of US Founding Fathers, and misrepresentations of Google’s own co-founders. Pichai’s memo addresses the gravity of the situation, marking the CEO’s first widespread communication on the issue.


Temporary Halt on Gemini’s Image Generation and Apology

Google took action last week, temporarily pausing Gemini’s ability to generate images after the AI model’s problematic outputs came to light and drew widespread criticism. The company issued an apology, acknowledging that Gemini “missed the mark” and expressing its commitment to resolving the issues, though it has not specified whether the problems have been fully addressed. The temporary disabling of the image generation feature is intended to limit the spread of biased and inaccurate depictions while fixes are underway.

CEO Pichai’s Assurance and Commitment to Improvement

In the internal memo reported by Semafor, CEO Sundar Pichai emphasizes Google’s dedication to rectifying the problems associated with Gemini AI. Pichai reveals that the company has been diligently working to address the “problematic text and image responses” within the Gemini app. While refraining from claiming that the issues have been completely resolved, Pichai acknowledges the imperfections of AI, particularly at this early stage in the industry’s development.

You can read Sundar Pichai’s full memo to Google employees below:

Hi everyone. I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias — to be clear, that’s completely unacceptable and we got it wrong.

Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement on a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.

Our mission to organize the world’s information and make it universally accessible and useful is sacrosanct. We’ve always sought to give users helpful, accurate, and unbiased information in our products. That’s why people trust them. This has to be our approach for all our products, including our emerging AI products.

We’ll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.

Even as we learn from what went wrong here, we should also build on the product and technical announcements we’ve made in AI over the last several weeks. That includes some foundational advances in our underlying models e.g. our 1 million long-context window breakthrough and our open models, both of which have been well received.

We know what it takes to create great products that are used and beloved by billions of people and businesses, and with our infrastructure and research expertise we have an incredible springboard for the AI wave. Let’s focus on what matters most: building helpful products that are deserving of our users’ trust.