Why Google’s AI Overviews often go wrong

  • Google’s AI Overviews is providing unreliable and even potentially harmful information to users.
  • Google has been making technical improvements to fix the issue, but inherent limitations of AI systems remain.
  • Even as the company keeps improving the feature, it can never be 100% accurate, so the tech giant does not aim to show AI Overviews for explicit or dangerous topics.

OUR TAKE
While AI Overviews helps users find answers more quickly, it cannot be fully trusted because of its potential mistakes. Hence, it is important to keep thinking critically when using it.
–Audrey Huang, BTW reporter

Even as Google’s AI Overviews simplifies search results, it is also generating misinformation, which is why the company has been making technical improvements.

How does AI Overviews work?

AI Overviews uses a new generative AI model in Gemini, Google’s family of large language models (LLMs). The model has been integrated with Google’s core web ranking systems and is designed to retrieve relevant results from Google’s index of websites.
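
At a high level, this is a retrieve-then-generate setup: the ranking system supplies source snippets, and the language model writes a summary over them. The sketch below illustrates the idea only; the toy index, the top_results and generate functions, and the prompt format are illustrative stand-ins, not Google’s actual ranking systems or Gemini APIs.

```python
# Illustrative sketch of a retrieve-then-generate pipeline; not Google's actual stack.
# The in-memory "index" and echo-style "model" below are toy stand-ins.

TOY_INDEX = {
    "why is the sky blue": [
        "Rayleigh scattering makes shorter (blue) wavelengths dominate the daytime sky.",
    ],
}

def top_results(query: str, k: int = 5) -> list[str]:
    """Stand-in for the ranking system: return up to k snippets for the query."""
    return TOY_INDEX.get(query.lower(), [])[:k]

def generate(prompt: str) -> str:
    """Stand-in for the LLM: a real model would produce fluent prose here."""
    return f"[model output conditioned on]\n{prompt}"

def ai_overview(query: str) -> str:
    # 1. Retrieve relevant snippets from the web index.
    snippets = top_results(query)
    # 2. Ask the generative model to answer using only those snippets.
    prompt = (
        "Answer the question using only these sources:\n"
        + "\n".join(f"- {s}" for s in snippets)
        + f"\n\nQuestion: {query}"
    )
    return generate(prompt)

if __name__ == "__main__":
    print(ai_overview("Why is the sky blue"))
```

As the sketch suggests, the model only summarises whatever the retrieval step hands it, which is why the quality of the sources matters so much.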

Also read: Elon Musk says AI will replace all our jobs

Also read: AI fakes and misinformation exposed to young voters on TikTok

Why would it make mistakes?

“The large language model generates fluent language based on the provided sources, but fluent language is not the same as correct information,” says Suzan Verberne, a professor at Leiden University who specialises in natural-language processing. The more specific a topic is, the higher the chance of misinformation in a large language model’s output, she says, pointing out: “This is a problem in the medical domain, but also education and science.”

How to solve the problem?

Google has said that it’s adding triggering restrictions for queries where AI Overviews were not proving to be especially helpful and has added additional “triggering refinements” for queries related to health. The company could add a step to the information retrieval process that detects a risky query and has the system decline to generate an answer in those instances, says Verberne. What is more, techniques like reinforcement learning from human feedback, which incorporates such feedback into an LLM’s training, can also contribute to improving the quality of its answers.
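
A safeguard like the one Verberne describes could sit in front of the generation step. The following is a minimal sketch under stated assumptions: RISKY_TERMS, is_risky_query, and answer_with_overview are hypothetical names, and a real system would use a trained classifier rather than a keyword list.

```python
# Illustrative sketch of a pre-generation safeguard; not Google's implementation.
# The keyword list is a hypothetical stand-in for a trained classifier that
# flags health-related, explicit, or otherwise dangerous queries.

RISKY_TERMS = {"dosage", "overdose", "self-harm", "poison"}

def is_risky_query(query: str) -> bool:
    """Return True if the query touches a sensitive topic."""
    return bool(set(query.lower().split()) & RISKY_TERMS)

def answer_with_overview(query: str) -> str | None:
    if is_risky_query(query):
        # Decline to generate an AI Overview; the page would fall back
        # to standard search results instead.
        return None
    # Otherwise hand the query to the retrieval-and-generation pipeline.
    return f"AI Overview for: {query}"  # placeholder for the generation step

if __name__ == "__main__":
    print(answer_with_overview("safe aspirin dosage for children"))  # -> None
    print(answer_with_overview("why is the sky blue"))
```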


Audrey Huang

Audrey Huang is an intern news reporter at Blue Tech Wave. She is interested in AI and startup stories. Send tips to a.huang@btw.media.
