Google’s DeepMind unveils ‘superhuman’ AI fact-checker, ‘SAFE’

  • Search-Augmented Factuality Evaluator (SAFE) is a method that uses a large language model (LLM) to break down generated text into individual facts.
  • DeepMind reports that this “superhuman” system matches or beats crowdsourced human raters on accuracy while being roughly 20 times cheaper.
  • Gary Marcus, a prominent AI researcher, suggested “superhuman” might simply mean better than an underpaid crowd worker, rather than a true expert fact-checker.

Google DeepMind has unveiled a “superhuman” AI system that can outperform human fact-checkers in assessing the accuracy of information generated by large language models.

Search-Augmented Factuality Evaluator (SAFE)

The research, described in a paper titled “Long-form factuality in large language models”, introduces SAFE as a method that uses a large language model to decompose generated text into individual facts, then checks each fact against Google Search results to determine whether it is supported.
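
To make that pipeline concrete, the snippet below is a minimal, hypothetical sketch of a SAFE-style loop in Python. The function names, the naive sentence splitting, and the keyword-overlap check are placeholders for the LLM prompts and Google Search calls described in the paper, not DeepMind’s actual implementation.

```python
import re

def _tokens(text: str) -> set[str]:
    """Lower-case word tokens; a crude stand-in for real NLP."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def split_into_facts(text: str) -> list[str]:
    # SAFE uses an LLM to decompose a response into self-contained facts;
    # naive sentence splitting is used here purely as a placeholder.
    return [s.strip() for s in text.split(".") if s.strip()]

def check_fact(fact: str, search_snippets: list[str]) -> bool:
    # SAFE issues Google Search queries and lets the LLM reason over the
    # results; a keyword-overlap test against mock snippets stands in here.
    fact_tokens = _tokens(fact)
    return any(len(fact_tokens & _tokens(s)) >= 3 for s in search_snippets)

def rate_response(response: str, search_snippets: list[str]) -> dict:
    facts = split_into_facts(response)
    supported = sum(check_fact(f, search_snippets) for f in facts)
    return {
        "num_facts": len(facts),
        "num_supported": supported,
        "precision": supported / len(facts) if facts else 0.0,
    }

if __name__ == "__main__":
    response = "The Eiffel Tower is in Paris. It was completed in 1889."
    snippets = ["The Eiffel Tower, completed in 1889, stands in Paris, France."]
    print(rate_response(response, snippets))
    # -> {'num_facts': 2, 'num_supported': 2, 'precision': 1.0}
```

On the toy example, both facts are supported by the mock snippet, so the per-response precision comes out as 1.0.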

The researchers pitted SAFE against human annotators on a data set containing around 16,000 facts and found that SAFE’s ratings matched human ratings 72% of the time. Even more impressively, when there were disagreements between SAFE and human raters, SAFE’s judgement was correct in 76% of cases.
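
The two headline numbers measure different things: the 72% is straight agreement with the crowd raters, while the 76% was measured only on the cases where SAFE and the humans disagreed and a further check determined who was right. The short, hypothetical calculation below uses made-up labels purely to illustrate the arithmetic; “adjudicated” stands in for that extra resolution step.

```python
# Made-up labels for illustration only: True = "supported", False = "not supported".
safe        = [True, True, False, True, False, True, True, False]
human       = [True, False, False, True, True, True, True, False]
# Stand-in for the adjudicated answer on each fact.
adjudicated = [True, True, False, True, False, True, True, False]

# Share of facts where SAFE's label matches the human label.
agreement = sum(s == h for s, h in zip(safe, human)) / len(safe)

# On disagreements only, how often SAFE matches the adjudicated answer.
disagreements = [(s, a) for s, h, a in zip(safe, human, adjudicated) if s != h]
safe_wins = sum(s == a for s, a in disagreements) / len(disagreements)

print(f"agreement rate: {agreement:.0%}")            # 75% in this toy example
print(f"SAFE wins on disagreements: {safe_wins:.0%}")  # 100% in this toy example
```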

‘Superhuman’ claim sparks controversy

While researchers claim that large language model agents can achieve “superhuman” rating performance, some experts question what “superhuman” really means here.

AI researcher Gary Marcus suggests that “superhuman” may simply mean better than an underpaid crowd worker, rather than a true expert fact-checker.

Marcus argues that benchmarking SAFE against expert human fact-checkers is crucial to truly demonstrate its superhuman performance.

Advantages of SAFE

A clear advantage of SAFE is cost: the researchers found that using the AI system was about 20 times cheaper than using human fact-checkers. As the volume of AI-generated information continues to grow, low-cost, scalable verification becomes increasingly important.

The DeepMind team also used SAFE to evaluate the factual accuracy of 13 top language models across four families (Gemini, GPT, Claude, and PaLM-2), and found that larger models typically produce fewer factual errors.

However, even the best-performing models still produced a large number of false statements.

This highlights the risk of over-reliance on language models that can fluently express inaccurate information. Automated fact-checking tools like SAFE can play a key role in mitigating these risks.

Jennifer Yu

Jennifer Yu is a reporter at BTW Media covering artificial intelligence and products. She graduated from The University of Hong Kong. Send tips to j.yu@btw.media.
