Google and Stanford researchers launch AI fact-checking tool

By Chloe Chen | April 1, 2024
  • A recent development by Google DeepMind and Stanford University introduces the Search-Augmented Factuality Evaluator (SAFE), a tool designed to fact-check long responses from AI chatbots.
  • SAFE employs a multi-step process, including segmentation, correction, and comparison with Google search results, achieving a 76% accuracy rate in verifying controversial facts.
  • This innovation not only enhances accuracy in AI-generated responses but also presents economic advantages, being over 20 times cheaper than manual annotation.

However powerful today’s AI chatbots are, they share a much-criticised habit: giving answers that sound convincing but are factually inaccurate. Simply put, AI sometimes ‘runs off the rails’ in its responses, even ‘spreading rumours’. Preventing this behaviour in large AI models is no easy task and remains a technical challenge. However, according to Marktechpost, Google DeepMind and Stanford University appear to have found a workaround.

Also read: OpenAI’s GPT store fails to meet expectations

Also read: US federal agencies now required to have chief AI officer

The tool is based on the Search-Augmented Factuality Evaluator (SAFE)

Researchers have introduced a tool based on large language models – the Search-Augmented Factuality Evaluator (SAFE) – which can fact-check long responses generated by chatbots. Their research results, along with the experimental code and datasets, have now been made public.

The system analyses, processes, and evaluates chatbot responses in four steps to verify their accuracy and authenticity: it splits the answer into individual facts for verification, revises each fact so it can stand on its own, compares it against Google search results, and then checks the relevance of each fact to the original question.
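To make the four steps concrete, here is a minimal sketch of a SAFE-style pipeline in Python, paraphrased from the description above. It is not DeepMind and Stanford’s actual implementation: the helpers call_llm and google_search are hypothetical placeholders standing in for whatever language-model and web-search calls a real system would use.

# Minimal SAFE-style sketch; call_llm and google_search are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class FactVerdict:
    fact: str        # the self-contained fact that was checked
    supported: bool  # backed by search results?
    relevant: bool   # relevant to the original question?

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a large-language-model completion call."""
    raise NotImplementedError

def google_search(query: str) -> list[str]:
    """Hypothetical placeholder for a web-search call returning text snippets."""
    raise NotImplementedError

def safe_style_check(question: str, answer: str) -> list[FactVerdict]:
    # Step 1: segment the long answer into individual, checkable facts.
    facts = call_llm(
        "Split the following answer into atomic facts, one per line:\n" + answer
    ).splitlines()

    verdicts = []
    for fact in filter(None, (f.strip() for f in facts)):
        # Step 2: revise the fact so it is self-contained (resolve pronouns, etc.).
        revised = call_llm(
            "Rewrite this fact so it can be verified on its own: " + fact
        )
        # Step 3: compare the fact against Google search results.
        snippets = "\n".join(google_search(revised))
        supported = call_llm(
            f"Search results:\n{snippets}\n\nFact: {revised}\n"
            "Is the fact supported by the results? Answer yes or no."
        ).strip().lower().startswith("yes")
        # Step 4: check the fact's relevance to the original question.
        relevant = call_llm(
            f"Question: {question}\nFact: {revised}\n"
            "Is this fact relevant to the question? Answer yes or no."
        ).strip().lower().startswith("yes")
        verdicts.append(FactVerdict(revised, supported, relevant))
    return verdicts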

Researchers created a dataset called LongFact to assess its performance

To assess its performance, the researchers created a dataset called LongFact containing approximately 16,000 facts and tested the system on 13 large language models from the Claude, Gemini, GPT, and PaLM-2 families. In a focused analysis of 100 controversial facts, SAFE’s judgments were correct 76% of the time upon further review. The framework also has an economic advantage: it is more than 20 times cheaper than manual annotation.
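As a toy illustration of what a figure like 76% agreement amounts to in practice (the labels below are invented for illustration, not the paper’s data), the comparison against human annotators reduces to a simple match rate:

# Toy illustration only: these labels are invented, not the paper's data.
human_labels = [True, False, True, True, False]   # human fact-check verdicts
safe_labels  = [True, False, False, True, False]  # automated verdicts for the same facts

agreement = sum(h == s for h, s in zip(human_labels, safe_labels)) / len(human_labels)
print(f"Agreement with human annotators: {agreement:.0%}")  # prints 80%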

Chloe Chen

Chloe Chen is a junior writer at BTW Media. She graduated from the London School of Economics and Political Science (LSE) and has worked in the finance and fintech industries. Send tips to c.chen@btw.media.
