How does Alphabet plan to thwart AI manipulation in elections?

  • Alphabet is proactively limiting election-related queries to prevent AI manipulation ahead of the 2024 U.S. Presidential election.
  • Widespread AI tools threaten elections with lifelike disinformation, including deepfake videos, and with biased algorithmic decisions.
  • A comprehensive strategy is crucial to safeguard elections from AI risks.

Alphabet’s decision to curb election-related queries appears to be a necessary safeguard against AI manipulation.

But what methods can the company employ to achieve this outcome?

Election landscape: Tech giants respond to AI concerns

Alphabet plans to limit election-related queries on Google’s chatbot Bard and AI-based search in the lead-up to the 2024 U.S. Presidential election. These restrictions aim to address concerns about AI’s role in elections.

Meta has implemented similar measures, while Elon Musk’s X allows political advertising in the U.S. amidst increased regulatory scrutiny over AI.

At the same time, the European Union is introducing rules requiring clear labelling of political advertising on Big Tech platforms.

Also read: Expedia adds AI, but personalised travel still won’t beat search engines

Safeguarding elections amidst the rise of lifelike disinformation

Alphabet’s move to limit election-related queries is seen as a key step in preventing AI manipulation, but it is only one piece of a larger problem.

The use of artificial intelligence (AI) tools capable of generating lifelike disinformation poses a potential threat to elections.

As PBS NewsHour reported: “AI-generated disinformation poses threat of misleading voters in 2024 election”.

This includes AI-generated deepfake videos and audio that could deceive voters and manipulate public opinion.

The AI challenge involves not only deliberate disinformation but also algorithmic decisions in voter registration and mail-in ballot verification, raising concerns about bias.

AI-generated disinformation not only threatens to deceive audiences but also erodes an already embattled information ecosystem by flooding it with inaccuracies, experts say.

Generative AI tools are most effective when they generate content similar to their training databases, which can distort global political conversations. Tools such as ChatGPT stand out for their enhanced capabilities, reshaping the information ecosystem and affecting search engines and news websites.

Overall, this underscores the unique vulnerability of elections to AI-driven disinformation, including the potential for forged images, audio, and video to affect public perception and trust in the electoral process.

Also read: Are the MIT guidelines for responsible AI development enough?


A united worldwide effort is crucial to tackle AI’s potential threats to elections and democracy.

Multifaceted approaches to mitigate AI risks in elections

The path forward to protect elections involves a comprehensive approach from various sectors to counter the risks posed by AI technologies. Several key actions should be considered:

Coordination efforts

The executive branch should designate a lead agency to coordinate governance of AI issues in elections. This interagency effort is crucial to address the multifaceted challenges posed by AI.

Disinformation countermeasures

The Cybersecurity and Infrastructure Security Agency should create and share resources to help election offices combat disinformation campaigns. This includes addressing the exploitation of deepfake tools and language models that undermine the integrity of election processes.

Political ad disclosure requirements

The Federal Election Commission should extend political ad disclosure requirements to cover the full range of online communications, ensuring rules encompass content from paid influencers and online promotions that may involve AI-generated material.

Innovation in detection technologies

The federal government, through agencies like the Defense Advanced Research Projects Agency and the AI Institute for Agent-based Cyber Threat Intelligence and Operation, should promote and encourage innovation in deepfake detection and detection of voting disinformation campaigns. This includes developing high-accuracy detection tools for use by state and local election offices.

Involvement of AI developers and social media companies

AI developers must implement and refine filters for election falsehoods, while social media companies should develop policies to reduce harms from AI-generated content. Public verification of election officials’ accounts and cooperation in identifying and removing coordinated bots and deepfakes are essential.

Regulation of AI

Congress and state legislatures must act swiftly to regulate AI. Options for deliberation include mandating watermarking and digital signatures, requiring safety proof for AI products, and limiting the creation and transmission of harmful AI-generated content that interferes with elections.
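To make the watermarking and digital-signature option concrete, here is a minimal sketch in Python of content provenance tagging. It is purely illustrative, not any provider’s actual scheme: it uses a symmetric HMAC as a stand-in, whereas a real deployment would use asymmetric signatures so verifiers never hold the signing key, and the key name and functions below are assumptions.

```python
import hashlib
import hmac

# Hypothetical signing key held by the AI provider (assumption: a real
# system would use an asymmetric key pair, e.g. C2PA-style manifests).
PROVIDER_KEY = b"demo-provenance-key"

def sign_content(content: bytes) -> str:
    """Produce a provenance tag to attach to AI-generated content."""
    return hmac.new(PROVIDER_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check that the tag matches the content (constant-time compare)."""
    expected = hmac.new(PROVIDER_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

tag = sign_content(b"AI-generated ad copy")
verify_content(b"AI-generated ad copy", tag)  # True: content is unmodified
verify_content(b"edited ad copy", tag)        # False: content was altered
```

The point of the sketch is the asymmetry of effort: attaching a tag at generation time is cheap, while any later edit to the content invalidates it, which is what makes signed provenance useful for flagging manipulated election material.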

Challenges with Generative AI tools

Open-source generative AI tools pose challenges due to their fully public source code. Despite this, regulation targeting the development and deployment of proprietary AI apps in the private sector can still have a significant impact.

Transparency for voters

Transparency is crucial for safe AI use in elections. Lawmakers should compel AI developers to disclose data categories and guiding principles, mandate algorithmic impact assessments, and require periodic third-party audits of AI systems used in election administration.

Global cooperation

A coordinated global response is necessary to address the potential threats posed by AI to elections and democracy.

In conclusion, a whole-of-society response is imperative to navigate the transformative impact of AI on elections, ensuring their integrity and safeguarding democracy.

Elma Yuan

Elma Yuan was a junior reporter at BTW Media, interested in media and communication.
