- Some states have laws against election deepfakes, but these regulations often do not cover exaggerated or clearly false representations.
- It can undermine the public's ability to tell what is real from what is fabricated. This erosion of trust is seen as detrimental to the functioning of a democratic society.
OUR TAKE
AI-generated images like those posted by Trump can significantly influence voter perceptions and opinions, and the proliferation of such content can breed a general distrust in media and information sources. Understanding the patchwork of state laws governing election deepfakes, and its gaps, is essential for developing effective policies that can mitigate the spread of false information in political contexts.
-Lia XU, BTW reporter
What happened
Former President Donald Trump recently shared a series of AI-generated images on social media to support his presidential candidacy. Among these images was a fabricated endorsement from pop star Taylor Swift. In conjunction with these posts, Trump accused Vice President Kamala Harris of using AI to create a false image of a rally crowd. This accusation coincided with the release of an AI-generated image depicting Harris speaking to a crowd, with a communist symbol in the background. Universal Music Group, which represents Swift, did not immediately respond to a request for comment about the use of her likeness in Trump’s post.
“It’s convenient for Trump, who was going around calling everything fake before AI, and wants us to call true things fake — like Harris’ crowds — to spread AI garbage to undermine the very idea of authenticity, and even reality in some ways”, says Robert Weissman, co-president of Public Citizen.
Also read: Presti raises $3.5M, using 75K images for AI furniture photography
Also read: TikTok begins automatically labeling AI-generated content
Why it’s important
The use of AI to create fake endorsements undermines trust in public figures and the media. When influential individuals like Taylor Swift are falsely represented, it can erode public confidence in both the authenticity of endorsements and the integrity of political messages. Moreover, creating and disseminating fake endorsements raises significant ethical issues: it involves deception and manipulation, often for political or financial gain, and can have real-world consequences for public perception and behaviour.
The misuse of AI-generated content can lead to legal challenges, including potential lawsuits for defamation or fraud. It also raises questions about the regulation of AI technology and how to hold individuals accountable for digital deception. This incident highlights the urgent need for enhanced media literacy. As AI technologies become more sophisticated, the public must develop skills to critically evaluate online content and recognise potential fakes.