- OpenAI plans to add tamper-resistant watermarks to images and audio for authenticity.
- OpenAI collaborates with tech giants like Google and Microsoft to trace media origins.
- Fake videos during the Indian election highlight the global issue of manipulated content.
OpenAI, a Microsoft-backed startup, has launched a tool that can detect images created by its text-to-image generator, DALL-E 3. In internal testing, the tool correctly identified DALL-E 3 images about 98% of the time, and common modifications such as compression, cropping, and saturation changes had minimal impact on its accuracy.
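OpenAI has not published a public API for the detector, but its robustness claim is easy to picture as a test harness. The sketch below is purely illustrative: `detect` stands in for a hypothetical classifier that returns a confidence that an image came from DALL-E 3, and Pillow applies the kinds of edits mentioned above to check whether the verdict survives them.

```python
# Illustrative robustness harness for an AI-image detector.
# `detect` is a hypothetical callable returning a confidence in [0, 1].
from io import BytesIO
from PIL import Image, ImageEnhance


def jpeg_compress(img: Image.Image, quality: int = 50) -> Image.Image:
    """Re-encode as JPEG at the given quality to simulate compression."""
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf)


def center_crop(img: Image.Image, frac: float = 0.8) -> Image.Image:
    """Keep only the central `frac` portion of the image."""
    w, h = img.size
    cw, ch = int(w * frac), int(h * frac)
    left, top = (w - cw) // 2, (h - ch) // 2
    return img.crop((left, top, left + cw, top + ch))


def boost_saturation(img: Image.Image, factor: float = 1.5) -> Image.Image:
    """Increase colour saturation by the given factor."""
    return ImageEnhance.Color(img.convert("RGB")).enhance(factor)


def robustness_report(detect, img: Image.Image, threshold: float = 0.5) -> dict:
    """Apply common edits and record whether the detector still flags the image."""
    variants = {
        "original": img,
        "jpeg_q50": jpeg_compress(img),
        "crop_80pct": center_crop(img),
        "saturation_x1.5": boost_saturation(img),
    }
    return {name: detect(edited) >= threshold for name, edited in variants.items()}
```

A real evaluation would run a harness like this over a large labelled set of generated and non-generated images and report detection rates per edit, which is presumably how the internal 98% figure was measured.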
Enhancing content authenticity
The creator of ChatGPT also plans to introduce tamper-resistant watermarking for content such as photos and audio, aiming to provide a signal that is difficult to remove. The initiative is part of a broader effort by OpenAI to ensure the authenticity and traceability of digital media.
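OpenAI has not said how its watermark will work. As a rough illustration of the general idea behind frequency-domain watermarks, the toy sketch below adds a keyed pseudo-random pattern to mid-band DCT coefficients of a grayscale image and later detects it by correlation; every function name and parameter here is invented for illustration, and production tamper-resistant schemes are far more sophisticated.

```python
# Toy frequency-domain watermark (illustration only; not OpenAI's scheme).
import numpy as np
from scipy.fft import dctn, idctn


def _keyed_positions(shape: tuple, key: int, n: int):
    """Derive n distinct mid-band DCT positions from a shared secret key."""
    h, w = shape
    bh, bw = h // 2 - 1, w // 2 - 1          # mid-frequency band, excluding row/col 0
    lin = np.random.default_rng(key).choice(bh * bw, size=n, replace=False)
    return lin // bw + 1, lin % bw + 1


def embed(gray: np.ndarray, key: int = 1234, alpha: float = 2.0, n: int = 4096) -> np.ndarray:
    """Add a keyed pseudo-random pattern to selected DCT coefficients."""
    coeffs = dctn(gray.astype(np.float64), norm="ortho")
    rows, cols = _keyed_positions(coeffs.shape, key, n)
    pattern = np.random.default_rng(key + 1).standard_normal(n)
    coeffs[rows, cols] += alpha * pattern
    return idctn(coeffs, norm="ortho")


def detect(gray: np.ndarray, key: int = 1234, n: int = 4096) -> float:
    """Correlate keyed positions with the pattern; scores near 0 mean no watermark."""
    coeffs = dctn(gray.astype(np.float64), norm="ortho")
    rows, cols = _keyed_positions(coeffs.shape, key, n)
    pattern = np.random.default_rng(key + 1).standard_normal(n)
    v = coeffs[rows, cols]
    return float(np.dot(v, pattern) / (np.linalg.norm(v) * np.linalg.norm(pattern) + 1e-12))
```

Spreading the mark across many coefficients is what lets this style of watermark survive mild edits such as recompression or added noise, and an attacker who does not know the key cannot easily locate and strip the pattern, although determined manipulation can still degrade it.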
Industry collaboration for media traceability
OpenAI has joined leading tech companies including Google, Microsoft, and Adobe in the Coalition for Content Provenance and Authenticity (C2PA), an industry group working to establish standards for tracing the origins of different types of media. The collaboration reflects a collective commitment to combating misinformation and improving transparency across the digital landscape.
Misinformation challenges during elections
During the recent general election in India, fake videos surfaced online, featuring two Bollywood actors criticising Prime Minister Narendra Modi. This incident underscores the prevalence of misinformation and manipulated content, particularly during crucial events like elections.
Global concern
The proliferation of AI-generated content and deepfakes is a growing global issue, observed not only in India but also in countries like the U.S., Pakistan, and Indonesia. The use of manipulated media in elections raises concerns about the impact of misinformation on democratic processes and public perception.
Promoting societal resilience
In collaboration with Microsoft, OpenAI has launched a $2 million “societal resilience” fund dedicated to supporting AI education initiatives. This fund aims to raise awareness, educate the public, and enhance preparedness to address the challenges posed by AI-generated content and deepfakes, fostering a more resilient and informed society.