- OpenAI advocates for watermarking AI-generated content to improve transparency, help users identify the source, and reduce misinformation online.
- Watermarking AI-generated content is crucial for information clarity, particularly during important events such as elections.
OUR TAKE
OpenAI’s support for California’s AB 3211 highlights the growing concern over AI transparency, especially in election years. By mandating labels for AI-generated content, the bill aims to clarify the origin of digital media and combat misinformation. With the measure advancing through the legislative process, it could set a precedent for addressing the challenges posed by AI in the digital age.
- Tacy Ding, BTW reporter.
What happened
OpenAI, the developer behind ChatGPT, is backing a California bill that would mandate tech companies to label AI-generated content. This includes everything from innocuous memes to deepfakes designed to spread misinformation about political candidates.
California state lawmakers introduced 65 bills related to AI this legislative session, according to the state’s legislative database. These proposals include measures to ensure algorithmic decisions are unbiased and to protect the intellectual property of deceased individuals from AI exploitation. However, many of these bills have already been defeated.
OpenAI sent a letter of support to California State Assembly member Buffy Wicks, who authored the bill known as the California Provenance, Authenticity and Watermarking Standards Act (AB 3211).
AB 3211 has so far been overshadowed by attention on another California state artificial intelligence (AI) bill, SB 1047, which mandates that AI developers conduct safety testing on some of their own models.
That bill has faced a backlash from the tech industry, including OpenAI, which counts Microsoft as a backer.
Also read: OpenAI hires former Meta executive to lead strategic initiatives
Also read: Anthropic thinks benefits of California’s AI bill may outweigh costs
Why it’s important
San Francisco-based OpenAI believes that transparency and provenance requirements for AI-generated content, such as watermarking, are important, especially in an election year.
With countries representing a third of the world’s population holding elections this year, experts are concerned about the role AI-generated content will play; it has already featured prominently in some contests, such as Indonesia’s.
“New technology and standards can help people understand the origin of content they find online, and avoid confusion between human-generated and photorealistic AI-generated content,” OpenAI Chief Strategy Officer Jason Kwon wrote in the letter.
AB 3211 has already cleared the state Assembly with a unanimous 62-0 vote. Earlier this month, it also passed the Senate Appropriations Committee, positioning it for a full vote in the state Senate. If it is approved by August 31, the end of the legislative session, it will be sent to Governor Gavin Newsom, who will have until September 30 to sign or veto the bill.