- Landmark California legislation aimed at establishing the nation’s first security measures for the largest AI systems has passed a key vote.
- The proposal, which aims to reduce the potential risks posed by AI, requires companies to test their models and publicly disclose their security protocols.
OUR TAKE
Supporters of the bill say it would set some of the first much-needed safety ground rules for large-scale U.S. AI models, defined as those that require more than $100 million in data to train. No current AI model reaches this threshold, and big tech companies should work with relevant organizations to create socially appropriate AI usage norms.
— Iydia Ding, BTW reporter
What happened
California’s effort to create the nation’s first security measures for the largest artificial intelligence systems passed an important vote on Wednesday, one that could pave the way for U.S. regulation of a technology moving at warp speed. The proposal, aimed at reducing the potential risks posed by AI, would require companies to test their models and publicly disclose their security protocols, so that models cannot be manipulated to, for example, wipe out the state’s power grid or help make chemical weapons – something experts say could become possible as the industry grows rapidly.
The measure passed the Assembly on Wednesday and needs a final vote in the Senate before reaching the governor’s office. It was among hundreds of bills lawmakers voted on in the final week of the session; Newsom then has until the end of September to decide whether to sign them into law, veto them or allow them to become law without his signature.
Also read: Elon Musk backs California bill to regulate AI
Also read: OpenAI supports California bill requiring watermarking of AI content
Why it’s important
Supporters of the bill say it would set some of the first much-needed safety ground rules for large-scale U.S. AI models, which require more than $100 million in data to train. No AI model currently reaches this threshold, and supporters argue that big tech companies should work with relevant organizations to create socially appropriate AI usage norms. “It’s time for big tech to play by some sort of rule,” Republican Rep. Devon Mathis said Wednesday in support of the bill.
The proposal, written by Democratic Sen. Wiener, faced stiff opposition from venture capital firms and tech companies, including OpenAI, Google and Meta, the parent company of Facebook and Instagram. Opponents say safety regulations should be set by the federal government, and that the California legislation targets developers rather than those who use and exploit AI systems to cause harm. Several California House members have also opposed the bill.