On Sunday, California Governor Gavin Newsom vetoed SB 1047, a bill passed by the state legislature that sought to impose stricter rules on the development of artificial intelligence.
The bill, authored and sponsored by Senator Scott Wiener, was named the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. One of its most controversial provisions would have held AI companies liable if their models caused harm to users. That matters because AI models have given some horrendous advice in the past, such as Google’s Gemini-powered AI Overviews suggesting that users eat at least one small rock per day.
My statement on the Governor’s veto of SB 1047: pic.twitter.com/SsuBvV2mMI
— Senator Scott Wiener (@Scott_Wiener) September 29, 2024
It also would have required companies to build a “kill switch” into these systems and established protections for whistleblowers. Companies developing frontier models that cost more than $100 million to train (or fine-tuned versions costing more than $10 million) would have had to comply with SB 1047 had Newsom not vetoed it.
Newsom argued that the bill lacked sufficient specificity about high-risk environments and that its broad application would have imposed stringent standards on even basic functions, discouraging innovation in the sector.
He also warned that smaller, specialized models could emerge that are just as dangerous as large ones yet fall outside the bill’s scope, and that signing it would have curtailed the kind of technological advancement that ultimately benefits society.
Tech Industry Has Mixed Views About SB 1047
The tech industry had mixed views about the bill. Meta Platforms, Google, and OpenAI strongly opposed it, arguing that it would hurt the sector’s growth and slow technological progress.
“We are pleased that Governor Newsom vetoed SB1047. This bill would have stifled AI innovation, hurt business growth and job creation, and broken the state’s long tradition of fostering open-source development. We support responsible AI regulations and remain committed to partnering with lawmakers to promote better approaches,” commented Jamie Radice, manager of public affairs for Meta.
Also read: New California Law Combats “Disappearing” Digital Purchases: Here’s How
Meanwhile, Anthropic (which, ironically, is backed by Google) and Elon Musk supported it, along with prominent unions such as SAG-AFTRA.
“We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure. It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause harm, and for these companies to implement reasonable safeguards against such risks,” AI scientists Geoffrey Hinton and Yoshua Bengio wrote in a letter urging Newsom to sign the bill.
Advocacy nonprofit Accountable Tech criticized the veto as a “massive giveaway to Big Tech companies” that leaves Americans as “unconsenting guinea pigs” in an unregulated AI industry.
Speaker Emerita Nancy Pelosi, on the other hand, thanked Newsom for recognizing the opportunity to enable small entrepreneurs and academia to dominate the field.
Senator Wiener, the bill’s author, countered that Newsom’s veto gives AI companies a free pass to develop models without restrictions and fails to create appropriate safeguards for both users and society.
Also read: Key Parts of California’s AADC Law Struck Down in Huge Win for Site-Owners
“This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way,” Wiener posted on his X account.
The Governor’s decision maintains a light-touch, free-market approach that leaves frontier AI development largely unregulated and could encourage further investment in AI businesses in the state.
Critics argue that the lack of clear rules and boundaries could leave the public vulnerable to AI-linked harms.
Gavin Newsom’s Controversy Spree Continues
SB 1047 was far from the only bill to reach Newsom’s desk in recent weeks. He also vetoed a measure that would have required automakers to alert drivers when they exceed the speed limit by 10 or more miles per hour. One move that received little media coverage, however, was his signing of a rather suspicious bill that extended alcohol sales until 4 AM in a single location: the new VIP club at the LA Clippers’ arena. Critics pointed out that Newsom recently received a $1 million donation from Connie Ballmer, the wife of Steve Ballmer, who ‘happens’ to own the Clippers, and alleged that the bill amounted to little more than bribery. Ballmer is now one of Newsom’s largest individual donors, along with Netflix co-founder Reed Hastings and George Soros.
This kind of move isn't ugly politics, it's flat-out bribery. Who is paying for all those French Laundry meals? Like Eric Adams and Bob Menendez, Gavin Newsom should be under investigation and possibly jailed. pic.twitter.com/37V2gYkuEX
— Matt Stoller (@matthewstoller) September 30, 2024
This was far from Newsom’s first scandal. Earlier this year, he reportedly pushed for a carveout in a new law raising the minimum wage for fast food workers to $20 an hour. Strangely, the carveout exempted fast food chains that bake and sell bread in-house as a stand-alone item. This quickly blew up into a major scandal because Panera Bread appeared to be the only chain that fit the exemption, and one of Newsom’s biggest donors and closest friends, Greg Flynn, owns 24 Panera locations in California.
After the scandal broke, Newsom said that his legal team had looked into the issue and called the controversy “absurd”, saying that Panera Bread restaurants are “likely not exempt” because the chain mixes its dough elsewhere before baking it in its franchises. However, he never explained what the point of the carveout was if not to give his friend and donor a gift (again, it doesn’t fit any chain other than Panera). Of course, we don’t have enough evidence to show that these moves were explicitly corrupt, but they sure are suspicious.
Newsom’s Alternative Approach and Implications for the AI Sector
Newsom has asked leading AI experts to develop guardrails grounded in science-based analysis, with the aim of drafting rules that foster innovation while protecting users and society from the dangers of indiscriminate AI development.
The Governor stressed that the state will not abandon its efforts to craft adequate legislation for the sector, but said his administration still needs to research the matter more thoroughly to gauge AI’s impact on areas such as energy, communications, water and other natural resources, and the environment.
He also noted that the federal government has been too slow to regulate the sector, leaving states with the challenging task of drafting local frameworks that may not adequately capture the full range of risks this technology brings.
The US Congress has more resources, expertise, and power to put together comprehensive AI legislation, but it remains to be seen what that might look like. At the federal level, a Senate working group has released a roadmap calling for $32 billion in annual AI-related spending as a step toward comprehensive regulation, focusing on key areas such as national security, elections, and copyrighted content.
Meanwhile, states like Colorado and Utah have passed their own laws addressing issues such as AI bias in employment and healthcare decisions.
Also read: Elon Musk’s Radical Conservative Turn Matters More Than You Think
AI companies claim that they have established appropriate guardrails to protect society and users from the harms this technology could cause, but self-regulation is never a good idea, as companies will pretty much always cut corners to increase profits.
Critics argue that these claims cannot be verified and that, without regulation in place, authorities have no power to enforce them. Newsom’s veto of SB 1047 could influence other states and shape, in one way or another, the scope and reach of future federal laws.
“While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that the voluntary commitments from industry are not enforceable and rarely work out well for the public,” Senator Wiener further commented.
As the AI industry continues to advance rapidly, the challenge of balancing innovation with public safety remains at the forefront of policy discussions across the nation.
The coming months and years will likely see continued debate and refinement of AI regulatory frameworks as policymakers, industry leaders, and the public grapple with the profound implications of this transformative technology.