Legislation to regulate artificial intelligence (AI) software in California has been revised in response to industry discontent, ahead of a State Assembly vote later this month.
California State Senator Scott Wiener (D)’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) has faced resistance from leading AI companies such as Anthropic and from federal lawmakers like Congressional Representative Zoe Lofgren (D-CA-18).
“I’m very concerned about the effect this legislation could have on the innovation economy of California without any clear benefit for the public,” wrote Lofgren in an August 7 letter [PDF] to Wiener. “There is a real risk that companies will decide to incorporate in other jurisdictions or simply not release models in California.”
California is home to 35 of the top 50 AI companies in the world, according to Governor Gavin Newsom (D)’s executive order last September, which calls for studying the development, use, and risks of AI technology.
Wiener on Thursday acknowledged changes to the bill, citing input from Anthropic, a startup built by former OpenAI staff and others with a focus on the safe use of machine learning.
“While the amendments do not reflect 100 percent of the changes requested by Anthropic – a world leader on both innovation and safety – we accepted a number of very reasonable amendments proposed, and I believe we’ve addressed the core concerns expressed by Anthropic and many others in the industry,” Wiener said in a statement.
SB 1047, co-authored by Senator Richard Roth (D-Riverside), Senator Susan Rubio (D-Baldwin Park) and Senator Henry Stern (D-Los Angeles), has the support of latter-day AI pioneers Geoffrey Hinton, emeritus professor of computer science at University of Toronto and former AI lead at Google, and Yoshua Bengio, professor of computer science at University of Montreal.
In a statement, Hinton said, “Forty years ago when I was training the first version of the AI algorithms behind tools like ChatGPT, no one – including myself – would have predicted how far AI would progress. Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously.
“SB 1047 takes a very sensible approach to balance those concerns. I am still passionate about the potential for AI to save lives through improvements in science and medicine, but it’s critical that we have legislation with real teeth to address the risks. California is a natural place for that to start, as it is the place this technology has taken off.”
The bill focuses on “frontier models,” a term that refers to state-of-the-art AI models requiring more than 10^26 integer or floating-point operations to create, at a training cost of more than $100 million using average market prices.
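For illustration only, here is a minimal sketch of that two-part definition as described above; the thresholds are taken from the article, while the function name and the example figures are hypothetical and not drawn from the bill text:

```python
# Sketch of SB 1047's "frontier model" test as described in the article:
# more than 1e26 operations to train AND more than $100M in training cost.
FRONTIER_COMPUTE_THRESHOLD_OPS = 1e26          # integer or floating-point operations
FRONTIER_COST_THRESHOLD_USD = 100_000_000      # training cost at average market prices

def is_frontier_model(training_ops: float, training_cost_usd: float) -> bool:
    """Return True if a model exceeds both the compute and cost thresholds."""
    return (training_ops > FRONTIER_COMPUTE_THRESHOLD_OPS
            and training_cost_usd > FRONTIER_COST_THRESHOLD_USD)

# Hypothetical examples, purely illustrative:
print(is_frontier_model(3e26, 150_000_000))  # True  - exceeds both thresholds
print(is_frontier_model(5e25, 40_000_000))   # False - below both thresholds
```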
In a recent interview with Norges Bank Investment Management CEO Nicolai Tangen, Anthropic CEO Dario Amodei said AI models now commonly cost around $100 million to train and that there are models currently being trained at a cost of about $1 billion. In the next few years, he said, the cost could go to $10 billion or $100 billion.
And if chip and algorithm improvements continue, Amodei said, at that point, “there is in my mind a good chance that by that time we’ll be able to get models that are better than most humans at most things.”
That’s the sort of scenario that concerns the public, which largely supported SB 1047 as initially written. According to an Artificial Intelligence Policy Institute (AIPI) poll, “Only 25 percent of California voters oppose the legislation.”
The tech industry has been less enthusiastic. Anthropic last month sent a letter [PDF] to state lawmakers outlining its problems with the bill, which aims to establish a safety regime for large AI models. The San Francisco-based biz took issue with provisions that allowed AI companies to be sued before any harm had been established; the creation of a new Frontier Model Division to police frontier models; and rules covering pricing and labor that extend beyond the bill’s stated scope.
Anthropic’s proposed changes, though potentially unpopular with voters, have been largely accepted.
The changes limit enforcement penalties, such as the injunctive option to require the deletion of models and their weights. Criminal perjury provisions for lying about models were dropped, on the grounds that existing laws against lying to the government are adequate. There’s no longer language that would create a Frontier Model Division, though some of the proposed responsibilities will be handled by other government bodies. And the legal standard by which developers must attest to compliance has been reduced from “reasonable assurance” to “reasonable care.”
An open source carve-out has been made: developers spending less than $10 million to fine-tune models aren’t covered by the bill.
Also, whistleblower protections have been narrowed so that contractors aren’t required to maintain their own internal whistleblowing process.
SB 1047 can be voted on as of August 20 and must pass by the end of the month to have a chance to advance to Governor Newsom for signature. ®