California Governor spikes AI bill (Image Credit: Ian Murphy using Microsoft Designer)

California’s governor, Gavin Newsom, has spiked an AI safety bill that had become a battleground for tech companies. In refusing to sign the bill, Newsom cited concerns that mirror the talking points of the tech companies that had lobbied to kill it.

Among the concerns from tech companies was that Senate Bill 1047 targeted models based only on the cost to create them and their computational size. The first threshold is a training cost in excess of $100 million, which includes data acquisition and the cost of computing. The second is where the computing power used in training exceeds 10²⁶ integer or floating-point operations.
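To make those two thresholds concrete, here is a minimal sketch in Python of how a “covered model” test might look. The constants are the figures quoted above, the function and parameter names are invented purely for illustration, and whether the bill combines the two tests with “and” or “or” depends on its final text.

```python
# Illustrative only: names are hypothetical and the thresholds are the figures
# quoted in the article, not a legal interpretation of SB 1047.

TRAINING_COST_THRESHOLD_USD = 100_000_000  # more than $100 million to train
TRAINING_COMPUTE_THRESHOLD = 10 ** 26      # more than 10^26 integer or floating-point operations


def is_covered_model(training_cost_usd: float, training_operations: float) -> bool:
    """Return True if a model would trip either of the quoted thresholds."""
    return (
        training_cost_usd > TRAINING_COST_THRESHOLD_USD
        or training_operations > TRAINING_COMPUTE_THRESHOLD
    )


# Example: a model costing $120 million and trained with 3e26 operations
print(is_covered_model(120_000_000, 3e26))  # True
```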

Another concern was that it would impact general-purpose AI rather than AI used in high-risk environments. This would have had an immediate impact on social media companies looking to build new AI models to leverage user data.

Gavin Newsom, Governor of California (Image Credit: LinkedIn)

Addressing both those issues, Newsom said, “Key to the debate is whether the threshold for regulation should be based on the cost and number of computations needed to develop an AI model, or whether we should evaluate the system’s actual risks regardless of these factors.”

To raise the stakes further, several AI companies threatened to move out of California. As the state is home to 32 of the top 50 AI companies worldwide, this would have seriously worried Newsom. The state is already struggling with a budget shortfall, and losing that tax revenue would have made the problem worse.

However, moving would not have reduced the impact on those companies. The bill applies to any AI company doing business in California, so it would still have affected their sales.

What is Senate Bill 1047?

SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was introduced to the California Senate in February 2024. It was sponsored by Senator Wiener, with three other Senators, Roth, Rubio and Stern, named as co-authors.

It was approved on September 30th by a vote of 30-9 and passed to Governor Newsom. Notably, it passed every vote along the way, showing a clear legislative appetite to see it become law. Through various amendments, the bill’s focus was narrowed to reduce the impact on AI companies and developers.

Requirements for developers

Focused on frontier AI models, the bill requires developers of models meeting the threshold of $100 million to train or 10²⁶ integer or floating-point operations to take several actions, summarised as a simple checklist after the list below. According to the bill, these include:

  1. Before training the model, implement certain cybersecurity protections, the capability to promptly enact a full shutdown, a written safety and security protocol, and take reasonable care to implement appropriate measures to prevent “critical harms”, defined as mass casualties, at least $500 million in damage, or other comparable harms.
  2. Before using the model or making it publicly available: assess whether the model is reasonably capable of causing or materially enabling a critical harm; record and retain test results from the assessment; and take reasonable care to implement appropriate safeguards.
  3. Neither use, unless exclusively for training or evaluation, nor make publicly available a model if there is an unreasonable risk it will cause or materially enable a critical harm.
  4. Beginning in 2026, annually retain a third-party auditor to perform an independent audit of a developer’s compliance with applicable duties.
  5. Make public and provide to the Attorney General redacted copies of safety and security protocol and auditors’ reports. Upon request, provide to the Attorney General unredacted copies of those documents, which are exempt from the California Public Records Act. Submit annual compliance statements to the Attorney General. Report to the Attorney General safety incidents within 72 hours.
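For readers who think in code, the duties above can be summarised as a simple compliance checklist. The sketch below is purely illustrative: the field names are invented rather than taken from the bill, and item 3 (not deploying a model that poses an unreasonable risk) is a judgement call that does not reduce neatly to a boolean.

```python
# Purely illustrative summary of the duties listed above. Field names are
# hypothetical and do not come from the text of SB 1047.
from dataclasses import dataclass, fields


@dataclass
class DeveloperDuties:
    cybersecurity_protections: bool  # item 1: pre-training cybersecurity protections
    full_shutdown_capability: bool   # item 1: ability to promptly enact a full shutdown
    written_safety_protocol: bool    # item 1: written safety and security protocol
    pre_release_assessment: bool     # item 2: assess and record critical-harm risk
    safeguards_implemented: bool     # item 2: reasonable-care safeguards
    annual_third_party_audit: bool   # item 4: independent audit, beginning in 2026
    ag_reporting_complete: bool      # item 5: protocols, audit reports, annual statements

    def all_met(self) -> bool:
        """True only if every listed duty has been satisfied."""
        return all(getattr(self, f.name) for f in fields(self))


# Example: a developer that has done everything except commission the audit
duties = DeveloperDuties(True, True, True, True, True, False, True)
print(duties.all_met())  # False
```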

Other provisions

One of the provisions also required companies to know what a customer intended to do with their product. That included determining whether their product would be used to create a frontier model. Such a step would affect the relationship between the AI vendor and the customer.

The bill also authorises the Attorney General to bring a civil action for violation of the bill. Given the provision above, any vendor who does not report a customer for creating a frontier model would risk being taken to court. In effect, it turns vendors into compliance officers.

To strengthen the policing of frontier models, the bill protects whistleblowers from retaliation by their employers for disclosing information about frontier models. That protection also applies when an AI model poses an unreasonable risk of critical harm, which widens the scope of the bill from frontier models to a broader risk category.

The California Government Operations Agency (GovOps) would have to create a Board of Frontier Models. This body would be responsible for defining the models covered under the legislation.

Developers were also required to test their models to ensure they did not pose a risk or threat. However, the bill did not clarify how the effectiveness of that testing would be assessed.

The challenges for Newsom

Newsom found himself walking a fine line with this bill. He was caught between what he seemingly accepts as necessary and the vast marketing machines deployed by the tech industry.

He openly acknowledges the need for legislation and that the AI industry needs to be regulated. Newsom said, “Let me be clear—I agree with the author—we cannot afford to wait for a major catastrophe to occur before taking action to protect the public. California will not abandon its responsibility.

“Safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable.”

In his statement on not signing the bill, Newsom commented on the issues of cost, computation and harm. He said, “By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology.

“Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 – at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”

He continued, “I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities. Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.”

Enterprise Times: What does this mean?

It seems disappointing that a bill that reads as balanced and reasonable was rejected. Interestingly, there seems to have been little discussion of harm from smaller models. Not covering both scenarios looks like a missed opportunity and suggests that the authors and supporters were focused on just one definition of harm.

If it had been expanded to cover those high-risk models, would Newsom still have rejected it? Based on the language in his statement, the answer is yes. And that, for those who want technology regulation, is the biggest disappointment.

So, where does this bill go now? It could be amended to create a second risk category to ensure that it covers those high-risk models. That would address Newsom’s primary objection, but there is no guarantee he would support it.

Whatever happens, California has missed an opportunity. Many will see this as the state once again bowing to the spending power of big tech. For now, however, AI technology continues to evolve with little in the way of realistic checks and balances.
