California’s governor vetoes controversial AI safety bill that the tech industry called an innovation killer

Fortune· Photo by Justin Sullivan/Getty Images

California legislation aiming to limit the most serious threats posed by artificial intelligence was vetoed by Gov. Gavin Newsom today, following fierce debate within the tech community about whether its requirements would save human lives or crush AI advancement.

The bill would have set a de facto national standard for AI safety in the absence of federal law, and the potential for its passage set off an intense lobbying campaign by the tech industry to defeat it.

The bill, known as SB 1047, would have required companies building large-scale AI models—meaning those that cost more than $100 million to train—to run safety tests on those systems and take steps to limit any risks that they identify in the process.

Newsom said the bill was too broad and instead proposed a task force of researchers, led by the “godmother of AI” Fei-Fei Li, to come up with new guardrails for future legislation. Li, whose AI startup World Labs has raised $230 million, was a vocal opponent of SB 1047. She wrote in a Fortune op-ed that, while well-intended, the legislation would harm U.S. innovation.

The governor called for new language in a future bill that focuses only on AI deployed in high-risk environments and used to make critical decisions or handle sensitive data.

In his veto of the recent legislation, Newsom wrote: “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it.” He also noted that 32 of the world’s 50 largest AI companies are based in his state while calling attention to 17 bills addressing AI that he did sign in recent weeks, from measures curbing election deepfakes to requiring AI watermarking.

The safety guardrails proposed by SB 1047 sparked months of debate within the tech community about whether the bill would push AI innovation out of California or curb major threats posed by rapid unchecked advancement, like the escalation of nuclear war or development of bioweapons. Earlier this month a YouGov poll found nearly 80% of voters nationally supported the California AI safety bill.

Just last week, more than a hundred Hollywood stars including Shonda Rhimes and Mark Hamill came out in support of the AI safety bill, building on the actors guild SAG-AFTRA’s successful lobbying for protection from AI-generated copycats of their work.

ChatGPT developer OpenAI urged Newsom not to sign the legislation, arguing that AI regulation should be left to the federal government to avoid a confusing patchwork of laws that vary by state. And while Silicon Valley remains a breeding ground for AI innovation, OpenAI said California risked driving businesses out of the state to avoid burdensome regulation if SB 1047 passed.

The legislation also faced pushback from some Californian members of Congress, including political power broker Rep. Nancy Pelosi, who urged Newsom not to sign the bill in order to protect AI innovation.

State Sen. Scott Wiener, the sponsor of the legislation, has argued that his proposal is a common sense and “light-touch” measure that aligns with the voluntary safety commitments many tech companies, including OpenAI, have already made. And in the absence of comprehensive federal AI rules, the California lawmaker saw his proposal as an opportunity for California to lead in U.S. tech policy, just as it has previously on data privacy, net neutrality, and social media regulation.

“This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way,” Wiener said in a statement today.

Tesla CEO Elon Musk was also among the legislation's supporters, posting on X that “all things considered,” California should enact an AI safety law. Musk, whose xAI develops the Grok chatbot, said the endorsement was a “tough call” but that AI should ultimately be regulated just like any other technology that poses a risk to the public.

Anthropic, the buzzy AI startup that pitches itself as safety-focused, was heavily involved in ensuring that the final version of SB 1047 did not impose overly burdensome legal obligations on developers; its input led to amendments clarifying that an AI company won't be punished unless its model harms the public.

Anthropic declined to explicitly support or oppose the final proposal in its letter to Newsom but did say the state should create some AI regulatory framework, especially in the absence of any clear action from federal lawmakers.

This story was originally featured on Fortune.com