California Governor Gavin Newsom vetoes landmark AI safety bill
California Governor Gavin Newsom has vetoed a bill aimed at regulating powerful artificial intelligence models, citing concerns that the legislation was too stringent and could stifle innovation. The bill, known as SB-1047, would have imposed some of the first AI regulations in the US, making artificial intelligence companies legally liable for damage caused by their models. Newsom acknowledged that the bill was "well-intentioned" but warned of unintended consequences, such as pushing development toward smaller, specialized models that may be equally or even more dangerous. His decision comes despite support for the bill from some tech industry figures, including Elon Musk. California already has a number of AI laws on the books targeting potential harms, but this bill was seen as a more comprehensive effort to regulate the industry.
> This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress's continuing paralysis around regulating the tech industry in any meaningful way.

> We dominate this space and I don't want to lose that competitiveness.

> By focusing only on the most expensive and large-scale models, SB-1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology.

> While well-intentioned, SB-1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions—so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.

> Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB-1047—at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.

> We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure.