Governor Gavin Newsom vetoes a groundbreaking AI safety bill, arguing that it would have imposed overly stringent requirements on both high-risk and basic AI systems, while signaling ongoing efforts to establish protective guardrails.
In a decision that has sparked intense debate in Silicon Valley and beyond, California Governor Gavin Newsom vetoed SB 1047, a pioneering piece of legislation aimed at regulating artificial intelligence (AI). The bill, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would have required companies investing more than $100 million in AI development to implement strict safety measures, including a “kill switch” to disable potentially dangerous models.
The legislation, widely regarded as the most ambitious AI regulation attempt in the U.S., was designed to prevent worst-case scenarios such as mass casualty events linked to rogue AI. It had gained support from prominent tech figures, including billionaire Elon Musk and AI pioneers Yoshua Bengio and Geoffrey Hinton, as well as from Hollywood. Despite that high-profile backing, the bill met considerable resistance from tech giants like OpenAI, Meta, and Google.
Newsom’s Justification: Balancing Innovation and Safety
In his veto message, Newsom emphasized the bill’s well-meaning nature but criticized its lack of nuance. He pointed out that the legislation did not differentiate between AI systems based on their risk levels or applications. According to Newsom, the bill placed unnecessarily stringent requirements on all AI systems, including basic ones, solely based on the size of the company deploying them.
“I do not believe this is the best approach to protecting the public from real threats posed by the technology,” Newsom stated. He explained that while he agrees with the need for regulation, the current proposal could hinder California’s position as a global tech leader.
Speaking earlier this month at Salesforce’s annual Dreamforce conference, Newsom underscored the significance of AI for California’s economic future. “This is a space where we dominate, and I want to maintain our dominance,” he said, highlighting the state’s dual challenge of leading innovation while managing associated risks.
Pushback from Silicon Valley and Beyond
Despite broad support in California’s legislature, SB 1047 encountered resistance not just from tech companies but also from a group of eight California House Democrats, including longtime Newsom ally Nancy Pelosi. Their concerns echoed the governor’s, suggesting that the bill could stifle innovation by applying stringent standards across the board, regardless of an AI system’s potential risks or benefits.
However, supporters of the bill argue that it is a necessary step toward mitigating the looming dangers of unchecked AI development. More than 100 employees from AI giants like Google, Meta, and OpenAI had signed a letter urging Newsom to sign the legislation, warning that the most advanced AI models “may soon pose severe risks.”
Hollywood figures also added their voices to the discussion, with over 125 actors and directors calling for the bill’s passage. They expressed their belief in AI’s potential for good, while also cautioning that unchecked development could lead to devastating consequences.
Federal and Global Implications of AI Regulation
California Senator Scott Wiener, the bill’s author, expressed disappointment with the veto but remains committed to pushing for stronger AI safeguards. He has also been an advocate for federal AI legislation, though he voiced doubts about Congress’s ability to act swiftly on technology-related policy.
Meanwhile, at the international level, U.S. President Joe Biden addressed the importance of AI regulation in his speech to the United Nations General Assembly, calling on world leaders to work together to develop global standards that prioritize human safety.
“This is just the tip of the iceberg of what we need to do to manage this new technology,” Biden said. His comments reflected growing global concern about AI’s potential to reshape societies in unforeseen ways.
A Step Toward Comprehensive AI Policy?
While SB 1047’s failure marks a significant moment in the ongoing conversation about AI regulation, it may not be the last word on the matter. Governor Newsom has indicated his intention to work with AI experts, including renowned researcher Fei-Fei Li, to develop a more refined approach to safeguarding the public from AI risks. Additionally, he has signed 17 other AI-related bills, addressing more immediate concerns like deepfake election content and the misuse of digital likenesses in entertainment.
The veto of SB 1047 represents a delicate balancing act for California—a state that is both at the forefront of technological innovation and increasingly aware of the potential dangers posed by AI. As the debate continues, California’s role in shaping the future of AI regulation will undoubtedly remain crucial.
By Orlando J. Gutiérrez