Controversial California AI bill aims to prevent major disasters

The controversial SB 1047 bill seeks to regulate large AI models to prevent critical harms, but has met strong opposition from tech giants and researchers.


California is set to vote on SB 1047, a bill designed to prevent catastrophic harm from AI systems. The bill targets large AI models—those costing over $100 million to train and using immense computing power—requiring their developers to implement strict safety protocols. These include emergency shut-off mechanisms and third-party audits. A new Frontier Model Division (FMD) would oversee compliance and enforce penalties for violations.

While the bill aims to mitigate risks such as AI-driven cyberattacks or weapon creation, it has sparked significant controversy. Silicon Valley leaders, including tech giants and venture capitalists, argue that SB 1047 could stifle innovation and impose undue burdens on startups. Critics claim it may hinder the development of new AI technologies and drive innovation away from California.

Supporters of the bill, including State Senator Scott Wiener and prominent AI researchers, contend that preemptive regulation is essential to safeguard against potential AI disasters. They argue it is crucial to establish rules before a serious incident occurs, rather than after. The bill is expected to clear the Senate, after which its fate would rest with Governor Gavin Newsom.

If signed into law, SB 1047 would not take effect immediately: the FMD is not scheduled to be established until 2026. The bill is also expected to face legal challenges from stakeholders concerned about its implications for the tech industry.