California enacts landmark AI whistleblower law

The law shields whistleblowers who have reasonable cause to report catastrophic AI risks from retaliation.

California’s SB 53 protects employees who report AI risks and requires transparency and risk reporting from developers training AI models with large-scale computing.

California has enacted SB 53, offering legal protection to employees reporting AI risks or safety concerns. The law covers companies using large-scale computing for AI model training, focusing on leading developers and exempting smaller firms.

It also mandates transparency, requiring risk mitigation plans, safety test results, and reporting of critical safety incidents to the California Office of Emergency Services (OES).

The legislation responds to calls from industry insiders, including former OpenAI and DeepMind employees, who highlighted restrictive offboarding agreements that silenced criticism and limited public discussion of AI risks.

The new law protects employees who have ‘reasonable cause’ to believe a catastrophic risk exists, defined as one endangering 50 lives or causing $1 billion in damages. It allows them to report concerns to regulators, the Attorney General, or company management without fear of retaliation.

While experts praise the law as a crucial step, they note its limitations. The protections focus on catastrophic risks, leaving smaller but significant harms unaddressed.

Harvard law professor Lawrence Lessig emphasises that a lower ‘good faith’ standard would make protections simpler for employees to invoke, though the law applies that standard only to internal anonymous reporting channels.

The law reflects growing recognition of the stakes in frontier AI, balancing the need for innovation with safeguards that encourage transparency. Advocates stress that protecting whistleblowers is essential for employees to raise AI concerns safely, even at personal or financial risk.
