CA SB1047
AI Summary
SB 1047 Summary
- Requires developers of "covered models" (large AI systems trained with over $100 million in computing costs) to implement safety protocols, cybersecurity protections, and the capability for full shutdown before initial training begins.
- Mandates that developers conduct annual safety assessments, retain third-party auditors beginning January 1, 2026, and submit compliance statements to the Attorney General; prohibits deploying covered models that pose an unreasonable risk of causing "critical harm" (weapons of mass destruction, mass casualties, or cyberattacks on critical infrastructure).
- Requires AI safety incidents to be reported within 72 hours and requires computing cluster operators to assess whether customers intend to train covered models and to implement the capability for prompt shutdown.
- Creates the Board of Frontier Models within the Government Operations Agency to update AI safety definitions and auditing standards annually based on technological developments and stakeholder input.
- Provides whistleblower protections prohibiting retaliation against employees who report safety violations and establishes civil penalties of up to 30% of training costs for violations that cause harm; also mandates development of "CalCompute," a public cloud computing cluster to advance safe, ethical AI research.
Legislative Description
Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
Last Action
In Senate. Consideration of Governor's veto pending.
9/29/2024