CA SB53
Bill
Status: 9/29/2025
Primary Sponsor: Scott Wiener
AI Summary
- Enacts the Transparency in Frontier Artificial Intelligence Act (TFAIA), requiring large AI developers (over $500 million in annual revenue) that train foundation models using more than 10^26 computing operations to publish safety frameworks describing how they assess and mitigate catastrophic risks
- Mandates that frontier developers report critical safety incidents to the Office of Emergency Services within 15 days, or within 24 hours if an incident poses imminent risk of death or serious injury; reportable incidents include unauthorized model weight access, loss of model control, and AI deception against developers
- Defines "catastrophic risk" as a risk that could cause death or serious injury to more than 50 people, or over $1 billion in property damage, from AI-enabled weapons assistance, autonomous cyberattacks or crimes, or models evading developer control
- Establishes whistleblower protections prohibiting frontier developers from retaliating against employees who report safety concerns or TFAIA violations to the Attorney General, federal authorities, or internal supervisors with investigative authority
- Creates a consortium within the Government Operations Agency to develop a framework for "CalCompute," a public cloud computing cluster to be housed at the University of California, with a report due to the Legislature by January 1, 2027
- Imposes civil penalties of up to $1 million per violation, enforced by the Attorney General, and preempts local ordinances adopted after January 1, 2025 that specifically regulate frontier developers' catastrophic risk management
Legislative Description
Artificial intelligence models: large developers.
Last Action (9/29/2025)
Chaptered by Secretary of State. Chapter 138, Statutes of 2025.