NE LB1083
Bill
Status: 1/15/2026
Primary Sponsor: Tanya Storer
AI Summary
- Requires large frontier AI developers (those with $500M+ annual revenue training models using 10^26+ computing operations) and large chatbot providers ($25M+ revenue, 1M+ monthly users) to publish detailed public safety and child protection plans on their websites
- Defines "catastrophic risk" as AI contributing to 50+ deaths/injuries or $1B+ in damages through weapons assistance, autonomous criminal conduct, cyberattacks, or evasion of developer control; "child safety risk" covers chatbot behavior that could cause death, injury, or severe emotional distress to minors
- Mandates reporting of safety incidents to the Attorney General within 15 days (24 hours for imminent risks of death or injury), with quarterly submission of internal catastrophic risk assessments by large frontier developers
- Establishes whistleblower protections prohibiting retaliation against employees who report safety concerns or violations, with civil remedies including damages and attorney's fees available within one year of the violation
- Authorizes Attorney General enforcement with civil penalties of up to $1 million per violation for large frontier developers and $50,000 per violation for large chatbot providers; the act becomes operative January 1, 2027
Legislative Description: Adopt the Transparency in Artificial Intelligence Risk Management Act, create a fund, and change provisions relating to records which may be withheld from the public
Last Action: Storer priority bill (2/19/2026)