MD SB936 (Bill)
Status: 2/3/2025
Primary Sponsor: Katie Hester
AI Summary
- Requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from algorithmic discrimination based on age, race, disability, sex, religion, and other protected characteristics, effective February 1, 2026
- Mandates developers provide deployers with documentation disclosing intended uses, known limitations, discrimination risks, and mitigation measures before selling or licensing high-risk AI systems
- Requires deployers to implement risk management policies, complete impact assessments before deployment and after significant updates, and maintain records for at least 3 years
- Obligates deployers to notify consumers when high-risk AI is used for consequential decisions (employment, housing, lending, healthcare, education, legal services) and provide explanations for adverse decisions along with opportunities to correct data and appeal
- Establishes civil penalties of up to $1,000 per violation ($10,000 for willful violations), authorizes Attorney General enforcement with a 45-day cure period, and allows consumers to bring private civil actions after filing administrative complaints
Legislative Description
Consumer Protection - High-Risk Artificial Intelligence - Developer and Deployer Requirements (Disclosure)
Last Action: Hearing 2/27 at 1:00 p.m. (2/6/2025)