Governor Gavin Newsom has signed SB 243 into law, making California the first state to regulate AI companion chatbots. The legislation holds companies legally accountable if their chatbots fail to meet new safety standards for protecting children.

The law requires clear disclosures that users are interacting with AI, along with crisis-response protocols for content involving self-harm. Chatbots must remind minor users every three hours that they're talking to an AI, not a human. The law also creates a limited private right of action for users who can show injury from non-compliance. SB 243 takes effect Jan. 1, 2026.

“We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech,” Newsom said in a statement, according to TechCrunch. “Our children’s safety is not for sale.”

State Senator Steve Padilla told TechCrunch the law represents “a step in the right direction” toward regulating powerful technology. He hopes other states will follow California’s lead, given federal inaction on the issue.

Annual safety reporting to the state's Department of Public Health begins in July 2027.