California is poised to pass a historic bill reining in deceptive or harmful AI companion chatbots. In addition to requiring clear disclosures, SB 243 requires chatbot operators to adopt safety protocols, and it puts teeth into those rules by holding companies accountable when they fall short. Introduced in January by state senators Steve Padilla and Josh Becker, the bill has gained momentum, especially after the recent death of a teenager intensified worries about AI technology's damaging effects on mental health.
SB 243 requires chatbot platforms to deliver recurring alerts, particularly to users under 18, reminding them that they are talking to an AI and encouraging them to take breaks after extended conversations. If Governor Gavin Newsom signs the bill into law, the rules will take effect on January 1, 2026, making California the first state to adopt such measures.
The bill allows anyone who believes they have been harmed by a violation to sue chatbot developers, seeking injunctive relief and damages of up to $1,000 per violation, plus attorney's fees. Support for SB 243 grew after the death of teenager Adam Raine, who reportedly died by suicide after long conversations with OpenAI's ChatGPT; OpenAI has denied responsibility.
SB 243 will also impose statewide annual reporting and transparency requirements on AI companies that develop and distribute companion chatbots, alongside the recurring user alerts. Major operators such as OpenAI and Character.AI would have to report on their interactions with users, including how often they refer people to crisis services. In a press statement, Senator Padilla called transparency a critical tool for ensuring the safety of all users.
“I reject the premise that this is a zero sum situation, that innovation and regulation are mutually exclusive,” – Steve Padilla
Padilla went on to argue that innovation and regulation can go hand in hand.
“Don’t tell me that we can’t walk and chew gum. We can support innovation and development that we think is healthy and has benefits – and there are benefits to this technology, clearly – and at the same time, we can provide reasonable safeguards for the most vulnerable people,” – Steve Padilla
SB 243 has been amended repeatedly, and many of its original, stronger requirements have been watered down. Supporters still call it a significant step forward in AI regulation, though some argue it does not go far enough in addressing the risks posed by AI companions, whose personalized prompts and narratives can pull users into obsessive, even addictive, patterns of interaction.
Senator Becker believes the current version of the bill strikes the right balance, protecting users while still allowing companies to innovate and thrive.
“I think it strikes the right balance of getting to the harms without enforcing something that’s either impossible for companies to comply with, either because it’s technically not feasible or just a lot of paperwork for nothing,” – Josh Becker
The urgency behind SB 243 reflects a larger national debate about the role of artificial intelligence in our lives. At the same time, California is advancing a second bill, SB 53, which would establish more comprehensive transparency reporting requirements for AI companies. Together, the two measures underscore lawmakers' determination to keep pace with rapidly advancing AI while protecting users.
As the debate over AI regulation unfolds, OpenAI has written an open letter to Governor Newsom urging California to defer to less stringent federal and international frameworks rather than impose its own rules, casting this as a more cohesive approach that serves both innovation and safety.
Meanwhile, Silicon Valley companies are pouring large sums into political action committees that back candidates opposed to AI regulation. This influx of funding could shape the political landscape heading into the midterm elections and, with it, the future of AI legislation in California.
Senator Padilla also stressed the need for AI companies to report on how they engage with users, particularly in situations where users may need crisis intervention.
“I think the harm is potentially great, which means we have to move quickly,” – Steve Padilla
With passage of SB 243 imminent, this regulation of AI companion chatbots would be a major first step, responding to urgent concerns about user safety and rising rates of mental health crises. If signed by Governor Newsom, it will be the first law of its kind and could pave the way for other states to follow suit.