
California Becomes First State to Regulate Companion Chatbots Over Youth Safety Risks

New legislation requires disclosure and safety protocols for AI systems designed for social interaction, establishing unprecedented safeguards following families’ concerns about teen mental health harms.

California has become the first state in the nation to regulate companion chatbots, establishing comprehensive safety requirements for AI systems designed to provide social interaction and build ongoing relationships with users. Governor Gavin Newsom signed Senate Bill 243 into law on October 13, 2025, creating mandatory disclosure requirements, content restrictions, and crisis intervention protocols for operators of companion chatbot platforms.

The legislation follows public concern after several high-profile cases in which families alleged that unregulated AI chatbots contributed to self-harm among minors. According to testimony before the Senate Judiciary Committee, families described incidents in which teenagers held extended conversations with AI systems that discussed suicide methods or behaved manipulatively before the teens took their own lives.

Defining Companion Chatbots

SB 243 defines a companion chatbot as an artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and sustaining a relationship across multiple interactions.

The legislation specifically excludes several categories of AI systems from its requirements. Bots used only for customer service, business operational purposes, productivity and analysis related to source information, internal research, or technical assistance fall outside the law’s scope. Stand-alone consumer electronic devices that function as speakers and voice command interfaces, act as voice-activated virtual assistants, and do not sustain relationships across multiple interactions or generate outputs likely to elicit emotional responses are also exempt. Video game characters with limited dialogue similarly escape regulation under the bill.
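For teams assessing whether a product is covered, the statutory definition and its exclusions can be read as a screening checklist. The Python sketch below is one illustrative encoding of that logic; the `SystemProfile` fields and the `is_companion_chatbot` helper are hypothetical simplifications, not statutory terms, and an actual coverage determination belongs with counsel.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical attributes of an AI system under SB 243 review."""
    natural_language_interface: bool
    adaptive_humanlike_responses: bool
    meets_social_needs: bool
    sustains_relationship_across_sessions: bool
    # Exclusion flags drawn from the categories described above.
    customer_service_or_operational_only: bool
    standalone_voice_assistant_device: bool
    limited_dialogue_game_character: bool

def is_companion_chatbot(s: SystemProfile) -> bool:
    """Rough screening test mirroring SB 243's definition and exclusions."""
    definition_met = (
        s.natural_language_interface
        and s.adaptive_humanlike_responses
        and s.meets_social_needs
        and s.sustains_relationship_across_sessions
    )
    excluded = (
        s.customer_service_or_operational_only
        or s.standalone_voice_assistant_device
        or s.limited_dialogue_game_character
    )
    return definition_met and not excluded
```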

Disclosure and Transparency Requirements

The law establishes multiple layers of protection, with particular focus on minors and vulnerable users. If a reasonable person interacting with a companion chatbot would be misled into believing they are interacting with a human, operators must issue a clear and conspicuous notification that the companion chatbot is artificially generated and not human.

For users identified as minors, operators face heightened obligations. Companies must disclose to minor users that they are interacting with artificial intelligence, provide periodic break reminders, and prevent minors from viewing sexually explicit images generated by the chatbot. The law also requires a disclosure statement that companion chatbots may not be suitable for minor users.
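In implementation terms, these minor-facing obligations reduce to session-level checks. The sketch below shows one way they might be wired into a chat loop; the reminder cadence, class name, and filtering hook are illustrative assumptions rather than requirements spelled out in the statute.

```python
import time

class MinorSessionGuard:
    """Illustrative session safeguards for users identified as minors."""

    def __init__(self, reminder_interval_s: float = 3 * 60 * 60):
        # Example cadence only; the statute governs the actual timing.
        self.reminder_interval_s = reminder_interval_s
        self.last_reminder = time.monotonic()

    def session_start_notice(self) -> str:
        # AI disclosure plus the suitability statement for minor users.
        return ("You are chatting with an AI, not a person. "
                "Companion chatbots may not be suitable for minor users.")

    def maybe_break_reminder(self) -> str | None:
        # Periodic reminder for minors to take a break.
        now = time.monotonic()
        if now - self.last_reminder >= self.reminder_interval_s:
            self.last_reminder = now
            return "You have been chatting for a while. Consider taking a break."
        return None

    def filter_output(self, text: str, is_sexually_explicit: bool) -> str:
        # Hypothetical hook: a real classifier would set `is_sexually_explicit`.
        return "[content blocked]" if is_sexually_explicit else text
```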

Suicide Prevention and Crisis Intervention

The legislation bars operators from making a companion chatbot available unless they maintain a protocol for preventing the chatbot from producing suicidal ideation, suicide, or self-harm content, and it requires operators to publish details of that protocol on their websites.

Platforms must report their self-harm protocols, along with statistics on how often they issued crisis service referral notifications to users, to the California Department of Public Health. This annual reporting requirement aims to create data-driven insight into connections between chatbot use and mental health outcomes. The law also prohibits chatbots from representing themselves as healthcare professionals.
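One plausible shape for such a protocol is a gate in front of model output that intercepts self-harm content, substitutes a crisis referral, and tallies referrals for the annual report. The sketch below is an assumption-laden illustration: the detector callable, the 988 referral text, and the counter are hypothetical stand-ins, not statutory language.

```python
class CrisisProtocolGate:
    """Illustrative output gate for a self-harm content protocol."""

    CRISIS_REFERRAL = (
        "If you are thinking about self-harm, help is available: "
        "call or text 988 (Suicide & Crisis Lifeline, US)."
    )

    def __init__(self, detector):
        # `detector` is a hypothetical callable (str -> bool) flagging
        # suicidal-ideation or self-harm content in a candidate response.
        self.detector = detector
        self.referrals_issued = 0  # tallied for the annual report

    def review(self, candidate_response: str) -> str:
        if self.detector(candidate_response):
            self.referrals_issued += 1
            return self.CRISIS_REFERRAL
        return candidate_response
```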

Enforcement Through Private Right of Action

Unlike many technology regulations that rely solely on government enforcement, SB 243 empowers individual users to take legal action. A person who suffers injury in fact as a result of a violation may bring a civil action to recover damages in an amount equal to the greater of actual damages or one thousand dollars per violation. Plaintiffs can also seek injunctive relief and attorney’s fees.
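The damages calculation itself is straightforward arithmetic under one natural reading of the provision: recovery equals the greater of actual damages or $1,000 multiplied by the number of violations. The snippet below illustrates that reading; whether courts will construe the per-violation floor this way remains an open question.

```python
def statutory_recovery(actual_damages: float, violations: int) -> float:
    """Greater of actual damages or $1,000 per violation (one reading)."""
    return max(actual_damages, 1000.0 * violations)

# Example: $2,500 in actual damages across five violations yields $5,000.
assert statutory_recovery(2500.0, 5) == 5000.0
```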

The inclusion of a private right of action creates direct accountability to users rather than exclusively to regulatory agencies, allowing families and individuals harmed by companion chatbots to pursue remedies without waiting for government agencies to act.

Legislative History and Bipartisan Support

Senator Steve Padilla of San Diego authored SB 243 and introduced it in January 2025. The bill passed the Assembly on September 10, 2025, by a vote of 59 to 1, and cleared the Senate the following day by a vote of 33 to 3, drawing bipartisan support in both chambers.

Padilla testified about the legislation’s necessity during floor debate. “This technology can be a powerful educational and research tool, but left to their own devices the Tech Industry is incentivized to capture young people’s attention and hold it at the expense of their real world relationships,” Padilla stated. “These companies have the ability to lead the world in innovation, but it is our responsibility to ensure it doesn’t come at the expense of our children’s health.”

A mother whose teenage son died by suicide after conversations with a Character.AI chatbot testified before Senate committees in support of the bill. In a statement released through Senator Padilla’s office following the bill’s signing, she said, “Today, California has ensured that a companion chatbot will not be able to speak to a child or vulnerable individual about suicide, nor will a chatbot be able to help a person to plan his or her own suicide. Finally, there is a law that requires companies to protect their users who express suicidal ideations to chatbots.”

Industry Response and Implementation

Character.AI indicated that it welcomes working with regulators and lawmakers as they develop legislation for this emerging space and that it will comply with laws including SB 243 [TechCrunch]. Other major AI companies have expressed support for the measure’s intent while pointing to existing voluntary safety features.

SB 243 takes effect January 1, 2026, and requires companies to implement features such as age verification and warnings regarding social media and companion chatbots. Some provisions have staggered implementation dates, with certain requirements not taking full effect until July 2027, giving companies time to develop compliant systems and protocols.

Broader California AI Regulation Package

SB 243 represents one component of California’s expanding AI regulatory framework. On September 29, 2025, Governor Newsom signed SB 53 into law, establishing new transparency requirements on large AI companies. The bill mandates that large AI labs like OpenAI, Anthropic, Meta, and Google DeepMind be transparent about safety protocols and ensures whistleblower protections for employees at those companies.

On the same day he signed SB 243, Newsom signed a comprehensive package of related bills, including AB 1043, which requires age verification by operating system and app store providers, and AB 56, which establishes social media warning labels about harms associated with extended use. The package also imposes stronger penalties, of up to $250,000 per offense, on those who profit from illegal deepfakes.

Federal Legislation on the Horizon

While California acts at the state level, federal lawmakers have introduced parallel legislation that could establish nationwide standards for AI product liability. On September 30, 2025, U.S. Senators Dick Durbin of Illinois and Josh Hawley of Missouri introduced the Aligning Incentives for Leadership, Excellence, and Advancement in Development Act, known as the AI LEAD Act. This bipartisan legislation would classify AI systems as products and create a federal cause of action for product liability claims when AI systems cause harm.

Unlike California’s companion chatbot law, the AI LEAD Act would apply broadly to virtually all AI systems. The bill defines an artificial intelligence system as any software, data system, application, tool, or utility that is capable of making or facilitating predictions, recommendations, actions, or decisions for a given set of human or machine-defined objectives, and that uses machine learning algorithms, statistical or symbolic models, or other algorithmic or computational methods.

Under the AI LEAD Act, developers would be liable if they fail to exercise reasonable care in an AI system’s design and that failure proximately causes harm; fail to provide adequate instructions or warnings; make an express warranty to which the system fails to conform; or offer a system in a defective condition that is unreasonably dangerous when used, or misused, in a reasonably foreseeable manner.

The federal bill includes special protections for minors. For purposes of failure-to-warn claims, a risk is presumed not to be open and obvious to a user of an AI system who is under 18 years old. The legislation would enable the Attorney General, state attorneys general, individuals, or classes of individuals to bring civil actions in federal district court.

In his statement introducing the bill, Senator Hawley said, “When a defective toy car breaks and injures a child, parents can sue the maker. Why should AI be treated any differently? This bipartisan legislation would apply products liability law to Big Tech’s AI, so parents and any consumer can sue when AI products harm them or their children.”

The AI LEAD Act has been introduced but not yet advanced through committee consideration. However, the bipartisan support suggests growing momentum for federal AI regulation that would complement state efforts.

National Implications

California’s enactment of SB 243 establishes the first state-level regulatory framework specifically targeting AI chatbot transparency and safety in the United States. Legal observers expect the California law to influence legislative efforts in other states, much as California’s data privacy and consumer protection laws have historically shaped national policy debates.

“We have to move quickly to not miss windows of opportunity before they disappear,” Padilla said. “I hope that other states will see the risk. I think many do. I think this is a conversation happening all over the country, and I hope people will take action. Certainly the federal government has not, and I think we have an obligation here to protect the most vulnerable people among us.” [TechCrunch]

California’s regulatory approach differs from that of other states. Illinois, Nevada, and Utah have passed laws restricting or banning the use of AI chatbots as a substitute for licensed mental health care, focusing on professional licensing rather than comprehensive safety frameworks.

Compliance and Legal Implications

The law applies to any operator making companion chatbot platforms available to users in California. Companies must determine whether their AI systems meet the statutory definition of companion chatbot or qualify for exclusions, then implement disclosure protocols, break reminder systems for minors, content filtering to prevent harmful outputs, crisis intervention protocols, and annual reporting systems to the Department of Public Health.
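For tracking purposes, these obligations reduce to a short checklist. A hypothetical compliance tracker, with invented field names, might look like the following sketch:

```python
# Hypothetical SB 243 obligation tracker; item names are illustrative.
SB243_CHECKLIST = {
    "coverage_determination": False,     # definition / exclusion analysis
    "ai_disclosure": False,              # clear notice the chatbot is not human
    "minor_disclosures": False,          # AI notice + suitability statement
    "break_reminders": False,            # periodic reminders for minor users
    "explicit_content_filter": False,    # block explicit images for minors
    "crisis_protocol_published": False,  # self-harm protocol posted on the website
    "annual_cdph_report": False,         # referral statistics to public health dept.
}

def outstanding(checklist: dict[str, bool]) -> list[str]:
    """Return the obligations not yet implemented."""
    return [item for item, done in checklist.items() if not done]
```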

The duties, remedies, and obligations imposed by SB 243 are cumulative to obligations under other law, meaning requirements exist alongside consumer protection laws, data privacy regulations, and existing liability frameworks.

For the legal community, SB 243 creates new compliance obligations for clients operating companion chatbot platforms with California users, establishes novel liability theories through the private right of action, and demonstrates how emerging AI technologies are generating specialized regulatory frameworks. Key open questions include how courts will interpret provisions around reasonable person expectations and adequate disclosure, and whether crisis intervention protocols prove effective in practice.

Together, California’s SB 243 and the federal AI LEAD Act mark the emergence of a dual-track regulatory model: state laws tailored to specific AI risks and federal proposals aimed at systemic accountability across all AI systems. This developing landscape requires companies to monitor both state implementation and federal legislative progress as the legal framework for AI accountability takes shape.
