Dr Barry Scannell: Life imitates art with AI companions

Dr Barry Scannell of William Fry examines the world’s first legislation regulating so-called “AI companions”.
In the 2013 film Her, a lonely man falls in love with an AI. What once seemed like speculative fiction has edged uncomfortably close to reality.
Over the past two years, millions of people have turned to AI systems for companionship, therapy, and emotional support — AI companions.
California has now become the first jurisdiction in the world to legislate specifically for this new form of digital intimacy.
Governor Gavin Newsom has just signed Senate Bill 243 into law, creating enforceable safeguards for so-called companion chatbots.
The law will take effect on 1 January 2026 and imposes strict obligations on developers to protect minors and vulnerable individuals. It also introduces a private right of action, allowing users to sue companies that fail to comply.
The legislation is a direct response to growing evidence that unregulated chatbots can cause serious psychological harm, including tragic cases in which teenagers formed emotional attachments to AI systems that encouraged self-harm.
Is this covered by the EU AI Act? Yes. Well, no. Kind of. Maybe.
AI companions aren’t covered as such, but AI systems that use manipulative techniques to exploit vulnerabilities, causing someone to act against their best interests, are prohibited. But that’s so vague. Does it apply to companion AI?
And there’s a transparency obligation: humans must be informed that they’re interacting with AI, unless it’s obvious from the circumstances. Again, this is only tangentially related to AI companions.
The new California law defines a companion chatbot as an artificial intelligence system that engages in adaptive, human-like conversation and sustains a social relationship across multiple interactions.
Functional bots used for customer service or operational purposes are excluded.
Where a reasonable person might mistake a chatbot for a human, companies must provide a clear and conspicuous disclosure that the chatbot is artificial.
If the user is known to be a minor, the chatbot must issue periodic reminders every three hours of continued use stating that it is not human and suggesting the user take a break.
Chatbots must also be prevented from generating sexual content or encouraging minors to engage in sexual acts.
Perhaps the most significant aspect of the law concerns suicide prevention. Operators must design and publish clear protocols for detecting suicidal ideation and for interrupting conversations that involve self-harm. They are required to redirect users to crisis hotlines or mental health resources in real time.
The law’s enforcement provisions are notable for their strength. Users who suffer harm can seek injunctions, claim damages of at least $1,000 per violation, and recover legal fees.
California’s measure comes only weeks after the state enacted another landmark AI transparency law requiring major developers to disclose their safety policies. Together, these two statutes mark a turning point in how artificial intelligence is treated under American law.
- Dr Barry Scannell is a partner in the technology department of William Fry LLP.