AI therapy chatbots can’t replace therapists — but they can cut paperwork, train clinicians and flag risk. Assistive uses help people; autonomy invites harm.
Therapy has a supply problem. Depending on the dataset you pick, roughly half of U.S. adults with a mental illness don’t receive treatment — a gap driven by cost, coverage and capacity. That unmet demand, paired with the promise of scale, is why AI therapists look irresistible to venture capital firms and patients alike, at least until you examine the safety risks. New research from Stanford lands a simple warning label: proceed with caution.
As a healthcare advisor, I see the damage when AI gives confident answers without medical context. In one published case, a patient swapped table salt for sodium bromide on a chatbot’s advice, developed bromide poisoning with psychosis, and recovered only after clinicians stopped the bromide.
If you’ve been tracking the space, this isn’t an academic quibble. On Aug. 12, Illinois became one of the first states to ban AI for mental-health therapy unless licensed professionals are involved, joining recent restrictions in Nevada and Utah. Lawmakers are signaling two things: the risk is high and the marketing is overdone.
What the Stanford Team Actually Found
- Stigma shows up in the outputs. When given clinical vignettes, chatbots expressed more stigma toward some conditions (e.g., schizophrenia, alcohol use disorder) than others, precisely the opposite of equitable care.
- Safety-critical misses. In conversational tests seeded with real therapy transcripts, some bots failed to recognize suicidal ideation and enabled harmful lines of thought — classic “sycophancy” behavior.
- Model size doesn’t equal safety. Bigger didn’t mean better behaved. Newer large language models still displayed the same problematic patterns.
Where AI Does Make Sense Today
Used well, AI belongs behind the scenes — not in the therapist’s chair. Keep it out of safety-critical clinical judgment and put it to work where it amplifies human care: handling intake, documentation, scheduling and billing so clinicians win back time for patients.
In training, it can play the standardized patient to help clinicians rehearse difficult conversations and generate clean post-session summaries for supervision. And for the public, keep it to low-risk self-support — journaling prompts, CBT-style reflections and light coaching — wrapped in clear disclaimers and with instant handoffs to a human whenever risk appears.
Even so, the risks aren’t hypothetical. A recent New York Times op-ed describes a mother who learned, after her daughter died by suicide, that the teen had confided in ChatGPT. The bot offered appropriate supportive language when she disclosed suicidal thoughts, but unlike a licensed clinician, it didn’t (and can’t) escalate to a third party. To be clear, ChatGPT isn’t marketed as a therapist, yet the episode spotlights a hard question: when apps that present themselves as “AI therapists” encounter imminent-risk disclosures, who is obligated to report and intervene? Today, that duty of care remains murky.
For Employers, Plans and Investors: Buy for Assistive, Not Autonomous
When evaluating AI therapy chatbots or other mental-health AI, insist — in writing — on a clear clinical scope that spells out what the tool will and won’t do, with a hard line at diagnosis, risk assessment, crisis navigation and treatment decisions. Require human-in-the-loop oversight for any feature that could touch risk, with escalation to licensed clinicians in seconds.
Engineer safety by auditing for stigmatizing language, missed risk cues and hallucinations — and reward correct refusals and clean handoffs. Finally, market honestly: if it isn’t clinician-supervised care, don’t call it therapy. Regulators are watching.
The Bigger Picture
Real change in therapy happens inside a sustained, emotional human relationship. Progress often comes when a therapist survives your anger, withdrawal and tests and still leans in. A chatbot can simulate steadiness, but it can’t be wounded, frustrated or changed by you. That makes AI a useful practice partner for skills — not the human bond that truly transforms people.
Let AI clear the inbox so therapists can think, plan and see patients, and, just as important, avoid burnout. Machines do paperwork; humans do people-work. Therapy is a relationship, not a puzzle to be solved with logic and code. Used thoughtfully, these tools amplify care; used poorly, they drift into roles only clinicians should play.


