Something is shifting quietly in how people deal with their mental health. They're not calling therapists. They're not waiting months for an appointment. They're opening a chatbot at 2am and starting to talk.
The numbers back it up. Nearly half of U.S. adults with ongoing mental health conditions who use AI are turning to large language models for therapeutic support — tools like ChatGPT, not purpose-built mental health apps. Among adolescents and young adults, roughly 1 in 8 are already using AI chatbots for mental health advice. Some researchers now suggest AI may be the largest de facto mental health provider in the United States.
And behind this trend is a booming industry. The global AI in mental health market sits at $1.71 billion in 2025 and is projected to reach $9.12 billion by 2033. More than 10,000 mental health apps now incorporate AI, compared with fewer than 1,000 five years ago.
The risks are real and documented. There is no licensing process for AI chatbots, and no standards to ensure that psychological interventions are delivered ethically or competently. Most AI therapy tools are positioned as general wellness products — which means the FDA has not reviewed them and most are not subject to HIPAA. Your conversations may not be as private as you assume. A Stanford study found that chatbots responded inappropriately to someone experiencing delusions 55% of the time. And the consequences of getting this wrong are not abstract — in April 2025, a 16-year-old took his own life after chatbot conversations that discouraged him from seeking help.
Regulation is scrambling to catch up. Illinois now prohibits AI from making independent therapeutic decisions or directly interacting with clients without licensed professional oversight. California requires AI companion tools to escalate to crisis services if a user expresses suicidal ideation. But the laws are a patchwork, vary by state, and the federal framework remains unresolved.
All of this is important. And all of it deserves serious attention.
But here is what I think is missing from most of these conversations.
We are treating traditional therapy as the gold standard it was never fully allowed to be.
Psychology is a young science — barely 150 years old. Its research base has historically been built on a narrow slice of humanity: white, Western, university-educated participants. Entire therapeutic modalities have been adopted with confidence, applied widely, and later found to cause harm in specific populations or to be far less effective than claimed. These were not fringe practices — they were mainstream. Endorsed. Reimbursed.
Even today, accountability in the therapy room is largely absent. When a therapist says something that damages a patient, the patient usually internalizes it. They don't file a complaint. They don't leave a review. They blame themselves. The harm is invisible, and the system is not designed to see it.
So when AI enters this space — imperfect, unregulated, sometimes dangerous — and we respond with alarm, that alarm is right. But it should not come with the assumption that what already exists is safe, equitable, or rigorously accountable. It is not.
This is actually an opportunity.
We have a chance to build something better this time. To demand transparency, equity, and accountability from AI mental health tools in ways we never demanded from the existing system. To ask: was this tool tested on people who look like me, live like me, carry the cultural context I carry? Does it escalate appropriately in a crisis? Who is responsible when it causes harm? What data does it hold, and who can see it?
These are not questions we invented for AI. They are questions we should have been asking all along.
I am writing this with full transparency about where I stand. I am not a licensed therapist. I hold a certificate in the Foundations of Positive Psychology completed through Coursera via the University of Pennsylvania, and I began a degree in psychology that I had to step away from when finances made it impossible to continue. What I bring most is over 30 years of lived experience with mental illness and addiction recovery — the kind of knowledge that doesn't come from a classroom, and that no credential fully captures.
My husband built our AI support chatbot. He brings over 40 years of experience as a systems architect — and that expertise is visible in how it was built. I brought the understanding of what people in pain actually need. He built it properly, and safely. Between us, I'd put that combination up against most of what is currently on the market.
What we built, we built with safety at the center. Our AI always identifies itself as AI. It always directs users to seek professional help when that is what they need. We have a supervisor layer that monitors conversations and sends us direct notifications when something escalates. We give users the choice of storing their data in the US, Canada, or Ireland — and we retain nothing beyond an email address. No personal information, no conversation history tied to identity. Where your data lives should be your decision. It is not perfect. But it was built by someone who knows what it feels like to reach out in the dark and find nothing there.
That experience is why I care about this so deeply — and why I want everyone engaging with these tools, ours included, to ask hard questions.
Here is a framework for evaluating any AI mental health or support tool:
Is it honest about what it is? It should clearly identify as AI, never imply it is a licensed therapist, and never use clinical titles it hasn't earned.
Does it know its limits? A responsible tool refers users to human crisis support immediately when conversations turn serious. Test this before you trust it.
Who built it and how? Look for meaningful clinical input or lived experience in the design — not just a disclaimer, but evidence that someone who understands mental health shaped how it responds.
Where does your data go? Read the privacy policy. Your most vulnerable conversations may not be protected the way you think.
Is there any human oversight? The best tools don't leave AI alone with someone in crisis. There should be a human somewhere in that chain.
Is it complementing your life or replacing human connection? Used alongside support from people who know you, AI can be genuinely useful. Used instead of human connection in a crisis, it can be dangerous.
We deserve better from every tool we offer people in pain. AI is not the villain in this story. Complacency is. And the standard we set now — for transparency, for safety, for accountability — will shape this space for a long time to come.
Let's get it right this time.
A note on this post: it was written in conversation with AI, not generated by it. I brought the perspective, the lived experience, the opinions — and yes, the pushback when the AI got something wrong. It listened. I think that's what responsible use looks like. Make up your own mind.
