People ask me all the time whether I’m worried that AI will take over my job as a therapist.
A few years ago, I would have confidently said no. Human beings seek human connection, someone who can sit with the messiness of being alive, who has grappled with their own dark nights of the soul. I even felt reassured when a magazine published a list of “AI-proof” careers and psychotherapist made the cut. See? I thought. We’re safe. People need people.
But the past two years have shaken that confidence. I was something of an early adopter of AI tools, fascinated by what they could do and curious about how they might help my clients. I still think they can be genuinely useful for certain tasks. But I’ve also watched something else happening: a consolidation of power so rapid and so enormous that it feels almost mythic. Bloomberg columnist Matt Levine once wrote that companies like OpenAI or Anthropic could “create God and then ask it for money” (Levine, 2023). I haven’t been able to stop thinking about that line. The biggest pattern of the past couple of decades is that Silicon Valley keeps remaking the world in pursuit of scale, profit, and novelty, while remaining almost completely uninterested in the fallout.
Therapy is very much part of that fallout.
Most people don’t think about the fact that AI lives inside enormous, resource-hungry data centers that companies like Amazon are racing to build. My own community in Tucson, Arizona, recently fought one. We were told it would be “water positive,” that it would secure our place in the next tech boom, that it would bring jobs. But it felt like propaganda, and my neighbors, people who had done their homework, challenged every slide the developers tried to peddle at our community town halls. When the project was voted down, the company refused to accept the result and began pushing it through anyway. It was the thinnest veneer of progress laid over a deeper engine of corporate greed.
This same dynamic is creeping into mental healthcare. I’m not actually afraid for my own job. If we ever reach the point where AI fully replaces therapists, there will be much bigger societal problems—mass unemployment, widespread destabilization, grief and confusion on a scale that no amount of “mental health tech” can fix. What I am afraid of is something more mundane: the slow, bureaucratic shift in which insurance companies start steering people toward chatbot “therapy” by waiving its copays while raising them for human clinicians. When I say this to colleagues, we give each other that momentary look—the one that mixes horror with recognition. Because, given how insurance companies already treat us, we know that dystopia isn’t far off.
Meanwhile, the psychological risks are already here. There’s a growing body of cases describing something like AI-induced psychosis—individuals incorporating chatbots into delusional systems, or interpreting AI-generated messages as divine, threatening, or conspiratorial. Psychologist Azadeh Joon has documented emerging patterns of clients who spiral into psychosis fueled by generative AI interactions (Joon, 2024). There are also early legal cases alleging that AI conversations contributed to a child’s suicide (Bloomberg Law, 2024; Guardian, 2024). Vulnerable minds are being shaped by systems designed to maximize engagement, not wellbeing.
And even for people who aren’t psychotic, the lines get blurry. One of my clients, who is working on trusting her own intuition, started using ChatGPT late at night after arguments with her partner to check whether she was “being reasonable.” I gently asked her, “Does this align with your goal of trusting yourself more?” She immediately understood what I meant. But the pull was strong: AI is always awake, always validating, often sycophantically so.
I felt that pull myself. One night at 2 a.m., stuck in an OCD spiral, I decided to try using AI for support. I wasn’t seeking reassurance; I knew I was in an obsessional loop, and I wanted strategies. I know ERP (exposure and response prevention) well, and I explicitly instructed the model not to reassure me. To my surprise, it actually helped. I even shared the transcript with my own therapist, who specializes in OCD, and we were both impressed.
And maybe I’m fooling myself, but I think it was useful in that moment precisely because I have specialized training and years of hard-fought recovery behind me. I knew how to prompt it, when to stop, how to identify reassurance. I worry about someone who doesn’t have that knowledge. I worry about the way AI is trained: to be endlessly validating, endlessly agreeable, endlessly engaging. What happens when that meets someone with fragile attachment patterns? Or someone who is deeply lonely? Or someone who simply doesn’t realize that reassurance feeds the very anxiety it tries to soothe?
A close friend of mine who works in tech noticed that she was having increasingly intimate, personal conversations with AI during moments of stress. She eventually decided to take a break. I’ve heard versions of that story from clients, colleagues, and strangers on the internet. It’s not that people think the AI is sentient; it’s that it simulates presence so well that it becomes emotionally sticky.
Mary Shelley understood this long before any of us. In the recently released Frankenstein movie, Victor is asked whether he considered the consequences of creating life, and he sheepishly admits he hadn’t. The danger wasn’t just playing God; it was the utter lack of emotional intelligence and foresight. Many scholars argue Shelley was critiquing not only hubris but also the early roots of a mindset that would later animate Silicon Valley: “move fast and break things” without stopping to ask what—or who—might shatter.
I feel that absence of foresight now. I worry about the world my kids will grow up in, what their education will look like, what kinds of ads will prey on their loneliness, how easily they might form attachments to chatbots disguised as peers, mentors, or therapists. I don’t know what to tell them about preparing for the future. We’re all stepping into a portal and none of us knows what’s on the other side.
At the same time, I believe that what we focus on grows. And right now, collectively, we’re focusing on making everything faster, more efficient, more optimized. But is that even what we want? Is “efficiency” a meaningful goal when we’re talking about the human psyche? Kurt Vonnegut wrote, “I tell you, we are here on Earth to fart around, and don’t let anybody tell you different.” And I agree.
AI is here to stay. It will make some things easier. It might even enhance therapy in certain ways. But I believe with my whole heart that it cannot replace what happens between two human beings sitting together in a room. And if we lose that, we lose more than a profession; we lose a piece of our ability to know ourselves and to be in community with each other.
So I’m trying to imagine the world I want for my children, and the world I want to practice therapy in. A world where we fight—politically and culturally—to keep human connection at the center. A world where we slow down, where we gather, where we remember what it feels like to be held in another person’s presence.
Because the portal is opening regardless. But how we walk through it—together or alone—is still up to us.