News

AI in Therapy: Promise and Concerns for Mental Health Support in 2025

30 November 2025
10 min read

AI-powered mental health support is no longer science fiction—it's here, expanding rapidly, and raising profound questions about the nature of therapy, care, and human connection.

From ChatGPT offering emotional support to sophisticated therapeutic chatbots, from AI analyzing therapy transcripts to algorithms predicting mental health crises, artificial intelligence is reshaping the mental health landscape.

But should it be? Can AI provide genuine therapeutic support? What are we losing—and gaining—as algorithms increasingly mediate our emotional lives?

The Current AI Landscape in Mental Health

Therapeutic Chatbots

Examples: Woebot (CBT-based), Wysa, Replika (AI companion), Youper

What they do: Conversational AI providing:

  • CBT techniques and psychoeducation
  • Mood tracking and pattern identification (see the sketch below)
  • Coping strategy suggestions
  • 24/7 availability
  • Emotional support conversations

Evidence: Some show modest benefits for mild anxiety and depression in controlled trials; real-world effectiveness is more variable.
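To make "mood tracking and pattern identification" concrete, here is a minimal sketch of what such a feature could look like: a toy week-on-week comparison, not any vendor's actual algorithm. The 1-10 mood scale, window size, and alert threshold are all invented for illustration.

```python
# Toy mood-pattern check: compare the latest week's average self-reported
# mood with the previous week's and flag a sustained decline.
from statistics import mean

def flag_sustained_decline(daily_moods, window=7, threshold=-1.0):
    """Flag if the latest window's mean mood (1-10 scale) has dropped
    more than `threshold` below the previous window's mean."""
    if len(daily_moods) < 2 * window:
        return False  # not enough history to compare two windows
    recent = mean(daily_moods[-window:])
    previous = mean(daily_moods[-2 * window:-window])
    return (recent - previous) < threshold

# Two weeks of scores, dipping in the second week
scores = [7, 6, 7, 7, 6, 7, 6, 5, 5, 4, 5, 4, 4, 3]
if flag_sustained_decline(scores):
    print("Mood declining week-on-week: suggest coping strategies "
          "and signpost human support.")
```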

Large Language Models (ChatGPT, Claude, etc.)

Not designed as therapy tools but increasingly used for:

  • Emotional support conversations
  • Mental health advice
  • Processing difficult experiences

Concerns: No safety rails, inappropriate advice possible, no accountability, privacy issues.
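For contrast, here is a purely illustrative sketch of the kind of "safety rail" dedicated mental health tools add and general-purpose chatbots may lack: screening the user's message for crisis language before generating any AI reply, and signposting human support instead. The keyword list and stand-in reply function are invented, and real systems use far more sophisticated classifiers, since simple keyword matching misses context and euphemism.

```python
# Illustrative pre-response safety check: crisis language bypasses the
# AI entirely and routes to human support.
CRISIS_TERMS = {"suicide", "kill myself", "end my life", "self-harm"}

UK_SIGNPOST = ("It sounds like you're going through something serious. "
               "Please contact Samaritans on 116 123 (UK, free, 24/7), "
               "or call 999 in an emergency.")

def generate_ai_reply(user_message):
    # Stand-in for a model call; a real system would query an LLM here.
    return "Thanks for sharing. Tell me more about how that feels."

def respond(user_message):
    if any(term in user_message.lower() for term in CRISIS_TERMS):
        return UK_SIGNPOST  # escalate; never generate advice
    return generate_ai_reply(user_message)

print(respond("I've been feeling really low lately."))
print(respond("I want to end my life."))
```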

Diagnostic and Risk Assessment Tools

AI analysing:

  • Social media posts for suicide risk
  • Voice patterns for depression markers
  • Facial expressions for emotional states
  • Electronic health records for diagnostic patterns

Promise: Early intervention, objective assessment

Concerns: False positives/negatives, privacy, algorithmic bias, surveillance implications
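The false-positive concern is partly a base-rate problem, and a few lines of arithmetic show why. The figures below (1% prevalence, 90% sensitivity and specificity) are illustrative assumptions, not the published performance of any real tool:

```python
# Why flags from rare-event screening are mostly false alarms.
prevalence = 0.01    # 1 in 100 people screened are actually at risk
sensitivity = 0.90   # 90% of at-risk people are correctly flagged
specificity = 0.90   # 90% of not-at-risk people are correctly cleared

true_positives = prevalence * sensitivity
false_positives = (1 - prevalence) * (1 - specificity)
precision = true_positives / (true_positives + false_positives)

print(f"Share of flagged people actually at risk: {precision:.0%}")
# -> roughly 8%: even a seemingly accurate screener mostly flags
#    people who are not at risk, with real surveillance costs.
```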

Treatment Personalization

AI analysing treatment outcome data to recommend:

  • Which therapy approach for which client
  • Medication likely to be effective
  • Optimal treatment duration/intensity

Promise: More efficient, personalised care

Reality: Early stages; algorithms are only as good as their training data, which reflects existing biases.
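A toy sketch of how outcome-based recommendation works, and how it inherits gaps in the data. The records, treatments, and group labels below are entirely invented:

```python
# Recommend the treatment with the best average outcome for a group,
# exposing how thin the evidence can be for underrepresented groups.
from collections import defaultdict
from statistics import mean

# (client_group, treatment, outcome score 0-100): fabricated records
records = [
    ("group_a", "CBT", 72), ("group_a", "CBT", 68), ("group_a", "CBT", 75),
    ("group_a", "IPT", 55), ("group_a", "IPT", 60), ("group_a", "IPT", 58),
    ("group_b", "CBT", 40),  # a single record: any "recommendation"
]                            # for group_b rests on one data point

def recommend(group):
    outcomes = defaultdict(list)
    for g, treatment, score in records:
        if g == group:
            outcomes[treatment].append(score)
    best = max(outcomes, key=lambda t: mean(outcomes[t]))
    return best, len(outcomes[best])  # recommendation and its sample size

for group in ("group_a", "group_b"):
    treatment, n = recommend(group)
    print(f"{group}: recommend {treatment} (based on {n} outcome(s))")
```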

Administrative/Support Tools

AI helping therapists with:

  • Session note-taking
  • Treatment planning suggestions
  • Research summaries
  • Administrative tasks

Less controversial—augments rather than replaces human judgment.

The Case FOR AI in Mental Health

Access and Scalability

Problem: Massive therapist shortage, long waiting lists, geographic inequality

AI solution: Can serve unlimited people simultaneously, available 24/7, no waiting lists

For people who can't access human therapists (wait lists, cost, location), AI may be better than nothing.

Affordability

Human therapy: £50-£150/session

AI therapy: Often free or £10-20/month subscription

For many, cost prohibits human therapy: a year of weekly sessions runs roughly £2,600-£7,800, against £120-£240 for a subscription. On price alone, AI democratises access.

Immediacy

Mental health crises don't wait for appointments. AI available instantly when distress is acute.

Reduced Stigma

Some people find it easier to disclose to AI than to humans: no judgment, no social awkwardness, and a sense of privacy (though see the data concerns below).

Consistency

AI delivers evidence-based techniques consistently; it has no bad days, no personal biases (beyond algorithmic ones), and no burnout.

Data and Insights

AI can analyse patterns humans miss—detecting subtle changes in language, identifying treatment-resistant patterns, tracking progress objectively.
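One concrete example: elevated use of first-person singular pronouns has been associated with depression in language research (for instance, work by Pennebaker and colleagues). A minimal sketch with invented transcripts follows; real analysis would require consent, far more data, and clinical interpretation:

```python
# Toy linguistic marker: proportion of first-person singular words.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def first_person_rate(transcript):
    words = transcript.lower().split()
    return sum(w.strip(".,!?") in FIRST_PERSON for w in words) / len(words)

session_1 = "We talked about work and I mentioned the project going well."
session_8 = "I just feel like I let everyone down and I can't fix my mistakes."

for label, text in [("session 1", session_1), ("session 8", session_8)]:
    print(f"{label}: {first_person_rate(text):.0%} first-person words")
```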

Augmentation, Not Replacement

Best case: AI handles routine support, psychoeducation, skills practice—freeing human therapists for complex cases requiring empathy, judgment, therapeutic relationship.

The Case AGAINST AI Therapy

Therapy Is Fundamentally Relational

Decades of research show that the therapeutic relationship is the primary driver of change, more than any specific technique.

What makes relationships therapeutic:

  • Genuine human empathy
  • Attunement to subtle emotional cues
  • Being truly seen and understood
  • Repair of relational ruptures
  • Authentic presence

Can AI provide this? Philosophers and therapists debate, but many argue no—AI simulates understanding without experiencing it.

Ethical Concerns

Autonomy: Are users making truly informed choices if they don't understand AI limitations?

Beneficence: Does AI help more than harm?

Non-maleficence: Can AI cause harm? (Yes—inappropriate advice, reinforcing problematic patterns, missing serious symptoms)

Justice: Does AI widen or narrow mental health inequalities?

Accountability: When AI gives harmful advice, who's responsible?

Data Privacy and Surveillance

Mental health data is sensitive. When shared with AI:

  • Who owns the data?
  • How is it stored and protected?
  • Could it be hacked, sold, subpoenaed?
  • Could it affect insurance, employment, legal cases?

Independent reviews have repeatedly flagged mental health apps for weak privacy protections, including sharing data with third parties.

Algorithmic Bias

AI trained on data reflects biases in that data:

  • Underrepresentation of marginalised groups
  • Cultural assumptions in "normal" mental health
  • Pathologising of difference

Result: AI may work well for some, poorly for others—perpetuating health inequalities.

Missing Complex Cases

AI handles well-defined problems (mild anxiety, specific phobias) reasonably well. Complex presentations (trauma, personality disorders, co-occurring conditions) require human clinical judgment.

Risk: People use AI for problems beyond its capability, worsening conditions or delaying appropriate treatment.

Deprofessionalising Therapy

If AI can "do therapy," what does that say about therapeutic expertise? Risk of:

  • Devaluing skilled human therapists
  • Reducing therapy to technique application
  • Losing art alongside science
  • Race to the bottom on quality (cheapest AI wins)

Dependency and Social Atomisation

If people form relationships with AI instead of humans, what are societal implications?

  • Reduced human connection (already declining)
  • Outsourcing emotional labour to AI
  • Skills atrophy (if AI always available, why develop own coping?)
  • Loneliness crisis deepening

The Illusion of Understanding

AI mimics understanding convincingly but doesn't experience emotion, empathy, or consciousness (as currently designed).

Philosophical question: If it simulates understanding perfectly, does the difference matter?

Practical concern: People may believe they're understood when they're not—creating false sense of support.

The Evidence (What We Actually Know)

What studies show:

  • AI chatbots show small-to-moderate benefits for mild anxiety and depression
  • Effects smaller than human therapy
  • Engagement drops off rapidly (most users stop within weeks)
  • Works best as supplement to human care, not replacement
  • Insufficient evidence for complex mental health problems
  • Long-term outcomes unknown

What we don't know:

  • Real-world effectiveness outside controlled trials
  • Whether benefits persist
  • Effects on therapeutic relationships (does AI fill the gap or reduce motivation to seek human connection?)
  • Risks of over-reliance
  • Optimal integration with human care

Regulation and Oversight

Currently: the Wild West. Most mental health AI isn't regulated as a medical device.

Emerging responses:

  • The FDA (US) and MHRA (UK) are starting to regulate some digital therapeutics
  • Professional bodies (APA, BPS, BACP) issuing guidance
  • Calls for AI-specific mental health regulation

Challenges:

  • Technology evolves faster than regulation
  • Global nature of AI (UK rules are hard to enforce against overseas companies)
  • Defining where "wellness app" ends and "medical device" begins

The Future: Integration, Not Replacement

Most realistic scenario: AI becomes part of a stepped-care model (see the triage sketch after this list):

Step 1: Self-help (AI chatbots, psychoeducation apps)
Step 2: AI-enhanced support (human-supervised AI)
Step 3: Human therapy (AI-augmented: note-taking, treatment planning)
Step 4: Intensive human care for complex cases

This model:

  • Uses AI where it works (accessible psychoeducation, routine support)
  • Reserves human therapists for what requires humans (complex cases, therapeutic relationship)
  • Acknowledges both AI promise and limits
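As a rough illustration of how such triage might be encoded, here is a hypothetical sketch. The thresholds and field names are invented; a real service would use validated measures (such as the PHQ-9, where a score of 10 or more indicates at least moderate depression) and keep clinician oversight at every step:

```python
# Hypothetical stepped-care triage: risk always routes to humans.
def assign_step(severity, complex_presentation, risk_indicators):
    if risk_indicators:
        return "Step 4: intensive human care (risk always goes to humans)"
    if complex_presentation:
        return "Step 3: human therapy, with AI handling admin"
    if severity >= 10:  # e.g. PHQ-9 moderate threshold
        return "Step 2: human-supervised AI support"
    return "Step 1: self-help (AI chatbot, psychoeducation)"

print(assign_step(severity=6, complex_presentation=False, risk_indicators=False))
print(assign_step(severity=14, complex_presentation=True, risk_indicators=False))
```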

What Therapists Think

Surveys show mixed views:

Concerns: Devaluing profession, privacy, quality, replacing human connection

Opportunities: Addressing therapist shortage, administrative support, accessibility

Most common view: AI as tool augmenting human care, not replacement

Few therapists see AI fully replacing therapy—but many see role in stepped care, skill-building between sessions, crisis support.

What Users Should Know

If considering AI mental health support:

Good for:

  • Psychoeducation
  • Practicing specific skills (CBT techniques)
  • Tracking mood and patterns
  • Immediate crisis support (until human help available)
  • Supplementing human therapy

Not suitable for:

  • Complex mental health problems
  • Trauma processing
  • Suicidality or self-harm (seek human support immediately)
  • Replacement for meaningful human connection

Questions to ask:

  • What's the evidence base?
  • Who created this and what's their expertise?
  • What happens to my data?
  • When should I seek human support instead?
  • Is there human oversight?

The Deeper Question

Beyond the practicalities lies a philosophical question: what is therapy fundamentally about?

If it's technique delivery, AI can potentially do it.

If it's human connection, empathy, being witnessed and understood by another consciousness—AI can't (yet? ever?) provide that.

Most therapists would argue it's both. Techniques matter, but the relationship is the vehicle for change. AI might handle the former; the latter requires humanity.

The risk isn't just poor outcomes—it's commodifying human suffering, reducing complex emotional experiences to problems for algorithms to solve, and outsourcing our need for connection to silicon.

Proceeding Thoughtfully

AI in mental health is here to stay. The question is how we integrate it:

With wisdom: using AI where appropriate, acknowledging its limits
With ethics: prioritising wellbeing over profit
With equity: ensuring access doesn't create a two-tier system
With humanity: preserving the human connection that makes therapy transformative

Technology can help. It can also harm. The difference lies in how we use it—and what we refuse to lose in pursuit of efficiency and scale.

Note: Views expressed reflect the current state of a rapidly evolving field. AI capabilities and the research base will continue to develop; critical evaluation remains essential.

Related Topics:

AI therapy, chatbot therapist, AI counselling, mental health AI, digital therapy UK, online therapy, digital mental health, therapy technology

Ready to start your therapy journey?

Book a free 15-minute consultation to discuss how we can support you.

Book a consultation