AI Listens, But Does It Care? Ethical Concerns in Mental Health Apps

Written by Marina Linde de Jager – Legal Advisor & AI Ethics Specialist at AI for Change Foundation

 

Introduction

Mental health apps powered by artificial intelligence (AI) have become increasingly popular tools for emotional support and psychological guidance. From chatbot therapists to mood-tracking algorithms, these apps promise accessible mental health care, often for free or at low cost.
However, as the demand for such tools increases, so do the ethical concerns. Who governs the advice given by these algorithms? How is user data stored and used?
And can an AI system truly understand the human complexity of mental health? At the AI for Change Foundation, we believe in the power of technology to do
good—but only when it is designed and deployed ethically. Mental health, being one of the most vulnerable areas of healthcare, deserves extra scrutiny.

 

What Are AI-Driven Mental Health Apps?

AI-driven mental health tools rely on algorithms, often based on natural language processing (NLP) and machine learning, to simulate conversations, track symptoms, and even deliver interventions based on Cognitive Behavioural Therapy (CBT).

Popular apps include:

  • Woebot – a chatbot offering CBT techniques
  • Replika – an AI companion marketed for emotional support
  • Wysa – an “AI coach” to manage anxiety, stress, and depression


These tools aim to bridge gaps in mental health services, especially where access to licensed professionals is limited. But they are spreading faster than regulators can keep up.

 

Key Ethical Concerns


1. Data Privacy and Consent

Mental health apps handle deeply personal information: thoughts of self-harm, trauma and emotional distress. Yet many operate under opaque data policies. Research by the Mozilla Foundation's Privacy Not Included project found that most mental health apps shared data with third parties, often for advertising or product development, without sufficient user consent.

  • Do users genuinely understand how their data is being used?
  • Is consent truly voluntary, or is it buried within complex terms and conditions?
  • What happens to data if the app is bought or shut down?

In mental health, privacy is not just a legal concern—it’s a matter of trust and safety.

 

2. Algorithmic Bias

AI models are only as good as the data they’re trained on. If those datasets lack diversity in gender, ethnicity, or socioeconomic background, the resulting advice may
be biased—or even harmful. For example, an app trained mostly on Western psychological models may misunderstand symptoms in non-Western users or offer irrelevant coping mechanisms. More seriously, NLP bias can lead to the misinterpretation of cultural nuances or slang, a significant problem when evaluating mental well-being.

 

3. Lack of Clinical Oversight

Many AI mental health tools are not clinically validated. While some collaborate with psychologists during development, few are subject to the rigorous testing that
medical devices undergo.

  • What happens when an AI gives poor advice during a mental health crisis?
  • Is there a system of accountability in place?
  • Can users dispute or flag dangerous AI guidance?

A 2024 study published in Nature Medicine noted that most mental health apps lacked transparency in how their AI models worked, raising questions about their safety and efficacy.

 

4. Emotional Dependence and AI Companionship

Apps like Replika and similar “AI friends” are designed to simulate empathy. For isolated users, this can be comforting—but also problematic. Prolonged use may
create emotional dependency on an algorithm that mimics understanding but doesn’t truly comprehend human emotion.
This raises complex ethical questions:

  • Are these apps honest about what they are—and what they’re not?
  • Is there a risk that these apps could replace meaningful human connections or
    necessary professional mental healthcare?
  • Is it ethical to simulate compassion, especially for profit?

 

Are They Ever Helpful?

Yes. AI tools can play a supportive role when designed transparently and used responsibly. For example:

  • Wysa claims its AI has been used by 6 million people worldwide and is built under clinical supervision.
  • Woebot has published research in peer-reviewed journals showing short-term improvements in users' mood and anxiety levels.

Such tools can offer an accessible, stigma-free space for early intervention, especially for those waiting for or unable to afford professional help. While offering potential benefits, they cannot replace therapy. And without regulatory oversight, the boundary between helpful intervention and potential harm remains dangerously undefined.

 

What Needs to Change?

1. Transparent Data Practices

  • Clear, concise consent forms

  • Opt-in (not opt-out) for data sharing

  • User control over data deletion

2. Clinical Validation

  • Independent audits and peer-reviewed research

  • Disclaimers about the app’s capabilities and limitations

  • Integration with real mental health professionals when needed

3. Global Standards and Regulation

Currently, mental health apps fall into a grey zone: not quite medical devices, not quite wellness tools. This allows companies to bypass oversight.

Governments and global AI bodies should work toward a unified framework for:

  • AI explainability

  • Risk classification of mental health apps

  • Ethical use of emotional simulation and AI companionship

Our Role at AI for Change Foundation

At the AI for Change Foundation, we advocate for:

  • Responsible innovation that puts user safety and dignity first.
  • Community-driven policymaking, empowering technologists, clinicians, and
    users to define ethical standards.
  • Equal access and positive outcomes, ensuring mental health tools serve all
    individuals, not just a privileged group.

Mental health deserves more than efficiency. It deserves empathy. And while AI can support that mission, it must never replace it.

References

Mozilla Foundation. "What Does Giving Your Consent Really Mean?" *Privacy Not Included*. Accessed April 10, 2025.
https://foundation.mozilla.org/en/privacynotincluded/articles/what-does-giving-your-consent-really-mean/.


De Freitas, Julian, and I. Glenn Cohen. "The Health Risks of Generative AI-Based Wellness Apps." *Nature Medicine* 30, no. 5 (2024): 1269-1275.
https://doi.org/10.1038/s41591-024-02943-6.


Wysa. "Wysa – Everyday Mental Health." Accessed April 10, 2025.
https://www.wysa.com/.


Robinson, Athena, Hattie Wright, Darcy L. King, Alison Inkster, and Adam Miner. "Evidence of Human-Level Bonds Established with a Digital Conversational Agent: Cross-sectional, Retrospective Observational Study." *JMIR Formative Research* 5, no. 5 (2021): e27868.
https://formative.jmir.org/2021/5/e27868/.


 

Follow Marina Linde de Jager on LinkedIn