Voice AI: Transforming Healthcare and Our Ethics Along with It
Written by Marina Linde de Jager – Legal Advisor & AI Ethics Specialist at AI for Change Foundation
The New Frontier: Voice AI in Healthcare
From administrative monotony to patient companionship, voice AI is emerging as a disruptive force in healthcare. Two notable stories illustrate both its remarkable potential and the ethical tightrope it walks:
1. Infinitus’s “Eva”, deployed by Cencora, automates benefits-verification calls – handling a workload equivalent to more than 100 full-time staff and resolving calls four times faster than human workers.
2. Everfriends, in collaboration with Hume AI, offers empathetic voice companions to older adults and individuals with dementia, adapting responses based on emotional cues and reducing loneliness for 85% of users.
These advancements promise a healthcare system that’s more efficient, accessible, and emotionally supportive. But they also stir profound ethical questions about privacy, trust, depersonalisation, and oversight.
Efficiency Meets Equity and Erosion?
Take Eva: designed to offload laborious calls to insurers, allowing clinicians to focus on patients. Cencora cites a 400% increase in processing speed and significant
administrative savings.
Yet, crucial questions emerge:
• Are patients and payers aware they’re speaking to AI?
• Is consent obtained before personal data enters these systems?
• How are errors handled – and who’s accountable when AI misinterprets information?
Missteps in voice recognition or context may impact patient care – or even reinforce inequitable outcomes if the AI cannot accurately interpret speakers with accents or
speech impairments.
Companionship vs. Replacement
Voice AI is also entering the emotional domain. Everfriends simulates empathetic conversation with seniors and dementia patients. Powered by Hume AI’s emotion-sensing technology, it adjusts its tone, pace, and style to respond appropriately to the user’s mood.
This offers real benefits: studies indicate that 85% of users report reduced loneliness.
But there are risks:
• Can a simulated “friend” be trusted when real human interaction is scarce?
• Does emotional dependence on AI devalue human bonds?
• Is the AI’s emotional understanding accurate, or a superficial mimicry that may eventually frustrate or mislead?
These questions strike at the heart of what caregiving means in an age of intelligent
machines.
Ethical Concerns: Privacy, Transparency & Accountability
Voice AIs are sophisticated data harvesters. Eva logs call content and interaction details; Everfriends captures emotional data from tone and cadence.
Concerns include:
• Informed consent: Are users – or their guardians – aware of how the AI uses their voice and emotional data?
• Data protection: Is sensitive health information securely stored, and who controls it?
• Algorithmic accountability: What happens when a voice assistant makes a mistake – incorrectly processes a request, fails to escalate a tricky situation, or misreads emotional cues?
Without clear boundaries, these tools may inadvertently compromise trust, rights, or health outcomes.
A Framework for Ethical Voice AI in Healthcare
At AI for Change Foundation, we propose early-stage guidance to ensure responsible deployment:
1. Consent by Design
Require clear disclosure (audio or written) that users are interacting with AI and obtain informed consent before use.
2. Human-AI Collaboration, Not Substitution
Design systems to escalate complex, emotional, or uncertain interactions to human providers, maintaining clinician oversight.
3. Emotional Data Protocols
Restrict emotional data collection to clinical or therapeutic use cases. Delete or safely anonymise data that is not essential to care.
4. Bias Sensitivity Testing
Ensure voice AIs perform equitably across demographics – gender, age, accent, neurodiversity – and address any disparities prior to deployment.
5. Transparent Oversight & Audit Trails
Maintain logs of every AI interaction. Deploy routine audits, preferably led by independent third parties, to assess performance and safety.
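Principles 2 and 5 above – escalating uncertain or emotionally charged interactions to a human, and keeping an audit trail of every decision – can be sketched in code. The following is a minimal illustration, not a production design: the thresholds, field names, and scoring inputs are all assumptions chosen for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Interaction:
    transcript: str
    intent_confidence: float  # model's confidence in its interpretation, 0-1 (assumed input)
    distress_score: float     # estimated emotional-distress cue, 0-1 (assumed input)

@dataclass
class AuditEntry:
    timestamp: str
    escalated: bool
    reason: str

def handle(interaction: Interaction, audit_log: list,
           confidence_floor: float = 0.8,    # hypothetical threshold
           distress_ceiling: float = 0.6) -> bool:
    """Decide whether to escalate to a human, and log the decision either way."""
    low_confidence = interaction.intent_confidence < confidence_floor
    high_distress = interaction.distress_score > distress_ceiling
    escalate = low_confidence or high_distress
    reason = ("low confidence" if low_confidence
              else "distress cue" if high_distress
              else "handled by AI")
    # Every interaction is logged, escalated or not, so audits can review both paths.
    audit_log.append(AuditEntry(datetime.now(timezone.utc).isoformat(), escalate, reason))
    return escalate
```

In this sketch, a call the model is unsure about (or one showing distress) is routed to a clinician rather than answered automatically, and the append-only log gives a third-party auditor a complete record of how each interaction was triaged.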
Conclusion: The Path Ahead
Voice AI has the potential to transform healthcare – by slashing admin burdens, reducing clinician burnout, and offering companionship to the vulnerable.
But it must be deployed with care. Without safeguards, voice AI risks undermining consent, replacing genuine care, and reinforcing harms just when humanity
matters most.
At AI for Change Foundation, our vision is clear: voice AI must be both powerful and profoundly human, shaped by transparent values, intentional oversight, and
unwavering respect for dignity and data rights.
If you’re interested in how to pilot voice AI ethically – or how to advocate for sound regulation – let’s co-create the future of healthcare that balances innovation with
humanity.
References
Hume AI. (2024). How EverFriends.ai uses empathic AI for eldercare [Case study]. Hume AI. https://www.hume.ai/blog/case-study-hume-everfriends
Hume AI. (2024, March 27). Hume AI announces $50 million fundraise and empathic voice interface [Press release]. Fitt. https://insider.fitt.co/press-release/hume-ai-announces-50-million-fundraise-and-empathic-voice-interface/
Stone, L. (2025, June 12). How voice AI can slash healthcare clinicians’ workloads and offer companionship for older adults. Business Insider. https://www.businessinsider.com/voice-ai-healthcare-admin-loneliness-companionship-2025-6