ChatGPT Conversations as Courtroom
Evidence: Legal Realities and Ethical
Reflections

Written by Marina Linde de Jager – Legal Advisor & AI Ethics Specialist at AI for Change Foundation

 

Introduction

As generative AI tools like ChatGPT become integrated into everyday workflows, from writing assistance to legal research, therapy simulations, and education, a persistent question emerges: Can your private conversations with ChatGPT be used as evidence in court?
The short answer: yes. Courts have already begun to treat user prompts and chat logs as potentially admissible records. This development raises deep legal and ethical
concerns about privacy, data governance, and the evolving boundary between human and machine communication.

 

Your Prompts Are Not Legally Private

Unlike interactions with attorneys, therapists, or doctors, conversations with AI models like ChatGPT are not covered by legal privilege or confidentiality protections.
OpenAI CEO Sam Altman has explicitly warned users:
“People are talking to ChatGPT like it’s a therapist, lawyer, or priest. I think that’s very screwed up… those conversations can be subpoenaed.” (Sam Altman, 2025, via Instagram)
This sentiment is echoed by legal experts. As noted in a recent piece by Russell McVeagh, legal privilege does not extend to AI interfaces. If a user shares sensitive information in a ChatGPT chat, that conversation may become fair game in discovery or investigation.

 

Court Precedents Are Emerging

In a key development, the U.S. federal court overseeing The New York Times v. OpenAI ordered OpenAI to preserve all user chat logs, including deleted and temporary data, as part of evidence preservation. Despite OpenAI’s objections, the ruling signals a clear judicial stance: AI chat logs are discoverable records.
Additionally, a Czech environmental court rejected the use of ChatGPT outputs as factual evidence, ruling that the model is not a reliable source of truth. However, this did not preclude the chat logs themselves from being examined as digital records of user intent or action.

 

Is This a Global Legal Trend?

The admissibility of AI-generated content as evidence is gaining attention across legal systems. In the United States, courts have begun treating ChatGPT interactions as potentially discoverable and admissible, especially when used to support or refute claims in litigation. In South Africa, judges have criticized legal arguments containing fabricated AI-generated case law and have referred the attorneys involved to professional bodies, suggesting that courts will scrutinize AI-derived material as part of evidentiary review.
Meanwhile, Czech courts have dismissed AI-generated evidence outright, citing concerns over reliability and verifiability. These examples highlight a growing
international debate over whether and how AI-generated material, including ChatGPT prompts and outputs, should be handled as admissible legal evidence.

 

The Ethical Implications

These rulings and practices raise several concerns that intersect with AI for Change Foundation’s mission to promote ethical AI use:
     a. Informed Consent
Most users are unaware that their chats can be stored, reviewed, and potentially used in litigation. This lack of transparency undermines the principle of informed consent, a core pillar of responsible AI use.

     b. Surveillance Creep
If legal authorities, corporations, or governments can access AI conversations retroactively, this could lead to forms of surveillance-by-design, where users are effectively monitored through seemingly benign interactions.

     c. Data Governance and Control
Even if a user deletes a conversation, OpenAI (or other providers) may still retain server-side copies due to internal retention policies or court orders. This raises crucial questions: Who owns the conversation? Can users truly erase their digital footprint?

 

Regulatory Gaps and Recommendations

While some jurisdictions are developing AI-specific data protection laws, global frameworks are still lagging behind technological adoption. The ability to use AI prompts in court sits in a grey area between personal speech and data record.

To uphold ethical standards in the deployment of generative AI, we recommend:
     • Clear Disclosure: AI platforms must clearly inform users that chats are not confidential and may be stored or accessed under legal compulsion.

     • User Data Portability and Deletion Rights: Users should have enforceable rights to delete, export, or restrict access to their chat history.

     • Regulated Evidentiary Use: Lawmakers should define when and how AI-generated content and logs may be admissible in legal settings, ideally with safeguards to prevent misuse or misinterpretation.

     • Differentiated Trust Models: Platforms offering legal or therapeutic simulations should implement stricter privacy policies or explicitly warn users about limitations.

 

Conclusion

As AI tools like ChatGPT become increasingly embedded in decision-making, communication, and personal reflection, the legal system is catching up, and not always in ways that respect privacy or ethics.

At AI for Change Foundation, we believe that the responsible use of AI must include rigorous privacy protections, clear user rights, and ethical boundaries for legal admissibility. Without these safeguards, the promise of AI may be overshadowed by its potential for surveillance, coercion, or harm.

 

References

Altman, S. (2025, July 25). Sam Altman warns there’s no legal confidentiality when using ChatGPT as a therapist. TechCrunch. Retrieved from https://techcrunch.com/2025/07/25/sam-altman-warns-theres-no-legal-confidentiality-when-using-chatgpt-as-a-therapist/

Altman, S. (2025, July 25). Sam Altman says your ChatGPT therapy session might not stay private in a lawsuit. Business Insider. Retrieved from https://www.businessinsider.com/chatgpt-privacy-therapy-sam-altman-openai-lawsuit-2025-7

Times of India. (2025, July 27). ‘I think that’s very screwed up’: OpenAI CEO Sam Altman warns about ChatGPT privacy. The Times of India. Retrieved from https://timesofindia.indiatimes.com/technology/tech-news/i-think-thats-very-screwed-up-openai-ceo-sam-altman-warns-about-chatgpt-privacy/articleshow/122931790.cms

Economic Times. (2025, July 26). Telling secrets to ChatGPT? Using it as a therapist? Your AI chats aren’t legally private, warns Sam Altman. The Economic Times. Retrieved from https://economictimes.indiatimes.com/magazines/panache/telling-secrets-to-chatgpt-using-it-as-a-therapist-your-ai-chats-arent-legally-private-warns-sam-altman/articleshow/122924553.cms

Reuters. (2025, June 6). OpenAI appeals data preservation order in NYT copyright case. Reuters. Retrieved from https://www.reuters.com/business/media-telecom/openai-appeal-new-york-times-suit-demand-asking-not-delete-any-user-chats-2025-06-06/

National Law Review. (2025, July 25). Privacy under pressure: What the NYT v. OpenAI teaches us about data governance. NatLawReview.com. Retrieved from https://natlawreview.com/article/privacy-under-pressure-what-nyt-v-openai-teaches-us-about-data-governance

Cliffe Dekker Hofmeyr. (2025, July 4). Another episode of fabricated citations — real repercussions. Cliffe Dekker Hofmeyr. Retrieved from https://www.cliffedekkerhofmeyr.com/news/publications/2025/Practice/Employment-Law/combined-employment-and-knowledge-management-alert-4-july-Another-episode-of-fabricated-citations-real-repercussions-South-African-courts-show-no-tolerance-for-AI-hallucinated-cases

 

Follow Marina Linde on LinkedIn