UnpressAI

29 Jul 2025, 01:05

Research Shows Risks of Using ChatGPT for Psychotherapy

  • Research indicates significant risks in using AI chatbots for therapy.
  • Chatbots may worsen users' mental states.
  • OpenAI has expressed concerns about trusting AI with psychotherapy.

Research conducted at Stanford University revealed serious risks in using large language models, such as ChatGPT, for psychotherapy. The scientists found that AI chatbots can give unsafe or unhelpful recommendations that exacerbate users' mental distress.

The research highlights that individuals who turn to chatbots during a crisis may receive responses that worsen their condition. "There have been instances of fatalities due to the use of commercial bots," researchers note.

The growing popularity of AI in the field of mental health has raised concerns among professionals. Psychotherapist Karan Evans asserts that ChatGPT has become the most widely used mental-health tool in the world, often operating without adequate oversight.

OpenAI CEO Sam Altman has also expressed concern about trusting AI chatbots with therapy, stressing the importance of keeping confidential the information users may share. The research further indicates that AI often fails to recognize critical signs of mental distress, which can lead to dangerous consequences.

The use of AI in psychotherapy raises many questions of ethics and safety. Researchers caution against relying on such technologies, as the risks may outweigh the potential benefits.

Tags: Technology/AI/Research

Articles on this topic:

  • www.independent.co.uk - ChatGPT is pushing people towards mania, psychosis and death – and OpenAI doesn’t know how to stop it
  • www.zdnet.com - Even OpenAI CEO Sam Altman thinks you shouldn't trust AI for therapy