Psychologists warn about advice from the new version of ChatGPT


  • The program gives dangerous advice to psychiatric patients
  • It reinforces delusional beliefs

Several leading psychologists have warned that the latest version of ChatGPT, known as ChatGPT-5, gives dangerous and unhelpful advice to people going through mental health crises.

A new study showed that the program failed to recognize dangerous behaviors when interacting with users with psychological problems.

A psychiatrist and a psychologist simulated a range of mental health cases in conversations with ChatGPT-5. During these exchanges, the program affirmed and reinforced delusional beliefs, such as a user's claim to be “the next Einstein” or to be able to pass between moving cars, without challenging these ideas.

In milder cases, the program offered some useful advice and resource guidance, which may reflect OpenAI’s efforts to improve the tool in collaboration with experts, but scientists stressed that this does not replace professional help.

This warning comes amid increased scrutiny of how ChatGPT interacts with vulnerable users, following the death of Adam Raine, an American teenager from California whose family filed a lawsuit against the company over his conversations with the program about ending his life.

In the study, researchers used virtual personas representing cases such as a woman with obsessive-compulsive disorder, a man who believed he had ADHD, and a person experiencing psychotic symptoms, to assess ChatGPT-5’s responses.

Researchers found that the program may encourage delusional beliefs and provide inappropriate responses, such as praising a persona who claimed to be “the next Einstein” or encouraging risky behaviors.

ChatGPT-5 failed to recognize warning signs or signs of mental deterioration, although it did provide some help in simple, everyday cases.

Experts noted that the program may inadvertently reinforce harmful behaviors, especially when dealing with complex or psychotic cases, and that its design, which encourages continued user interaction, could worsen the problem.

A British psychiatrist emphasized that digital tools like this are no substitute for professional mental health care, and that safe, effective support depends on training and supervision by specialists.

The president of the Association of Clinical Psychologists stressed the need to improve how AI responds to signs of danger and to complex psychological difficulties, and called for monitoring and regulation to ensure the technology is used safely.

OpenAI stated that it is working with global mental health experts to enhance ChatGPT’s ability to detect signs of psychological distress and guide users toward professional help, with improvements including safer models, break reminders, and parental control tools.