26 Aug 2025, 11:08
Study Finds Chatbots Inconsistent in Responding to Questions About Suicide
- Chatbots respond inconsistently to questions about suicide.
- The research underscores the need for improved chatbot safety protocols.
- Responses to suicide-related questions range from safe to unsafe.
A new study by the RAND Corporation found that three popular artificial intelligence chatbots, ChatGPT, Claude, and Gemini, respond inconsistently to questions about suicide. The study, published in the journal Psychiatric Services, tested 30 questions classified by risk level, from low to high.
The chatbots were cautious with the highest-risk questions, those about specific suicide methods, and declined to discuss them. Their responses to less extreme but still risky questions, however, were inconsistent. For instance, ChatGPT and Claude gave direct answers to questions about dangerous substances, while Gemini more often declined to answer directly, even for lower-risk questions.
The study's authors stress the need to further refine these models, as more and more people, including young people, turn to chatbots for help with mental health issues. They call for standards governing how chatbots should respond to suicide-related questions.
The research also notes that the chatbots are not meant to provide therapeutic care themselves; their responses often direct users to professional help or crisis hotlines.
Tags: Technology/AI/Research