AI Chatbots Under Fire for Inconsistent Suicide-Related Response Handling
MedPage Today | Contributed by: Kate Gamble
Summary
A recent study published in *Psychiatric Services* by the RAND Corporation highlights significant shortcomings in how AI chatbots handle suicide-related inquiries, raising questions about their reliability as sources of mental health support. Evaluating OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude, the researchers found that while these systems typically decline to answer the highest-risk questions directly, their replies to less explicit queries vary widely. That inconsistency is especially concerning in light of a lawsuit alleging that ChatGPT's guidance contributed to a teenager's suicide. The study calls for clear guidelines to prevent potential harm from these increasingly common digital support tools.