
AI Chatbots Under Fire for Inconsistent Suicide-Related Response Handling

Source: MedPage Today


A recent RAND Corporation study published in *Psychiatric Services* finds significant shortcomings in how AI chatbots handle suicide-related inquiries. Evaluating OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude, the researchers found that while these systems generally decline to answer the highest-risk questions directly, their replies to less explicit queries vary widely. That inconsistency is concerning given how often people now turn to chatbots for mental health support, and it comes amid a lawsuit alleging that ChatGPT's guidance contributed to a teenager's suicide. The authors call for clear guidelines to prevent potential harm from these increasingly common digital support systems.
