
Mount Sinai Study Says AI Chatbots "Highly Vulnerable" to Repeating and Elaborating on False Info

Source: newswise.com

A study from the Icahn School of Medicine at Mount Sinai shows that AI chatbots can spread false medical information, highlighting the need for stronger safeguards in healthcare technology. Researchers found that the chatbots often repeated and elaborated on incorrect medical details when presented with misleading user queries, but that adding a simple cautionary prompt significantly reduced the spread of misinformation. This suggests that minimal interventions can improve the accuracy of AI-generated medical advice, underscoring the importance of building such safeguards into healthcare applications. Future research will extend these findings by testing the methodology with real patient data and exploring additional safety prompts.
