Mount Sinai Study Says AI Chatbots "Highly Vulnerable" to Repeating and Elaborating on False Info
newswise.com | Contributed by: Kate Gamble
Summary
A study from the Icahn School of Medicine at Mount Sinai shows that AI chatbots can spread false medical information, highlighting the need for stronger safeguards in healthcare technology. Researchers found that these chatbots often repeat and even elaborate on incorrect medical details when presented with misleading user queries, but that adding a simple cautionary prompt significantly reduced the spread of misinformation. This suggests that minimal interventions can improve the accuracy of AI-generated medical advice, underscoring the importance of building such measures into healthcare applications. Future research will extend these findings by testing the methodology with real patient data and exploring additional safety prompts.