This Week Health
February 26, 2025

AI in Healthcare: Researchers Uncover Hallucination Risks in Language Models

Mayo Clinic Platform | Contributed by: Kate Gamble
Summary
Researchers are investigating the reliability of large language models (LLMs) in healthcare because of their propensity to generate inaccurate or misleading information, commonly referred to as "hallucinations." A study published in *Nature* examined a range of LLMs and found that models such as Technology Innovation Institute's Falcon 7B-instruct and Google's Gemini 1.1-2B-it had hallucination rates of nearly 30%, while OpenAI's ChatGPT-4 performed better. Diagnostic capabilities also varied among models: some handled diagnostic puzzles effectively, while others showed significant weaknesses, underscoring the need for careful use of LLMs in clinical settings. Specialized models such as PhenoBrain are being developed to improve the diagnosis of rare diseases.
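To make the "nearly 30%" figure concrete, here is a minimal sketch of how a hallucination rate can be computed once human reviewers have labeled a model's answers; the data fields, model names, and labeling scheme below are illustrative assumptions, not details from the study.

```python
# Minimal sketch (not from the study): computing a per-model hallucination rate
# from human-labeled responses. Field names and model identifiers are
# hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class LabeledResponse:
    model: str               # e.g. "falcon-7b-instruct" (placeholder name)
    prompt: str
    response: str
    is_hallucination: bool   # human reviewer's judgment of fabricated content

def hallucination_rate(responses: list[LabeledResponse], model: str) -> float:
    """Return the fraction of a model's responses flagged as hallucinations."""
    model_responses = [r for r in responses if r.model == model]
    if not model_responses:
        return 0.0
    flagged = sum(r.is_hallucination for r in model_responses)
    return flagged / len(model_responses)

# A result around 0.30 would correspond to the "nearly 30%" rate reported
# for the weaker models in the summary above.
```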
