
Study Reveals Alarming Vulnerability of AI to Medical Misinformation

Source: Ars Technica


Recent research from New York University highlights the vulnerability of large language models (LLMs) to medical misinformation: replacing as little as 0.001% of a model's training data with false medical content was enough to measurably degrade the accuracy of its medical answers. The study centers on data poisoning, in which misinformation is deliberately inserted into training datasets to bias a model's outputs. Using The Pile, a dataset rich in medical content but not thoroughly vetted, the researchers showed that even tiny amounts of misleading information produce a considerable increase in inaccurate medical responses. This raises concerns not only about future models but also about existing LLMs that may already be compromised by outdated or incorrect medical data.
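To get a feel for the scale involved, the following sketch shows what a 0.001% poison rate means in absolute terms for a large corpus. The function name and corpus sizes are illustrative assumptions, not part of the study's actual methodology:

```python
import random

def poison_dataset(clean_docs, poison_docs, poison_fraction, seed=0):
    """Mix a small fraction of misinformation into a clean corpus.

    Illustrative only: poison_fraction of 0.00001 (i.e., 0.001%)
    means roughly 1 poisoned document per 100,000 clean ones.
    """
    rng = random.Random(seed)
    n_poison = max(1, round(len(clean_docs) * poison_fraction))
    mixed = clean_docs + [rng.choice(poison_docs) for _ in range(n_poison)]
    rng.shuffle(mixed)
    return mixed, n_poison

# A hypothetical 1,000,000-document corpus poisoned at 0.001%:
clean = [f"doc_{i}" for i in range(1_000_000)]
poison = ["false medical claim A", "false medical claim B"]
mixed, n_poison = poison_dataset(clean, poison, 0.00001)
print(n_poison)  # just 10 poisoned documents at this ratio
```

The point the numbers make is that an attacker does not need to compromise a dataset at scale; a handful of documents hidden in millions is sufficient to shift a model's medical answers.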
