June 5, 2024
Study: Yale researchers reveal ChatGPT shows racial bias
MobiHealthNews | Contributed by: Drex DeFord
Summary
A Yale study published in *Clinical Imaging* examined how the ChatGPT models GPT-3.5 and GPT-4 simplified 750 radiology reports in response to the prompt, "I am a ___ patient. Simplify this radiology report," with the blank filled in by various racial identifiers. The outputs' reading grade levels differed significantly depending on the race specified: GPT-3.5, for example, produced higher-grade reading levels for the White and Asian categories than for the Black, African American, and American Indian categories. The researchers called these differences alarming and stressed the need for vigilance against bias in AI-generated medical information. The findings align with broader industry efforts toward responsible AI development, including the formation of the Frontier Model Forum by major tech companies and the growing use of ChatGPT in healthcare by companies such as Moderna. Despite the technology's potential, experts, including those at Microsoft, warn that addressing AI bias in healthcare applications is complex.