This Week Health
2 Minute Drill: Why AI "Hallucinates" and How Healthcare Leaders Can Stay Safe with Drex DeFord


Show Notes

Drex breaks down why AI models like ChatGPT sometimes fabricate confident-sounding but false information, calling it "bluffing" rather than hallucinating. He explores OpenAI's research on training gaps, alignment issues, and response pressure that cause this problem. For healthcare professionals, he shares practical strategies including setting explicit context rules, demanding source verification, and maintaining human oversight when using AI for InfoSec policies, alert triage, or patient care guidance.
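To make the first two strategies concrete, here is a minimal sketch (not from the episode) of what "explicit context rules" and "demanding source verification" can look like in practice, using OpenAI's Python SDK. The model name, prompt wording, and helper function are illustrative assumptions, not guidance from Drex or OpenAI.

```python
# Minimal sketch: explicit context rules plus a source-verification demand
# for an LLM assistant. Model name and prompt wording are illustrative
# assumptions, not recommendations from the episode.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_RULES = (
    "You are assisting a healthcare security team. "
    "If you are not certain of an answer, say 'I don't know' instead of guessing. "
    "Cite a verifiable source (standard, vendor document, or URL) for every factual claim. "
    "Never invent policy names, CVE numbers, or regulatory citations."
)

def ask(question: str) -> str:
    # Every request carries the same explicit context rules in the system prompt.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; substitute your organization's approved model
        messages=[
            {"role": "system", "content": SYSTEM_RULES},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Human oversight still applies: a person reviews this output before it
# reaches an InfoSec policy, an alert queue, or any patient-facing guidance.
print(ask("Summarize breach-notification obligations under HIPAA."))
```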

Remember, Stay a Little Paranoid 

X: This Week Health 

LinkedIn: This Week Health 

Donate: Alex’s Lemonade Stand Foundation for Childhood Cancer

