Drex breaks down why AI models like ChatGPT sometimes fabricate confident-sounding but false information, calling it "bluffing" rather than "hallucinating." He explores OpenAI's research on the training gaps, alignment issues, and response pressure that drive the problem. For healthcare professionals, he shares practical strategies, including setting explicit context rules, demanding source verification, and maintaining human oversight when using AI for InfoSec policies, alert triage, or patient-care guidance.
Remember, Stay a Little Paranoid
Donate: Alex’s Lemonade Stand Foundation for Childhood Cancer
Questions about the Podcast?
Contact us with any questions, requests, or comments about the show. We love hearing your feedback.
© 2024 Health Lyrics. All rights reserved.