Spencer Dorn isn’t a technophobe – far from it. As Vice Chair and Professor of Medicine at UNC Health (and a practicing gastroenterologist), he has a vested interest in “how technology is reshaping our lives and reshaping medicine.”
What does not interest him is the increasingly prevalent concept of AI, or any technology for that matter, as a panacea for everything that’s wrong with healthcare.
One example? Integrating AI tools within the EMR to improve email management, which has been touted as a game-changer. “The in-basket is not the only reason why doctors are unhappy,” he noted. “There may be many reasons. We need to be careful and not oversimplify problems and suggest tools that will magically fix things immediately. We need to keep our feet on the ground.”

Spencer Dorn, MD
Doing so, however, isn’t easy – particularly for healthcare leaders who are inundated with use cases for tools that can help alleviate burnout and improve patient care. During a recent 229 Podcast with Bill Russell, Dorn spoke candidly about his concerns around advanced analytics, particularly when it comes to care coordination and output reliability, and the need to temper expectations.
During the conversation, Dorn recalled a tense moment in which a patient’s oxygen saturation level dropped significantly during a colonoscopy, prompting him and his anesthesiologist colleague to abort the procedure and quickly pivot, which helped save the patient’s life. The primary takeaway? The efforts put in by frontline workers go far beyond leveraging technology, he said, noting in a LinkedIn post that “it’s foolish to imagine automating their roles away.” While AI can certainly “make knowledge abundant, it cannot yet reliably manage risk or coordinate action under pressure,” especially in clinical situations.
“AI can’t do everything,” Dorn said. “It can unbundle knowledge from experts, but healthcare isn’t just about providing knowledge; it’s about coordinating care. And it’s about accepting risk when things go wrong.”
The ability to “unbundle” and help summarize information is significant and shouldn’t be overlooked. “There’s a reason why AI scribes have proliferated healthcare and pretty much every health system in America now has chosen a scribe to partner with,” he stated. “People often have very long and complicated medical records. And so, if you’re looking for a nugget or a few nuggets, reading through the whole thing to find them can be quite laborious.”
AI-powered platforms like OpenEvidence, for instance, can assist in clinical decision making by aggregating information from trusted sources like NEJM and JAMA and presenting knowledge at the point of care. In that respect, “AI can make physicians’ work easier,” he noted. “We’re already seeing that with a lot of tools.”
On the other hand, having access to so much information can be overwhelming, according to Dorn, who equated it to the alert fatigue that can be brought on by clinical decision support tools. And while it can be beneficial to point out potential “blind spots,” sometimes the information isn’t applicable to the situation at hand.
What it can also do, he said, is “flip the script” so that AI is producing the work and physicians take on more of a reviewer or editor role. “That can create a lot of challenges.”
The other point of caution with AI is the reliability of the output. “It’s not perfect, and so, when you start applying it to high-stakes situations, it’s tough,” Dorn stated. The problem is that the information is right enough of the time, which makes it “very convincing.” In fact, he believes “it would almost be easier if it was consistently wrong, because then you ignore the messages,” or at least supervise them more carefully.
What that does is create a situation in which users don’t know if they should rubber-stamp the information or question it. “That’s real tension,” he said. “We need to think more broadly about how we work and not assume that these tools are going to make us better. In some ways they can, but only if we apply them to the right places in a thoughtful way.”
When it comes to AI scribes, a similar tension exists. Although some physicians have reported reductions in “pajama time,” the data doesn’t necessarily support that, according to Dorn. “If you look at the large studies that have been published, AI scribes have only saved about a minute, or even less, per patient.” And while that can make a dent, it’s not the level of impact he wants to see.
The more significant benefit, he believes, is in the capability to summarize data, particularly for specialists who spend a lot of time preparing for patient visits, especially the initial consult. “I’m more excited about that than anything else, because summarization applies to all healthcare roles, whether you’re a physician, administrator, or any other role,” he said. “I do think we’ll see some improvements in pajama time,” along with a boost in morale.
But does that justify the extremely high valuations of AI scribe vendors? Dorn isn’t quite sold. “I think these tools are amazing, but we have to temper our expectations with all technology in terms of what they can accomplish,” he said, while also avoiding the “polar extremes” that often surface with new innovations. “We need to be careful,” he said, and realize that AI is neither going to save nor sink healthcare on its own.
“These technologies offer some amazing capabilities, and if we use them the right way, I see tremendous potential,” he concluded. “But if we expect too much too soon, we’re setting ourselves up for failure.”



© Copyright 2024 Health Lyrics All rights reserved