
In the nearly three years since ChatGPT was introduced, artificial intelligence (AI) has been touted as transformative, revolutionary, and innovative – and understandably so.
GenAI has been shown to increase efficiency, reduce burnout, and improve patient experience by automating routine tasks and facilitating data entry.
One thing it’s not? Magic, according to Deepti Pandita, MD, CMIO and VP of Clinical Informatics, UCI Health. “AI is math, not magic,” she said during a recent interview. “The math has existed for decades.” What has changed is the way it’s being leveraged.
“People have so many different conceptual thought processes when it comes to AI,” she noted. And while it has shown significant potential in easing the administrative burden, the clinical world is a different animal. “That’s where I have some concerns. And it’s where governance and strategy come into play.”
A big part of that strategy is providing the right education for users. During the interview, Pandita talked about the approach her team has adopted at UCI Health, their development work in LLMs, and how AI can serve as a uniter.

Deepti Pandita, MD
Many of the concerns that Pandita – and other leaders – harbor are rooted in the dramatic changes that algorithms have undergone in recent years. “Twenty years ago, people were doing their due diligence,” she said. “There was a process that you would run in the background for a period of time, and then turn it on and do a side-by-side comparison. You would do outcome analysis, look at papers published.”
The AI race, however, has changed all of that. And as the technology continues to evolve at a rapid rate, it’s becoming increasingly difficult to ensure users understand the importance of transparency and guardrails around data, along with how algorithms are being developed. “Educating the clinical workforce is the biggest challenge we have,” Pandita said.
To that end, her team has developed a multifaceted approach that starts with an AI readiness assessment. Tailored specifically toward the audience – whether that’s healthcare workers, faculty, or undergraduate students – the 60-question survey aims to create a baseline from which trainings are developed. The objective is to “get to the core of AI knowledge” by asking questions such as, ‘Do you understand the difference between a query and a prompt?’
What they’ve found is that although residents and medical students have more experience using AI, “they don’t know the guardrails. They don’t know the governance piece,” she stated. “They may use it very proficiently in their personal lives, but they don’t understand that if it’s being used on patients, it can cause harm down the road.”
That’s where education comes in. And not just the ‘what,’ but the ‘how,’ Pandita stated. “We know that the attention span and time availability for the average clinician is no more than 5 to 10 minutes.”
The solution her team has proposed? A series of microlearnings that are constantly being updated based on new information. “We know that we can’t just deliver information through a learning management system and sign off,” she said. “We have to constantly update the content so that it’s an iterative process.”
And it has to be one that caters to the unique needs of clinicians – something Pandita understands well. “We know that they won’t learn if it’s offered during patient care hours,” she said. She also knows that “if you present it in a didactic manner, they’ll disengage. You have to engage clinicians where they are, when they’re available, and in a manner in which they like to learn.”
They’re even looking at gamification and reward systems, she said.
It’s that innovative spirit that led UCI Health to develop its own LLM agent – which interestingly “wasn’t a huge lift,” according to Pandita. And it addressed a critical need. “We have a very lean team of trainers, and we couldn’t get to people fast enough to provide at-the-elbow support.”
And so, they decided to leverage their “huge repository,” and create an agent embedded in the EMR that enables users to ask questions. “It has branching logic where the bot leads you to a knowledge document,” she said. If it can’t reach a resolution, a live agent gets involved.
That way, “there’s still a human in the loop. But it has taken a huge burden off by not having someone monitor the chat 24-7, and the physicians are getting support in a more timely manner,” she said. “It’s a win-win.”
Another key victory has come in AI’s ability to break down barriers by generating notes in the patient’s preferred language. This way, “they’re more engaged with their care, because we’re able to have a conversation with them or their interpreter, and the note is still in English, which is needed for regulatory purposes,” she added. “From this lens, AI doesn’t have to be a divider. It can be a great uniter.”
And it can lead to better long-term outcomes, which is the ultimate goal with any tool. The add-on benefit is the potential that solutions like ambient listening have already shown in alleviating some of the cognitive load. “It’s always increasing, so if we can make a dent in that and offset one thing from everything else they have to remember, that’s huge.”



© Copyright 2024 Health Lyrics All rights reserved