June 11, 2024: Zafar Chaudry, CDO & CIO at Seattle Children's, speaks with Clara Lin, CMIO and Associate VP at Seattle Children's. They explore whether AI is a real help or just hype by discussing its practical applications and the risks involved. How can AI be integrated without compromising patient safety? What governance structures are essential to mitigate AI-related risks? How do we balance the high costs of AI innovation with the financial constraints of healthcare systems? The episode also highlights the evolution of AI use cases, emphasizing the need for a strong governance framework and the importance of clinician involvement in designing and implementing AI solutions.
Categories: Emerging Technologies & Governance
Donate: Alex's Lemonade Stand Foundation for Childhood Cancer
This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.
Today on Town Hall
How do we maybe get some high-yield use cases that can be rolled out in a short time, so that people know we aren't just rushing to put millions of dollars into a new technology that we think is cool?
My name is Bill Russell. I'm a former CIO for a 16 hospital system and creator of This Week Health.
Where we are dedicated to transforming healthcare, one connection at a time. Our Town Hall show is designed to bring insights from practitioners and leaders on the front lines of healthcare. Today's episode is sponsored by Armis, First Health Advisory, Meditech, Optimum Health IT, and uPerform. Alright, let's jump right into today's episode.
Welcome to Town Hall, a production of This Week Health. I am Dr. Zafar Chaudry, and I'm the Senior Vice President, Chief Digital Information Officer at Seattle Children's. I'm honored to be joined today by my colleague, Dr. Clara Lin, who is the Chief Medical Information Officer at Seattle Children's.
Welcome, Clara, to the show.
Thank you. I'm super excited to be here.
Today, we're speaking with Dr. Lin about AI,
the fun topic in healthcare. Real help or real hype? But before we do that, let's jump into Seattle Children's. Seattle Children's is a pediatric health system. We have 46 sites across Washington, Alaska, Montana, and Idaho.
We're transacting about 3.8 billion in revenue, with a 407-bed hospital at the core. Dr. Lin is an actively practicing physician, and she'll tell us about herself. Within that, I run the IT space at Seattle Children's: about 500 people, everything from infrastructure to digital health to informatics. Our main clinical system is Epic, and our main ERP is Infor. Over to you, Clara. Tell us about your side of the function.
Sure. My name is Clara. I'm a general pediatrician, and I'm actually Med-Peds trained. For listeners who don't know what Med-Peds is, it's a small medical specialty that gets dual certified in internal medicine and pediatrics.
My clinical role is as a primary care physician, and I have practiced as a clinical physician for 10-plus years now. Here at Seattle Children's, I'm a general pediatrician clinically, but I also run our medical informatics team under Zafar's leadership. On my team, I formally have 14 total informaticists that report up to me, each one with a small percentage of their total FTE dedicated to informatics work.
We have a very intelligent and bright group of physicians and APPs that do this work with us. They represent specialties across the space, from anesthesia to surgical specialties to medical subspecialties to primary care, myself included. So it's a very wide range of clinical and technical expertise.
Many of them have very technical backgrounds, all of them are clinically active, and all of them are very savvy with workflows, with what best practices should be, and with communicating what our work in the IT space is to their clinical colleagues. This group of informaticists, informaticians I should say, also represents leadership in the hospital. They're not just providers and informaticists; many of them are also medical directors, division chiefs, and fellowship program directors. So we are in a really good spot, with people who can advocate for our IT work and also advocate for their colleagues in the IT space.
Oh, thanks for sharing.
It sounds like you're super busy, and then the world changed over the last six months when this magical AI came along. So how did that impact you?
I think that's a really good question. I've been asked that question quite a few times, Zafar, and I think it went through a wave, different periods of evolution, just in the last six months, or however many months it's been since ChatGPT became public for free popular use.
I think initially people were wary: what is this? What are we supposed to do with it? And waiting for it to normalize itself, going beyond the "robots are going to take over the world" type of mentality. And then the public started using ChatGPT quite a bit. And of course, our physicians and our clinicians, our providers, all started to wonder what this could do for them.
And so this is very much generative AI. But once people got the gist of what generative AI is, they started to learn more about machine learning. And then they would come to me with some really creative use cases and proposals for what they want to do. And so for a few months, our chief architect, Nigel, and I were getting tons of requests, just nonstop, coming to us about what they want to do with AI.
Most of them were not fully thought out. They have an idea, they have a problem they want to fix, and their question to us is: how do we implement AI for that? And then suddenly everyone sees the flip side of those things. When AI goes wrong, what happens to the patient, or to the use case if it's outside of healthcare?
What is the risk in that? And so that all quieted down really quickly when we started to see that sort of thing in the public press. And so we've stabilized at a place now where the requests and the proposals we're getting are already very well thought out by the requesters and the end users by the time they make it to Nigel and me. And so when we're reviewing them, it's no longer just: here's a problem, here's what I want to fix, can you put a robot on it, can you put an AI on it? Now it's: hey, I think this is how we can fix the problem with generative AI or with machine learning.
How do we implement it? What is the risk to the organization and to our patients? So that's the evolution of the requests I've been seeing from my lens. Zafar, what about yourself? What have you been seeing?
I think, whether it's a clinical use case or a non-clinical use case, there's definitely value in the tools that are coming, and they're coming en masse, and they're coming at speed.
I think what you tend to have to find out is, you know, what is the actual exam question? What problem are we really trying to solve here? And can we do that in an affordable way where there is some measurable value? Because, yeah, gen AI may take the load off physicians and give them a better experience, which may lead to less burnout, but that may not necessarily generate cash savings per se, which many health systems, due to dwindling margins, are looking for.
But at the same time, what I worry about in the gen AI space is: how much advice are you going to take from an AI? And if you're practicing medicine using AI and something goes wrong, who is then responsible? How does that impact your license, versus the AI that you used? And I think we've been debating these things in our organization for a few months now, right?
And the question for you would then be: could you run through what Seattle Children's has done to prepare for this, ramping up to the point where you're now collecting these use cases?
Yeah, so I think, exactly to your point, this is great and we all recognize the potential benefits of it. Some of it is overhyped, some of it real, some of it definitely feasible. But then, what's the risk, right? Exactly to your point, especially when we're dealing with patient populations, especially when we're dealing with humans on the receiving end of the technology, what is the risk to our patients?
What is the risk to our clinicians? What's the risk to the organization? So the way that we are approaching it here at Children's is that we're starting with a very strong, well-defined intake process, with a governance group that is composed of experts in ethics, in equity, in legal and risk.
And in technology, of course, to then help us view every proposal from a do-no-harm perspective. So what happens if
the AI hallucinates, or what happens if the prediction from the machine learning algorithm is inaccurate or faulty? What happens if it's a very rare occurrence? What is the threshold we're willing to accept, based on the use case that's being proposed? And so all of that has to go through that very close examination by the governance board. And what we're proposing right now is for every AI proposal to go through the governance board specifically for this review before we take it to the next level.
Separate from that are technology, architecture, and all of that downstream. So what I would recommend for anyone who is trying to get started on this process is to develop a method with very clear guiding principles and a clear intake process, with a well-defined governance structure.
Can you share some examples of how AI has been used in our organization or has impacted any healthcare outcomes?
Yeah, we can talk about what we are doing in our organization, maybe even what other people are doing out there, and what we are looking to do here. So in our own organization, we've had machine learning algorithms in place for quite some time.
Some of it includes risk stratification for critical care patients: which population, with what types of features in their vital signs or their admitting diagnosis, for example, is at most risk for decompensation when they're here, and how do we prevent that? That's been something we've been looking at. In other places, people are using machine learning and artificial intelligence to look at minor, subtle differences in imaging studies, for example, that can highlight things that could potentially be missed by the human eye, or to risk-stratify patients for their risk of developing cancer, for example, or again, decompensation, based on subtle findings in their imaging studies or subtle things in their vital signs.
So a lot of the quality and safety use cases can be done by AI because of its ability to process tons and tons of data all at once, whereas in the past, with our research studies, as rigorous as we can make them, we could only review so many cases and so many patient charts at once.
And we're also considering things like effectiveness, well-being, wellness, and efficiency types of uses of AI. And so one that we're actively looking at is medication prior authorization, because, especially as a tertiary referral hospital, at Seattle Children's we prescribe tons of specialty medications that often require prior authorization with the insurance payers.
And in those cases, the first step is usually very manual, but also doesn't need a whole lot of medical decision-making. So it's: give me the last note from the prescribing provider; give me the last five medications they've tried in the same category. That doesn't really take a human to do, so to speak, but currently we do have humans, and oftentimes very expensive resources, doing this work manually.
And so can we leverage generative AI to capture all of those requirements from the payer end or from the pharmacy end, put them into our own EHR, gather all of the relevant data, then send it back so that the first step is done? And when they do come back for an appeal, or asking for your medical opinion, then the clinician can get involved.
So it is a fascinating space, right? I think you've outlined some of the challenges and some of the benefits in the different types of use cases we're looking at, and I'm sure there are many organizations out there in the healthcare space that are way ahead of us in terms of maturity in this space, or jumping on the gen AI bandwagon.
I think one of the interesting things for me, though, is one of the challenges: even if you do come up with 10, 15, hundreds of use cases, how are we actually going to fund this level of innovation while we're also struggling with dwindling margins, certainly in specialty care?
Do you see a moment where there'll be a balance between use cases that deliver real cash versus soft benefits?
Like you're saying, when you're layering all of this innovation on top of an already difficult financial landscape, it's hard to convince the check writers of the soft impact, the human impact.
So how we are approaching it, at least: there are studies outlining the cost of physician burnout, the cost of physician attrition, for example, the cost of onboarding new physicians and new nurses, the cost of a rapid turnover rate. If we use those kinds of published figures to help us translate some of these softer human impacts into real dollars, that's how we're approaching it at first.
And also, Zafar, your recommendation to us is: how do we maybe get some high-yield, high-impact use cases that can be rolled out in a short time, so that we can prove to the organization that there is real value in some of these innovative use cases in the short term? That way the organization is more willing to invest in future ones, and we're doing it slowly and thoughtfully, so that people know we aren't just rushing to put millions of dollars into a new technology that we think is cool.
And so, Zafar, I'm curious about your view, because I'm learning this from you too as we navigate this world: what are your thoughts on how we navigate all of this investment in innovation?
Yeah, so I think it's clear that there are definitely ethical considerations and patient safety considerations, and those should be paramount.
And definitely, clinicians should be the people signing off on that. I think there's going to be a huge push on education in general, because if you've used AI tools, you'll know that it's not as simple as it seems. This whole concept of prompt engineering, how you actually write a query to an AI to get the result you're thinking about, or the type of solution you were thinking about, isn't native to people.
I've certainly been playing with this technology for the last six months, and I've realized that I am so much better now at writing those prompts than I was six months ago. It takes a different way of thinking to interact with the machine, right? Versus you and me interacting, where there's that sort of hidden understanding: you understand me, I understand you. You can't do that with this machine, right?
So that's something you'll have to learn in this AI space. But I agree with you: you have to be very careful about the use case you put into place. You have to be upfront about whether you're going to measure the before-and-after scenario around that, and who's going to be responsible for it, because you don't want to then add layers of resources needed to make this happen.
And then you need a forum where you can actually relay back the pros and the cons of the work that you've done with AI, and it needs to be a safe space, especially if you're going to use it for clinical use cases. You may remember the time when you would have morbidity and mortality sessions, right?
And they were always very Chatham House rules: whatever happens here, we talk through this, we improve on it. I think you need a similar methodology around AI, because when you try things and something does go wrong, you want to have an open-forum discussion on how you adapt and how you improve around that.
We've not gotten to that point yet, but I think as we evolve this, we may want to think about, when it comes to a clinical use case, what the forum is. As you mentioned, there's an intake process, but we haven't really figured out, between you and me, whether we should have a forum post-implementation for lessons learned in a safe space.
So I think that's something we'll need to think about. As we wrap this up, Clara, please offer some parting advice for those facing these topics, AI being top of the list for most.
Yeah, as I think about it, this is such an interesting time for us to be in. Zafar, when you were talking about education, I remembered 20-plus years ago, when I was a new medical student learning how to write PICO questions.
I don't know if any listeners remember what those are, or how to query for the right article in PubMed. That reminds me so much of how we're now having to train people on what the most effective prompt is, right? But we're in such a new space right now, right?
Everything is evolving so quickly. It's super exciting. How much of it is hype, like the title of this podcast? We don't quite know yet, and it really all depends on how you approach it and how you select your early use cases. But I do think it's really important, especially in this early stage, to have the clinician at the center of the design of the use case, at the center of the review and approval, and, like you're saying, the review of the outcome metrics in the use cases we're selecting right now. Because ultimately, as a physician myself, what we want to do is improve patient care, whether it's by improving the quality and safety of the care we're delivering or improving the well-being of the clinicians delivering it.
A rested, happy clinician means a well-taken-care-of patient. And so to me, the most important thing is to have a clinician at the center of the design. In addition, as I mentioned earlier, having a governance structure and a very thoughtful method with clear guiding principles is all very important in the early stages.
Yeah, totally agree with you. I'll also tell the audience: as you work through this, make sure you have the right technology partner. Most health systems will not be able to hire a team of prompt engineers, so you're going to need to partner with someone to help you navigate the technology. I will emphasize as well: make sure whatever technology you choose is fully vetted for security.
Make sure that you're not losing PHI, and that our patients remain anonymous and protected in this process, because that's something that would certainly keep me up at night if we were to lose any of this information, and you as well, right? Because you want your patients to be safe. So thank you, Dr. Lin, for discussing AI with us today and whether it's real help or simply hype. It's clear that leveraging AI effectively can lead to significant advancements in patient care and potential operational efficiencies. To our listeners: always embrace challenges and setbacks as opportunities for growth, as they make you stronger and wiser.
Stay tuned for our next Town Hall, a production of This Week Health. Thank you.
Thank you.
Thanks for listening to this week's Town Hall. A big thanks to our hosts and content creators. We really couldn't do it without them. We hope that you're going to share this podcast with a peer or a friend. It's a great chance to discuss and even establish a mentoring relationship along the way.
One way you can support the show is to subscribe and leave us a rating. That would be really appreciated. And a big thanks to our partners, Armis, First Health Advisory, Meditech, Optimum Health IT, and uPerform. Check them out at thisweekhealth.com/partners. Thanks for listening. That's all for now.