August 15: Today on TownHall, Matt Sullivan, MD, Chief Medical Information Officer at Advocate Health, speaks with Eric Poon, MD, Chief Health Information Officer at Duke University Health System. What would be the ideal pace for AI integration into healthcare systems, considering both the potential for innovation and the need for safety and ethical considerations? How should we interpret and manage the so-called "black box" problem in AI, in which complex algorithms might not be completely transparent or understandable? How can regulatory bodies play a more proactive role in governing the deployment of AI in healthcare and ensuring its safe and equitable use?
Donate: Alex's Lemonade Stand Foundation for Childhood Cancer
This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.
Today on This Week Health.
Sometimes owners of the models become tiger moms and tiger dads; they really want to promote their little models. Sometimes we put things in too fast without the right stakeholders at the table to think about: okay, the algorithm might have a great AUC curve, but how do you make sure that it's put in at the right place at the right time for the right intended audience?
Welcome to TownHall, a show hosted by leaders on the front lines, with interviews of people making things happen in healthcare with technology. My name is Bill Russell, the creator of This Week Health, a set of channels dedicated to keeping health IT staff current and engaged. For five years we've been making podcasts that amplify great thinking to propel healthcare forward. We want to thank our show partners, MEDITECH and Transcarent, for investing in our mission to develop the next generation of health leaders. Now onto our show.
We'd like to welcome Eric Poon, who is currently the Chief Health Information Officer at Duke Health. Welcome to the show, Eric.
Thank you, Matt. Love to be here.
It's really great talking to you. I heard you speak at Scottsdale when you were on a panel talking about AI. It seems like we've beaten AI to death over the last few months of conversation in the informatics world, but you have a specific niche, which I think is great for people who are listening, around academics, AI governance, and your theories behind that.
So let's talk through just a couple of those things here in the next few minutes, specifically about AI governance and how you formulate your thoughts around how a system does that.
Yeah, well, happy to. I think this is a really exciting area. As much as we think we're done talking about AI, it keeps commanding our attention, and for the right reasons, because the possibilities are quite endless. We've spent the last few years thinking about how predictive modeling would start helping us make better decisions as clinicians, so that we can focus our attention and resources on the patients who are most at risk and be smart about how we take care of them.
And then over the last six months, generative AI has become a household name. Everybody, at least in informatics circles, is talking about ChatGPT and how it's going to change the world. So that's the world we are living in. Maybe this is stating the obvious, but when it comes to AI governance, I feel more strongly than ever that first and foremost it's about doing right by the patient.
As excited as we all should be about the possibilities of AI, we need to make sure that AI used in clinical care and clinical operations is safe, effective, and equitable. Agreed. And what I mean by that is: how do we know that the AI tools are doing what they say they're supposed to be doing? How do we make sure they don't lead to unintended negative consequences, such as clinicians being bombarded with pop-ups, suggestions, and alerts? Alert fatigue is not a new concept, and AI could add to it. How do we make sure that an AI algorithm that works really well on the first day doesn't all of a sudden decide to call everybody low risk just because somebody changed how serum sodium is reported in the lab systems? And more and more, we know that computer algorithms are just algorithms. How do we make sure that we don't build bias into these algorithms and exacerbate the inequities that we already have in healthcare and in society in general? So those are some of the key things that I've been thinking quite a lot about with others at Duke over the last few years.
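Eric's serum sodium example, where an algorithm quietly starts calling everyone low risk after a lab changes how a value is reported, is essentially an input-drift problem. As a purely illustrative sketch, not any system actually in use at Duke, a monitor might compare incoming feature values against the distribution the model was trained on:

```python
# Hypothetical sketch: flag input drift, such as a unit change in how
# serum sodium is reported, by comparing incoming feature values
# against a baseline distribution captured at deployment time.
from statistics import mean, stdev

def drift_alert(baseline, incoming, z_threshold=3.0):
    """Return True when the incoming mean sits far from the baseline
    mean, measured in baseline standard deviations."""
    mu, sigma = mean(baseline), stdev(baseline)
    z = abs(mean(incoming) - mu) / sigma
    return z > z_threshold

# Baseline: serum sodium reported in mmol/L (normal range ~135-145).
baseline = [135, 138, 140, 142, 139, 141, 137, 143, 136, 144]

# Same kind of patients, but the lab now reports on a different scale.
shifted = [13.5, 13.8, 14.0, 14.2, 13.9]

assert drift_alert(baseline, baseline[:5]) is False  # no alarm
assert drift_alert(baseline, shifted) is True        # drift detected
```

A real monitoring pipeline would track many features and use more robust statistics, but the principle is the same: the model that worked on day one only keeps working if someone is watching its inputs.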
So, with these really important conversations that we have to have around the specific things you've just called out, do you think that's going to change the speed of AI deployments in healthcare? And what are your thoughts on whether that's the right thing to do? How do we make it faster? Should we necessarily slow it down?
Yeah, so one way I hear your question is: are we going too fast or too slow with deploying AI in our current environment? You may not like my answer, but as with anything new, we are doing things both too slowly and too fast. Let me explain what I mean by that. I think we are in some ways going too slow because everybody is coming at it from different angles. We have vendors knocking on our doors. We have internal data scientists who want to build their own models with our faculty members. They're all coming from a great place, but because there is so much enthusiasm and this is so new, there aren't a lot of ways to create what I like to call an AI factory: the ability to take the raw inputs of ideas, turn them into models, get them into the hands of clinicians or the other recipients of these tools in predictable ways, measure whether they're effective, and retire those that are not.
One other challenge is that, technologically, everybody has developed their own tools, so even things that look and smell and actually work similarly underneath need different pieces of infrastructure to plug in. It's kind of like being an electricity company that has to support several voltages and several plugs across the ecosystem. And that makes us too slow. But I think we are also at times too fast, because there are times when people feel so strongly about a model. I think at Scottsdale I talked about how sometimes owners of the models become tiger moms and tiger dads: they really want to promote their little models and want them to go to an Ivy League college 18 years down the road. And I think sometimes we put things in too fast without the right stakeholders at the table to think about: okay, the algorithm might have a great AUC curve, wonderful, lots of promise, but how do you make sure that it's put in at the right place at the right time for the right intended audience, so that it will actually influence decision making for the better?
So that's the piece that sometimes worries me: something new and exciting trying to ride the crest of the hype cycle, with people trying to get it in. We don't want to slow that down; we want to create a safe way for folks to experiment and then fail fast.
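For listeners less familiar with the metric Eric mentions: AUC measures only how well a model ranks high-risk patients above low-risk ones, which is exactly why a great AUC says nothing about whether the output reaches the right audience at the right time. A minimal, hypothetical illustration, not tied to any model discussed here:

```python
# Illustrative only: AUC is the probability that a randomly chosen
# positive case is scored above a randomly chosen negative case
# (ties count as half a win) -- the area under the ROC curve.

def auc(labels, scores):
    """Compute AUC from binary labels (0/1) and model risk scores."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.3, 0.4, 0.7, 0.2]
print(auc(labels, scores))  # 1.0 -- perfect ranking on this toy data
```

Even a model with AUC near 1.0 can fail in deployment if its scores are shown to the wrong people, at the wrong point in the workflow, or too late to change a decision, which is the governance gap Eric is describing.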
We'll get back to our show in just a moment. I'm gonna read this just as it is. My team is doing more and more to help me be more efficient and effective, and they wrote this ad for me, and I'm just gonna go ahead and read it the way it is. If you're keen on the intersection of healthcare and technology, you won't want to miss our upcoming webinar, Our AI Journey in Healthcare.
See, keen is not a word that is in my vocabulary, so you know it's written by somebody else. Maybe ChatGPT, who knows? We're diving deep into the revolution that AI is bringing to healthcare. We're going to explore its benefits and tackle the challenges head on. We're gonna go all in, from genomics to radiology, operational efficiency to patient care, and we're doing it live on September 7th at 1 p.m. Eastern time, 10 a.m. Pacific time. So if you are interested in this webinar, we would love to have you sign up. You can put your question in there ahead of time. We take that group of questions, give it to our panelists, and discuss it, and it's going to be a great panel. I don't have them confirmed yet, but I really am excited about the people who I've been talking to about this. So join us as we navigate the future of healthcare. Trust me, you don't want to be left behind. Register now at thisweekhealth.com. Now, back to our show.
And what you're talking about is really just the necessary breakdown and the necessary evaluation, so that we can accelerate the work and do great things. I think all of us want to do the great things, and your developers who turn into the tiger moms really have a passion. You talked about sometimes having to tell them their baby is ugly, which I thought was a great line.
Along those lines, when you get into the model and start looking at these things closely, what are your thoughts on what we've termed in the hype cycle the black box AI, where most people don't really know what's going on? Certainly frontline clinicians probably don't care to, or want to, learn all the subtleties of the science. How do you evaluate that in light of governance, the scientists who come to you and may really have a passion, and also the end product, which is first our clinicians using it and then the effect it has on patients? How do you evaluate that black box?
Yeah, so I'm not sure I'm a fan of the term black box. It has interesting connotations, but I do think that really complex algorithms with billions of parameters are here to stay, at least in the medium term. And as you were alluding to, when you and I are taking care of patients, we are not going to think about exactly where an algorithm is coming from. We just want to know that it works and that maybe we should pay attention to its advice. So bridging that gap is in some ways our responsibility in building AI governance and building trust. I do think that having a fit-for-purpose, thoughtful evaluation process is going to increase that trust.
I also think we need to make sure there are the right monitoring mechanisms, so that we know things that work on day one will continue to work, no different than the blood pressure machines in our clinics or in the ICU. And I do think there are approaches to making these quote-unquote black box algorithms a little bit more transparent. But at the end of the day, the frontline clinician works with the currency of trust and efficiency. So how do we make sure that institutionally we provide tools that are trustworthy, safe, and equitable? I think those are some of our ongoing challenges.
Yeah, it's basically continuing this concept of governance: understanding where things are, and then giving a stamp of approval that, hey, it's safe, we've evaluated it, and we're going to commit to our patients to go back and evaluate this, to make sure we're not introducing bias, we're not delivering bad information, and that nothing in the training went awry, all the things that are showing up in the news cycles. And I think that's important.
And I do think it's no accident that government regulators are beginning to look into this. We are all watching with interest how that evolves, both in Europe, where they have taken a more proactive stance, and in what's happening under the current administration here. I think folks are appropriately concerned about how generative AI might influence society as a whole, within healthcare and otherwise.
And I think we need to be thoughtful. Who knew? We had no idea what the internet was going to do to society 20 years ago. I can't live without a search engine and all my favorite shopping sites now, but I'm not sure social media has had the right impact on our society. So I think it's really interesting, this yin-and-yang push and pull of revolutionary technology and how we harness it. It's in some ways not just the responsibility of a single organization, but of all of us in society in general. It's a fascinating time to be thinking about this.
Well, wonderful. Hey, your insights are fabulous, Dr. Poon.
Thanks so much for joining me today. And I look forward to catching up with you very soon. Thank you.
Great. Well, thanks for having me.
Gosh, I really love this show. I love hearing what workers and leaders on the front lines are doing, and we wanna thank our hosts who continue to support the community by developing this great content. If you wanna support This Week Health, the best way to do that is to let someone else know about our channels. Let them know you're listening to it and you are getting value. We have two channels, This Week Health Conference and This Week Health Newsroom. You can check them out today. You can find them wherever you listen to podcasts. You can find 'em on our website, thisweekhealth.com, and you can subscribe there as well. We also wanna thank our show partners, MEDITECH and Transcarent, for investing in our mission to develop the next generation of health leaders. Thanks for listening. That's all for now.