
February 12, 2025: Artificial intelligence is transforming healthcare, but are hospitals truly ready? Sarah and Kate explore how 65% of U.S. hospitals are implementing AI tools, the gaps in evaluating bias and accuracy, and what leaders must do to ensure responsible AI adoption. Join us as we discuss key findings from a University of Minnesota study, challenges smaller hospitals face with off-the-shelf AI models, and the crucial role of AI governance in mitigating bias.

Subscribe: This Week Health 

Twitter: This Week Health 

LinkedIn: This Week Health

Donate: Alex’s Lemonade Stand Foundation for Childhood Cancer

Transcript

  This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.

Today in Health IT, we're discussing hospital use of AI tools analyzed for accuracy and bias. My name is Kate Gamble, and I'm Managing Editor at This Week Health, where we host a set of channels and events dedicated to transforming healthcare, one connection at a time. This episode is brought to you by Chrome OS.

Imagine a healthcare system where technology works seamlessly in the background, keeping your data secure, your teams connected, and your patients at the center of care. Visit thisweekhealth.com slash Google Chrome OS to learn more. Alright, so today we're talking about hospitals' use of AI tools analyzed for accuracy and bias.

And I'm joined by Sarah Richardson, President of Community Development. Sarah, thank you for joining. Good morning, Kate.

So we're looking at a recent study from the University of Minnesota School of Public Health examining the adoption and evaluation of AI tools in U.S. hospitals. The research, which was published in Health Affairs, highlights significant disparities in how hospitals implement and assess AI technologies, particularly concerning accuracy and potential bias.

This study, which found that approximately 65 percent of U.S. hospitals are using AI models, looked at accuracy and bias and identified some of the disparities that exist. It also focused on what needs to happen going forward. So we know that AI is revolutionizing healthcare, but are hospitals truly ready for it?

We're going to break down what this means for healthcare CIOs, hospital leaders, and the future of AI in medicine. So let's dive into the big questions. How can hospitals leverage AI effectively while ensuring fairness, accuracy, and accessibility?

You mentioned that the study found 65 percent of hospitals are using AI tools for predictive modeling. This may include forecasting patient health risks, such as identifying those likely to be readmitted; optimizing hospital scheduling, such as predicting peak patient volumes; and automating administrative workflows, such as billing and staffing needs.

Larger health systems tend to develop AI tools in-house, while smaller hospitals often rely on third-party vendors or partners, which may create a gap in the customization and control of what's being utilized in the system. But AI is also being explored in clinical decision support, radiology imaging, and even robot-assisted surgeries, which we've seen for several years.

So bringing these solutions into the facilities is really building upon foundations that already exist, which I'm hopeful also means they can overcome some of the barriers and concerns with the types of AI that continue to be utilized in their hospitals. Yeah, with so much use across healthcare, we really need to look at whether it's being evaluated properly.

And what did the article tell us about that? Of the hospitals using AI, 61 percent assess the tools for accuracy, and only 44 percent evaluate them for bias. That's a huge gap in responsible AI use. And many hospitals assume that an AI model is valid just because it was created by a reputable vendor, but without local validation, errors and biases can persist.
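For teams that want to see what local validation can look like, here is a minimal sketch in Python, assuming the vendor model exposes risk scores and you have a labeled holdout set of your own patients; the file and column names here are hypothetical:

import pandas as pd
from sklearn.metrics import roc_auc_score, brier_score_loss

# One row per patient encounter, already scored by the vendor's model.
# "local_holdout.csv" and the column names are placeholder examples.
df = pd.read_csv("local_holdout.csv")
y_true = df["readmitted_30d"]        # observed outcome (0/1)
y_score = df["vendor_risk_score"]    # model's predicted probability

# Discrimination and calibration on YOUR population, not the vendor's.
auroc = roc_auc_score(y_true, y_score)
brier = brier_score_loss(y_true, y_score)
print(f"Local AUROC: {auroc:.3f} (compare against the vendor's claimed figure)")
print(f"Local Brier score: {brier:.3f} (lower is better)")

A sizable drop from the vendor's published numbers is a signal that the model was trained on a population that doesn't look like yours.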

It's important to have an AI framework in your organization by which you evaluate these programs and these outcomes. And if you don't know where to start, go with someone you know. Lynn Shapiro Snyder is a good friend of mine. She founded Women Business Leaders of the U.S. Health Care Industry Foundation and has been at the Epstein Becker Green law firm in DC for several decades.

And I did a recent podcast with her. She has an entire AI assessment model that helps you consider whether you're taking the steps necessary to ensure accuracy and evaluation of bias, such that, she says, by partnering with her and using that framework, if she has to testify in court on your behalf, she can say with absolute certainty: yes, they followed these protocols and procedures.

So if there is an issue in these spaces, you're less likely to be exposed, because you've had the rigor around anything that could be audited or could affect patient care. Those are really great insights, and it's so important to factor all of this in, especially because what we're seeing with AI and bias is a really legitimate concern.

So talk a little bit about how this can show up in hospital systems and affect patient care. Absolutely. Depending on the model that you're using, they learn from historical data, which means they inherit biases present in past healthcare decisions. So if everything is contained to your healthcare system, that may be representative of your market or the patient populations that you serve. But remember that underestimating risk for minority populations due to a lack of diverse training data can present an issue. You could have incorrect prioritization of patients based on income, insurance status, even zip codes.

And then there are gender-based disparities in diagnosis, predictions, and treatment recommendations. Also consider what type of city or destination you live in. If you are living in a city heavily visited by tourists, you could have a truly diverse view in all of this historical data that may or may not play into some of the local decision factors.

So really understand your patient population to ensure that the biases you may be seeing, and how they show up, are truly understood by the teams that are interpreting the information.

Yeah, it's pretty overwhelming when you look at it. If AI bias gets ignored, we could be looking at misdiagnoses, delayed treatments, and increased health disparities. But beyond that, hospitals could face legal and reputational risks for using AI tools that exacerbate inequalities instead of addressing them.

And there's the trust factor, which is so important, and you and I touch on this: if patients lose trust in AI-driven healthcare when biases aren't addressed, that could have some really detrimental effects. So there's a lot here, but I want to talk a little bit more about the resource divide, which was mentioned a bit earlier, and why some hospitals seem to struggle with it.

Larger hospitals are going to have an advantage for obvious reasons, but what about smaller hospitals? As you and I love to talk about access and equity, and some of the rural hospital systems specifically: wealthier hospitals develop AI tailored to their patient population, while smaller hospitals rely on the off-the-shelf models we've already talked about, which may not reflect local needs.

Have that conversation with your vendor partner when you're going into contracting. What data is being used? How is it being sourced? Where is it being held?

How are you growing some of this modeling and information? These off-the-shelf models tend to be trained on data from large urban hospitals, and they may not perform well in a rural setting, which can lead to errors in predictions and decision making. And then smaller hospitals are often going to lack the expertise and funding to evaluate AI models effectively, putting them at risk of what I would call adopting flawed tools, for lack of a better term.

What I'm loving already into 2025 is that organizations are reaching out to one another for these conversations: hey, we're evaluating this; hey, we're looking at this; can you share this governance model? Can you share which LLMs are most effective in your environments? We're seeing the bigger hospitals mentor the smaller hospitals, even if they're not in their referral group, because we bring people across the country together.

And while we talk about healthcare being local, when you're thinking about what AI models to adopt and how to train some of these models, that's a universal challenge that our communities are continuing to come together and help one another solve. It's wonderful to see the type of interactions we're seeing from those involved in our 229 communities.

Yeah, I couldn't agree more. We had a summit back in the fall that focused a lot on AI, and there's another one coming up in the spring, in April. It's so important to have those discussions, and like you said, when you see leaders from different hospitals, whether it's large systems or community hospitals, coming together, it shows that there really is such an opportunity for collaboration when it comes to AI, sharing models, and ultimately creating more equitable technology, which is really important.

So in terms of the CIO, what steps can be taken to ensure AI is accurate and fair? One of our favorite topics: AI governance teams. It's a mix of IT, clinical, legal, and ethics experts, and they should always be reviewing the AI tools. We've seen varying degrees of whether you spin up a completely separate AI governance team or fold it into your existing governance team for some of your project and spend decisions, et cetera.

My favorite answer is always: it depends. Know your organization well enough to know whether this can be folded into something that exists today versus spinning up another group, because the more committees you have, the more cross-committee collaboration and awareness needs to occur, which could actually add more steps of confusion. But I'd say make sure that AI governance is an element of any governance structure you have in your organization.

And if you don't know where to start with AI governance, again, reach out to us; several of the CIOs in our mix have robust and really well-functioning governance programs. Then test AI with diverse data before you deploy it, and ensure it's trained on data that represents your entire patient population.

And this goes back again to: do I have local patients, do I have tourist patients, do I have people being admitted to specialty hospitals? Really understand the mix of your patient populations. Know what your bias audits should look like, and run bias detection models to find hidden disparities in AI predictions.
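To make "run bias detection models" concrete, here is a minimal subgroup-audit sketch in Python, under the same assumptions as the earlier example (scored local data; the grouping and column names are hypothetical). Real audits go deeper, but comparing performance across patient groups is a common starting point:

import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_audit(df, group_col, label_col="readmitted_30d",
                   score_col="vendor_risk_score"):
    """Report per-subgroup sample size, AUROC, predicted risk, and event rate."""
    rows = []
    for group, sub in df.groupby(group_col):
        if sub[label_col].nunique() < 2:
            continue  # AUROC is undefined when a group has only one class
        rows.append({
            group_col: group,
            "n": len(sub),
            "auroc": roc_auc_score(sub[label_col], sub[score_col]),
            "mean_predicted_risk": sub[score_col].mean(),
            "observed_event_rate": sub[label_col].mean(),
        })
    return pd.DataFrame(rows)

df = pd.read_csv("local_holdout.csv")  # hypothetical scored local data
print(subgroup_audit(df, "race_ethnicity"))
print(subgroup_audit(df, "insurance_status"))

Large AUROC gaps between groups, or predicted risk that diverges from the observed event rate for one group only, are exactly the hidden disparities worth escalating to the governance committee.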

If you are someone who loves data and loves analysis and loves to look for some of these disparities, what a fun role AI has brought into your universe. And then, back to the transparency from your partners: CIOs must demand clear documentation on how AI models are trained and validated. That can go several layers deep; it may involve legal, it may involve supply chain.

In fact, I believe that it should. And it means understanding: where is your data? Who owns it? What happens to it? Can we keep it on site, behind the firewall, as an example? Really understand the ecosystem of how the information is being utilized. So when you put all those pieces together, you come to this: nothing's ever going to be ironclad, Kate.

But at least you're going to have a space that says: we have done everything necessary to ensure we're making good decisions based on good practices. Yeah, that's a great philosophy to have anytime, but especially when we're talking about something as complex as AI. And what we've seen is that there are organizations who are getting it right, whether that means setting up real-time AI monitoring systems, which can flag potentially biased decisions and allow for human review, or collaborating with universities to create models tailored to their region's patient demographics, which, as you touched on earlier, can be so important.

As we look at AI, which is advancing very rapidly, what are the overall trends that hospitals should be watching? Ambient AI and chatbots. Even at our last summit, we were talking about the AI versus the AI: when two AI systems are going head to head, which outcome are you validating or looking at most effectively?

So those AI-powered scribes and virtual assistants are going to help reduce clinician burnout. We're already seeing it. We already have those partners in our mix, and they're constantly innovating; some of these partners helped found some of these modeling techniques. And so really, these are solutions we would put in place if we were CIOs, and we're seeing how they are truly performing.

And then AI-assisted radiology scans are more and more common. That accuracy has to be constantly validated by a human, yet the model can see things that the human eye may not be able to, which is leading to improved quality and potentially better outcomes. Then there are stronger AI regulations: we expect new healthcare AI policies to address bias, transparency, and patient safety.

Be more stringent in this, especially in the beginning as you are learning it, too. It's okay to have stronger audits. It's okay to have stronger regulations to ensure that you are getting comfortable with the information. And lastly, what you want to talk about all the time: human and AI collaboration.

Instead of AI replacing decisions, the best future is AI supporting and enhancing human expertise. I love that last perspective. Human and AI collaboration. It is a union of the two to ensure that we have the best outcomes. Yeah, definitely. I really appreciate that too. And I think striking that balance is something that everyone's learning and we are going to see organizations that tend to do a bit better at that.

And that's where that best practice sharing comes into play. And we do have to keep an eye on the policy angle. Of course, the question is whether or not AI tools should be regulated like medical devices.

Yeah, some experts argue yes, because AI models impact patient care just like a medical device. Others worry that too much regulation could slow innovation in hospitals. And then CIOs need to advocate for balanced policies that promote responsible AI without stifling progress. So it's a blend.

You're likely going to see regulation come in. You need to follow those rules, and then figure out how to adapt them into your own environments most effectively. So we've thrown out a lot about this, but because it is such a pertinent topic, we really have to. Let's just try to put together some takeaways for CIOs as far as AI and the evaluation of bias.

I would give you four things to think about, and the first one is: don't just trust your partnerships; evaluate for accuracy and bias. There are so many places you can go to find out information about a potential partner, whether that's your existing peer group, KLAS, Gartner, or other reporting agencies, along with doing your own independent research.

So that's the first: evaluate for accuracy and for bias. Second, push for fairness: advocate for diverse, unbiased AI models that represent all patients. Third, monitor performance: continuously test and adjust to meet the needs of your facility. And fourth, be a leader in what I would call responsible AI: CIOs, CISOs, and beyond should set the tone for ethical AI adoption in their organizations.

And that's likely going to be done through those same committees where you're making the governance decisions: legal, risk, security, clinician leadership, et cetera. Set the tone for AI in your organization.

Yeah, well said. Listen to what your peers are doing. Learn from what your peers are doing. AI is something that has so much potential, but it really does need to be managed so thoughtfully.

It's one of the most powerful tools in modern healthcare, but only if it's used responsibly. The future belongs to hospitals that embrace AI innovation while ensuring fairness, transparency, and accuracy, and maintaining that balance that we spoke about. Don't forget to share this podcast with a friend or colleague.

Use it as a foundation for daily or weekly discussions on the topics that are relevant to you and the industry. They can subscribe wherever they listen to podcasts. Sarah, thank you for joining me. Thank you for listening. And that's all for now. You usually say: that's a wrap. Yeah, I do. Okay. Thank you for listening.

And that's a wrap.
