This Week Health

Don't forget to subscribe!

Well, this conversation isn't going away. How about we address it head on?


 Here we go. Today in Health IT: nurses are struggling with AI, even to the point of not trusting AI. We're going to talk about that today. My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, a set of channels and events dedicated to transforming healthcare, one connection at a time. We want to thank our show sponsors who are invested in developing the next generation of health leaders: Notable, ServiceNow, Enterprise Health, Parlance, Certified Health, and Panda Health.

Check them out at thisweekhealth.com/today. Hey, this news story, and every news story we cover, you can find on our website, thisweekhealth.com. Curated news stories just for you. Go ahead and check it out today. All right. One last thing: share this podcast with a friend or colleague.

Use it as a foundation for daily or weekly discussions on the topics that are relevant to you and the industry. A form of mentoring. They can subscribe wherever you listen to podcasts. All right. This was an interesting article that came up in Becker's Hospital Review. What's the title? "I don't ever trust Epic to be correct." That's an interesting clickbait title, but anyway: "I don't ever trust Epic to be correct."

Nurses raise more AI concerns. All right. So we're going to take a look today at how I would make the case here, but first, the gist of the article. Nurses are raising concerns about the integration of artificial intelligence into electronic health records, arguing that the technology is ineffective and detracts from patient care. They report that AI-driven programs, like automated nurse handoffs and patient classification systems, frequently fall short of practical needs, citing examples where algorithms misjudge patient acuity or provide inaccurate warnings like sepsis alerts. And while companies like Epic and Oracle Health promote AI as tools to enhance care and prevent adverse events,

nurses insist that the systems fail to account for essential aspects of their workflow, such as patient education and compassionate care. Health systems like Kaiser Permanente and Keck Medicine have attempted to address these concerns by emphasizing that ultimate decisions are made by licensed nurses and that AI should assist rather than replace human judgment. Absolutely. So that's how they're defending it.

So I'm going to defend the nurses. Absolutely. Nurses are critical in terms of the relationship with the patient: communicating empathy, identifying acuity levels. At this point, the AI algorithms and tools being used pale in comparison to the human, especially to a trained human who has spent decades caring for individuals.

So there's no doubt about that. A human at the center, in the middle of the equation? Absolutely. Now, if I were sitting in front of a group of nurses and they were asking questions about AI, here are some of the things I would be talking about. Some of this, obviously, the article addresses, but the first thing I would talk about is that complementary role.

AI is designed to augment, not replace, the expertise and judgment of healthcare professionals. Those nurses are critical. AI's primary function is to assist the nurse by automating routine tasks, allowing them to focus on direct patient care and compassionate interactions. But at the end of the day, the nurse has a critical function to play, and that is protecting the patient at all costs, potentially from AI and the mistakes that AI can make, but also being able to identify things that AI cannot identify.

The second thing I would say is that AI is providing a lot of benefits, especially with regard to decision support. The decision support tools we've had, not only over the last couple of years as AI has become prominent, but for years, offer evidence-based recommendations and alerts that help nurses make more informed decisions. Those kinds of alerts have been fine-tuned over time.

They're not probabilistic; they're deterministic. Meaning they are not guessing the way an LLM might. They are looking at vital signs and those kinds of things, built on years, if not decades, of information that has been put through them, that says: hey, when you see these things, sepsis is likely around the corner.

Therefore you should address it. Or a code blue is around the corner; therefore you should address it. It's those kinds of decision support tools that have been built, literally, with decades of information. They're meant to augment. They're meant to support. The next thing I would say is that we're always looking to improve efficiency.
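To make the deterministic point above concrete, here is a minimal sketch of a rule-based early-warning alert, loosely modeled on the classic SIRS screening criteria (temperature, heart rate, respiratory rate, white blood cell count). The thresholds are the textbook SIRS cutoffs, but this is an illustration of the style of logic, not Epic's or Oracle Health's actual alert, and certainly not a clinical tool:

```python
# Hypothetical rule-based early-warning check, loosely modeled on the
# SIRS criteria. Illustrative only; not a clinical tool.

def sirs_criteria_met(temp_c, heart_rate, resp_rate, wbc_k):
    """Return how many SIRS criteria one set of vitals meets."""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,  # abnormal temperature
        heart_rate > 90,                 # tachycardia
        resp_rate > 20,                  # tachypnea
        wbc_k > 12.0 or wbc_k < 4.0,     # abnormal white cell count (x10^9/L)
    ]
    return sum(criteria)

def sepsis_alert(temp_c, heart_rate, resp_rate, wbc_k):
    """Deterministic alert: fires when two or more criteria are met.

    The same inputs always produce the same answer; nothing is guessed.
    The alert augments; a licensed nurse reviews every firing.
    """
    return sirs_criteria_met(temp_c, heart_rate, resp_rate, wbc_k) >= 2

# A febrile, tachycardic patient meets two criteria and trips the alert.
print(sepsis_alert(39.2, 104, 18, 8.5))  # prints True
```

The contrast with a probabilistic model is the point: a rule set like this is fully auditable, so a nurse can see exactly why it fired and judge whether to act.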

We're looking to lighten the cognitive burden associated with not only caring for those individuals, but documenting that care. We hear all the time that the documentation burden is so great. We're trying to improve efficiency by streamlining administrative tasks such as documentation, patient classification, and those kinds of things. AI can reduce that burden on nurses, freeing up time for the more critical aspects of patient care. Then I would say these systems are going to continuously learn, and they need the nurses to support them in that process.

Feedback from healthcare professionals, including nurses, is vital for refining algorithms and ensuring they align more closely with clinical realities. This, like every other technology, will get better over time. But it does require that input. This is very interesting. It used to be that we would have to program all those things in.

So there would have to be a relationship between the programmer and the user; that information would have to come across, and then it would have to be programmed in. That's no longer the case. You can actually act as the programmer by providing feedback, so that continuous improvement can happen within the systems. The other thing to note: modern AI solutions are increasingly customizable, allowing healthcare professionals and systems to tailor algorithms to better fit specific workflows and patient populations. This is going to be really important, because not every patient population is the same.

And this is so critical. We talk about this with CIOs and data professionals all the time: we are going to have to customize to avoid the bias that could come from using a model trained on the wrong dataset, potentially a national dataset, when your local dataset is what matters, whether that be rural or urban, whatever it happens to be. Your specific patient population is going to be critical in fine-tuning those models.
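One hedged sketch of what checking a model against your local population can mean in practice: before trusting a nationally trained alert locally, compare a simple accuracy metric per subgroup. The subgroup labels and the data below are invented purely for illustration:

```python
# Illustrative subgroup check: does alert accuracy hold up across the
# populations this hospital actually serves? All data here is made up.
from collections import defaultdict

def accuracy_by_group(records):
    """records: list of (group, predicted, actual) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted == actual:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

records = [
    ("urban", True, True), ("urban", False, False),
    ("urban", True, True), ("urban", False, False),
    ("rural", True, False), ("rural", False, True),
    ("rural", True, True), ("rural", False, False),
]

scores = accuracy_by_group(records)
print(scores)  # prints {'urban': 1.0, 'rural': 0.5}
```

A gap like that between subgroups is exactly the signal that a model trained elsewhere needs local tuning before clinicians are asked to trust it.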

And then what I would say is there are some things for us as IT professionals, as technology professionals, as AI developers, that we need to take into account in order to build trust with the clinical population. Some of those are the ethical considerations. These are paramount in AI development: ensuring that we have transparency into these models, that we've removed the bias, that we have fairness and accountability in the algorithms. That is foundational to building trust with healthcare professionals and patients. The other thing is we've got to go back and validate these things. We need real-world evidence. Studies and pilot programs at various health systems have demonstrated the potential benefits of AI in improving patient outcomes and operational efficiency.

We need to share these success stories. We need to document them. We need to vet them over and over again. We need to make sure that we have ethical safeguards and real-world evidence. And then there's one thing I've kept on the outskirts in the last couple of comments, but I have to bring it front and center.

And that's patient-centric design. AI systems should be designed with a strong emphasis on patient-centered care, ensuring that they support rather than detract from the nurse-patient relationship, that we incorporate feedback from nurses and patients in the development process, and that we're achieving the goals they're looking for.

From an IT perspective, from a technology professional's perspective: ethical safeguards, real-world evidence, patient-centric design. These are going to be critical from our standpoint. From the nurse's standpoint, it's the complementary role that is critical. Understand that AI is not there to replace.

It plays a complementary role. We've been using these decision support models for decades, literally decades, and they've been trained on information. They're not probabilistic; they're deterministic. That's an important thing to understand as well. They're there to improve your efficiency, to remove the cognitive burden, the cognitive burnout and overload that happens over time. We're going to continuously improve these models, but that requires nurses to provide feedback to them.

We are training them. We are customizing these models as we go, and that customization helps us push out the bias that can be found in some of these systems. As we look at this, it is a great opportunity to have the dialogue we need to have with nurses, with clinicians, with patients, with technology professionals, bringing them together.

This is why it's so important to have those groups stood up within your organization. I know this sounds like a broken record; I've been talking about this for a while. But that group should not only be stood up within your organization, it should be an active group. They should be reading and pulling in as much information as possible.

They should be looking at policies, looking at ethics, looking at use cases, looking at AI literacy across the entire organization. These are just some of the things. I call it an AI governance group. You can call it an AI work group. It could be a subcommittee of your governance group. Whatever it happens to be, it's that cross-functional group that is bringing information in and then disseminating it, in order to help your organization be ready for the time when AI matures to the point where you're going to be using it in just about everything. Right now there are specific use cases where we're seeing it, but I think it will continue to expand to even greater use cases in the near future.

Anyway, that's all for today. Don't forget: share this podcast with a friend or colleague. Agree with me, disagree with me; the important thing is that you are having a conversation about it. We want to thank our channel sponsors who are investing in our mission to develop the next generation of health leaders: Notable, ServiceNow, Enterprise Health, Parlance, Certified Health, and Panda Health. Check them out at thisweekhealth.com/today. Thanks for listening. That's all for now.

Transform Healthcare - One Connection at a Time

© Copyright 2024 Health Lyrics All rights reserved