A leading EHR provider took heat for its sepsis model detecting only 7% of cases missed by clinicians. Today we explore an article by Angelique Russell on why that is both unacceptable and totally understandable.
FTA
The alarms you see on television exist only for the sickest patients at best. There is some data supporting the cost effectiveness of transitioning to continuous monitoring for all patients, but progress towards this goal is often hampered by competing financial priorities. Neither human nor machine can predict risk on null data:
But Epic is not relying on such an algorithm to detect sepsis from labs and vital signs, it is relying on the medical billing code for sepsis, which has an even looser definition subject to varying payer guidelines. When a patient is deteriorating, the "acuity level" or resources necessary to care for that patient increase, so it is necessary that hospitals bill higher amounts for higher acuity care. But in practice, this can result in upcoding.
---
This article is worth checking out: a view from the front lines of care, from a practicing data scientist.
I love how she closes.
Despite what you might have heard about magical unicorns, data science is a team sport. In a clinical setting, it requires guidance from nurses, doctors, clinical quality specialists, medical device experts, EHR experts, data engineers, and data scientists. More than that, it requires leadership to ensure effective collaboration and proper prioritization of patient outcomes above all other end points.
#healthcare #healthIT #cio #cmio #datascience #chime #himss
This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.
Today in Health IT, this story is data science and predicting sepsis. My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week in Health IT, a channel dedicated to keeping health IT staff current and engaged. I provide executive coaching through Health Lyrics, which is my company, and advisory services for health leaders around technology and IT. If you wanna learn more, check out healthlyrics.com.

All right, here's today's story, and this is from a post on LinkedIn. It's from Angelique Russell. She is a data scientist at Providence, and she wrote a really interesting article. I thought I'd capture it for you 'cause it is such an important topic. And her lead-up to the post starts with: you might've heard a leading sepsis predictive model proved to be less accurate than promised by the vendor.
In this article, I explore the reasons why it is so hard to succeed at implementing any sepsis prediction model. Regardless of the model's accuracy metrics, many of these limitations would apply to other predictive models as well. We'll have some work to do to modernize healthcare technology and workflows. And she looks at the four reasons why sepsis predictive models fail.
And she starts with the fact that Epic, the largest EHR provider in the US, is in the news recently for poor performance of its sepsis predictive model. The goal of the sepsis alert system is to avoid missing a diagnosis of sepsis before it's too late, but Epic's sepsis algorithm only detected 7% of sepsis cases missed by clinicians, and she goes on to say, this is both unacceptable and totally understandable.
If you are in the business of wanting to see data science succeed in healthcare, it's worth understanding how and why sepsis prediction models can fail to deliver the anticipated benefits. And she goes on and identifies four reasons for this. Number one: lack of timely automated data in the EHR, including vital signs.
Imagine a modern hospital in America. Wouldn't alarm bells ring if the patient's heart suddenly stopped? Surely alerts would fire at the nurse's station, or perhaps even on the intercom or on the clinician's smartphone, right? Makes sense. The truth is, despite the advances in continuous monitoring technology, many hospital floors lack the medical device interfaces and monitoring tools to alert to a situation as basic as the absence of a pulse. Outside of the ICU, telemetry, or the emergency department, the alarms you see on television exist only for the sickest patients at best.

Alright, so that's the first problem. We don't have access to those vital signs. They're not feeding in, and she says neither human nor machine can predict risk on null data.
She cites the absence of a nurse in the room, or a delay of one to three hours until the vital signs are entered into the computer, as a key contributor to missing a patient's progression to sepsis. Continuous monitoring and automated vital signs must become the standard at all hospitals, including rural and critical access hospitals, before automated algorithms can be successful at detecting deterioration, including sepsis.
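To make her point about null data concrete, here is a minimal sketch in Python. The function, the one-hour freshness window, and the timestamps are all hypothetical, my own illustration rather than anything from Epic or from her article; the point is simply that a risk score can only be as current as the last charted vital sign.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical illustration: a risk scorer can only use vitals that have
# actually been charted. If the last manual entry is hours old, the model
# is effectively scoring on null data.

MAX_STALENESS = timedelta(hours=1)  # assumed freshness window, not a real Epic setting

def usable_heart_rate(last_entry_time: Optional[datetime],
                      heart_rate: Optional[int],
                      now: datetime) -> Optional[int]:
    """Return the heart rate only if it was charted recently enough to trust."""
    if heart_rate is None or last_entry_time is None:
        return None                      # never charted: nothing to score on
    if now - last_entry_time > MAX_STALENESS:
        return None                      # charted, but too stale to reflect current state
    return heart_rate

now = datetime(2021, 6, 1, 14, 0)
charted_at = now - timedelta(hours=3)    # nurse last entered vitals three hours ago

hr = usable_heart_rate(charted_at, 112, now)
if hr is None:
    print("No timely vital signs -> the sepsis score is silent, not reassuring.")
else:
    print(f"Scoring on heart rate {hr}")
```

If vitals arrive one to three hours late, every score computed in between rests on stale or missing inputs, which is exactly the gap she describes.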
And I think that's an important distinction, and that's just number one. All right, so we need those vital signs. We need them early in the process. Number two: upcoding, an uncomfortable truth and a source of label bias. In addition to doctors being unable to agree with each other on a clinical definition of sepsis, there are many doctors who would likely disagree with the inclusion of sepsis, severe sepsis, or septic shock on a given patient's chart at all,
as this paper describes in detail. And it goes on to talk about how doctors don't agree, and some doctors would argue that it should not even be on the chart. She goes on to say, but Epic is not relying on such an algorithm to detect sepsis from labs and vital signs; it's relying on the medical billing code for sepsis, which has an even looser definition, subject to varying payer guidelines.
When a patient is deteriorating, the acuity level, or the resources necessary to care for that patient, increases, so it is necessary that hospitals bill higher amounts for higher-acuity care. But in practice, this can result in upcoding: the addition of questionable diagnostic codes and questionable charges that do not reflect the true clinical state of the patient. This further biases the model to detect not just who clinically has severe sepsis, but who is likely to be medically coded as sepsis, which is not an appropriate indication for any clinical intervention. Despite this obvious truth, when alerted to the risk of sepsis, a physician may order diagnostic tests to rule out sepsis in response to the alert.
This only furthers the upcoding problem. Billing software will note the physician's sepsis-related orders. Coders will look to see if inflammatory response and organ failure can meet the minimum criteria for sepsis, and even if the physician neither diagnosed nor treated for sepsis, she is likely to be prompted to sign off on a sepsis diagnosis to maximize reimbursement. When the algorithm is retrained, this will be detected as a true positive, and the influence of this bias will increase over time, a feedback loop that can contribute to overdiagnosis of sepsis. Isn't that interesting? I found that fascinating.
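To see why that feedback loop matters, here's a toy simulation with entirely made-up rates; it is not a measurement of Epic's model or any real hospital. It just shows how, if a slice of alerted-but-not-septic patients end up coded as sepsis and those codes become the training labels, the coded "sepsis" rate drifts upward with every retraining cycle while the clinical reality never changes.

```python
# Toy simulation of the upcoding feedback loop described above.
# All rates are made-up illustrative numbers, not data from any real system.

true_sepsis_rate = 0.05        # fraction of patients who are clinically septic
alert_rate = 0.20              # fraction of patients the model alerts on
upcode_given_alert = 0.10      # chance an alerted, non-septic patient still gets coded as sepsis

coded_rate = true_sepsis_rate  # labels start out roughly matching clinical reality

for cycle in range(1, 6):
    # Alerts trigger sepsis-related orders; coders then find minimum criteria
    # in some alerted-but-not-septic charts, adding questionable sepsis codes.
    extra_codes = alert_rate * (1 - true_sepsis_rate) * upcode_given_alert
    coded_rate = min(1.0, coded_rate + extra_codes)

    # Retraining on billing codes teaches the model to chase the codes,
    # so the next model alerts a bit more broadly.
    alert_rate = min(1.0, alert_rate * 1.1)

    print(f"cycle {cycle}: coded 'sepsis' rate = {coded_rate:.3f} "
          f"(clinical rate still {true_sepsis_rate:.3f})")
```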
This isn't an area where I'm an expert by any stretch, but I do know Angelique and I have spent some time with her, and she is incredibly smart in this area. And so her understanding of this, and her explaining it in this way, really helps me to understand why this model is only detecting 7%.
Using an algorithm based on medical billing codes is going to be problematic. Of course, then we need to go back to problem number one: we need the vital signs in real time. Alright, so number three: sepsis models may not generalize to other patient populations. Epic is very proud that over 400,000 clinical records went into their sepsis algorithm.
But given the issues stated above, it's easy to imagine that other institutions' historic definitions of sepsis may not match your current clinicians' definition of sepsis. There's another reason for a model not to generalize: certain at-risk populations may be distinctly different due to specific underlying medical conditions.
It is technically possible for a machine learning model to adjust for this, but Epic's model does not. And she goes on to talk about how, at City of Hope, she assisted a physician with investigating the issue with bone marrow transplant patients, who are immunocompromised and therefore do not present with sepsis in the same manner as other patients.
And they proved that a condition-specific model can accurately predict sepsis in a subpopulation. Your institution may have similarly unique patient populations with distinct sepsis-related challenges. Alright, so that's number three: sepsis models may not generalize to other patient populations.
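Here's a hedged sketch of the modeling idea she describes, using scikit-learn with synthetic data and invented feature names. It is not City of Hope's actual model; it only illustrates the structure of fitting a condition-specific model for a subpopulation, such as bone marrow transplant patients, instead of forcing one global model to cover everyone.

```python
# Illustrative sketch only: synthetic data and feature names invented for this example.
# The point is the structure (a condition-specific model per subpopulation),
# not the specific algorithm, features, or thresholds.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_general(n):
    """Fake [temperature, white blood cell count] features for the general population."""
    temp = rng.normal(37.2, 0.8, n)
    wbc = rng.normal(9.0, 4.0, n)
    y = ((temp > 38.0) | (wbc > 14.0)).astype(int)   # classic inflammatory picture
    return np.column_stack([temp, wbc]), y

def make_bmt(n):
    """Fake features for bone marrow transplant patients, who present differently."""
    temp = rng.normal(37.2, 0.8, n)
    wbc = np.clip(rng.normal(2.0, 1.0, n), 0.1, None)  # suppressed counts post-transplant
    y = (temp > 37.8).astype(int)                       # subtler fever, no WBC surge
    return np.column_stack([temp, wbc]), y

X_general, y_general = make_general(2000)
X_bmt, y_bmt = make_bmt(300)

# One global model trained mostly on the general population...
global_model = LogisticRegression().fit(
    np.vstack([X_general, X_bmt]), np.concatenate([y_general, y_bmt])
)
# ...versus a condition-specific model trained only on the subpopulation.
bmt_model = LogisticRegression().fit(X_bmt, y_bmt)

print("global model accuracy on BMT cohort:       ", global_model.score(X_bmt, y_bmt))
print("BMT-specific model accuracy on BMT cohort: ", bmt_model.score(X_bmt, y_bmt))
```

The design point is that the subpopulation gets its own decision boundary, learned from patients who present the way that cohort actually presents.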
Number four: now what? There's no consensus for treating before sepsis diagnostic criteria are met. And she goes on with this one to say, what is to be done with an early sepsis prediction? Even a timely sepsis alert can produce conflicting clinical responses. It is often a nurse who first observes sepsis when reviewing vital signs and observing systemic inflammatory response syndrome, and it is the nurse who must coordinate the urgent response: initiate the full sepsis bundle, begin fluids, begin IV antibiotics, or do nothing until the doctor calls back.
She closes it out by saying, what's the hope for the future? Where do we go from here? Despite what you might have heard about magical unicorns, data science is a team sport. In a clinical setting, it requires guidance from nurses, doctors, clinical quality specialists, medical device experts, EHR experts, data engineers, and data scientists.
More than that, it requires leadership to ensure effective collaboration and proper prioritization of patient outcomes above all other endpoints. It's fun to publish a paper about predictive algorithms, but not useful unless you have figured out how to integrate this prediction into clinical workflows. And I love her so what on this, which is that it is a team sport. You have to bring all these things together, and I love how she closes this out. I mean, that is really the so what, right? Data science is a team sport. It's not something where you put smart data scientists into a room. This is the mistake I made as a CIO: I went out and hired two data scientists.
We got them on board, and one went in one direction and one went in another direction. One was clearly just sucked into the day-to-day analytics of normal problems, and the other really started to partner with the organization and try to identify areas. The problem was there are so many areas to work on, sepsis just being one of those.
But there are so many areas where data scientists can work. We need more data scientists and more data engineers to work closely with our clinical quality specialists, with our nurses, with our doctors, with medical device experts, with EHR experts, all the things that she talks about here. This is such an important collaboration to identify these models, to identify the gaps in these models, and to create better models moving forward.
If you want more on this, Angelique Russell is out on LinkedIn: Angelique Russell, MPH. She works for Providence and has written several articles on LinkedIn. I highly recommend anything that she writes. Go ahead and check it out. That's all for today. If you know someone that might benefit from our channel, please forward them a note.
They can subscribe on our website, thisweekhealth.com, or wherever you listen to podcasts: Apple, Google, Overcast, Spotify, Stitcher, you get the picture. We are everywhere. We wanna thank our channel sponsors who are investing in our mission to develop the next generation of health leaders: VMware, Hillrom, StarBridge Advisors, McAfee, and Aruba Networks.
Thanks for listening. That's all for now.