This Week Health

Don't forget to subscribe!

September 24, 2021: Examine. Diagnose. Prescribe. Is predictive analytics the magic pill for healthcare? Angelique Russell, Senior Clinical Data Scientist / Informaticist joins us today to share her advanced data science and deep healthcare expertise. How can we use visualization and modeling techniques to solve healthcare’s most challenging problems? What kind of insights are we looking to derive from the data? Have the tools changed much over the last five years? Are we still focused on provider data or are we starting to pull in some other data? And what is the future of machine learning, AI and NLP? 

Key Points:

  • In 2011 the focus in healthcare was that we needed to modernize [00:04:20]
  • Treatment guidelines and order sets, which are how hospitals standardize treatment, change all the time [00:12:40]
  • Analytics answers questions and data science explores what questions we should be asking [00:18:15]
  • Prediction is before the diagnostic criteria is met and detection is after that point [00:33:30]
  • The Undoing Project by Michael Lewis
  • Cogitativo
Transcript

This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.

Today, on This Week in Health IT: there's an idea in healthcare data science that we can apply deep learning in a basically unsupervised way. Just take a big, vast database and say, we're gonna let the algorithms find the signals. And the risk there is that the signals you're choosing are not going to be consistent over time.

And that kind of black box approach, I don't think it works at all in healthcare.

Thanks for joining us on This Week in Health IT Influence. My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week in Health IT, a channel dedicated to keeping health IT staff current and engaged. Special thanks to our Influence show sponsors, Sirius Healthcare and Health Lyrics, for choosing to invest in our mission to develop the next generation of health IT leaders.

If you wanna be a part of our mission, you can become a show sponsor as well. The first step is to send an email to partner at thisweekinhealthit.com. I ran into someone and they were asking me about my show. They're a new Master's in Health Administration student, and we started having a conversation, and I said, you know, we've recorded about 350 of these shows.

And he was shocked. He asked me who I'd spoken with, and I said, oh, you know, just CEOs of Providence and of Jefferson Health, and CIOs from Cedars-Sinai, Mayo Clinic, Cleveland Clinic, and all these phenomenal organizations. All this phenomenal content, and he was just dumbfounded. He's like, I don't know how I'm gonna find time to listen to all these episodes; I have so much to learn.

And that was such an exciting moment for me, to have that conversation with somebody and realize we have built up such a great amount of content that you can learn from and your team can learn from. We did the Covid series, talked to so many brilliant people who are actively working in healthcare and health IT, addressing the biggest challenges that we have to face.

We have all of those out on our website, and we've put a search in there that makes it very easy to find things. All the stuff is curated really well. You can go out onto YouTube as well, pick out some episodes, share them with your team, have a conversation. We hope you'll take advantage of our website.

Take advantage of our YouTube channel as well. Today we have Angelique Russell with us. She's a healthcare data expert who has been with Providence and several other organizations, but I'll let her give her bio. Good afternoon, Angelique. Welcome to the show. Thanks, Bill. So happy to be here. Yeah, I'm looking forward to the conversation.

You have so many great posts on LinkedIn, but you and I have also shared the stage at one point before. I think it was maybe even five years ago: you and I and Darren D in Southern California, at a university. We were talking data. That was an interesting day. People were just firing their questions at us and we had to field all their questions.

There were a lot of students and whatnot. It's interesting, the perceptions of healthcare data out amongst patients, out in the world, how they think the data is, versus those of us who know, sitting back and going, it's not as clean as what you think. Definitely. Yeah. And I think a lot of the questions at that point were around interoperability, and how can I get my data, that kind of stuff.

And we were sitting there going, well, we could probably get it to you, but I'm not sure it would do you much good, because at least five years ago we were looking at it going, I'm not even sure we could get to a proper definition of some of the terms. We have all this data, we're gonna give it to you, and you're gonna look at it and say, I have this. And it's like, no, you don't have that, because this was the definition five years ago and the definition today is very different.

And then, yeah, absolutely, that was a fun conversation. Alright, so tell us about yourself, current and previous roles, and your work in data science. Sure. Yeah, I actually had a career in analytics and technical project management that precedes healthcare. But for the last decade I've really pivoted and focused exclusively on healthcare, and I got in at the ground floor as an EHR analyst. I did a lot of build in Allscripts, and since then I've learned Epic things like clinical documentation,

structured notes, flow sheets, and order sets. At that time, in 2011, the focus in healthcare was really, we just need to modernize, we just need to have systems in place. But once we had those systems in place and we started looking at what we could do with this data, I went back to my core skillset and started getting involved in predictive analytics.

I was at City of Hope for a few years, which is a comprehensive cancer center, where we formed a data team to look at how we could predict outcomes like sepsis and mortality. As you mentioned, I was with Providence Health System for a few years, and now I'm returning to my data science consulting roots. I'm pretty excited to be in this space right now in healthcare, doing data science.

Somehow I missed that you have a master's in public health. I never know where to fit that in, but yes, a Master of Public Health from UC Berkeley. When you got the master's in public health, what were you thinking? What was I thinking? It's such a fun question in a pandemic, because it's a time when people look at public health experts. But what I was thinking was, healthcare is a forever focus for me. I'm always gonna be in healthcare, and I really wanted some expertise around our population. When I began my MPH, population health was not yet a buzzword, but that's what I was thinking. I was thinking, I don't just wanna know healthcare data, I wanna know the whole picture of what makes a healthy person and what produces health outcomes.

And of that whole picture, healthcare is only maybe 10 to 20% of a contributor to those outcomes. Actually, we're gonna do an interesting walk through your LinkedIn posts and just tell stories, because I think it's fascinating; you have so many interesting things there. But just going back to population health: 10, 15, 20% of overall outcomes in health is the actual delivery of healthcare.

Who has the best data to really attack this problem today? I mean, the most complete dataset to really attack that problem? Yeah, that's a great question. I'm not sure that it exists. I think I can say conclusively it probably isn't your doctor. Your doctor is least likely to have that data. Health systems that you're associated with, maybe second best, 'cause they can at least get some of your social determinants of health information out of what they know about you.

But I think there's a real opportunity there to collaborate between different stakeholders, because I don't think that dream dataset exists: the one that has where you live and your social circumstances and your behavior, combined with your health information from your healthcare provider, combined with your genomics and information about the environment.

On top of that, I don't think that comprehensive dataset has really been put together. Plus financial and educational background, sure, has such a significant impact as well. So we try to get that through surveys, but it's still pretty incomplete when we get that information. Plus, it's changing.

That's the other challenge with that dataset as well. Alright. You are one of those people: every time I see a post by you on LinkedIn, I have to read it. There's a handful of people like that out there, but you're a good storyteller. And data scientists are good storytellers. They have to take the data and come up with stories.

What is this data telling us? A lot of times we look at the data and we say, this is what it's telling us, and then the data scientists look at us and go, no, it can't tell you that. Early on, people are always asking how data scientists got their start. You were down to very little, and you sort of looked at it and said, I'm strapped, what should I do? And you decided to take a trip to Disneyland. That sounds like very early on in your career. How old were you when you were making that decision? Sixteen. Sixteen! Yeah. I moved to Southern California when I was 17, but that decision was just before my 17th birthday.

And yeah, it was very much just opportunity driven. I'm from a small town up north, and when I saw how many jobs there were in Southern California, I got so excited and knew that that's where I needed to be in order to be independent. It's interesting, because I talk to people a lot about this; I've moved around a lot in my career.

You actually haven't moved around that much. You went to Southern California and you've stayed in Southern California, which is an area that has a lot of jobs. But one of the things I talk about a lot when people are asking about careers: the first question I ask is, are you willing to move?

And recently I had two different conversations with two people. I started with that question, and one of 'em said, yeah, I'll move for an opportunity, in which case I can now talk about the full breadth of what's available in healthcare. And one person said, no, I wanna live in this town. And I go, okay, well, you have the local health systems, you have anyone that'll let you work remote, and you have people that will let you travel for your job.

I mean, it really does limit you, and that's one of the big points that you make here: if you feel like there are no opportunities where you're at, maybe it's time to get in your car and go to Disneyland. Yeah. And I think things have changed a lot since I moved here 20-some-odd years ago.

Housing prices have gone up a lot, and the cost of living is such that I don't think I would necessarily recommend Southern California for people trying to get into healthcare. But definitely, if you don't find opportunity where you're at, relocating can be a huge advantage in getting your career going.

It's interesting, because when I left Orange County, St. Joe's was headquartered in Southern California, and now it's headquartered at Providence up in Seattle. A lot of people were like, where do you go from there if you're in healthcare? I'm like, well, MemorialCare is still there, and so is Hoag,

I guess, according to the news articles, breaking off from Providence again. But at the end of the day, the largest health systems in Orange County are Kaiser, which is out of Oakland, and Providence, which is out of Seattle. That's an example of thinking you're moving to a place to do healthcare when in reality the jobs are being hired out of somewhere else.

Yep. Absolutely. All right. I wanna talk some data science concepts with you since I have you on the line. You wrote a post that said: model drift, concept drift, historic time bias. When working with healthcare data to train predictive models, it's always prudent to have an extra holdout of recent data to make sure the accuracy is the same across time.

Help us understand those three concepts, and what you're talking about with holding some data back. Sure. So when you're doing predictive analytics in healthcare, there are two signals that you're picking up on. There's an individual's biometrics, right? You might think of your vital signs as revealing that you're going downhill or declining.

And your lab values can be that way, but there's an overlap there. So there are your vital signs, and then there are your lab values. Your lab values actually overlap with the treatment domain, right? Because a physician making a treatment decision is going to order lab values to monitor the effects of that treatment, and also to confirm the justification for that treatment.

And once you get into the treatment domain, this is very subjective data: based on how you are being treated, you will have a different dataset. So if we're talking about sepsis, pre-2011 I think it was, there were drugs on the market that were pulled from the market.

The name just fell out of my brain, but there was a big change in treatment in 2011, so treatment decisions that might predict in your algorithm would no longer predict after that point in time. That's just one big example. But treatment guidelines and order sets, which are how a health system or a hospital standardizes the treatment that patients receive:

these change all the time, and these changes can result in drift in your model if you're detecting decisions, treatments, or labs that are no longer available after a certain point in time. Your model might be really predictive when those indicators are available, and then drift. There's also another example I think I gave in that post, of historic bias related to just general scientific knowledge, and we could see some of this in the Covid pandemic.

So early in the pandemic, we believed we were having a very bad flu year, and the very-bad-flu-year myth continued long after we knew we were in a pandemic, because we didn't have testing available. If you recall, there were real strict criteria for who could be tested for Covid. So even among hospitalized patients, we were never really quite certain who had Covid and who had a bad case of influenza in the beginning,

until the patterns emerged and became just very obvious to the care teams. There was an inflection point: after that, we kind of knew, just looking at a CT, who has Covid and who doesn't have Covid with severe pneumonia. But before that point, we didn't know. So the data was frequently mislabeled.

We had cases that looked like Covid but were labeled as influenza. Influenza is not always tested, so there isn't always a confirmatory test to rely on, and that mislabeled data can send all kinds of wonky signals if what you're trying to do is, for example, detect Covid. It's interesting. These are challenges.

I mean, you say order sets are changing all the time and whatnot. Do you have to capture those order set changes as part of your model? I think it's important to be aware of them so that you're not using signals that are not gonna be reliably available. But early on, in how you're choosing to design your model, I think really limiting it to things that are less likely to change is important.

So there's an idea in healthcare data science that we can apply deep learning in a basically unsupervised way: just take a big, vast database and say, we're gonna let the algorithms find the signals. And the risk there is that the signals you're choosing are not going to be consistent over time.

They might be related to things that are in flux, and you won't know that they're in flux, because you let the algorithm find the signal and you don't really know what the signal is. And that kind of black-box approach, I just don't think it works at all in healthcare, and it certainly can create problems like model drift.
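The failure mode she's describing, a signal that quietly disappears from the data after an order set or drug change, can be screened for before training. Here's a minimal Python sketch; the record layout and field names (`year`, `lactate`, `old_drug_dose`) are illustrative assumptions, not any real schema:

```python
from collections import defaultdict

def fill_rate_by_year(records, feature):
    """Fraction of records per year in which `feature` is non-missing."""
    seen, filled = defaultdict(int), defaultdict(int)
    for rec in records:
        year = rec["year"]
        seen[year] += 1
        if rec.get(feature) is not None:
            filled[year] += 1
    return {y: filled[y] / seen[y] for y in sorted(seen)}

def unstable_features(records, features, min_rate=0.5):
    """Flag features whose yearly fill rate ever drops below min_rate:
    candidates to exclude so a practice change doesn't silently starve
    the model of a signal it learned to rely on."""
    flagged = []
    for f in features:
        rates = fill_rate_by_year(records, f)
        if min(rates.values()) < min_rate:
            flagged.append(f)
    return flagged

# Toy example: a drug dose field that vanishes after the drug is withdrawn.
records = (
    [{"year": 2010, "lactate": 2.1, "old_drug_dose": 5} for _ in range(10)]
    + [{"year": 2012, "lactate": 1.8, "old_drug_dose": None} for _ in range(10)]
)
print(unstable_features(records, ["lactate", "old_drug_dose"]))  # ['old_drug_dose']
```

Running a check like this over every candidate feature is a cheap way to keep a model from learning a signal that stops being collected.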

Well, we're gonna talk about Watson here in a minute, because you had a post about Watson. You talked about holding out some recent data. Mm-hmm. So I guess what you do is you compare the data against itself along the model, and that's one of the ways you determine if there's been a drift.

The data will reveal the problems of the data itself; it sounds like that's what you're saying. Yeah. So traditionally, when you're training a model, what you're doing is teaching an algorithm how to detect a pattern, and the typical way of doing that is to split your data into training and test sets.

So you have one big set, usually 80 to 90% of your data, and you're gonna train on that. Then you hold out the rest, and you're gonna test on that and confirm that it has the same approximate performance on your test dataset. So what I'm suggesting, on top of that, is to also just take some recent data.

So if you're looking at five years of data, you're saying, well, across five years, this is the accuracy we have. In healthcare, it's pretty common for five years of data to not be purely consistent with the last year of data. So making sure you have a holdout that's more recent than your entire set is important.
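The scheme she's suggesting, a random train/test split plus a second, strictly more recent holdout, can be sketched like this. The `admit_date` field and the split fractions are assumptions for illustration:

```python
import random

def split_with_recent_holdout(rows, test_frac=0.2, recent_frac=0.1, seed=0):
    """Random train/test split plus a second holdout of the most recent
    records, so accuracy can be re-checked on new data to catch drift."""
    rows = sorted(rows, key=lambda r: r["admit_date"])
    n_recent = max(1, int(len(rows) * recent_frac))
    recent = rows[-n_recent:]      # newest slice: never shuffled into training
    rest = rows[:-n_recent]
    random.Random(seed).shuffle(rest)
    n_test = int(len(rest) * test_frac)
    return rest[n_test:], rest[:n_test], recent  # train, test, recent holdout

# Toy rows: one admission per "day".
rows = [{"admit_date": d, "label": d % 2} for d in range(100)]
train, test, recent = split_with_recent_holdout(rows)
print(len(train), len(test), len(recent))  # 72 18 10
```

If accuracy on `recent` is noticeably worse than on `test`, the model is drifting and the training window needs revisiting.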

How is analytics, just straight-up analytics, different from data science at a healthcare provider? Yeah, I noticed recently that the terminology is blending a lot. A lot of roles that I would've considered to be purely analyst roles are being labeled as data science roles. So it might be a little less separated today than it still is in my mind.

But I think of analytics as being descriptive, and it can be very advanced; you can use advanced statistics in your descriptive analytics. What you're trying to do is describe the current state. And I think of data science as being most focused on how we can use the vast data assets we have to anticipate what's going to happen next.

So less oriented toward, we really need to describe what's happening, and more oriented toward predictive analytics and machine learning models. I once heard it described that analytics answers questions, and data science explores what questions we should be asking. Yeah. And I'm like, I get that; you just sort of look at the data and then the data reveals, hey, we should be looking at this.

Vioxx is a problem, kind of thing. Yeah. I have some fun stories about things like that. I remember one machine learning model we were looking at: we kept seeing a signal related to patient fall events. Patients falling over in the hospital and being injured is a major liability thing for hospitals, one they frequently want to intervene upon.

And we were able to detect a signal related to a drug class that didn't make sense to us. But when we told the nurses about it, the nurses were like, oh yeah, that's definitely from Ativan. It was just related to a common workflow we had, where patients who were receiving chemotherapy were being given Ativan as an antiemetic to prevent severe nausea, and also to help them relax.

But then they weren't being reclassified as high risk for fall events, and it was actually kind of a known issue. They knew about it, but having the data to support it mattered. Once we had the data, we were able to do some statistics, show an odds ratio, and really show the dose relationship as well, to help the physicians come to a consensus on reducing the doses and intervening earlier on that.
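For reference, the odds ratio she mentions is the standard 2x2-table measure. The counts below are made up for illustration; they are not the actual fall-event numbers from that project:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
       a = exposed with event   (e.g. got the drug, fell)
       b = exposed, no event
       c = unexposed with event
       d = unexposed, no event"""
    return (a / b) / (c / d)

def or_confidence_interval(a, b, c, d, z=1.96):
    """Approximate 95% confidence interval on the log-odds scale
    (Woolf's method)."""
    log_or = math.log(odds_ratio(a, b, c, d))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(log_or - z * se), math.exp(log_or + z * se)

# Hypothetical counts only.
print(round(odds_ratio(30, 70, 10, 90), 2))  # 3.86
```

An odds ratio well above 1 with a confidence interval that excludes 1 is the kind of simple, descriptive evidence that clinicians can act on.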

And that's an example of the overlap between descriptive analytics and data science. That was a data science project; it yielded some really valuable descriptive analytics, and that was actually what was needed to effect change. Alright, so: four reasons sepsis predictive models fail. Mm-hmm. And specifically, the most talked-about one was Epic's challenge.

Their algorithm only detected 7% of sepsis cases missed by clinicians. And you had four reasons. Was this something that was like a project that was kicked off, or just something where you were reading these articles and saying, hey, I wanna look at this and see why this happened? That was my immediate reaction to that story.

Oh, really? You read the story and you're like, I know exactly why this is happening. Yes, that is actually how that happened. And I think anyone watching this who's also done sepsis prediction models was nodding along, because if you've tried predicting sepsis, you probably know: it's very predictable in the data,

but in the actual workflows, not very accurate, and potentially even not useful. Alright, so let's walk through the four things. The first was lack of timely automated data in the EHR. Yeah, so important. So in a modern hospital, in an ED and in an ICU, and usually a unit in between that they may call telemetry or sub-ICU.

You can rely on pulse oximetry and vital signs to automatically go into the EHR system anywhere between every 30 minutes and every two hours. It's fairly well automated, with really great nurse-to-patient staffing ratios. So even when a nurse action is required to make the data available, it's still very timely. But where there's often a need to detect sepsis in a hospital is not where you have a patient constantly being monitored by a nurse.

It's where you're on a med-surg floor, recovering from a procedure, and suddenly you have a fever. That's when you would wanna know that this patient is going septic. They may need to have an intervention; they may end up in the ICU. And those floors do not consistently have automated devices for entering even basic vital signs, and the workflow is often not even what you would expect it to be.

Nurses taking vital signs use napkins, paper towels. Wow. And these artifacts travel back to the nurses' station, where she's gonna input them three or four hours later. That lack of timeliness just ruins any potential for accurate and reliable algorithms, because the algorithms have to depend on the most recent time they have available, which can be the night before.

So I'm coming from the CIO standpoint, and I'm not sure why that is today. I mean, we have these tools; we have Capsule; we have the ability to capture all that data and pull it in. There's part of me that's like, why haven't we done that? That's not a fair question for you, but it doesn't make sense to me why we haven't.

Yeah. Well, as a CIO, you probably needed to take some of your budget and give it to your CNO, 'cause traditionally, and this can vary between hospitals, the CNO has a budget that includes medical devices: smart beds, smart pumps, Capsule devices, or Hillrom vital-sign devices.

Those often come out of the CNO budget. So as money has been pouring into health IT for modernization and EHR implementation, it isn't necessarily pouring into the CNO's budget for those automations that would be needed to really make the clinical decision support tools we wish we had. Interesting. Thanks for mentioning Hillrom, who's the sponsor of the show, so always good

when you mention somebody. Number two: upcoding, an uncomfortable truth and a source of label bias. Help us understand that one. Yeah. So I just came from a nonprofit health system; actually, all of the health providers I've worked for have been nonprofit. And I wanna be really clear about that, because this isn't the kind of profit-driven concern that a lot of people mention. In the nonprofit space, it is the responsibility of the nonprofit organization to recover as much revenue as possible for each encounter in order to cover the cost of care. They're not trying to make huge profits or give out big bonuses; they're really just trying to cover the cost of care.

And there are many encounter types, like stroke, where what is reimbursed routinely doesn't even cover the cost of care. So this is the driver behind upcoding; before I say what it is, right, we're all just trying to cover the cost of care. So there are many systems in place to recover the cost of care.

They include computer-aided coding, so there's software that scans over notes and identifies key terms, and individual coders, humans who review notes or review charts, and they create corrections to medical records that doctors then have to sign off on, for the purpose of making sure the bill is accurate.

And in the case of sepsis, how this often works is you have someone who maybe has a fever, or they have hypotension, low blood pressure, and a physician may order a sepsis order set. Their intention is to rule out sepsis, so they're gonna take a lactate and run some blood cultures. But there are billing rules that say if there was suspected sepsis and all of the criteria were met, then it is sepsis. And so they'll add that code to the chart, even if the clinicians, at the time they were treating the patient, didn't consider sepsis to be the probable cause. And it's so hard for outsiders to understand this concept,

'cause we think of things like sepsis as being totally objective. Like, there must be a definition for sepsis; it's like a broken knee or a broken leg. Yeah. You're like, all right, there's a definition, we know what it is. How could you not know? Right? Yeah. But there are a number of medical conditions that can produce a systemic inflammatory response.

And a systemic inflammatory response is what you look for in sepsis. There are often cases where you have sepsis but your blood cultures are negative, so we don't rely on cultures. And there has been so much change in how we define sepsis medically over the last 15 years that, even in the medical community,

there is not a consensus on the definition of sepsis. Do you have a medical background, or is this all from your view from the data side, where you've had to learn all of these terms? Yeah, I've had to learn all of these from the data side, for sure. And I think it was helpful to go through a graduate program like my MPH program, to learn how to review literature and research and incorporate some of that into my knowledge. But no, I don't have a clinical background. Wait, did you pick it up from research, or did you pick it up a lot from conversations, or is it sort of a split? Both. When I was at City of Hope, I was very fortunate that my little data science team was actually inside the informatics team. So I worked elbow to elbow with nurse informaticists, who did have clinical backgrounds in critical care and in med-surg and ER nursing. And I worked for our CMIO, who was a medical doctor. Just the experience of being surrounded by clinicians, and really making sure we had nurse and physician leadership on every project we worked on, was very helpful in my own personal development in understanding the science. I get this question a lot:

do you think it's easier to go from a medical background and learn data science, or from data science and learn the things in medicine that you need to learn? Um, I'm gonna go medical to data science on that one. I know some really fantastic doctors and nurses who have really embraced data science and predictive analytics, and I think you can always bring on a machine learning engineer when you reach a wall in your data science skillset, when things could be better if you just had someone with the right match of skills. But at the end of the day, what determines the success of a data science project in healthcare is usually how much clinical leadership and involvement it had. Interesting. All right, so let's get back to the sepsis model.

Number three: sepsis models may not generalize to other patient populations. So help us understand that. Yeah, it does relate to things like upcoding and definitions, because certainly, if you have two institutions with two different definitions of what sepsis is, then you have some labeling bias; in one dataset it's gonna be labeled one way, and in another it'll meet a different definition.

And so that might not generalize. But the only paper I contributed to related to sepsis, which came out of my work at City of Hope with Dr. Dowa, who's an infectious disease doctor: we were very sure that our bone marrow transplant oncology population was very different when they developed sepsis, compared to a community population. And there were some conversations with Epic where, in the beginning, they were less sure. They were like, no, we have big data, so much data; we think our model is gonna be able to predict it. But we were able to demonstrate, through our own model development and comparing it with Epic's, that there really is a significant difference there.

If the underlying pattern is physiologically different in your patient population, then a model based on how things usually occur is just not going to apply. And in the case of bone marrow transplant, those are immunosuppressed patients, so it just didn't generalize well. But I can think of other scenarios

where you could have difficulty generalizing, because you could be in a part of the country with a lot of retirees, Bill, and you might have a higher-than-average age in your population. And you should really consider the possibility that your population is not going to present the same exact way as a younger population in a nationwide dataset.

And there are ways to validate, to confirm this. You can take your validation datasets and stratify them to answer: how accurate is this across different demographics, and how accurate is this in my own patient population? Do I have vulnerable patients, such as pediatric, elderly, or immunocompromised patients, that are distinctly different from a general population, and how accurate is it in that population?
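The stratified validation she's describing amounts to computing accuracy per subgroup instead of one pooled number. A toy Python sketch; the group labels ("adult", "bmt") are hypothetical:

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Accuracy stratified by a demographic or population label, to check
    whether a model validated on a general population holds up for
    subgroups (pediatric, elderly, immunocompromised, ...)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 0]
groups = ["adult", "adult", "bmt", "adult", "bmt", "bmt"]
print(accuracy_by_group(y_true, y_pred, groups))
# {'adult': 1.0, 'bmt': 0.3333333333333333}
```

A pooled accuracy of about 67% here hides that the model is perfect on one group and far worse on the other, which is exactly the point.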

It's interesting. I'm scanning your article as we're having this conversation. So the Epic dataset relies on claims data a fair amount. And that gets back to your first point, which is that the telemetry data is going to be the most current, most accurate, and most beneficial for creating any sepsis model.

The claims data is a huge dataset, but because of definitions and a lot of the things you've talked about, I mean, it's good to have that dataset, but it's not necessarily the best for creating this kind of model. It's definitely not the best for labels. If we were gonna label sepsis, there are other options.

I think when it comes to labeling a sepsis dataset, the optimal way to do it is to try to rely on some objective rules which are in your data. So you can look at, for example, SIRS criteria or MEWS criteria, and a lactate or a blood culture, some kind of rule that your clinicians have signed off on, and then use that to label sepsis versus not sepsis.
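A rule-based labeling pass of the sort she describes might look like the sketch below. The SIRS thresholds are the standard published ones, but the field names and the use of a blood-culture order as the "suspected infection" proxy are assumptions standing in for whatever rule local clinicians actually sign off on:

```python
def sirs_count(obs):
    """Number of SIRS criteria met (standard published thresholds)."""
    c = 0
    c += obs["temp_c"] > 38.0 or obs["temp_c"] < 36.0   # temperature
    c += obs["heart_rate"] > 90                          # tachycardia
    c += obs["resp_rate"] > 20                           # tachypnea
    c += obs["wbc_k"] > 12.0 or obs["wbc_k"] < 4.0       # white count (x1000/uL)
    return c

def label_sepsis(obs):
    """Rule-of-thumb label: >= 2 SIRS criteria plus a marker of suspected
    infection (here, a blood-culture order, as a stand-in proxy)."""
    return sirs_count(obs) >= 2 and obs["culture_ordered"]

obs = {"temp_c": 38.6, "heart_rate": 104, "resp_rate": 22,
       "wbc_k": 13.5, "culture_ordered": True}
print(label_sepsis(obs))  # True
```

Charts that land near these thresholds are the ambiguous cases; a rule like this labels the easy ones and leaves the rest for human review.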

But you still have to resolve all those ambiguous cases, because there's a lot of ambiguity in real life, and you have to figure out how to do that. So where do we go from here? I mean, detecting only 7% of sepsis cases missed by clinicians is all it's doing at this point, and we wanna do better. So where do we go from here?

How do we create a better model? Yeah. Well, you just cited one of my favorite statistics from the findings related to the Epic sepsis model, which is that Epic is only detecting 7% of missed sepsis cases. And that really brings us back to: what is it that we are trying to predict or detect, and are we even predicting or detecting, right?

Because prediction is before the diagnostic criteria is met, and detection is after that point. In the case of sepsis, where we have preventable mortality, it's usually because we missed it. We dropped the ball, and that's what we want to prevent. We don't wanna miss a sepsis case or delay intervention or care in a sepsis case.

And so the cases we want to be detecting the most, that's where the Epic model, unfortunately, has the worst performance. So I think we have to go back to the drawing board and ask, well, what could we be detecting and how could we be labeling our data? I'd love to see a big data set out of Hyperspace, Epic's database, that includes a separate label for missed sepsis.

Let's just see how we can optimize the prediction for missed sepsis and see if we can find those drivers. My hypothesis would be that in a lot of those missed sepsis cases, we were late in collecting and processing vital signs, so they weren't seen by the right people. I think that's likely to be a pretty big cause, in which case the solution is, again, those devices we were talking about a minute ago.

It's not even necessarily a predictive model. It's having the right technology in place for the clinicians, and then, once you have that, really understanding how missed sepsis is happening and how we can predict that specifically. Interesting. I'm gonna skip to a different story now. So you wrote a little bit about Watson.

I think this has been written about a lot. Their biggest mistake was that they came in loud and proud. Like, hey, we won Jeopardy, now we're gonna cure cancer. That was a huge mistake. They didn't understand the data that they were wading into, and they weren't able to really do the things that they thought they were going to do.

So part of me wants to ask you to really succinctly talk about what went wrong with Watson, but also: what's the future of machine learning, AI, and NLP? What's the future of these machine-driven technologies with regard to healthcare data? Sure. Well, I'll answer the last question first. I think the future is in understanding how we can augment human decisions.

Every time I hear about someone's breakthrough algorithm that's about to make it to the bedside and has the most potential, usually what we're talking about is a tool that gives the physician insight that they didn't previously have. That's machine-aided insight, like: here's some prognostic information, here are some trends, and this is how the AI algorithm is interpreting them.

Do you agree? And how do you wanna base your treatment decision on this information? I think that's where the future is. What Watson tried to do was build a kind of quasi recommender system, where they used medical literature, treatment notes, and progress notes detailing treatment to tell physicians what treatment should be. Or, like in the case of oncology, they would recommend drug regimens based on historical patterns.

But the value add there is very limited, because the past isn't necessarily optimal. So if we're using past patterns to predict future action, or trying to automate off past patterns, we have to be realistic and honest about the fact that not every treatment decision is optimal in the current state.

That means our historic data contains a lot of treatment decisions, a lot of diagnoses, a lot of bias in what tests were ordered and what information is available. And if we only rely on past patterns, then those treatment decisions, that bias, and that suboptimal care are going to be what we predict and recommend going forward, which no one wants.

Yeah, I sat across from a doctor one time and I said, help me to understand: are you guys just guessing? He said, educated guesses, but yes. I mean, what we're doing is taking the data that you present us, our knowledge, our learning, our experience, the journals that we've read, and we're saying, we believe you have this, and we prescribe for it.

And then you come back and tell me it worked or it didn't work. In some cases it's just not an exact science. Sometimes we prescribe medicine, people go use it, they come back and it didn't work, and we go, alright, try this. And then that works. It's like, well, why didn't you prescribe that the first time?

Well, there's a lot of reasons why they wouldn't prescribe that the first time. Yeah. I really love this book, Michael Lewis's The Undoing Project. It has a whole section about algorithms and medical decision making, and it's so neat. And when I say algorithms in medicine: algorithms existed prior to machine learning. They were already there, based on research done over the last 75 years, which found that experts, even clinical experts, couldn't outperform a simple decision-making algorithm that amounted to a checklist. Like, if the tumor looks like this, if it's yay big, if the margins look like that, check, check, check, then the diagnosis is this. So medical experts could create really good workflows like that, which are called algorithms in medical decision making, but using their own judgment alone they couldn't outperform their own tools.
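A checklist algorithm in that sense is nothing more than a fixed set of questions and a threshold. The toy Python sketch below illustrates the shape of such a rule; the findings, weights, and threshold are entirely made up, not a real diagnostic rule.

```python
# Toy "checklist" decision rule of the kind the research compared experts
# against: fixed questions, fixed weights, fixed threshold. Illustrative only.

CHECKLIST = [
    ("irregular_margins", 1),
    ("diameter_over_6mm", 1),
    ("rapid_growth", 1),
]

def checklist_score(findings):
    """Sum the weights of checklist items present in the findings dict."""
    return sum(weight for key, weight in CHECKLIST if findings.get(key))

def flag_for_followup(findings, threshold=2):
    # Check, check, check: meet the threshold and the rule fires.
    return checklist_score(findings) >= threshold

print(flag_for_followup({"irregular_margins": True, "rapid_growth": True}))  # True
print(flag_for_followup({"rapid_growth": True}))                             # False
```

The striking finding in that literature is that a rule this simple, consistently applied, was hard for unaided expert judgment to beat.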

That revolutionized medicine, and it resulted in a whole bunch of systematic ways to do diagnoses that we have today. But I think there's a lot of potential to go further than that now that we have things like machine learning; we just have to figure out how to use our data. I would assume this is why imaging is one of those areas where it has excelled, because you have an image.

The algorithms can actually work through an image pretty well. Now, there are still some things that are missed, but for the most part the reads from computers are getting pretty close to the radiologists'. But I don't wanna take all the hate mail that's gonna come at me right now, because in reality there are a lot of caveats to that.

There are a ton of caveats. But one of the things I talk to doctors about is that the nature of the work is changing with these machines. You override the computer algorithm from time to time, just because you look at it and you go, no, that's not right. There is that interaction, and the nature of how we practice medicine is already changing. I mean, take a look at what happened through the pandemic. When physicians ask me,

what's it gonna look like? I'm like, it's gonna keep changing, it's gonna keep morphing, and 30 years from now it's gonna look very different than it does today. I think five years from now it's gonna look pretty different than it does today. Absolutely. When I talk to physicians, I think physicians want the kind of changes that I can see coming in terms of having more insight into the data, especially

in the time of COVID. I mean, I sat on some calls with ICU physicians that were painful just to be on, hearing the frustration from clinicians who were unable to determine who they needed to admit and put on a ventilator, who would be okay with oxygen, or, in an emergency room, who could go home. Not having that answer, not being able to rely on their own medical judgment,

and there wasn't any literature out yet. That was really difficult. But in medicine today, even when we're not in a pandemic, there are still a lot of decisions where we're nowhere near optimal, and there's so much potential to use our data to get to optimal. In pharmaceutical research there's the concept of numbers needed to treat.

Like, how many patients need to get Lipitor in order for one heart attack to be prevented? It's like a hundred patients need to get it for one person to benefit. Now that we have these large data assets, what if we could reduce that down to 50 people, or 10 people? The cost, the safety, the outcomes are so different when we're able to target treatments, and we can do a lot of things with predictive analytics that we haven't been able to do up until this point.
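That arithmetic is the standard numbers-needed-to-treat formula: one divided by the absolute risk reduction. A quick Python sketch, with made-up event rates rather than real Lipitor figures:

```python
# Numbers needed to treat (NNT) = 1 / absolute risk reduction (ARR).
# The event rates below are invented for illustration only.

def nnt(control_event_rate, treated_event_rate):
    """How many patients must be treated to prevent one event."""
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("treatment shows no absolute risk reduction")
    return 1 / arr

# An ARR of 1 percentage point means treating 100 patients per event prevented.
print(round(nnt(0.03, 0.02)))  # 100
# Targeting that doubles the ARR in the treated group halves the NNT.
print(round(nnt(0.04, 0.02)))  # 50
```

This is the lever the conversation is pointing at: better targeting raises the absolute risk reduction in the treated group, which directly drives the NNT down.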

This has been a fantastic conversation. I'm gonna close with this; it's human interest stuff. Your daughter made a Father's Day card. It's a poster: Dad, you're as brave as Bilbo, as strong as Thorin, as clever as Gollum, as generous as Elrond, as wise as Gandalf, and I would face a dragon for you. And it has these graphic images and those kinds of things.

And this is what I love about your writing. You're like, here's the five lessons we learned from this. I assume you remember this post; it's about a month old, I guess. So, five lessons. Your daughter was probably figuring out, okay, how can I make this thing? And she decided to use Microsoft Paint.

Wow. That is so the wrong tool, which I got a kick out of. And my husband always laughs at me for using Paint for anything, but I wasn't gonna teach an eight-year-old Photoshop. So that's the first lesson: use the right tool. The second was that commercial off-the-shelf can often accomplish what you want at a lower cost, and you shouldn't pursue an all-out custom build.

We couldn't find any proper Hobbit-themed poster, so we chose some clip art from an Etsy store. It's interesting. I hope your eight-year-old learned that lesson; that's a lesson every CIO needs to learn in technology. Yeah. She didn't go get a degree in graphic design in order to make a Father's Day present.

She just went to Etsy, and I think it cost us $8. $8! Well, that's a really nice poster, by the way. Let's see, number three: if you're trying to do something beyond your skill set, try pair programming, designing with someone more skilled, like your mom. It's interesting, this pair programming thing.

Two are better than one. They're able to fill in the gaps for each other, look at each other's code, help each other learn things, and whatnot. Is that true in data science? I would assume it is. Absolutely. Yeah. There are all kinds of software engineering principles around the right person being able to get something done in an hour that would take the wrong person a hundred hours, or even a thousand hours.

So having the right person there is hugely valuable. But when it comes to data science, usually the right person is actually the right team. We talked about needing to have that clinical knowledge and also that machine learning knowledge. Sometimes you can just have two people with those areas of expertise working together, following along with each other as they work, and you can get quite a lot done.

I think about that when I'm reading some job descriptions. I think there's five of those people in the world. Or you can just get two people, or even three people, at far less cost, and they could do what that one person who is in high demand could do. Your number four: outsourcing is cheaper than investing in your own infrastructure.

Many thanks to Staples for the printing. As with outsourcing, I've been burned two or three times in my career by improper outsourcing. Yeah, Staples makes a lot of sense. You don't want to go out and have to buy one of those massive printers. For an eight-year-old, that's a great lesson. Angelique, thanks for your time. I love your insights, I love your writing, and I hope you're gonna continue to do that in your new role. How can people follow you? Yeah, you can follow me on LinkedIn. I accept quite a number of connections.

I know different people have different opinions on how to connect on LinkedIn, but I'm always open to chatting and connecting with people who are deep in healthcare data science and trying to make a difference. I'm curious what your philosophy is. I get, I dunno, five or six marketing pitches every week.

Somebody's like, hey, you only have 9,000 followers, I can grow your number of connections. And I'm like, yeah, 'cause I say no to like half of the ones that come in. I just don't invite those people into my network, usually. Sure, sure. But I'm always looking for genuine connections.

I appreciate it. You're now in the consulting world again, so if people have questions, they can reach out to you on LinkedIn.

What a great discussion. If you know of someone that might benefit from our channel, from these kinds of discussions, please forward them a note. Perhaps your team, your staff. I know if I were a CIO today, I would have every one of my team members listening to this show. It's conference-level value every week.

They can subscribe on our website, thisweekhealth.com, or they can go wherever you listen to podcasts: Apple, Google, Overcast, which is what I use, Spotify, Stitcher, you name it. We're out there; they can find us. Go ahead, subscribe today. Send a note to someone and have them subscribe as well. We want to thank our channel sponsors who are investing in our mission to develop the next generation of health IT leaders.

Those are VMware, Hillrom, Starbridge advisors, Aruba and McAfee. Thanks for listening. That's all for now.
