This Week Health


May 1, 2023: Sarma Velamuri, CEO at Luminare, joins Bill for the news. How does Luminare's software help hospitals identify patients with sepsis more quickly and effectively? What are the current limitations of AI, and why is it important not to rely solely on AI for patient care? How is the problem of incorporating human feedback into AI models affecting hospitals' ability to find solutions that work for their specific populations? What are the concerns around the use of large language models like GPT-4 in healthcare, particularly with regard to the lack of transparency in their decision-making process? How can healthcare professionals leverage AI as a tool while ensuring that it remains their servant and not their master?

Key Points:

  • Sepsis in hospitals
  • Predictive and reactive data analysis
  • Deterministic AI and AGI (artificial general intelligence)
  • Alignment of AI with human values
  • Large language models
  • Reinforcement learning from human feedback (RLHF) in AI

News articles:

We invite you to join us May 4, 1pm ET, as we discuss the different types of analytics used in healthcare and how they can be used to gain insights from health data. Let's work together to create a more efficient, effective, and modern healthcare system with data governance and analytics strategies. Register Here.


Transcript

This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.

Today on This Week Health.

The problem is that AI is a great servant, but a terrible master. So if we decide to abdicate our responsibility for treating patients to an artificial intelligence system, you're going to have bad outcomes. (Intro)

Welcome to Newsday, a This Week Health Newsroom show. My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, a set of channels dedicated to keeping health IT staff current and engaged. For five years we've been making podcasts that amplify great thinking to propel healthcare forward.

Special thanks to our Newsday show partners, and we have a lot of them this year, which I am really excited about: Cedars-Sinai Accelerator, Clearsense, CrowdStrike, Digital Scientists, Optimum Healthcare IT, Pure Storage, SureTest, Tausight, Lumeon, and VMware. We appreciate them investing in our mission to develop the next generation of health leaders.

Now onto the show.

All right, it is Newsday, and today we are joined by Dr. Sarma Velamuri, CEO and co-founder of Luminare. Sarma, welcome to the show.

Thank you. Thank you for having me, Bill.

I'm looking forward to the conversation. Before we get too far, tell me a little bit about Luminare.

What's the problem you guys are solving in healthcare?

I'd like to tell you a story. Eight years ago, I used to work as a transplant hospitalist in Houston. I was on the hospital sepsis committee. Sepsis is the number one cause of death in hospitals; one in three people who die in a hospital die of sepsis.

Wow. It's the body's abnormal response to an infection, and the number one cause of death. So I was on the hospital sepsis committee, trying to help figure out how to help our nurses and doctors identify patients quickly and treat them quickly, so they don't get as sick and therefore don't die. And one day I admitted my friend's daughter, Emily, to my ICU for a lung and liver transplant.

And Emily was 22 years old. I recused myself from her care and went home. One morning I come back, she's on a ventilator, and her heart stops. They call a code blue on the overhead. The team rushes in and starts doing chest compressions, and while I wasn't her doctor, I went in and helped out. Forty-five minutes later, I had to go tell my friend that his daughter had died of septic shock. She was 22 years old.

I realized that she did not die because the team failed her. She died because the information that she was getting sick, and why she was getting sick, was hidden. It's not that the team failed her; the system failed her. The system was designed in a way that created the opportunity for folks like her to die.

So I went home, quit my job, mortgaged my house, and started a health tech company to create better systems that allow us to identify patients faster, treat patients faster, and get them home to their loved ones where they belong. So that's what Luminare does. We're a software company that takes good processes and allows hospitals to identify patients with sepsis faster, prevent them from getting as sick, help them get home faster, and also become more financially healthy, because sepsis is a big loss leader for hospitals.

So that's what we do.

Are you plugging into the existing telemetry data that's coming in and just putting algorithms against it to say, hey, look, this is a higher risk for sepsis? How's that working?

It's working really well, so the answer is yes, and yes. The reason the company is called Luminare is that the information that the patient is getting sick or is about to develop sepsis is hidden. There's the predictive aspect, figuring out who's going to get it, and the reactive aspect, who already got it in the hospital, and how we can prevent or treat it.

We bring all of these data pieces together. That includes the telemetry data, the laboratory data, unstructured data like things in notes that are sometimes hidden, and the clinical information. If you look at what nurses and doctors do when they look at the medical record, the first thing they do is turn and look at the patient and try to figure out the context for the information in the record.

So we combine all of that data together. We shine light on it, so to speak, and that enables us to predict and respond to sepsis. And the way we do it is we plug into Epic or Cerner or Meditech or any of these large EHRs, and we have our platform embedded inside their system, so the nurses don't have to log in again and we save them time.

So today it takes them between 10 and 15 minutes of clicking through the chart to figure out what's going on with the patient. We proactively organize all this data, show it back to them, present them with a pre-formatted summary of why their patient is getting sick and what to do about it, and automate the response to the treatment with a single click.

And so we take workflows that normally take six to seven hours down to under 15 minutes.

Wow. And that's fantastic. I love your boldness too, mortgaging the house, starting a health tech company. I mean, what's the risk associated with starting a health tech company? It can't be too hard to do, right?

It's not like there's a graveyard of health tech companies that have tried to solve sepsis.

Well, yeah. I don't wanna go down this path, but there was some flak last year about Epic's sepsis model getting things wrong. So it's clearly not an easy problem to solve, or it would've been solved already.

Epic got a lot of bad press for this. The backlash you're talking about was an article that came out in JAMA on June 21st, 2021. The article talked about how Epic's sepsis model misses patients, and about how its alerts are being ignored.

So the problem is not that the model misses patients. The problem is that AI is a great servant, but a terrible master. If we decide to abdicate our responsibility for treating patients to an artificial intelligence system, you're going to have bad outcomes. And that's what happened: the workflow around implementing the AI model in hospitals is broken.

So it's not that the Epic model is broken; it's that the what-next portion of it is broken, so to speak.

Well, you're taking us in the AI direction almost immediately, and we are gonna talk some AI stories today, and we'll talk about some business things that are going on as well. Which conferences did you go to this season? Did you go to a couple?

Oh, yeah. I was at the ACHE event in Chicago, the American College of Healthcare Executives event. I was at Becker's, I was at ViVE, and I hope to be at HLTH later this year.

Wow. So what's your take? What were the conversations people were having at the conferences?

Yeah, I'm a technologist and I run a health tech company, so with everyone who knows me or meets me, the conversation invariably goes to AI tooling and what's coming down the pike. It invariably goes to GPT-4. A lot of the conversations have been around efficiencies. I think COVID has forced hospitals to become more efficient at the bedside in what I now call a low-oxygen environment.

It's like we're basically having to train for our marathon at 5,000 feet instead of at sea level. The analogy is that we have more constrained resources, from a nursing perspective, a provider perspective, and a financial perspective, but we're being asked to produce the same output with respect to quality and patient volumes.

And everyone's throwing their hands up and saying, how is it possible? How can we make more bricks with less straw, if you understand that analogy, right?

Yeah. Well, we can't overwhelm a problem with people anymore. It's just too expensive. When I was a CIO in healthcare, I used to go into these meetings and think, wow, do we really need 25 people in this meeting? We're not even at the conceptual stage yet. We just can't afford to be that cavalier with our resources anymore. We have to be very crisp. We can't overwhelm a problem with people, and we have to be a lot more creative in applying technology and tools to automate the process.

But as you say, there are some challenges there, and that's what I want to talk to you about today. The first article we're gonna talk about asks whether the ChatGPT honeymoon is over. You've already mentioned it, and I'll answer that question: no. The number of people that are talking about it is just overwhelming. Everywhere I go, this is the conversation.

But the reason I chose this article, a Medium article that the consultant Jonathan Balaban wrote on April 13th, is that I found it interesting because he talks about one of the biggest problems that we have in healthcare. We're in a highly regulated environment that cannot afford to have AI just spitting things out that aren't accurate; it has to be deterministic. I have found that if I ask GPT-4 the same question three times, I can potentially get three different answers. And while that's really neat and really interesting, it may not work in the healthcare environment.
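The nondeterminism Bill describes comes largely from how language models generate text: the model produces a probability distribution over candidate tokens, and a sampling "temperature" above zero draws from that distribution, so the same question can yield different answers. A minimal sketch with toy logits (not any real model's API) shows the effect; at temperature zero the choice collapses to a deterministic argmax:

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample one token index from raw logits at a given temperature.
    Temperature 0 means greedy decoding: always the top-scoring token."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.9, 0.5]  # toy scores for three candidate tokens
rng = random.Random(42)
greedy = {sample_token(logits, 0, rng) for _ in range(10)}
sampled = {sample_token(logits, 1.0, rng) for _ in range(10)}
print(greedy)   # {0} -- every run picks the same token
print(sampled)  # typically a mix of tokens across runs
```

Production APIs expose a similar knob; turning the temperature down trades variety for repeatability, which is why determinism matters so much in a regulated setting.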

And the number one question I'm getting asked right now is, how are we going to use ChatGPT in healthcare, and how are we gonna make sure that it doesn't harm anyone? Have you given a lot of thought to that?

Yes. I've given a lot of thought to it. I've given about a decade of thought to it.

Because we started back in 2014, before the cloud was so freely available for health tech. I started at a time when plugging something into an EHR was considered weird. People would go, you're doing what? You're building a tool that plugs into the EHR?

I know.

And now if the EHR doesn't have APIs, we're like, what? You need APIs. I can't believe it. Anyway. Yeah.

Yeah. So yes, I've thought about this for a decade, and I've talked to a lot of people who are experts in the field with respect to what is now being called AGI. You hit the nail on the head.

The thing that we are not hearing a lot of talk about is this word "alignment." How do you make sure that the artificial intelligence tool is aligned with the human? How do you make sure that the decisions it's making, whether life or death, are actually lining up from a value perspective and from an ethics perspective? Which is what I said earlier: AI is a great servant, but a terrible master. We are essentially expecting to plug in this system,

and we are expecting it to make decisions on our behalf with respect to patients. That's ultimately what the goal is. Right now we're using it for annoyances like appointment scheduling. We're using it to help us figure out our nursing schedules or physician schedules; I know companies that do AI for scheduling, and it's a great problem to solve.

But think about the future. This article, which I also got to review, makes four points. One is around chatbots and large language models. That's great. Having a chatbot for a patient to interact with a health system, saying, I need an appointment with the best oncologist, or

I need an appointment for my colonoscopy. That's great, that's fantastic, that's easy. The next layer after that, and he talks about this, is that our models right now are built very narrow. They're built around disease conditions. You have a sepsis AI tool, or you have a heart failure AI tool, or you have an AI tool that looks at radiological images, looks at CT scans, figures out with more accuracy whether a patient's spot on the CAT scan is more likely to be a tumor than not, and then makes downstream recommendations on what to do next.

So that's the next layer of it. The third point he makes is that healthcare businesses need experienced MLOps or engineering teams. That is true. But think about that for a second. You expect the healthcare industry titans to start creating in-house ML and AI teams.

Okay, where does that stop? You want your hospital to become a technology company? That's the conversation going on right now, and hospitals are doing this. They're going out and hiring ML and AI teams, and they're spending millions of dollars on it.

And I would argue that that era is starting now. I like to make this analogy; Robert Wachter makes this analogy about how healthcare right now is the Wild West, and Epic and Cerner are building the railways out into the West. They're digitizing the community, they're building the backbone infrastructure for us. And folks like me, I used to wear this big black hat at conferences, folks like me and you are the ones that are going to create the cities alongside these railways, right?

So we are just in the first pass of digitization right now, and expecting to have built the cities already is unreasonable. That's where we are right now as an industry. I sit in the world's largest medical center; I spend my time between Houston and Los Angeles.

Houston has 18 member hospital systems within walking distance of each other in the Texas Medical Center, and I've worked there for probably 15 years now. We have three transplant centers within walking distance of each other. Three. Literally, I could park in our garage and then hit three transplant centers within 15 minutes of each other, right?

These are the folks that have built the backbone of healthcare; these are the folks that have built the railways out West. And the new wave now is that people are going to come and produce value on top of them. This is where I pull out the iPhone and I go, how come Apple didn't build the Uber app?

Or how come Apple didn't build, pick your favorite app, right? It's because they're great at digitization. They're great at building the infrastructure pipeline, but expecting them to build all of the functionality that goes onto the digital environment is just unreasonable. So expecting a hospital to hire ML teams and then build out the tooling required to produce value, that's kind of a big stretch, right?

And then finally, he talks about how it's important to be rigorous and systematic in monitoring your results. The technical term for that is RLHF, reinforcement learning from human feedback: how is the human in the loop going to determine whether the output of the AI is valid or not?

And it is a very hard problem to solve if you've not answered the alignment problem. The alignment problem is basically asking: this artificial intelligence that you've built, is it making the right decisions, not just based on data, but based on the overall question of how you take care of a person? Because you're going to get into a territory where an AI will start making decisions that might say it's futile to give this patient chemotherapy because they only have three months left, based on looking at the 10,000 patients that came before them, so we should be offering them hospice care. And that's a very strange line to be crossing, where you're allowing an AGI to start deciding who lives, who gets treatment, and who doesn't get treatment.

And while we don't do that now, and people listening to me might have different opinions, and again, this is my opinion, we're getting there very quickly. We're getting there faster than I'm comfortable with as a physician. I've seen hundreds of patients die. I've had this conversation about whether you give someone treatment or not with people in their late thirties and early forties, with loved ones around them, with families.

And we just need to have a great framework around how we have these conversations and how we tool the AI. Again: great servant, terrible master. So that's my take on his article. He's kind of right, he kind of scratches the surface, but there's more. We need to start talking about alignment.
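One concrete way to keep the human in the loop, sketched here as an illustrative class rather than any vendor's actual RLHF pipeline, is to record the clinician's verdict on every AI alert and watch the running precision, so a model that clinicians keep overriding gets flagged for review:

```python
from dataclasses import dataclass

@dataclass
class AlertAudit:
    """Track clinician verdicts on AI alerts so the human stays in
    the loop. (Illustrative sketch only, not a real product's API.)"""
    confirmed: int = 0
    dismissed: int = 0

    def record(self, clinician_agrees: bool) -> None:
        # Each alert gets an explicit human verdict: confirm or dismiss.
        if clinician_agrees:
            self.confirmed += 1
        else:
            self.dismissed += 1

    def precision(self) -> float:
        # Fraction of alerts that clinicians actually agreed with.
        total = self.confirmed + self.dismissed
        return self.confirmed / total if total else 0.0

    def needs_review(self, floor: float = 0.5) -> bool:
        # Flag the model for retraining/review once enough verdicts
        # exist and clinicians are dismissing more than they confirm.
        total = self.confirmed + self.dismissed
        return total >= 20 and self.precision() < floor

audit = AlertAudit()
for verdict in [True] * 6 + [False] * 14:  # 6 confirmed, 14 dismissed
    audit.record(verdict)
print(audit.precision())     # 0.3
print(audit.needs_review())  # True
```

The thresholds here are arbitrary; the point is that the feedback signal comes from the clinician at the bedside, not from the model grading itself.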

All right, we'll get back to our show in just a minute. We're excited; we have a great webinar for you in May, on May 4th at one o'clock Eastern Time. It is part of our leadership series on modern data strategies in healthcare. In this webinar, we're going to explore data-driven approaches to healthcare and how they can improve patient outcomes, increase efficiency, and reduce cost, all of which are critical at this juncture in healthcare.

Our expert speakers will explore data governance and analytics strategies, anything that can help healthcare providers gain actionable insights from healthcare data. We would love to have you there, and we're excited about it. You can register on our website.

Just hit the leadership series, Modern Data Strategies. It's gonna be in the top right-hand corner of our website, thisweekhealth.com. You can discover how we are going to use data to be more efficient and effective in the modern healthcare system. We would love to have you join us.

Hit the website, thisweekhealth.com, top right-hand corner. Sign up today. Hope to see you there. Now back to the show.

Yeah, I'm trying to figure out what I want to grab onto here. One is I could look historically at technology, because you're asking, do we want the health system to be a technology company with MLOps and engineers and that kind of stuff? I'm gonna answer that question yes. You may answer it no. The reason I would answer it yes is because if we just rewind to, I don't know if it was the eighties or early nineties, you could go to every health system in the country and ask how many IT people they had back then, and it would've been a fraction, like 10% of what they have now.

So instead of a thousand people, they had a hundred people in IT, and they just kept things running. But as you start to digitize things, there was more value being created from the technology, so they started hiring people to build out that technology.

Now, healthcare is the worst example of this. In every other industry we've seen huge movement in terms of productivity and gains, and healthcare has been a little slow in seeing those kinds of returns. So I can hear that argument coming in pretty quick. You go from a hundred to a thousand because of the value that can be gained.

Now, if there's that kind of value that can be gained... and I will come back to the models and the reinforcement, cuz I agree a thousand percent. We cannot give the reins over to AI yet. It's in its infancy. It would be like teaching your two-year-old to drive and saying, hey, here's the car.

Go. We think it's sophisticated because it's spitting these words back. But that's what it's spitting back to us: words. It has no idea what it's looking at. It has no concept of the person that you're treating. There's just a whole host of things, not the least of which is that when you look at GPT-4, one of the first things it's gonna tell you is, I don't have any data

since whatever the date is; I think it's like February of last year or whatever. It hasn't been trained on the new data. And you're like, well, wait a minute. I wouldn't want my doctor to have not read the periodicals and the journals that are necessary to be effective for the last year and just go, yeah, I'm treating you on

what I knew back in 2022, and I think that's current, I hope it's current. So the model is in its infancy. We definitely want it going through people. And it really is interesting to me just how much excitement there is around this, because people are using it in certain ways and coming back going... I mean, not technologists like you and I.

The frontline nurse who hasn't coded a day in her life is using this thing going, oh my gosh, I just fed a medical record through here and it gave me a summary in like five seconds.

I think you just hit the nail on the head.

That's exactly the point: the information is hidden. We are doing something I call rational drowning. You're giving this person at the bedside, the nurse, so much information. He or she is having to put together this enormously complex puzzle. Every day at shift change, they come in and they get five to six patients.

They have to put this puzzle together in real time on what's happening to their people. It's almost like a pilot having to do a preflight checklist while five planes are taking off at the same time. And people on that plane will die if you miss one thing on your checklist.

Yeah. Well, how do you do that?

Here's a great example. I don't know if you're familiar with Bertalan Meskó, MD, PhD. He's very active on social media, and he found this video that Jen Berger put out. He said the short video demonstrates what an AI-based system could do with 30 pages of unstructured medical records.

Just imagine this: a new patient arrives and brings 30 pages of long-form PDFs that contain different forms of medical records, handwritten notes, summaries, lab results, and more. In a minute, an AI tool creates a concise, easy-to-read summary. And then he has the video of it going through and doing it.

But man, some of the comments right out of the gate are like, are you kidding me? What if it didn't understand that handwriting? Do we know it understood that handwriting? Did it look at 500 milligrams and actually get that wrong and put 50 milligrams?

How do you know? But here's my question to you. It's the age-old problem with AI. We don't trust AI to drive cars yet, because if AI kills one person, we've gotta shut it down. But how many people die every day from bad drivers? I live in Florida.

There are so many bad drivers. I mean, you're in California and Texas; I'm sure there are a lot of bad drivers there too. How many automobile deaths will there be this year? And we don't say, hey, let's take away everybody's license, or let's do away with cars. We accept a certain amount of error, but we will not do that with technology.

To use the car example: how many times at shift change does the nurse or the doctor not get the complete record?

Right. I'll tell you the answer: it's 789 people per day, at least with respect to sepsis. We lose over 300,000 people a year because the information that this patient is developing sepsis and is going to die is lost in the noise.

Let me tell you about one of the hospitals that we work with, Cedars-Sinai Medical Center in Los Angeles. Fantastic team. Obviously they don't want patients dying of sepsis. They have a fantastic in-house AI group with large language models built. The problem with sepsis they faced was exactly this: how do we know what's happening with the patient at the bedside?

Which is why we began working with them. We've seen over 23,000 patients together with them in their emergency department. Nine out of 10 patients who come into the ER today at Cedars-Sinai get evaluated for severe sepsis, because it's the number one cause of death in a hospital. You don't catch it, you miss it, people die.

And we want a zero death rate. So the question is, how are you able to take an AI tool and get accuracy, and get the nurses and the providers, the physicians, the nurse practitioners, the PAs, how do you get them on the same page and make sure this information is accurate? We call that model surveillance.

Yep. So I have this niche of expertise in health tech AI. It is not in building the large language model; it is in deploying the AI into the workforce and the workflow. Maybe I can come up with an augmented word for that. ML models are notorious, especially for sepsis, for creating non-actionable alerts.

An alert goes off. It's a black box which says your patient has X probability, say 0.7, of developing sepsis in the next 24 hours. You go look at the patient, and they're eating a muffin and watching TV. They look great. So what do you do with that? That's one problem, the clinical version of this.

So instead of saying your patient has a 0.7 probability of sepsis, what you can do is implement a tool that helps you figure out why the AI alert went off. Because again, we don't have much tooling into how AI works; we have no tooling that looks at how these decisions are being made internally.

By definition, AI outputs today are black-box outputs. And we've done very little research into what exactly happens under the hood. That's the problem of alignment again, where AI is a great servant, but a terrible master. We need to understand the motivation behind why these alerts go off.
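One alternative to a bare probability is a transparent, criteria-based screen that reports why it fired. The sketch below uses the published qSOFA bedside criteria (respiratory rate of at least 22 breaths per minute, systolic blood pressure of 100 mmHg or less, altered mentation) as the rules; the input field names are assumptions for illustration, not any product's schema:

```python
def explain_sepsis_screen(vitals: dict) -> dict:
    """Transparent, rule-based sepsis screen using qSOFA-style criteria.
    Returns not just a flag but the reasons it fired, the opposite of a
    bare 0.7-probability black-box alert. Thresholds follow the
    published qSOFA criteria; the field names are illustrative."""
    reasons = []
    if vitals.get("respiratory_rate", 0) >= 22:
        reasons.append("respiratory rate >= 22 breaths/min")
    if vitals.get("systolic_bp", 999) <= 100:
        reasons.append("systolic BP <= 100 mmHg")
    if vitals.get("gcs", 15) < 15:
        reasons.append("altered mentation (GCS < 15)")
    # qSOFA considers 2 or more criteria a positive screen.
    return {"flagged": len(reasons) >= 2, "reasons": reasons}

result = explain_sepsis_screen(
    {"respiratory_rate": 24, "systolic_bp": 92, "gcs": 15})
print(result["flagged"])  # True
print(result["reasons"])  # lists the two criteria that tripped
```

A clinician looking at this alert can immediately check the cited vitals against the patient in front of them, which is exactly what the muffin-and-TV scenario demands.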

Let's look at sepsis, right? I have a picture to show you. One thing that's known with ML, and I put out an article around this, is that models' predictive performance degrades over time. And this becomes more obvious when the model is used in an environment where it was not created.

So let's say a huge hospital up on the East Coast, with millions of patients, uses those millions of patient records to create an AI sepsis tool that says, hey, I'm going to predict which patients get sepsis. We put out a research article that says my receiver operating characteristic is 0.8. I am 80% accurate.

I'm 80% sensitive, better than 50%, which is today's industry standard for alerts. Everyone's happy, right? But again, models degrade over time, and if you take a model into an environment where it wasn't built, it'll degrade faster. This strange occurrence is a version of model degradation. Go to the CDC's website on sepsis mortality and look at the map. For the sake of our conversation, I'll share my screen so we can talk about it.

Yeah, please. If you look at this map, the darker colors are where the age-adjusted death rate for sepsis is higher, and the lighter colors are where it's lower.

So it's lower on the West Coast, lower in Maine. The Upper Midwest is pretty good, the Midwest is okay, but man, that band from Texas up to Pennsylvania and Jersey...

That's not good.

This is millions of people over time, age-adjusted death rate. So now let me ask you this. If a patient is going to get sepsis in Texas, Louisiana, Kentucky, New Jersey, or New York, are you going to have more obvious sepsis or less obvious sepsis? It's a hard question to answer.

Yeah, I'm not sure it'll be any more obvious or less obvious. I don't think it would be more obvious.

So it's not intuitive, right? It's like, how do I know? I don't know. But here's the point I'm trying to make: when you are building models in a certain environment, the model is going to work extremely well in that environment.

We're now deciding to take these models and apply them across the board to all hospitals and all health systems. There are several commercially available, FDA-approved models hitting the market, and it's like, okay, great, it's wonderful that it works in this area. So the question is, does it fit accurately?

So models built in states with low death rates will perform poorly when deployed in states with high death rates, and vice versa. They're either too sensitive or not sensitive enough. This is called a model overfitting problem.
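The "receiver operating characteristic is 0.8" in the example is an AUC, and tracking it per deployment site is one way to make this degradation visible. A hand-rolled sketch with made-up scores, where the model separates patients well on data resembling its training population and poorly on a shifted one:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) view:
    the probability a random positive case outscores a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up scores: clean separation on the "home" population the model
# was trained on, muddled separation on a shifted "away" population.
home_labels = [1, 1, 1, 0, 0, 0]
home_scores = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
away_labels = [1, 1, 1, 0, 0, 0]
away_scores = [0.6, 0.4, 0.3, 0.5, 0.45, 0.2]

print(auc(home_labels, home_scores))  # 1.0, perfect separation
print(auc(away_labels, away_scores))  # noticeably degraded AUC
```

Running this kind of check routinely on each site's own outcomes, rather than trusting the published figure, is the model surveillance idea in miniature.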

Well, this is... we talked to Stanford about this, and I think it was Dr. Michael Pfeffer who said the most important thing to him in getting AI right at Stanford is their data. He said, look, our data doesn't even hold up within our own four walls, because the patients we see from eight o'clock in the morning till about seven o'clock at night are a different population than the ones we see from midnight to seven o'clock in the morning. Sometimes if we apply the daytime model to the nighttime patients, the model will break down.

He said, so we have to really know our data. We have to get good, clean, actionable data for our environment. And so we're not leaning as much on these national models anymore, because they're not helpful in the Bay Area, in Silicon Valley, in the places where they practice.

Yes. And I will add to that: this is the reason a hospital feels like it needs to go hire an ML team to help it build a model that works for it. It is the reason hospitals are reacting to the market and saying, we don't have any solutions that are working really well, because of this variation even just within our own population.

But the actual problem they're trying to solve is an RLHF problem, reinforcement learning from human feedback. Trust me, this is going to come up more and more six months from now, eight months from now, nine months from now. This is a problem that we thought about 10 years ago.

How do you incorporate the human in the loop? One of the articles we'll discuss goes into this, how do you incorporate the human in the loop, but he doesn't go deep enough into it.

Well, we're already out of time, believe it or not, because there's still so much to talk about here.

I mean, these large language models are black boxes. GPT-4, as cool as it is, and as many people as are playing with it in healthcare, the reality is we don't know the data it's been trained on. We don't know how it's making decisions. We don't know how it came up with the answer.

And in healthcare, we have to be so careful about that. And then the other thing is, you're talking about reinforcement learning from human feedback. We can't give it that. They don't let us train the model yet. And so,

there you go. So we're getting somewhere. So there's good news and there's bad news.

The good news is, and I don't speak for Microsoft, we don't have an official partnership with them, but from my conversations with people at Microsoft, they're being very careful with this. Because of all the issues you've outlined, they're not releasing this thing into the wild and allowing it to make life-or-death decisions.

Which is why I keep bringing up six months from now, eight months from now: you will see the conversation shift towards how do we start leveraging it, how do we make it our servant as opposed to it being our master, which is what it is when you give it 20 pages of patient information and say, give me a summary. Like, how do we tune this thing to the individual hospital?

And the way you tune it is you use the human.
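What "using the human" to tune an alerting system could look like, in the simplest possible form: a sketch of my own (not Luminare's actual system) in which clinician dismissals of sepsis alerts raise the alert threshold, confirmations relax it slightly, and cases the model missed loosen it.

```python
import random

random.seed(1)

def clinician_feedback(risk_score, truly_septic):
    """Stand-in for a clinician reviewing an alert: confirm or dismiss.

    Idealized for the sketch: the reviewer is always right.
    """
    return truly_septic

threshold = 0.5   # initial alert threshold on the model's risk score
step = 0.02       # how strongly each piece of feedback moves the threshold

for _ in range(2000):
    truly_septic = random.random() < 0.1  # 10% sepsis base rate
    # Simulated risk score: informative but noisy, clipped to [0, 1]
    risk = min(1.0, max(0.0, (0.7 if truly_septic else 0.3) + random.gauss(0, 0.15)))
    if risk >= threshold:                       # alert fires, human reviews it
        if clinician_feedback(risk, truly_septic):
            threshold -= step * 0.1             # confirmed: allow slightly earlier alerts
        else:
            threshold += step                   # dismissed: demand more evidence
    elif truly_septic:                          # missed case surfaced later
        threshold -= step                       # loosen so it's caught next time

print(f"tuned threshold: {threshold:.2f}")
```

The dynamic mirrors the conversation: the model doesn't get smarter in the abstract; it gets tuned by the people confirming and dismissing its outputs inside the workflow.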

From Chad: sir, I'm glad you're the one who's saying make AI our servant. That way, when the AI does take over, it'll come after you instead of coming after me. But I agree with you. It's a great minion. It's a great, like, hey, go do these 10 tasks for me and come back with the results

kind of thing. But it still requires me. I can't be lazy. I can't just take it as given. There's a story of somebody who was using ChatGPT to write an emergency response or something like that, and at the bottom it literally said something like "generated from ChatGPT," or something to that effect, in this emergency message that was sent out.

And you're like, we can't shut our brains off. We still have to have experts in the field who are looking at the data. But we want it to be the minion and say, hey, go into the EHR and get me all the information pertaining to this specific disease state that could be relevant for this patient, have it gather all that stuff and bring it forward to me, so that I can do my job more effectively and not essentially just be really good at keyboard shortcuts for finding stuff.

Yes. And you also hit the nail on the head when you said that you don't want your doctor being behind the times. You don't want it going, like, I'm not trained on data after X date cutoff. And again, the way you fix that is reinforcement learning from human feedback.

So for me, I have a hammer, so unfortunately everything looks like a nail. But it just so happens, at this moment in time in our AI journey in healthcare, the ONC put out this thing on transparency of algorithms, right? They just released something. They said, we are going to regulate this; we need to have better transparency of algorithms. The question is, how? How are you going to get better transparency? Well,

because that is your intellectual property, isn't it?

It is our intellectual property, and we are happy to give it away, because it's more than that. I mean, that is our expertise.

Our expertise is how you take AI models, deploy them into the workflow, and create that real-time feedback to make them smarter. But we are happy to share it. I'm not an Elon Musk, but in the early days he was like, hey, you want to build an electric car? It's better for the environment. Go ahead. My mission is to eliminate sepsis deaths in hospitals, to start with.

The way we do that is we make AI our servant. The way we make AI our servant is we use its capabilities and anchor it to our workflows to make sure that patients get a better outcome. I have evidence that shows that when you flip the switch, and this is gonna come out, I'll share the article with you when it's published, when you flip the switch on reinforcement learning from human feedback with respect to AI models, downstream workflow changes. Behavior changes. Something called an anchoring heuristic goes away. An anchoring heuristic is like that tweet that Topol put out: hey, a patient had long covid, saw multiple doctors, multiple neurologists.

The patient's relative went and typed these symptoms into ChatGPT, and it gave an output saying that this patient has this rare version of encephalitis. And that was confirmed later through genetic testing and through antibody testing. All of that was great, but the problem it was addressing is that these doctors were anchored to this idea that the patient had long covid and refused to consider anything else. That doesn't require an AI tool to fix.

That requires the doctors and the nurses and all the people taking care of the patient to get rid of that anchoring heuristic, where someone labeled the patient with long covid and that's what we're gonna think of. I have actually had the opportunity, and that's a one-in-4-million disease, to take care of two patients with that particular disease

in our ICU in Houston when I was a resident. And I was like, this is not rocket science. I mean, I know it's a rare disease, but we have rare diseases all the time that you diagnose. We can't shut our brains off, type something into a prompt, and just get wowed by it.

I think the conversation needs to shift toward, operationally, what does this look like? We're looking at this thing in a lab and we think it works great, while we continue to have 700-plus patients a day dying, because the information is already there that they're sick; the information is already there that we can intervene to stop them from dying.

And the question is, how do we mobilize that information into the workflow? We are far enough in our journey technologically that we know who's going to die. We can predict that. Got it. We don't need better tooling around that. What we need tooling around right now, to start with, is how do you get the human to make the system better in real time, and get the human to actually start an IV and give antibiotics to this patient who's going to die of sepsis in six hours, on your watch, because you missed it.

Yeah. Yeah. I mean, it's been the age-old challenge: generating the insights and then putting the insights into the workflow in a way that people can grab onto and actually do something with. Wow. We went a lot deeper into this subject than I anticipated, and a little longer, but Sarma, I really appreciate your time and thank you for sharing your expertise in this space.

It's fantastic to have this conversation.

Thank you, Bill. Very grateful for you having me on your show as well. And if people have questions, they can reach out to me directly. You could go online and Google me, or sarma at luminaire.io.

There you go. All right. That's all for today.

  📍 And that is the news. If I were a CIO today, I think what I would do is I'd have every team member listening to a show just like this one, and try to have conversations with them after the show about what they've learned

and what we can apply to our health system. If you want to support This Week Health, one of the ways you can do that is to recommend our channels to a peer or to one of your staff members. We have two channels: This Week Health Newsroom and This Week Health Conference. You can check them out anywhere you listen to podcasts, which is a lot of places: Apple, Google, Overcast, Spotify, you name it, you can find it there. And of course you can go to our website, thisweekhealth.com, and we want to thank our Newsday partners again, a lot of them, and we appreciate their participation in this show.

Cedars-Sinai Accelerator, Clearsense, CrowdStrike, Digital Scientists, Optimum, Pure Storage, SureTest, Tausight, Lumeon, and VMware, who have 📍 invested in our mission to develop the next generation of health leaders. Thanks for listening. That's all for now.

Contributors

Thank You to Our Show Sponsors

Our Shows

Newsday - This Week Health
Keynote - This Week Health
2 Minute Drill - Drex DeFord - This Week Health
Solution Showcase - This Week Health
Today in Health IT - This Week Health

Related Content

Transform Healthcare - One Connection at a Time

© Copyright 2023 Health Lyrics All rights reserved