This Week Health

Keynote: Our AI Journey in Healthcare

September 29, 2023: Join CIOs Michael Pfeffer (Stanford Health Care), Chris Longhurst (UC San Diego Health), and Brent Lamm (UNC Health) in this webinar release talking all things AI. In an era where technology is becoming increasingly personalized, how can AI models maintain the genuine ‘voice’ of healthcare professionals, ensuring a balance between efficiency and authenticity in patient communication? How can the evolving technology, observed in the continuous improvement of models like GPT-3.5 to GPT-4, be harnessed to address the unique challenges and requirements in different healthcare sectors, particularly in specialties compared to primary care? Reflecting on the quick adoption of AI models like ChatGPT in healthcare, how can the industry ensure that these tools are being implemented responsibly, taking into account the limitations and the potential impact on patient care? With AI models offering potential solutions to physician burnout by reducing time spent on tasks like note writing, how can the healthcare sector reassess and redefine what is truly essential in clinical documentation?

Key Points:

  • AI Implementation Pace
  • Legislation of AI
  • LLMs and Local Models
  • Ethos of AI
  • Obstacles with Generative AI

Join us for our webinar "Interoperability Outcomes: A Discussion of What’s Possible" on October 5th at 1 PM ET/10 AM PT, discussing challenges in healthcare interoperability. We'll tackle key issues like fragmented technology systems, data privacy, and cost-effectiveness. Engage with top-tier experts to understand the current landscape of healthcare IT, learn data-driven strategies for patient-centered care, and discover best practices for ensuring system security and stakeholder trust.  Register Here.

Subscribe: This Week Health

Twitter: This Week Health

LinkedIn: This Week Health

Donate: Alex’s Lemonade Stand Foundation for Childhood Cancer

Transcript

This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.

Today on This Week Health.

I think we're going to look back at the introduction of generative AI in healthcare as a huge milestone, as big as the introduction of penicillin.

So generative AI and these tools applied to our healthcare data are going to revolutionize the way that we deliver healthcare.

Thanks for joining us on this keynote episode, a This Week Health Conference show. My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, a set of channels dedicated to keeping health IT staff current and engaged. For five years, we've been making podcasts that amplify great thinking to propel healthcare forward. Special thanks to our keynote show sponsors CDW, Rubrik, Sectra, and Trellix for choosing to invest in our mission to develop the next generation of health leaders. Now onto our show.

The title for this is Healthcare's AI Journey: Challenges and Opportunities. I want to thank everybody who's a part of this. I want to thank our panelists, obviously.

I want to thank our listeners and participants. We received... this is probably a record. I think we've received close to a hundred and some odd questions from participants coming in. If you want to ask additional questions, feel free to throw those into the chat. We will collect those. But I have a feeling you'll find that a lot of the questions have been asked ahead of time, and we're going to delve into those. If you are wondering what our panel is for next month, we have a great panel. We're going to talk interoperability, and we're specifically going to hone in on outcomes and where we're going with this. We have Mickey Tripathi with the ONC joining us, Anish Chopra, who I'll just term an interoperability evangelist, and then Mariann Yeager, who is the CEO of the Sequoia Project. So that's next month. It'll be the first Thursday of the month at one o'clock Eastern time.

And again, we have three great panelists for today: Mike Pfeffer, CIO for Stanford; Brent Lamm, CIO for UNC; and Chris Longhurst, CMO and CDO for UC San Diego. I'm going to kick us off. We'll just go around. I'd like to get a little background from each of you.

But I want to start with a very high-level question, which is the promise of AI. How is your organization framing up the AI discussion, and how is that impacting your approach to the use of AI at your institution? I'm going to stay in the same order. Eventually, I'll switch it up. But Mike, I'll start with you.

What does Stanford see as the promise of AI? And how are you framing up that discussion at your system?

Thanks, Bill. Incredibly exciting, I think, is how I would summarize it. We've spent years really fine-tuning the process of moving from the analog world to digitizing what we've had.

And now I think we're really seeing the opportunity to take it to the real digital world, where everything's really helping people do incredible things with the technologies we have. So I would just say lots of excitement. As for how we're thinking about this: we have a Chief Data Scientist role that actually reports into the IT organization to really lead this process for us.

We have a governance structure. We have our patient advisory councils becoming involved. And we're thinking about it in two kind of large buckets: we're thinking about automation and we're thinking about augmentation. Automation, I think, is where we're going to see the initial value. That's really where we're taking large language models and other kinds of AI to really automate, hence the term, a process that someone can do now. So, for example, there's been a lot of talk about drafting in-basket responses to patient messages, right? Well, a clinician can write a message, so that's an example of AI automating it. The augmentation part would be something more like precision medicine, where you're really taking data and clinician decision making, putting them together, and doing something that you couldn't do without AI and the clinician together. That's a lot harder, and I think we're starting to dive into use cases around that.

Fantastic. Brent, what's going on at UNC? How are you guys viewing AI and how is the discussion moving forward?

Yeah, I think when you distill it down, it's not very different than what Michael just described at his organization. The promise of AI is the way you framed that, and I'm thinking about that. So much of this is about how our providers and teammates and patients are viewing it, and will view it in the future. And so we are really trying to approach it and message it from that perspective: we think AI is a tool that can help our providers and our teammates and our patients be more productive and more effective at delivering care, and at engaging in their care as patients.

The augmentation word Michael used is absolutely front and center for us. We think about this as a tool that will manifest in many different ways and different use cases to help move the needle in terms of efficiency, quality, engagement at the patient level, and ideally, hopefully, in the end, improved health and wellness.

We do have an organizational view that this will ultimately be a game-changing technology. We do not think this is gonna go backwards. We're gonna go through some typical Gartner hype cycle, I'm sure, and those kinds of things, but we really see this as a long-term game changer.

Chris, I'm gonna come to you. I mean... can this even be measured on the Gartner Hype Cycle at this point?

That's a great question. I agree with Brent that we're going to see a Hype Cycle here, but it may be a slightly different curve than we've seen before. UC San Diego is very similar to the organizations you just heard about.

We're really leaning in. We're very bullish on the opportunities. I'd say two things have come together that make this a particularly exciting time. One was the release of large language models last fall. That's a real game changer. We've had AI committees stood up for four or five years.

Everything with machine learning that goes into our system is evaluated for equity, for ethics, and for patient impact. But the release of these large language models is particularly interesting. Our own analysis suggests that about 99% of our clinical record by volume is unstructured data. And so the opportunity to use all of this text data to drive better outcomes and better decision support is suddenly in sight.

The second thing that happened at San Diego was a very generous gift from Joan and Irwin Jacobs last fall that funded our Center for Health Innovation to the tune of $20 million. A big focus of our center is on AI use in healthcare and patient care. And for me, this is really particularly exciting because, you know, frankly, when you talk about the promise of AI, a lot of it's just hype and speculation right now.

We don't have clear outcomes yet. It's not a proven sort of technology in terms of how it's going to benefit workflows, benefit clinicians, and benefit the patients we serve. And so, frankly, a lot of this needs research. It needs outcomes analysis. It's an experiment, and what this generous donation allows us at San Diego to do is fund those types of outcome studies and research efforts that would be difficult to fund with operational margins otherwise, particularly in the current financial environment.

Yeah. And that is going to be so necessary as we move forward. Let me give you an idea of the questions that came in. I put them into categories. Use cases: far and away the largest category of questions that came in. ROI is another. Culture, changing the culture and adoption, is another.

Governance was another large category. Futures: people are wondering what you think it might look like in five years or so. There's a whole category around planning, risks and challenges, policy, architecture, and patients' direct use of this technology to interact, for their care, essentially. And then there's validation, which you just mentioned a little bit.

And then there's another category that I'm just calling other. So, let's get right into current use cases. We've read some papers, we've read some studies, we've heard that these large language models can be more empathetic and, in some cases, more accurate, and whatnot.

Chris, I'm going to come to you to start this, since your name and your institution's name have been attached to some of this stuff. Where is UCSD testing this technology, and what are you finding? What are you learning?

Sure. Well, it's a really interesting story.

And it's an evolving story, of course. So, remember ChatGPT was released late November of last year. And one of my colleagues on faculty at UC San Diego, Dr. John Ayers had the really brilliant idea in December to take publicly posted patient comments on a social media forum and run them through GPT and then compare them to responses of validated physicians that were posting on this website.

And I got to participate in that study. I was actually one of the seven reviewers blinded to the source of the responses, ranking and rating them. And, of course, the headline that came out when the study was published in April was just what you said: that the chatbot is higher quality and more empathetic than doctors.

And it was sort of an unfortunate interpretation of the outcomes, because the way I actually interpret it is that in a given amount of time, and many physicians are time limited, right, the chatbot could create longer and apparently higher quality responses than a physician in that same period of time, and it also generated artificial but apparent empathy.

Now, if you ask patients what they think of robotic empathy, which some companies have done, it actually comes across as very creepy to patients. But what this study did is really motivate us to talk to our EHR vendor partners about how we might use this technology to help draft responses for our doctors, because this has been a long-standing issue.

Remember, the other thing that was in the news last fall was many different health systems charging patients to send messages to their doctors, as a way of disincentivizing what has really become a tsunami of patient messages since the pandemic and virtual care took off. And so, putting this all together with our vendor partner, we actually were one of the first sites to enable this functionality. We turned it on in mid-April, ironically, before that paper was actually published.

And the outcomes have been really interesting. We ran it for a couple months on GPT-3.5, with all sorts of feedback from our physicians about the need to do prompt engineering to make the responses better and more useful. In general, we're finding that our primary care doctors are finding this more helpful than some of our specialists.

Which isn't terribly surprising when you think about the corpus that GPT and these large language models have been trained on. And then most recently we completed an eight-week crossover trial where we actually randomized physicians who were interested in participating to either the early group or the late adoption group.

And then we compared their responses. We just shared this data for the first time a couple weeks ago. Of course, we're submitting it now for peer review, but the spoiler alert is that the physicians using the GPT response saved on average about 10 seconds per message. But what was really interesting is that their messages were almost twice as long.

And so if you believe sort of our earlier study from April, that those longer messages are likely higher quality, then potentially we're sending higher quality messages to our patients. And we're doing it more quickly, because those 10 seconds, accrued across tens of thousands of patient messages on a daily basis, add up to real time.
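As a rough back-of-the-envelope check of how those seconds accrue: the per-message saving comes from the trial above, but the daily message volume here is an illustrative assumption, not a number quoted by the panel.

```python
# How small per-message time savings compound across a health system.
# MESSAGES_PER_DAY is a hypothetical system-wide volume for illustration.
SECONDS_SAVED_PER_MESSAGE = 10
MESSAGES_PER_DAY = 20_000  # assumed, not a figure from the panel

hours_saved_per_day = SECONDS_SAVED_PER_MESSAGE * MESSAGES_PER_DAY / 3600
print(round(hours_saved_per_day, 1))  # → 55.6
```

At an assumed 20,000 messages a day, 10 seconds apiece is on the order of 55 clinician-hours per day across the system, which is why the panel treats a seemingly small saving as material.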

My final comment on this is something I'm really proud of: the AI committee that I mentioned, co-chaired by Dr. Amy Sittipati, our Chief of Biomedical Informatics and CMIO for Population Health, wanted to make sure that we are transparent with our patients. And so, although every single message is reviewed by a physician and edited as needed before being sent, we actually have put a disclaimer in the message so that we're completely transparent with our patients.

It says, this message was generated automatically and reviewed by your clinician, and that doctor's name goes in there. We've gotten a lot of positive feedback from patients who like the fact that their doctors are getting this help. So it's been an interesting learning experience, and more to come for sure.

Yeah, so Brent and Mike, I think both institutions are participating in the notes trial. You're both doing a trial on notes. What's the clinician feedback? I mean, are they excited about it? Are they struggling with it a little bit? How did you introduce it to them and how are they responding?

Michael probably will have a better, broader response, but I want to build on something Chris just mentioned, if I could, which is the feedback. Literally this weekend, I had a chance to look through a lot of feedback from our pilot group of physicians that are using it. Now, we're a little bit, a few months, behind where Chris and Michael's teams are, but we're live now and have had it in use with our pilot group of physicians for a number of weeks.

And what was interesting to me, and maybe others would have foreseen this, is that as the technology has gotten so much better just in the last month, and it seems like week by week it's getting better and better with the prompt engineering going on behind the scenes with the vendors and at our peer institutions, what I'm getting now is: oh, wow, this is an excellent response, but it doesn't sound like me, and I'm going to change it.

And so I didn't think about that, and it's almost like it's, I don't want to say too empathetic, because our physicians are wonderful caregivers and they want to provide a great response to the patient, of course, but I think they're worried about being genuine, if that makes sense.

And I think it goes to Chris's point around the disclaimer kind of model or mentality. But our physicians are going, wow, that sounds really good, but I need to change that because it's not my voice coming through. Yeah.

And Mike, what are you hearing?

A lot of the same things. I mean, when we launched this, similar to what Chris was saying, the responses were very lacking with the original GPT-3.5 and the initial prompting that went into it. Through a ton of learning on how you prompt these models, and then moving to GPT-4, we started to see significant improvement in the responses. We've also included not just physicians, but pharmacists and nurses in the pilot to really mimic what typical workflows are for patient messages.

And again, we're seeing a lot of the same things: it works better for primary care than specialty. And actually we're seeing that the pharmacists and the nurses find more value from it than the clinicians. Probably part of that, to Brent's point, is that it doesn't necessarily sound like them, and probably a lot of the messages handled upfront by pharmacists and nurses are a little bit more routine and less empathy is needed, so to speak: I'm refilling your medicine, or these are some things you want to look out for when you're taking that medicine. So we're learning a ton here. I think what's really exciting is that the technology continues to improve.

Recently a version of GPT-3.5 that can be instruction tuned was launched, and that's really different from prompt engineering: you can actually train the model. It's going to be very exciting to see whether we can get better responses for our specialists, and whether we can begin to incorporate, to Brent's point, a little bit more of the personal touch.

So, think of it this way: you have a limited amount that you can prompt; you can't send the entire medical record into the model as a prompt. But if you could tune the model so it really understands your environment and how to answer these messages correctly, then you can prompt with a lot more specifics about the provider answering the question, and I think we're going to see even more adoption. So, it's just really exciting. We're going to keep learning. The technology is going to get better and better. It's going to become more personalized, and I think it's going to provide value.

We are just on the front end of this.
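The distinction Mike draws between prompt engineering and instruction tuning can be sketched in code. This is a minimal illustration only: the message schema mimics common chat-completion APIs, no real model is called, and the function names, payload fields, and example text are all hypothetical rather than any panelist's actual implementation.

```python
# Two ways to steer a model at drafting patient-message replies.
# Everything here is illustrative; no model API is actually invoked.

def build_prompt_engineered_request(provider_name, patient_message):
    """Prompt engineering: all context is packed into the prompt itself,
    so it competes with the patient message for limited prompt space."""
    return {
        "messages": [
            {"role": "system",
             "content": (f"You draft replies for {provider_name}. "
                         "Be concise, warm, and clinically cautious. "
                         "A clinician reviews every draft before sending.")},
            {"role": "user", "content": patient_message},
        ]
    }

def build_instruction_tuning_example(patient_message, approved_reply):
    """Instruction tuning: the model is trained on example pairs instead,
    so the per-request prompt can stay short and provider-specific."""
    return {"prompt": patient_message, "completion": approved_reply}

question = "Can I take ibuprofen with my new prescription?"
req = build_prompt_engineered_request("Dr. Example", question)
ex = build_instruction_tuning_example(
    question,
    "Good question. Please hold off on combining them until we review your dose.")
print(len(req["messages"]))  # → 2
print("prompt" in ex)        # → True
```

The practical tradeoff the panel describes falls out of this shape: the prompt budget is limited (you can't send the whole chart), while a tuned model internalizes the environment once, freeing the prompt for provider-specific detail.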

I mean, it really was Thanksgiving last year. All of a sudden, people were talking about, oh my gosh, have you used this? And this is probably the quickest I've ever seen healthcare adopt any kind of new technology. It's really fascinating. But anybody who's used ChatGPT or any of the other large language models knows how important it is to ask the right question and to have it educated on certain things.

We've all gotten that prompt back that essentially says, Hey, I'm sorry, I only know things up to 2021, whatever that date is. And to realize that the data going in and how it's trained is so important in terms of the value you're getting back. And then the prompts become so important.

I'm gonna skip my framework and go into some of the questions we have here. Beyond the in-basket, beyond the auto-generated messages, what are other use cases that you are tracking for increasing efficiencies using AI models? Brent, let's start with you.

One of the things, and this probably is in the category of a smaller use case, but one we're really focused on: we've rolled out our electronic health record system across our hospitals and practices across North Carolina, and through the pandemic and post-pandemic period we've obviously, like I think most other health systems, had a tremendous amount of turnover at times as we've all struggled with healthcare staffing and IT staffing.

And one of the challenges that we hear all the time is just helping our teammates and care team members on the front lines have more and easier access to training materials and how-to information. So we're working to embed, we've got it in a prototype now, a way for our care team members, while they're in our electronic health record system, to get help using generative AI-type prompts, returning cited documents, training materials, and tip sheets, and then to be able to escalate that to a problem ticket if there's a point where that's needed.

I know other organizations have done a lot of work around this, as we have, trying to make getting help easier for care team members, and we're trying to move that more to the front end of the process, to say it's not just about, hey, I got a problem, I need someone to call me.

It's also: how can I get that information in real time? How do I document this? How do I do that? That's one of the, I would say, smaller use cases that we're looking at. Like many organizations, we're also looking at a lot of things in the revenue cycle. For example, prior authorizations: how we can potentially leverage generative AI to help make that a more streamlined and automated process.

We've got a vendor partner we're working with on that who has shown some early success in trying to make that a more efficient process. And, going back to the previous conversation, the current generative AI responses that our physicians are drafting in these pilots are going back to patients.

We've talked about that issue of the voice coming through, being genuine, those kinds of things. There's a whole lot of documentation in the broader electronic health record ecosystem around patient care that doesn't have that problem, and might be lower-hanging fruit, if you will, for generative AI technologies.

I know there's a lot going on, as we all know, right now around ambient listening and turning that into a draft note for a clinician, things like that. I think those are going to be a relatively quick second phase to a lot of this in healthcare.

Fantastic. Mike.

Yeah, I would echo what Brent's saying.

I agree with all of that. Again, when we think about opportunities to automate in healthcare, I think robotic process automation has reached its limits, and now we're at a whole new category of ability to automate. Like, I'd love to see the rev cycle fully automated for clinicians.

Why should clinicians need to code, bill, and answer queries? Things like that. With all this stuff, I think there's such an opportunity to automate documentation in general. And, to echo Brent, everything from ambient voice, to asking questions of the material, to summarizing material if you're thinking about consultant workflows and e-consults, being able to generate a summary that could speed things up for providers, to discharge summary summarization: there's so much potential opportunity. And again, that's not necessarily on the patient-facing side, right? This is for the clinicians, automating different avenues. So, I would agree with all of that. I think we have to start diving into how we look at outcomes and using these models for patient care, but that's going to be,

I think, much harder. As Chris is saying, we have to study this. We have to understand how these models are going to work in clinical care. Our initial thinking on this is that the models are going to have to be trained to really understand how clinicians think. There was actually a recent article published looking at basically creating a list of clinician-asked instructions that you could use to train both the questions and the answers, to start to really figure out how to train these models.

So I think the models are going to get more specific, probably even smaller than the large GPT models, to really start to dive into that clinical decision-making aspect, and that has to be studied at length. But again, thinking about all the possible ways of automating tasks is where I think you're going to see significant opportunity.


We'll get back to our show in just a minute. Our monthly Leader Series webinars have been a huge success. We had close to 300 people sign up for our September webinar, and we are at it again in October. We are going to talk about interoperability from a possibility standpoint. We talk a lot about what you need to do and that kind of stuff.

This time we're going to talk about, hey, what does the future look like in a world where interoperability is real, where data, where information flows freely? And we're going to do that on October 5th at 1 o'clock Eastern Time, 10 o'clock Pacific Time. We're going to talk about solutions, we're going to share experiences, we're going to talk about patient-centric care.

And see what we can find out. We have three great leaders on this webinar: Mickey Tripathi with the ONC, Mariann Yeager of the Sequoia Project, and Anish Chopra, who I'm just going to call an interoperability evangelist, which is what he has been to me ever since I met him about 10 years ago. Don't miss this one.

Register today at ThisWeekHealth.com. Now back to our show.

I think all of us were at UGM recently. I got an invite; it's my first one ever going. Not to offend anyone, but my highlight was listening to Satya Nadella, CEO of Microsoft, talk about this, and he introduced this concept of the dream machine.

And Chris, you sort of teed it up. We have all this unstructured data, just strewn. I don't want to make it sound like it's all over the place, but it is all over the place. I mean, it's everywhere within healthcare. And he talked about the dream machine. He said it's something that can take natural language input.

And this is what we've experienced with GPT. We can just ask it, it's like, hey, give me a diet that I can follow and that kind of stuff. And it responds, right? So natural language in. The next thing he said is it has a reasoning engine. And that's what we're experiencing from computers for the first time.

It used to be that we had to go in and find things, and we were the reasoning engine, but now it's reasoning and giving us feedback. But the third thing that he pointed out, and I thought it was one of the most important points, was that the design concept around the implementation of this is one of a co-pilot, not a pilot.

And he talked a little bit about how we feel comfortable if the computer's coming alongside to help us. We don't feel so comfortable when we hear things like automated driving, where the car is driving itself, or the plane is flying itself, that kind of stuff. We like the concept of, hey, there's still somebody with their hands on the wheel.

Chris, I want to come to you to close out that question of use cases. How are you exploring new use cases and what are you looking at?

Yeah, well, first of all, I'll just say that while the pilot that we've been performing on draft messages for patient questions is interesting and we're learning a lot, I actually don't think it's even going to be the best example of a use case of large language models.

Some of what Mike and Brent mentioned, I think, is going to be really more impactful in some ways. Things such as revenue cycle automation: I mean, we've got a large team, as does every health system, doing coding of our charts, and we can probably make them more efficient and reallocate them to higher-value roles

in the health system if large language models are taking the first pass and the coders are validating, much like voice recognition did for transcriptionists years ago, right? The executive chart summaries for our ED physicians and specialists, I think, are going to be hugely impactful. And we're seeing our clinicians today using tools to help draft letters to insurance companies about authorization review and denials and things of that nature.

So there's clearly a lot of promise, and we can all agree on that. I would say a note of caution: what we don't want to do is get into an escalating war of AI. So while it's great that there are tools that can help draft physician letters to insurance companies about denials, the insurance companies can answer with AI of their own, and so it's a bit of a war of attrition.

Similarly, there's no doubt one of the most common requests we're hearing is for AI scribes: can I get a microphone that listens and helps me to document? And there's lots of data that this is a source of burnout. The amount of time a physician spends answering patient questions in the electronic health record on a daily basis versus the amount of time an outpatient doctor spends actually writing notes is night and day. It's all spent on the note writing, and so the AI scribes are really promising. On the other hand, we also know that a lot of what goes in notes is useless. We're documenting it for compliance or regulatory or perfunctory kinds of reasons, right?

And so I'd rather see an effort that reduces what needs to be documented for billing purposes, rather than an AI that makes long notes that are compliant with all these regulatory requirements. We just have to balance how we're using it in smart ways, so that we're not generating a bunch of garbage text that really is not useful for patient care and actually adds to the overhead in the system rather than making it more efficient.

So we have a ton of questions. Rather than ask questions and have all three of you respond, I think what I'm gonna start doing is picking a question and picking a person to answer that question, then moving on to the next one. If you guys want to add to whatever the person I choose says, feel free to.

Brent, I'm gonna start with you. This is more of a technical question. As we start down the road of implementing AI, what does it look like? Are we going to end up with 20 large language models around? Are we going to buy all these solutions from these various vendors, and they're all going to implement a large language model? How are we thinking about it from, I was going to say an architecture standpoint, but, you know what I mean, from an IT standpoint: how are we going to make this manageable long term?

So, I may not be answering your question's sweet spot, Bill, as asked, but we have spent a lot of time lately thinking about the following.

I think every health IT vendor out there now has an AI component to their solutions. And I'm sure many of those are fantastic; some I question at times when I see a slideshow or presentation about them. What I think we're seeing right now is all this siloed, one-off work: everybody's baking AI into their existing solutions in a space and incrementally enhancing those solutions to add some additional value. That's good.

But I actually think, and I'm really curious what Michael and Chris think, that we're going to find the real value is going to be unlocked once we stop looking at these silos of data and these siloed solutions and really focus on bringing all the data together.

For example, we've got a collaboration that we've started with the American Heart Association where, I guess the in-vogue term right now is multimodal AI, we're beginning to bring together AI that's analyzing our patients' legacy images and the electronic health record data, to correlate those together looking for undetected, undiagnosed cardiovascular disease, and to try to drive, obviously, preventative measures, or intervene early in those patients' cases.

And so I think we're going to see more and more of these data come together: claims, traditional electronic health record data, social determinants of health. Bringing as much of the data together as we can, and analyzing it in the broader context just like a human physician or clinician does, is where the value is ultimately going to be truly unlocked.

So let's stay in this area of value and ROI. There are a couple of questions here about how we're going to measure the ROI on these models and ensure they're delivering value — I think there's an assumption that they will. Actually, Chris, I'm going to come to you for this one, because you mentioned ten seconds per message.

So it sounds like you have some metrics you're looking at and working with.

Yeah, and I want to credit Dr. Ming Tai-Seale and Dr. Marlene Millen, our CMIO and outcomes lead, who really led this effort. As I mentioned, it's being submitted for peer-reviewed publication. But we took a rigorous approach to outcomes measurement, and that's something we try to do as a learning health system with a lot of quality improvement projects.

Brent mentioned this earlier: these tools are really mind-blowing, but they're still just tools. So we have to think about first principles. What is it we're trying to accomplish? We're trying to reduce the amount of time physicians spend in the chart answering patient questions.

Well, let's measure that then. I don't think we need new metrics for the ROI on these tools; I think we need to apply these tools to existing problems and challenges that have formerly been intractable. Another good example: we've partnered with Dr. Rod Tarrago and the AWS team on looking at our incident and safety reports.

Hundreds of these reports are filed every day, and for the most part they're text-based. Our ability as a quality and safety department to identify trends is limited by the fact that they're all text. So we're taking a pilot-based approach to feeding this data into a large language model and developing dashboards to help us better guide preventive and proactive measures for quality and safety purposes.

And those are the kinds of outcomes that really matter to patients. It's the quality outcomes, right?
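The pattern Chris describes — routing free-text safety reports through a language model and rolling the labels up into a trend dashboard — could be sketched roughly as below. This is a minimal illustration, not the UC San Diego implementation: the categories are invented, and the keyword classifier stands in for the actual LLM call, which in practice would go through a vendor API.

```python
from collections import Counter

# Stand-in for an LLM call: a real pipeline would send the report text
# to a language model and ask it to return one of these categories.
CATEGORIES = {
    "medication": ("dose", "drug", "medication", "pharmacy"),
    "falls": ("fall", "fell", "slipped"),
    "equipment": ("pump", "monitor", "device"),
}

def classify_report(text: str) -> str:
    """Assign a free-text incident report to a coarse category."""
    lowered = text.lower()
    for category, keywords in CATEGORIES.items():
        if any(word in lowered for word in keywords):
            return category
    return "other"

def trend_counts(reports: list[str]) -> Counter:
    """Aggregate per-category counts for a dashboard tile."""
    return Counter(classify_report(r) for r in reports)

reports = [
    "Patient fell while transferring to chair",
    "Wrong dose dispensed by pharmacy",
    "Infusion pump alarm ignored",
]
print(trend_counts(reports))
```

The point of the structure is the aggregation step: once every report carries a label, trend detection becomes a counting problem rather than a reading problem.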

Yeah. I'm going to stay in this area and come over to you, Mike, with a question on clinicians. There's a series of questions around making clinicians more effective, improving outcomes, and those kinds of things.

One of them talks about increasing visibility into healthcare studies and white papers globally. It's almost impossible for clinicians to stay up on all the current writing, but it's not impossible for machines to digest all of it and make it available in the workflow.

And I think that's the promise being alluded to here. How are we doing that? How are we helping clinicians be more effective with these kinds of tools?

That's a great question, Bill. I think we're still learning how to do that. There's an enormous number of articles published in the medical literature.

How many of them are actually changing practice, right? There aren't many each year. And you're right — I think someone published that if you tried to keep up with all the articles, you'd have to read 96 hours' worth of articles every day, which, as we know, means the math doesn't really line up.

So there's significant potential for these tools to be used to mine data sets — for example, asking questions of EHR data sets. A great example is a company that spun out of Stanford called Green Button, which uses clinician informaticists to mine data and come back with an answer to the question — and Chris is smiling because he had a big role in that while he was at Stanford. But that's the early way of doing this, and I think, as we integrate

large language models into that kind of experience, you're going to be able to ask those questions of the data set for the patients you're caring for, and to get better and better summaries and understandings of the literature coming out that's relevant to what you're looking for.

Again, this is very early stage, but I think there's huge potential there. I do want to dovetail quickly on what Chris and Brent said about value and ROI and how we can do this. I think a framework is really important. Our chief data scientist, Nigam Shah, put together a framework that we use called the FURM assessment, and it's really about defining the value up front and mapping out a five-year total cost of ownership.

So really understanding what that model is going to do for you — because a model, in and of itself, just makes a prediction, and what you do with that prediction really drives the value of the model. You need to work all the way downstream to understand how that's going to play out.

And it's not always going to be a money value. There's going to be a qualitative side: how we're making clinicians feel, how our patients are feeling. So there's a lot that goes into determining the value. I will say, though, that there's no way we could pay five dollars per prediction for the hundreds and hundreds of models we'll put into our system in the future.

That is simply not sustainable. So I really believe these things are going to become more of a commodity, plugged into platforms that can handle them — which is the right way to do it, because these predictions, these models, need to be part of the workflow. They can't live outside it.

So that's probably where it's going to go. I can't say for certain, but I don't think we're going to be buying models. I think we're going to be understanding the value of these models and sharing them. I think they're going to be mostly open source, and then it's the platforms that can ingest them, plus the compute on the back end, that make all this work.
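Michael's point about per-prediction pricing is easy to make concrete with back-of-envelope arithmetic. These numbers are illustrative only — the prediction volume and model count are assumptions, not figures from the discussion:

```python
# Hypothetical back-of-envelope: what per-prediction pricing costs at scale.
price_per_prediction = 5.00        # dollars, the figure Michael cites
models_in_production = 200         # "hundreds and hundreds of models"
predictions_per_model_per_day = 1_000  # assumed volume per model

annual_cost = (price_per_prediction
               * models_in_production
               * predictions_per_model_per_day
               * 365)
print(f"${annual_cost:,.0f} per year")  # $365,000,000 per year
```

Even with conservative assumptions, the total lands in the hundreds of millions of dollars annually, which is why he argues models will commoditize onto shared platforms rather than be bought per prediction.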

Yeah.

Chris, you had a follow up?

Yeah, well, Dr. Pfeffer, as usual, is really prescient in his thinking. I just want to take something he mentioned and amplify it: I think one of the greatest long-term opportunities here is to use these tools to help make all of our clinicians maximally effective.

What I mean by that is ensuring that differential diagnoses, workup plans for patients, and so on are as data-driven as possible, regardless of where a doctor or nurse trained or what patients they've seen in the past. This is one of the original promises of medical informatics.

Larry Weed described this in his famous 1971 papers about SOAP notes — the idea that computerizing those notes would allow for more standard differential diagnoses. He built software called the PROMIS system, a problem-oriented medical information system.

It really was the idea that when a patient presents with a symptom, we should be asking a standard set of questions, doing a standard physical exam, ordering standard tests. And yet here we are, 50 years later, and we don't do that. I think these models can really become assistive — Bill, as you said, like Satya described on stage, a copilot — to help all our clinicians function at their peak.

Another area we're investigating at UC San Diego is how we can use these large language models to better educate our future doctors. Our medical students and trainees are writing a lot of the notes in the record about patients, and there's metadata in those notes about how they're thinking about their patients.

We can use that to identify gaps and opportunities and train to those, so that we're graduating the best possible future physicians — and so that our faculty members are operating at the top of their competencies as well.

One thing I was going to add a little earlier, when Michael was talking about some of the ways we could see this — my words — moving the needle for our care teams: I'm really excited about what this technology can do for our nursing workforce. We could debate it, but I don't think we as an industry have talked enough about that yet, and I think we need to elevate that conversation.

I have this dream that our nurses are going to walk into a patient room with a smartphone, with a smart TV on the wall, and those are the only technology devices in the room. They're going to be able to talk to the electronic health record system, display what they need to for patients and other care team members, and walk out.

I think there's work out there that I'm not educated enough on to quote, but measuring this concept of the time a clinician spends in a room — we want to maximize time with the patient and minimize time with the computer, with the device.

And I think there's a lot we could do in that space, probably in the not-too-distant future, given the performance we've seen from the models over the past few months.

It's funny you mention that, because just two days ago I was talking with our chief nursing informatics officer, Gretchen Brown, asking how we can make a smartphone the only device we need in the patient's room.

We have computers in all the rooms, and from a sustainability standpoint it would be really nice not to have to keep replacing them — and to really create that interaction you speak of. I'm hoping we get there. But the nursing workflows, the opportunity for virtual nursing — the list goes on and on — I think it's really incredible, and a lot of it can be enabled by this.

Yeah, 100 percent agree. We need to look at our professionals beyond physicians, and nursing is one area. I think a lot of paraprofessionals and other clinical leaders in the organization are going to benefit from these tools as well. I'm going to ask a question that looks like it was posted by an anonymous attendee.

What sort of legal, regulatory, and operational challenges are you finding a source of friction, and how are you working through those? Maybe Brent, you can go first, and then Mike.

On the regulatory challenges, I know there's a lot of question about where you draw the line between these AI models and medical devices.

I think that's going to be an ongoing conversation, and I hope we can find the right path forward, making sure we don't overregulate this technology while also making sure it's safe and used effectively. To someone's earlier point, our legal department has actually been an early pilot group with our internal ChatGPT solution. And again, it's obvious in hindsight, but it's amazing to me, as they've worked with it, how one use case or test works perfectly and the next one doesn't — because there was some minor ruling or change to a statute, law, or regulation, and the model gets it completely wrong.

So our legal team has actually been one of the groups where we've struggled to find real traction and value.

I'll just add that I agree with Brent. It's about having the right processes in place. You have governance that reviews models against some framework before they go into production, to make sure they're as safe and reliable as you can make them. Not everything's going to be perfect — we all know there are plenty of studies showing that if you show an x-ray to five radiologists, you get seven results, and you could say the same for every specialty and every type of diagnosis.

And then you have to monitor these things, so you need a process to say: we're going to watch these on a quarterly basis, run the analytics, and make sure they're not drifting in a way that's going to be dangerous. But I do hope we don't get into regulatory paralysis, spending an inordinate amount of time figuring out the regulations instead of testing these things, learning from them, and really driving healthcare where we need it to be.
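The quarterly monitoring Michael describes is often implemented with a drift statistic such as the population stability index (PSI), which compares the distribution of a model's recent scores against a baseline; a common rule of thumb flags PSI above roughly 0.2 for review. A minimal sketch under those assumptions (the scores and cutpoints are toy values, not from any deployed model):

```python
import math

def psi(baseline: list[float], recent: list[float], cutpoints: list[float]) -> float:
    """Population stability index between two score distributions.

    Scores are bucketed at the given cutpoints; PSI sums
    (recent% - baseline%) * ln(recent% / baseline%) over buckets.
    """
    def shares(scores):
        buckets = [0] * (len(cutpoints) + 1)
        for s in scores:
            i = sum(s > c for c in cutpoints)  # bucket index for score s
            buckets[i] += 1
        # Small floor avoids log(0) when a bucket is empty.
        return [max(b / len(scores), 1e-6) for b in buckets]

    b, r = shares(baseline), shares(recent)
    return sum((ri - bi) * math.log(ri / bi) for bi, ri in zip(b, r))

baseline = [0.1, 0.2, 0.2, 0.3, 0.4, 0.6]    # last quarter's scores
stable   = [0.15, 0.2, 0.25, 0.3, 0.45, 0.55]  # similar distribution
drifted  = [0.7, 0.8, 0.8, 0.9, 0.9, 0.9]    # scores shifted upward
cuts = [0.25, 0.5, 0.75]

assert psi(baseline, stable, cuts) < 0.2   # below threshold: no action
assert psi(baseline, drifted, cuts) > 0.2  # above threshold: flag for review
```

In practice the same check would run on each model's production scores every quarter, with the flagged models routed back to the governance process Michael mentions.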

I mean, costs keep going up, and I'd really love for us to leverage these technologies — the promise of health IT was always to bend the cost curve and provide better care. Letting us figure out how to do that in a safe way is going to be really important.

Yeah, Mike, I couldn't agree more.

I think this is not an area to come in with a heavy-handed regulatory approach, and I worry a bit about some of the recent guidance coming out of the FDA about treating some of these decision support algorithms as if they're medical devices. I was recently down in Australia, and they have a much more regulatory-focused framework for this.

It has really prevented innovation at the local health system level, and you hear that from health system leaders who wish they could employ some of these tools to help their patients but are blocked by the regulatory burden. There has to be a balance, though. Obviously, with these AI algorithms there's potential for bias and inequities to be introduced inadvertently, and that's where it becomes obligatory on us to be learning health systems — particularly the early rollout sites — to look at the data and ensure we're not introducing those unintended consequences.

Just as an example, our AI committee looks at two or three different types of algorithms. One is what we would consider vendor-based AI. We've got tools today that we use for things like stroke prediction and evaluation of scans, and those are pretty well-described algorithms.

They've gone through some level of regulatory scrutiny because the vendor is selling them. The second is what we consider locally deployed algorithms using vendor tools. For example, our EHR vendor has partnered with Microsoft on cognitive compute, so we can learn from our own data and roll out sepsis prediction tools and so on.

To be honest, we've found that a lot of those are not high value. In fact, we've got some data coming out on sepsis prediction, building on work that Dr. Karandeep Singh did at Michigan, showing that it wasn't timely enough to make a difference. And then we've got what you referred to, Brent, as the multimodal AI.

These are the bespoke algorithms — built not with vendor tools but with homegrown solutions that use commercial applications. We're finding a lot more mileage there: we can predict sepsis with more accuracy, and farther in advance, when it actually helps us care for patients, if we integrate the electronic health record data with bedside monitoring data and other data sources. The EHR data is necessary but not sufficient, because it doesn't have that high cadence of input; the bedside monitoring data is real-time, but doesn't know enough about the patient to really help us.

By integrating the data together, we've been able to find some new outcomes and opportunities, but we're now doing the ethical and governance work to ensure that the bespoke algorithms we're constructing are not introducing bias or impacting outcomes in a negative way.

So that's how we're thinking about it locally.
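Chris's point about cadence — sparse but rich EHR data joined to dense bedside telemetry — comes down to aligning two time series before a model ever sees them. A toy sketch of that alignment, carrying the most recent lab value forward onto each vitals timestamp (the field names and values are invented for illustration, not from any UC San Diego pipeline):

```python
from bisect import bisect_right

# Sparse EHR events: (hour, lactate result). Rich but infrequent.
labs = [(0, 1.1), (6, 2.4), (12, 4.0)]

# Dense bedside telemetry: (hour, heart rate). Frequent but shallow.
vitals = [(1, 80), (5, 95), (7, 110), (13, 130)]

def align(labs, vitals):
    """For each vitals sample, carry forward the latest prior lab value."""
    lab_times = [t for t, _ in labs]
    rows = []
    for t, hr in vitals:
        i = bisect_right(lab_times, t) - 1  # last lab at or before time t
        lactate = labs[i][1] if i >= 0 else None
        rows.append((t, hr, lactate))
    return rows

print(align(labs, vitals))
# [(1, 80, 1.1), (5, 95, 1.1), (7, 110, 2.4), (13, 130, 4.0)]
```

Each aligned row now carries both the high-cadence signal and the most recent clinical context, which is the combination Chris credits for the earlier, more accurate sepsis predictions.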

Models are local, I think. And the irony of regulation is that when new regulations come out in healthcare, we often use clinical decision support tools to help enforce them. So now you're going to regulate that? These tools are really important to us, and we have to have the flexibility to provide clinical decision support without everything we do having to go through some higher-level approval. That would really shut us down.

Yeah, Mike, there's some sort of joke in there. I just don't think it's a good one.

Thank you, Chris. I would have run it through GPT. It might have been funnier or more empathetic. Who knows?

All right. So, thanks for giving me that break today.

I appreciate it. The internet went down at my house. I want to close with one quick question, and then one futures question for all of you. The quick question is about smaller systems. All of you are at fairly large systems, with grants and money flowing in from various sources.

For those listening in from smaller health systems, is there an easy button — an easy on-ramp — for some of these use cases? Keeping in mind that their clinicians and doctors, across the board, are already using it.

They're just going online and typing things in. If you were put in that situation, how would you shepherd your organization to utilize the technology?

Well, I would partner wherever I can. I know a lot of the hospital systems across North Carolina that are now part of the UNC Health network and family were in similar situations and chose to affiliate with UNC Health.

I like to think we've provided great value to them in terms of helping them more quickly adopt new technologies, and I think that would be front and center here with AI. It can drive additional partnership within the health system community.

So partner with existing health systems. Partner with your EHR provider — every major EHR provider is heading in this direction. Partner with your PACS providers; a lot of them are heading there too. There are a lot of different avenues.

I wouldn't call them easy buttons, but there are a lot of avenues to tap into work that's already being done — work that's being informed by some of what you all are doing.

And very quickly, Bill, the last thing I would add: I would also probably wait. If I were a smaller health system, with the financial headwinds we're currently facing in the industry, there's an argument for waiting and being a quick adopter of proven solutions as they emerge.

All right, two minutes each. Futures. I'm going to have you project out five years.

What does this mean for the clinician? For the system? For the patient? You can choose any one of those to comment on. Five years from now, how will healthcare be different? We'll go in the order we started in. Mike, we'll start with you.

Thanks, Bill. This has been a lot of fun.

Always wonderful to have a chance to chat with Brent and Chris. Five years from now, I think it's going to be completely different from what we see today in every one of the categories you spoke about. My dream is always creating an environment that's just so simple and easy to use across the board.

Whether you're a patient, a provider, or staff. And I believe we're going to get there in ways I can't even predict today. The real point is that this has to come from clinicians — from physicians, from nurses, from health systems — driving it, and not from other angles.

I think we have to be deeply engaged in how this goes, in order to make sure that in those five years the dream of better patient care, lower costs, simplicity, usability — all of those things — comes true. So that would be my thinking on that.

Fantastic. Two minutes or less, Brent.

Yeah, I'll actually build on what Michael just said. Think of the Star Trek analogy — "computer, do X, Y, Z" — as one end of the spectrum, and compare that with where we are in healthcare today. We're not going to be all the way there in five years, I don't think, but I believe we'll be closer to that Star Trek model than to where we are now. We'll be on the other side of that spectrum, if you will, in five years.

And Chris, last word. I hate to confine you to two minutes, but what have you got?

Of course. Well, I'm going to start with something Dr. Pfeffer said earlier: we've been digitizing the medical record for 20 years, but in many ways the promise of health informatics and the electronic health record is largely unfulfilled.

In fact, just the opposite — it's contributing to burnout, with doctors spending time after hours in their pajamas completing documentation. So as I look out the next, let's say, one to three years, I'm cautiously optimistic. We really need to take a rigorous approach, measure outcomes, integrate things into the workflow, and make sure these tools have the intended consequences.

But as we look out five to ten years, I'm incredibly bullish. As Brent said, I look forward to the Star Trek computer being my copilot, and I think we're going to look back at the introduction of generative AI in healthcare as a huge milestone — as big as the introduction of penicillin.

So generative AI and these tools applied to our healthcare data are going to revolutionize the way that we deliver healthcare.

Fantastic. I want to thank the three panelists.

Again, thanks everybody for being a part of it, and that's all for today. I love the chance to have these conversations. If I were a CIO today, I would have every team member listen to a show like this one; I believe it's conference-level value every week. If you want to support This Week Health, tell someone about our channels — we have a mission of getting our content into as many hands as possible, and if you're finding value, telling somebody else helps us achieve that mission. We have two channels: the conference channel, which you're listening to, and This Week Health Newsroom. Check them out today wherever you listen to podcasts — Apple, Google, Overcast, you get the picture. We are everywhere. We want to thank our keynote partners, CDW, Rubrik, Sectra, and Trellix, who invest in our mission to develop the next generation of health leaders. Thanks for listening. That's all for now.

Want to tune in on your favorite listening platform? Don't forget to subscribe!


© Copyright 2023 Health Lyrics All rights reserved