July 14: Today on TownHall, Brett Oliver, Family Physician and Chief Medical Information Officer at Baptist Health, speaks with Matt Lungren, Chief Medical Information Officer at Nuance Communications, a Microsoft Company. How can generative AI revolutionize healthcare, specifically in terms of the reception and experience so far with Nuance and the Dragon Ambient eXperience (DAX)? What are the misconceptions surrounding the use of generative AI, particularly in the context of ChatGPT? What are the most exciting use cases for generative AI in healthcare, and how can it transform tasks such as text summarization, knowledge retrieval, and analysis of medical literature? How can healthcare organizations navigate the regulatory landscape and ensure the safe and responsible use of AI algorithms, particularly in terms of continuous learning, monitoring, and evaluation?
This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.
Today on This Week Health.
Not to get dramatic, but I've not seen doctors teary-eyed about a technology, I think, maybe ever. But this is one of those moments. It's kind of like the iPhone moment, almost, for healthcare. We all know that we go home and sign our notes and get 20 emails a day saying, did you finish this? If you take even just a good percentage of that away from me, it's like a weight's lifted off my shoulders. I think that's the experience people are getting.
Welcome to TownHall. A show hosted by leaders on the front lines, with interviews of people making things happen in healthcare with technology. My name is Bill Russell, the creator of This Week Health, a set of channels dedicated to keeping health IT staff current and engaged. For five years we've been making podcasts that amplify great thinking to propel healthcare forward. We want to thank our show partners, MEDITECH and Transcarent, for investing in our mission to develop the next generation of health leaders. Now onto our show.
Welcome. I'm Brett Oliver. I'm the CMIO for Baptist Health in Kentucky and Southern Indiana, and I am super excited today to have Matt Lungren with me. Dr. Lungren is the CMIO for Nuance and is also a pediatric interventional radiologist at UCSF. Matt, thanks for being here.
Thanks so much for having me, Brett.
Listen, you've got an informatics background, something that I didn't have as I got into my IT role. I'd love to just understand how you got interested in the area, just to get things started, and particularly how it applies in medicine.
Yeah, it's kind of a strange story. My informatics background is definitely a casual education.
I started out in health services research as part of my public health degree, and I was really interested in how we're looking at population-health-level data, right? That was my initial thought. And when I kept digging in, I realized that a lot of these big population-level initiatives, and even some of the clinical guidance, is based on claims codes, right? To drive diagnoses. And if you ask anybody who's played with claims codes, they'll tell you those are maybe 30 to 40% inaccurate. And you're sort of like, well, is there a better way? So when I was in my radiology training, way back in the day, I realized that, well, there's a lot more information, almost ground truth, about certain diagnoses that we can get from radiology studies. I wondered if that could be a better signal than maybe ICD codes or some of that very sparse structured data. Of course, the challenge is that it's unstructured, right? Because radiologists like to pontificate in a narrative way. So that's when I got into NLP, and back then it was pretty rudimentary; even support vector machines were the big thing.
And I did a lot of work there. So what we found was that, in fact, we could get really good, granular, population-level health information from the rates of positives and certain findings on imaging studies. And then we could actually tie that to policy. And so we really started to take a lot of our findings and say, well, we can look at a given health system, or a given practice that owns its own imaging equipment, and even though we know there's an incentive to self-refer, we can actually show whether they are right based on the rates of normal and not normal and things like that, versus a system that's not. So those kinds of things actually led to policy. And I just got the bug. You get lucky on your first project and then you're hooked. You think it's always going to be like that. And of course, I've failed so many times since then that I've learned what the reality is. But it really got me interested in truly trying to leverage data to translate and affect care, affect policy, and really move the needle toward something more accurate that we can use.
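The approach Dr. Lungren describes, classifying unstructured radiology report text with a support vector machine, can be sketched in miniature. This is a hypothetical illustration, not his or Nuance's actual pipeline: the toy "reports," labels, and hyperparameters are invented, and a real system would use far richer features than bag-of-words counts. The sketch trains a tiny linear SVM (Pegasos-style subgradient descent on the hinge loss) to label report snippets as positive or negative for a finding:

```python
from collections import Counter

def featurize(text, vocab):
    """Bag-of-words count vector over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Pegasos-style subgradient descent on the hinge loss; labels in {-1, +1}."""
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)  # decaying learning rate
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            w = [(1 - eta * lam) * wj for wj in w]  # regularization shrink
            if margin < 1:  # hinge-loss subgradient step on violating examples
                w = [wj + eta * yi * xj for wj, xj in zip(w, xi)]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Invented toy "reports": +1 = finding present, -1 = finding absent
reports = [
    ("large pneumothorax present on the left", 1),
    ("pneumothorax present after biopsy", 1),
    ("no acute findings lungs are clear", -1),
    ("clear lungs no pneumothorax identified", -1),
]
vocab = sorted({w for text, _ in reports for w in text.lower().split()})
X = [featurize(text, vocab) for text, _ in reports]
y = [label for _, label in reports]
w = train_linear_svm(X, y)
```

Aggregating predictions like these across thousands of reports is what yields the population-level rates of positive findings he mentions.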
Oh, fantastic. All right. Well, I would be remiss if I didn't start with you being at Nuance, with the most recent announcement there: the generative AI, GPT-4, going into your Dragon Ambient eXperience, your DAX. So far, what's been the reception? I know some of that build and construction is still underway, looking at launching this summer, but I'm just curious. Is there anything you can talk about, or that you want to share, about the experience so far?
Yeah, absolutely. Overall, as you know, I spend about half my time at Microsoft because of my background in AI and healthcare. And the other half of my role is really on the translation of some of that cutting-edge stuff that we're seeing, either in a research lab or a startup group or whatever: how do we bring that into healthcare, but do it responsibly and safely? Those are words that should be in bold and all caps.
Right? Because we've all seen the power of some of these tools and technologies in just the commercially available solutions that we've played with, but we are also more fault-tolerant in those environments. And then you start to think critically about if this were used for real in healthcare, particularly for things that are critically important, where we have almost zero tolerance for errors or failure.
Right? So that's really where it starts to get exciting, because it is a newer field. But these are some of the brightest minds, in my opinion, that I've ever worked with, and that's saying something, right? I've definitely been in a lot of great academic institutions, really trying to sort this problem out with some of the best computer scientists, but also with the crew at Nuance, who have 20 years in healthcare. So it's not like they're having to learn healthcare along the way. There are these really beautiful multidisciplinary collaborations that are spinning up, and have spun up, now for the past year.
And it's been incredibly exciting. So DAX will be one of the first things. Again, this has been around-the-clock work for almost a full year, and because of the relationship we've had, we had a little bit of a head start, right? You're working with some of those bigger models that hadn't been released at the time.
And we're seeing, it's just phenomenal what these things can do, in particular tying it to a use case that immediately changes the game, right? We've obviously had DAX, and there are challenges in deploying something like DAX because of the error rate with some of the traditional machine learning models; they just weren't smart enough to do all the work without another human in the loop, and that really hurt the ability to scale. And so now, with this product, and you'll see a lot more about it at HIMSS, we're really seeing, I mean, not to get dramatic, but I've not seen doctors teary-eyed about a technology.
I think maybe ever. Maybe when the X-ray was first invented people got teary-eyed, but this is one of those moments. It's kind of like the iPhone moment, almost, for healthcare. Because again, I feel this. I'm a proceduralist. I have a clinic. I spend all my time dealing with the notes.
We all know that, right? We all know that we go home and sign our notes and get 20 emails a day saying, did you finish this? Did you sign that? Right? 100%. And so if you take that away, even just a good percentage of that away from me, it's like a weight's lifted off my shoulders. I think that's the experience people are getting.
We're still in that early private preview, just to make sure that everything is 100% buttoned up. But so far I'm incredibly hopeful for this. And again, as we continue to adopt new technologies, we want to really start addressing real problems, right? And not just be another app on your desktop that kind of just gathers dust.
I can tell you I'm excited. I've used the traditional, if we want to call it that, DAX for about six months, and what you describe, that cognitive load, is lifted. Number one, it's going to remember things that I have forgotten. There were times that I would look through a note and think, I guess they did mention that, you know, things that I wouldn't have gotten down myself.
And the patient's perception, too, when you are no longer the note taker. I never understood that. As you go through your career, you just do it because that's what you've always done. But you know, we're not supposed to text and drive, right? Or you don't generally have the person leading a meeting taking minutes.
Why? Because number one, multitasking doesn't work, but two, you're distracted. And yet we think it's okay for me to listen to you, develop a differential diagnosis, come up with a treatment plan, and at the same time document these quality metrics and things. I mean, it really is one of those game changers.
So I'm excited to see, once we even take the person out of the loop, that it's applicable to almost everyone. I think it's exciting stuff. So thanks for that.
I really like what you said there, the don't-text-and-drive. I think that's a phenomenal tagline. I might steal that. It's a very good way to look at it, because that's kind of how it feels.
When we're documenting, obviously, we were very well trained. We can do a lot of things almost on brainstem, because we've done them so many times. But there's always that voice in the back of your head that wonders, did I miss something, right? Or was I not listening when I documented something? Did I get everything right? And that's a phenomenal way to put it.

Yeah, I think what most people who've not used it don't realize is that it's not a transcription. It's a true summation of the notes, of the encounter. So, yeah, more to come there, but I'm excited. Speaking of generative AI, in your opinion, what's the biggest misunderstanding at this point?
Maybe in the genre itself, or in ChatGPT specifically.
Well, I'll start off by saying, I don't think anyone on the planet knows everything about what this technology can and can't do yet. I think that's really important to know. I think the most common, I don't want to call it a mistake, but maybe a user error, quote unquote, is that when folks first start to encounter it, they treat it like a search engine. And generative AI is not a search engine. Now, we can create environments in which you interact with the agent to do search-type tasks.
That's a hundred percent doable. You'll see that obviously if you use Bing Chat; it's a phenomenal user experience. But really, these are knowledge engines, not search engines. And what I mean by that is that it's knowledge compressed and orchestrated in a way that we don't fully understand.
But it's useful, right? And the interconnections at this scale have created what we're all terming emergent behaviors, meaning you wouldn't expect a model that is trained on Wikipedia, even a traditional NLP model trained on Wikipedia to predict the next word, to then be able to do third- or fourth-order logic problems, right?
These are not things that anyone predicted, right? And that's why we're all both surprised and excited by this. So treat it like a source of knowledge, which also means that you have to interact with it in a new way. You can't just write Google-ese and get the information that you want. You really have to look at it as, to use a term that's been used quite a bit lately, a copilot. Right? It's almost like having an intelligent, I guess, precocious undergraduate student that has a lot of broad knowledge, and you need to focus that person, or that assistant, to get the exact kind of information that you're looking for. And it's an interactive thing. It's not like a, hey, do this, and it's perfect every time. But it's just beautiful to see, once you start to get a cadence and a system that works for you when interacting with these systems, what you're capable of.
And again, we've seen with a lot of folks, once that light bulb turns on, this is how I can use it, then the use cases just start spilling out, right? They start to see, I can use it here, I can use it here. But it takes a little bit, right? It's a little bit of a learning curve, but it's powerful technology.
Alex's Lemonade Stand was started by my daughter Alex in her front yard. By the time she was four, she knew there was more that could be done. And she told us she was gonna have a lemonade stand and she wanted to give the money to her doctor so they could help kids like her.

It was cute, right? She's gonna cure cancer with a lemonade stand, like only a four-year-old would.
But from day one, it just exceeded anything we could have imagined because people responded so generously to her.
We are working to give back and are excited to partner with Alex's Lemonade Stand this year. Having a child with cancer is one of the most painful and difficult situations a family can face. At Alex's Lemonade Stand Foundation, they understand the personal side of the diagnosis, the resources needed, and the impact that funded research can have for better treatments and more cures.
You can get more information about them at alexslemonade.org.
We are asking you to join us. You can hit our website. There's a banner at the top that says Alex's Lemonade Stand. You can click on that and give money directly to the lemonade stand itself.
Now, back to the show.
What are some of the use cases that have you most excited, if that's fair?
I mean, in healthcare, for me, anytime I see things that we're doing with text, whether it's summarizing, creating, reformatting, looking through large amounts of, let's say, even literature from a research perspective, being able to interact with that knowledge in a way that has never been possible before is really what I love to do.
So, just to give an example, there's something called grounding, where you take a language model, a massively powerful model, and you tell it to focus on, let's just say, a store of documents. In this case, these are all papers that I promise myself I would read, and I try to read them at night and fall asleep. So I'm not getting to them, right? And maybe I'm not remembering the key points. I can ask this model to please summarize all of the key points, interrelate them, and then have a conversation with the model. Well, what about this technique? Did they try this? Was this in the limitations? What would you do next? It's just so fun to do that. And these are the kinds of things where I personally would need two or three days and a long walk to get to the level of what I can now accomplish in an hour or two, just interacting with the data. And again, I think we're just scratching the surface, Brett.
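The grounding workflow described here can be sketched as a simple retrieve-then-prompt loop. This is a minimal, hypothetical illustration, not Nuance's or Microsoft's implementation: the paper titles and prompt wording are invented, and a real system would use learned embeddings rather than the word-overlap scoring below.

```python
import math
from collections import Counter

def vectorize(text):
    """Term-frequency vector (a crude stand-in for learned embeddings)."""
    return Counter(w.lower().strip(".,?") for w in text.split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Rank the document store against the question, keep the top k."""
    qv = vectorize(query)
    return sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)[:k]

def grounded_prompt(query, docs):
    """Build a prompt that restricts the model to the retrieved passages."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using ONLY the context below; otherwise say 'not found'.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# Invented stand-ins for the "papers I promised myself I would read"
papers = [
    "Paper A: contrastive pretraining improves chest x-ray classification.",
    "Paper B: claims codes are 30 to 40 percent inaccurate for diagnosis.",
    "Paper C: ambient clinical documentation reduces after-hours charting.",
]
print(grounded_prompt("How inaccurate are claims codes?", papers))
```

The prompt that comes out the other end is what gets sent to the language model, which is what keeps its answers tied to the chosen documents instead of its general training data.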
I really feel like the stuff that's coming out from the community, about how they're using these agents, really does make me feel, I'm a Gen Xer, so this kind of feels like when you got your AOL disc in the mail. You're like, what's this internet thing? Right? And you log in and you're like, oh, okay, I can see where this is gonna go. Right? That is that same kind of feeling for me as I watch this thing accelerate.
I love listening to you, because I think I'm thinking of use cases at a more superficial level; I haven't even thought of some of the deeper ones. I'm thinking from a research perspective: I'm seeing a patient with a certain diagnosis, and it's able to go out and scrub maybe the government websites for current research opportunities for that diagnosis and present them to me, rather than my having a database or registry, what have you. But I like what you're saying. Let's take it a step further. Not only here are some trials available, but here are some treatments and the outcomes, based on patients just like yours. But doing that in the workflow, in real time, that's the key. To your point, you could read all those papers, you could take two days and a long walk, but you don't have the time for it, or at least you're not making the time. Now you can condense that to an hour. That's exciting.
That's pretty powerful stuff. And just imagine now you have agents that are interacting with other agents. They're using tools. Maybe you don't know how to code, but you've been working with a bot in natural language that then writes the code for you to create an application. I mean, these kinds of things are coming up everywhere. Right? And the term that I love at this point, which is overused, but I think it's true, is democratization of this knowledge, right? So I don't have to have gone through med school, and then also CS, and then also application design; I can actually get pretty far with some ideas and exploration without necessarily having to go back to school, or really spend a lot of additional time there. Yeah, that's
a great point. Well, as far as other AI applications, imaging seems to be kind of a no-brainer that a lot of organizations are at least starting to scratch the surface on, or are using already, focusing their efforts there. What's in second place right now? I've had trouble determining what's coming up. There seems to be a big gap. Maybe that's because of the pattern recognition and the algorithms that are available. Any comments there?
Yeah, I mean, obviously, I think we can agree that almost any text task is theoretically, and it's a little strong to say solvable, but at least you can see a path, right? With these technologies. So text, whether it's translation, whether it's customizing it for different use cases, whether it's patient-friendly things, whatever those are, I think the text world is absolutely within the realm of the possible. When it comes to imaging, now, we've definitely seen a ton of work
in what we call vision language models. So instead of large language models, they're often called vision language models, or collectively, foundation models: just big, giant models. And these, of course, are trained on images and text, right? There are various ways to train them, but contrastive learning is a really common way to do it. And what we find is that there's an interaction between the pixels and the text. So maybe there's a dog in a photograph, and then a long paragraph with the word dog; those two things are correlated in that, quote unquote, embedding space, in that ball of knowledge, and that makes it very useful. Now, that's great. And as we learned in the very early days, and I was definitely there in the earliest days of computer vision in radiology, because of course that's where my research started in computer science, we think that it's possible to start moving away from narrow AI toward general AI in that space as well. However, there's always a but. The but is that, as those of us who practice every day recognize, the signal to noise, the variability, the potential for bias is pretty substantial.
I think it's going to be something that we're going to have an opportunity to work on as a community. It's not just going to be one group or another, but I do think that's going to be the next level. And this is where, when you hear the term multimodal, that's really what we're talking about.
So we have a model that has incredible language and writing skills. Can we now teach it to see, right? Can we teach it to look at different tables and charts and graphs and EKGs and EEGs and ophthalmology, right? All those other modalities. Because as humans, right, when we practice, we are multimodal people. Can we take that next level and see what might happen there?
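The contrastive learning Dr. Lungren mentions can be sketched with a toy, CLIP-style symmetric loss: matched image/text embedding pairs sit on the diagonal of a similarity matrix, and training pushes each true pair to score higher than every mismatched pair. A minimal sketch on random toy vectors, not any production training code; the batch size, dimension, and temperature are invented:

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched (image, text) pairs."""
    # L2-normalize so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N); diagonal holds matched pairs
    labels = np.arange(len(img_emb))

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_p[labels, labels].mean()

    # average the image-to-text and text-to-image directions
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(4, 8))
aligned_loss = contrastive_loss(embeddings, embeddings)  # perfectly matched pairs
shuffled_loss = contrastive_loss(embeddings, embeddings[[1, 2, 3, 0]])  # mismatched
```

Well-aligned pairs produce a much lower loss than shuffled ones, which is exactly the signal that correlates "dog" in the paragraph with the dog in the photograph.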
And I think it's going to take some time, for sure, but it's an incredibly important task just to say, how can we make these things even more useful and powerful in healthcare applications? Fired up over here.

It's good stuff. You and I have talked a little bit about evaluation of AI algorithms from an organizational perspective, and then the monitoring of them once they're put into a clinical setting.
What are your thoughts right now on where we are from a regulatory standpoint? Do we need more guidance from the government? At this point, I think we recently had sort of a blueprint put out by, and forgive me, I'm blanking on the consortium of folks that put that out within the last week or two. But what are your thoughts: does the FDA need more input or not? I'm kind of a small-government person myself, but I also recognize, as I'm putting these things together, that if I don't have your background, or someone in my organization with your background, whoa, you know, especially with learning models.
I think it's something that we're all grappling with, and not just in healthcare, right? I mean, imagine you start talking about these things with regard to cybersecurity, and the capability to start thinking about them in terms of corporations and the potential for having insider knowledge or a heads-up, all these kinds of weird things that government would normally look after. And that's not even to mention some of the worst-case scenarios in terms of bad actors and misinformation. There are a lot of challenges here, without question. And I don't think I would take a job as a regulator right now, just given how thorny this is. But in the healthcare space, I have to say that the FDA has been looking at AI for a long time, right, in healthcare and where it can be applied.
And I think the FDA started out treating this like software, right? No continuous learning, no retraining. These are all 100% reasonable approaches. But obviously even narrow models are able to learn and get better. That's kind of the reason why we like using these models.
And ultimately, as you point out, monitoring these things: if they go off the rails, if a new pandemic hits, how are we going to have a model that can keep up? Right? And so, to the FDA's credit, this was even back, I think, in the late 2019 range, they had put out sort of a proposal, a draft, to think about how we might do continuous learning. How might that be safe and effective? And just recently, I think in the last month, actually, they've come back with another proposal that was refined, and I do think that they're very tuned into this. Will they get it right the first time? I don't know.
I don't know how you can, because things are moving so fast. But to their credit, I think we are getting to a place where the literacy level in a lot of healthcare institutions is rising. At the same time, the technology is maturing and becoming safer, I think, generally. And I think the combination of those two might enable an environment where, let's just say, for example, a healthcare organization does have the right pieces in place to take models, retrain them on their own data, and manage that process themselves, without having to go back to the FDA every time. They've put forward an idea of a continuous learning plan, or continuous improvement plan, or performance plan, I can't remember the exact wording, but the point is that if you put that proposal forward saying, here's our plan, here's how we're going to use these models and have them learn on our data as they go, that seems pretty responsible to me. Particularly since we know so much more now than we did five years ago, right? We had no idea of the amount of bias that would creep up on us, and the drift, and the problems that we'd run into.
I think we're all pretty wise to that. Are we going to encounter more problems? 100%. I mean, these are still relatively early days if you really look back. But I think the upside is so big that I just don't see us not continuing to plow ahead.
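The monitoring question raised here, how a health system would notice that a deployed model has drifted, can be sketched as a simple post-deployment check. This is a hypothetical illustration, not an FDA-prescribed method: the window size and tolerance are invented, and a real monitoring program would add statistical tests and per-subgroup checks for bias.

```python
def drift_alert(baseline_acc, recent_outcomes, min_n=50, tolerance=0.05):
    """Flag when a deployed model's rolling accuracy falls more than
    `tolerance` below its locked validation baseline.

    recent_outcomes: 1/0 per recent prediction (correct/incorrect),
    e.g. from periodic chart review or adjudicated labels.
    """
    if len(recent_outcomes) < min_n:
        return False  # not enough evidence yet to judge
    recent_acc = sum(recent_outcomes) / len(recent_outcomes)
    return recent_acc < baseline_acc - tolerance
```

A system could run a check like this on a schedule, per site and per patient subgroup, and route alerts to whatever governance process its continuous-improvement plan names.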
Do you see a day when the algorithms are so pervasive, in every aspect of healthcare, that it's almost part of a general consent? Because I also wonder, at what level of understanding do the patient, and the provider of that patient's care, need to know: hey, the decision I'm making on your treatment or your care is based on this? Right now I'm kind of torn, but I also feel like, gosh, it's coming. So I'm going to have hundreds, if not thousands, of these things in play. And is it just going to be part of the general consent process, where you say you're consenting to treatment, and guess what, we've got some of these things in play? Any thoughts there? I'm just curious.
As a patient, maybe it's a generational thing there, too. I really like to have a human that's mine, you know? And if they're using AI tools, I want to be able to look in the eye the person who's responsible for looking at that information, or using that tool, and have them process it through their interaction with me. I don't know if everyone's going to feel that way indefinitely, and certainly the newer generations, without a doubt, are much more technology-first, I would say. But as the number of applications continues to proliferate, I wonder if there's an expectation in general. It's just like when the internet came up: maybe back then, if you saw a doctor using the internet, you'd be like, what are you doing, doc?
But now, someone looks and it's okay, fine, totally normal. We do it every day in our lives. We use GPS, we use Google, whatever we use, right, to get around. I think that these tools are going to completely saturate our society. You've seen the announcements from Microsoft: every application, your email, your PowerPoint, your Word, you're going to be interacting with these models on a daily basis.
So I think there will be a sense of normality, maybe like the cell phone, right, something like the smartphone. I think that's going to be similar to the expectations of our patients as well. However, for me at least, even with all that great technology, I still want to be able to say, who is the person that will help break the news, that will help guide me, that will help answer my questions, in an interactivity that we frankly can't replicate? And I don't want to either, right? I really want that relationship to stay, I guess, special.

Yeah, no, I agree with you. I think for most clinicians, I would say, that's why you got into medicine. Yes, there's this great knowledge base that you like, but touching a patient, being with them, supporting them is key, even for a radiologist, right?
Just kidding. Hey, last question. If you could have a conversation with CMIOs and CIOs across the country right now, what are the one or two things you'd want to make sure they understand about all we've talked about, that they should be preparing for, or thinking about doing, at this point in time?
I will take off my commercial-interest hat just for this part, because I don't want to do that. But I will say that if you have any sort of operational decision-making with regard to data, now is the time. Clearly, before, data was important. We can run analytics. We can understand our business. But I think that's all been sort of table-stakes understanding. Now, more than ever, preparing your health system to be able to take advantage of these tools will require data literacy, to the extent of having your data together, interoperable, and flexible.
Because, again, these tools are only as good as the data that they can access or use, right, in various applications. And what I would hate to see is a health system that is, I guess, constrained to only using solutions that come from other corporations or other health partner institutions.
It would really be nice if everyone could work with these tools on their patient population, with their data. And that requires, again, in these times it's going to be tough, but I can't think of a more important thing to focus your energy on than ensuring that digital transformation, that data readiness, is a priority. Because again, all this stuff is going to come at you practically out of nowhere. And if you're not ready, you're going to put yourself a couple of years behind trying to get there. And again, I think ultimately these tools will really unlock a lot of patient access, a lot of patient care opportunities. So again, getting your data house in order, I think, is priority one.
I like that, because you don't have to predict what's coming next. You didn't have to know that y'all had been working on a generative AI model for the last year; you were just ready, and then whatever tools make themselves available. Matt, man, this has been fantastic. We probably could talk for another hour, but I want to be respectful of your time. Thanks for joining us, and I really appreciate it.

No, thanks, Brett. Thanks for having me, and great work on the podcast. I look forward to listening to future episodes. Absolutely. Thanks.
Gosh, I really love this show. I love hearing what workers and leaders on the front lines are doing, and we wanna thank our hosts who continue to support the community by developing this great content. If you wanna support This Week Health, the best way to do that is to let someone else know about our channels. Let them know you're listening to it and you are getting value. We have two channels, This Week Health Conference and This Week Health Newsroom. You can check them out today. You can find them wherever you listen to podcasts, and on our website, thisweekhealth.com, where you can subscribe as well. We also wanna thank our show partners, MEDITECH and Transcarent, for investing in our mission to develop the next generation of health leaders. Thanks for listening. That's all for now.