February 6, 2025: Jennifer Owens, Senior AI Program Administrator at Cleveland Clinic, discusses the evolving role of AI in patient care and provider efficiency. As AI scribes redefine documentation and predictive models refine diagnoses, are we truly seeing the impact we expect? How do we measure success—through efficiency, patient outcomes, or clinician satisfaction? And when AI sneaks into healthcare workflows, do we fully understand the models guiding critical decisions? With governance, enablement, and the delicate balance between technology and human judgment at the forefront, this discussion dives deep into AI’s future in medicine and what it takes to move from innovation to implementation.
Key Points:
Donate: Alex’s Lemonade Stand Foundation for Childhood Cancer
This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.
(Intro) Formally or informally, clinical user acceptance testing is going to be a really crucial step in AI adoption, because all the statistical and clinical validation in the world doesn't matter if your end users don't trust it.
My name is Bill Russell. I'm a former CIO for a 16 hospital system and creator of This Week Health, where we are dedicated to transforming healthcare one connection at a time. Our keynote show is designed to share conference level value with you every week.
Now, let's jump right into the episode.
(Main) All right. It's keynote. And today we're joined by Jenny Owens, Senior Artificial Intelligence Program Administrator with Cleveland Clinic.
Jenny, welcome to the show.
Well, I'm so honored to be here. Thank you so much for inviting me on the show. I'm really excited to chat with you.
Well, I'm looking forward to the conversation. Most people now, when I talk about artificial intelligence on the show, they apologize. They're like, man, I'm sorry.
Yeah, absolutely. So a little bit of my background: I had kind of a wandering career path. I moved from basic sciences into clinical research when I started at the Clinic, you know, mumble years ago. I procured specimens, did some data entry, and was talking about clinical research, and kind of looked at that from start to finish.
So I really had a grounding in the laboratory and that interest in data has kind of driven me through the rest of my career. I was previously in IT and got handed a project to kind of start shaping when our chief digital officer had mentioned, you know, I want to focus on three specific areas for our digital strategy.
I started working on it with a bunch of people, including a colleague who has just joined us. His technical expertise is amazing. His perspective, because he comes from outside of healthcare, is really interesting, and watching him interact with a different regulatory environment and a different set of data expectations has been so interesting. There's tons to learn from him.
So I'm really excited to be here.
So, your responsibility: what are you focused on primarily for the Clinic? What do outcomes look like for you?
We have an AI governance group set up. So I'm working on iterating with that, trying to clarify the role of governance, trying to streamline some of the processes so that somebody bringing in an AI idea has a clear idea of what they need to present and what their next steps are. The other thing that I'm really focused on is enablement.
And I'm focused kind of on the low end of the funnel. So once you have a model that's been developed, that's been validated, you've had access to all the data, then literally, what do you need to do to get it into practice in the healthcare ecosystem? Who needs to be receiving that information?
What are they doing with it? What do we need to make sure we've got on the IT side so that it's supported? Those are my two main areas of focus.
One of the questions I get asked probably the most, and you're the best person to ask: we're evaluating so many things, right?
What's the validity of those models? What are those models from a transparency standpoint, and all of that? But I'm also getting asked, what's the return? What's the return for the clinician in terms of quality? What's the return for the clinician in terms of experience and efficiency?
What's the return for the patient in terms of outcomes and health? I'd love to explore those topics with you a little bit.
Boy, I feel like I could talk with you about this for several hours, honestly. So whenever we're looking at a new product that's going to enter into our clinical ecosystem or operational ecosystem.
I kind of look at three separate things, and that applies whether this is a vendor product that they're trying to sell to us or whether this is a model that we've developed in house. The first one that we want to think about is: does it do what it says on the tin? Does the model actually predict what it claims to, say, risk of chronic disease?
The second one is: do you see the impacts, operationally, that you expected to see? So if you're putting in, for example, a sepsis monitoring algorithm, do you actually see that patients with sepsis are getting their antibiotics earlier? Are they getting the care that they need earlier?
Are you seeing mortality rates decline? For the example of, like, a digital scribe, are you actually seeing notes completed within 24 or 48 hours after an ambulatory encounter? Are you starting to see fewer days to close the encounter as you're looking at the data? Are you seeing that doctors are spending less time in their notes outside of their clinical hours?
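To make that kind of scribe measurement concrete, here's a minimal illustrative sketch. This is not Cleveland Clinic's actual tooling; the record layout, field names, sample values, and the 48 hour threshold are all assumptions for illustration. It just shows how baseline versus pilot note closure and after-hours documentation time could be compared:

```python
from statistics import mean

# Hypothetical per-encounter records from a scribe pilot (field names are assumptions).
# close_hours: hours from encounter end until the note was closed
# after_hours_min: minutes the clinician spent in notes outside clinic hours
baseline = [
    {"close_hours": 60, "after_hours_min": 45},
    {"close_hours": 30, "after_hours_min": 50},
    {"close_hours": 80, "after_hours_min": 40},
]
pilot = [
    {"close_hours": 20, "after_hours_min": 25},
    {"close_hours": 44, "after_hours_min": 30},
    {"close_hours": 12, "after_hours_min": 20},
]

def closed_within(records, hours=48):
    """Fraction of encounters whose note was closed within `hours` hours."""
    return sum(r["close_hours"] <= hours for r in records) / len(records)

def avg_after_hours(records):
    """Mean minutes spent in notes outside clinic hours."""
    return mean(r["after_hours_min"] for r in records)

print(f"Closed within 48h: {closed_within(baseline):.0%} -> {closed_within(pilot):.0%}")
print(f"After-hours minutes: {avg_after_hours(baseline):.1f} -> {avg_after_hours(pilot):.1f}")
```

In practice these fields would come from EHR audit-log exports rather than hand-entered dictionaries, and, as the discussion below notes, the deltas would be broken out per clinician group rather than pooled into one number.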
And the third is: are your end users seeing that impact, feeling that impact?

Yeah, for those notes it was pretty interesting, because we've had a couple of people on the show, and they said, yeah, we turned on the ability to respond to the inbox and that kind of stuff. I said, well, is it saving the amount of time you thought? And they're like, no, not really. It varies based on clinicians.
Some of them are so highly efficient at what they do that it actually slows them down a bit. Some people just end up deleting, which gives them another step before they can type the note in. And it's interesting: the experience is not uniform across the population, across the groups of people that are using it.
And then sometimes it's just hard to measure and get to that number. We want to get to one number and it's probably a chart of numbers like it's better for these clinicians, but not for these. How do you have those conversations?
Where do those conversations go?
It's really important to be able to quantify: hey, is this having the impact that we thought it was going to? And then to be able to solicit those narrative stories as well. I want to be careful here as I caveat, because I've spent a lot of time in the AI scribe space and I've talked with a lot of other institutions.
So my opinions here are my own and reflect a lot of different expertise, not just my experience at Cleveland Clinic. But what's really interesting is exactly the point that you called out, right? A tool that is tuned for, like, a primary care practice may not do very well for a cardiologist who's seeing patients in an outpatient setting.
And so, ideally, if you're going to be piloting something, if you're going to be looking at the impact, you're going to be doing some standard surveys to make sure that you're actually able to measure how clinicians are feeling about the impact of this tool on their documentation experience, and be able to marry that with what they're actually doing. I've heard people say, this has made my life so much easier. I've heard people at other institutions say that they're delaying retirement because these tools are really helpful. And then I've heard people say, you know what, the way this tool is implemented, it does not cut time out of my documentation experience. It adds time. It's introducing some bloated language that I then have to edit.
And that takes me more time than using my smart phrases or, you know, my template that I already have with all of my dropdowns. It's really helpful to keep a really close pulse on your tool as you're implementing.
While you were talking, I was listening, but my scribe was taking notes.
People are talking about AI assistance with everything, and quantum computing. How do you manage the noise? There's just so much noise out there. How do you get people focused? I mean, you sort of mapped it out before: hey, can we baseline this? Are there quantitative measures? Are there subjective measures, where we can do interviews and collect that data?
How do you stay in front of this, and how do you keep the conversation productive instead of where it ends up a lot of the time, which is ethereal?
I mean, there are a lot of things that one can do, and you know as well as I do, Bill, the amount of data that we generate in healthcare is so tempting.
I don't know, let's say my call centers aren't great. It takes patients a really long time to get through to make an appointment. And then once they make an appointment, maybe they're waiting a long time to get to the appointment, and they don't know that they have required testing before that.
This is a problem that a lot of health systems are grappling with, but there are a lot of other points for improvement in that workflow that are not necessarily artificial intelligence problems to solve, right? These are workflow problems. There are decision trees that could maybe be made simpler.
So thinking really clearly about, A, what is the problem that you're trying to solve, and then B, whether that solution really needs to be artificial intelligence, and what can be solved with the other tools that we have available to us.
Just flat out automating things, getting good quality data, and addressing workflow.

That's the shortcut people want to take. Scribes are the most prevalent thing that people are talking about. The ability to listen to a conversation and essentially create the transcript of that conversation, I mean, we now have 12 year old kids who are doing it. They just take the models that are out there and pull them in.
The ability to make a good quality SOAP note, if you will, again, is fairly easy for most of the large language models, if you give it the right input and the right approach. When you're looking at the scribe solutions, are you looking at it from a platform perspective? This is a platform for voice across our system.
Or is this solving a specific problem for the clinicians?

So kind of both. And I realize that's an annoying answer, but I'll explain a little bit. So when we're,

I've been in healthcare for a long time. It's a very common answer, because it's a complex environment.
Yeah. So when we're looking at scribe, we're looking for a solution in the first place for a very clearly defined problem, right?
Ambulatory encounter documentation, we feel, is taking too long and is taking away from the provider experience. So if it doesn't succeed at that gate, then I kind of don't care how well it replaces dictation, how well it does other things. If it cannot solve the problem that we asked it to solve, then it's not the solution for us.
So I would also be thinking carefully about my vendor. How does my vendor feel about being asked to push their product outside of its limits? Are they excited about that, or do they not love that?
The integration with your existing systems, how important is that for the AI tools that you're bringing in?
I will say that the literature is quite clear on this. A non integrated scribe tool in particular does not save providers time. I would be very curious about a workflow that moved some of the documentation workflow out of the EHR, and then had providers literally lift and shift it back in, and I'd be very curious to see how that was saving people time and effort.
What specific pain points do you feel AI is uniquely positioned to address within healthcare today?
AI is really good at working with language in various ways, for various formats. It's also great at ingesting data at scale. So I would put use cases in those two buckets: lots of language based use cases, and lots of large data use cases.
A huge thanks to our partners Parlance, Rackspace, and SureTest for making it possible. Join us and let's make a difference together. Visit ThisWeekHealth.com to learn more.
Is AI taking our predictive models to the next level, or are they just enhancing them slightly? Are they incremental gains?

I think we're going to see a real blurring of the distinction between a predictive model and an AI enhanced predictive model, because I feel like this is a continuum on which our data scientists are going to continue to move.
Talk to me about integrating AI with existing clinical workflows.
I mean, how? Obviously, that's critical. What's the challenge in doing that? We always talk about transparency of these models, and if it's going to be used in the clinical setting, that's so important. And in a lot of cases, for these models, that's their, I don't know, that's their secret sauce.
That's their intellectual property, if you will. I mean, how do you strike that balance between their intellectual property and our need to understand how these models are making the determinations that they're making?
Yeah, because if you don't understand how the model is ingesting data and working with it and producing its particular output, then it makes it a lot harder to kind of call horse feathers on the output if it feels like it's not aligning with your clinical judgment.
You have to surface enough of the process, but in a way that your end user is going to be able to not only understand, but also digest on the timescale that they may have to really look at this output. Maybe click a thing for more information and kind of understand that. I think it's an area of opportunity, honestly. And I think, formally or informally, clinical user acceptance testing is going to be a really crucial step in AI adoption, because all the statistical and clinical validation in the world doesn't matter if your end users don't trust it.
You know, I live in Florida. At some point you have a health conversation with almost everybody you run into. And one of the questions I keep getting is, how long before this gets down to the patient? Like, the patient feels a difference. You talked about call center technology and those kinds of things.
You wait on the phone, whatever. That's just call trees and decision trees and that kind of stuff. And I'm trying to explain to them, having a conversation with AI feels an awful lot like having a conversation with a person, and that line is getting blurred more and more. I'd love to hear how it's going to impact the patient, or where they're going to feel the impact first.
For a patient, it might look like a reduced time to an appointment. If you're on maybe a diagnostic journey, maybe it's a reduced time to diagnosis, right? If we're able to take advantage of more lab tests, if we've got AI doing a second read on your imaging. I'm thinking in the very short term future here. If you're inpatient, there's a lot of AI that might be brought to bear on your care.
You know, we talked about sepsis risk prediction., we may be looking at some monitoring of hospital acquired infections. We might be looking at, ways to manage floor staffing and, are you at a regional hospital or are you at the main hospital? I think a lot of patients are going to feel a lot of it, but they may not know that they feel it.
Just like there's a lot of stuff that if we brought it onto the market today, it would be called artificial intelligence. It's just not called that because it was developed earlier. You know, I'm thinking about a lot of the innovations in imaging.
Like the second read. And as a patient, you may never know it was really the AI that said, hey, take a look at this. They probably will experience something to that effect. I want to give you the floor and say, what's top of mind? What conversations are you seeing out there, or conversations that you like wading into?
Oh my gosh, there's so many. I think the conversations that are really interesting to me are the ones where we get really into the nitty gritty of what is the impact that we're expecting from any particular tool, and how we would know if we felt that. As much as I want everything to be quantifiable, it's not always quantifiable. But to really understand, like, if you're looking at an inpatient tool, can you walk the floor? Can you watch people go through their rounds? Can you hang out at the nurses' station and watch as they go from room to room, and try to understand what are all the factors that are going into this particular solution, which pieces of data are flowing into it?
There's an interesting tension in what we do, because on the one hand, a doctor or a nurse who is trying to document care on a patient is trying to minimize the amount that they are interacting with the electronic health record. They want fewer clicks, they want fewer boxes, they want a streamlined workflow.
And then on the other hand, you have this enormous appetite for data and discrete data, which requires clicks and dropdown menus and this and that. And so the tension between those two is always going to be really interesting for me to explore.
We were trying to build a clinically integrated network in California.
If you know anything about California, one of the biggest challenges there is you don't employ those physician practices. They're all part of external entities. And so they said, well, we want to manage this clinically integrated network across this group of patients. And, oh, by the way, and this was not an exaggeration,

One of the big things around AI is this whole concept of being able to process all of that data. And that is going to have so many significant downstream effects. I mean, research is going to be better, and we're going to be able to identify risks within the data itself. We're going to be able to accept more data if technology can process it. I can't tell you the number of conversations I've had with doctors who are like, I don't want your Fitbit data.
I don't want your Apple Watch data. But more and more you're seeing Apple go, we're going to get FDA certified on this and this and this. I'm like, well, then that becomes real medical grade data, and I'm not sure why I wouldn't want that in my medical record. And I don't know, maybe as a patient I'm signing off and saying, look, I'm not holding you liable for this, but I want this to be part of my medical record.
Like, I think a lot about the consumer grade genetic kits that were all the rage a few years ago, right? Everybody could get, you know, their Helix or their 23andMe, and that data is still out there, and it's still interesting and useful. Like, what if you could bring that into the health system and have,
I don't know, like an AI trained by some board certified molecular pathologists scroll through your Helix results and say, hey, maybe we'll bump up that recommendation to see a cardiologist by a couple of years? That'd be cool.
My internship was at M&M Mars. And I remember thinking, I think this is going to change the game. And if we were to imagine the phone we're carrying around today and all the things we can do with it, versus the phone that came out when it was first released, it's very different. I think the same thing is true with AI. We go, oh, it'll never do this.
It'll never do that. But I'm not sure we know what it's going to do. I mean, the PC did a lot more than spreadsheets. Where do you see it going? How do you see it evolving? And not only from a technology perspective, but in how we view it as a society. You know, self driving cars.
I've talked to people who are taking them. I'd rather have Waymo driving than my grandmother driving.
I mean, that's a fair point. The thing that's interesting to me about the self driving cars in particular is, I keep up on my reading. I'm aware of the stats. I know they're safer. But it just doesn't feel the same. That tension right there, I think, is what's going to drive a lot of our adoption and our innovation in the future.
But you're the quantitative person, right? So I can pull up stats for accidents today and accidents that lead to death, and, like, today zero will be caused by autonomous driving and, like, 10,000 will be caused by humans. Yeah, but tomorrow, one will be autonomous driving, and it will be a fatal accident.
There will still be the 10,000 caused by humans, but there's going to be that one, and that's one too many.
Where's your headline going to go? And we come up against this a lot in healthcare AI too: how good does it have to be? Does it have to be better than a first year medical student? Does it have to be better than a fourth year medical student?

And I think self driving is a great example of that. And I think healthcare is another example of that, because it starts to feel like your standard philosophical example, right? The trolley problem. If you don't pull the lever, you're going to hit five people. If you do pull the lever, you're going to hit one.
But if you pull the lever, then you're responsible, right? If I put this AI model into existence, then I'm responsible for the harm that it causes. Whereas, you know, people getting in accidents on their own, I didn't cause that. It's interesting to think about. I think three things are really going to drive the future of healthcare AI.
One of them is going to be the technology, right? And obviously the technology is moving very quickly. And I think where it's going to really come into play is a lot of secondary findings and safety net type stuff. Like, I might go in for my annual physical and be like, oh, did you realize you have a family history of, you know, colon problems?
Maybe we should screen for that earlier. The care is still there and the judgment is still there, but the actual running down of the list and the differentials, maybe some of that is there for them to interact with. The second thing, I think, is going to be the capacity of the health system to deal with that. If I go in, for example, for an eye appointment, they image my eye and they're like, ooh, it looks like you might have some cardiovascular disease.
Can they then get me in to see a cardiologist? Will I be able to be maintained within the same health system? And will health information exchanges be a barrier to making use of all of a particular patient's data?
Interesting.
And then the third one is adoption and how, and how patients feel about it, right?
, do I like that? Do I want to go in for my eye appointment and have my whole self taken care of? Or do I want to go in for my eye appointment, have my eye appointment and leave and not worry about it?
And then we talk about it like it's one thing. Oh, it's healthcare. But there are different risk tolerances across the health system for different things. Like, if you're going to put AI in the cafeteria, blah, blah, blah, my risk tolerance is a lot higher than in the OR, or any kind of diagnosing or whatever.
But we also read about these studies. I'm putting you on the hot seat here. So we read about these studies where they have physicians diagnose, and they do the studies based on cases where we already know what happened, right?
It's retrospective. But we use doctors and we say, okay, diagnose these things, then we use AI models, and the AI models are, you know, predominantly better. Not 100 percent, but better. But then there's the one that everyone's talking about right now, where physicians were at one level and AI was at the highest level.
It just is. They're known for being better doctors, most academic medical centers. If I get cancer, I am not going to any of the hospitals around me. I am getting on a plane and I'm going somewhere. Well, if I can train the models with that level of expertise, doesn't that elevate the doctors down the street from me to actually come up with a better diagnosis? Because they're going to do the same blood tests.
And it comes out with a better diagnosis.

So I want to respond to both of these, because I think they're both really interesting. In the first place, anytime you see a paper that comes out that pits physicians against, like, ChatGPT for diagnosis, look at two things.
Look at the sample size. And look at what they're actually doing, right? Are they doing, like, the New England Journal of Medicine case studies? And then ask yourself, okay, well, when was the last time I asked my doctor what their score on the USMLE was?
That's sort of my point on the second one, which is, we don't know. But I do know that I'm going to get better care at certain facilities. I mean, that's well documented. I'm going to get better care at certain facilities than at others.
Absolutely. And so I do think that artificial intelligence offers us an opportunity to do knowledge sharing and education and training.
Or a technological support tool.

I'm just hopeful. I want to keep living here, because I really like living here. Yeah. I think it was the head of Walmart. Walmart has all this data. They're the largest employer in the country, or at least they had been for many years. And they essentially were self insured.
So they had just tons of statistics, and they were able to say, hey, look, a number of our cancer patients received either the wrong diagnosis or the wrong treatment plan. Because what Walmart would do is they'd sign deals, with you and others, whatever.
If somebody presents with a certain diagnosis, they actually fly them somewhere else for a second opinion. And they're partnered with cancer centers, they're partnered with others. And so they were able to say, look, here are just the sheer numbers, the number of people receiving the wrong diagnosis or treatment, across a broader spectrum. Sometimes we just don't have the expertise there.
Yeah. I think my hope is that as we start to get some AI support for diagnosis and for treatment plans, right. If we're thinking again about the short to medium term future, I would hope that we could also get it to take into account some of the patient autonomy.
That might result in a different path than what is strictly medically indicated, right? You spoke about cancer treatment. There are a lot of decision points that go into your cancer treatment, and what is absolutely clinically recommended may not be the best fit for that patient. I think it'd be really interesting to start to see some patient voices in some of this as well.
And that's really exciting: people actually taking ownership of their own health.
What a novel concept that would be. But yes, that would definitely change the game. Jenny, I want to thank you for your time. I want to thank you for the work you're doing and I look forward to keeping in touch and hearing how things progress.
I'm so honored to have been able to speak with you today. Thank you so much, Bill.
Thanks for listening to this week's keynote. If you found value, share it with a peer. It's a great chance to discuss, and in some cases start a mentoring relationship. One way you can support the show is to subscribe and leave us a rating. We'd love it if you could do that. Thanks for listening. That's all for now.