The combination of voice, ML, AI, and NLP has the ability to make the computer disappear from the exam room. Nuance demonstrated it, and Joe Petro explains it in this discussion from the HIMSS floor.
This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.
Welcome to This Week in Health IT, where we discuss the news, information, and emerging thought with leaders from across the healthcare industry. This is Bill Russell, recovering healthcare CIO and creator of This Week in Health IT, a set of podcasts and videos dedicated to training the next generation of health IT leaders.
This podcast is brought to you by Health Lyrics, helping you build agile, efficient, and effective health IT. Let's talk. Visit healthlyrics.com to schedule your free consultation. We are recording a series of discussions with industry influencers at the CHIME HIMSS 2019 conference. Here's another of these great conversations.
Hope you enjoy. Great. Here we are from above the HIMSS floor. We've actually elevated our game, gone one step up. We're at the Nuance booth, and we're here with Joe Petro, who is the CTO for Nuance. And as I keep saying, I love getting the CTOs in the room so that we can have in-depth conversations, and as we've heard from our audience, they wanna hear more about voice.
Mm-hmm. Voice is one of those things that I think was an emerging technology a couple years ago, and now it's, okay, where else can we use it? I mean, we've been doing notes and dictation for a while; now they're saying, hey, where else can we do it? So what are you guys showcasing here? Talk a little bit about, you know, what's the big announcement for Nuance at this show.
Sure. So downstairs we have what's called an experience room, and the name of the product, or the solution that we've created, is called Ambient Clinical Intelligence. Fundamentally, it's almost a combination of everything that the company has been doing literally over the last 20 years.
So it's a combination of a hardware device, which basically listens to the patient and doctor conversation, and as it's actually listening, it turns that into a transcript and it diarizes the speech, which means it splits it up: it separates out what the patient is saying versus what the physician is saying. And then it starts to derive meaning, and this is where the intelligence part comes in. It provides the physician with feedback as the conversation goes on, in real time, at the point of care. We're extracting facts and evidence, and we're creating documentation. So it automatically generates the documentation, and it allows the physician to stay engaged with the patient without turning their back to go to the computer.
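The pipeline Joe describes — diarize the conversation into speaker-labeled turns, then mine the turns for facts — can be sketched in a few lines. This is a toy illustration only; the `Turn` structure, labels, and symptom terms are assumptions for the example, not Nuance's actual output format or extraction logic.

```python
# Toy sketch: a diarized transcript splits one audio stream into
# speaker-labeled turns, which downstream NLP can then mine for facts.
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str   # "physician" or "patient"
    text: str

def extract_symptoms(turns, symptom_terms=("pain", "swelling", "stiffness")):
    """Toy fact extraction: collect symptom mentions from patient turns only."""
    facts = []
    for turn in turns:
        if turn.speaker != "patient":
            continue
        for term in symptom_terms:
            if term in turn.text.lower():
                facts.append(term)
    return facts

visit = [
    Turn("physician", "What brings you in today?"),
    Turn("patient", "I've had swelling and some pain in my right knee."),
    Turn("physician", "Any stiffness in the morning?"),
    Turn("patient", "Yes, stiffness for about an hour."),
]

print(extract_symptoms(visit))  # → ['pain', 'swelling', 'stiffness']
```

The point of the diarization step is visible here: without speaker labels, the extractor couldn't tell a patient's complaint from a physician's question about the same symptom.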
So it's super exciting stuff. So what are we talking about? I mean, we're talking about voice recognition. We're talking about, really, machine learning, AI. Mm-hmm. We're talking about, is NLP involved, natural language processing? Yep. Yeah. What other things am I missing? I mean, you're really hitting all the buzzwords and actually doing it.
It hits all of it. And that's the amazing thing. We believe that this really is the future destination that the company has been pointed at in a natural way, in terms of all the different products we've created over the last 15 or 20 years. And the vision that we actually create by bringing folks through that experience — it just speaks to literally everything that we brought together. So, starting from the conversations — 'cause our doctors used to always say, my patients hate this: typing away while asking, hey, how you doing? How are things going? Yeah. So now we're gonna be able to put this in the clinic.
They're just gonna be able to have conversations. Are the notes going into the EHR? Yeah. So it basically enhances and augments the experience inside of the electronic medical record. One of the things that we say for our product Dragon Medical One — that's a cloud-based speech product —
Right. About 25% of the physicians in the US are on it now, and about 50% of the physicians will be on it over the course of the next couple of years. So it's a very prevalent product. What we like to say is, we turn the chair around, right? And that's one of the values that we actually bring to the physician-patient experience just with regular speech.
This even takes it a step further, so there is no chair, right? We're basically listening to the conversation as it's transpiring, we're deriving meaning from that in real time, and it's all happening at the point of care. And so the idea is, once that note is created, the physician will review the note, and then that will automatically get journaled into the electronic medical record.
And so it'll relieve the physician from a lot of the typical burden that they actually feel through the documentation process. And it's exciting. We're starting out with a set of specialties. There's an ortho demonstration that we're actually doing downstairs where the patient is experiencing a condition in their knee.
And then we'll expand through a number of different specialties, like ear, nose, and throat, ophthalmology, et cetera. There are like 18 different specialties, and we'll eventually migrate our way up to a general medicine type of scenario. So is that just because you have to get the language down for those specialties?
Yeah, it's a good point. It's not so much the language, it's the narrowness of the problem, right? So by narrowing it to a very specific domain — like ortho is very prescriptive in terms of the dialogue that the physician generally has with the patient and what they document, right? It's kind of the simplest form of a patient-physician encounter. And so by starting out with the simple form and narrowing it, it kind of, in an artificial way, imposes this condition where the accuracies and all that stuff go up, where the technology is more capable of solving the problem.
And if you think about how speech evolved, we started with something called interactive voice response. It was just menu system navigation. In the early days, when speech wasn't very accurate — it was only like 75% accurate — by narrowing it to a really constrained vocabulary, the accuracy went way up. It went up to like 95%, and businesses were created around that. That continued to exploit the technology more and more, until the general accuracy kind of tipped up over 90%, and then it turned into long-form dictation, where the physician could actually just talk for 15 minutes and then do editing. It's interesting — this is an example of that sort of exponential curve.
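The arithmetic behind that narrowing is worth making concrete. Per-word accuracy compounds hard against longer utterances, which is why early systems had to constrain the vocabulary. This sketch uses the percentages mentioned above; the independence assumption between word errors is a simplification for illustration.

```python
# Illustrative arithmetic: why per-word accuracy compounds against
# long utterances, and why constraining the task helped early systems.
def utterance_success(word_accuracy: float, n_words: int) -> float:
    """Chance every word in an n-word utterance is recognized correctly,
    assuming (simplistically) independent per-word errors."""
    return word_accuracy ** n_words

for acc in (0.75, 0.95):
    print(f"{acc:.0%} per word -> 10-word utterance fully correct "
          f"{utterance_success(acc, 10):.1%} of the time")
```

At 75% per-word accuracy, a ten-word utterance comes out fully correct only about 6% of the time; at 95%, about 60% — which is the gap between unusable and commercially viable that the constrained-vocabulary era exploited.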
'Cause back in the day, you know, you had Dragon NaturallySpeaking and you were trying to navigate. Yeah. And eventually you threw it away, 'cause you're like, ah, it's just quicker to click on the mouse. Yeah. But we've gotten to a point now where it was incremental, incremental, incremental, and now with machine learning and AI you just see it go through the roof.
Where else is voice gonna be used? Are we gonna see it being used in the OR for checking the record and those things? Yeah, I think the way this is gonna progress — part of Ambient Clinical Intelligence is something we call virtual agents. And this is similar to the type of stuff that you actually use on your phone when you ask what your calendar looks like and you ask for the weather — information retrieval use cases.
We believe that's gonna be the pathway between where we are, in terms of just general speech and general kind of clinical decision support, and that next phase where you're actually engaging with the system and you're skipping most clicks, right? So let's say you wanna pull up abnormal lab values on a patient.
Just being able to say, let's look at their abnormal lab values, and have it instantly pop — that saves the physician X number of clicks. And they might do that 30 or 40 times a day. Stack that up with a bunch of different — we call them "show me" use cases. Like, show me the growth chart. Show me where the patient sits on the growth chart.
Show me the patient's last procedure. Show me the patient's imaging study. It just short-circuits the distance between the physician and the information, and they can make it a very interactive experience, because these treatment rooms and so forth — kind of the treatment room 2.0 — they all have large-screen TVs in them, and they're all kind of high-fidelity experiences.
And so the ability to kind of leverage that screen, uh, leverage the electronic medical record with all the information that it has in it, and then using voice to kind of control that experience, we believe there's gonna be a natural migration from where we are through virtual agents to ambient clinical intelligence.
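Those "show me" use cases boil down to mapping a spoken phrase onto an EHR lookup so one utterance replaces a chain of clicks. Here's a toy sketch of that routing; the intent phrases, record structure, and handlers are all invented for illustration and don't correspond to any actual Nuance or EHR API.

```python
# Hypothetical sketch: routing "show me" utterances to EHR lookups so a
# spoken phrase replaces a chain of clicks. Intents and data are invented.
EHR = {  # stand-in patient record
    "labs": [("potassium", 6.1, "HIGH"), ("sodium", 140, "normal")],
    "imaging": ["Right knee X-ray, 2019-01-12"],
}

INTENTS = {
    "abnormal lab values": lambda: [l for l in EHR["labs"] if l[2] != "normal"],
    "imaging study": lambda: EHR["imaging"],
}

def handle(utterance: str):
    """Route an utterance to the first matching intent handler."""
    for phrase, handler in INTENTS.items():
        if phrase in utterance.lower():
            return handler()
    return None  # no intent matched; a real system would ask for clarification

print(handle("Show me their abnormal lab values"))
# → [('potassium', 6.1, 'HIGH')]
```

Real virtual agents use trained intent classifiers rather than substring matching, but the shape is the same: utterance in, structured EHR query out, result straight to the room's screen.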
Yes. So you're hitting both the consumer experience and the clinician experience. Yep. Is there anything that you're really focused in on in the consumer experience specifically? Not in this business, not today. But if you look at Nuance, we've got a large footprint in enterprise as well as auto.
Okay. So we're in like 500 million cars with that voice experience — speaker diarization, what we call gaze detection. So we've got technology in a car nowadays where you can look at something — you can look at a store and you can say, what are the hours? And it knows that you're actually looking at the store.
It figures it out relative to the GPS, relative to where the gaze is actually going, and it will actually retrieve the information off of the internet. That's completely a consumer experience. Wow. And we're in essentially every major car brand that you can imagine. And then on the enterprise side, we've been using virtual agents on the enterprise side
for, well, forever basically. And it's all about engaging with the enterprise. So if you call up any major airline, any major bank, and you've got an agent — an intelligent agent, an electronic agent — asking you questions to get you to the right information, that's generally us as well. And we've taken that technology and all the battle-hardening associated with it,
and we've migrated it to the healthcare use case. And this is what has allowed us to really accelerate, 'cause if we had to develop that from scratch — we call that conversational AI. It would be interesting: conversational AI on top of a call center, where I call into a health system and interact with my medical record.
And you know, my father-in-law's a great example, 'cause he's 87 years old, he now lives with us, and I take him to the urgent care center. Mm-hmm. 'Cause he just moved here and he's getting a new set of doctors. And they said, well, what meds are you on? Oh yeah, right. He pulls out this long list. Well, that's how a lot of older people carry around their — Yeah, absolutely.
Their med list. And there are times where, you know, we need to verify something or whatever from his health system. It would be nice to do that. And not all these health systems are connected, right? So you have to go back. We have some international listeners, mm-hmm, and, you know, this conversation's in English, so I assume a lot of the stuff we're talking about is in English.
Mm-hmm. What other languages are you in? We have a large variety of languages. So, you know, our business goes beyond healthcare. So when you're doing orthopedics, are you doing it in multiple languages at the same time? We can do it in multiple languages at the same time. This, of course, will be released wherever English is the language of medicine.
Right. Okay. So it can be easily brought from the United States into places like the UK, even some Asian places where English is the language of medicine. But we've got solutions in just about every language that you could possibly imagine. We've got great business in, you know, France, Germany, the Nordics, Australia, obviously the UK. It's a major expansion opportunity for us, and so we're really serious about the business. And I won't say that all the solutions are language-agnostic, but they're agnostic enough so that if we get serious about a market, you know, we can go there.
Wow. Biggest obstacle at this point? It's just time. I mean, it's time and money. You don't know how quickly some of these problems are gonna yield, right? And I appreciate what happened in the early speech days, because when speech was 75% accurate, our company continued to throw money at the problem until it yielded.
And there were a lot of people, I'm sure, from the outside that were looking at this like — when you're running at a 25% error rate, that means like one in four words is wrong, right? But what they saw was the opportunity to narrow the problem. And so there's a lesson that we should take from that: by kind of changing the definition of the problem and moving the technology to someplace where it works.
That creates a commercial opportunity, so that you can keep the business going as you kind of grind through that resistance from a tech point of view. So we'll see how fast the problems yield. We've got some human assistance and human augmentation — we're in virtual scribing, for example.
We're using the virtual scribing to collect data and train the models, 'cause that helps us create the truth sets. So it's interesting. It's gonna take some time, and it's gonna take a bunch of money, 'cause the error rates in medicine have to be really low. Yeah, and this is the pressure that technology actually puts on the physician.
And this is just kind of a sign of the times. I mean, the physician is responsible for delivering quality of care at a high level of accuracy, so we can't be putting stuff in front of them that creates even more problems from that point of view. So we're very thoughtful: as we roll out solutions into prime time, we have to be very engaged with what the physician is going through and make sure that we're not disrupting that experience any more than we have to. Well, as a CIO for a health system, it was interesting, because we used your product throughout our clinical setting. And there were people who trained it and people who didn't. So we had a team that was just dedicated to training them how to use it, because we knew that the benefit was so high.
Yeah. Because the satisfaction rate of the physicians that were using it was really high. Mm-hmm. But there was a whole bunch of people that used it, saw one word wrong, and said, I don't trust it, I can't use it. Yeah. But we knew it worked, and we knew it worked effectively. So we utilized those clinicians to train other clinicians and get them up to speed. Yeah, a hundred percent. One thing that's changed relative to that, by the way: because everything's multi-tenant and it's in the cloud now, we can go into one of our clients and say, this is what your physician population looks like, this is how they're using the product.
Some of them are using macros, some of them are using advanced commands, this is what their correction rates are. And we can sit down and have a really informed dialogue with them relative to, what are your very best physicians doing, and how can we take that best practice and actually apply it to the rest of the physician community?
And that's super helpful, because one of the questions I always ask when I'm meeting with a client is, you know, what's your adoption rate? So they might say 75%. Okay, so there's a category that we need to revisit: what are the 25% doing? Why isn't the tech there? But then within the 75%, how many of them are using it efficiently?
With high productivity? There's an opportunity there as well. So we use this to kind of continually improve and make the client successful. That's one of the first big data projects we did — we collected all the interactions with the EHR, and we identified the physicians that were struggling with the EHR just based on the raw data that was coming back to us.
And this is the advantage of the cloud, the advantage of big data, the advantage of machine learning — we identified these patterns. Yeah, a hundred percent. This is fantastic. Thank you very much for your time. I really appreciate it. I appreciate it. Keep up the good work. Thank you. Thanks. I hope you enjoyed this conversation.
This show is a production of This Week in Health IT. For more great content, you can check out our website at www.thisweekinhealthit.com or the YouTube channel at thisweekinhealthit.com/video. Thanks for listening. That's all for now.