Can you believe he said that? I do, and we've been saying the same thing. We have little transparency into the models' training. Gen AI may not give you the same answer twice. Generative AI is probabilistic, meaning it gives you the most probable answer, or in the case of large language models, its best guess at what the next word should be. Regardless, John believes it could be transformative.
Today in Health IT: "Generative AI not reliable yet, says John Halamka." Love that title. Don't you love that title? That is clickbait if I've ever read it, and we're going to delve into the article and see what he has to say. My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, a set of channels dedicated to keeping health IT staff current
and engaged. We want to thank our show sponsors who are investing in developing the next generation of health leaders: SureTest, Artisight, Parlance, and ServiceNow. Check them out at thisweekhealth.com/today. Having a child with cancer is one of the most painful and difficult situations a family can face. In 2023, to celebrate five years at This Week Health,
we are working to give back. We are partnering with Alex's Lemonade Stand all year long, and we had a goal to raise $50,000 from our community. And we've exceeded that goal; on August 1st we passed it, thanks to the generosity of our partners and our community, and specifically the last couple of events. Sarah Richardson and Tressa Springmann were chairs of one, and we were able to raise $5,000 at that event. And Mike Pfeffer and Donna Roach chaired an event
for us, one of our 229 events, and at that event we were also able to raise $5,000. So that put us over the $50,000 mark, and we want to thank them and our partners for being a part of that. If you want to be a part of it, we would love to have you. Go ahead and check out our website; in the top right-hand column, you're going to see a logo for the lemonade stand. Click on that today
to give, and leave a note there as well so we know it's you; we'd love to thank you for that. We believe in the generosity of our community, and we thank you in advance. All right. I fall for clickbait often, and this one's not surprising to me, because John Halamka is great for sound bites. I love having him on the show.
We generally have him on as the first episode of the year, so he can sort of set the tone for what we're going to be talking about in that year. And as you would imagine, he talked a lot about AI with us earlier this year, in January. This article's title is "Generative AI not reliable yet, says Mayo Clinic's John Halamka." And immediately it backs off, right? So, that said: despite its current limitations, and though it will never replace empathy, listening, respect, and personal preference, it's clear artificial intelligence is leading to fundamental changes in care delivery, says the IT innovator. Okay.
The premise for this is that they went back to him because he made this statement: "If your doctor could be replaced by AI, your doctor should be replaced by AI." And they wanted to ask him what he meant by that. And he said, what I meant by that was essentially this: he's a medical doctor who holds a master's degree in medical informatics from Harvard, that's who he is, but a doctor brings empathy, listening, respect, personal preference. No matter what quality and accuracy of generative AI we get to, it is unlikely these generative AI systems will have empathy, will have respect, will have the kinds of things we want from a human. So in effect, if all you have is a doctor who's reading a textbook back to you, you probably don't want that doctor. That's essentially what he was saying. But they go on to ask him about generative AI, and here are some of the things he said.
"AI holds promise and risks." So, Bill Siwicki is the one who did the interview. He writes: let's step back and look at the predictive and prescriptive AI, Halamka explained. The idea is: Hey, Bill, do you have a disease? Will you have a disease? If you do, what do you do about it? Those kinds of AI, you can measure truth, right?
A set of inputs, a set of outputs. What actually happened? Did they work? Did they not? Okay. And this is the point I was making a little bit yesterday: generative AI is a very different beast, because it's not deterministic, it's probabilistic. It goes on probabilities; it doesn't have definitive answers. Okay. So he continues.
And really, all it's doing is predicting the next word in the sentence. Every time you write a prompt, you're going to get a subtly different answer. So how do you assess the quality and accuracy when every time you use it, it's different? Halamka said he could argue that AI of all kinds needs to have transparency.
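That "predicting the next word" behavior is easy to picture with a toy sampler. This is only a sketch of the idea, not how a real model works internally; the vocabulary and probabilities below are invented for illustration:

```python
import random

# Toy next-word distribution for a prompt like "the patient is".
# These words and probabilities are made up for illustration only.
next_word_probs = {
    "stable": 0.5,
    "improving": 0.3,
    "critical": 0.2,
}

def sample_next_word(probs):
    """Sample the next word according to its probability, the way a
    language model picks a likely (not guaranteed) continuation."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Two runs of the same "prompt" can produce different words; that is
# what "probabilistic, not deterministic" means in practice.
print(sample_next_word(next_word_probs))
print(sample_next_word(next_word_probs))
```

Because the output is drawn from a distribution rather than looked up, asking the same question twice can legitimately yield two different answers, which is exactly the quality-assessment problem he's describing.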
How was it created? It needs to have a sense of consistency. And it's interesting; we talk about this all the time on the show, because we've really been looking at this, and there's a reason for it: every time I get together with CIOs, they're looking at this. This is an important topic. We have a belief
that this is going to be transformative, and we want to explore how. But like I've said before, if the thing was trained on Fox News, you're going to get one answer, and if it was trained on CNN, you're going to get a different answer. All right. So it does matter what trains
these models. Now, over time, what's going to train it is the responses and the feedback that it gets. If every time you ask it a question it gives you an answer, and you then ask it a different question, it's going to determine that it didn't really give you the answer that you wanted. And hopefully, because it's AI, and AI systems get smarter with use, they get smarter with
repetition. So the more you give it feedback, the more it's going to say: oh, that wasn't really the answer they were looking for; maybe I didn't answer the question correctly. And it's going to update its algorithms. All right. So he goes on to say: a sense of consistency, that is, every time I use it, it's going to give a sort of reasonable result; and reliability,
that is, I actually feel like I can use this for a given context, he noted. So I think where you are with generative AI is: it's not transparent, it's not consistent, and it's not reliable yet. So we have to be a little bit careful with the use cases we choose. And this is why I like the notes use case that we have, because the notes use case has a human in between the generated
content and the delivery of that content to the patient. Right? So it's going to go to the physician, the physician is going to review it, and the physician is going to send it out. So that is a checkpoint. Now, we can make the argument that doctors are just going to send it out unread. Well, that's their own fault. I mean, if they're not going to check the work, if they're not going to check
the thing, that's their own fault, and we can't really build around that. What we're saying is: hey, this tool is going to generate a note for you; you have to review the note for accuracy. We believe that in a majority of cases this is going to save you a couple of minutes, and because you have hundreds and hundreds of notes in there,
that couple of minutes per note could be, you know, 200 minutes a week, which, I haven't done the math recently, but that's a little over three hours. That's significant, and we believe that's going to reduce burnout. So I like those kinds of use cases. He goes on: AI at the Mayo Clinic. The Mayo Clinic has been working over the past few years with Google on a landmark partnership, and in June the two organizations showcased some of the generative AI use cases they're working on together.
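As a quick sanity check on that time-savings math before we move on (the 100 notes per week is an assumed round number; neither the article nor the show gives an exact count):

```python
notes_per_week = 100        # assumed round number for illustration
minutes_saved_per_note = 2  # "a couple of minutes" per note

total_minutes = notes_per_week * minutes_saved_per_note
hours_per_week = total_minutes / 60
print(total_minutes, round(hours_per_week, 1))  # 200 minutes, about 3.3 hours
```

So roughly 100 notes at two minutes each is the 200 minutes mentioned above, which is indeed a little over three hours a week per physician.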
I've been in academic healthcare for almost 40 years, and one of the challenges with academic healthcare is that every project we do is artisanal, Halamka explained. That is, you get an idea, the innovator talks to the lawyers, 18 months later the contracts are signed, and the work finally begins. It's a very inefficient process.
What we've done over the last three years with this partnership, he said, is template the process, so we go from idea to running code in two weeks. And how does that happen? Well, we took the entire corpus of Mayo data, structured, unstructured, omics, telemetry, images, digital pathology, de-identified it, and moved it to a cloud container. And now it takes almost no time to bring any innovator
into that cloud container to work with that data. And if you want to hear more about that, I've interviewed John several times on the keynote podcasts, and we talked extensively about how they're doing this and why. In fact, he spoke at the JP Morgan conference specifically about this, because they thought it was so transformative
that it was important enough to get in front of investors. So anyway, the title of the article is "Generative AI not reliable yet," but in context, essentially what he's saying is that generative AI, in its structure, in how it determines things, is probabilistic, not deterministic. And because of that,
it is a different model, and we have to look at that, especially as we're evaluating use cases, to determine where this is a good application. Now, there are other AI models; we keep saying there are other AI models that are more deterministic. Those are the models you might want to look at if you want pinpoint accuracy, which we do in healthcare in a lot of cases.
But there are some cases where generative AI can be utilized. And those are, I think, in coding; I think it can be used, again, with checks; I think it can be used for notes; I think it can be used as a front end for some things. Obviously it's being used for transcription and those other things, but I think it can be used as a front end to help simplify solutions on the back end.
Right? We can put a generative AI wrapper around some of these complex tools that we have, and we can see some great benefits. And I'd point you to my friend Reid Stephan and a conversation we had today, where he talks about some of the areas they're looking at for doing that. So, that's all for today.
Oh, and be careful what you read as a headline. I noted that just the other day: I saw a headline, and social media just went crazy, so I decided to read the article, and the headline had very little to do with the article if you put it in context. And I thought, this is what we do now. We put the headline out on social media, people react to the headline, and then they go tell 50 of their friends
how crazy it is, what this company is doing. Or: did you hear what John Halamka said? But in context, what John is essentially saying is that generative AI is probabilistic, and there are other AI models you would use for other things. All right, that's all for today. If you know someone who might benefit from our channel, please forward them a note.
They can subscribe on our website, thisweekhealth.com, or wherever you listen to podcasts: Apple, Google, Overcast, Spotify, and everywhere else it happens to be out there. You get the picture; we try to be everywhere. We want to thank our channel sponsors who are investing in our mission to develop the next generation of health leaders: SureTest, Artisight, Parlance, and ServiceNow. Check them out at thisweekhealth.com/today.
Thanks for listening. That's all for now.