How do you deploy ChatGPT in your healthcare environment safely?
Today in Health IT: deploying ChatGPT in your environment. Really not just ChatGPT, large language models in general, but you get the picture. My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, a set of channels dedicated to keeping health IT staff current and engaged.
We want to thank our show sponsors who are investing in developing the next generation of health leaders: SureTest and Artisight. Check them out at thisweekhealth.com/today. Having a child with cancer is one of the most painful and difficult situations a family can face. In 2023, to celebrate five years of This Week Health, we're working to give back. We are partnering with Alex's Lemonade Stand all year long. We have a goal to raise $50,000 from our community this year, and we are already up over $30,000. We want you to join us. Go to our website and hit the top banner.
Click on the logo for Alex's Lemonade Stand and be a part of the community giving back. We believe in the generosity of our community and we thank you in advance. All right. I found this article on Medium. It's by Jonathan Balaban, who is a data scientist.
He has well over a thousand followers, and he has a post that asks, is the ChatGPT honeymoon over? I don't think it's a great title, but it really gives us a framework for deploying these tools, and that's what I want to cover, because I'm getting this question more and more.
It's like, okay, this is great. It's really cool. We're using it here, we're using it there. How do we deploy it? There are a couple of things I want to cover. One is governance: governance around AI is going to be really important. I also think we are going to see the advent of the secure browser
become more and more prevalent in the healthcare space. That is a browser that can tell if you are putting protected health information, PII, PHI, you name it, into a remote site like ChatGPT, and it will block it. It's a rules-based browser, and it will be able to determine what's going in and out of your health system.
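To make the rules-based idea concrete, here is a minimal sketch in Python. Everything here is hypothetical: the pattern list, the blocked-domain set, and the function name are illustrations of the concept, and a real secure-browser or DLP product would use far more sophisticated detection than a handful of regexes.

```python
import re

# Hypothetical rule set standing in for a real DLP engine. A production
# secure browser would combine many more detectors (NLP, dictionaries,
# context), but the rules-based gate works the same way in principle.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                        # SSN-style ID
    re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.I),                  # medical record number
    re.compile(r"\bDOB[:\s]*\d{1,2}/\d{1,2}/\d{2,4}\b", re.I),   # date of birth
]

# External sites the rules apply to (hypothetical example domain).
BLOCKED_DOMAINS = {"chat.openai.com"}

def allow_submission(domain: str, text: str) -> bool:
    """Return False when text bound for an external site looks like PHI/PII."""
    if domain not in BLOCKED_DOMAINS:
        return True  # internal destinations pass through untouched
    return not any(pattern.search(text) for pattern in PHI_PATTERNS)
```

The design point is that the check runs at the browser, before the data ever leaves your network, rather than trying to claw it back after the fact.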
I think that's one of the things we are going to see grow, so that we don't have the average user just sitting there funneling medical records into these tools in order to get their job done. And I can guarantee you it's happening. It's probably happening today, and if you're a large health system, it's probably happening in your health system.
Because once people use ChatGPT-4, they think, this is amazing. It's a lot like the cloud: this is amazing, oh my gosh, I can share my pictures with anybody. Hey, you know what, these medical images are pictures, and we could share them with patients. That's what I found when I became CIO for a health system and saw how they were using cloud.
The same thing is true with ChatGPT-4. People are using it today in your health system, and they're going, oh, this is saving me 10 minutes on this task or that one. But, oh gosh, I have to feed in this PHI or PII. Well, I'm on a secure company computer; I'm sure if I'm doing this wrong, they'll stop me. And they are pumping information into ChatGPT-4 that potentially shouldn't be going in there.
Now, I'm exaggerating, but probably not by much. I'd be on the lookout for it regardless. A secure browser, I think, is the direction this is going to go, and that's the first thing. The second thing, I think he covers pretty well in this article, so let me give you a bit of the excerpt.
I've had a number of interesting conversations recently on deploying AI platforms like ChatGPT-4 for decision making in fintech and business operations, and I wanted to share my framework for thinking about how to apply AI thoughtfully, in case it's helpful to you and your business.
My perspective comes from 15 years of consulting, applying technology and data-driven processes to problems that are too complex to solve on paper, too risky to get wrong, and too scalable to ballpark. When you start from that first principle, how in vogue or shiny the tool is becomes irrelevant; it's the outcome that truly matters.
Amen. That said, I'll share four key points. First point: chatbots, LLMs, and other ML models that are functionally non-deterministic create this amazing experience where they feel organic, human-like, and creative. And isn't that true? While that's exactly the experience we seek in chatting with a bot, I'll argue you do not want these types of surprise outputs in a highly regulated space like fintech
or healthcare, where rules and best practices have been tightly defined for ages. Surprises are your enemy, not your friend, when it comes to scalable and dependable decisioning. Right? So ChatGPT, awesome. But if you ask the same question three or four different times, you're going to get a different answer three or four different times, and surprises in healthcare are not your friend.
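That non-determinism comes from sampling. A toy sketch, not a real LLM: the candidate answers and scores below are made up, but they show why the same prompt can produce a different answer each time when outputs are sampled from a distribution rather than chosen greedily.

```python
import math
import random

# Made-up candidate completions and model scores for one fixed prompt.
CANDIDATES = ["Take ibuprofen", "See your physician", "Rest and hydrate"]
LOGITS = [2.0, 1.5, 1.0]

def answer(rng: random.Random, temperature: float) -> str:
    """Pick an answer greedily (temperature 0) or by sampling."""
    if temperature == 0:
        # Greedy decoding: always the highest-scoring candidate.
        return CANDIDATES[LOGITS.index(max(LOGITS))]
    # Softmax sampling: higher temperature flattens the distribution.
    weights = [math.exp(score / temperature) for score in LOGITS]
    return rng.choices(CANDIDATES, weights=weights)[0]

rng = random.Random(42)
greedy_answers = {answer(rng, 0.0) for _ in range(20)}   # always identical
sampled_answers = {answer(rng, 1.0) for _ in range(20)}  # varies call to call
```

Greedy decoding is repeatable; sampled decoding is the "surprise" the article warns about, and it is the default experience in a chat interface.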
Now, it's really creative and it's doing some great things, but you have to make sure it's not giving out medical advice that's inaccurate or inappropriate. And so here's where we go to the next step. Based on the above, it's important to show restraint regarding where you'll deploy technologies like ChatGPT. Use ML models where, and only where, they add value.
LLMs are still narrow AI. They are simply a fast and sophisticated query, filtering, and results-aggregating engine that weighs, summarizes, and structures internet text. Decisioning on what is right was pre-baked into the training data and is being identified algorithmically, in a PageRank-style approach.
He goes on to give examples of that. So essentially, it's taking what it has learned, good, bad, or indifferent, saying, yeah, this seems like a good answer, and spinning it out. But it's not deterministic, and you don't know what it's been trained on, good or bad data.
Right. And you cannot guarantee it's going to spit out the same answer, or the appropriate answer, every time. So that's what we're concerned about. He goes on to say: your business needs experienced MLOps or engineering teams to embed LLMs into your existing pipeline. Remember, narrow AI is not your silver bullet
unless you have a very specified business application. I'll argue that your LLM API outputs should feed into a larger ensemble master model or transfer learning pipeline that can filter, merge, adjust, and decision on the results before they ship to the customer. Often that's a human-in-the-loop approach or a complex ensemble that provides oversight, but the alternative is to fly blind and hope for the best.
I think that's the key paragraph I wanted to capture here: if it's not deterministic, if it's not going to spit out the same response every time, it has to have oversight. And if it's going to have oversight, you either need another sophisticated model that you're pumping the results through, or you need a human in the loop.
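The oversight layer can be sketched like this. Everything here is hypothetical, and the trivial keyword check stands in for the "larger ensemble model" the article describes; the point is the shape of the pipeline, where nothing ships to a patient without clearing a gate.

```python
# Hypothetical gating layer: every LLM draft passes through a filter before
# shipping. Anything the filter cannot clear is routed to a human reviewer
# instead of going out automatically.

# Stand-in for an ensemble safety model: phrases that should never reach
# a patient-facing letter.
FLAGGED_PHRASES = [
    "as an ai language model",
    "i am not a doctor",
]

def route(draft: str, confidence: float, threshold: float = 0.9) -> str:
    """Decide whether a draft ships automatically or goes to human review."""
    lowered = draft.lower()
    if confidence < threshold:
        return "human_review"  # the model itself is unsure
    if any(phrase in lowered for phrase in FLAGGED_PHRASES):
        return "human_review"  # content the filter cannot clear
    return "auto_send"
```

In practice the clinician reviewing the draft is the `human_review` branch; the filter just decides how often that branch fires.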
Okay, so let's be real clear about what we're trying to do here. In a lot of cases, when your clinicians are using this to generate a letter and whatnot, they are the human in the loop, looking through it and saying, yeah, that's accurate, go ahead and send that letter. On the flip side, when they get really busy, are they just going to skim the letter? Are we going to get cases where people read something completely inappropriate at the bottom, or something that essentially says, this letter was generated by ChatGPT, or something to that effect?
We make copy-paste mistakes like this all the time, so we've got to be careful. Let me give you the fourth point. Finally, it's important to be rigorous and systematic in monitoring your results. We now see the dangers of AI hallucinations and the foundational reality that these models are trained on messy and errant human data.
Key point. How are you testing, intercepting, and evaluating outputs? Are you tracking key metrics so you know if results are improving or decaying over time? Do you have a feedback loop from trusted and key customers? You get the picture here. This is not just a slam dunk of, oh, let's pop it in and away we go.
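The "improving or decaying over time" idea can be sketched as a rolling quality monitor. This is a hypothetical illustration: the class name, the reviewer-score inputs, and the thresholds are all assumptions, standing in for whatever metrics and feedback loop your team actually tracks.

```python
from collections import deque

class QualityMonitor:
    """Track recent reviewer scores and flag decay against a baseline.

    Hypothetical sketch: scores come from a feedback loop (clinician
    reviews, customer ratings), and an alert fires when the rolling
    mean drops below the accepted baseline by more than the tolerance.
    """

    def __init__(self, baseline: float, window: int = 100,
                 tolerance: float = 0.05):
        self.baseline = baseline
        self.tolerance = tolerance
        self.scores = deque(maxlen=window)  # only the most recent scores

    def record(self, score: float) -> None:
        self.scores.append(score)

    def decaying(self) -> bool:
        if not self.scores:
            return False
        rolling_mean = sum(self.scores) / len(self.scores)
        return rolling_mean < self.baseline - self.tolerance
```

The deque with `maxlen` means old scores age out automatically, so the alert reflects current behavior rather than the model's historical average.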
We need to think through how we are going to funnel these results through deterministic models or human beings before they go out, especially in an environment like healthcare. We have to be very careful how we do that, and we always have to keep in mind that these models have, in a lot of cases, been trained on inaccurate data.
Now, don't hear me as stepping on ChatGPT or any of these other tools. The advances are amazing, and the reason I'm doing so many shows on this is that I believe it holds a lot of value for healthcare, and we're going to see it integrated. We just have to be aware of the boundaries that currently exist, and quite frankly, those boundaries could be gone in a couple of months at the pace we're moving.
But with that being said, these are the boundaries we have today. We have to be very careful, and we have to put those guardrails on top. All right, that's all for today. If you know someone who might benefit from our channel, please forward them a note. Tell them to listen at thisweekhealth.com, or wherever you listen to podcasts.
We want to thank our channel sponsors who are investing in our mission to develop the next generation of health leaders: SureTest and Artisight, two great companies, Artisight using computer vision and AI. There you go, it's everywhere. Thanks for listening. That's all for now.