These new generative AI tools are really amazing. However, with great power comes great responsibility. Today we discuss where it can go wrong.
Today in Health IT: with great power comes great responsibility. I've been playing around with some of the generative AI tools over the last couple of days, and I want to share some of the stories and some of the research I've been doing. My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, a set of channels and events dedicated to leveraging the power of community to propel healthcare forward.

A couple of things at the beginning. We want to thank our show sponsors, who are investing in developing the next generation of health leaders: SureTest, Artisight, Parlance, CertifyHealth (new to the show), and ServiceNow. Check them out at thisweekhealth.com/today.

Having a child with cancer is one of the most painful and difficult situations a family can face. In 2023, to celebrate five years of This Week Health, we are working to give back. We are partnering with Alex's Lemonade Stand all year long. As you know, we had a goal to raise $50,000 from our community, and we are up over that $50,000 for the year. We ask you to join us; we're going to plow through that number. Hit our website, and in the top right-hand column you're going to see the logo for the lemonade stand. Click on that to give today. We believe in the generosity of our community, and we thank you in advance.

All right. So we're using generative AI, we're using ChatGPT and OpenAI, over here at This Week Health. As you know, we generate a lot of content, and one of the things we're using it for is to organize that content. Makes a lot of sense, right? You take all the transcripts we have from the various episodes and you generate additional content that organizes that and brings it together. So we're playing around a lot with that. We're using some automations and those kinds of things. And as I'm building these things (I'm still the website developer, I'm the programmer and whatnot; I have not handed that off, mostly because I just love doing it), I'm seeing the challenges of working with ChatGPT, and I'm seeing the challenges of working with generative AI.

By the way, we have a great webinar coming up just next week, next Thursday. If you haven't signed up, just go to our website, and in that top right-hand column, just below the lemonade stand logo, you're going to see the generative AI discussion we're going to have. It's three great guests: we have Mike Pfeffer, we have Christopher Longhurst, and we have Brent Lamm, with Stanford, UC San Diego, and UNC respectively. It's going to be a great conversation. We're going to talk about a lot of what is going on, and it's already incredibly well attended; the registrations are through the roof, and we'd love to have you be a part of it. So it's next Thursday, one o'clock Eastern time.

As background, I'm going to share this story. It comes from psychiatrist.com, and it's a little older story, from June 5th, 2023: NEDA, the National Eating Disorders Association, suspends its AI chatbot for giving harmful eating disorder advice. All right, so I'm going to give you some of the warnings that go along with using this technology. The article gives you three points early on, which I like, and then it goes into the story.
So, clinical relevance: AI is not even close to being ready to replace humans in mental health therapy. The National Eating Disorders Association removed its chatbot from its help hotline over concerns that it was providing harmful advice about eating disorders. The chatbot, named Tessa, recommended weight loss, counting calories, and measuring body fat, which could potentially exacerbate eating disorders. NEDA initially dismissed the claims made by an advocate, but later deleted their statements after evidence supported the allegations.

And the article goes through and talks about the fact that the advice the chatbot was giving was actually harmful. It was brought up by an advocate who had worked her way through an eating disorder and was essentially calling out the advice this thing was giving, and they were forced to pull it back.

And so this is one of those things where we have to take caution. We are healthcare. We are not a media company like I have here, where, oh my gosh, you made a mistake, you brought two articles together, and the consequences of that are not that great. The consequences of giving bad medical advice are significant.

That's why, if you haven't listened to it, go back to the episode I did on Friday. Satya Nadella talked about a design construct. He talks about the three things that make the perfect machine. The first was a natural language interface, which is what you have with these chatbots and ChatGPT. Then he talks about a reasoning engine, and that's what you have, again, with these generative AI models. But then he talked about the design construct, and the design construct is that of a copilot. Not a pilot, a copilot. It's something that is assisting a human in delivering better care. Right? So you cannot replace the human. In fact, in this NEDA situation, I read the article and it was kind of crazy: they essentially said, we're getting rid of all the people we had on our help line and we're going to replace them all with technology, with the chatbot. That is overzealous, to say the least. And that's also not the design construct. The design construct has to be one where it is a copilot, not a pilot.

And that's what I'm finding with the technology as I'm playing with it, as I'm building out these automations and whatnot: it does not give the same answer twice. And this is the same kind of warning we got from John Halamka, which I shared a couple of weeks back, where he was saying, you know, we have clinical decision support and those kinds of models, which are very predictable. They will deliver the same answer every time. They run through a framework, and that framework delivers the same answer every time; if you give it the same inputs, it'll give the same answer. That is not true with generative AI. And so, because you have that kind of model, where it can deliver different answers every time, you need a copilot construct. I think it's really important to consider that as we are moving forward.

And it's interesting, even in building out the things I am building out over here, I'm finding that the fact that it gives you different answers creates some new and interesting programming challenges. Right? So I could run it through multiple times and get different answers.
And because that's the case, you know, do I like the first one? Do I run it through a second time? Do I run it through a third time? Can I check those answers in any way for correctness as they come through? Can we create programming models that actually run error checks on these models? (I've sketched out that idea in a bit of code below, for those who are interested.) As I'm thinking through this, this is just a new paradigm for us. It's a very difficult paradigm, and it's not one we should just burst through while ignoring the warnings of people like John, of stories like this one from NEDA, or the other warnings we've gotten out there, or the instruction we're getting from people like Satya Nadella.

So I've titled this "With Great Power Comes Great Responsibility," obviously from the Spider-Man movie, and I think that is true. I think we should progress as quickly as we can. And with that being said, I think we can only progress as quickly as safety allows. So we should be learning about these models. We should be developing new muscles around these models. We should be hearing how others are using them and continue to have that dialogue around what others are doing with these models. That's why we're doing the webinar next week: to not only hear the really interesting use cases and models that these health systems are pushing forward, but to also hear some of the warnings and some of the challenges. And one of the things I did hear was that even in the copilot model, the adoption rate for one of the health systems we're going to be talking to was only 25%.
So even though it was generating a note for, let's say, a hundred percent of the doctors, only 25% of those doctors were actually using the generated note. The others were just deleting it and starting over. So, just some things to consider today.
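For those building similar automations, here is one way to think about that error-check idea in code. This is only a minimal sketch, assuming a recent OpenAI Python client; the model name, the prompt, and the simple majority-vote check are illustrative placeholders, not anything we actually run in production.

```python
# Minimal sketch: run the same prompt several times and only accept the
# result when the runs agree. Model name, prompt, and run count are
# illustrative assumptions.
from collections import Counter

from openai import OpenAI  # assumes the openai>=1.0 Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_once(prompt: str) -> str:
    """Ask the model a single time and return the raw text answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # lower temperature means less run-to-run variation
    )
    return response.choices[0].message.content.strip()


def generate_with_consistency_check(prompt: str, runs: int = 3) -> str | None:
    """Run the prompt multiple times; return an answer only if a majority agree.

    Exact-match voting only works for short or structured answers; freeform
    text would need a looser similarity check instead. Returning None is the
    signal to route the item to a human reviewer rather than publish it.
    """
    answers = [generate_once(prompt) for _ in range(runs)]
    most_common, count = Counter(answers).most_common(1)[0]
    if count > runs // 2:
        return most_common
    return None  # no majority: a human (the "pilot") has to decide


if __name__ == "__main__":
    result = generate_with_consistency_check(
        "In one short sentence, what organization paused its chatbot Tessa?"
    )
    if result is None:
        print("Runs disagreed; sending to a human for review.")
    else:
        print(result)
```

The point of the None branch is the copilot construct: when the model cannot even agree with itself, the work goes back to a person instead of flowing straight into the output.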
All right, that's all for today. One of the ways you can help us out: if you know someone who might benefit from our channel, or someone you would love to have conversations with about the kinds of things we're covering on the Today show, let them know you're listening, have them subscribe, and you can have conversations around those things. It helps us a lot, and it helps us get closer to our mission, which is to progress these kinds of discussions and to build a community that has them. They can subscribe wherever they listen to podcasts. We want to thank our channel sponsors who are investing in our mission to develop the next generation of health leaders: SureTest, Artisight, Parlance, CertifyHealth (new to the show), and ServiceNow.
Check them out at thisweekhealth.com/today. Thanks for listening. That's all for now.