February 9th, 2024: In this episode of This Week Health, Bill Russell sits down with Dr. Kevin Maloy, Assistant Professor of Emergency Medicine at Georgetown University School of Medicine. As they explore innovative applications of the OpenAI API in medical projects at MedStar, questions arise about the potential of conversational technologies to redefine patient engagement and the traditional call center model. How can the integration of AI in call centers shift the focus toward patient-oriented outcomes? With the advent of GPT and other AI models, what are the implications for prompt engineering and the ease of accessing medical information? And as healthcare IT evolves, how might ambient listening technologies and AI-assisted documentation change the landscape of emergency medicine and patient care? This episode not only highlights current projects and insights from Dr. Maloy but also prompts a broader discussion on the future intersection of AI, healthcare, and patient interaction.
This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.
Today on This Week Health.
(Intro) there's so much right in front of us, but it's just so hard to connect the dots. That's the way I think sometimes when I use GPT. Like, I know that common knowledge would have something about this that I'm not thinking about.
My name is Bill Russell. I'm a former CIO for a 16 hospital system and creator of This Week Health, where we are dedicated to transforming healthcare one connection at a time. Our keynote show is designed to share conference level value with you every week. Today's episode is sponsored by Artisite, Dr.First, Gozo Health, Quantum Health, and Zscaler. Now, let's jump right into the episode
(Main) All right, it's keynote today and I'm excited to be joined by Dr. Kevin Maloy, with Georgetown University School of Medicine, Assistant Professor of Emergency Medicine, who leads up innovation projects at MedStar, or participates in innovation projects at MedStar Medical Group.
A lot of stuff going on. Did I get all those things right? Or am I a little bit off? I'm usually a little bit off on those things. By the way, welcome to the show.
Oh, yeah, thanks for having me. Yeah, I'm a big fan. I actually, I listen at least once or twice a week. You're always on my kind of podcast feed.
But yeah, I do two things. One is emergency medicine; I'm an assistant professor of emergency medicine, and I still see patients clinically about half time. And half time I do innovation projects for the medical group at MedStar.
And I'm going to say this affectionately.
I'm talking to you because you're a physician nerd. You do a lot of really cool projects, and right now you're doing a lot of really interesting things with the OpenAI API, that's hard to say, OpenAI API, and you're doing stuff around data sets and different models and whatnot.
And I was thinking earlier this year, I want to talk to somebody about my premise, or some of my premises, around ChatGPT, and not just ChatGPT, Bard and other things, but where this could take us in medicine over the next 10 years or so. But to set that up, and I don't usually do this.
We're going to go through your LinkedIn posts, because you're pretty active on LinkedIn. I know that's not the official channel, that's not where you guys publish papers and that kind of stuff, but there's a lot of good stuff that you put out there around this. And it's not only the posts where you write extensively, it's the shorter comment posts. This one's from five days ago.
Can a call center be a patient engagement center? And that's one of my premises for this year, that this conversational technology gives us new avenues to engage with patients. You say, can it be a patient engagement center. What are you thinking about there?
Yeah, so I was thinking about reframing stuff that's already around you right?
Like, I was listening to a podcast, I think it was Out of Pocket, and they were interviewing one of the chief operating officers. And he was the one who actually said, we changed the concept of a call center to a patient engagement center. And it changed the way we looked at the metrics and how we were evaluating people.
And I thought that reframing was really interesting, because he went into, if it's a call center, then you measure how long somebody's on the phone with someone. Yeah, get off the phone, be efficient. But he said there are different metrics when it became a patient engagement center; when they reframed it as that, it was how many prescriptions got filled after that conversation.
And it struck me that they started shifting to something called patient-oriented outcomes, right? Like POEMs. In clinical medicine there's DOEs and POEMs, right? There's disease-oriented outcomes, which is, you know, somebody's lab test changed by this percentage point, right? So the patient doesn't really notice it.
But then there's also patient-oriented outcomes, which are ones people notice. It's, you're taking a medicine every day, you are seeing the doctor less, you have fewer appointments. So I thought it was just interesting how a simple change in framing what was already there, which is effectively a call center with somebody there, changed the metrics that they were evaluating people on and changed what they were hoping to get out of that call center.
Yeah, I think there's going to, it's interesting when I think about the call center and whatnot, and we're going to get into GPT really quick here, but because I think that is the foundation for this. It's we've been introduced to this new technology and we're looking at all these things we've done in the past saying.
Hey, is there a better way to do this? Is there a way to create a dialogue between the health system and the patient with regard to their care journey, but also with regard to their journey just in general, like finding a doc, moving through the system. My wife told a story last night: she got a new OBGYN and her appointment is for June, like her first appointment. Like, we can get you in in June. I'm like, how can that possibly be? But that may be an inefficiency of just the person she contacted. But with the ability to pull all this information together and then have a natural language front end, we should be able to take out some of this friction that exists.
Yeah, it seems that way. And there's also this weird phenomenon of everything being available, but you just don't know about it, right? Similarly, my wife was having some GI problems, and I feel like I know this space. And part of the reason I listen to your podcast is I constantly discover more that I just didn't know even existed.
For instance, there's a virtual GI clinic called OSHI, right? And you can go and get an appointment in 48 hours or something like that to see a board certified GI doc. The funny thing is my wife had a GI problem, and it's weeks later and I was like, oh, there's actually this virtual clinic called OSHI, right?
She's looking at me like, that would have been good information to have.
Yeah, but there's so much right in front of us, and it's just so hard to connect the dots. That's the way I think sometimes when I use GPT. Like, I know that common knowledge would have something about this that I'm not thinking about.
So sometimes I'll put stuff in GPT just to be like, I know that. It's like when I would do a Google search, but you have it render in a different format, being like, hey, I want a table of the different options somebody would want if they want to see a GI doctor quickly, or something like that. Because it just strikes me that there's so much in front of us and at our fingertips, but it's hard to connect that, oh yeah, maybe that's an option for me.
Yeah, that's one of the challenges with these GPTs at this point. There's this whole idea of prompt engineering, and we think of it like, oh, we're just going to be able to talk to it.
But there is a way of phrasing the question, a way of engaging with it, that gets it to give you back what you want. I went through five iterations of something this morning, and I'm like, oh, that's the key. That's the phrase that got it into the format I was looking for. In some ways, my programming background helped.
Hopefully, these models will evolve such that you don't have to be a GPT prompt engineer to do it. It will be able to prompt you back with questions to clarify and then get what you're after.
One of the things I saw was this idea that as GPUs and stuff get faster, you might be able to run a bunch of parallel queries and have GPT select the best actual response. Just fire off a hundred queries about the same question and be like, oh, actually this is the one that you're looking for. Because right now when you do GPT, it's just one at a time, so you're stuck with whatever seed started it or whatnot.
But there's probably, in the future, a way of doing a hundred all at once and then taking the best response, engineering a method to take the best response from that.
Your next post is interesting. Make a custom GPT in OpenAI for synthetic patient snapshots with me in five to ten minutes.
Yeah, you've done this, come on, Bill.
I have done these GPTs. It is that simple, but how powerful is it, really, when you get to the end of it?
I realized I hear a lot about different AI companies or options.
And I saw one of the custom GPTs was, I'll take a clinical note and I'll automatically give you ICD-10 billing codes. And I was like, oh, that's super fascinating, but how would you test that out, right? I was like, oh, the way you could test that out is just have GPT create synthetic data for you.
Cause you can't really just go and... like at MedStar right now, we're not using OpenAI GPT, so this is a side thing that I'm doing. So I was like, how could I test it out and see if it's doing as expected? I could sit there and try to create a fictional case, right?
And that's going to take a bunch of time. Or I could just create a custom GPT that would create fictional cases on demand, right? Because then there's, by definition, no PHI in there, right? Because this is being synthetically produced by GPT. So that's how I came about doing that. I don't know, what's your experience with custom GPTs?
Does it take a while to actually get what you want?
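The synthetic-snapshot setup Dr. Maloy describes boils down to a system prompt that constrains the output shape. A minimal sketch, assuming the OpenAI chat-messages format (the exact instruction wording here is invented for illustration; it just mirrors the pieces he lists: triage note, histories, ER labs, no real PHI):

```python
def synthetic_case_prompt(chief_complaint: str) -> list[dict]:
    # Build the messages a custom GPT (or a plain chat-completions call)
    # would use to generate a fully fictional patient snapshot.
    system = (
        "You generate entirely fictional patient snapshots for testing. "
        "Include: a triage note, past medical history, past surgical "
        "history, and a few labs from the ER visit. Never use real PHI; "
        "every detail must be synthetic."
    )
    user = f"Create a synthetic case: {chief_complaint}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = synthetic_case_prompt(
    "a 40 year old female with abdominal pain on Xeloda"
)
```

Because every field is model-generated, the output can be fed straight into something like an ICD-10 coding GPT for testing without a BAA or de-identification step.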
So I created a GPT, I think it's called Healthcare CIO. Yeah, Healthcare CIO, cause I didn't want to do Health System CIO, that brand's already taken, so I did Healthcare CIO. And then I populated it with a handful of things. Now, I wanted to upload all the interviews I've ever done with CIOs, which would be, I don't know, six years of interviews, which would have been a lot of stuff, and it did not have the ability for me to upload that kind of stuff yet.
You're too productive, Bill.
And I started thinking about it. First of all, it's a lot of data. But I understand why they're not doing it. There's probably a lot of people out there who are like, I'm going to upload all the Harvard Business Review articles and create a GPT around it, whatever.
It's not your data to do something with. Now, in this case, it is my data. But still, I understand there's going to be some barriers before they can do this. And there's other tools out there if you want to pay and create a custom GPT like that. So I had to create some documents, and I put some quotes and other things in there, since it can upload a couple of PDFs, and I did that.
And then I started asking it questions: hey, I'm getting ready to do an EHR rollout, give me an idea of the steps that you would take. And it started coming back and I was like, wow, that's really good. Not only just the basics of, hey, you have to do a project plan, you have to get a budget.
All that other stuff too: it started talking about project champions, engaging the community. It started talking about all the nuanced things that we have learned over the years. That's part of the body of knowledge, but I was really impressed. I don't know how much better my GPT's response would be than just going to a plain prompt.
I didn't really test that out.
I share a similar surprise with a GPT I made for synthetic data. One of the conversation starters, at the bottom, where you can be like, hey, here's some examples of things to ask about. One of them I put there was a 40 year old female with abdominal pain on Xeloda, right?
Xeloda is a chemotherapeutic sometimes used for breast cancer. The interesting thing is, when you put that conversation starter in there, the prompt behind it says, okay, you need to make a triage note, you need to have a past medical history, past surgical history, some labs from the ER visit.
But when it starts going with the 40 year old female on Xeloda, it writes a triage note, and then it says, past medical history: breast cancer. And you didn't tell it that. It just was like, a 40 year old female on Xeloda? Oh, breast cancer, right? And it gives the ICD-10 code too, right?
And it's just oh, that's like super interesting,
Wouldn't it have to have been trained on this kind of data in order to do that? It would have to. Yeah,
I guess the entirety of PubMed and the web kind of points towards that, right? Like I think that's what you're seeing there.
I don't know.
On ICD-10 codes, essentially. This whole idea of, you're using fake data, but we could use real data. And how is this going to change the practice of medicine, let's say in the near term? Because a lot of health systems are signing up with either Google or Microsoft right now. Microsoft's your path to OpenAI; Google's your path to Google stuff, Bard and Med-PaLM and the rest of those things.
I know that Amazon has made a significant investment and they're heading in that direction, I just haven't read any stories in healthcare yet. And Epic, actually, we've seen a lot of stuff from Epic recently about how they're playing around with this technology. I'm going to interview Seth Hain here shortly, and I look forward to talking to him about how they're thinking about it.
But couldn't this really change and revolutionize the way you do documentation?
Yeah, I think so. Part of it is, you always hear so much about it. I personally haven't used any of the ambient solutions. I use Nuance and kind of the microphone.
I'm curious as to whether the ambient solutions will work in a very noisy emergency department. And part of it being, how do you select what patient you are seeing, to associate them with that conversation inside of the EHR? Because you're roaming around. People are, you know, in different rooms, but that's not foolproof. And likewise, in a lot of emergency medicine in the U.S. today, there's hallway spaces, right? If you go to any ER, there'll be people in the waiting room, in the hall. So how do you associate that conversation with that specific person? I think that's a challenge, and also just the noise that's going on in the ED. I did try this years ago, before actual LLMs, and it occurred to me, because I'm always like, how do you do this without real patient data, right? So it's, oh, we have this awesome simulation center, right? Why can't we just go and try to record stuff with a bunch of noise in the simulation center with a resident, right?
So we went and we did two cases in the sim center with as much noise as we could make. And this was all simulated, so we didn't need a BAA or anything. And we used Otter at the time, when Otter was new. It actually did okay-ish, it was all right. But then you realized, oh, what is the flow going to be for me to associate this transcript with this specific patient encounter, and not have it just be a bunch of more clicks to do?
So it's interesting as you're talking about that. Back in the day, a long time ago, we were a Meditech shop, so you can imagine the technology we were on, and this isn't Meditech Expanse, this is Meditech back in the day, client server and whatnot. So we had to think a little differently about the things we were doing, and we were trying to change the clinician experience.
And one of the projects I gave to a couple people on my team was essentially for the phone to be the computer. And I didn't really care which one it was, like, we'll give them out if you do this. They went down the Android path, and essentially what they were demoing at the time was, you go into the room, you dock the thing, and on the screen comes up a full blown version of the EHR and whatnot. Which I thought was interesting, but I think we've progressed. This was back in 2015, 2016. But I think how we've progressed is that the primary tool for ambient listening now is the phone. And so I think the other thing about it is the primary navigation being your voice. I think that next iteration is essentially you're navigating your EHR on this phone. And you say, I'm seeing Kevin Maloy right now, pull up his record, it pulls it up, and then you say, all right, I'm ready to do the note for Kevin, or whatever it happens to be. Still a little clunky, but that's, I think, the direction that Nuance is heading with deep integration into the EHR.
Yeah, and in the ER, we're all unscheduled. I think there's something to be said for having a schedule and knowing, hey, it's this handful of people it's going to be, whereas if you have the entirety of a hundred person ER with a bunch of active patients... it's not impossible, but it's harder.
My thought on the solution is to have a QR code. In the ER, we're very desktop oriented, because we have just one geography, and I'd have a QR code inside of the EHR that you scan, and then it pulls up that exact patient, right?
And then you can go and see them, right? Because then you know that you're documenting on the actual correct patient. I haven't seen that yet, but that's my inkling on that. But I think that might just be ER specific, where we have a lot of unscheduled care.
That's the thing about this ambient technology.
There's so many specifics, right? You go to gastroenterology, it's a different language. And then you go to another specialty, it's a different language. And different environments. And by the way, we also have to think through, especially in the emergency department, disaster scenarios, right?
So, it's already busy and loud on a normal day. Let's assume there's an event that happens, an earthquake, a flood, a fire, whatever. Now all of a sudden that place is inundated. Do you really want everything to be done by voice? It's already loud as all get out, and a lot of stuff just breaks in those environments.
And you know as well as I do, having worked there, and I know as a CIO, you have to have a very solid plan for all those things.
Yeah. Famously, Mark Smith, who used to be chief innovation officer at MedStar, an ER doc, when he was making his own EHR, had this quote: engineer for the extreme, use in the routine.
So he would engineer for the extreme cases, like you were talking about, but then use in the routine. It just stuck with me ever since he said it.
In the ever evolving world of health IT, staying updated isn't just an option. It's essential. Welcome to This Week Health, your daily dose of news, podcasts, and expert commentary.
Designed specifically for healthcare professionals like yourself. Discover the future of health IT news with This Week Health. Our new news aggregation process brings you the most relevant, hand picked stories from the world of health IT. Curated by experts, summarized for clarity, and delivered directly to you.
No more sifting through irrelevant news, just pure, focused content to keep you informed and ahead. Don't be left behind. Start your day with insight at the intersection of technology and healthcare. This Week Health. Where information inspires innovation.
So give me your thoughts on the Sam Altman keynote. You had a couple of posts after that. It was very Steve Jobs, or not quite Steve Jobs in terms of the "and one more thing," but I remember coming out of that keynote being fairly like, oh my gosh, the world is changing as we know it.
It felt a little bit like that first iPhone.
Yeah, in some ways you're hearing it and thinking, wow, what is the next one going to be like when he gets further down this road? I remember one of the take-homes being the context window. And I was actually looking at this just the other day, like you were talking about earlier, having all your transcripts, right?
And you've got to put them in a file and upload them to GPT, those custom GPTs. If you can have a context window that's much larger... I think it's like 3,000 or 4,000 tokens currently, right? You're limited currently in how much you can put in, in terms of medical history, as well as how much you can get out.
So if you're trying to create a custom GPT to create synthetic FHIR, FHIR is so verbose, right? So many characters that you run out of context window. So I thought one of the big things was the 128,000 token context window length, which would be about the size of a novel, right?
So then you might actually be able to put the majority of the discharge summaries on a patient into the actual prompt itself and be like, hey, summarize the entirety of all the discharge instructions. I just feel like I'm always coming up against the context window, how much you can actually give it, and how much it'll give you back.
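The budgeting problem Dr. Maloy keeps hitting can be sketched concretely. A common rough rule of thumb is about four characters per token for English text (a real system would use an actual tokenizer, such as tiktoken); the packing function and its newest-first assumption are illustrative, not any particular product's behavior:

```python
def rough_token_count(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English prose.
    # A real implementation would use the model's tokenizer.
    return max(1, len(text) // 4)

def fit_into_context(documents: list[str], budget_tokens: int) -> list[str]:
    # Pack as many discharge summaries as fit under the context budget,
    # in the order given (e.g. newest first), stopping before overflow.
    kept, used = [], 0
    for doc in documents:
        cost = rough_token_count(doc)
        if used + cost > budget_tokens:
            break
        kept.append(doc)
        used += cost
    return kept
```

With a ~4,000 token window, only a summary or two fits alongside the instructions; at 128,000 tokens the same loop can pack most of a patient's discharge history into one prompt, which is the shift he's pointing at.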
We really are at the beginning of this. It does feel like that first iPhone. I have the first iPhone box behind me. Do you remember how small this thing was? I look at it now and I think, man, that was small. But I think the other thing we forget about it was there was no App Store.
It was whatever apps Apple had made and would like you to use.
What they envisioned was really that everything would come through the internet, not an app store. That was their vision. Now, that changed in version two.
I think they came out with the App Store and away they went. This feels a little bit like that, in terms of we're playing around with it and going, it's really amazing, but it's not quite there. And it's the same thing as holding this phone. You're like, wow, this is really powerful. And Steve was right.
It's a communicator, a phone, an email... I forget the other one. Oh, and a music player. That was his four things. Like, these four things we're releasing, and it's all one thing. That was his big thing. And I feel like ChatGPT is the same thing, as well as the other tools.
That we're playing around with, going, hey, this is really cool, but man, it's close to being able to do what I really want to do with it.
Yeah. Have you used actions inside GPTs, where you could have it do other stuff for you?
Like yourself, programming for me is my part time job, and really more of my hobby than my part time job. I shouldn't call it a job.
Hold on, you should have it be your full time job.
But when I'm looking at these new GPTs that are coming out, that is the power of it.
It's going out either through Zapier or another set of APIs, and it's hitting something else, which is pulling the information back to ChatGPT.
Yeah, so I actually have one as a side project. I think I referenced this earlier, where I feel like there's just so much going on and I'm always one of the last people to know about a bunch of stuff.
So what I've started doing is taking podcasts, getting a transcript, and giving it to GPT, being like, hey, could you pull out leadership lessons, anecdotes, statistics from these transcripts and post them over to this Google Sheet? So with actions, you can actually give it a transcript, and it'll actually make POST requests and create data inside of a Google Sheet.
Which I think is fascinating, right? Because there's the ability to go out and retrieve data with them, which you were referencing. But there's also the ability to post stuff and create new stuff, which is just super fascinating.
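The write side of what Dr. Maloy describes, a GPT action POSTing extracted items into a sheet, amounts to shaping the model's output as a JSON payload for a webhook. A minimal sketch; the `{"rows": [...]}` schema and the webhook idea are assumptions for illustration (a real GPT action follows whatever OpenAPI schema you register for it):

```python
import json

def build_sheet_rows(lessons: list[str], source: str) -> str:
    # Shape extracted transcript items as rows for a sheet-append
    # endpoint. The schema here is made up for illustration.
    rows = [{"source": source, "lesson": text} for text in lessons]
    return json.dumps({"rows": rows})

payload = build_sheet_rows(
    ["Engineer for the extreme, use in the routine"],
    source="This Week Health keynote",
)
# A GPT action (or a Zapier webhook) would then POST this payload,
# along the lines of:
# requests.post(SHEET_WEBHOOK_URL, data=payload,
#               headers={"Content-Type": "application/json"})
```

The interesting part is exactly what he notes: the same mechanism that lets a GPT retrieve data also lets it create records in an external system.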
Yeah, I've used Zapier for years. I assume you're using something similar or the same thing.
Yeah, I like Chidi.co for Google Sheets. But these are all just personal project type things.
Are you pulling that stuff from the RSS feed, or where are you getting the transcripts?
Yeah, so RSS feeds I find are fascinating, right? Because you've basically got a bunch of curated content, and you have the creators of the podcasts mark up what this is about and where the audio is, right?
So what you can do is set up a Zapier on an RSS feed: here's an MP3, post it up to something like Deepgram, transcribe it, put it in a Google Drive when it sees a new item. Right now, you can't do the custom GPT actions in OpenAI with the Assistants API. So what I do is I just copy and paste, and then the GPT does the rest.
I imagine you could set up Puppeteer or the browser automation stuff to do that for you, log you into OpenAI and do that. So it is like the iPhone, right? Where you're looking at it and you're like, almost all the pieces are there, right? There's still some manual stuff.
You're just like, I wish it could do this, but it's sure better than the Palm Pilot I was using.
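The first hop of that pipeline, finding the audio URL a podcast feed advertises, is standard RSS parsing. A small self-contained sketch using only the standard library (the sample feed is invented; real feeds have the same `<enclosure>` structure per RSS 2.0):

```python
import xml.etree.ElementTree as ET

# Invented sample feed, shaped like a real RSS 2.0 podcast feed.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <item>
      <title>Keynote: AI in the ED</title>
      <enclosure url="https://example.com/ep1.mp3" type="audio/mpeg"/>
    </item>
  </channel>
</rss>"""

def audio_urls(feed_xml: str) -> list[str]:
    # Pull the audio enclosure URL out of each <item>; this is the
    # piece a Zapier-style pipeline would hand to a transcription
    # service like Deepgram.
    root = ET.fromstring(feed_xml)
    return [
        enc.get("url")
        for enc in root.iter("enclosure")
        if enc.get("type", "").startswith("audio")
    ]
```

From there, the flow he describes is: download the MP3, send it to a transcription API, drop the transcript in Drive, and hand it to the GPT.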
Do you envision a time where the nerd, the, you know, the programmer or whatever, is less necessary? We see that a little bit with the GPTs, don't we?
Yeah, you must be doing this yourself where you tell it to create some code for you.
ChatGPT has written 85 percent of my code. Now, you have to instruct it. I look at the code and I go, hey, where's the security? Oh, you wanted security. Yeah, let's put tokens in. I'm like, okay.
The funny thing is when it gives you something and you type in, what about security? And just the insane speed it comes back. It's, oh yes.
That's what I heard. I knew that. I just wanted to see if you knew it.
It was like it was just waiting. But yeah, I haven't read some of that code. Like, how can you be more productive nowadays? By offloading some of those mundane tasks to GPT or GitHub Copilot.
How is this... I mean, it's cost prohibitive at this point, but with both Google and Microsoft, you have Copilot on one side, and I don't know what Google's calling theirs, but essentially within email, now you're going to have access to functionality that is a large language model.
I'm really concerned about this, to be honest with you. I can't imagine people turning on the auto function of, yeah, just respond, I'm going on vacation this week, good luck.
Haven't you seen this? I have a sneaking suspicion that I've gotten some GPT generated emails.
Yes. Even ones that aren't advertising. It's like someone ran it through GPT to be like, can you rewrite this to be a little more, I don't know, funnier.
Hey, this isn't in your usual style. It's missing the punctuation errors and the spelling mistakes. Yeah. What did you do?
What's going on? I closed an email recently with some alliteration. I used three C's for something, and afterwards I read it and I'm like, somebody's gonna accuse me of using AI. I said, just so everyone knows, this was not written by a large language model. We're on alliteration for 2024.
How much stuff are we reading now that is generated? Especially within healthcare, it's really interesting to me. Some of the leading journalists, or places we go for news, a lot of the time are summaries. And so I looked at that, and I'm like, look, that news site's a summary, that's a summary.
If some big announcement happens, they all cover the same thing. You sort of start reading them and you go, man... because we report on the news here and I read a lot of stuff, and I go, man, that's the same stuff over and over again. And so on our news site, I make no bones about it.
We summarize it. What you read on our website is summarized by GPT. Now, if you want to read the whole article, go ahead and click on the button, you can read the whole article. And we designed it that way for ease of reading, so people can read the summary and determine whether they want to read the whole article.
But how much stuff are we reading now that is AI generated?
Yeah, I don't know.
And the follow-on to that is, I remember talking to doctors when we were doing the EHR implementation, and I'm like, how do you want to mark this? How do you want to mark that? And they said, I want to know what doctor made that note.
I'm like, why is that important to you? It's, because if it's this doctor, I don't really trust him, but if it's this doctor, I really trust him. I'm like, that's really fascinating to me. So when we get to the point where we're actually generating stuff with AI that's going into the EHR, are we going to mark that as AI summarized, AI generated content?
I would assume we are.
Yeah, I imagine, probably, right? Wasn't that... no, wasn't it Micky Tripathi? Wasn't he talking about something similar? Or I may have got my wires crossed there.
Yeah, and we did talk about AI and the need for transparency in AI models and that kind of stuff.
Yeah, it's interesting, because you do see this on some people's notes. Even using the Nuance PowerMic, they'll have a dot phrase that says, this was created using speech recognition software and there could be errors, and stuff like that.
So it's plausible, it's pretty likely, that you could just have a dot phrase that says this was automated by GPT or by a large language model.
All right, we're gonna close this out. There's a new podcast out there called Healthcare 2034, I think that's the name of it.
And I'm like, wow, that's bold. But the title essentially tells you, hey, we're gonna be talking about futures. We're not gonna be talking about the present. We're gonna be talking about what could be possible. And when you and I talk 10 years out now, it's, I can't even imagine 10 years out.
Things are moving so quickly. Oh, yeah. But I want to do that. Healthcare 2034: the ED probably still looks an awful lot like the ED, right? People still get hurt. They come in. There's not enough room. There's people in the hallways. And that's not just your hospital, that's every hospital.
Every hospital you walk through, generally, unless they have an unlimited budget and it's a fairly new facility, there's people in the hallways today. So I imagine 10 years from now, the ED will still look like the ED. But what do you think is going to change because technology has advanced in 10 years?
This is all futures, by the way. This does not represent Georgetown University or MedStar or whatever.
I guess it's like, what do I hope versus...
Yeah, let's start there. What do you hope, as a practicing physician? What do you hope?
So it would be cool if I don't even know if it's like, you need to have Andy and listening technology as you like talk to somebody.
But kind of that auto completer in Google where it just starts like ghost typing in front of you and put just the relevant stuff. If somebody has chest pain, put that they've had like a cabbage or a stent or something in the past. If they've had abdominal pain, just start writing like 40 year old female history of cholecystectomy.
It should be grayed out. Hopefully the LLMs do make it easier to document, but also just start auto-completing stuff. I think that would be super cool, at least for the ER, 'cause it's just so noisy and chaotic.
That's where I would go first.
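To make the "ghost text" idea concrete, here's a minimal sketch of the kind of draft suggestion a system like that might surface. Everything in it is illustrative: `draft_note_prefix` is a hypothetical function, and in a real system an LLM would stream the suggestion token by token rather than fill a template.

```python
# Sketch of the "ghost text" auto-complete idea for ED documentation.
# All names are illustrative; a real system would stream tokens from an
# LLM and display them grayed out for the clinician to accept or reject.

def draft_note_prefix(age: int, sex: str, complaint: str, history: list[str]) -> str:
    """Assemble a draft opening line from structured chart data.

    A template stands in here for what an LLM would generate; only the
    shape of the suggestion is the point.
    """
    relevant = [h for h in history if h]  # drop empty chart entries
    if relevant:
        hx = ", ".join(relevant)
        return f"{age} year old {sex} with history of {hx} presenting with {complaint}."
    return f"{age} year old {sex} presenting with {complaint}."

# Example: the abdominal-pain case from the conversation
print(draft_note_prefix(40, "female", "abdominal pain", ["cholecystectomy"]))
```

The design point is that the suggestion is pre-populated from what's already in the chart, so the clinician edits rather than types from scratch.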
We're seeing significant advancements in computer vision. I'd like to see advancements in computer hearing and listening. What I envision in 2034 is, you're going to have an earbud in, and that earbud is going to be able to do noise cancelling at a level we can't even imagine today.
It's essentially going to take all the ambient noise out, pick up what the patient is saying, what the nurse is saying, all the relevant things right around you, and cancel out everything else. That way, when you're having a conversation, it's going to be able to say, oh, you're dictating the note right now, and that kind of stuff.
I'd like to see significant AI advancements in that space in and of itself, because I think otherwise we're going to create a lot of noise in this culture of people talking to their devices. And I noticed ChatGPT's iPhone app now, it's cool, it has the ability where you can essentially set it up so
you can talk to it and it talks back. Yeah,
It's neat, but it's loud.
Yeah, it's so neat. I gave it to my kid to use. He's almost nine. He was like, I'd like an emoji that's half cat and half dog, right down the middle,
but I can't find it on the internet. And I was like, I think DALL-E could do that. So I just hit that little button that's there, and the cloud came up or whatever. So it made the emoji, which was spot on, right? And then the fascinating thing when you use that feature is they want you to keep talking.
So it was like, and do you like cats or do you like dogs better? And my son, who's nine, is like, I really like dogs a lot better. And it's like, oh yeah, what type of dogs? They're having a full-blown conversation. It was just fascinating that they had gotten to that next step to keep people engaged, and he was super engaged.
And that will be an ongoing learning mechanism for ChatGPT, I would assume, if it can have conversations. I think, yeah. As you were saying that, I was thinking of the old Saturday Night Live skit, Amazon Echo Silver, I don't know if you remember that one, but it was the Echo and older people interacting with it.
I don't know, can we say old people anymore? I assume we can.
Older people. Only if you're talking about yourself, Bill. Yeah, exactly.
I'm not that far off. The hair is completely gray. But they're interacting with the Amazon Echo, and they're having these conversations, and they never say the right word, whatever it is.
So it responds to 45 different words and whatnot. I think about that half-jokingly, but it's interesting: these can become companions moving forward. They literally can. You can engage with them on, hey, who won the game last night? Tell me, did they score a lot in the fourth quarter?
You can go back and forth with these things, if, first of all, they get current, right? They're not all current models. They can't tell you about the game yesterday.
I think the thing I observed when my son was talking to it was that it summarizes what you just said and then asks you something new.
So GPT, for what you just said, would be like, Bill, I hear that you're really interested in the scores and in sports. Why do you like sports so much?
Yeah, it's still very basic. It would be better if it said, what sports are you interested in? And then I go, I'm interested in this.
And it's like, would you like to know the scores from last night? Tell me if the Kings won yesterday. Oh, they did. Great.
They're a really good team. I'm glad you like them. Is there anything else?
Still a long way to go.
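The turn pattern described above, acknowledge what the user said, then hand the turn back with a new question, can be sketched as a simple control flow. The `summarize` and `follow_up` functions here are stand-ins for what would really be LLM calls; only the loop structure is the point.

```python
# Sketch of the "summarize, then ask something new" engagement pattern
# observed in ChatGPT's voice mode. summarize() and follow_up() are
# hypothetical stand-ins for LLM calls; the control flow is the point.

def summarize(user_turn: str) -> str:
    # Stand-in: a real system would ask the model to paraphrase the turn.
    return f"I hear that you said: {user_turn}."

def follow_up(user_turn: str) -> str:
    # Stand-in: a real system would ask the model for a related question.
    return "What do you like most about that?"

def respond(user_turn: str) -> str:
    # The engagement trick: acknowledge, then hand the turn back.
    return f"{summarize(user_turn)} {follow_up(user_turn)}"

print(respond("I really like dogs"))
```

The critique in the conversation maps directly onto `follow_up`: a better system would generate a question grounded in the specifics of the turn (which team, which sport) rather than a generic prompt to keep talking.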
I share your optimism. Much like I used to look forward to the Apple announcement every year, I'm really looking forward to the OpenAI announcement every year, and I think it will get that kind of glitz and glamour. With Apple, you went from Steve Jobs to Tim Cook,
and I thought, oh my gosh, how are they going to do this? But they've been able to make the transition because it's interesting stuff they're talking about. I think Altman doesn't have that kind of flair of a Steve Jobs, but he's talking about some really interesting stuff. And you saw that when Satya and Altman were on the stage together; Satya does have some of that flair.
And he was a very polished and dynamic speaker.
Yep. And Sam Altman's just a little rough, but the people he brought out from his team that demoed some of this stuff, I thought they were totally spot on.
Oh, yeah. And I'm sure it's heavily, I know it's heavily scripted.
But they made it appear like, hey, we're trying some things out right here. Oh, yeah, do this and do this. And it was doing it. It was pretty impressive.
I thought it was nice they gave away, what was it, like a thousand credits? Yeah, credits. One person randomly, and then they're like, oh, but for everyone too.
That was a fun case that didn't work right. But I am looking forward to that next set of announcements. You should do a watch party, man. We should do a live watch party and just sit there and comment on it, like, oh, look at that, man! We could do, gosh, the Manning brothers watching Monday Night Football.
We'll just sit there and go, oh, geez! That would be a lot of fun. Oh, man. Hey, Kevin, I really appreciate you coming on. We'll have to stay in touch, compare notes, and see how things progress in the AI age of medicine.
Absolutely. Thanks for the time, Bill.
Thanks for listening to this week's keynote. If you found value, share it with a peer. It's a great chance to discuss and, in some cases, start a mentoring relationship. One way you can support the show is to subscribe and leave us a rating. We'd love it if you could do that. Big thanks to our keynote partners, Artisite, Dr. First, Gozeo Health, Quantum Health, and Zscaler. You can learn more about them by visiting thisweekhealth.com/partners. Thanks for listening. That's all for now.