OpenAI Dev Day - This Changes Everything, Again.
Today in Health IT: OpenAI had their dev day, and the pace of change is not slowing down anytime soon. My name is Bill Russell. I'm the former CIO for a 16-hospital system and creator of This Week Health, a set of channels and events dedicated to transforming healthcare, one connection at a time. We want to thank our show sponsors who are investing in developing the next generation of health leaders.
SureTest, Artisight, Parlance, CertifyHealth, Notable, and ServiceNow. Check them out at thisweekhealth.com/today. Hey, this story and all the other stories we're going to be talking about are on our website, thisweekhealth.com/news. In fact, it's on the menu. Just click on News. There they are.
All of them that we talk about. They could be from Becker's, they could be from Modern Healthcare. This one is actually from the OpenAI website. So they could be from any of those, but they're on our site for you, summarized, easy to get to, easy to find. All the stories that you need, all right there. Check it out. We're still in beta. Send me feedback: DM me, LinkedIn, Twitter.
Let me know what you think. All right, one last thing: share this podcast with a friend or colleague. Use it as a foundation for daily or weekly discussions on the topics that are relevant to you and the industry. They can subscribe wherever you listen to podcasts. All right, OpenAI. If you're not familiar, OpenAI is the creator and the parent company of ChatGPT, and is, I think, about 50% owned by Microsoft at this point, or at least Microsoft has funded them to the tune of billions of dollars. And they had their first dev day.
And to be honest with you, it's really interesting, because it was a smaller-ish auditorium, if you will. And that in and of itself was surprising, to have that kind of a small auditorium, but I think it's going to be one of those things where you remember back to the day when it was that small. In fact, towards the middle, to demonstrate the software, they were doing some code where they essentially talked into it, and the guy said, hey, give everybody $500 in OpenAI API credits, and it started to do that.
It started to give people credits. So again, a smaller-type auditorium. I think it's the last time it will be that way, because they are killing it. They're absolutely killing it. A year ago it was the fastest release of technology ever to reach a hundred million users, and now they are touting a hundred million weekly active users. A hundred million weekly active users. I will tell you that everybody on our team has an account, a paid account, which is $20 a month per person, because we are finding that much value in it. I will also tell you that in API credits and whatnot, we spend roughly $40 to $80 a month, because we are automating things, and I am as happy as I can be to spend that money, because the return on the investment has been fantastic. So anyway, let me give you an idea of what they talked about. This is straight from the OpenAI site; it's from their blog.
"Today, we shared dozens of new additions and improvements, and reduced pricing across many parts of the platform. These include a new GPT-4 Turbo model that is more capable, cheaper, and supports a 128K context window," which is huge. There are times where you're trying to send stuff to OpenAI and you have to chunk it, and you have to essentially do these gyrations because it can't handle the larger amounts of things, either on a request or on a return of that request. And so that is huge. They reduced the pricing by 2x to 3x on requests and returns, which is really amazing. But anyway, that's just the basics. And by the way, it's faster.
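To make the difference concrete, here's a minimal sketch of sending a long document in one request instead of chunking it, assuming the OpenAI Python SDK v1.x; the model name is the GPT-4 Turbo preview announced at Dev Day, and the file name and prompts are just placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical long document that previously would have required chunking
with open("policy_manual.txt") as f:
    document = f.read()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview with the 128K context window
    messages=[
        {"role": "system", "content": "You summarize long documents for busy executives."},
        {"role": "user", "content": f"Summarize the key points of this document:\n\n{document}"},
    ],
)
print(response.choices[0].message.content)
```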
But that wasn't what they really focused in on. They really focused in on driving the cost down. They focused in on the additional capabilities. I have a feeling they will make it faster as we move along, but it's fast enough from where I sit, and what they're going to do is make it faster so that you're going to see it integrated into all sorts of applications that are real time. And I'll get to that in a minute. So, a new Assistants API that makes it easier for developers to build their own assistive AI apps that have goals and can call models and tools.
The most interesting thing to me about that is there's a no-code version of building assistants, where you can essentially talk to it and it starts to build the assistant for you. You can upload PDFs and it starts to build the assistant for you. It's a no-code solution.
It's really powerful, the things that you can do with this thing. And there's a couple of points in here where I'll talk about what I think healthcare can do. I'm envisioning that every health system will have a voice, or an assistant, that can give you information about that system: the best way to get an appointment, the best way to find parking, the best way to find a new physician, the physicians who have open spots on their schedule, you name it. All the information that people generally call your service desk about, you're going to be able to build assistants that capture that information and give that information back, and it can even do it in voice.
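For the technically inclined, here's a rough sketch of how an assistant like that could be wired up with the new Assistants API, assuming the OpenAI Python SDK v1.x; the assistant name, instructions, and FAQ file are hypothetical, not anything a health system has actually built.

```python
from openai import OpenAI

client = OpenAI()

# Upload a hypothetical FAQ document for the assistant to search
faq = client.files.create(file=open("patient_faq.pdf", "rb"), purpose="assistants")

assistant = client.beta.assistants.create(
    name="Health System Concierge",
    instructions=(
        "Answer questions about appointments, parking, and finding a physician, "
        "using only the uploaded documents."
    ),
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],  # lets the assistant search the uploaded file
    file_ids=[faq.id],
)

# Each caller or patient conversation gets its own thread
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What's the best way to find parking for the main campus?",
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
# Runs are asynchronous; you poll the run until it completes, then read the reply
# from the thread's messages.
```

Hook that reply up to the new text-to-speech endpoint and you have the voice piece I just mentioned.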
And we'll get to that in a minute. New multimodal capabilities in the platform, including vision, image creation, and text to speech. So, not a minute, 30 seconds, there you go. The multimodal capabilities are really fascinating. I started playing with the DALL·E 3 integration, and I put some text into GPT and said, hey, create for me some cards for our podcast.
That's at the intersection of technology and healthcare. And it spit back two cards. I then said, hey, our core color is DA2128 in hexadecimal, can you include that? And it changed the color. Now we're seeing this everywhere, right? We're seeing this in Adobe Photoshop, we're seeing it in all the Adobe products.
They're bringing it out. We're seeing it in some of the other graphics products as well. You're going to be able to do things that are just amazing, and it's going to be voice. So it used to be you had to be an artist in order to do certain things, and now it's just going to be, you just have to be creative to do certain things.
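Here's a minimal sketch of the kind of image request I described a moment ago, assuming the OpenAI Python SDK v1.x; the prompt wording is just the example I used on our own cards.

```python
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt=(
        "A podcast cover card at the intersection of technology and healthcare, "
        "using #DA2128 as the primary accent color"
    ),
    size="1024x1024",
    n=1,  # DALL-E 3 generates one image per request
)
print(result.data[0].url)  # link to the generated image
```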
Now, I think there's still going to be a premium on people that do art, that do hand art and those kinds of things. It's just that the model's going to change. Graphic design is going to change. And are you going to be able to look at art and say, that's a graphic designer, or that's an AI model? Maybe. But if you're a true aficionado of art, you're going to start to appreciate graphic designers that have a style, essentially a style that you're going to be able to recognize, and I think that's how graphic designers are going to have to differentiate.
They have a certain way of approaching a project, or a certain way of developing images, so that they will have a style. But you're going to be able to talk to a prompt and get images back now. Just like everything else in ChatGPT, prompt engineering is going to be important, and understanding how to get that out.
But I will tell you, as somebody that has used some of the other tools that are out there, the prompts that were required were very complex. What this does is simplify the interface. It is just flat-out straight text or straight voice prompts that lead you to where you need to get to.
Very interesting from that perspective: you don't need vast technical knowledge in order to generate the images. And finally, text to speech. This is very interesting to me because you can access it via the API, and the responses now can come back as voice, and the voice sounds really good. We've heard this over the years, how voice sounds clunky when you type words into it and say, speak this back to me.
And it's, "my name is Bill Russell," it's just clunky. That's not what this sounds like. There are six voices. It's very fluid. It sounds really good. And what I envision you being able to do is create a voice interaction kind of mechanism. You could do that at a kiosk.
You could do that on a chatbot on your website. You could do that for people with disabilities and those kinds of things. There are a lot of things you're going to be able to do with this that, quite frankly, are new capabilities. This is a new muscle for us in a lot of ways. And I think a lot of people are toying with GPT-4, they're toying with these large language models, and saying, oh, we've got time.
I think the time windows are getting shorter, and you're going to see organizations really differentiate themselves with this kind of stuff. I will tell you, our small organization is able to, I think, run circles around our competition because we are heavily automating tasks using GPT-4 as a reasoning engine. The 128K context: they have more information on that and why that's important. They also updated the knowledge cutoff to April of 2023.
It used to be 2021. I forget, I think November of 2021. And that was a real limitation, and it was silly. You'd ask who won the Super Bowl, and it would give you the information from 2021. Now we're at April 2023, and this is, I think, in direct response... first of all, it needed to happen.
But I think this is in direct response to what Elon Musk is doing and the model that they're coming out with, which is going to be up to the minute based on the Twitter feed. So I think that's going to be very fascinating to keep an eye on. The 128K context window can fit the equivalent of more than 300 pages of text in a single prompt. 300 pages of text.
That's huge. As I said, outputs are two times cheaper and inputs are three times cheaper, which makes the model more accessible, and that's pretty exciting. Function calling: you now have the ability to do multiple function calls at the same time, and there's a JSON mode, so it can return a JSON object. One of the things that's really interesting to me is the ability to do reproducible outputs.
As we've talked about on the show before, being probabilistic is one of the problems with these models. If you ask it the same question over and over again, it is potentially going to give you different answers. That is both a feature and a limitation. And for those of you who don't want it as a feature, and you want to limit that ability, like in healthcare, where there is a specific response to a specific question, you're essentially going to be able to have the model return a response in a reproducible way.
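Here's a minimal sketch of what that looks like in a request, assuming the OpenAI Python SDK v1.x; the seed value and prompts are arbitrary, and OpenAI describes reproducibility via seed as best-effort rather than guaranteed.

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    seed=42,           # same seed + same inputs should return (mostly) the same answer
    temperature=0,     # remove sampling randomness as much as possible
    response_format={"type": "json_object"},  # new JSON mode: the reply is valid JSON
    messages=[
        {"role": "system", "content": "Reply in JSON with keys 'answer' and 'rationale'."},
        {"role": "user", "content": "Summarize the benefit of a longer context window in one sentence."},
    ],
)
print(response.choices[0].message.content)
```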
So that's pretty exciting. There are some updates to GPT-3.5 Turbo. They've come out with this concept called assistants, where you're going to be able to build these assistants, and they're bundles of features that can produce some interesting results. The example they gave in the video that I watched was essentially: you have a travel website, and you say, hey, I'm thinking about going to Paris in the spring, and it pulls up a map of Paris.
What are the 10 places I should see in Paris? And it drops the markers onto the map. And you say, hey, I have an Airbnb that I've reserved, and you can drop in the PDF and share the Airbnb information, and it puts a marker where that is. And then you can ask it a question like, how much am I paying for the Airbnb?
And it gives you information back based on the PDF that you dropped in. Now you can drop your airline ticket in there, and you get the picture. It's this assistant that is going to help you with the entire trip. The more information you give it, the more it's going to give back. And again, it's a no-code kind of thing. The other thing that's crazy to me is that it has a code interpreter in there.
And if you ask it a question that it doesn't necessarily have the ability to answer directly, let's say you're going to split the bill for the Airbnb, but there are four of you going, and one of you is going to stay for three days, the rest of you are staying for five days, and you want to know what your share is.
And you're one of the people staying for five days. It's going to actually write code to do that math, return the answer, and give you the specific amount that you are required to pay as the person who is staying for five days versus three days. Anyway, you get the idea. This is really powerful stuff. They also have a playground that they put together. New modalities: GPT-4 with vision, so it can see the images that you're putting in there, and they have the cost associated with it.
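Here's a minimal sketch of what a vision request looks like, assuming the OpenAI Python SDK v1.x; the image URL and question are hypothetical.

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # the vision-enabled preview model announced at Dev Day
    max_tokens=300,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the signage and wayfinding in this lobby photo."},
                {"type": "image_url", "image_url": {"url": "https://example.com/lobby.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```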
DALL·E 3, I talked about it a little bit. Text to speech, I talked about it a little bit. I just think it's really, really interesting. I don't know, if I hit this button, whether it'll play and you can hear it, but I'm going to hit it.
"The golden sun dips below the horizon, casting long shadows across the tranquil meadow. The world seems to hush..."
There you go.
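For reference, that clip came out of the new text-to-speech endpoint. Here's a minimal sketch, assuming the OpenAI Python SDK v1.x; the voice and input text are my own choices.

```python
from openai import OpenAI

client = OpenAI()

speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",  # one of the six voices (alloy, echo, fable, onyx, nova, shimmer)
    input="The golden sun dips below the horizon, casting long shadows across the tranquil meadow.",
)
speech.stream_to_file("meadow.mp3")  # save the audio file, then play it back
```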
So that's the text to speech, reading back some things. I don't know if that picks up on the microphone; I hope it did. I think it's smoother than things I've heard in the past. There's now the ability to do fine-tuning of GPT-4; you can actually train these models. I think by training these models directly in GPT-4, they just put a whole bunch of startups out of business that were living in that space of, hey, GPT-4,
we're going to create a mechanism for you to train it on your information and then ask questions of that. I think that whole area of startups just went belly up, or at least they will have to figure out a way to differentiate themselves. They have higher rate limits.
So they used to rate-limit people. You could only put in so many requests per hour, and that kind of stuff went up. There's a copyright shield, and my gosh, there's so many things. Oh, they're coming out with an app store, and the apps are called GPTs. You can develop these GPTs and put them in the app store.
And I think that is going to be a billion-dollar operation within two years, is my guess. What's the so-what for you in your healthcare system? The so-what is: this thing is real. It is moving at a pace that is extremely fast. We're healthcare, right? Our core business is delivering care to the community, but your core business as an IT professional is always looking for ways to make the organization more effective at its core tasks. And this is a tool, I believe, that can help you make your organization more effective, more efficient. It is, as Satya Nadella has said, about the perfect machine: natural language front end, reasoning engine, copilot design. I'm going to keep restating that because, from when I heard it until now, it has been transformative.
So, natural language front end, meaning you can interact with it. You can now do things that you couldn't do before. The example is, as a graphic designer, you can now talk to it, give it a prompt, and it gives you a graphic design back. You can talk to it and it can create a cartoon back.
You can talk to it and it can create code back for you. So it is a natural language front end. The reasoning engine gives it the ability to do a lot of things, like you feed it information and say, what do you think this really means? And it gives you a response back on those things. And it's pretty doggone good at it.
And so we are using it to automate a whole bunch of tasks that, quite frankly, my team did not like doing and did not enjoy doing, like maintaining the website and updating things and those kinds of things. We're automating that stuff in the background, using this as a reasoning engine, and essentially saying, hey, here's the post that's currently up there.
Does it have good SEO? Does it have current SEO based on the terms that we're looking at? And if it determines no, we update the information on the website based on that. It's that kind of stuff that used to be done by an individual and used to get put on the back burner. It used to be on a list of things like, oh, when we get a chance, we're going to go back and clean all this stuff up.
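To give you a feel for it, here's a rough sketch of that kind of background check, assuming the OpenAI Python SDK v1.x; the function name, prompt, and JSON shape are our own illustrative conventions, not an OpenAI feature.

```python
import json
from openai import OpenAI

client = OpenAI()

def post_needs_seo_update(post_text: str, target_terms: list[str]) -> bool:
    """Use the model as a reasoning engine to audit a post's SEO (illustrative only)."""
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": 'You audit web copy for SEO. Reply in JSON: {"good_seo": true or false, "reason": "..."}',
            },
            {
                "role": "user",
                "content": f"Target terms: {', '.join(target_terms)}\n\nPost:\n{post_text}",
            },
        ],
    )
    verdict = json.loads(response.choices[0].message.content)
    return not verdict["good_seo"]

# If this returns True, a separate background job would push an updated draft to the CMS.
```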
That's how we're using it here. And I think if I were CIO of a health system, just within IT, I'd identify a hundred tasks within the first month that I think we could automate using this kind of tool. So anyway, just thought I'd share that with you all. Yes, I am high on this.
I think this is a great tool. I think there will be other tools, by the way. There will be competing tools; there just has to be. This is Microsoft's play. You have an Amazon play as well, you have Elon Musk's new thing as well, you have Google's Bard. There are going to be different plays, different things they're good at. But my gosh, this is going to be huge: organizations with big checkbooks running as quickly as they can at utilizing these kinds of tools for advancing things. Anyway, that's all for today. Don't forget, share this podcast with a friend or colleague, keep the conversation going. We want to thank our channel sponsors who are investing in our mission to develop the next generation of health leaders.
SureTest, Artisight, Parlance, CertifyHealth, Notable, and ServiceNow. Check them out at thisweekhealth.com/today. Thanks for listening. That's all for now.