This Week Health

Don't forget to subscribe!

AI is advancing rapidly, so I thought we would interview ChatGPT-4 about GPT-4.

Transcript

 Today, I'm at the ViVE Conference, and we have a special week for you. This week we're doing Newsday: on Monday we're gonna do Newsday, and on Wednesday we're gonna do a special drop of Newsday as well. Next week, Monday and Wednesday, we're gonna have Newsday episodes. But for Tuesday, Thursday, and Friday, I pre-recorded some episodes.

That I could drop, so that I could pay attention to my time at the conference and capture some great interviews for you. So that's what I'm gonna do today: I'm gonna interview ChatGPT-4 about GPT-4.

 My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, a set of channels dedicated to keeping health IT staff current and engaged. We want to thank our show sponsors who are investing in developing the next generation of health leaders: SureTest and Artisight, two great companies. Check them out at thisweekhealth.com/today.

 Well, we've been talking about GPT-4. Everybody's been talking about GPT-4, and I wanted to explore it a little bit this week. So we're gonna take a look at a couple things. One is, today I'm just gonna interview ChatGPT-4. Consider this like an intro, a basic conversation, so that you can understand what this thing can do.

Then we're going to actually look at the potential of the technology. We'll do that on Thursday, and then Friday we'll look at some of the risks associated with the technology and some of the things that have been explored in terms of the downsides of GPT-4. So that's what we're gonna do.

But today I thought I'd interview ChatGPT-4 about GPT-4. And why am I doing this? One, this is the most adopted technology, the quickest adoption of a technology on the internet, period. A hundred million users have signed up for Bing because Bing incorporated GPT-4, and that's been confirmed now.

It wasn't GPT-3.5, it was GPT-4 that was incorporated into Bing. Microsoft's in the process of incorporating it into Nuance for transcription, and it's also being incorporated into Excel and Word. This is something that should really concern Google a lot. If that shift goes from the search engine, from Google to Bing, that is a big deal in and of itself.

But then you have all the Google Office users. I'm a Google Office user myself and have grown accustomed to using these tools, because I've moved away from Microsoft Office. It wasn't adding any additional value, and Google was a better cloud platform for sharing documents and all those other things.

Well, fast forward: you incorporate GPT-4, and there's a lot of things they're going to be able to do. Today is meant to demonstrate some of that to you. So I decided to ask GPT-4: can I interview you about GPT-4? And its response: of course, I'm happy to answer any questions you have about GPT-4.

Please go ahead and ask your questions. I'll do my best to provide informative and accurate answers. And this is the interesting thing: it's a great chatbot. I mean, it comes back with well-formed answers. Sometimes it's repetitive, sometimes it hallucinates. But if you're asking it basic questions, it is really good at this.

I'll go into some of the more advanced things on Thursday, but today we're gonna keep doing the interview. How much content has GPT-4 consumed in its learning? GPT-4, like its predecessor GPT-3, has been trained on a diverse range of internet text. While the exact amount of content consumed during its training is not publicly disclosed, it is estimated to be on the order of hundreds of billions of tokens.

Tokens: a token in this context can be a single character, a word, or a subword, depending on the language and its structure. There you go, that's why it's not words. The training data for GPT-4 comes from a variety of sources, including books, articles, and other text-based content. This vast dataset allows GPT-4 to generate human-like responses and understand a wide range of topics, but it's important to note that its knowledge is limited to information available up to September 2021.
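Since the interview keeps coming back to tokens, here's a toy sketch of what subword tokenization looks like. Real GPT models use a learned byte-pair-encoding vocabulary with tens of thousands of entries; the tiny hand-written vocabulary and greedy longest-match rule below are illustrative assumptions, just to show why one word is often several tokens:

```python
# Toy illustration of subword tokenization. The real GPT tokenizer uses
# byte-pair encoding with a large learned vocabulary; this hand-written
# vocabulary and greedy longest-match rule are purely for intuition.

VOCAB = ["token", "iz", "ation", "un", "believ", "able", "a", "t", "i", "o", "n"]

def tokenize(word, vocab=VOCAB):
    """Greedily split a word into the longest known subwords."""
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest matching vocabulary entry first.
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            # Unknown character: fall back to a single character.
            tokens.append(word[i])
            i += 1
    return tokens

print(tokenize("tokenization"))   # ['token', 'iz', 'ation']
print(tokenize("unbelievable"))   # ['un', 'believ', 'able']
```

Counting this way is why "hundreds of billions of tokens" is not the same thing as hundreds of billions of words.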

And the irony of that is it doesn't know about itself. I mean, you'll see it in some of the questions and some of the answers; it's just interesting. September 2021. So I asked the question: hey, why stop at 2021? And it answered: the knowledge cutoff at September 2021 is due to the time when GPT-4 was last trained on new data. As an AI language model,

I'm not continually updated. If you like that, it says "I am," right? I am not continually updated with new information. Instead, my training data is static, and my knowledge is limited to the information available up to the time when the training data was collected. Creating and training language models like GPT-4 is a resource-intensive process that requires substantial computational power and time.

It is not feasible to continually update the model with new information in real time. OpenAI periodically releases new versions of their language models, incorporating more recent data in the training process. When a new version is released, it will have knowledge up to that more recent date. And that makes a lot of sense, right?

So even though it's on the Azure platform, there is a certain amount of resources required to train these models, and it's not unlimited; therefore, it's not taking in real-time information. And perhaps that is the advantage that Google may have, who knows, to try to combat the huge progress that Microsoft is making at this moment.

All right. Where are some of the places GPT-4 has been utilized in healthcare? And this again gets back to that date. While I don't have specific examples of GPT-4 being used in healthcare due to the knowledge cutoff of September 2021, and GPT-4 not being a specific model, I can provide some examples of how GPT-3, the previous version of the model, and other AI language models have been utilized in healthcare.

It is highly likely that GPT-4, with its advancements, could be used in similar applications or even more sophisticated ones. Some areas where AI language models have been utilized in healthcare include, and it gives me six examples: number one, medical documentation. This is what we're seeing with Nuance right now.

It doesn't know about the Nuance deal with GPT-4, but it was used in medical documentation prior. AI language models can assist in transcribing physician-patient conversations, generating summaries, and organizing medical research. We use it at This Week Health for summaries, for organizing content, and for creating bulleted lists.

It's really good at that. So, medical documentation. Number two, patient triage. AI chatbots can help patients determine the severity of their symptoms and guide them on whether to seek immediate medical attention or follow home-care guidelines. Number three, medical research. AI language models can help to analyze scientific literature, identify trends and relevant articles, and even generate hypotheses for further research.

Again, it's not cognitive, it's not generating new thoughts, even though it says it generates ideas for new research; it's doing that through pattern matching and those kinds of things. Number four, drug discovery. AI models can aid in the process of drug discovery by analyzing chemical structures, predicting protein-drug interactions, and providing insights into potential new treatments.

And what it does is it just consumes all this information, and then you say, okay, now look at this protein-drug interaction, and it has all this knowledge to look at it and say, I see these patterns. Right, drug discovery. Number five, mental health. AI chatbots can provide support for individuals dealing with stress, anxiety, or depression by offering a safe place for conversation and suggesting coping strategies.

The interesting thing about this model and mental health: people interact with Alexa at a very low level, but imagine a ChatGPT-4 language model behind some of these robots that are being created. And again, when I say robot, it could be something that we personify in some way in the home.

It could have good, challenging conversations with people who are struggling with isolation, loneliness, and those kinds of things. Mental health is a really interesting application for this. Sorry, I keep throwing in my comments. Number six, health education. AI language models can provide personalized health information and answer questions related to various medical conditions, treatments, and lifestyle choices.

There are several cases online that you can read about where GPT is being used as a tutor, right? It's consumed all this information, so it's probably a better tutor than you are for math, for English, for social studies, or history, at least if it happened before September 2021. You get the idea.

Let me close out this paragraph. It's important to note that while AI language models like G P T four can be valuable tools in healthcare, they should not replace the advice of medical professionals. AI models can provide information and support, but should always be used in conjunction with the guidance of qualified healthcare providers.

There you go. And then I ask it: what are some of the limitations of GPT-4? And it gives me eight of 'em, so I'll try to summarize these a little bit. GPT-4, like other AI language models, has several limitations. Number one, static knowledge; we've talked about that, September 2021. Number two, lack of understanding. While GPT-4 can generate human-like responses, it does not genuinely understand the content it processes.

Its responses are based on patterns and associations found in the training data, which may not always result in correct or meaningful output. Okay: static knowledge, lack of understanding. Number three, inaccurate or misleading information. GPT-4 can sometimes provide incorrect or misleading information, as it relies on its training data, which may contain inaccuracies or outdated information.

Number four, verbose answers, which we're experiencing. GPT-4 may generate long-winded or overly verbose responses that do not directly address the user's question or request. Which is really not the case here; I mean, it's answering specifically what I'm asking. Number five, sensitivity to input phrasing. GPT-4's responses can be influenced by the way a question or statement is phrased.

By the way, I think this is going to be the key: the new programmers of this kind of model are the people who can create the prompts, who understand how to create the prompts, and you can actually feed it information. We'll talk about this on Thursday. I saw something really interesting: this person took the instructions on how to do the text-to-image, right?

So they're creating these sophisticated computer-generated images with a next-generation AI model. And what they did is they took all the instructions that that thing had, put them into ChatGPT, and said, you generate the prompts.

And ChatGPT started generating the prompts for the image model, and it was really interesting and really good. So anyway, I will come back to that at a later time. Input phrasing, prompts. Number six, ethical concerns. GPT-4 may sometimes generate content that is offensive, politically biased, or otherwise inappropriate.

Although efforts have been made to mitigate these issues, there's always the potential for unintended consequences, and we will always have to keep an eye out for bias and those kinds of things. Number seven, overuse of common phrases: GPT-4 may overuse certain phrases. Number eight, lack of creativity and originality: GPT-4's output is

limited by the creativity and originality present in its training data. It might not generate truly novel ideas or groundbreaking insights. Despite these limitations, GPT-4 can be a valuable tool in many applications. However, it's important to be aware of these limitations and use the model with caution, especially in situations where accuracy and reliability are crucial.

Healthcare, for example. Although, you know, I used it in programming, where accuracy and reliability are crucial, and it generated some great code. Have outside organizations vetted GPT-4 for bias and flaws? The answer to this is yes, but it doesn't really know about it because of the timeline it is looking at.

So I will specifically cite a study later, when we start looking at some of the challenges with it: its hallucinations, its bias, and some of the other offensive language and stuff it could generate. But anyway, I think it was Cornell University that we're going to be looking at. So, typically, AI models like GPT-4 undergo rigorous internal testing and evaluation before being released to the public.

Once released, external organizations and researchers may independently analyze the model to identify potential biases, flaws, and other issues. External evaluations can include: one, testing the model's responses to specific inputs to measure the presence of political, racial, gender, and other biases; two, assessing the model's ability to provide accurate and reliable information across various domains and subjects;

And number three, evaluating the model's performance in real-world applications to identify any unintended consequences or problematic behaviors. And it goes on and answers some other things around that. That's essentially the interview with ChatGPT-4. The thing I want you to hear in this is that you can have a conversation with this thing, and it can educate you.

I went into ChatGPT version four and started asking it questions around a topic, and you could just keep burrowing into this thing and getting more and more specific answers. You could even educate it as you go along, and we'll talk more about that. I think this is groundbreaking. I think it's going to be a really interesting tool for healthcare.
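That burrowing-in behavior maps to how chat models actually handle conversation: the model itself is stateless, and each turn the client resends the whole growing message history. Here's a minimal sketch of that idea; the message contents are made up, and a stub function stands in for the real model call, so nothing here reflects an actual API:

```python
# Sketch of multi-turn conversation state. Chat models are stateless,
# so "burrowing into a topic" means resending the full message history
# each turn. fake_model_reply is a stub standing in for a real API call.

def fake_model_reply(history):
    """Stub for a chat completion call; reports how much context it saw."""
    return f"(answer based on {len(history)} prior messages)"

def ask(history, question):
    """Append the user's question, get a reply, and record that too."""
    history.append({"role": "user", "content": question})
    reply = fake_model_reply(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": "You are a health IT tutor."}]
ask(history, "What is a token?")
answer = ask(history, "How does that affect context limits?")
print(answer)   # each answer is conditioned on the full history so far
```

This is also why follow-up questions get more specific answers: every earlier exchange is part of the context the model conditions on.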

And I have people on my little six-person team looking at it almost every day to try to figure out how we can utilize it. If I had my old team of 650 to 700 people, I would absolutely have a whole bunch of 'em looking at it to see what we could do to utilize this in healthcare, and in health IT specifically.

 Well, that's all for today. If you know of someone who might benefit from our channel, please forward them a note. I'm serious here. Think of it right now: who could you forward a note to and say, hey, you should be listening to this channel, I'm getting a lot out of it?

I'd love for them to hear about some of the stories that we cover; that would really go a long way in helping us to continue to create content and events for the community. They can subscribe on our website, thisweekhealth.com, or wherever you listen to podcasts: Apple, Google, Overcast, Spotify,

Stitcher, and I could go on and on, because anywhere a podcast can be listened to, we're already out there. We wanna thank our channel sponsors who are investing in our mission to develop the next generation of health leaders: SureTest and Artisight. Check them out at thisweekhealth.com/today.

Thanks for listening. That's all for now.

Thank You to Our Show Sponsors

Our Shows

Solution Showcase - This Week Health
Keynote - This Week Health
2 Minute Drill with Drex DeFord - This Week Health
Newsday - This Week Health
Today in Health IT - This Week Health

Transform Healthcare - One Connection at a Time

© Copyright 2023 Health Lyrics All rights reserved