This Week Health


ChatGPT's Summary:

How far can Large Language Models go? Today we explore Eric Topol's latest post.

Transcript

Today in Health IT: generative AI and the new medical generalist. Interesting article. I thought I'd share it with you and try to move the conversation forward a little bit. My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, a set of channels dedicated to keeping health IT staff current and engaged.

We want to thank our show sponsors who are investing in developing the next generation of health leaders: SureTest and Artisight, two great companies. Check them out at thisweekhealth.com/today. Having a child with cancer is one of the most painful and difficult situations a family can face.

We are so excited that in 2023 we have partnered with Alex's Lemonade Stand to raise money for childhood cancer. We have a goal to raise $50,000, and we're up over $29,000. If you're hearing this, it is Wednesday of the HIMSS conference, and we are doing Captain's for Cures for Childhood Cancer: get your picture taken with Captain, post it to social media, tag This Week Health, and for everybody who is in the picture facing the camera, we will give $1 to childhood cancer.

We hope to continue towards our goal for this year of raising $50,000. We have a generous community here, and we are excited to be a part of this campaign. We thank you in advance. Alright, I ran across this article by Eric Topol, who is a leader in AI and a clinician out of Scripps in San Diego.

I just want to read this; it's a really good synopsis of some of the things that are happening with generative AI. Generative AI is the broader term for those large language models that we're hearing a lot about: ChatGPT-4 and others. So let me read some of this for you, from the journal Nature.

Today, my colleagues and I published an article on the future directions of generative AI for the practice of medicine. These new AI models have generated a multitude of new and exciting opportunities in healthcare that we didn't have before, along with many challenges and liabilities. I'll briefly explain how we got here and what's in store.

Great article. Highly recommended. This one's on his Substack, erictopol.substack.com, titled "Generative AI and the New Medical Generalist." All right, so I'll end up doing a lot of reading today. So, the transformer model that transformed AI: back in 2017, Google researchers published a paper, "Attention Is All You Need," describing a new model architecture, which they dubbed the transformer, that could give different levels of attention to multiple modes of input and go faster, ultimately replacing recurrent and convolutional deep neural networks (RNNs and CNNs, respectively).

Foreshadowing the future of generative AI, they concluded: "We plan to extend the transformer to problems involving input and output modalities other than text and to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs such as images, audio and video."
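To make that attention idea concrete, here is a minimal sketch of the scaled dot-product attention at the heart of that 2017 paper. This is an illustrative toy, not code from the article or the episode; the function name and the toy dimensions are my own assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (seq_len, d_k) arrays; returns a weighted mix of V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise token relevance
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # each token attends to all others

# Toy example: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```

The key property, and the reason it could "give different levels of attention" to different inputs, is that every token's output is a learned, input-dependent weighting over every other token.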

As I recently reviewed, AI in healthcare to date has been narrow, unimodal, and single-task: of the over 500 FDA-cleared and approved AI algorithms, almost all are for one or at most two tasks. To go beyond that, we needed a model that is capable of ingesting multimodal data, with attention to the relative importance of inputs.

This also required massive graphics processing unit (GPU) computational power and the concurrent advances in self-supervised learning, reviewed here. So he has a link to that. By the way, self-supervised learning is a machine learning concept that you should really understand. Anyway, moving on: these building blocks, the transformer model architecture, GPUs, self-supervised learning, and multimodal data inputs at massive scale, ultimately led to where we are today, with GPT-4 as the most advanced LLM, large language model, and we're still at the very early stages.
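Since he calls out self-supervised learning as a concept worth understanding, here is a minimal sketch of the most common flavor, masked-token prediction: hide pieces of the input and train the model to recover them, so the data labels itself and no human annotation is needed. The function, mask rate, and example sentence are illustrative assumptions, not from the article.

```python
import random

random.seed(0)  # reproducible toy example

def mask_tokens(tokens, mask_rate=0.3, mask_token="[MASK]"):
    """Randomly hide tokens; return (masked input, positions to predict)."""
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if random.random() < mask_rate:
            masked.append(mask_token)
            targets[i] = tok  # the model is trained to recover this token
        else:
            masked.append(tok)
    return masked, targets

sentence = "the patient presented with memory deficits and seizures".split()
masked, targets = mask_tokens(sentence)
print(masked)   # e.g. ['the', 'patient', 'presented', '[MASK]', ...]
print(targets)  # e.g. {3: 'with'}
```

A real pretraining run does this over billions of sentences; the "label" is always just the hidden piece of the original data.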

By now, you are likely to have had conversations with ChatGPT and seen what it can do. It's only got text language input and output; it's unimodal, with a training data cutoff in 2021. Yet you've probably had fun interacting with it and been impressed with the rapidity and sometimes remarkable accuracy and fluency of its outputs.

It often makes old Google searches seem weak, but that's just the warmup act. Okay, so here's what it looks like in medicine: GMAI. Goodbye to the constrained phase of just medical images, the sweet spot of deep learning. Hello to the full breadth of data from electronic health records, sensors, images, labs, genomics, and biologic layers of data, along with all forms of text and speech.

The domains of knowledge include the publications, books, and corpus of literature, and the network relationships of entities, knowledge graphs. GPT-4 is the first multimodal large language model, facile with text, language, and image. That's why you can already converse so well with generative AI for medical matters.

Even though there are none yet accessible that have been pre-trained with the corpus of medical literature and multidimensional data from millions of patients, the transformer architecture, which is ideally suited for these inputs, enables solving problems previously unseen, learning from other inputs and tasks.

We called this GMAI, generalist medical AI. It sets up remarkably flexible interactions and a very long list of possible applications, with a few examples shown below: patients asking questions about their symptoms and data. As we wrote in the paper, GMAI can build a holistic view of the patient's conditions using multiple modalities,

ranging from unstructured descriptions of symptoms to continuous glucose monitor readings to patient-provided medication logs. The potential for clinical keyboard liberation via automated generation of notes, discharge summaries, pre-authorizations, and all other forms of clinical documentation. A surgeon asking to identify something in an operative field,

an augmented procedure. A CT scan report generated that spotlights the image abnormality, or the image being queried about a particular area of concern, a grounded report. Or quantifying the difference of something between images taken at different times, and how that compares to the progression of the condition in all medical literature.

I apologize for reading so much, but there's so much depth here. It's a great article; again, go out and read it, it's really good. So anyway, for bedside rounds and related LLM support for physicians, besides the illustration below for administering insulin, see the fictionalized grabber prologue summarized in the section on the new GPT-4 book.

I just reviewed the GPT-4 book, which, within six months of testing this LLM, provides many more examples of its capabilities. And then there's a whole bunch of images of the various use cases: bedside decision support, grounded radiology reports, augmented procedures, and there's images associated with that.

Parenthetically, I found this glossary of terms that is very helpful to speed you up if you haven't already been updated on the space. And then there's a link to things like large language models and generative AI. Again, good stuff. However, this is the big but, and there's always a big but: while there are so many exciting potential use cases, and we're still not even into chapter one of the GMAI story, there are striking liabilities and challenges that haven't been dealt with yet.

The hallucinations, aka fabrications or BS, are a major issue, along with bias, misinformation, lack of validation in prospective clinical trials, privacy and security, and deep concerns about regulatory issues. These points are reviewed in the paper and my previous posts. In medicine, it's absolutely essential that there is a human in the loop providing oversight of any LLM output,

knowing full well it is prone to making serious mistakes, which includes making things up. The technical challenges are formidable. One sentence in the paper provides some context: PaLM, a 540-billion parameter model developed by Google, required an estimated 8.4 million hours' worth of tensor processing

unit (TPU) v4 chips for training, using roughly 3,000 to 6,000 chips at a time, amounting to millions of dollars of computational costs. Data collection, inputting of massive, diverse, organized data, training of models, comprehensive validation, and deployment are some of the many daunting aspects of scaling LLMs in the real world.
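To put that training figure in perspective, here is some back-of-envelope arithmetic using only the numbers quoted above: 8.4 million TPU v4 chip-hours spread across 3,000 to 6,000 chips. The per-chip-hour price is an illustrative assumption, not a figure from the paper.

```python
# Numbers quoted above: 8.4 million TPU v4 chip-hours, 3,000-6,000 chips.
chip_hours = 8.4e6
assumed_price_per_chip_hour = 1.00  # USD; illustrative assumption only

for chips in (3_000, 6_000):
    wall_clock_days = chip_hours / chips / 24
    print(f"{chips} chips in parallel -> ~{wall_clock_days:.0f} days of training")

cost_millions = chip_hours * assumed_price_per_chip_hour / 1e6
print(f"compute at ${assumed_price_per_chip_hour:.2f}/chip-hour: ~${cost_millions:.1f}M")
```

Even at that assumed price, a single training run works out to roughly two to four months of wall-clock time and several million dollars of compute, consistent with the paper's "millions of dollars" framing.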

"Real world," that's hard to say. Anyway. An anecdote from the old era of ChatGPT: I recently posted a ChatGPT anecdote sent to me by a friend whose relative was diagnosed with limbic encephalitis after having carried the diagnosis of long Covid. After the patient saw multiple physicians and neurologists over six months and was assigned the diagnosis of long Covid, a relative entered her symptoms into ChatGPT.

The correct diagnosis was confirmed by antibody testing, and therapy has been initiated. A positive test result for CASPR2 antibody is consistent with limbic encephalitis, Morvan syndrome, and neuromyotonia. This is really small print, I apologize; it's a picture. Symptoms include memory deficits, seizures, confusion, and so forth and so on.

You know, it's just interesting: ChatGPT is giving a significant amount of information for something that hasn't been specifically trained on healthcare yet. Anyway, it goes on. I'd already seen many examples of GPT-4 making difficult diagnoses of rare conditions from the book. Before posting, I asked ChatGPT more on this matter, since I knew nothing about CASPR2 antibody tests and whether it differentiated long Covid, which has no validated treatment,

from this form of autoimmune encephalitis, which is treatable. You can see the output below that convinced me this was interesting, illustrating its potential value. So, you know, and then there's another image with text tinier than the last, so I'm not gonna be able to read that. And then he goes on to thank his co-authors and whatnot.

I would keep an eye on Eric Topol and what he's saying in this space. Obviously, he has done a lot of research in this generative AI space. He's keeping a close tab on it. He is providing us insights into where this could potentially go in medicine. And he will be one of the sources that I continue to tap into for this, because I believe

it has a lot of potential, a lot of potential. And we do have to keep an eye on the buts, the big buts that exist with this generative AI, because it feels like we are in the hype cycle that just never ends; it keeps going up and up and up. But there is a lot of potential, and I just wanted to keep an eye on it.

So I wanted to share that story with you. Again, it's Wednesday at HIMSS; I'm still not ready to talk about it, but we will. Let me think. It could be that the first show you hear me talk about HIMSS is Friday. We'll see. But that's all for today. If you know someone that might benefit from this channel, please forward them a note.

You can tell them to sign up at thisweekhealth.com, or wherever they listen to podcasts. We want to thank our channel sponsors who are investing in our mission to develop the next generation of health leaders: SureTest and Artisight. Check them out at thisweekhealth.com/today. Thanks for listening. That's all for now.
