Interesting question, no? Cedars-Sinai takes a look at AI in heart health diagnosis.
Today in Health IT: is AI better at assessing heart health? Wow. You knew these studies were going to start coming out, so this will be interesting to talk about. My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, a set of channels dedicated to keeping health IT staff current and engaged. We want to thank our show sponsors who are investing in developing the next generation of health leaders, SureTest and Artisight. Check them out at thisweekhealth.com/today. As you know, if you've listened to the show at all, we are partnered with Alex's Lemonade Stand to help support the fight against childhood cancer and cures for childhood cancer. We have a goal of raising $50,000, and we are up over $34,000. If you want to participate, go to our website, click on the Alex's Lemonade Stand logo in the top banner, and you can give today.
We want to thank you. We believe in the generosity of our community, and we thank you in advance. All right. This one comes from, let's see, Cedars-Sinai. Okay. So, is artificial intelligence better at assessing heart health? This is from April 5th, from the Cedars-Sinai health system website. It was also published in Nature.
Alright, here we go. Who can assess and diagnose cardiac function best after reading an echocardiogram: AI or a sonographer? According to the Cedars-Sinai investigators and their research published today in the peer-reviewed journal Nature, AI proved superior in assessing and diagnosing cardiac function when compared with echocardiogram assessments made by sonographers.
The findings are based on a first-of-its-kind, blinded, randomized clinical trial of AI in cardiology led by investigators in the Smidt Heart Institute and the Division of Artificial Intelligence in Medicine at Cedars-Sinai. The results also have immediate implications for patients undergoing cardiac function imaging, as well as broader implications for the field of cardiac imaging, said cardiologist David Ouyang, MD, principal investigator of the clinical trial and senior author of the study. The trial offers rigorous evidence that utilizing AI in this novel way can improve the quality and effectiveness of echocardiogram imaging for many patients. Let's see, what did they find? The investigators are confident the successful clinical trial sets a strong precedent.
Building on these findings, a new study assessed whether AI was more accurate. Among the findings, here you go: cardiologists more frequently agreed with the AI's initial assessment, making corrections to only 16.8% of the initial assessments made by AI, while they made corrections to 27.2% of the initial assessments made by sonographers. Interesting. So 16.8% were corrected where the AI did the initial assessment, and 27.2% where the sonographer made the initial assessment. The physicians were unable to tell which assessments were made by AI and which were made by sonographers, and the AI assistance saved cardiologists and sonographers time.
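To put those two correction rates in perspective, here's a quick sketch of the arithmetic, using only the 16.8% and 27.2% figures quoted in the study (the relative-reduction framing is my own, not the study's):

```python
# Correction rates quoted in the study: the percentage of initial
# assessments that cardiologists later corrected.
ai_corrected = 16.8           # AI made the initial assessment
sonographer_corrected = 27.2  # sonographer made the initial assessment

# Relative reduction in corrections when the AI goes first.
relative_reduction = (1 - ai_corrected / sonographer_corrected) * 100
print(f"{relative_reduction:.1f}% fewer corrections")  # roughly 38% fewer
```

In other words, letting the AI take the first pass cut the cardiologists' correction workload by more than a third relative to sonographer-first workflows.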
It goes on: we asked our cardiologists to guess if the preliminary interpretation was performed by AI or the sonographer, and it turns out that they couldn't tell the difference. And I'm hearing that over and over again, by the way, including in writing. It's really interesting, because one of the things Google is saying is, hey, if we see a bunch of content generated by AI, we're going to down-rank that content. And I've been talking to a bunch of experts, and they say they cannot tell, and if anyone claims they can tell, they're lying to you, because the AI is essentially drawing from so many writing samples that it's guessing. People say, oh, there are these tools out there that can tell, and the answer is, they're guessing too. That's what experts are telling me; I don't know if that's the case or not. But at the end of the day, with this content that's being generated by AI, it's going to be really hard for us to tell the difference, especially with words, between what was generated by AI and what was generated by a person.
All right. So we have these diagnoses done by AI, and I'm not even sure it's important to know which generated them. We could always just note it; obviously we have the metadata, so we can say this is AI-generated, which gives the physicians the ability to know whether they want to check it or not based on their bias. Do they have a bias against AI? Do they have a bias against certain physicians? Which, we know, does happen and does exist. Let's see, anything else? The clinical trial and subsequent published research also shed light on the opportunity for regulatory approvals. The work raises the bar for artificial intelligence technologies being considered for regulatory approval, as the FDA has previously approved AI tools without data from prospective clinical trials, said Susan Cheng, MD, director of the Institute for Research on Healthy Aging in the Department of Cardiology at the Smidt Heart Institute and co-senior author of the study. We believe this level of evidence offers clinicians extra assurance as health systems work to adopt artificial intelligence more broadly as part of efforts to increase efficiency and quality overall. I'm excited about this study. I think we're going to see more and more of these studies. We covered one recently; before the notes projects started at UCLA and Stanford, there was a study on the empathy and quality of the notes being written by AI, and it turns out they were more empathetic and of higher quality.
So, you know, the more studies we do, the more trust will be placed in the AI. I still think there's a transparency challenge to get over here. We have no idea how ChatGPT-4 was trained. We don't know what data it was trained on and what data it wasn't trained on. There's a significant risk of bias here, of unrepresented populations, all the problems you have when you don't have the right data sets. And we just don't know, right? So we need transparency into those data sets, even if it's limited to a few who can validate for the rest of us, look at it, and say, this has been trained on the proper information and proper data sets to deliver clinical assessments reliably.
Not always necessary, but it would be great. So I'm looking forward to more of these studies. I believe we're going to see them come fast and furious now as the adoption of AI in medicine starts to take off. We'll see. All right, that's all for today. If you know someone that might benefit from our channel, please forward them a note. They can subscribe on our website, thisweekhealth.com, or wherever you listen to podcasts, and that's a lot of places. We want to thank our sponsors who are investing in our mission to develop the next generation of health leaders, SureTest and Artisight, two great companies. Check them out at thisweekhealth.com/today. Thanks for listening.
That's all for now.