September 5: Bill is joined by the Clearsense team of Charles Boicey, Co-Founder and Chief Innovation Officer, and Rick Shepardson, Chief Strategy Officer, to delve into the world of data integration and AI in healthcare. What will it take to educate AI models so that they can interact with humans? Is healthcare data in a form that is usable by these models, and if not, how can we get it to that place? How can we clean data so that little input variations are standardized, and what considerations does that uncover? How do Charles and Rick feel about the pace of AI, and where do they see it heading in the near future?
Sign up for our webinar: Leader Series: Our AI Journey in Healthcare - Thursday, September 7th at 1pm ET / 10am PT.
This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.
Here we are for another briefing.
This is in anticipation of our upcoming Leadership Series: AI Journey in Healthcare. It's on September 7th, 1 p.m. Eastern Time. If you haven't signed up, go ahead and hit our website; it's in the top right-hand column. We have a great panel lined up for this one. We have Brett Lamb with UNC Health.
We have Michael Pfeffer with Stanford, and we have Chris Longhurst with UC San Diego. And we're going to talk about where health care is in our journey thus far. Today we're joined by the Brain Trust over at Clearsense. Not the entire Brain Trust, just two of the leaders over there.
Charles Boicey, who has been on the show many times, and Rick Shepardson is joining us as well. We're going to talk a little bit about AI, specifically with regard to data, getting the data ready, and those kinds of things. And to preface this conversation, it's interesting: we're hearing a lot about generative AI, and we may or may not talk about that as much as we will talk about how to get the data ready for models.
And earlier this week, I did a show on DrGPT. And Charles, you'll love this, because you and I talk about this all the time. They took the smallest Llama 2 model that Meta put out, which is open source, and then they fed it with just medical data. And it fits on an iPhone. It doesn't even have to be connected to the cloud.
It fits on an iPhone, and you can ask it questions, and it will respond at a very high level, passing the medical exam and that kind of stuff. And this is like a five-gigabyte large language model that they have condensed down to this form. But Charles, what does it take to get healthcare data in a form where we can start educating these models, where we can start teaching these models to interact with humans?
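As a back-of-the-envelope check on why a model like that can fit on a phone (illustrative numbers only, not DrGPT's actual specs), the weight footprint is roughly parameter count times bits per weight:

```python
def model_size_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate on-disk size of a model's weights at a given quantization level."""
    bytes_total = n_params * bits_per_weight / 8
    return bytes_total / 1e9  # decimal gigabytes

# A hypothetical 7B-parameter model at common precisions:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{model_size_gb(7e9, bits):.1f} GB")
```

Quantizing from 16-bit down to 4-bit weights cuts the footprint by roughly 4x, which is how multi-billion-parameter models end up in the single-digit-gigabyte range that a phone can hold.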
Bill, one word: accurate. It absolutely has to be accurate, and it has to be trusted. So, prior to entry into a language model, or even a relational model, it's got to be accurate. And that's kind of what we do here. Yes, we can use techniques from an AI/ML perspective, as well as from a rules-based perspective.
But again, Andreessen with Hippocratic, that's something that subject matter experts spent eight years developing. It absolutely has to be correct. Otherwise we're going to get responses that are not accurate. And with your example, Bill, that answer was delivered at a higher education level than most of us can interpret.
So it also has to be delivered back to the user at a level they can understand, whether that be a 5th grade, 12th grade, or postgraduate level; that delivery has to come back in that fashion.

We're holding it to a higher standard, and we've talked about that. When we hold cars to a higher standard, they have to get to zero accidents if they're autonomous, versus other things. Because I can honestly tell you, there are times I sit across from my doctor and he tells me things and I'm like, okay, keep coming down a level. Okay, now I get it. Now I understand. But the models need to be able to interact.
Bill, that's intent, and that's what's really important from a generative AI perspective: that we understand the intent. So again, whether the input is by text, by voice, by sign language, or even by lip reading, it's really important that we construct a prompt that goes against that language model and/or other sources, so that what gets delivered back to the person asking the question is absolutely 100% accurate.
And that's what we're working on here. It's, one, understanding the intent; putting together the proper prompt; and then orchestrating where that prompt is delivered. It may be more than one language model. It may be a language model and a data source. It may be a language model, a data source, and bringing an image back, depending on the question being asked.
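A minimal sketch of that orchestration idea, with hypothetical intent and source names (nothing here is a Clearsense API): classify the intent, then fan the one prompt out to every backend registered for it.

```python
from typing import Callable

# Hypothetical backends; in practice these would call a language model,
# a clinical database, or an image store.
def ask_llm(prompt: str) -> str:
    return f"LLM answer to: {prompt}"

def query_clinical_db(prompt: str) -> str:
    return f"DB rows matching: {prompt}"

def fetch_image(prompt: str) -> str:
    return f"Image relevant to: {prompt}"

# Map each intent to the set of sources that should receive the prompt.
ROUTES: dict[str, list[Callable[[str], str]]] = {
    "general_question": [ask_llm],
    "cohort_lookup": [ask_llm, query_clinical_db],
    "imaging_review": [ask_llm, query_clinical_db, fetch_image],
}

def orchestrate(intent: str, prompt: str) -> list[str]:
    """Deliver one prompt to every source registered for the intent."""
    return [source(prompt) for source in ROUTES[intent]]

print(orchestrate("cohort_lookup", "diabetics on metformin"))
```

The point of the routing table is exactly what Charles describes: the same question may need one model, a model plus a data source, or a model plus a data source plus an image, depending on the intent.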
And I'll state it again, Bill: we've got to have our ducks in a row here. We can forgive each other, we're human, but if that machine makes a mistake, we've got a big problem on our hands. So again, look at this as adjunctive, whether it's adjunctive to a clinician, a researcher, somebody in operations, or somebody in finance. This is not a replacement proposition.
This is making us better.
Well, here's the prompt I want to put out there, and I'd love for you two to just go back and forth on this if you want: a lot of health systems are trying to figure out how to use generative AI, and they're wondering, what do I have to do with my data, with our health system's data, in order to make it high enough quality, and in the right format, to be ready to inform these models?
Rick, I'm going to come to you first. I will assume that healthcare data isn't in a form yet where we can just start feeding these models. What do we need to do to really make sure that our data is ready for that kind of experience?
I think, to Charles's point, depending on your context, some of the data is in a format that's ready to feed these models.
If you're looking at it very generally, right, and you look at how ChatGPT and Bard and the like have gained popularity, it's because people are entering these very generic prompts and they're getting some information back that's more than they were able to readily curate out of Google, or however they were searching previously, with Bing making great progress. But when it comes to making decisions, it's much harder.
So where you see things happening right now, around some of the decision support alerts, or replacing or enhancing documentation, what you're seeing are ways of taking the vocal prompts that Charles is talking about.
You're taking some of the documentation that's being entered, and you're using large language models in very specific focus areas that are driving value, right? And usually the most popular and best ones are collaborative with the providers, so that they can actually validate the output before it gets published in the chart.
Now, what we see organizations really needing to do is build off of that, right? Getting their data mapped into certain models, evaluating the data quality, or the trust, and using AI to support some of those data management capabilities. As Charles said, some of this is rules-based; some of it truly gets toward actual general AI.
I also bucket RPA, or rules-based automation, into this, because you're not just doing this for the sake of getting some insight out. You want to take the next action; you want to be able to respond to it. So then that's got to feed into a bit of a DataOps flow, right? Turn it into that collaborative experience where you're developing iteratively to refine it. So I think it's adding in a lot of reference data, a lot of standards and normalization, so that you can ultimately standardize your prompts, standardize your outputs, and trust the data more.
Yeah. Charles, building off of what Rick talked about, talk a little bit about the ingestion of data and how AI and ML are improving that process. I know specifically at Clearsense you could talk to that model, how that's making it better, and how that's making that data available to research and other avenues to be used.
So, think about it from an ingestion perspective: using more advanced technologies, if you will, to pick up a relational database that may have a billion-plus records, just pick it up, lift and shift it.
But more importantly, as you're bringing data in, there's the ability to ontologically right-size. We did a lot of EMR implementations seven, eight, ten years back where we put the proper ontologies and whatnot in place, but we never maintained them. So it's really important that those attributes, those codes, stay attached to that data.
At Clearsense we do a lot of application rationalization. We don't just archive the data, Bill; we actually bring it in and make it usable. So from an ML/AI perspective, instead of having two or three years' worth, or if you have M&A activity, we now have data from that region as well.
So if you think about having the requisite, maybe, 10, 15, 20 years' worth of data to build out some of these models, then absolutely, we maintain that integrity and we ensure that you have it, as opposed to, hey, we just went from, name your system, to Epic or Cerner, and now we only have two years' worth of data to build this out.
Not enough. And when you look at building out these language models and so forth, having the depth of data is really important, especially if you're asking questions like: how many patients with these characteristics who were given this medication responded positively?
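A question like that reduces to a simple cohort filter once the data is deep and clean enough; a toy sketch with fabricated records:

```python
# Toy patient records (fabricated for illustration).
patients = [
    {"age": 67, "on_medication": True, "responded": True},
    {"age": 72, "on_medication": True, "responded": False},
    {"age": 55, "on_medication": False, "responded": False},
    {"age": 70, "on_medication": True, "responded": True},
]

# "How many patients over 65 who were given this medication responded positively?"
cohort = [p for p in patients if p["age"] > 65 and p["on_medication"]]
responders = sum(p["responded"] for p in cohort)
print(f"{responders} of {len(cohort)} responded")  # → 2 of 3 responded
```

The query itself is trivial; Charles's point is that the answer is only meaningful when the records span enough years and the fields are populated consistently.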
I'd want more than a year and a half's worth of data in that regard. But getting back to the ingestion perspective: bringing the data in and placing it in the proper structure, whether that be a relational model, a graph instance, or a language model, whatever it needs to be, that's where we excel. And we bring it all in; we don't discriminate on what comes into the environment.
We bring it all in because, Bill, as AI and ML continue to develop and mature, there are data elements that are contributory in an AI/ML algorithm that we haven't even discovered yet. So if you're making decisions on what you bring in and what you leave behind, that's problematic.
So again, all data is brought in, no data is left behind, because we don't know what the future holds as far as which element may turn out to have higher value than something we currently deem to have higher value.
Yeah, and you know, what's interesting to me, as you talk about bringing the data in and almost scrubbing and cleaning the data up: I'm thinking of when we actually did our migration from our EHR. We found all sorts of things, just little differences in how doctors would prescribe things, or enter values, in terms of one tablet of this versus what essentially ended up being the same thing, stated a little differently. Is that the kind of stuff you can do with these models accurately? Or is that a little scary territory?
So you're talking about some variability. You can semantically represent that variability, so yes, it can be done. And you're primarily referring to more text-based information, where the data contained is the same from document to document; it's just stated a little bit differently. So yes, we have technology that allows us to make those differentiations when extracting that data. And everything is built from an NLP perspective, which I think is worthy to note: NLP is getting more and more mature, better and better.
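A tiny sketch of that kind of standardization, using a few made-up rewrite rules (a real pipeline would lean on curated drug terminologies, not ad hoc regexes): differently worded prescription sigs collapse to one canonical string.

```python
import re

# Illustrative rewrite rules only; not a real terminology.
RULES = [
    (r"\bone\b", "1"),
    (r"\btablets?\b|\btabs?\b", "tab"),
    (r"\bby mouth\b|\borally\b", "po"),
    (r"\bdaily\b|\bevery day\b|\bonce a day\b", "qd"),
]

def canonicalize(sig: str) -> str:
    """Reduce phrasing variants of a prescription sig to one canonical form."""
    sig = sig.lower()
    for pattern, replacement in RULES:
        sig = re.sub(pattern, replacement, sig)
    return re.sub(r"\s+", " ", sig).strip()

# Two differently worded sigs collapse to the same string:
print(canonicalize("Take ONE tablet by mouth daily"))   # → take 1 tab po qd
print(canonicalize("take 1 tab orally every day"))      # → take 1 tab po qd
```

This is exactly the "one tablet of this versus the same thing stated differently" case Bill raises: once variants normalize to one form, downstream counts and comparisons stop being fragmented.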
The other thing we don't do is process documents and then toss the documents. The documents are processed at the time of need, as opposed to processed and discarded, because then you've lost them. As your NLP progressively matures and gets better, you can extract more data from those documents.
Yeah, and to tag off of that, Charles, it's also really critical to maintain those originals to support the lineage, because if you are normalizing, scrubbing, or ontologically right-sizing any of that data, you're changing it, right? And no one wants that changed data without understanding how or why it happened, because then you're eroding trust.
So maintaining that original source, demonstrating the lineage of what happened, and making sure that people who are using the data know exactly what data they are using, so that they can use it for their purposes and their use cases: all really critical.
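A minimal sketch of that lineage idea, with hypothetical field names: every transformation keeps the original value, the rule applied, and a timestamp alongside the result, so nothing changes without a record of how and why.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass(frozen=True)
class LineageRecord:
    """One transformation step: what changed, by which rule, and when."""
    source_value: str
    transformed_value: str
    rule: str
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def transform_with_lineage(
    value: str, rule: str, transform: Callable[[str], str]
) -> tuple[str, LineageRecord]:
    """Apply a transform but keep the original value alongside the result."""
    result = transform(value)
    return result, LineageRecord(value, result, rule)

result, record = transform_with_lineage("GLUC", "lowercase-local-code", str.lower)
print(result, "<-", record.source_value, "via", record.rule)
```

The frozen dataclass is deliberate: a lineage record that can be mutated after the fact would defeat the trust argument being made here.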
So, a closing question for you guys. Is it accurate to say that Clearsense can be utilized as a data governance platform?
A hundred percent, yeah. And being able to track that lineage is just one aspect of that. Being able to capture metadata about the data; understanding how the business is using it, clinically, financially, operationally; as Charles said, understanding whether that data is ontologically right-sized, and whether or not it's being integrated anywhere.
Those are all things that we support and we put that metadata to work.
And Bill, not as a religious effort. Healthcare went through the whole Six Sigma religious effort. They did it with governance, and they're still doing it. We take a very practical approach to this. So it's part of the process.
So you're not, you know, taking everybody out of their job, putting them in an auditorium, and going data element by data element, and there are thousands of them. This is really putting it to use in a practical way that is meaningful, without all of the excess.
So, closing question for you, and I know this is my second closing question, but the closing question is the pace at which AI is going to move in healthcare. I'd love to hear from both of you as we close this out. How fast are we going to be moving? I'm already seeing some papers written. I'm seeing Epic is integrating some generative AI into the notes, a partnership with Nuance, so forth and so on. It feels faster than normal to me, but I'm curious how you feel with regard to this. Rick, I'll give you the last comment. Charles, I'll have you go first.
Okay, Bill. Take a look at some work that was done by William Shoemaker back in 1988 through 1994. I participated in that. We were actually building out AI/ML algorithms back then. We called it math. I was really happy to see AI/ML come back into being in, like, 2015, and I'm really happy to see us now actually doing it, which is really cool. So, the technology is non-linear; we as humans are linear. The technology has already outpaced us, so we're rapidly trying to figure out how best to apply it to healthcare, and we'll continue to do that.
And I think we're doing a really good job of it, Bill, across the board. I'd like to see us get a little bit off the generative AI track and get back into more of the AI/ML, from a predictive as well as an operational perspective and so forth. But I'm really happy with the direction, and that it's been embraced. I think we all understood, and understand, generative AI, which is really cool. But again, I'll say it again, it's not the end-all, be-all; we're on a path that we're not going to stop. Back in the late '80s, early '90s it was, hey, Charles, that's really cool stuff, but we're not really ready for that yet. We're ready for it now.
So I think it's cool. But Bill, again, we provide sick care now; we don't provide health care. This will allow us to provide true health care, because our patients, the citizens, will primarily be taking care of themselves, and they'll come out to us when they're truly sick.
Rick, you get the last word.
Hey, I think what we're seeing right now is the fruit of that labor that Charles and others have put in, combined with the technological advancements and the scalability of the cloud. Those two things are coming together right now. We're at peak hype on generative AI, and it's getting implemented really fast right now.
We're gonna follow the traditional curve, right? We're gonna see some problems with it. People are gonna recognize that generative AI is not the be-all, end-all. We're gonna go through and be like, oh, well, when are we getting the real AI? When are we getting the general AI? And then we're gonna see it just blow up, right?
So I think we're probably, you know, at least four or five years away from getting to that next stage. Charles may disagree, but I think we're going to continue to see this through the year, right? And then we're going to see all the limitations of the data, and that people weren't able to quite get it into small enough, specific enough models.
And then they're like, oh, well, we've got more work to do. There's going to be another cycle, but the technology is ready now for us to get through this next cycle, so we're just going to keep pushing on through. This isn't going away, but it's going to be at different stages of adoption for different use cases as we go.
Well, I want to thank the two of you; I want to thank the Clearsense team for making you available. If people want to know more, it's Clearsense.com, you can check them out. You can also see them on our website; they're one of our sponsors, so appreciate that.
Don't forget, Leadership Series: Our AI Journey in Healthcare, Revolutionizing Patient Care, the AI Factor in Healthcare. September 7th, 1 p.m. Eastern Time, 10 a.m. Pacific Time. When you register, you can actually put your questions in there ahead of time. What we have been doing, quite frankly, is spending the majority of the time on your questions to make sure that we are relevant in these conversations. Again, I'm going to be joined by a great panel: Brett Lamb, CIO over at UNC, Michael Pfeffer at Stanford, and Chris Longhurst at UC San Diego. So we'd love to have you sign up. Go ahead and hit our website; the webinar is in the top right-hand column. Thanks for listening. That's all for now.