December 8, 2023: In this keynote, Dr. Pete Clardy, Senior Clinical Specialist at Google, discusses the evolving role of AI in healthcare. Drawing on his transition from critical care physician to technologist, Clardy explores the impact of AI on medical data organization and patient care. How does AI reshape healthcare providers' approach to patient data? The conversation also delves into generative AI's potential in medical assessments and the crucial balance between technological innovation and data privacy. How can clinicians' trust in AI technology be built? This discussion offers valuable insights into the future of healthcare in an AI-driven world.
This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.
Today on This Week Health.
(Intro) ...having time for the important conversations, and not the things that make the provider experience feel more like you are moving data from point to point rather than caring for people at their most vulnerable.
Thanks for joining us on this keynote episode, a This Week Health Conference show. My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, a set of channels dedicated to keeping health IT staff current and engaged. For five years, we've been making podcasts that amplify great thinking to propel healthcare forward. Special thanks to our keynote show sponsors CDW, Rubrik, Sectra, and Trellix for choosing to invest in our mission to develop the next generation of health leaders. Now onto our show.
(Main) All right, here we are for another keynote episode. I'm excited to be talking with Dr. Peter Clardy, Google Senior Clinical Specialist. Peter, welcome to the show.
Bill, thanks so much for having me.
I'm going to call you Pete from now on though.
Perfect. Yes, absolutely.
Let's start here.
Share your journey from Pulmonary and Critical Care Physician to Senior Clinical Specialist
Yeah, no, absolutely. So my background is internal medicine followed by fellowships in pulmonary and critical care. And I spent about 20 years within the academic medical system, largely at Harvard at a couple of different Harvard teaching hospitals.
My roles there were mainly frontline ICU doctor and educator, teaching medical students, residents, and fellows, and I increasingly took on other educational and leadership responsibilities. One of the key things that I had the opportunity to work on early in my career was at a company called UpToDate, which really thought differently about how medical information could be served to clinicians at the point of care to help make better decisions.
And so the intersection of clinical care, clinical reasoning, and medical decision making, and how information technology intersected with those, became the focus of my career. And I started working with Google as a consultant in 2018 as they were thinking about medical records and how you could both organize medical records and then use certain technologies that Google was very focused on, relating to search and summarization and prediction, to help with clinical tasks.
And that work started in sort of an advisory role. Then towards the end of 2020, I joined Google full time, and now I lead a team where clinical folks like me work with engineers and designers, and most importantly with providers and other partners, including EHR companies, to create new tools and experiences that leverage Google's experience with lots of different kinds of machine learning and AI, and increasingly are really focused on the ability of generative AI to impact clinical care. So it's a bit of a long answer, but really the connection runs through the unmet informational needs of clinicians and how new technologies can help.
And we're going to talk about search.
We're going to talk about generative AI. We're going to go down a lot of those paths. It seems like we're getting smarter. We used to have a bunch of technologists walk into a hospital and say here's the stuff that you need to make healthcare better. And I guess with that as the background how does your experience as a physician influence the approach to developing these technologies and approaches at Google?
Yeah, so it's been a very conscious effort by Karen DeSalvo, who's the Chief Health Officer at Google, to create a team that spreads horizontally across the whole company and brings not just medical, but health insight to the whole organization. And so we work very much in a collaborative model where clinicians and health equity and regulatory and other specialists from Karen's team all collaborate to support efforts in different parts of the company.
But if you asked me, day to day, to describe what it is that we do, and the value that the clinical team in particular brings to our work, it is that we are bidirectional translators, right? So to your point, Google engineers are fantastic at building many different kinds of elegant solutions, and if the task is related to organizing, say, photographs or emails, their natural intuition will guide them very well.
But in the healthcare space, where the language is very complicated and nuanced and the workflows are highly specific, it's useful internally to have folks with clinical experience be able to say, look here, this is an unmet need, or, that's a very elegant piece of engineering, but it doesn't address a real-world problem.
So internally, we work with our teams to help them understand unmet clinical needs and to build intuitive solutions. And externally, as an example, we work heavily with Meditech, an EHR provider. Having the opportunity to have clinicians from the Google side work with Meditech, and even more directly with the Meditech sites where we pilot some of this technology, has really created an important closed loop for our learning. I think that's one of the reasons why having internal clinicians has been valuable to Google and these types of partnerships.
Yeah, I do want to talk about the Meditech partnership, but I still want to stay a little bit high level right now. What are some of the challenges in medical information handling that you believe AI can address, or actually is currently addressing? It seems like this is moving very fast.
It's moving very fast, and it's interesting, because I think at a first approximation a clinician might say that the problem is information availability, that patients now create so much data that interoperability, bringing it all together, is really the most critical piece. And I think that interoperability, being able to connect all of the digital crumbs that we leave around, is one piece. But if I said to my colleagues, hey, listen, I've now got this kind of super system that brings everything together and you have absolutely everything in one place, they would say, we're already drowning in data.
We already have too much data and not enough information. And I think the way we've thought about this as a company and in our partnerships is first address the issue of data fragmentation, find a way of bringing information together. And that's really the interoperability story.
The second piece is help me deal with data overload, and that becomes an issue of taking some of the most complex information, things like unstructured data that shows up in medical notes or in lab reports in long written text, and finding better ways of searching and summarizing it.
And a lot of that can use what I would call old-fashioned machine learning AI that's built on things like natural language processing and optical character recognition, to be able to find information in unstructured data that lives in scanned documents and PDFs and hard-to-find places.
But those two challenges, the data fragmentation challenge and then the data overload challenge, I think are paired, and we are now at a place where certain kinds of, again, it's not a perfect term, but I'll say old-fashioned machine learning or AI can be extremely helpful for tasks like search and summarization.
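As a tiny illustration of what that kind of pre-generative extraction looks like, here is a sketch in Python. The note text and the regex are invented for illustration; real clinical NLP and OCR pipelines are vastly more sophisticated than a single pattern, but the idea of pulling structured values out of free text is the same:

```python
import re

# Unstructured note text of the kind that hides in PDFs and scanned documents
# (after OCR). A simple pattern pulls a lab value out of free text; this is
# only a sketch of the idea, not a real clinical NLP pipeline.
NOTE = (
    "Patient seen in follow-up. Hemoglobin A1C 6.8 today, down from 7.4. "
    "Blood pressure 132/78. Continue metformin 500 mg twice daily."
)

# Matches "Hemoglobin A1C" followed by a decimal number.
LAB_PATTERN = re.compile(r"Hemoglobin A1C\s+(\d+\.\d+)")

def extract_a1c(text: str):
    """Find every explicitly labeled A1C value in free text, as floats."""
    return [float(m) for m in LAB_PATTERN.findall(text)]

print(extract_a1c(NOTE))  # [6.8]
```

Note that even this toy misses the "down from 7.4" value, which hints at why clinical information extraction is genuinely hard.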
We are now entering into a new era with generative AI models that are very powerful at producing outputs: written summaries of a condition or a hospitalization or a whole patient's experience. And that's been very exciting, but it's also a first-in-history sort of problem. So figuring out how to do that safely, how to do that responsibly, and how to study what the impact of those tools is, is really part of the challenges that we're thinking about at Google.
It's interesting, because, and I think we'll get into the Meditech partnership, I interviewed Marty Past, and we talked about the Meditech implementation that they're doing. And just the time to ROI for that project is amazing in and of itself.
But he was talking about some of the tools that Meditech has brought to bear, such as the search functionality. And this is what Google has been good at for decades; it's the foundation for the business. But I want to translate this into the day to day for the healthcare provider. Let's say a typical ED doc: somebody presents and they have to find all this information. What were they doing before that search functionality, and how has it changed their day-to-day life?
Yeah, so the first thing I'd say is that clinicians don't generally think of search as a tool in EHRs. We're very used to a lot of screen time, or historically a lot of time in front of big stacks of paper charts, and what clinicians end up doing is browsing data by type.
Meaning, I look at a bunch of notes, I look at a bunch of labs, I look at a bunch of reports, or maybe scanned documents. And you have to create a mental model of the patient and all of their associated conditions by seeing data organized by type. And so it's very effortful. It's a high cognitive load for clinicians to say, okay, I need to understand this patient's diabetes.
And to do that, I'm going to look sequentially at notes, labs, reports, scanned documents, and then I'm going to create that mental model. So one of the things that I think we've really focused on is that one problem is how information is presented to clinicians. You might think of search in a very narrow way: you're looking for a needle in a haystack, you want to find a very specific lab result.
And so you use that term. And EHR systems can work fairly well, but it often has to be an exact match, and then you only see, let's say, the lab result that you were searching for. At Google, and in our partnership with Meditech, we've really thought about search differently and have, to your point, used a lot of things that Google has learned
in the development of Google Search to apply to medical records. And specifically, rather than looking for an exact match of letters or an exact match of a specific term, we use search based on what's called a knowledge graph, which is a much more complex representation of the relatedness of different ideas.
And so searches that would probably not be very helpful in one of those string-matched or more confined types of searches can be quite useful using our technology. As an example, if I search diabetes, which is a very broad concept, in a diabetic patient's record I might get a lot of very confusing results. But using the knowledge graph, we're able to bring forward information like notes that talk about diabetes and the specific references, but also labs like glucose or hemoglobin A1C that are relevant to the management of diabetes. So we bring structured and unstructured information together, and with a simple search term the provider can see a lot of detail from a lot of different data types that relate very specifically to that concept.
And our sense is that creates a much faster mental model of that concept of diabetes. So that's one way we've thought about how search can really do something that has historically been difficult for clinicians to do when they're browsing data by type.
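To make the contrast concrete, here is a minimal Python sketch of concept-expanded search versus exact string matching. The "knowledge graph" is a hand-built toy dictionary and the record items are invented; none of this reflects Google's or Meditech's actual implementation, it only illustrates why a query for "diabetes" can surface labs that never contain that word:

```python
# Toy sketch: concept-expanded search over a medical record vs. exact match.
# A real system uses a rich medical ontology; this dict is an assumption
# made purely for illustration.
KNOWLEDGE_GRAPH = {
    "diabetes": {"diabetes", "diabetes mellitus", "glucose",
                 "hemoglobin a1c", "metformin"},
}

RECORD = [
    ("note", "Assessment: type 2 diabetes mellitus, well controlled on metformin."),
    ("lab",  "Hemoglobin A1C: 6.8%"),
    ("lab",  "Glucose: 112 mg/dL"),
    ("note", "Patient reports improved exercise tolerance."),
]

def exact_search(query: str):
    """String match only: misses related labs that never say 'diabetes'."""
    q = query.lower()
    return [(kind, text) for kind, text in RECORD if q in text.lower()]

def concept_search(query: str):
    """Expand the query through the graph, then match any related term."""
    terms = KNOWLEDGE_GRAPH.get(query.lower(), {query.lower()})
    return [
        (kind, text)
        for kind, text in RECORD
        if any(term in text.lower() for term in terms)
    ]

print(len(exact_search("diabetes")))    # 1 hit: only the note naming diabetes
print(len(concept_search("diabetes")))  # 3 hits: the note plus A1C and glucose labs
```

The point of the sketch is the gap between the two result counts: the related labs surface only once the query is expanded through the graph.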
What percentage of the record is still unstructured? I hear 80/20, I hear 70/30. 70/30 sounds pretty optimistic to me.
Yeah, 80 maybe, but I think what's tricky is that a lot of medical intelligence is locked up in unstructured data. The ratios, which way they fluctuate, may vary by specialty or patient or even by region. The unstructured data is often where you get to see what a clinician is thinking and understand not only what brings me up to right now, but what people are looking for going forward.
What do we anticipate will be happening with this patient, and how can I put all of that into context? So our sense is that unlocking the value of unstructured data is really one of the most important things about the ways that we've approached this space and some of our partnerships.
I want to talk about summarizing the record, because we could use the use case of an ICU patient that has bounced around and gone to different facilities, and we're collecting all that information via the HIE or whatever the mechanism is to bring that in. And generally there's maybe an ICU nurse or somebody who's responsible for creating, like, the life of the patient.
So that you can see how they progressed, how they started, all that kind of stuff, and they can make sure that they have all the relevant information to work from. Are we getting closer to creating that life-of-the-patient summary for an ICU team?
Yeah, it's a really great question, and I really like that formulation.
And I would say you could even broaden it out so that there are these moments where informational needs are high and knowledge of a patient is low. This happens at hospital admission, ICU admission, when you're a consultant seeing a patient for the first time, when a patient's kind of new to a practice, maybe at an annual physical where you want a holistic view.
And the way that we've thought about that, and the way that we're implementing work with Meditech, is to really think about the representation at a couple of different layers of granularity. So we use what I'm going to call old-fashioned machine learning and AI to index the record and organize things by condition, generally in fairly coarse buckets.
But then we allow the user to explore inside of those groupings. So as an example, earlier when I said diabetes, that would require me typing diabetes or DM or diabetes mellitus into a search bar. What if instead we just automated the process of bringing forward the information related to diabetes?
So now when the provider opens up the record, there is a list of the patient's conditions. Not a static list, an automated list, and they can look at each condition and see the details. What we're doing, and what we're really excited about with Meditech, is creating that kind of view.
That kind of experience where a provider, the first time they're seeing a patient, can go in, see the conditions, explore them at different levels of detail depending on their needs, and what they're able to see takes them right to the medical record. At this point, what I'm describing is not generative.
We're not building a new representation. We're just creating a really highly structured index of the record and showing you where to look for all of the details. That's really the first step of how search and summarization might create that whole patient representation in a more automated way. And that's really the direction of the product that we're creating.
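As a rough illustration of that "index, don't generate" idea, the Python sketch below builds a navigable condition index where every entry is a pointer back into the source record rather than generated text. The data shapes and condition tags are invented for the example (in a real system the tags would come from an indexing pipeline, not be hand-supplied), and nothing here reflects the actual Meditech or Google design:

```python
from collections import defaultdict

# Hypothetical record items: (item_id, data_type, tagged_condition, text).
# The condition tags are supplied by hand to keep the sketch self-contained.
ITEMS = [
    ("n1", "note", "diabetes",      "Type 2 diabetes, on metformin."),
    ("l1", "lab",  "diabetes",      "Hemoglobin A1C: 6.8%"),
    ("n2", "note", "heart failure", "HFrEF, EF 35%, on lisinopril."),
    ("l2", "lab",  "heart failure", "BNP: 420 pg/mL"),
]

def build_condition_index(items):
    """Group record pointers by condition. Nothing here is generated text;
    each entry just says where to look in the original record."""
    index = defaultdict(list)
    for item_id, data_type, condition, _text in items:
        index[condition].append({"id": item_id, "type": data_type})
    return dict(index)

index = build_condition_index(ITEMS)
# The provider first sees the automated condition list...
print(sorted(index))           # ['diabetes', 'heart failure']
# ...then drills into one condition for pointers back into the record.
print(index["heart failure"])  # [{'id': 'n2', 'type': 'note'}, {'id': 'l2', 'type': 'lab'}]
```

Because every entry resolves to a real record item, verifying the index means following a pointer, which is the accuracy property described above.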
So we don't have to worry as much about accuracy and reliability, because it's literally pointing to the medical record. And we're going to talk about generative AI, and that's where we get a little sideways when we talk about it. So that's, again, you're saying, old-school, traditional AI models that have been around for a long time. We know how to do this.
Yeah, and a long time in this world might mean five years, as opposed to one or two, but yeah, your point is well taken. That doesn't mean there isn't a lot of risk in working in this space, because we are doing a form of transformation of that data and organization of that data around medical concepts as opposed to data type. So we spend a lot of time thinking about the evaluation of how accurate these systems are and how well they work. But the output is not something that was generated by the computer. It's something that's organized by a computer and understandable by a human who would use it as a navigational tool.
How do we feel about making diagnoses from that information?
Yeah, I think that's an area we would approach with great humility. We think of our tools right now as being at a state where they help organize information and bring a provider up to right now, and they do that in a way that's assistive rather than predictive or prescriptive.
So we very much had the idea of what would be assistive: what would a really good medical student or medical assistant give to the provider? But we've been very explicit that we are not building a system that says, hey, we're looking over all of these symptoms and diagnoses, and we think the patient might have X, Y, or Z, or your next action is this.
And it's not that there aren't great opportunities in thinking about that space. It's just that we need to be very explicit about what we are using AI for, and as of now we have created some firewalls around certain sorts of use cases, not because they aren't super interesting, but because that probably is a little bit ahead of where we are right now in the moment.
They are very interesting, but I understand. Not to reference your competitor here, but Satya Nadella was talking about the copilot design construct and how important that is going to be over the next couple of years: to make sure that we bring the culture along with us, that we help people to see what it can do before we try to do, I don't know, the truly visionary things that we're just not ready for.
Yeah, it is the question of establishing trust. It's a question of understanding how new tools work. And it's one of the reasons why the idea that we're grounding you back to the primary data is useful. Clinicians will frequently say, trust, but verify, right? I believe what the ER tells me about the ICU admission, but I'm still going to look at the x-ray myself.
I'm still going to look at the EKG myself. And with that mindset, I think you can create patterns for clinicians so that they recognize, okay, I see what the tool is doing, and I can verify it in this way.
I want to talk a little bit about data privacy and security, because in healthcare there's a concern that somehow, if my EHR provider is using this technology, it's going to show up in a Google search result over here. I know the answer to this question, but I would love for you to just explain it.
Yeah, no, absolutely. Both the clinical team and the health team at Google have for a long time recognized, and Google has from its outset recognized, that personal data is highly private, and we know there is no data more personal than health data. So any work that we do with what we would call partner data, the health system or EHR data, is done with a business associate agreement, a BAA, in place, and under not only all the mandates of HIPAA but also external regulatory review based on a whole bunch of external criteria and external audits. So all that data lives in a very heavily firewalled space within Google and is never mixed with any other person's data or joined to any other data. We treat that as extremely privileged within Google.
And that never gets shared across search or any other features, or really even across any other parts of Google. So it's tightly controlled. What I think is interesting to think about is the degree to which, over time, certain kinds of data dependencies may be less incumbent on a company like Google and more an opportunity for partners to understand and own their own data, data that a company like Google never has to touch.
So the idea here would be: we create certain models and platforms and technologies that can then be used and essentially personalized by other providers for a very specific use case, without needing person-level or patient-level data to move back and forth between groups. It's a long answer, but I would summarize it in two ways. One, right now that is highly privileged data that is only managed under a BAA. And two, there are potential opportunities over time to think about building such systems without as many data dependencies or requirements.
Yeah, I'm curious about the Meditech implementation. In a lot of cases, there's an integration that's done and it feels like an add-on or a bolt-on. The things I saw didn't feel that way. I didn't even know I was using Google Search. It was very integrated into the workflow, very integrated into the experience. Was that part of the design?
Yeah, very much so. I think one of the things that we really are focused on is wanting to have a seamless experience for providers that doesn't force them to jump around between different applications or get out of their workspace.
So that really was part of the design, and that, I think, is something that you're going to continue to see as we progress with our Meditech partnership. The goal really was to create a tool that would be intuitive for clinicians to use, and that would help them within the processes of care, but that would be flexible enough to be useful not only for doctors but for nurses, for medical assistants, for people who manage the health of populations and have to do lots of repetitive searching tasks. Those were all different use cases that we were trying to support.
And that's why we came up with the implementation that we have for our pilot, which will be going live at the end of this year with search and summarization.
So, generative AI. By the way, I want to commend you for the discipline. We haven't really talked much about generative AI, and it's such a hot topic.
But yeah, so I did a webinar with UC Davis and Stanford and UNC and others, and we were talking about the use of generative AI. And it's interesting. It is out there. It is in front of clinicians. And one of the discussions we had was around the case where generative AI generates the note, right? The response to the note, and it's more empathetic, it's more clear, whatever the criteria were that they used for the study. But even with that study and that information, they were telling me essentially that it generates the note and in 75 percent of the cases, the doctor deletes it.
Right. They just start over. They're not ready to even read it and correct it, or read it and approve it, and away it goes. So I want to talk about that. I want to talk about use cases for generative AI. I want to talk about physician adoption of the tools and what it's going to take for physicians to feel comfortable.
And then probably what I'm going to end with is the Biden administration came out with their executive order yesterday, and one of the things in it was watermarking. I'm wondering how much in healthcare we're going to be required to watermark: hey, this information was generated by a set of AI tools.
So that's three different topics, but I'll let you start with: where are we going with generative AI in healthcare?
Yeah, I think it's a great question, and there are a couple of different ways to answer. The National Academy of Medicine had a symposium last week where a lot of these same issues were surfaced.
And yes, with the new mandate from the Biden administration, it will be very interesting to see what happens over the coming weeks and months. One way the use cases have sometimes been framed is to go from operational and administrative use cases, up to more clinically specific use cases, up to things that are more future-looking, prediction of a diagnosis, for example. So you can think of one continuum that goes from more operational to more advanced. But another way to think about it is, what are the different stages of a workflow? And so we think about tools that are helpful for assessment: where can you use summarization to get a provider up to right now, to understand what's going on with the patient?
What are ways in which summarization might be useful to prioritize or decide certain interventions? And then, what are the outputs that could be supported with summarization? That tends to mean examples like note generation or document generation of some sort. And each of those has slightly different needs.
I can maybe speak at the first level, the assessment piece of what generative AI can do, and it leverages a little bit of what we've been thinking about with search and summarization. So what we're doing with Meditech, and what we're doing in general with search and summarization using pre-generative AI outputs, those that organize and index records like we were talking about, is take an output that's all of the stuff that we know about diabetes, or all of the stuff that we know about heart failure, and now write me a summary of this patient's condition. Make it two or three sentences, like a clinician would use, so I could really very quickly understand this patient's heart failure. Not so that I can look through all the resources and piece it together, but tell me.
And here there are a couple of ways to think about how clinicians might gain comfort with that kind of output. The first is the watermarking, the labeling, and you'll see this on Google when you use Search or Bard: we try to make it clear that an output is generative, with labeling on images and other things that flag it.
But what I would say is different about healthcare, and one of the things that's different about how we're thinking about adoption from a clinician perspective, is that telling me three sentences about heart failure when there are 500 pages of heart failure information may actually not make my situation better.
Because if I don't trust those three sentences, I'm still looking at 500 pages of heart failure information I have to sort through. So one of the ways that we've thought about this is what's called grounding. And you start to see this with some of the generative AI experiences that you can have on Google Search generally.
The idea would be, if I gave you two or three sentences, I could look sentence by sentence and see the evidence in the chart that supported that statement. And so grounding a summary is one way that you might encourage use and build trust between a clinician and an AI system. So that's maybe an early example of some of the things that bring together what you were discussing: what would be a use case?
Bringing information together and using natural language to express it, how you could label and watermark that in some ways. But how would you then trust it? That's probably an issue of grounding into the source material, where a clinician would look for a source of truth.
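A minimal way to picture grounding is to pair each summary sentence with the chart passages claimed to support it, then flag any sentence with no evidence behind it. This Python sketch is my own toy illustration, with invented chart IDs and text; it is not how Google's products implement grounding:

```python
# Toy grounding check: every generated sentence should point at source evidence.
CHART = {
    "note-12": "Echo shows EF of 35 percent, consistent with HFrEF.",
    "lab-7":   "BNP elevated at 420 pg/mL.",
}

# A "grounded summary" pairs each sentence with the chart items said to support it.
summary = [
    {"sentence": "Patient has heart failure with reduced ejection fraction.",
     "evidence": ["note-12"]},
    {"sentence": "BNP is elevated.",
     "evidence": ["lab-7"]},
    {"sentence": "Patient is improving on current therapy.",
     "evidence": []},  # nothing in the chart backs this up
]

def ungrounded(summary, chart):
    """Return sentences whose cited evidence is missing or empty,
    the ones a clinician would be told to treat with suspicion."""
    return [
        s["sentence"]
        for s in summary
        if not s["evidence"] or any(eid not in chart for eid in s["evidence"])
    ]

print(ungrounded(summary, CHART))  # ['Patient is improving on current therapy.']
```

The payoff is exactly the trust-but-verify pattern described above: supported sentences can be checked one click away, and unsupported ones are surfaced instead of silently blended in.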
With AI, it's really interesting to me, because I saw the Gartner hype cycle on AI, and generative is at the very peak. In fact, I think it just keeps going higher and higher. But there's a whole bunch of AI technologies, like NLP and, quite frankly, OCR, just a whole bunch of other things, that are way past the trough of disillusionment.
We've been using it for quite some time. In fact, I heard an academic medical center say they went out and inventoried all the places they were using AI so that they could let people know, hey, this isn't new. We have it in a lot of places.
Yeah, if you use autocorrect, if you ever wonder how your sentence gets completed in your email, those things are really well-established examples of AI that we have gotten very comfortable using.
And you're right, that's also, I think, part of the story of how you get people comfortable with these new tools. I like to work from analogy. When I started as an ICU doctor, we placed central lines, large catheters, using anatomical landmarks. You knew where to put the needle based on where the muscles and the bones were intersecting.
And you didn't get to see below the surface. Then ultrasound was introduced and really changed how that procedure was done, but you didn't start using the ultrasound without training. You needed training. And one of the interesting things you learned was that in addition to being able to see the difference between an artery and a vein, sometimes you saw shadows.
Sometimes, if you had the probe in the wrong place, you saw artifact. It looked like maybe a fluid collection, but it was just an artifact from a bone. I think that same process will happen as clinicians start to think about AI. It's: I need some education in this technology, and I need some experience using it
to understand both the strengths and also the limitations, the shadows and the blind spots, because those are going to be part of the reality going forward. And I think one of the most important processes that's really starting is thinking about how we get people comfortable with this technology.
And I think, to the hype cycle point, part of that is being very forthright about what we know and what we don't know, and that this is really still a very new area.
All right, last couple of questions. We're coming up to the end of our time. I'm curious about the patients. What do we expect the impact on patients will be? Will they experience a higher quality of care? Will they have better access? Will the cost come down? Will their physician be less stressed, hopefully?
Yeah, right. So one of the ways that I think Meditech looks at the development of new technology, and certainly one of the ways we've thought about progress in this space, is around the triple aim, or, as I really like to think of it, the quadruple aim: what can we do that improves outcomes, improves the provider experience, improves quality, and reduces cost? And I think the answer is there are opportunities across all of that. But one way that I would like to see technology work, and this may sound strange coming from someone who works for a technology company, is I would like to see more face-to-face time when patients and providers are together. I would like to unburden providers from browsing data by type and all of the effortful work of creating that complex mental model of a patient's disease state, and have really intuitive tools that allow them to quickly get up to speed and focus on the stuff that computers can't do: reading the room, feeling a pulse, getting a sense of what really matters to the patient, having time for the important conversations, and not being overwhelmed with emails and pajama time and the things that make the provider experience feel more like you are moving data from point to point rather than doing what you're supposed to do, which is care for people at their most vulnerable.
With the closing question, I want to take you out far enough that it's going to make a difference. So let's say seven years from now, given the trajectory of where AI is going. Actually, we could say two years from now it'll probably be pretty dramatically different, but seven years will be significantly different than it is today, I believe. What does it look like to practice medicine in seven years?
And I understand the challenge you're going to have in answering this, because there are going to be physicians sitting there going, oh, but seven years from now is a long time. And this technology is moving very rapidly. We're looking at multimodal models, we're looking at a bunch of different ways that these models are going to check and balance each other, and we're going to have very specific models, and so we're making a lot of advancements.
In seven years, this could be pretty well integrated into healthcare, I would imagine. What does it look like to practice medicine in seven years?
One of the ways I think about that: I don't think it's going to look much like it looks right now. I do think that things are changing very quickly.
I think one of the places where we will see change fastest is in education. Seven years from now, the providers who are finishing medical school or finishing their training will have grown up at a time when we really see an epistemic shift in how information is handled and exchanged.
When I was in medical school, the value was to know 17 causes of hypercalcemia, which would make you a better medical student than the guy or the woman next to you who knew 16 causes, right? The value was in the stuff that I had internalized, whereas now, the value is in who can look that up faster.
Or who knows how to find information more quickly and put things together. And I think for the next generation, it's going to be: how can you use tools, some of which we don't even have names for yet, in the acquisition and application of knowledge? That is going to look very different.
But I think the impacts will be large on how we think about education, how we think about evaluation, how we think about maintenance. So from
medical journal to bedside will be a lot shorter time frame.
I also think about the acceleration of certain kinds of discovery. One place where I try to maintain equipoise, because it is a heady time, is that generative models and multimodal generative models are extremely exciting.
But right now the language models are really good for certain kinds of unstructured data and limited for certain kinds of structured data. The patient-level representation is going to be both structured and unstructured, and it matters how that all comes together. But I do think that will unlock certain kinds of acceleration in discovery.
And I also think that it will unlock changes in the patient-clinician exchange, in what that looks like. My hope is that it looks more human and less screen-based over time.
It'll be interesting. Yeah, just ask your large language model to do some math and you'll see what we're talking about here.
It does not perform well on a math test.
Exactly. And one of the last thoughts that I have is that there's a ton of excitement around very specific models and the newest model and the biggest model, but the models themselves, with few exceptions, are not actually tools or products.
And what I think is really exciting is the ways in which all of these pieces, older models, newer models, different ways of sharing and exchanging information come together to really build innovative products separate and aside from the model
itself. Pete, I want to thank you for your time and we will have to stay in touch because I think if we talked a year from now, we would be talking about a whole bunch of different things and I think it would be exciting.
Thank you for your time. Thanks
so much, Bill. I look forward to it and thank you.
I love the chance to have these conversations. I think if I were a CIO today, I would have every team member listen to a show like this one. I believe it's conference-level value every week. If you want to support This Week Health, tell someone about our channels; that would really benefit us. We have a mission of getting our content into as many hands as possible, and if you're listening, hopefully you find value, and if you could tell somebody else about it, it helps us achieve our mission. We have two channels: the conference channel, which you're listening to, and This Week Health Newsroom. Check them out today. You can find them wherever you listen to podcasts: Apple, Google, Overcast. You get the picture. We are everywhere. We want to thank our keynote partners, CDW, Rubrik, Sectra and Trellix, who invest in our mission to develop the next generation of health leaders. Thanks for listening. That's all for now.