The introduction of artificial intelligence into healthcare didn’t just change the game – it turned it on its head. But after a few tumultuous years, AI may be hitting its stride, according to Clara Lin, MD.
“These are exciting times we live in,” she noted. “People are starting to see the broad applicability of AI” and recognize the potential it has to automate manual tasks and improve diagnostic capabilities.
At Seattle Children’s, where she has held the CMIO role since 2022, AI is layered on top of decision support to give clinicians quick access to decades’ worth of research. As a result, providers “don’t have to scroll through algorithms to figure out what to do with a three-year-old who’s vomiting,” she said. “They’re able to deliver the best quality care to the child in front of them.”
Reaching that point, however, hasn’t been without challenges. In fact, as AI assumes an increasingly vital role in care delivery, it is becoming more important for organizations to develop a structured, deliberate approach to evaluating and implementing tools. During a recent interview with This Week Health, Lin discussed the strategy her team is utilizing to ensure any application of AI isn’t just solving a problem, but is doing so in a way that’s compliant, secure, and equitable.
A critical part of that strategy is establishing governance around AI and embedding it in executive roles, noted Lin, who sits on the AI Review Board along with Zafar Chaudry, MD (whose title now includes Chief AI and Information Officer in addition to Chief Digital Officer).
“We assembled a group of experts, including AI specialists and technologists and our chief architect (Nigel Hartell), to have the same sort of structure that we have for human subject research,” she said. “We want to have the same guiding principles when it comes to AI,” which means assessing every initiative from multiple perspectives.
The driver for the review board was the explosion in AI that started with ChatGPT’s introduction into healthcare. “Everybody wanted to use AI because they could see, even at the beginning stages, how ChatGPT could help them write emails, PowerPoints, or patient letters,” she recalled.
The technology was so promising, in fact, that she and Hartell were bombarded with “raw, unfiltered ideas” that focused more on the technology itself than on solving a problem. Then, once reports of AI hallucinations started to surface, the demand plummeted. Now, however, “we’re coming out of that lull and moving into a place where the requests we’re getting are much more structured and well thought out,” Lin said. Clinical, operational, and administrative users are coming forward with specific use cases in which AI can have an impact.
That’s where governance comes into play, she said, noting that every proposal goes through a standard intake process that asks specific questions around risk management, equity, security, and more. For example: Have you thought about equity? How does this impact people you haven’t thought about? What are the risks it may bring to privacy? How are you planning to mitigate those risks?
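To make the shape of that intake concrete, here is a minimal sketch of how such a submission might be structured and checked for completeness. The field names and questions are illustrative assumptions, not Seattle Children’s actual form.

```python
from dataclasses import dataclass, field

@dataclass
class AIIntakeProposal:
    """One proposal submitted for AI review (hypothetical fields)."""
    title: str
    problem_statement: str = ""   # the problem being solved, not the technology
    equity_impact: str = ""       # who might be affected or excluded?
    privacy_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

def open_questions(p: AIIntakeProposal) -> list[str]:
    """Return the intake questions a proposal has not yet answered."""
    gaps = []
    if not p.problem_statement.strip():
        gaps.append("What problem does this solve?")
    if not p.equity_impact.strip():
        gaps.append("How does this impact people you haven't thought about?")
    if not p.privacy_risks:
        gaps.append("What risks may this bring to privacy?")
    if not p.mitigations:
        gaps.append("How are you planning to mitigate those risks?")
    return gaps

draft = AIIntakeProposal(title="Chatbot for clinical pathways")
print(open_questions(draft))  # all four questions still open
```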
The board then provides recommendations on how to move forward – and it isn’t always a green light, Lin noted. “Once we get the idea, we start to think about, do we build it? Do we buy it? Or do we wait for Epic to develop it? Because we know Epic is working on some things.” And if that’s the case, “we don’t necessarily need to spend money to buy or build. We can wait for Epic, which can be integrated seamlessly into our clinical experience.” If that’s not an option, they won’t hesitate to develop in-house or in partnership with Google, Microsoft, AWS, or another vendor.
Before that can happen, of course, a set of criteria must be met, one of which involves equity. “We’re looking at every AI project through the lens of ethical considerations,” she said, ensuring that no solution works to “further marginalize our underrepresented and underserved groups. Equity is a really big deal for us. It’s in our blood at Seattle Children’s.”
To that end, the standard intake includes an impact assessment that encourages users to examine the training data sets used for algorithms and make sure various groups are represented.
“A lot of times we don’t even think about it,” she said. “We think that if it’s an algorithm, it has to be right. We have to be very rigorous about how we’re evaluating the quality of the data.”
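One concrete form that rigor can take is comparing each group’s share of an algorithm’s training data against its share of the patient population. The sketch below is a hypothetical illustration of that check, not the hospital’s actual tooling.

```python
from collections import Counter

def representation_gaps(training_labels, population_shares, tolerance=0.5):
    """Flag groups whose share of the training data falls well below
    their share of the patient population.

    training_labels: one group label per training record
    population_shares: each group's share of the patient population
    tolerance: flag a group if its training share is under
               tolerance * its population share
    """
    counts = Counter(training_labels)
    total = sum(counts.values())
    flagged = {}
    for group, pop_share in population_shares.items():
        train_share = counts.get(group, 0) / total if total else 0.0
        if train_share < tolerance * pop_share:
            flagged[group] = (train_share, pop_share)
    return flagged

# Example: a group that is 20% of patients but only 4% of training data
gaps = representation_gaps(
    ["a"] * 96 + ["b"] * 4,
    {"a": 0.80, "b": 0.20},
)
print(gaps)  # {'b': (0.04, 0.2)} -- group 'b' is underrepresented
```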
When the proper policies and structures are in place – as is the case at Seattle Children’s – there are myriad ways in which AI can increase efficiency and improve outcomes.
One is by leveraging AI to search Clinical Standard Work Pathways, a set of documented treatment approaches developed to improve quality of care through standardization.
Wading through dozens of these decision trees, of course, isn’t feasible at the point of care. A chatbot, on the other hand, can sift through the information and answer key questions for clinicians, who are then able to focus more time and energy on patients and families.
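The retrieval step behind such a chatbot can be sketched simply. The example below substitutes plain word overlap for the embeddings and language model a production system would use; the pathway names and contents are invented for illustration.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def best_pathway(question: str, pathways: dict[str, str]) -> str:
    """Return the pathway whose title and text share the most words
    with the clinician's question."""
    q = tokens(question)
    return max(pathways, key=lambda name: len(q & tokens(name + " " + pathways[name])))

pathways = {
    "Acute Gastroenteritis": "vomiting, diarrhea, dehydration: start oral rehydration...",
    "Asthma": "wheezing, albuterol, steroids, respiratory distress...",
}
print(best_pathway("What do I do with a 3-year-old who is vomiting?", pathways))
# -> Acute Gastroenteritis (overlap on "vomiting")
```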
The organization is also utilizing AI to reduce administrative burdens by taking on tasks such as prior authorizations and coding. “The burden of that work is heavy,” Lin noted. “The same workforce member who’s doing that work could be interacting with our clinicians and giving direct guidance on how to document better.”
Another area of focus is improving translation, which has been a significant challenge for many organizations. According to research from the UPenn School of Nursing, those who don’t speak English well “have lower satisfaction rates and worse health outcomes, including more hospital readmissions and longer stays.”
As part of a pilot scheduled to take place early this year, Seattle Children’s hopes to determine whether GenAI can facilitate improved communications.
“We’re dedicated to making sure the correct translation gets into the hands of our patients and their families,” she said. The problem is that it takes time, “particularly if it’s a language of lesser diffusion,” which in some cases can entail sending discharge instructions to a specialized translator, who then has to relay the information back.
“Best case scenario, we get it back later that day after the patient has already left. You email it to them, call them, and hope they get it,” she said. “The worst case scenario is that it takes two days or even more to get it back, and you then send it to the patient. By that point, we’re done.”
That’s not acceptable, particularly when it comes to time-sensitive instructions such as caring for a post-surgical wound.
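The pilot’s exact design hasn’t been described, but one plausible workflow is to hand the family a clearly labeled GenAI draft before they leave and swap in the human-verified translation once it arrives. Here is a minimal sketch under that assumption; machine_translate is a hypothetical placeholder, not a real API.

```python
from dataclasses import dataclass

@dataclass
class DischargeTranslation:
    text_en: str
    language: str
    draft: str | None = None      # GenAI draft, ready in seconds
    verified: str | None = None   # human-reviewed version, may take days

def machine_translate(text: str, language: str) -> str:
    # Hypothetical placeholder for a GenAI translation call.
    return f"[DRAFT - pending human review, {language}] {text}"

def instructions_for_family(t: DischargeTranslation) -> str:
    """Prefer the verified translation; otherwise return a labeled draft
    so the family leaves with usable instructions the same day."""
    if t.verified:
        return t.verified
    if t.draft is None:
        t.draft = machine_translate(t.text_en, t.language)
    return t.draft
```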
The ultimate goal is to be able to speak with patients and families in person, through an interpreter, and be able to address any concerns. “That will improve the quality of our care so much for patients who prefer not to use or can’t use English to communicate with their physicians or receive instructions,” said Lin, who presented on the topic at the ANIA Annual Symposium along with leaders from Boston Children’s, Stanford Children’s, and University of Washington. “It’s super important to us.”
What’s also important, she added, is ensuring any initiative – regardless of whether it involves AI – is designed and implemented with pediatric patients in mind. “A lot of times, pediatrics is an afterthought. We trained it on adults, and if it doesn’t work for kids, we’ll improve it later on,” she said. “That doesn’t work.”
What does work, in any clinical setting, is acknowledging that it’s not just the patient who needs to be considered, but caregivers as well. “They’re the reader and receiver of the information,” she said, “and therefore, patient and family experience has to be front and center in the design of technology.”