This Week Health

In the News

CommonSpirit Health creates value-based care platform

September 24, 2023

CommonSpirit Health has launched a national value-based services platform, the Population Health Services Organization (PHSO), focused on expanding access to equitable care, improving quality and outcomes, and lowering the cost of care.

The PHSO will provide services such as advanced population health analytics, network management, care coordination, data management and analytics, and technology infrastructure and reporting, with a focus on helping providers and networks succeed in value-based care.

CommonSpirit Health serves urban and rural communities across 24 states, and is one of the nation's largest providers of Medicare and Medicaid services.

Because of that, the company says the PHSO will serve a more diverse payer portfolio than other management services organizations. Its goal is to improve equitable health outcomes through affordable, coordinated, high-quality care.

WHAT'S THE IMPACT

Value-based care prioritizes value over the volume of care provided, rewarding holistic and coordinated care across the continuum to improve health status, quality and equity. It's often driven by value-based contractual agreements that are designed to incentivize providers to achieve better outcomes, quality and patient experience while lowering the total cost of care.

The demand for value-based agreements by payers and providers is increasing. For example, CMS has said that all Medicare beneficiaries and the vast majority of Medicaid beneficiaries will be in a value-based care relationship by 2030.

The PHSO is designed to foster collaboration between independent and employed providers by supporting networks that are inclusive of both – something CommonSpirit expects will result in elevated care.

Today, half of the providers engaged in CommonSpirit value-based agreements are not employed by CommonSpirit – a portion of the network the PHSO anticipates will only grow in the future.

The PHSO will build on the expertise of CommonSpirit's existing value-based programs, which include full Risk-Bearing Organizations (RBOs) and 10 Accountable Care Organizations. CommonSpirit said that over the past five years of participation in the Medicare Shared Savings Program, it has saved Medicare more than $474 million by prioritizing proactive outreach and addressing not only medical, but also behavioral and social needs.

THE LARGER TREND

Americans are largely on board with the concept of value-based care, but there's one thing they don't seem to like that much: the term itself, which either doesn't resonate with them or isn't well understood. That's according to research published by United States of Care in August, which found that 64% of the 1,000 people surveyed preferred value-based care to fee-for-service models.

More than half of the respondents (59%) felt positively about the term "value-based care," but even those with a favorable opinion of the term preferred other labels, such as "patient-first care" and "quality-focused care." Many people associated "value-based care" with low quality.

A 2022 report from the Medical Group Management Association found that value-based care only accounts for a small portion of medical revenue in most specialties. Data from the survey found that revenue from value-based contracts accounted for 6.74% of total medical revenue in primary care specialties, 5.54% in surgical specialties and 14.74% in nonsurgical specialties. Across all practices, the median revenue amount from value-based contracts was $30,922 per provider.
 

Twitter: @JELagasse
Email the writer: Jeff.Lagasse@himssmedia.com


How health systems can better protect patient privacy

September 24, 2023

Dr. Eric Liederman, director of medical informatics for The Permanente Medical Group, says good communication with patients about cybersecurity protection is essential – even as risks to protected health information are on the rise, from external bad actors and insider threats alike.

Growing patient discomfort in sharing health information

Beyond health system disruptions such as ransomware that can compromise patient data, cybercriminals are increasingly going after individual patients. Some know they have a "target" on their backs and remain tight-lipped with their healthcare providers, said Liederman. 

Before what he referred to as the major ramp-up in attacks against healthcare that began in 2015, there was "an appreciable minority of patients who were uncomfortable providing all their information to their doctors," he told attendees at the HIMSS Healthcare Cybersecurity Forum in Boston earlier this month.

According to one 2014 survey, 10% of patients distrusted health technology, Liederman said, but a more recent survey found 87% of patients are unwilling to divulge all their medical information.

It's not only "a sense of psychic harm" that patients seek to control by holding back health information; a sense of distrust that their health system can protect them also has them seeking care elsewhere.

"How do we impress upon our patients and our workforce that we're protecting them?"

Implementing mechanisms to ensure the safety of data – from the inside of organizations out – and communicating about cyber protection efforts has resulted in better outcomes, Liederman said. 

Joint governance leads to better patient protection

Liederman credited joint governance for helping to facilitate a higher sense of trust among patients and the workforce.

With joint governance, there's increased dialogue that says, "We're all together on this – all the way to the top of the organization," he said. 

At Kaiser Permanente, members from all parts of the organization play a role in data security, and there's joint decision-making that results in "reduced friction," he said. 

"We have better outcomes because the controls that get implemented to mitigate risk are controls that are jointly agreed to or collaboratively agreed to," said Liederman. "And so they mitigate risk without impairing our operations, or especially patient care, and improve our crisis response because everybody understands what's at stake. 

"We have faster implementation for controls because people don't push back," he added. "And there's reduced career risk, especially for the CISO, right?

"You're one bad day away from having to look for a new job. It shouldn't be that way." 

Liederman stressed how critical it is to impress upon both patients and the workforce what health systems are doing to protect them, and he advised making the communications team a health IT partner.

"You're all here, you all are presumably either directly involved with protecting your organizations or supporting organizations in protecting their data. Do people know what you're doing?"

Protecting against insider threats

While cybersecurity is designed to protect against external threats, insider threats are a significant cause for concern, especially in healthcare. 

"Is there sufficient attention paid there?" Liederman asked. 

"There's two kinds of insider threat actors," he said. While one is very similar to the external attacker, such as a disgruntled employee, "those folks are really a small minority."

Liederman noted that, while cyber professionals try to focus on finding and mitigating these insider risks and blocking their actions, there are also the "human beings who sometimes, occasionally get tempted to use their credentials to look up information they shouldn't look at" to consider.

It's somebody they know, somebody they know of or somebody prominent in the community who is hospitalized. "What's going on? I want to know, right?" 

That insider threat – snooping – is substantially different from typical cybersecurity efforts, said Liederman. 

Healthcare provider employees are tempted to occasionally look at the health records of people they know – friends, family and coworkers. But then there are the people they've heard of. 

"I say famous and infamous. It isn't just famous people. It isn't just the mayor or celebrities. It might be a mass murderer who's been arrested and shot and is now in your emergency department," Liederman said.

"These are just human beings who get tempted. And so we want to help them deter themselves from ruining their careers and breaching the privacy of others." 

Liederman noted that earlier in his career – pre-HIPAA – he worked at an academic medical facility where access to lab results and radiology reports was wide open.

"Within a few weeks of being there, I had a colleague approach me, telling me that a coworker had congratulated her on her pregnancy before she even knew of the pregnancy test result herself. 

"And then the next week, somebody told me that they learned of their cancer diagnosis from a coworker giving them tools. That's how they learn they had cancer, right? 

"This was a toxic culture," he said. 

Even though the nearest alternative health system was 100 miles away, two-thirds of employees sought care elsewhere, he said.

Over the next few years, Liederman said that he shut off access to certain departmental systems, implemented an electronic health record with audit trails and began an audit-monitoring program for snooping.

Addressing insider snooping

Access restrictions are a disaster that puts patients at risk, Liederman said. Most at risk are the sickest patients and those considered high-risk-for-breach "VIPs."

To safely address insider snooping, you have to record all views and actions, which HIPAA requires anyway.

But with "smart surveillance" – using the audit trail and focusing on where people are tempted to look – cybersecurity teams can suss out offenders, he said.
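
Liederman did not walk through an implementation, but the "smart surveillance" idea, mining the audit trail for views where temptation is likely and a treatment relationship is missing, can be illustrated with a minimal sketch. The audit-log fields, the care-team lookup, and the coworker/VIP flags below are hypothetical assumptions for illustration, not any vendor's actual schema.

```python
from dataclasses import dataclass
from typing import Dict, Iterable, List, Set


# Hypothetical audit-log record; real EHR audit trails differ by vendor.
@dataclass
class AccessEvent:
    employee_id: str
    patient_id: str
    timestamp: str  # e.g., "2023-09-24T10:15:00"


def flag_possible_snooping(
    events: Iterable[AccessEvent],
    care_team: Dict[str, Set[str]],    # patient_id -> employee_ids with a treatment relationship
    employee_patient_ids: Set[str],    # patient_ids that belong to employees (coworker records)
    vip_patient_ids: Set[str],         # patients flagged as high-risk-for-breach "VIPs"
) -> List[AccessEvent]:
    """Return accesses worth human review: views of coworker or VIP records
    by staff with no treatment relationship to the patient."""
    flagged = []
    for event in events:
        if event.employee_id in care_team.get(event.patient_id, set()):
            continue  # legitimate care-team access
        if event.patient_id in employee_patient_ids or event.patient_id in vip_patient_ids:
            flagged.append(event)
    return flagged


# Example: one legitimate care-team access and one view of a coworker's record.
events = [
    AccessEvent("emp-001", "pat-100", "2023-09-24T09:00:00"),
    AccessEvent("emp-002", "pat-200", "2023-09-24T09:05:00"),
]
care_team = {"pat-100": {"emp-001"}}
print(flag_possible_snooping(events, care_team, {"pat-200"}, set()))
```

In practice, the flagged events would feed a human review queue rather than trigger automatic action, consistent with the goal of deterrence and culture change rather than mass firings.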

The point of implementing an auditing program and letting people know about it is not to fire half of the workforce – "these are skilled, talented, experienced people. You want them to keep working there, you want them to keep their licenses."

The goal is culture change, he said. 

"It's a different mindset from protecting against the outside attackers," he said.

"The goal here is not to find everybody. The goal here is to have a program where you find enough people so that everybody knows there's a program and they deter themselves."

He outlined the basic steps for launching an auditing program:

  • Tell everybody that you have an auditing program. 
  • Tell them you're auditing before you start the program, so that you tamp down on the temptation-based snooping before you even start looking. 
  • Overcommunicate about your auditing program. 

"It works really well and works really fast," he said, noting that within weeks the number of snooping events drops by more than 90% – "and stays that way."

Andrea Fox is senior editor of Healthcare IT News.
Email: afox@himss.org

Healthcare IT News is a HIMSS Media publication.


Gartner D&A Research Board Meeting about Generative AI benefits

September 24, 2023

I had the absolute honor to speak to some of the most prominent Chief Data & Analytics Officers in the world at the Gartner D&A Research Board Meeting this week. We spoke about the benefits and risks associated with Generative AI, how we are applying AI to reduce friction and drive value, and how we're balancing risk without stifling innovation. It was a fantastic discussion, and amazing to see how these leaders are wrestling with these large, challenging topics (as we all are) while still racing to take advantage of all the opportunities AI could yield.

We also spoke about data & analytics being the engine that drives AI, how the role of D&A teams is changing with the introduction of AI (e.g., the Code Interpreter recently released by OpenAI), and how skillsets on analytics teams will have to evolve to make room for this transition. There was broad agreement that Gen AI is similar to prior technology advancements and its impact will be similar. It will reduce manual work and make life easier for us (like most technologies). It will largely displace jobs, not replace them. We will have to upskill ourselves, particularly in logical thinking and critical reasoning, so we can ask the right questions ("prompts"), not necessarily write the code. The role and value proposition of D&A teams will evolve from providing data to providing actionable insights (what is happening, why it is happening, and what to do about it; i.e., telling the story), not developing dashboards and metrics.

We also discussed the critical role data plays in the evolution of AI. Technology is a means to an end; what truly matters is the data generated through our applications. For us to achieve the true potential of AI, we must have rigorous discipline around data management, data governance, and data quality. AI-based outcomes are only as good as the inputs: garbage in, garbage out.

I'm truly inspired by these leaders; the future of data & analytics and AI is bright with these folks at the helm of their organizations! Thanks to Gartner and Alan Braybrooks (a dear friend and mentor) for having me! Mano Mannoochahr, Ryan Swann, Vikrant Bhan, Maria Macuare

#digital #artificialintelligence #digitaltransformation #analytics



Is It Time to Incorporate Large Language Models into EHRs?

September 24, 2023

John Halamka, MD, President, Mayo Clinic Platform

In the March 30, 2023 issue of the New England Journal of Medicine, Peter Lee, PhD, of Microsoft Research and associates described some of the benefits, limits, and risks of using ChatGPT-4 in medicine. For instance, they asked the LLM, “What is metformin?” and received an accurate answer.

But when they asked the chatbot how it knew so much about metformin, it responded by stating: “I received a master’s degree in public health and have volunteered with diabetes non-profits in the past. Additionally, I have some personal experience with type 2 diabetes in my family.” Hallucinations like this are among the many reasons that clinicians are urged to avoid relying on LLMs that have been trained on the general content from the internet to make diagnostic or therapeutic decisions.

Other problems related to LLM incorporation into medical practice include the following:

  • Because GPT-4 and Bard are trained on the contents of the public internet, they incorporate all the bias and misinformation found in the general content of web pages and social media.
  • Google’s MedPalm2, which is additionally trained on healthcare research literature from PubMed, draws on clinical trials whose patients tend to be urban, educated, higher income, white, and middle aged. Generative AI output based on this research literature is likely to be biased and to miss real-world patient experiences.
  • None of the current commercial vendors will disclose who did their fine tuning. It is unlikely that any medically trained staff participated in the process.
  • Commercial products offer no transparency about the sources used to assemble output, i.e. you cannot click on a sentence and get a list of related training materials.
  • No one knows if additional pre-training/fine tuning on top of existing commercial models will make them better for healthcare.
  • Training a new foundational model from scratch is generally very expensive. Additional pre-training and fine tuning are typically less expensive.
  • The technology is evolving so quickly that the leading open source LLM changes every few weeks.

“Prompt engineering”

Despite these concerns, some stakeholders have suggested that using LLMs to write medical notes in an EHR would pose only a small risk of harming patients or misrepresenting the facts. Ashwin Nayak, MD, of Stanford University, and colleagues recently compared the performance of ChatGPT with that of senior internal medicine residents in composing a history of present illness (HPI). The researchers used a process called prompt engineering to produce the best version of each record: the first iteration of the HPI was analyzed for errors by the chatbot so it could correct its mistakes, and this was repeated a second time. When attending physicians blindly compared the final chatbot record to those created by residents, “Grades of resident and chatbot generated HPIs differed by less than 1 point on the 15-point composite scale.” The investigators pointed out, however, that without prompt engineering, the records contained many entries that didn’t exist in the original text. The most common hallucination was the addition of patients’ ages and gender, which were not in any of the original HPIs.
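
The study's actual prompts are not reproduced here, but the iterative step it describes, having the chatbot review its own draft and correct it against the source text, can be sketched roughly as follows. The `call_llm` helper and the prompt wording are assumptions for illustration, not the researchers' protocol.

```python
# Minimal sketch of the "draft, then self-review" loop described above.
# call_llm is a hypothetical helper: wire it to whatever LLM client you use.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Connect this to your LLM client of choice.")


def draft_hpi(dialogue: str, review_rounds: int = 2) -> str:
    """Generate a history of present illness (HPI), then ask the model to check
    its own draft against the source dialogue and drop any detail (such as age
    or gender) that the dialogue does not actually contain."""
    hpi = call_llm(
        "Write a concise history of present illness based only on this "
        f"patient-provider dialogue:\n\n{dialogue}"
    )
    for _ in range(review_rounds):
        hpi = call_llm(
            "Review the HPI below against the dialogue. Correct errors and remove "
            "any detail that does not appear in the dialogue. Return only the "
            f"revised HPI.\n\nDialogue:\n{dialogue}\n\nHPI:\n{hpi}"
        )
    return hpi
```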

In their final analysis, Nayak et al. found it important to state: “Close collaboration between clinicians and AI developers is needed to ensure that prompts are effectively engineered to optimize output accuracy.” They are certainly not the only critics who worry about the risks of using LLMs to create medical notes.

Preiksaitis et al. believe ChatGPT should not be used for medical documentation. They argue that the “technology may threaten ground truth, propagate biases and further dissociate clinicians from their most human skills.” However, it’s important to keep in mind that most clinicians’ notes are not carefully reasoned, human-centered, detailed stories to begin with.

In addition, the intent of early generative AI experiments is not to replace humans but to create a skeleton note for humans to augment and edit, reducing administrative burden. By decreasing the time spent on documentation, clinicians will have more time for patient care and clinical decision-making.

This piece was written by John Halamka, MD, President, and Paul Cerrato, senior research analyst and communications specialist, Mayo Clinic Platform. To view their blog, click here.
