This Week Health


This Week Health is a series of IT podcasts dedicated to healthcare transformation powered by the community


In the News

Top 12 things healthcare organizations want in IT vendors, per KLAS

September 24, 2023

While most healthcare organizations are generally satisfied with their health IT vendors' customer service, one-third remain dissatisfied, KLAS Research reported.

Here are the top 12 factors influencing customer satisfaction, according to the Aug. 24 report, which drew on 53,189 respondents:

1. Proactive ownership of client issues: 44 percent

2. Ability to achieve outcomes: 26 percent

3. Quality of the upgrade experience: 25 percent

4. Development and roadmap communication: 24 percent

5. Guidance and recommendations: 24 percent

6. Communication around bugs and issues: 23 percent

7. Knowledge and empowerment of staff: 23 percent

8. System tailored to needs: 20 percent

9. Access to actionable reporting and insights: 19 percent

10. Implementation and training: 16 percent

11. Regularity of touchpoints: 13 percent

12. Support of integration needs: 11 percent


Is artificial intelligence a cybersecurity ally or menace?

September 24, 2023

Artificial intelligence is pushing cybersecurity into an unprecedented era, offering benefits and drawbacks alike as it assists both attackers and defenders.

Cybercriminals are using AI to launch more sophisticated and unique attacks at a wider scale. And cybersecurity teams are using the same technology to protect their systems and data.

Dr. Brian Anderson is chief digital health physician at MITRE, a federally funded nonprofit research organization. He will be speaking at the HIMSS 2023 Healthcare Cybersecurity Forum in a panel session titled "Artificial Intelligence: Cybersecurity's Friend or Foe?" Other members of the panel include Eric Liederman of Kaiser Permanente, Benoit Desjardins of UPENN Medical Center and Michelle Ramim of Nova Southeastern University.

We interviewed Anderson to help unpack the implications of both offensive and defensive AI and examine new risks introduced by ChatGPT and other types of generative AI.

Q. How exactly does the presence of artificial intelligence bring up cybersecurity concerns?

A. There are several ways AI brings up substantive cybersecurity concerns. For example, nefarious AI tools can pose risks by enabling denial of service attacks, as well as brute force attacks on a particular target.

AI tools also can be used in "model poisoning," an attack in which an adversary corrupts a machine learning model, for example by tampering with its training data or inserting malicious code, so that it produces incorrect results.
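Anderson's point about poisoning can be made concrete with a toy sketch (a hypothetical illustration, not something from the interview or MITRE's work): injecting a batch of mislabeled outlier points into a model's training data drags its learned class representation off target and wrecks its accuracy.

```python
# Toy sketch of training-data poisoning: an attacker who can insert
# mislabeled outliers into the training set corrupts what the model
# learns, even though the model code itself is untouched.
import random

random.seed(0)

def make_data(n):
    # 1-D points: class 0 clusters near 0.0, class 1 near 1.0
    return [(lbl + random.gauss(0, 0.3), lbl)
            for lbl in (random.randint(0, 1) for _ in range(n))]

def centroid_classifier(train):
    # Nearest-centroid model: learn the mean point of each class,
    # then assign new points to the closer centroid.
    means = {lbl: sum(x for x, l in train if l == lbl) /
                  sum(1 for _, l in train if l == lbl)
             for lbl in (0, 1)}
    return lambda x: min(means, key=lambda l: abs(x - means[l]))

def accuracy(model, test):
    return sum(model(x) == l for x, l in test) / len(test)

train, test = make_data(500), make_data(500)
clean_model = centroid_classifier(train)

# Poisoning: insert 60 far-off points falsely labeled class 1,
# dragging that class's learned centroid far from its true location.
poisoned_train = train + [(-5.0, 1)] * 60
poisoned_model = centroid_classifier(poisoned_train)

print(f"clean accuracy:    {accuracy(clean_model, test):.2f}")
print(f"poisoned accuracy: {accuracy(poisoned_model, test):.2f}")
```

The poisoned centroid for class 1 is pulled below zero, so the model misclassifies nearly every genuine class-1 point; real attacks are subtler, but the mechanism is the same.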

Additionally, many of the available free AI tools – such as ChatGPT – can be tricked with prompt engineering approaches to write malicious code. Particularly in healthcare, there are concerns around protecting sensitive health data, such as protected health information.

Sharing PHI in prompts of these publicly available tools could lead to data privacy concerns. Many health systems are struggling with how to protect systems from allowing for this kind of data sharing/leakage.

Q. How can AI benefit hospitals and health systems when it comes to protection against bad actors?

A. AI has been helping cybersecurity experts identify threats for years now. Many AI tools are currently used to identify threats and malware, as well as detecting malicious code inserted into programs and models.

Using these tools – with a human cybersecurity expert always in the loop to ensure appropriate alignment and decision-making – can help health systems stay one step ahead of bad actors. AI that is trained in adversarial tactics is a powerful new set of tools that can help protect health systems from optimized attacks by malevolent models.

Generative models such as large language models (LLMs) can help protect health systems by identifying and predicting phishing attacks or flagging harmful bots.

Finally, insider threats, such as the leaking of PHI or sensitive data into tools like ChatGPT, are another emerging risk that health systems must develop responses to.

Q. What cybersecurity risks are introduced by ChatGPT and other types of generative AI?

A. ChatGPT and future iterations of the current GPT-4 and other LLMs will become increasingly effective at writing novel code that could be used for nefarious purposes. These generative models also pose privacy risks, as I previously mentioned.

Social engineering is another concern. By producing detailed text or scripts, or by reproducing a familiar voice, LLMs could be used to impersonate individuals in attempts to exploit vulnerabilities.

I have a final thought. It’s my sincere belief as a medical doctor and informaticist that, with the appropriate safeguards in place, the positive potential for AI in healthcare far exceeds the potential negative.

As with any new technology there is a learning curve to identify and understand where vulnerabilities or risk may exist. And in a space as consequential as healthcare – where patients' wellbeing and safety is on the line – it's critical we move as quickly as possible to address those concerns.

I look forward to gathering in Boston with this HIMSS community, so committed to advancing healthcare technology innovation while protecting patient safety.

Anderson's session, "Artificial Intelligence: Cybersecurity's Friend or Foe?" is scheduled for 11 a.m. on Thursday, September 7, at the HIMSS 2023 Healthcare Cybersecurity Forum in Boston.

Follow Bill's HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.


Health system execs bullish on generative AI, but still lack strategy

September 24, 2023

The past few years have been tough for health system bottom lines, with the COVID-19 public health emergency, critical staffing and workforce shortages, inflation and other financial pressures combining to strain budgets to the breaking point. More than 50% of U.S. hospitals reported a negative margin to close out 2022, according to a new report from Bain & Company.

But C-suite leaders see reason for optimism in the years ahead, the survey shows, thanks to the cost saving potential of fast-evolving generative AI technologies.

WHY IT MATTERS
Health system execs are particularly excited about AI and automation's potential to streamline financial and operational processes, tackle administrative inefficiencies and reduce clinician burnout. They see big opportunities for improving workflows and clinical documentation, managing and analyzing data and more in the year ahead.

Further out, in the next two to five years, hospital leaders say they're planning more AI-powered initiatives around predictive analytics, decision support, guided treatment insights and more, according to the report.

The potential is especially attainable given that the cost to train AI and machine learning models has fallen dramatically, "down 1,000-fold since 2017," according to Bain, promising an "arsenal of new productivity-enhancing tools at a low investment."

But while the report shows 75% of C-suite leaders are excited that generative AI has "reached a turning point in its ability to reshape the industry," it also finds that just 6% of health systems polled have an established generative AI strategy.

Still, that number is expected to grow as more and more provider organizations get serious about harnessing the potential of generative AI and automation to help address some long-standing clinical, financial and operational challenges.

Among the top health system priorities for generative AI applications, according to the report:

  • Charge capture and reconciliation

  • Structuring and analysis of patient data

  • Workflow optimization and automation

  • Clinical decision support

  • Predictive analytics and risk stratification

  • Telehealth and remote patient monitoring

  • Call centers for administrative purposes

  • Diagnostics and treatment recommendations

  • Provider and patient workflows, including payer interactions

  • Suggestions for care coordination and health system navigation

  • Treatment and therapy recommendations for providers

  • Call center for clinical questions

  • Summary of clinical literature and research mining

  • Assistance for patient financial counseling and Q&A

  • Procurement and contract management

  • Referral support and routing for providers and patients

  • Drug discovery and clinical trial design

Still, Bain & Company researchers warn that there are significant hurdles ahead as health systems race to implement these fast-advancing technologies.

"Solutions to the greatest hurdles aren't yet keeping up with the rapid technology development," they say. "Resource and cost constraints, a lack of expertise, and regulatory and legal considerations are the largest barriers to implementing generative AI, according to executives.

"Even when organizations can overcome these hurdles, one major challenge remains: focus and prioritization," researchers add. "In many boardrooms, executives are debating overwhelming lists of potential generative AI investments, only to deem them incomplete or outdated given the dizzying pace of innovation. These protracted debates are a waste of precious organizational energy – and time."

THE LARGER TREND
As health systems work to develop "pragmatic" strategies for generative AI adoption, Bain suggests that hospital execs keep four guiding principles top-of-mind. They should, as researchers write in the report:

  1. Pilot low-risk applications with a narrow focus first. When gaining experience with currently available technology, companies are testing and learning their way to minimum viable products in low-risk, repeatable use cases. These quick wins are typically in areas where they already have the right data, can create tight guardrails, and see a strong potential return on investment.

  2. Decide to buy, partner, or build. CEOs will need to think about how to invest in different use cases based on availability of third-party technology and importance of the initiative.

  3. Funnel cost savings and experience into bigger bets. As the technology matures and the value becomes clear, companies that generate savings, accumulate experience, and build organizational buy-in today will be best positioned for the next wave of more sophisticated, transformative use cases.

  4. Remember AI isn't a strategy unto itself. To build a true competitive advantage, top CEOs and CFOs are selective and discerning, ensuring that every AI initiative reinforces and enables their overarching goals.

ON THE RECORD
"Providers and payers are looking for profit opportunities while also doubling down on employee morale, clinical care, and patient experience," said Eric Berger, a partner in Bain's Healthcare & Life Sciences practice, in a statement. "Many recognize the potential AI offers to boost productivity, yet they are acutely aware of the uncertainties around evolving technology.

"This uncertainty cuts both ways," he added. "While there is hype, there is also opportunity. Leading companies are taking this technology shift seriously and getting started with highly focused, low stakes use cases with some near-term ROI while building up the experience and confidence needed to invest in a more transformative vision."

Mike Miliard is executive editor of Healthcare IT News
Email the writer: mike.miliard@himssmedia.com

Healthcare IT News is a HIMSS publication.


Inside Job: How "Unfettered Access" Is Challenging Security Teams

September 24, 2023

Adam Zoller, CISO, Providence

When most people think about security breaches, the images often conjured involve hospitals being offline for days, or ransomware taking down large health systems. And although those events do certainly happen, fortunately they tend to be few and far between.

The more urgent threats, according to Nick Culbertson, Co-Founder and CEO of Protenus, are seemingly “low-risk incidents” such as storing ePHI on a laptop that “tend to build up over time and actually lead to bigger incidents.” Individuals who get away with “benign” actions are more likely to continue to push the envelope and do more nefarious things, he said during a recent discussion, which also featured Adam Zoller (CISO, Providence), Chuck Christian (VP of Technology and CTO, Franciscan Health), and Nicole Brown (Director, Privacy, Office of Compliance and Integrity, Ann & Robert H. Lurie Children’s Hospital of Chicago).

It’s enough to scare even (or perhaps, especially) the most seasoned IT and security leaders, particularly given that a minor cybersecurity violation is far more likely than a headline-grabbing ransomware attack. And as more organizations migrate to the cloud, it’s not going to get any easier to protect data.

“We’re creating an ever-expanding threat landscape,” said Christian. With solutions, platforms, and infrastructures now available as a service, the attack surface is continually growing. “And we, in some cases, are making it easier for people to make mistakes.”

The other differentiator is access — a concept that has evolved significantly since the days of paper charts, according to Brown. Because EHRs are compartmentalized, users may not realize that even if they’re simply looking at demographic information, they’re still accessing Protected Health Information (PHI). “We have to explain what access really means in the digital age,” she said, noting that the experience has been eye-opening.

Nicole Brown, Director, Privacy, Office of Compliance and Integrity, Ann & Robert H. Lurie Children’s Hospital

Zoller agreed, adding that attacks that originate from within an organization can cause the most damage because the attacker already has a foot in the door. “From a threat actor’s perspective, it’s much easier to take data as a trusted individual.” And it doesn’t have to be an employee; it can be a contractor, vendor, or anyone who touches the organization, noted Culbertson. “It’s not just about hackers. It’s all the insiders you have to be responsible for, because of the unfettered access they have throughout healthcare.”

That access, combined with factors like human error or what the panelists termed “willful ignorance,” can make risk mitigation seem impossible. However, with the right people, processes, and tools in place, organizations can make major strides. Below, the experts shared best practices based on their experiences.

Keys to Managing Insider Threats

  • Leaders in lockstep. According to Christian, having solid policies — and people who are willing to enforce them — is key. In Franciscan’s case, it’s his top security leaders. “We work together to do that. And we’re in lockstep when it comes to physical and virtual access to the systems.”
  • Good governance. For Zoller, having an “incredibly supportive executive team that takes security very seriously” has made a big difference. “We’ve set up governance structures within Providence to have conversations with individuals who accept risk around data security and cybersecurity for the entire system.” And it’s not just about cybersecurity; if an initiative poses risks in terms of data privacy or reputational damage, it becomes a discussion. “We have an open conversation, and the right individuals can make an informed decision as to whether it’s acceptable, rather than just coming from me.”
  • Empowered CISOs. Although it doesn’t always go over well when security leaders have to veto an idea, it’s important they are empowered to say no — for example, a request to set up a VPN between a third party and another country. “There’s no way that’s getting approved because we inherit the cybersecurity risk from those parties,” Zoller said.
  • Involving compliance. At Lurie Children’s Hospital, the research arm has embedded compliance officers on privacy and security committees who are able to answer questions and raise flags when needed. “We have a very close working relationship with them,” said Brown. “That has really helped us remain in compliance.”
  • Lean on data. Zoller believes the key is in adopting an approach that’s realistic and driven by data. “Everyone wants to trust that their employees are doing the right thing, but not many are actually looking at what they’re doing with their data or with their systems.” And while no leader wants to go looking for a monster, it’s critical to acknowledge that the monster does, in fact, exist. Doing so can help boost understanding of “your risk posture as an organization and the proactive measures you’ve put in place to protect against adverse events,” he said.
  • Cultivate relationships. When any measure is put into place that can hinder workflow — and subsequently, impact patient care — clinician pushback can be expected, according to Christian, noting that CIOs and other leaders are often perceived as being obstructive. “It’s a fine line,” he said. “The way I’ve addressed it is by forming relationships and making sure people understand that I’m not doing it just because I can. We’re protecting the organization; everybody needs to focus on that.”

PHI is “everywhere”

The challenge in doing that? PHI can be very difficult to locate, added Christian. “I don’t think any health system knows exactly where PHI lives.” What he does know is that “it’s everywhere,” including laptops, despite warnings from leaders not to save or store on any shared devices.

Chuck Christian, VP of Technology, Franciscan Health

According to Culbertson, “one of the things we often hear from CISOs and privacy officers is that it’s really difficult to protect the data if you don’t really know where all of it is.”

Finding it, however, is only half the battle — that’s where Protenus comes in. “We’re able to monitor access log layer events and determine whether there’s questionable activity in those logs that are indicative of a potential data breach or privacy violation,” he said.

Once PHI is identified, Protenus uses AI to help automate auditing capabilities and be able to predict and prevent incidents, Culbertson said.
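The kind of access-log analysis Culbertson describes can be sketched in a few lines (a hypothetical toy, not Protenus's actual product): tally each user's record-access volume from the audit log, then flag users whose volume is a statistical outlier relative to their peers.

```python
# Toy sketch of audit-log monitoring: flag users whose patient-record
# access volume sits far above the peer average (a z-score outlier).
from collections import Counter
from statistics import mean, stdev

# Synthetic audit log: one (user, patient_record_id) tuple per access.
normal_load = {"nurse_a": 12, "nurse_b": 9, "clerk_c": 11, "doc_d": 10,
               "nurse_e": 13, "clerk_f": 8, "doc_g": 12, "nurse_h": 10}
access_log = [(u, f"pt{i}") for u, n in normal_load.items()
              for i in range(n)]
access_log += [("intern_x", f"pt{i}") for i in range(250)]  # anomalous

def flag_outliers(log, z_threshold=2.0):
    # Count accesses per user, then flag anyone whose count exceeds
    # the mean by more than z_threshold standard deviations.
    counts = Counter(user for user, _ in log)
    mu, sigma = mean(counts.values()), stdev(counts.values())
    return sorted(u for u, c in counts.items()
                  if sigma and (c - mu) / sigma > z_threshold)

print(flag_outliers(access_log))
```

Production systems layer on far more signal (role, department, time of day, patient relationship), but the core idea is the same: establish a behavioral baseline, then surface deviations for human review.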

Targeted education

The auditing component has proven to be critical, particularly for organizations like Lurie Children’s that periodically audit and monitor access to ePHI. Doing so alerts leaders to practices that may not violate HIPAA standards, but are “questionable from a compliance perspective,” said Brown. “It also allows us to create more targeted education and helps inform some of the actions we take in response.” The ultimate goal is to be “in a more proactive state,” which she believes will be achieved eventually.

What’s important to note is that, like so many other challenges in healthcare, mitigating insider threats can be approached several different ways depending on the needs of a particular organization. And what works today may not be enough in a few years, noted Zoller. “Systems are changing. We’re moving apps to the cloud. We have new tools at our disposal that give us visibility that we never had in the past.”

Nick Culbertson, Co-Founder & CEO, Protenus

The key, he said, is to “look at it from a cybersecurity angle. What am I chartered to protect? What tools do I have to protect it? Do I have the right data sources and visibility in the right mechanisms to act if something happens? A lot of organizations struggle with this.” And while all leaders want to believe their employees are trustworthy, it’s important not to bury your head in the sand, he added. “You have to have mechanisms in place to control for situations where data is being misused and systems are being inappropriately accessed, exposing you to external threats. It’s about balancing risk versus reward.”

And of course, education is a critical part of that — and not just for new hires, noted Culbertson. In fact, the most effective training occurs right on the spot when someone is found to be acting questionably. “What we can do is identify those early warning signs or benign behaviors, reach out to them, and point out what they’re doing wrong,” he said. By intervening, not only can leaders correct the behavior of that individual; they can also prevent future incidents from happening.

Zoller agreed, urging colleagues to implement preventative controls and detection controls to help keep users on the right path. “Treat insider threats the same as you would external threats,” he said. “It all has to be part of your risk calculus.”

Finally, leaders need to remember that security, like anything else, is “never done,” noted Christian. “Never assume you have everything buttoned up. You have to stay at it, and you have to be diligent.”

To view the archive of this webinar — Strategies for Mitigating Insider Threat Risk (Sponsored by Protenus) — please click here.

Share
Read More

Top 12 things healthcare organizations want in IT vendors, per KLAS

September 24, 2023

While the majority of healthcare organizations are generally satisfied with their health IT vendor's customer service, one-third remain dissatisfied, KLAS Research reported.

Here are the top 12 factors influencing customer satisfaction, according to the Aug. 24 report that had 53,189 respondents:

1. Proactive ownership of client issues: 44 percent

2. Ability to achieve outcomes: 26 percent

3. Quality of the upgrade experience: 25 percent

4. Development and roadmap communication: 24 percent

    Guidance and recommendations: 24 percent

6. Communication around bugs and issues: 23 percent

    Knowledge and empowerment of staff: 23 percent

8. System tailored to needs: 20 percent

9. Access to actionable reporting and insights: 19 percent

10. Implementation and training: 16 percent

11. Regularity of touchpoints: 13 percent

12. Support of integration needs: 11 percent

Read More

Is artificial intelligence a cybersecurity ally or menace?

September 24, 2023

Artificial intelligence continues to push cybersecurity into an unprecedented era as it offers benefits and at the same time drawbacks by assisting both aggressors and protectors.

Cybercriminals are using AI to launch more sophisticated and unique attacks at a wider scale. And cybersecurity teams are using the same technology to protect their systems and data.

Dr. Brian Anderson is chief digital health physician at MITRE, a federally funded nonprofit research organization. He will be speaking at the HIMSS 2023 Healthcare Cybersecurity Forum in a panel session titled "Artificial Intelligence: Cybersecurity's Friend or Foe?" Other members of the panel include Eric Liederman of Kaiser Permanente, Benoit Desjardins of UPENN Medical Center and Michelle Ramim of Nova Southeastern University.

We interviewed Anderson to help unpack the implications of both offensive and defensive AI and examine new risks introduced by ChatGPT and other types of generative AI.

Q. How exactly does the presence of artificial intelligence bring up cybersecurity concerns?

A. There are several ways AI brings up substantive cybersecurity concerns. For example, nefarious AI tools can pose risks by enabling denial of service attacks, as well as brute force attacks on a particular target.

AI tools also can be used in "model poisoning," an attack where a program is used to corrupt a machine learning model to produce incorrect results by inserting malicious code.

Additionally, many of the available free AI tools – such as ChatGPT – can be tricked with prompt engineering approaches to write malicious code. Particularly in healthcare, there are concerns around protecting sensitive health data, such as protected health information.

Sharing PHI in prompts of these publicly available tools could lead to data privacy concerns. Many health systems are struggling with how to protect systems from allowing for this kind of data sharing/leakage.

Q. How can AI benefit hospitals and health systems when it comes to protection against bad actors?

A. AI has been helping cybersecurity experts identify threats for years now. Many AI tools are currently used to identify threats and malware, as well as detecting malicious code inserted into programs and models.

Using these tools – with a human cybersecurity expert always in the loop to ensure appropriate alignment and decision-making – can help health systems stay one step ahead of bad actors. AI that is trained in adversarial tactics is a powerful new set of tools that can help protect health systems from optimized attacks by malevolent models.

Generative models such as large language learning models (LLMs) can help protect health systems by identifying and predicting phishing attacks or flagging harmful bots.

Finally, mitigating insider threats like leaking of PHI or sensitive data (for example, for use on ChatGPT), is another example of some of the emerging risks that health systems must develop responses to.

Q. What cybersecurity risks are introduced by ChatGPT and other types of generative AI?

A. ChatGPT and future iterations of the current GPT-4 and other LLMs will become increasingly effective at writing novel code that could be used for nefarious purposes. These generative models also pose privacy risks, as I previously mentioned.

Social engineering is another concern. By producing detailed text or scripts, and/or the ability to reproduce a familiar voice, the potential exists for LLMs to impersonate individuals in attempts to exploit vulnerabilities.

I have a final thought. It’s my sincere belief as a medical doctor and informaticist that, with the appropriate safeguards in place, the positive potential for AI in healthcare far exceeds the potential negative.

As with any new technology there is a learning curve to identify and understand where vulnerabilities or risk may exist. And in a space as consequential as healthcare – where patients' wellbeing and safety is on the line – it's critical we move as quickly as possible to address those concerns.

I look forward to gathering in Boston with this HIMSS community, so committed to advancing healthcare technology innovation while protecting patient safety.

Anderson's session, "Artificial Intelligence: Cybersecurity's Friend or Foe?" is scheduled for 11 a.m. on Thursday, September 7, at the HIMSS 2023 Healthcare Cybersecurity Forum in Boston.

Follow Bill's HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.

Read More

Health system execs bullish on generative AI, but still lack strategy

September 24, 2023

The past few years have been tough for health system bottom lines, with the COVID-19 public health emergency, critical staffing and workforce shortages, inflation and other financial pressures combining to strain budgets to the breaking points. More than 50% of U.S. hospitals reported a negative margin to close out 2022, according to a new report from Bain & Company.

But C-suite leaders see reason for optimism in the years ahead, the survey shows, thanks to the cost saving potential of fast-evolving generative AI technologies.

WHY IT MATTERS
Health system execs are particularly excited about AI and automation's potential to streamline financial and operational processes, tackle administrative inefficiencies and reduce clinician burnout. They see big opportunities for improving workflows and clinical documentation, managing and analyzing data and more in the year ahead.

Further out, in the next two to five years, hospital leaders say they're planning more AI-powered initiatives around predictive analytics, decision support, guided treatment insights and more, according to the report.

The potential is especially attainable given that the cost to train AI and machine learning models has decreased exponentially, according to Bain: "Down 1,000-fold since 2017," it promises an "arsenal of new productivity-enhancing tools at a low investment."

But while the report shows 75% of C-suite leaders are excited that generative AI has "reached a turning point in its ability to reshape the industry," it also finds that just 6% of health systems polled have an established generative AI strategy.

Still, that number is expected to grow as more and more provider organizations get serious about harnessing the potential of generative AI and automation to help address some long-standing clinical, financial and operational challenges.

Among the top health system priorities for generative AI applications, according to the report:

  • Charge capture and reconciliation

  • Structuring and analysis of patient data

  • Workflow optimization and automation

  • Clinical decision support

  • Predictive analytics and risk stratification

  • Telehealth and remote patient monitoring

  • Call centers for administrative purposes

  • Diagnostics and treatment recommendations

  • Provider and patient workflows, including payer interactions

  • Suggestions for care coordination and health system navigation

  • Treatment and therapy recommendations for providers

  • Call center for clinical questions

  • Summary of clinical literature and research mining

  • Assistance for patient financial counseling and Q&A

  • Procurement and contract management

  • Referral support and routing for providers and patients

  • Drug discovery and clinical trial design

Still, Bain & Company researchers warn that there are significant hurdles ahead as health systems race to implement these fast-advancing technologies.

"Solutions to the greatest hurdles aren't yet keeping up with the rapid technology development," they say. "Resource and cost constraints, a lack of expertise, and regulatory and legal considerations are the largest barriers to implementing generative AI, according to executives.

"Even when organizations can overcome these hurdles, one major challenge remains: focus and prioritization," researchers add. "In many boardrooms, executives are debating overwhelming lists of potential generative AI investments, only to deem them incomplete or outdated given the dizzying pace of innovation. These protracted debates are a waste of precious organizational energy – and time."

THE LARGER TREND
As health systems work to develop "pragmatic" strategies for generative AI adoption, Bain suggests that hospital execs keep four guiding principles top-of-mind. They should, as researchers write in the report:

  1. Pilot low-risk applications with a narrow focus first. When gaining experience with currently available technology, companies are testing and learning their way to minimum viable products in low-risk, repeatable use cases. These quick wins are typically in areas where they already have the right data, can create tight guardrails, and see a strong potential return on investment.

  2. Decide to buy, partner, or build. CEOs will need to think about how to invest in different use cases based on availability of third-party technology and importance of the initiative.

  3. Funnel cost savings and experience into bigger bets. As the technology matures and the value becomes clear, companies that generate savings, accumulate experience, and build organizational buy-in today will be best positioned for the next wave of more sophisticated, transformative use cases.

  4. Remember AI isn't a strategy unto itself. To build a true competitive advantage, top CEOs and CFOs are selective and discerning, ensuring that every AI initiative reinforces and enables their overarching goals.

ON THE RECORD
"Providers and payers are looking for profit opportunities while also doubling down on employee morale, clinical care, and patient experience," said Eric Berger, a partner in Bain's Healthcare & Life Sciences practice, in a statement. "Many recognize the potential AI offers to boost productivity, yet they are acutely aware of the uncertainties around evolving technology.

"This uncertainty cuts both ways," he added. "While there is hype, there is also opportunity. Leading companies are taking this technology shift seriously and getting started with highly focused, low stakes use cases with some near-term ROI while building up the experience and confidence needed to invest in a more transformative vision."

Mike Miliard is executive editor of Healthcare IT News.
Email the writer: mike.miliard@himssmedia.com

Healthcare IT News is a HIMSS publication.


Inside Job: How "Unfettered Access" Is Challenging Security Teams

September 24, 2023

Adam Zoller, CISO, Providence

When most people think about security breaches, they often picture hospitals knocked offline for days or ransomware taking down large health systems. And although those events certainly do happen, fortunately they tend to be few and far between.

The more urgent threats, according to Nick Culbertson, Co-Founder and CEO of Protenus, are seemingly “low-risk incidents” such as storing ePHI on a laptop that “tend to build up over time and actually lead to bigger incidents.” Individuals who get away with “benign” actions are more likely to continue to push the envelope and do more nefarious things, he said during a recent discussion, which also featured Adam Zoller (CISO, Providence), Chuck Christian (VP of Technology and CTO, Franciscan Health), and Nicole Brown (Director, Privacy, Office of Compliance and Integrity, Ann & Robert H. Lurie Children’s Hospital of Chicago).

It’s enough to scare even (or perhaps, especially) the most seasoned IT and security leaders, particularly given that the likelihood of a minor cybersecurity violation is far greater than a headline-grabbing ransomware attack. And as more organizations migrate to the cloud, it’s not going to get any easier to protect data.

“We’re creating an ever-expanding threat landscape,” said Christian. With solutions, platforms, and infrastructures now available as a service, the attack surface is continually growing. “And we, in some cases, are making it easier for people to make mistakes.”

The other differentiator is access — a concept that has evolved significantly since the days of paper charts, according to Brown. Because EHRs are compartmentalized, users may not realize that even if they’re simply looking at demographic information, they’re still accessing Protected Health Information (PHI). “We have to explain what access really means in the digital age,” she said, noting that the experience has been eye-opening.

Nicole Brown, Director, Privacy, Office of Compliance and Integrity, Ann & Robert H. Lurie Children’s Hospital

Zoller agreed, adding that attacks that originate from within an organization can incur the most damage because the attacker already has a foot in the door. “From a threat actor’s perspective, it’s much easier to take data as a trusted individual.” And it doesn’t have to be an employee; it can be a contractor, vendor, or anyone who touches the organization, noted Culbertson. “It’s not just about hackers. It’s all the insiders you have to be responsible for, because of the unfettered access they have throughout healthcare.”

That access, combined with factors like human error or what the panelists termed “willful ignorance,” can make risk mitigation seem impossible. However, with the right people, processes, and tools in place, organizations can make major strides. Below, the experts shared best practices based on their experiences.

Keys to Managing Insider Threats

  • Leaders in lockstep. According to Christian, having solid policies — and people who are willing to enforce them — is key. In Franciscan’s case, it’s his top security leaders. “We work together to do that. And we’re in lockstep when it comes to physical and virtual access to the systems.”
  • Good governance. For Zoller, having an “incredibly supportive executive team that takes security very seriously” has made a big difference. “We’ve set up governance structures within Providence to have conversations with individuals who accept risk around data security and cybersecurity for the entire system.” And it’s not just about cybersecurity; if an initiative poses risks in terms of data privacy or reputational damage, it becomes a discussion. “We have an open conversation, and the right individuals can make an informed decision as to whether it’s acceptable, rather than just coming from me.”
  • Empowered CISOs. Although it doesn’t always go over well when security leaders have to veto an idea, it’s important they are empowered to say no — for example, a request to set up a VPN between a third party and another country. “There’s no way that’s getting approved because we inherit the cybersecurity risk from those parties,” Zoller said.
  • Involving compliance. At Lurie Children’s Hospital, the research arm has embedded compliance officers on privacy and security committees who are able to answer questions and raise flags when needed. “We have a very close working relationship with them,” said Brown. “That has really helped us remain in compliance.”
  • Lean on data. Zoller believes the key is in adopting an approach that’s realistic and driven by data. “Everyone wants to trust that their employees are doing the right thing, but not many are actually looking at what they’re doing with their data or with their systems.” And while no leader wants to go looking for a monster, it’s critical to acknowledge that the monster does, in fact, exist. Doing so can help boost understanding of “your risk posture as an organization and the proactive measures you’ve put in place to protect against adverse events,” he said.
  • Cultivate relationships. When any measure is put into place that can hinder workflow — and subsequently, impact patient care — clinician pushback can be expected, according to Christian, noting that CIOs and other leaders are often perceived as being obstructive. “It’s a fine line,” he said. “The way I’ve addressed it is by forming relationships and making sure people understand that I’m not doing it just because I can. We’re protecting the organization; everybody needs to focus on that.”

PHI is “everywhere”

The challenge in doing that? PHI can be very difficult to locate, added Christian. “I don’t think any health system knows exactly where PHI lives.” What he does know is that “it’s everywhere,” including laptops, despite warnings from leaders not to save or store it on shared devices.

Chuck Christian, VP of Technology, Franciscan Health

According to Culbertson, “one of the things we often hear from CISOs and privacy officers is that it’s really difficult to protect the data if you don’t really know where all of it is.”

Finding it, however, is only half the battle — that’s where Protenus comes in. “We’re able to monitor access-log events and determine whether there’s questionable activity in those logs indicative of a potential data breach or privacy violation,” he said.

Once PHI is identified, Protenus uses AI to automate auditing and help predict and prevent incidents, Culbertson said.
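To make the idea of access-log monitoring concrete, here is a minimal sketch of one common approach: flagging users whose daily count of distinct patient records accessed far exceeds their own historical baseline. This is an illustrative toy, not Protenus's actual method; the event format, field names, and z-score threshold are all assumptions.

```python
# Hypothetical access-log anomaly sketch: flag (user, day) pairs where the
# number of distinct patient records accessed is a statistical outlier
# relative to that user's own history. Thresholds and data shapes are
# illustrative only.
from collections import defaultdict
from statistics import mean, stdev

def flag_anomalous_access(log_events, z_threshold=3.0):
    """log_events: iterable of (user_id, date, patient_id) tuples.

    Returns a list of (user_id, date, record_count) tuples whose daily
    distinct-record count exceeds the user's baseline by z_threshold
    standard deviations.
    """
    # Count distinct patient records each user touched per day.
    daily = defaultdict(set)
    for user, day, patient in log_events:
        daily[(user, day)].add(patient)

    # Group the per-day counts by user to form each user's history.
    per_user = defaultdict(list)
    for (user, day), patients in daily.items():
        per_user[user].append((day, len(patients)))

    flagged = []
    for user, counts in per_user.items():
        values = [c for _, c in counts]
        if len(values) < 2:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(values), stdev(values)
        for day, count in counts:
            if sigma > 0 and (count - mu) / sigma > z_threshold:
                flagged.append((user, day, count))
    return flagged
```

A real system would of course layer on more signals (role, department, patient relationship, time of day), but even this simple per-user baseline captures the "early warning signs" idea the panelists describe: unusual behavior surfaces as a deviation from a user's own norm rather than from a global rule.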

Targeted education

The auditing component has proven to be critical, particularly for organizations like Lurie Children’s that periodically audit and monitor access to ePHI. Doing so alerts leaders to practices that may not violate HIPAA standards, but are “questionable from a compliance perspective,” said Brown. “It also allows us to create more targeted education and helps inform some of the actions we take in response.” The ultimate goal is to be “in a more proactive state,” which she believes will be achieved eventually.

What’s important to note is that, like so many other challenges in healthcare, mitigating insider threats can be approached several different ways depending on the needs of a particular organization. And what works today may not be enough in a few years, noted Zoller. “Systems are changing. We’re moving apps to the cloud. We have new tools at our disposal that give us visibility that we never had in the past.”

Nick Culbertson, Co-Founder & CEO, Protenus

The key, he said, is to “look at it from a cybersecurity angle. What am I chartered to protect? What tools do I have to protect it? Do I have the right data sources and visibility in the right mechanisms to act if something happens? A lot of organizations struggle with this.” And while all leaders want to believe their employees are trustworthy, it’s important not to bury your head in the sand, he added. “You have to have mechanisms in place to control for situations where data is being misused and systems are being inappropriately accessed, exposing you to external threats. It’s about balancing risk versus reward.”

And of course, education is a critical part of that — and not just for new hires, noted Culbertson. In fact, the most effective training occurs right on the spot when someone is found to be acting questionably. “What we can do is identify those early warning signs or benign behaviors, reach out to them, and point out what they’re doing wrong,” he said. By intervening, not only can leaders correct the behavior of that individual; they can also prevent future incidents from happening.

Zoller agreed, urging colleagues to implement preventative controls and detection controls to help keep users on the right path. “Treat insider threats the same as you would external threats,” he said. “It all has to be part of your risk calculus.”

Finally, leaders need to remember that security, like anything else, is “never done,” noted Christian. “Never assume you have everything buttoned up. You have to stay at it, and you have to be diligent.”

To view the archive of this webinar — Strategies for Mitigating Insider Threat Risk (Sponsored by Protenus) — please click here.
