This Week Health

Don't forget to subscribe!

Today: Are you rethinking your cloud strategy yet? There are a lot of reasons we may want to.

Transcript

Today in Health IT, we are going to take a look at this story on cio.com: are you rethinking your cloud strategy yet, given what's gone on over the last couple of months? My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, a set of channels and events dedicated to transforming healthcare.

One connection at a time. Today's show is brought to you by Artisight: one platform, infinite possibilities for improving healthcare. Artisight's platform unlocks endless ways to relieve tension, reduce friction, and make clinicians' jobs easier, from telemedicine to virtual nursing and beyond. Explore the Artisight platform at thisweekhealth.com/artisight.

All right. Hey, this news story, and all the news stories we cover, can be found on our website at thisweekhealth.com/news. We are almost at $150,000 for Alex's Lemonade Stand. If you want to be a part of that, go ahead and hit our website, top right-hand column, click on the banner there, and you can be a part of that. We would love to have you do that. And finally, share this podcast with a friend or colleague. Use it as a foundation for daily or weekly discussions on the topics that are relevant to you and the industry, a form of mentoring.

They can subscribe wherever you listen to podcasts. All right, we're going to come over here to cio.com: CrowdStrike incident has CIOs rethinking their cloud strategies. So CIOs are looking at ways to avoid single points of failure and are re-evaluating their cloud strategies to prevent any blue screen of death incidents. The widespread disruption caused by the recent CrowdStrike software glitch, which led to a global outage of Windows systems, has sent shockwaves through the IT community. For CIOs, the event serves as a stark reminder of the inherent risks associated with over-reliance on a single vendor, particularly in the cloud. It's interesting that they talk about single vendors in the cloud, but they don't talk about Windows, a single vendor at the desktop. Anyway, regardless, I'll go on.

I will overlook that oversight. Let's see: the incident, which saw IT systems crashing and displaying the infamous blue screen of death, exposed the vulnerabilities of heavily cloud-dependent infrastructures.

While the issue is being resolved, it has highlighted the potential for catastrophic consequences when a critical security component fails.

This has CIOs questioning the resilience of their cloud environments and exploring alternative strategies. Again, I'm going to point out one more thing. By the way, I agree with the article. I agree with looking at the architecture of the cloud, at our cloud approach, at our cloud strategy. But the one thing I will again point out is that this was human error.

It wasn't an attack. It wasn't inherently cloud-based. If you remember, we were doing patches and fixes that were rolled out from centralized systems a long time ago. So there's nothing inherently cloud about the CrowdStrike outage. With that being said, do we need to look at our cloud strategies for resiliency and recovery? Absolutely.

Do we have to understand our cloud dependencies? Absolutely. That should have happened before you moved to the cloud. However, I will stop being a jerk on that kind of stuff and just say that architecture does matter. When we were moving to the cloud at St. Joe's, we had to step back and say, okay, what is our disaster profile? Our primary disaster profile was that of an earthquake, and in the case of an earthquake, we would be cut off from the internet.

So we knew we wanted to move to the cloud for all the benefits it provided for us. And we understood that the agility it was going to give us, the access to tools that weren't available before, and potentially some economies of scale on access to systems were worth the risk. But we had to identify that risk and mitigate that risk. The first risk we identified was this: if we moved our EHR to the cloud, how were we going to operate in the case of being cut off from the cloud?

And that's the question you have to ask: in the case that you are cut off from the internet completely, the network is out and down, what are you going to do? And so for us, we looked at the architecture and said we have to have access to certain information in order to continue to provide care.

And in the case of an earthquake, we would be called on to continue to supply care in that community. So there were two things we had to consider. One was how to have the information available when a patient presented, and a way to pull that up. So we needed that to be local. No matter what happened, we needed that to be local.

And to be honest with you, even when I first came in and we had not moved to the cloud, we still had everything at a centralized data center. And so while the connections to that data center were easier to restore, we had already moved outside of the four walls of the individual hospital for that data.

So in the event that there was a cut in Orange County, we still had challenges in that that system would be cut off from the EHR, because the EHRs were no longer being run in the hospital. Now, when we think about a disaster, we're thinking about being cut off for longer periods of time. Business continuity is for that two-hour outage, one-hour outage, that kind of stuff. But resiliency and recovery and those kinds of things?

That's really the purview of our disaster recovery plans: being out for a week, two weeks, three weeks. I don't know how long it's going to take after a major earthquake in Southern California to get reconnected to the internet. Therefore I need to have that data. So we came up with a way to trickle that data back into a repository that resided in each one of the hospitals. That became the fail-safe in the event that that building got cut off from the rest of the world, and there was a way for the various systems to get rerouted and reconnected to that repository in order to provide a static look at the data.

Okay, it wasn't going to be dynamic. We weren't rebuilding the EHR in each one of the hospitals; that is not feasible. However, we were able to at least provide a snapshot, a point-in-time snapshot of the data, at any given time. Now, it also didn't have to be perfect. It didn't have to be up to date as of yesterday. That would be great if we could do that.
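To make that pattern a bit more concrete, here is a minimal sketch of the trickle-to-local-snapshot idea, assuming a hypothetical central EHR change feed and a simple local SQLite store. The endpoint, record shape, and sync interval are illustrative placeholders, not the actual St. Joe's implementation.

```python
# Hypothetical sketch: trickle recent EHR records into a local, read-only snapshot
# store at each hospital. The central endpoint and record shape are illustrative,
# not any specific vendor's API.
import json
import sqlite3
import time
import urllib.request

CENTRAL_EHR_URL = "https://ehr.example.org/api/records/changed-since"  # hypothetical
LOCAL_SNAPSHOT_DB = "local_snapshot.db"
SYNC_INTERVAL_SECONDS = 15 * 60  # "fairly current" -- every 15 minutes, not real time


def init_local_store(path: str) -> sqlite3.Connection:
    """Create the local fail-safe store if it does not exist."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS patient_snapshot ("
        " patient_id TEXT PRIMARY KEY,"
        " record_json TEXT NOT NULL,"
        " updated_at REAL NOT NULL)"
    )
    return conn


def fetch_changed_records(since_ts: float) -> list[dict]:
    """Pull records changed since the last sync from the central (cloud) EHR."""
    url = f"{CENTRAL_EHR_URL}?since={since_ts}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)


def apply_snapshot(conn: sqlite3.Connection, records: list[dict]) -> None:
    """Upsert the latest point-in-time view of each patient locally."""
    now = time.time()
    conn.executemany(
        "INSERT INTO patient_snapshot (patient_id, record_json, updated_at)"
        " VALUES (?, ?, ?)"
        " ON CONFLICT(patient_id) DO UPDATE SET"
        " record_json = excluded.record_json, updated_at = excluded.updated_at",
        [(r["patient_id"], json.dumps(r), now) for r in records],
    )
    conn.commit()


def run_sync_loop() -> None:
    conn = init_local_store(LOCAL_SNAPSHOT_DB)
    last_sync = 0.0
    while True:
        try:
            apply_snapshot(conn, fetch_changed_records(last_sync))
            last_sync = time.time()
        except OSError:
            # If the internet link is down, keep serving the last good snapshot;
            # that local copy is the whole point of the design.
            pass
        time.sleep(SYNC_INTERVAL_SECONDS)


if __name__ == "__main__":
    run_sync_loop()
```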

Still, it had to be fairly current, and that was one of the things we had to think through. We had to think through imaging as well. How much in the way of images do we store locally? Which images do you store locally? Where will people present in the case of an emergency?

You don't know. You really don't know the answers to those questions. So you have to really think through the dependencies and the things that are going on within your health system in order to rebuild those things. I'm going to go on into the article: reevaluating cloud dependencies.

When an issue of such magnitude happens and causes such a big disruption, it is important and necessary to revisit your existing beliefs, decisions, and trade-offs that went into arriving at the current architecture, said the CIO of Dish TV, one of India's largest cable TV providers. The outcome of the review may still be the same decision, but it is necessary to review, Gupta said, adding that Dish TV is already reevaluating its cloud strategy in a phased manner after the CrowdStrike incident. The CIO for financial services firm Shri Financial suggested a strategic shift: organizations and CISOs must review their cloud strategies, and the automatic updating of patches should be discouraged.

All right. So again, it's really interesting. Automated updating of patches absolutely should be discouraged, especially in a healthcare environment. We should have a mechanism for rolling them out. Now, there's another aspect of patching systems, the systems that don't reside at your health system, and it also creates a problem for healthcare.

And that is these cloud providers: they operate independently from what you're doing in healthcare, and they will patch their systems, they will update their systems, and depending on the type of system, that may or may not impact a clinical workflow. Now, most of our clinical cloud systems are being run by people who understand healthcare and understand the sensitive nature of making those kinds of changes. But you still do have technology vendors who make changes, and all of a sudden things aren't working, things don't function. So updating the cloud environment with patches is one thing to look at, and then updates down into the health system.

That's another thing. Now, we looked at that a decade ago and essentially said, no, we will do our own testing onsite before we roll these things out. Now, CrowdStrike is a different deal. Clearly they're rolling those things out directly, and I understand why we do that: because you have to respond to attacks very quickly.

And this is one of the ways we respond to attacks very quickly. But I would be looking at putting a barrier between CrowdStrike and their ability to roll out directly to those agents. That's one thing I'd be looking at.
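For what that barrier could look like in practice, here is a minimal sketch of a staged, ring-based rollout gate for security content updates, assuming a hypothetical push mechanism and monitoring check. The ring names, hosts, and functions are illustrative placeholders, not CrowdStrike's actual update API.

```python
# Hypothetical sketch of a staged rollout "barrier" between a security vendor's
# content updates and your endpoints: soak each update in a canary ring first,
# check for crash loops or boot failures, then promote to the next ring.
import time
from dataclasses import dataclass


@dataclass
class Ring:
    name: str
    hosts: list[str]
    soak_minutes: int  # how long to watch this ring before promoting the update


RINGS = [
    Ring("canary-it-lab", ["lab-01", "lab-02"], soak_minutes=60),
    Ring("non-clinical", ["hr-wkstn-01", "finance-wkstn-01"], soak_minutes=240),
    Ring("clinical", ["ed-wkstn-01", "icu-wkstn-01"], soak_minutes=0),
]


def push_update(host: str, update_id: str) -> None:
    """Placeholder: tell the endpoint agent to apply a specific content update."""
    print(f"pushing {update_id} to {host}")


def ring_is_healthy(ring: Ring) -> bool:
    """Placeholder: query monitoring for crash loops / blue screens in this ring."""
    return True


def staged_rollout(update_id: str) -> None:
    """Roll an update through the rings, halting if any ring shows failures."""
    for ring in RINGS:
        for host in ring.hosts:
            push_update(host, update_id)
        time.sleep(ring.soak_minutes * 60)  # soak period before promotion
        if not ring_is_healthy(ring):
            print(f"halting rollout of {update_id}: failures in ring {ring.name}")
            return
    print(f"{update_id} fully rolled out")


if __name__ == "__main__":
    staged_rollout("content-update-2024-07-19")
```

The trade-off, of course, is speed: every hour an update soaks in a canary ring is an hour your broader fleet waits for a protection the vendor pushed for a reason.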

The other thing I'd be looking at is that this did not, in fact, affect Macs. It did not affect Unix-based or Linux-based machines at all, and it's because of the architecture of those machines. And evidently there is technology that will allow Windows to function like that. I also think we need to look at our reliance on Windows in healthcare and just understand that, as an endpoint device, it has not been the most reliable device.

It has not been the most reliable operating system. Yes, a lot of our stuff runs on it, but does it have to? And are we even asking that question, or are we just sort of on that treadmill of, well, it's all Windows devices, all of our stuff runs on Windows, therefore we're going to do this? It has traditionally not been the strongest.

Anyway, hey, this is a good article to read. It has a lot of questions, not a lot of answers, a lot of questions. It's on cio.com: CrowdStrike incident has CIOs rethinking their cloud strategies. And it's always good to ask the questions, and always good to have answers for the questions, because as a CIO walking down the hall, you're going to get asked, hey, did you see this coming?

Did you not? Blah, blah, blah, fill in the blank. Asking the questions ahead of time gives you the ability to either respond or at least have the answer to that question when the time comes. Just wanted to throw that out there today, as we recover from this, that there are still a lot of conversations to have and a lot of questions to answer. That's all for today. Don't forget, share this podcast with a friend or colleague; use it as a foundation for mentoring.

We want to thank our channel sponsors, but specifically we want to thank Artisight for investing in our mission to develop the next generation of health leaders. You can check them out at thisweekhealth.com/artisight. Thanks for listening. That's all for now.
