This Week Health 5 Years

December 1: Today on the Community channel, it's an Interview in Action live from HLTH '22 with Suraj Kapa, MD, the CMO & SVP of Healthcare at TripleBlind. As soon as you move data out of your four walls, you lose governance over how, when, why, and for what purpose that data is being used. The core thesis at TripleBlind is that privacy-enforced data and algorithm interactions will unlock the tremendous value currently trapped in private data stores and proprietary algorithms. They move the world from "don't be evil" to "can't be evil" by enabling everyone to freely collaborate around their most sensitive data and algorithms without compromising their privacy.

  • Subscribe: https://www.thisweekhealth.com/subscribe/
  • Twitter: https://twitter.com/thisweekhealth
  • Linkedin: https://www.linkedin.com/company/ThisWeekHealth
Transcript

This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.

Welcome to another Interview in Action, live from the 2022 HLTH conference. Tell us a little bit about yourself and about TripleBlind.

Absolutely. So my name is Suraj Kapa. I'm the Chief Medical Officer and SVP of Healthcare for TripleBlind. Effectively, what TripleBlind is, is a platform to allow for data to be collaborated on in a way that's fully privacy preserving, that allows for the data to be automatically de-identified and anonymized across global jurisdictions, and that allows for governance controls by the data owner, so that data collaborators use that data in a way that is approved by those data owners. So you can think of it kind of like a zero-trust, privacy-preserving data collaboration tool.

So the problem that you're solving is for research, or are there other applications as well?

So it's actually for a broad array of applications and principles. I mean, I'm a physician by background, and I've done a lot of research in my life, and one of the biggest problems we always run into is the fact that the more data we have access to, from more diverse groups, the better our algorithms, the better our statistical understanding of how a population is doing, and the better even the future digital tools that are gonna be created will be.

The problem is health data is very sensitive. You have all of the information about individuals that you need to strip out of that data. But even if you do that, when you think about bringing all this data together, you can still potentially identify individuals through their attributes or features. So how do you ultimately prevent those risks through technology? That's what we're trying to solve.

So this, it doesn't happen often, but they talk about re-identifying from de-identified data. How much of a risk is that?

It's actually a bigger and bigger risk as tools are evolving more and more on the attacker side, so to speak. So let's think of a couple of examples.

So, an individual wants to train a new technology or new tool to predict the outcome of head and neck cancer, and you're doing it off of DICOM CT data. The problem is you send these CTs, even without name, date of birth, or anything else, to a data user. But you know what? Modern tools can reconstruct the face from the CT scan.

It's pretty easy to do a reverse image search on Google and figure out who that person is. Second example: social determinants of health are huge. Understanding how income level, society, and environment impact health outcomes is becoming more and more obviously important in terms of getting to better health outcomes.

So say you want to ask, how do chemotherapy outcomes for pancreatic cancer correlate with income level? Now you take some income-level information, take some health information, and you find the billionaire who was treated for pancreatic cancer a few years ago, and we can all kind of know who that is, right?
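The re-identification risk described in these two examples can be sketched as a simple linkage attack. The rows, ZIP codes, and attributes below are entirely made up for illustration; the point is only that quasi-identifiers, with no name or date of birth anywhere, can single out one record:

```python
# Hypothetical illustration of a linkage (re-identification) attack:
# a "de-identified" clinical table still carries quasi-identifiers
# (ZIP code, income bracket, diagnosis) that can be joined against
# publicly known facts to single out one individual.

deidentified_rows = [
    {"zip": "19104", "income": "high",      "diagnosis": "pancreatic cancer"},
    {"zip": "19104", "income": "middle",    "diagnosis": "hypertension"},
    {"zip": "90210", "income": "very_high", "diagnosis": "pancreatic cancer"},
    {"zip": "90210", "income": "middle",    "diagnosis": "asthma"},
]

# Public knowledge: a well-known billionaire in ZIP 90210 was treated
# for pancreatic cancer. No name or date of birth is needed.
public_clues = {"zip": "90210", "income": "very_high",
                "diagnosis": "pancreatic cancer"}

matches = [r for r in deidentified_rows
           if all(r[k] == v for k, v in public_clues.items())]

# A unique match means the record is effectively re-identified.
print(len(matches))  # 1 -> that individual is singled out
```

With real data the attacker's side information is richer (voter rolls, news stories, social media), which is why stripping direct identifiers alone is not enough.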

It's interesting. So as I'm looking at more and more of these platforms, these health systems are coming together. They're forming, I don't know, consortiums, if you will, and they're bringing this data together. That's where you guys come into play, isn't it? So I'm a CIO and they're saying, hey, share your data, move it over here.

But anytime I'm moving it outside of the four walls, I have to be very concerned. Is that pretty much where you guys sit?

Exactly, because we have to remember, if we think about, for example, all of the data breaches and other issues that have occurred over the last decade, especially as this digital enablement has become more and more of a focus in healthcare: every single time you send some of your data, even if you strip the name and date of birth and all that stuff, there's a risk of that data being breached. That's number one.

You're dependent on that other company's IT structures. Number two, you lose control over how that data's used. You as a data owner might have an interest in doing your own research with that data, developing your own algorithms, partnering with more than one organization in order to scalably offer benefit in digital health.

But as soon as you move that data out of your four walls, you lose that kind of governance over how, when, why, and for what purpose that data's being used.

I mean, why is your solution better? Because I hear everybody say, hey, we're de-identifying the data, we're good to go. What's your secret sauce, is essentially what I'm asking.

Okay, so the way we do it is this: we are a pure software approach that sits behind the firewall of the organizations that are collaborating. We have a patented one-way encryption that basically transforms the data into what we call computational material, which can never be reconstructed. All of the analytic operations happen on that computational material.

So what happens is the only identifiable data always sits behind the firewall. The computations are happening on this encrypted computational material through a mathematical process that is quantum safe and can never be reconstructed. So we're not just de-identifying, which unfortunately is also an arduous process through traditional means.

Okay? But we're actually anonymizing, because the data cannot be reconstructed: not just in terms of name, date of birth, et cetera, but any other piece of information, such as that face-image example that I explained earlier.
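TripleBlind's exact transformation is patented and not described here, but the general idea of computing on data that is never revealed in the clear can be sketched with additive secret sharing, one standard building block of privacy-preserving computation. Everything below is a generic illustration under that assumption, not the company's actual method:

```python
import secrets

# Generic sketch of additive secret sharing (not TripleBlind's patented
# method): each party splits its value into random shares; any single
# share is uniformly random and useless on its own, yet the shares still
# support exact aggregate arithmetic.

P = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value, n_parties=3):
    """Split `value` into n additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Two hospitals each hold a sensitive count (e.g. cancer cases).
a_shares = share(120)
b_shares = share(85)

# Each compute node adds the shares it holds; no node ever sees 120 or 85.
summed = [(a + b) % P for a, b in zip(a_shares, b_shares)]

# Only the recombined result is revealed: the total, not the inputs.
total = sum(summed) % P
print(total)  # 205
```

The design point mirrors what the guest describes: raw values never leave their owner, yet the collaboration still produces an exact answer.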

So as a researcher, I never really get next to the data. I mean, I get next to it, but, so I put my algorithm in, I do the computational aspect of it, and then I get a result back. I can't really look at the raw data itself.

Exactly. We do have ways that a researcher can semantically understand what is represented in the data. We call it mock data. That allows a user to say, oh, there's information about gender.

This is how they represent it. There's information about what cancer type they have. However, the thing is, I've done a lot of research in my career, I've published probably 200, 250 papers, and once I've actually gotten the data into spreadsheets, et cetera, I never wanna look at a cell again.

I just want to know what the average age is across this column. I want the output once I have the data represented in my spreadsheets. So that's what we facilitate the analyzer, the data user, to do. We allow them to gain that insight without having to actually touch the real data.
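The mock-data workflow described above can be sketched as follows. The schema, function names, and numbers here are illustrative assumptions, not TripleBlind's API: the researcher develops a query against schema-shaped fake rows, then the same query runs behind the data owner's firewall and only the aggregate comes back:

```python
import random
import statistics

# Illustrative sketch of the "mock data" workflow: develop against fake
# rows that match the real schema, then run the identical aggregate
# query on the real data and receive only the result, never a raw cell.

SCHEMA = {
    "age": ("int", 18, 95),
    "gender": ("cat", ["F", "M"]),
    "cancer_type": ("cat", ["head_neck", "pancreatic", "none"]),
}

def make_mock_rows(schema, n, seed=0):
    """Generate fake rows that match the schema but contain no real data."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        row = {}
        for col, spec in schema.items():
            if spec[0] == "int":
                row[col] = rng.randint(spec[1], spec[2])
            else:  # categorical
                row[col] = rng.choice(spec[1])
        rows.append(row)
    return rows

def mean_age(rows):
    # The researcher's query: an aggregate over a column, not the cells.
    return statistics.mean(r["age"] for r in rows)

# Develop and debug the query against mock data...
mock_rows = make_mock_rows(SCHEMA, 100)
mean_age(mock_rows)

# ...then the same query runs on the real (here simulated) data behind
# the firewall, and only the single number is returned.
real_rows = [{"age": a, "gender": "F", "cancer_type": "none"}
             for a in (34, 52, 61, 45, 58)]
print(mean_age(real_rows))  # 50
```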

So is the person you're talking to at the health system the privacy officer? Or is it legal and compliance, or a data officer, or the CIO? I mean, who are you talking to? It sounds like all of them.

So in fact, we touch the needs of all of those groups. Even though we might come in for one specific vertical, we actually address the needs of all of them. We address the needs of the chief digital officer, chief data officer, or CIO, who has an interest in how do I create opportunities to allow for more seamless collaboration with external entities, other hospitals, research,

Pharma, et cetera, in order to enable this broad access to my data. The chief privacy officer, who's arguing, wait a second, how do I ensure that I'm not violating the privacy of our patients? And the legal counsel and compliance folks, because they're saying, well, we have all these policies for compliance and for how the data's de-identified.

But when you have a broad, automated solution to solve that issue, it clears all the logistical, legal, and compliance hurdles in a more seamless fashion. It allays the privacy concerns while enabling the processes the CDO and CIO are expected to allow their institution to be involved in.

All right. We'll get back to our show in just a minute. We have a webinar coming up on December 7th, and I'm looking forward to it. It is on how to modernize the data platform within healthcare, the modern data platform within healthcare, and I'm really looking forward to the conversation. We just recorded five pre-episodes for that, and they're gonna air on Tuesdays and Thursdays leading up to the webinar. We have great conversations about the different aspects and use cases around the modern data platform, how agility becomes so key, data quality, and all those things. So, great conversation. Looking forward to that: Wednesday, December 7th at one o'clock. Love to have you join us. We're gonna have health system leaders from MemorialCare and others, and CDW is going to have some of their experts on the show as well. So check that out. You can go to our website, thisweekhealth.com, top right-hand corner, and you'll see the upcoming webinars. Love to have you be a part of it. If you have a question coming into it, one of the things we do is collect questions in the signup form, because we want to make sure we incorporate them into the discussion. So hope to see you there. Now, back to the show.

Are you able to talk about use cases and wins at this point?

Absolutely. So for where we find the key use cases, I'll give some examples and show how they might correlate. One of our main investors is Mayo Clinic. Why would they invest in us? The reason is Mayo Clinic actually made a huge pivot toward its idea of the Mayo platform, an externally facing opportunity to enable data to grow and scale digital health.

So we're exploring how systems like this can enable not just this automatic de-identification, but actually the ability to enable data in a federated approach, to allow consumers and digital health companies, like we see all around us at HLTH, to validate their assets, to validate their algorithms against the data.

So if we think about it, the clear-cut use cases are everything from knowing that what everybody says works actually works, in a faster fashion; allowing broader access to more diverse data loads to develop algorithms that are actually representative of populations; and ultimately doing it in a way that creates assurances over everything we talked about: privacy, security, et cetera.

That's really interesting. So a digital health startup comes to Mayo and says, we've got a new algorithm that does fill in the blank. And essentially you give them a platform where they can validate that without ever exposing the Mayo data.

Yeah, and that's a Mayo example, which we're exploring. However, it's actually much more broad-based, because if you think about it: great, now Mayo wants to bring in this new tool, and they know it's gonna work on their population. But say I live in Philadelphia now. Temple University, a very different population, wants to validate the same tool.

The tool was never trained on their population. They don't want to deploy it and run into errors, right? Inaccuracy that can actually negatively impact the population. So when you think about this idea of being able to more rapidly and more directly validate on your population, so you know it's gonna work the way they say it's gonna work, yeah, we avoid all the problems people worry about in AI: bias, inappropriate training populations, et cetera.
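The federated validation pattern described here can be sketched in a few lines. The model, the site names, and the records below are all made up for illustration (this is not a real vendor API): the vendor's algorithm travels to each site, runs on local labeled data behind the firewall, and only a summary metric leaves:

```python
# Illustrative sketch of federated validation: the vendor's model is
# evaluated inside each hospital's firewall, and only an aggregate
# accuracy, never a patient record, is returned.

def vendor_model(patient):
    # Toy classifier the vendor wants validated: flags high risk
    # whenever age exceeds 60.
    return "high_risk" if patient["age"] > 60 else "low_risk"

def validate_on_site(model, local_records):
    """Runs inside the hospital; returns only an aggregate accuracy."""
    correct = sum(model(r) == r["label"] for r in local_records)
    return correct / len(local_records)

# Two sites with different local populations; the data is never pooled.
mayo_records = [{"age": 70, "label": "high_risk"},
                {"age": 55, "label": "low_risk"},
                {"age": 65, "label": "high_risk"},
                {"age": 40, "label": "low_risk"}]
temple_records = [{"age": 45, "label": "high_risk"},   # younger high-risk
                  {"age": 50, "label": "high_risk"},   # patients the model
                  {"age": 62, "label": "high_risk"},   # never trained on
                  {"age": 30, "label": "low_risk"}]

print(validate_on_site(vendor_model, mayo_records))    # 1.0
print(validate_on_site(vendor_model, temple_records))  # 0.5
```

The contrived numbers make the guest's point: the same tool that looks perfect on one population can miss half the cases on another, and each site can discover that locally without sharing a single record.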

It was interesting. We had a conversation earlier this year with Dr. Michael Pfeffer from Stanford, and one of the things he said about these national models for data is, I think we're gonna have to throw 'em all out.

He goes, my data set from eight o'clock at night to seven o'clock in the morning, same population, is completely different than my data set from eight o'clock in the morning to seven o'clock at night at the same hospital. So, he goes, the data has to be local; it has to be validated on local models, local data, local populations. And even then, you have to understand the nuances of the data that you're looking at.
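That day-versus-night point is just distribution shift within one hospital, and a toy check makes it concrete. The numbers below are invented for illustration only:

```python
import statistics

# Toy illustration (made-up numbers) of the point above: the "same"
# hospital can yield two very different datasets depending on the time
# of day, so a model validated on one slice may not transfer to another.

day_ages = [35, 42, 50, 38, 47, 55]     # daytime clinic population
night_ages = [68, 72, 66, 80, 76, 70]   # overnight ED population

day_mean = statistics.mean(day_ages)
night_mean = statistics.mean(night_ages)

# A large gap between slices is a signal to validate locally, per slice,
# rather than trusting one national or whole-hospital model.
needs_local_validation = abs(day_mean - night_mean) > 10

print(day_mean, night_mean, needs_local_validation)  # 44.5 72 True
```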

It's really fascinating. I love this, especially since we've had a couple of conversations at the health conference about the foundation of trust. We've had a fair number of breaches and whatnot, and it's so important to establish that foundation of trust. I also worry about these large amalgamations of healthcare data: are we going to be able to protect them? Cuz my data's in there. Exactly. Yeah. So hey, I wanna thank you for your time. Fantastic conversation.

It was wonderful meeting you.

Another great interview. I wanna thank everybody who spent time with us at the conferences. I love hearing from people on the front lines, and it is phenomenal that they have taken the time to share their wisdom and experience with the community, which is greatly appreciated. We also want to thank our channel sponsors one more time, who invest in our mission to develop the next generation of health leaders. They are Olive, Rubrik, Trellix, Medigate, and F5. Thanks for listening. That's all for now.


© Copyright 2023 Health Lyrics All rights reserved