The Federal Government is looking to regulate AI. Today we discuss what they are saying.
Today in Health IT, we're going to take a look at federal regulation of AI, particularly some quotes: what people are thinking and what they're talking about. My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, a set of channels and events dedicated to transforming healthcare, one connection at a time. We want to thank our show sponsors who are investing in developing the next generation of health leaders: SureTest, Artisight, Parlance, Certified Health, Notable, and ServiceNow. Check them out at thisweekhealth.com/today.

Hey, if you get a chance, share this podcast with a friend or colleague. Use it as a foundation for daily or weekly discussions on topics that are relevant to you and the industry. They can subscribe wherever you listen to podcasts.

All right, here's the story. This will probably be a short episode, if I think about it. September 20th, 2023: "Federal Regulation of AI: What the Likely Framers and Innovators Are Saying." Let's see. The publication is called AI in Healthcare: Innovation to Transform. It's a journalist who's trying to write articles at that intersection of AI and healthcare. Here we go.

"Last week brought the latest in an occasional series of conversations on AI between government leaders and big tech honchos. For months, the number one topic on the table has been regulation. Businesses are, counterintuitively, clamoring for it, and the political parties are, for the moment anyway, united over it. The huddle held behind closed doors on September 13th really started the week before, on September 8th, when Senators Richard Blumenthal, a Democrat from Connecticut, and Josh Hawley, a Republican from Missouri, laid out a five-bullet framework the two are sure to follow in drafting their subcommittee's anticipated AI bill." So let's see: license requirements, clear AI identification, accountability, transparency,
and strong protections for consumers and kids. "Such common-sense principles are a solid starting point," Blumenthal said in introducing the framework.

So let's go through those real quick. License requirements: not sure what that is. Clear AI identification: we know what that is. When AI is being used in any way, shape, or form, it needs to identify itself. So, for instance, if we're generating notes in the EHR and those notes are being used for the consumer, at the bottom of that message it will say, "This message was generated by AI and approved by your physician," essentially. So, clear AI identification. It's even more important with images and those kinds of things. Accountability: I think we know what that is. Transparency: again, same. And then strong protections for consumers and kids. It'll be interesting to see where these categories go. They're so broad, I can't even speculate on where it's going to go.

Here's what Hawley had to say: "Our American families, workers, and national security are on the line. We know what needs to be done. The only question is whether Congress has the willingness to see it through." I'm not sure I'm as bold as he is.

Here's a taste of what tech leaders and close observers have been saying about regulating AI in the days since the September 13th meeting. You have this from Mark Zuckerberg, founder and head of Facebook and Meta: "I agree that Congress should engage with AI to support innovation and safeguards. This is an emerging technology. There are important equities to balance here, and the government is ultimately responsible for that." Interesting.

All right, we go on. Jennifer Huddleston, technology policy researcher: "Calls for a new licensing regime or a new regulator follow the approach seen in Europe, where a heavy regulatory touch has produced undesirable economic consequences. A better approach is to build on the success of the light-touch innovation that has made the United States a world leader
in the internet era."

All right. There are going to be calls on both sides. And by the way, the European approach, just like GDPR, is very onerous, and I think it's going to squelch the innovation that needs to happen in this space. That being said, you can err on the side of protections or you can err on the side of innovation, and the United States has historically been good at balancing in the middle and then moving to one side or the other based on how things are emerging. So you have to keep close on this. This is going to go in a lot of different directions. There are a lot of different things to take into account here, and we don't have all the variables known yet. So to lay out a very heavy regulatory framework right now would, I think, be a mistake. However, as we move into it, I think you could see some things land more on the side of protecting the consumer and the individual, or you could see them allowing more room for innovation to happen. So again, we don't know what we don't know. Europe has sort of laid the framework for being very heavy on the regulatory side.

Let's keep going. The IBM chairman and CEO had this to say: "Regulate AI risk, not AI algorithms. Not all uses of AI carry the same level of risk. While some might seem harmless, others can have far-reaching consequences." Let's see. That's a great example of a very pragmatic approach. First of all, it understands that AI is not a single homogenous technology. There are tons of different directions AI is heading in, and different technologies being utilized, and they're being utilized in different ways. So again, look at the risk that is being generated by the use of AI, and then regulate around that.

Let's see. Elon Musk had this to say. I assume you know what he does.
"It's important for us to have a referee. I think there's a strong consensus that there should be some AI regulation. There is some chance above zero that AI will kill us all. I think it's low, but there is some chance." I mean, clearly there is a concern about that, and only the federal government, or actually world governments, are going to be able to protect against it. It's important to note that if AI can kill us, then someone will be developing it as a weapon moving forward, and I think we're already starting to see that with what's going on in Ukraine and Israel: AI is going to be utilized on the battlefield. So, you know, it is scary to think about how AI is going to be used. We don't trust it to drive cars yet, but we're going to trust it to fly airplanes and conduct strikes and those kinds of things. I don't know. Again, where do you regulate, and where do you not regulate? It will be interesting to see, and a very hard job, quite frankly.

Here we go, last one: Sarah Myers West, AI Now Institute managing director. "The combined net worth of the Senate room on September 13th was $550 billion, and it's hard to envision a room like that in any way meaningfully representing the interests of the broader public."
And that is the concern, right? It's the fairly affluent who have made their money predominantly through algorithms, through technology, and through investing in technology. You could even count the senators and congressmen in that group; they've made a ton of money through technology. Anyone who's invested in the last couple of decades has made a ton of money that way, plus, obviously, the tech entrepreneurs have made a ton of money in that direction. Do we trust them to set the framework for this? I think this is one of those things where we are going to want to keep a close eye on it. We are going to want to be able to make our concerns known with regard to healthcare.

Again, I'm leaning on the side of innovation more than protecting the interests of the consumer in this case. Now, I do think there needs to be transparency. If an AI is actually caring for me, I need to know that and understand that. Now, I was in a room where they were talking about essentially inventorying all the AI algorithms being used in a health system and making a requirement for those to be available for the public to query, so people could know how AI algorithms are being used in the health system. I think that is an example of an onerous and silly regulatory framework. By the way, it's not regulatory today; this was actually a health system going down that path, taking that initiative. My question is: as a consumer, am I ever going to query the algorithms of my local health system to ensure that they are using my data and caring for me in the right way? I wouldn't even know what I was looking at, quite frankly. You would have to be a tech expert and a healthcare expert in order to bring those two things together, and not just a healthcare expert, but an expert in the delivery of healthcare.
And that might even be specialty-specific, to understand the algorithm that's been written and the AI that's being utilized. So I'm not sure where they were going with that, and I'm not sure who it's being developed for. Again, I just think that's probably a bridge too far with regard to transparency.

But again, if I'm receiving a note, I want to know if the doctor has reviewed it. I want to know if a doctor has written it. I want to know if AI has written it. If startups start to build solutions that are essentially delivering care advice, any care advice, period, startup or otherwise, I want to know if I'm actually interacting with AI. I think the same thing's true of call centers, although a little less so; there's not as much risk if I'm dealing with AI and it's helping me set an appointment. In fact, I would hope that's happening. I would hope we're getting more efficient, and that we're utilizing all the tools available to us to get more efficient and make sure I get the right appointment.

So anyway, these are some of the things that are out there, some of the things being talked about today with regard to regulation. Where do you think it should go? I'm curious: do you have a point of view on how AI should be regulated moving forward? Are you slowing down to wait and see where the regulation goes, or are you full steam ahead and we'll figure it out as we go? It'll be interesting to see what the current temperature of health IT professionals is on this topic. I think it's an important one, and one we should keep an eye on.

All right, that's all for today. Don't forget to share this podcast with a friend or colleague; keep the conversation going. We want to thank our channel sponsors who are investing in our mission to develop the next generation of health leaders: SureTest, Artisight, Parlance, Certified Health, Notable, and ServiceNow.
Check them out at thisweekhealth.com/today. Thanks for listening. That's all for now.