AI Regulation is here to save us. Yeah?
Today in Health IT we're going to take a look at the European AI Act and what it means for AI development, at least in Europe, because I think it is a precursor to some of the things we might see in the U.S. and other countries like China. My name is Bill Russell. I'm a former CIO for a 16-hospital system
and the creator of This Week Health, a set of channels dedicated to keeping health IT staff current and engaged. We want to thank our show sponsors who are investing in developing the next generation of health leaders, SureTest and Artisight. Check them out at thisweekhealth.com/today. As you know, we've partnered with Alex's Lemonade Stand
to find cures for childhood cancer. We have a goal of raising $50,000, and we're now up close to $40,000. We want to thank you for your generosity to date. If you would like to join us, we'd love to have you: hit our website, click on the Alex's Lemonade Stand logo in the top banner,
and go ahead and give today. Thank you, and we appreciate your generosity. All right, the European AI Act. Let me start by saying that if you really want to get into this, I am not the person to do it. Lawyers would be; somebody who has researched this heavily would be. I am going through about, wow,
that's a lot of tabs, about eight stories, and reading different things, some of which were proposals. And I haven't actually read the regulation itself. There will be someone out there who reads the regulation, and that's probably who you should get more information from. But I do want to touch on this because it has some implications: it is the first such law
to be put out there, and it's the EU, so it's a significant body putting it out there. There are also AI regulation proposals from China and from the U.S., and I think it's informative to look at this regulation to see what they are trying to do.
Let me give you some of the unacceptable-risk applications that are banned by default and cannot be deployed in the European Union. They include AI systems using subliminal, manipulative, or deceptive techniques to distort behavior. Wow. Absolutely. And I would think there are other laws that cover this, but you know, let's be specific: AI systems that are manipulative, deceptive, and so forth.
It goes on: AI systems exploiting vulnerabilities of individuals or specific groups. Again, I would think there are already laws around this, but maybe we need specific AI laws. Biometric categorization systems based on sensitive attributes or characteristics; in other words, looking at people, maybe through computer vision and other things,
and determining what's going on in their lives. AI systems used for social scoring or evaluating trustworthiness. Again, interesting. AI systems used for risk assessments predicting criminal and administrative offenses, sort of Minority Report-esque. AI systems creating or expanding facial recognition databases through untargeted scraping. This has always been a hot topic in the EU,
and, you know, probably should be in the U.S. and other countries as well. Essentially, with the number of cameras we have out there, if we are scraping, if we are actually collecting faces and filling facial recognition databases with them, that's one of the things that is now blocked in the EU.
And I think the final thing here: AI systems inferring emotions in law enforcement, border management, the workplace, or education. Again, this is a little older; there's a CNBC article going back to May 15th, and I don't know if there were any changes to the proposed law, but those were the things that were outright prohibited. Let me go into some of the things
that might be more interesting for you. Several lawmakers have called for making the measures more expansive to ensure they cover ChatGPT. To that end, requirements have been imposed on foundation models, such as large language models and generative AI. Developers of foundation models will be required to apply safety checks, data governance measures, and risk mitigation
before making their models public. The law appears to call for transparency. This is something ChatGPT and OpenAI have openly pushed back on: they do not tell you how they've trained their models or what data has been used to train them. They have specifically avoided that, and the EU law is essentially saying, look,
you now have to; that has to be made available. It goes on: they will also be required to ensure that the training data used to inform their systems does not violate copyright law. Right. So if they are scraping images that are copyrighted in some way, shape, or form, and that is informing the model and making the model better,
then that has to be pulled out of the model. And I don't know what it means to pull data out of the model once the model is trained. Do you have to retrain the model? Again, I did see an article about Sam Altman, the CEO of OpenAI. You know, it's interesting, because he's called for more regulation,
and then he essentially says he wants to comply with the regulation but may not be able to, and may need to pull ChatGPT out of the EU. They may have to develop a different product that gets used in the EU.
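As an aside on the copyright piece, the most plausible reading of "pulling data out of the model" is filtering disputed items out of the training corpus and then retraining, since you generally can't surgically delete a single example from trained weights. Here's a minimal sketch of that filtering step; the record format and the flagged IDs are hypothetical, purely for illustration.

```python
# Hypothetical sketch: "removing" copyrighted data from a model usually
# means filtering the training corpus and retraining, not editing the
# trained weights directly. The record format and IDs below are made up.

copyright_flagged_ids = {"img_0042", "img_0913"}  # items a rights holder disputed

def filter_corpus(records):
    """Drop any training record whose id was flagged for copyright."""
    return [r for r in records if r["id"] not in copyright_flagged_ids]

corpus = [
    {"id": "img_0001", "data": "..."},
    {"id": "img_0042", "data": "..."},  # flagged -- will be removed
]

clean_corpus = filter_corpus(corpus)
# model = train(clean_corpus)  # full retraining is the expensive part
print(f"{len(corpus) - len(clean_corpus)} record(s) removed before retraining")
```

The filtering itself is cheap; the open question the host raises is the retraining, which for a large foundation model is enormously expensive.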
This is interesting, and I wonder how it will inform what goes on in the U.S. Now, granted, the things they're trying to block are dangerous uses of AI: subliminal techniques, manipulative or deceptive techniques. I mean, these are bad uses of AI. So as I read this, what does it mean for the U.S.? I think this could inform some of the things we put forward in the U.S., but I will say that I doubt a U.S. law will be this restrictive.
In the U.S., I think there's a belief that the genie's out of the bottle. AI has been demonstrated, and it is already being adopted by so many people. I think countries are looking at this saying, hey, this will solve a lot of problems. Now, granted, they understand it needs some regulation, some oversight, to ensure it doesn't go out of bounds. And some of the things we read in the European
AI Act actually make perfect sense to protect individuals from poor models and poor technology that is out there. On the flip side, I don't think any country wants to be left behind. This represents a potential solution to a lack of primary care doctors. It presents a potential solution to burnout and the cognitive overload of clinicians.
And I'm just focusing on healthcare right now. I mean, this presents opportunities in the area of cybersecurity. It presents opportunities in so many different areas that it is hard to ignore. And I think it's going to be hard for the U.S. to put these kinds of restrictions,
as onerous as what we're seeing from the EU, on these organizations and on these companies. And quite frankly, I don't think we'll see any of it from China, because I think China recognizes the power of AI to be a transformative force in a lot of different areas, and they're just going to go full steam ahead after AI to see how far they can take it.
And so, will this inform U.S. law? Yes. Do I think there are some things that will make the use of AI better in the United States if we have regulations around it? Yes.
The interesting question this argument brings up is: do you trust big tech, or do you trust the government, to regulate it? And it's interesting. I'm not sure I trust either. I'm not sure the government really understands the tech well enough to regulate it, and I'm not sure big tech can be trusted, because they are publicly traded companies whose drive toward a financial return could lead to,
let's just say, unethical uses of AI in ways that could harm society. So does it need regulation? Yes. Do I trust them to put the right regulation in place? Not necessarily. But this is that constant thing that goes on: a push back and forth between government regulation
and the innovators, until we find that happy medium where we can innovate within a realm that protects the citizens the technology is meant to help and support. So again, I share this because I think we're going to be hearing a lot about it over the next couple of months. I think you can also hear this as an excuse not to move forward with exploring AI. I think that's a mistake. Anytime we advocate for doing nothing, it's usually a mistake. We should be making progress in some way, shape, or form: writing a policy around AI, or exploring the different applications of AI
within healthcare, trying to put the boundaries in place, self-regulating, if you will. And that's what the policy is, right? What do we want in the policy? Do we need transparency in our AI models? If you need transparency in your AI models and you're currently using ChatGPT, you may want to evaluate what levels of transparency are accessible for which use cases.
Right. You have to write the policy. You have to understand what you're going to be doing with the technology and what data you're going to be giving it access to, because to a certain extent we're going to be training these models; healthcare is going to be training these models on healthcare. A lot of people are heading down this path right now with Azure GPT,
I think that's what they're calling it, where you essentially get a, let's call it a walled garden, a protected sandbox, if you will. You're training it on your data, and you can choose to train the larger model or keep it segmented to just your model. I think it's fascinating to look at, and I think this will be an interesting area we'll be talking about
in colleges and universities for years to come in terms of AI ethics.
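For what it's worth, the product he seems to be describing is Microsoft's Azure OpenAI Service. A minimal sketch of what that "walled garden" call looks like with the openai Python SDK, assuming a resource endpoint, deployment name, and API key from your own Azure subscription (all placeholders here):

```python
# Minimal sketch of calling a private Azure OpenAI deployment -- the
# "walled garden" pattern: requests go to your own Azure resource
# rather than a shared public endpoint. Endpoint, deployment name,
# and key below are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-hospital-resource.openai.azure.com",  # your resource
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt4-deployment",  # your private deployment, not a public model name
    messages=[
        {"role": "system", "content": "You summarize clinical policy documents."},
        {"role": "user", "content": "Summarize our draft AI usage policy."},
    ],
)
print(response.choices[0].message.content)
```

The design point is that "model" names your own deployment, and Microsoft's stated position is that data sent to it is not used to train the shared base models, which is what makes the segmentation the host describes possible.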
So there you have it. I think this is a conversation that is just starting, and we're going to be revisiting it, I think, an awful lot over the next three to five years.
So that's all for today. If you know someone who might benefit from our channel, please forward them a note. They can subscribe on our website, thisweekhealth.com, or wherever you listen to podcasts. We want to thank our channel sponsors who are investing in our mission to develop the next generation of health leaders, SureTest and Artisight, two great companies. Check them out at thisweekhealth.com/today.
Thanks for listening. That's all for now.