Drex examines the alarming rise of intimate deepfakes targeting primarily women and children, with 18 states currently offering no legal protection against these digital sex crimes; various state legislative efforts, including Montana's focus on combating political deepfakes, particularly within 60 days of elections; and OpenAI's first investment in cybersecurity through a $43 million funding round for Adaptive Security, a company specializing in training organizations to recognize deepfake attacks and phishing threats.
Remember, Stay a Little Paranoid
Donate: Alex’s Lemonade Stand Foundation for Childhood Cancer
This transcription is provided by artificial intelligence. We believe in technology but understand that even the smartest robots can sometimes get speech recognition wrong.
Hey everyone. I'm Drex, and this is the Two Minute Drill, where I cover three hot security stories twice a week, all part of the cyber and risk community here at the 229 Project. I try to keep this podcast mostly plain English and mostly non-technical, so it's really easy for everyone in the organization to consume, and so that everyone can do their part to keep patients, families, and the whole health system safe.
Today's episode is brought to you by Google. Healthcare systems are lowering cost and boosting endpoint security with Chrome OS devices paired with Chrome Enterprise, a secure browser that's trusted by billions of users. So now there's a better way for healthcare teams to work safely on the web. Learn more or schedule some time with the Google healthcare team at thisweekhealth.com/chromeos.
I hope you're having a good Thursday. Here's some stuff you might want to know about. State legislatures across the country are trying to address the problem of so-called intimate deepfakes. These images are created using software that can convert a completely innocent image of an individual into an image where they appear to be nude or involved in sexual activity.
By the way, the vast majority of intimate deepfake victims are women and children, so you can imagine the opportunity for bullying, public humiliation, and reputation damage. The advocacy group Public Citizen has created a page that lets you see how your state is doing on this kind of deepfake legislation.
Turns out, 18 states have no legal protection against this kind of digital deepfake sex crime. You can find the link to the story and the page on our news site. There are other kinds of deepfake images, too, obviously. Montana is one of several states working on deepfake legislation, primarily focused on political deepfakes. That legislation would make publishing that kind of material within 60 days of an election illegal.
Today's final story: there are companies working on some of these deepfake problems. One is OpenAI. The company behind ChatGPT has taken the co-lead on a $43 million funding round for cybersecurity company Adaptive Security.
It's OpenAI's first investment in a cybersecurity company. Adaptive simulates AI-powered attacks against companies, and they specialize in training organizations to recognize deepfake attacks and other phishing threats. You can read more on that story, and all these stories, plus all the latest healthcare innovation, tech, and security news, at the industry's fastest-growing news site, thisweekhealth.com/news.
And you can get all the past episodes of the Two Minute Drill at thisweekhealth.com/unh. Today's Two Minute Drill was brought to you by Google. You can keep patient data safe, reduce the burden on IT operations staff, and create a better clinician experience, all with one platform: Google Chrome OS with Chrome Enterprise.
Find out how by scheduling a chat today. Go to thisweekhealth.com/chromeos. That's it for today's Two Minute Drill. Thanks for being here. Stay a little paranoid, and I will see you around campus.