
OpenAI's GPT-4 can exploit real vulnerabilities by reading security advisories

Source: The Register


Researchers at the University of Illinois Urbana-Champaign have found that OpenAI's GPT-4 large language model can autonomously exploit real-world security vulnerabilities when given CVE advisories. The model successfully exploited 87% of the one-day vulnerabilities tested, far outperforming other models and open-source vulnerability scanners. However, its success rate drops sharply when access to the CVE descriptions is withheld. The study warns that future models could enable even more effective exploits and stresses the need for proactive security measures rather than reliance on security through obscurity.
