OpenAI's GPT-4 can exploit real vulnerabilities by reading security advisories
The Register
Summary
Researchers at the University of Illinois Urbana-Champaign have found that OpenAI's GPT-4 large language model can autonomously exploit real-world security vulnerabilities when given their CVE advisories. The model successfully exploited 87% of the one-day vulnerabilities tested (flaws that have been publicly disclosed but not yet patched), far outperforming other models and open-source vulnerability scanners. When denied access to the CVE descriptions, however, its success rate collapsed to just 7%. The study warns that future, more capable models could make such exploits easier still, and stresses the need for proactive security measures rather than reliance on security through obscurity.