GPT-4 can exploit real vulnerabilities by reading advisories
The Register
Contributed by: Drex DeFord
Summary
A team of computer scientists from the University of Illinois Urbana-Champaign demonstrated that AI agents powered by OpenAI's GPT-4 can efficiently exploit real-world security vulnerabilities by analyzing CVE advisories. In their study, GPT-4 successfully exploited 87% of the tested vulnerabilities, a significant leap over other models and traditional vulnerability scanners. The research highlights the potential of large language models to automate attacks, raising concerns about current security practices. The team emphasizes proactive security measures, noting that restricting access to vulnerability descriptions proved largely ineffective as a defense. The work points to a future in which AI could outpace the exploitation tools currently available to attackers, underscoring the need for advances in cybersecurity defenses.