Microsoft’s AI Can Be Turned Into an Automated Phishing Machine
Wired | Contributed by: Drex DeFord
Summary
New research presented at the Black Hat security conference has revealed significant vulnerabilities in Microsoft's Copilot AI that allow attackers to manipulate its responses, extract data, and bypass security controls. Through five proof-of-concept attacks, researcher Michael Bargury demonstrated how hackers could turn Copilot into an automated spear-phishing machine, exfiltrate sensitive data, and influence the AI's answers by poisoning its database. These findings underscore the risks of integrating AI systems with corporate data and highlight the need for more robust monitoring and security measures to prevent AI abuse. Microsoft is working to address these vulnerabilities while acknowledging the challenges such threats present.