AI algorithms can become 'agents of chaos'

Researchers at Northeastern University have demonstrated that AI 'agents', autonomous algorithms capable of planning and executing tasks such as email management, can be manipulated by adversaries into becoming 'agents of chaos'. The impact spans AI-driven productivity tools and the integrity of user data, with the potential for unauthorized actions or system compromise. Organizations that rely on AI automation for critical workflows are most exposed.

Natalie Shapira, a computer scientist at Northeastern University, wondered how far users could trust new artificial intelligence (AI) “agents,” a kind of algorithm that can autonomously plan and carry out tasks such as managing emails and entering calendar ap…