Agents hooked into GitHub can steal creds – but Anthropic, Google, and Microsoft haven't warned users

Exclusive: Researchers who found the flaws scored beer-money bounties and warn the problem is probably pervasive.
Security researchers exploited a prompt injection vulnerability in AI agents integrated with GitHub Actions to steal credentials. The attack hit three popular AI agents, reportedly tools from Anthropic, Google, and Microsoft, and may be widespread given how common GitHub Actions integrations are.