Copilot and Agentforce fall to form-based prompt injection tricks

Summary

Security researchers have discovered prompt-injection vulnerabilities in Microsoft Copilot Studio and Salesforce Agentforce, allowing attackers to exfiltrate sensitive data by tricking the AI agents into executing malicious instructions. These flaws exploit the way AI agents process user input, blurring the lines between trusted commands and untrusted data, leading to potential theft of PII and business information.

IFF Assessment

FOE

These vulnerabilities allow attackers to compromise AI agents and exfiltrate sensitive data, representing a significant threat to organizations relying on these tools.

Severity

7.5 High

The article describes prompt injection that leads to data exfiltration. This often involves taking control of agent actions and accessing sensitive information, which can be rated as High or Critical severity (e.g., CVSS 8.1 for Attack Complexity Low, Privileges Required None, User Interaction None, Scope Changed, Confidentiality High, Integrity High, Availability High).

Defender Context

Organizations using AI agents like Copilot and Agentforce should be aware of prompt injection risks, especially when these agents interact with sensitive data sources like SharePoint. Defenders need to implement input validation and consider how to better segregate user-provided data from system instructions to prevent unauthorized data exfiltration.

Read Full Story →