How Microsoft Purview DLP Can Help You Protect Confidential Data in Copilot

Organizations today face a difficult balancing act. Business leaders are eager to adopt tools like Microsoft Copilot to unlock productivity and innovation. Meanwhile, IT and security teams are concerned about safeguarding sensitive information, especially as AI-driven features process vast amounts of organizational data. This tension is real: enabling advanced capabilities without compromising compliance or data protection is a challenge every modern enterprise must solve.

Microsoft Purview Data Loss Prevention (DLP) is a key solution to this problem. It provides mechanisms to prevent confidential data from being exposed or misused, even in scenarios involving AI. I want to highlight two features designed to help organizations control what Copilot processes.

Blocking Documents Based on Sensitivity Labels

One of the foundational features of Purview DLP is its ability to enforce policies based on Microsoft Information Protection sensitivity labels. If your organization has a well-implemented labeling strategy, this capability is a game-changer.

DLP policies can be configured to block or restrict actions based on the sensitivity label attached to a document. These policies work the same way DLP works across other Microsoft 365 services, but they must be created as standalone policies: at the time of writing, a Copilot policy cannot be bundled into a DLP policy that is scoped to another service.



This ensures that sensitive files remain protected across Microsoft 365 services, regardless of where they are stored or accessed. And for organizations with mature labeling practices, this feature provides strong, policy-driven control over data handling.
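Conceptually, a label-based DLP rule is a simple gate: if the document carries a restricted sensitivity label, Copilot is not allowed to reference it. The sketch below models that decision in Python. The label names and function are illustrative assumptions for this article; the real evaluation happens inside Microsoft 365 as part of the DLP policy, not in code you write.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical label taxonomy for illustration only; use the labels
# actually published in your tenant via Microsoft Information Protection.
BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

@dataclass
class Document:
    name: str
    sensitivity_label: Optional[str]  # label applied to the file, if any

def copilot_may_process(doc: Document) -> bool:
    """Conceptual check: Copilot may only reference documents
    whose label is not in the blocked set."""
    return doc.sensitivity_label not in BLOCKED_LABELS

# A labeled financial file is excluded; a general document is allowed.
print(copilot_may_process(Document("q3-forecast.xlsx", "Confidential")))  # False
print(copilot_may_process(Document("lunch-menu.docx", "General")))        # True
```

The point of the model: the decision keys off the label alone, which is why the feature is only as strong as your labeling strategy.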

That is not all you can do with Microsoft Purview DLP, though. A new capability has recently been released in preview:

Detecting Sensitive Information Inline in Copilot Prompts

The new inline sensitive information detection capability (currently in preview) addresses the challenge where users might inadvertently include confidential details when asking Copilot for assistance.

Purview's new DLP feature can inspect the content of prompts for sensitive information types (e.g., credit card numbers, health data, financial identifiers). The response the user receives is also quite polite, helping them understand that this kind of data should not be used with Copilot.


If a prompt contains restricted data, DLP policies can block Copilot from processing it, preventing sensitive information from being sent to the AI service.
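To make the idea concrete, here is a minimal sketch of how a detector for one sensitive information type (credit card numbers) can work: a pattern match followed by a Luhn checksum to cut false positives. This is an illustrative model only, not Purview's actual classifiers; real sensitive information types combine patterns, checksums, keywords, and confidence levels.

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum, commonly used to validate candidate card numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

# 13-16 digits, optionally separated by spaces or hyphens.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def prompt_contains_card(prompt: str) -> bool:
    """Conceptual inline check: flag the prompt if any digit run
    both matches the card pattern and passes the Luhn check."""
    return any(luhn_valid(m.group()) for m in CARD_PATTERN.finditer(prompt))

# A well-known test card number triggers the check; ordinary text does not.
print(prompt_contains_card("My card is 4111 1111 1111 1111, draft a dispute email"))
print(prompt_contains_card("Summarize the Q3 planning meeting notes"))
```

When a check like this fires, the policy decides the outcome: block the prompt, or allow it with a notification to the user.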


Important Note: This detection applies only to the text within the prompt, not to any documents or files attached to the prompt. Users remain responsible for ensuring that uploaded content complies with organizational policies. This feature does not replace the need for sensitivity labeling in Microsoft 365. Labels remain critical for protecting documents and ensuring consistent enforcement across workloads.

The bottom line

Innovation and security don’t have to be mutually exclusive. By leveraging Microsoft Purview DLP, with both its established capabilities such as sensitivity label enforcement and emerging features like inline prompt inspection, organizations can confidently adopt Copilot without compromising data protection.