Microsoft disclosed a service issue, tracked as CW1226324, after some Microsoft 365 Copilot Chat prompts returned information from email messages in users' Sent Items and Drafts folders, even when those messages carried confidential sensitivity labels and Copilot-related data loss prevention (DLP) controls were configured.
TechInformed asked security practitioners to assess what the incident implies for AI data handling controls.
Melissa Ruzzi, director of AI at SaaS security firm AppOmni, said the incident underscored the need to scrutinize how data is handled “before it’s passed to AI” and how “metadata present in the data is handled by the AI.”
Ruzzi’s warning pointed to “the volume and complexity of data that AI tools are dealing with” as a factor that can make data handling harder to validate at scale.
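The kind of pre-ingestion check Ruzzi describes can be enforced in the pipeline itself rather than trusted to the AI layer. The sketch below is purely illustrative: the record fields, label names, and function names are assumptions, not any Microsoft API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical message record; field names are illustrative only.
@dataclass
class MailItem:
    subject: str
    folder: str
    sensitivity_label: Optional[str]  # e.g. "Confidential", "General", or None

# Labels that must never reach an AI assistant, per this hypothetical policy.
BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

def eligible_for_ai(item: MailItem) -> bool:
    """Gate applied *before* content is passed to AI:
    exclude anything carrying a blocked label, regardless of folder."""
    return item.sensitivity_label not in BLOCKED_LABELS

def ai_corpus(items: list) -> list:
    # Filtering here means a downstream labeling or metadata bug in the
    # AI layer cannot expose items that never entered its corpus.
    return [i for i in items if eligible_for_ai(i)]
```

The design choice this illustrates is defense in depth: the label is evaluated as hard pipeline logic, not as metadata the assistant is merely expected to honor.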
What broke and how Microsoft described it
The Microsoft incident report attributed the bug to "a code issue" that allowed items in Sent Items and Drafts to be picked up by Copilot even when confidential sensitivity labels were applied.
Public mirrors of Microsoft’s service health incident feed list a start time but do not disclose when the defect was introduced, how many tenants were affected or how many emails were processed.
Timeline, as disclosed in the incident trail
Those mirrors show the issue was active in February 2026 and tied to CW1226324, with ongoing updates and remediation language. As of Feb. 20, Microsoft said the root cause had been addressed and that its fix had reached the majority of affected environments.
Separate reporting that surfaced the item more broadly described customers first reporting the behavior on Jan. 21, 2026, and Microsoft beginning to roll out a fix in early February.
Why sensitivity labels and DLP matter in Microsoft 365
Microsoft positions sensitivity labels as an information protection mechanism that helps classify and protect organizational data in Microsoft 365.
Microsoft also documents that DLP policies can use sensitivity labels as conditions, including in auditing and incident reporting.
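Conceptually, a label-conditioned DLP rule maps a sensitivity label to an action and emits an audit record when it matches. The sketch below is a minimal illustration of that idea under assumed names; it is not the Microsoft Purview API or its actual rule schema.

```python
from typing import Optional

def evaluate_dlp(label: Optional[str], rules: dict) -> tuple:
    """Evaluate a hypothetical DLP rule set whose conditions are
    sensitivity labels. `rules` maps label -> action ("block", "warn", ...).
    Returns (action, audit_event); unmatched content defaults to "allow"."""
    action = rules.get(label or "", "allow")
    # Incident-report style audit record, emitted for every evaluation.
    audit_event = {"label": label, "action": action}
    return action, audit_event

# Example rule set: block anything labeled Confidential from the AI path.
rules = {"Confidential": "block"}
```

In the CW1226324 incident, the reported behavior was that this kind of label condition did not keep Sent Items and Drafts content out of Copilot responses, which is why the auditing half of the mechanism matters for reconstructing what was exposed.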
Ruzzi said that without stronger diligence on AI data handling, sensitive information “may not be treated with the rigor it should.”
Ruzzi also said that a confidential label “doesn’t always mean the AI will follow that instruction.”
A second, separate Copilot data path issue in early 2026
In January 2026, Varonis published technical details on “Reprompt,” which it described as a single-click prompt injection technique that could lead to data exposure in Microsoft Copilot Personal, the consumer version of the assistant. Microsoft patched the issue on Jan. 13, 2026.
The Reprompt disclosure was separate from CW1226324 and described a different mechanism.
Operational oversight as AI features expand in SaaS
Ruzzi said organizations should “make sure employees are trained on best practices for using AI” and said that every organization using SaaS platforms such as Microsoft 365 “needs AI monitoring in place” as embedded AI adoption increases.