Bonfy Blog

Over-Permissioning and Data Leakage Risks with Microsoft Copilot: Navigating the New Reality

Written by Gidi Cohen | 10/30/25 7:12 PM

Microsoft Copilot promises a huge leap in productivity, surfacing insights and automating tasks across the Microsoft 365 ecosystem. But beneath the surface, Copilot’s deep integration also blurs the line between what users can technically access and what they should actually see, heightening the risk of sensitive information exposure.

Copilot: Powerful, but Permission-Blind 

When organizations enable Microsoft Copilot, the AI assistant instantly inherits each user’s access rights across Word, Excel, Outlook, Teams, and SharePoint. If a user has permission to open confidential documents, Copilot can access that content too, and potentially surface it in whatever it is asked to generate.

How serious is the problem? Research shows that Microsoft Copilot interacts with roughly 3 million confidential records per organization, including financial figures, HR records, intellectual property, and more. Because most organizations allow Copilot to access these records whenever the employee can, and lack a mechanism for Copilot to recognize which documents are confidential, the likelihood that sensitive data from these documents seeps into externally shared files is extremely high.

The Real-World Fallout: Unintentional Data Exposure 

What does over-permissioning look like in practice? Imagine a Copilot prompt asking for “last quarter’s financial performance” whose response pulls in confidential payroll data, because that spreadsheet is accessible to the user but was never intended for broad consumption. Or consider a scenario where Copilot, acting on behalf of an executive assistant, surfaces HR files in a draft email due to overlapping access rights in SharePoint.

Then add in the amplifying power of AI. Unlike manual searches, Copilot can aggregate, summarize, and republish sensitive content from every tool it touches. This means internal oversharing is instantly more discoverable, and external leaks are one misconfigured permission or compromised account away. 

Why Legacy Controls Fall Short 

Traditional data governance often relies on security “by obscurity,” counting on users not knowing where sensitive files live, or lacking the tools to search widely. Copilot upends this model. The AI assistant ties together every data source a user can touch, surfacing forgotten corners of OneDrive and orphaned spreadsheets in Teams channels. It can even pull outdated, misclassified, or duplicate files into new outputs, compounding oversharing risks.  

Alert admins may notice Copilot surfacing information outside approved channels or intended audiences, especially in environments lacking strong data classification and least-privilege rules. But without proactive controls, the window for detection is narrow, and the consequences for compliance and trust are severe.

The Shadow AI Problem: Oversharing Moves Fast 

Shadow IT and shadow AI are already huge risks, with employees routinely bypassing official channels for convenience. Copilot accelerates this challenge. For instance, a prompt for “all recent customer feedback” can pull from a mix of internal notes, external reviews, and even sensitive support tickets because all are accessible with a broad enough permission set.  

Recent analysis reveals that nearly two-thirds (67%) of enterprise security teams now express concern that Copilot and similar AI tools could expose business-critical information, and the US House of Representatives banned staff from using Copilot due to unresolved data security concerns.

Bonfy.AI’s Take: Proactive Defense, Not Reactive Cleanup 

Bonfy.AI advocates an aggressive, adaptive stance. Organizations can’t afford to wait for an incident to audit permissions. They need: 

  • Continuous, context-driven monitoring of access rights across Microsoft 365, focused on identifying over-permissioned accounts before Copilot deployment 
  • Automated data classification and labeling to block AI access to confidential files by default 
  • Dynamic data-loss prevention (DLP) for generative outputs, so that sensitive information isn’t inadvertently republished or shared externally (a minimal sketch of such an output check follows this list)
  • Granular controls over Copilot prompts, restricting access based on user roles and dynamic risk signals 
  • Enterprise-wide audits every time Copilot is enabled or expanded, with visibility into which files, folders, and systems are exposed to AI aggregation  
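
To make the DLP item above concrete, here is a minimal, hypothetical sketch of a post-generation check: before Copilot-generated text is pasted into an external email or document, scan it for sensitive patterns and block the share if anything matches. The pattern names and function names below are illustrative assumptions, not part of Microsoft 365 or Bonfy’s product; production DLP relies on trained classifiers, exact data matching, and sensitivity labels rather than a handful of regexes.

```python
import re

# Hypothetical patterns for illustration only; real DLP engines use trained
# classifiers, exact data matching, and sensitivity labels.
SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "hr_keyword": re.compile(r"\b(salary|payroll|compensation)\b", re.IGNORECASE),
    "label_keyword": re.compile(r"\bconfidential\b", re.IGNORECASE),
}

def scan_generated_output(text: str) -> list[str]:
    """Return the names of sensitive patterns found in generated text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def safe_to_share_externally(text: str) -> bool:
    """Block external sharing when any sensitive pattern is present."""
    findings = scan_generated_output(text)
    if findings:
        print(f"Blocked: draft matches sensitive patterns {findings}")
        return False
    return True

if __name__ == "__main__":
    draft = "Attached is the payroll summary Copilot drafted for Q3."
    print(safe_to_share_externally(draft))  # prints the block message, then False
```

In practice, a check like this would run inside a policy engine that also weighs who is sharing, with whom, and under which sensitivity label, rather than acting on text patterns alone.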

Recommendations for Tech Leaders 

  • Start by mapping sensitive assets: intellectual property, financial statements, HR records, regulated data, etc. Which users can access them, and which systems can surface them via Copilot? 
  • Review permissions and sharing settings in SharePoint, Teams, and OneDrive. Focus especially on Excel (financial data), Word (corporate docs), and high-risk file types; a minimal sketch of such a review appears after this list.
  • Apply the principle of least privilege everywhere. For Copilot, block access to sensitive files unless there is a clear, auditable business need. Use built-in Microsoft 365 security controls and third-party AI governance solutions for ongoing oversight.  
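
As a starting point for the review above, the sketch below uses the Microsoft Graph API to list Office files in a single document library and flag any shared through anonymous or organization-wide links, since those are exactly the files Copilot can fold into answers for the widest audience. It assumes an app registration with read permissions (for example Files.Read.All), a bearer token in an environment variable, and a known drive ID; pagination, Teams channels, and per-user OneDrive accounts are omitted for brevity, and exact response fields should be verified against the Graph documentation.

```python
import os
import requests

# Illustrative sketch only: list files in one SharePoint document library and
# flag Office files shared via anonymous or organization-wide links.
# Assumes GRAPH_TOKEN holds a valid bearer token (e.g. from an app registration
# with Files.Read.All) and TARGET_DRIVE_ID identifies the library to audit.

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}
HIGH_RISK_EXTENSIONS = (".xlsx", ".docx", ".csv", ".pdf")


def list_root_items(drive_id: str) -> list[dict]:
    """Return driveItems in the drive's root folder (first page only, for brevity)."""
    resp = requests.get(f"{GRAPH}/drives/{drive_id}/root/children", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json().get("value", [])


def broad_sharing_links(drive_id: str, item_id: str) -> list[dict]:
    """Return sharing permissions whose link scope is wider than named users."""
    resp = requests.get(f"{GRAPH}/drives/{drive_id}/items/{item_id}/permissions",
                        headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return [p for p in resp.json().get("value", [])
            if p.get("link", {}).get("scope") in ("anonymous", "organization")]


def audit_drive(drive_id: str) -> None:
    for item in list_root_items(drive_id):
        name = item.get("name", "")
        if not name.lower().endswith(HIGH_RISK_EXTENSIONS):
            continue  # skip folders and low-risk file types in this sketch
        wide = broad_sharing_links(drive_id, item["id"])
        if wide:
            scopes = sorted({p["link"]["scope"] for p in wide})
            print(f"REVIEW before Copilot rollout: {name} shared via {scopes} link(s)")


if __name__ == "__main__":
    audit_drive(os.environ["TARGET_DRIVE_ID"])
```

Run against each high-traffic library before a Copilot rollout, the same loop doubles as a quick least-privilege check: every flagged file is a candidate for tighter sharing or a sensitivity label that keeps it out of AI-generated outputs.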

In the world of generative AI, robust access management isn’t optional; it’s the difference between empowerment and exposure. We partner with B2B SaaS and cybersecurity teams to secure the future of work, ensuring Copilot is a force for progress, not a catalyst for risk.

For more on how Bonfy helps organizations govern AI-powered content (and specifically Copilot) and reduce the risk of data leaks, or to request a risk assessment tailored to your Microsoft 365 environment, check out this page on our website. The era of permission-aware AI is here; make sure your controls are ready for it.