Bonfy Blog

Claude, Copilot, and the Pressure Test Your Security Wasn’t Built For

Written by Vishnu Varma | 4/30/26 3:55 PM

AI isn’t just testing your data security controls; it’s testing your architecture. The moment you plug Claude, Copilot, or any agentic AI platform into Microsoft 365, Google Workspace, or your SaaS stack, the real question stops being “Who can access this file?” and becomes “How is this data being used, combined, and reused by systems you don’t fully control?”

That shift demands a new layer in the stack—one that governs data in use across AI-mediated workflows, not just data at rest in repositories or in motion on the network, and that is precisely the layer Bonfy was built to provide.

This post digs into that shift, in response to Gidi Cohen’s Substack article “Claude Enterprise Is Pressure Testing Data Security. And That’s a Good Thing.”

From User Access to AI Use

Gidi notes that the key question is changing from “Is this user authorized to access the data?” to “Should this AI system be allowed to use this data (or a secure version of it) in this context?” Bonfy exists to operationalize that distinction.

Bonfy is an AI Data Security platform that protects unstructured data across email, SaaS apps, collaboration tools, Copilot, AI agents, and custom AI workflows by governing how data is used by both humans and AI. Instead of treating access control and usage policy as separate concerns, Bonfy applies contextual, entity-aware policy at the moment content is selected for AI use (when it is retrieved into a reasoning workflow), not just when it is stored or sent.
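As a rough illustration of that idea (not Bonfy’s actual engine; the labels and function names here are hypothetical), a retrieval-time policy gate might look like:

```python
from dataclasses import dataclass

@dataclass
class UseContext:
    """Who or what is asking, through which channel, for which client."""
    actor: str           # human user or AI agent identity
    channel: str         # e.g. "copilot_prompt", "rag_index", "email"
    target_client: str   # client the workflow's output is intended for

def allowed_for_ai_use(doc_labels: set, ctx: UseContext) -> bool:
    """Toy policy: client-specific content may only enter a workflow
    whose target client matches the document's client label."""
    client_labels = {l for l in doc_labels if l.startswith("client:")}
    if not client_labels:
        return True  # generic content is unrestricted
    return f"client:{ctx.target_client}" in client_labels

def retrieve_for_prompt(docs: list, ctx: UseContext) -> list:
    """Gate each candidate document at the moment it is selected
    for AI use, not when it was stored or shared."""
    return [d for d in docs if allowed_for_ai_use(d["labels"], ctx)]
```

The point of the sketch is where the check runs: at retrieval into the reasoning workflow, after access control has already said yes.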

Securing the New Execution Surface

A single AI prompt can now pull data from Microsoft 365, Google Workspace, file shares, and SaaS apps, assemble a transient working set in an LLM, reason on it, and generate outputs that may be stored or shared downstream. Much of that happens outside traditional enterprise-controlled planes where classic DLP, DSPM, and CASB had visibility.

Bonfy addresses this by acting as a dedicated content-security layer across the full execution surface:

  • One architecture across channels. Bonfy protects unstructured data across email, SaaS apps, collaboration tools, on-prem and cloud file stores, AI systems, and AI agents as one connected data surface.
  • Data in motion, at rest, and in use. The same policies apply to content being emailed, shared, indexed, retrieved for RAG, or passed between agents.
  • Execution-aware hooks. Native integrations with platforms like Microsoft 365, Google Workspace, Salesforce, Slack, M365 Copilot Studio, custom GenAI apps, and an MCP server interface let Bonfy inspect and govern content as it flows into and out of AI systems.
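To make “execution-aware hooks” concrete, here is a minimal sketch (assumed names, not a real Bonfy or MCP API) of wrapping an AI connector so content is inspected both on the way into the model and on the way out:

```python
def govern(tool_fn, inspect):
    """Wrap an AI tool/connector so payloads are inspected on the
    way into the model and results on the way out."""
    def wrapped(payload):
        if inspect(direction="inbound", content=payload) == "block":
            raise PermissionError("policy blocked inbound content")
        result = tool_fn(payload)
        if inspect(direction="outbound", content=result) == "block":
            raise PermissionError("policy blocked outbound content")
        return result
    return wrapped

# Example: a stub connector and a toy policy that blocks anything
# carrying a (hypothetical) "confidential" marker.
def fake_llm_call(payload):
    return f"summary of: {payload}"

def simple_policy(direction, content):
    return "block" if "confidential" in content else "allow"

guarded = govern(fake_llm_call, simple_policy)
```

Wrapping the connector rather than the repository is what lets the same policy follow content as it moves between stores, prompts, and agents.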

This is the “next layer” Gidi hints at: a unified, AI-aware control plane focused on how data is actually used, not just where it resides.

Entity-Aware Policy for AI Workflows

Legacy tools struggle because they don’t understand who the data belongs to, which customer or consumer is referenced, or which trust boundary is being crossed. That makes it hard to decide whether something is safe to feed an AI assistant, or safe to appear in its response.

Bonfy’s approach:

  • Business-context knowledge graph. Bonfy learns your org structure, customers, consumers, and relationships from business systems to build a knowledge graph.
  • Entity-aware analysis and labeling. Our multi-stage engine differentiates generic from client-specific content, applies granular, contextual labels automatically, and can push those labels into systems like Microsoft Purview so AI tools see accurate governance metadata.

This allows Bonfy to enforce the difference between “user can open this file” and “AI may reuse this data here.” For example, Bonfy can detect when Copilot tries to reuse a clause tied to Client A in a draft for Client B, pause the action, and alert the user before anything leaves the organization.
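A stripped-down sketch of that Client A / Client B check (the knowledge graph and entity matcher here are toy stand-ins for illustration, not how Bonfy resolves entities):

```python
# Hypothetical knowledge graph: entity names mapped to client IDs.
KNOWLEDGE_GRAPH = {"Acme Corp": "client_a", "Globex": "client_b"}

def clients_referenced(text: str) -> set:
    """Naive entity matching against the knowledge graph; a real
    system would use far richer entity resolution."""
    return {cid for name, cid in KNOWLEDGE_GRAPH.items() if name in text}

def review_reuse(retrieved_clause: str, draft_for: str) -> dict:
    """Pause the action if retrieved content is tied to a client
    other than the one the draft is intended for."""
    foreign = clients_referenced(retrieved_clause) - {draft_for}
    if foreign:
        return {"action": "pause",
                "alert": f"content tied to {sorted(foreign)} in a draft for {draft_for}"}
    return {"action": "allow"}
```

Note that the decision depends on who the content belongs to and where it is headed, not merely on whether it matches a sensitive-data pattern.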

Extending, Not Replacing, Existing Controls

Gidi stresses that the point is not to throw away existing controls but to question whether they are positioned correctly in the workflow. Bonfy is designed to extend and strengthen the stack you already have:

  • With Purview. Purview governs the Microsoft 365 environment; Bonfy brings entity-aware, AI-aware protection across your broader SaaS and collaboration estate and feeds better labels back into Purview.
  • With DSPM. DSPM finds sensitive data at rest; Bonfy adds prevention and AI guardrails across data in motion and in use—email, SaaS, collaboration tools, and AI flows.
  • With SSE/SWG, secure browsers, and email security. Those tools protect traffic, browsers, or inbound threats; Bonfy focuses on content: what it is, who it belongs to, and whether its current use breaks policy.

Together, they survive the “pressure test” Claude, Copilot, and AI agents are now applying to traditional architectures.

Turning Pressure into a Roadmap

Gidi frames a hard question: where should policy be enforced when access, reasoning, and generation span an execution surface the enterprise does not fully own? Bonfy turns that into a practical roadmap.

  • Start with visibility. Connect Bonfy to email, SaaS apps, collaboration tools, file repositories, and AI systems to see where sensitive data lives and how humans, systems, and AI agents interact with it—often in days.
  • Understand risk in context. Tie exposures to specific people, partners, systems, and AI agents with entity-level risk scoring.
  • Define policy from reality. Use out-of-the-box templates and natural-language rules, tuned based on observed AI and data flows.
  • Automate and prevent. Once accuracy is validated, move from alerting to automated enforcement: block or modify risky actions, prevent sensitive data from entering prompts or indexes, and stop hallucinated leakage before it reaches recipients.
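The alert-to-enforcement progression in the last step could be modeled as a simple mode flag on each policy (a hypothetical structure, shown only to illustrate the rollout pattern):

```python
# A policy starts in "alert" mode while detection accuracy is
# validated, then flips to "enforce" for automated prevention.
POLICY = {
    "name": "client-data-isolation",
    "rule": "client-specific content may not enter prompts or "
            "indexes for other clients",
    "mode": "alert",
}

def handle_violation(policy: dict) -> dict:
    """In alert mode, let the action proceed but notify; in
    enforce mode, block it outright."""
    if policy["mode"] == "enforce":
        return {"allow": False, "notify": True}
    return {"allow": True, "notify": True}
```

Running the same rule in both modes means the switch to prevention changes the outcome, not the detection logic being validated.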

In other words, the same AI systems that are stress-testing your controls can also justify and accelerate the evolution of your architecture, if you have the right layer in place.

Claude, Copilot, and emerging agents will keep challenging the old assumptions. Bonfy’s role is to ensure that as the execution point moves, your ability to govern how data is used moves with it.