AI isn’t just testing your data security controls; it’s testing your architecture. The moment you plug Claude, Copilot, or any agentic AI platform into Microsoft 365, Google Workspace, or your SaaS stack, the real question stops being “Who can access this file?” and becomes “How is this data being used, combined, and reused by systems you don’t fully control?”
That shift demands a new layer in the stack: one that governs data in use across AI-mediated workflows, not just data at rest in repositories or in motion on the network. That is precisely the layer Bonfy was built to provide.
This post digs into that shift as a response to Gidi Cohen’s Substack article “Claude Enterprise Is Pressure Testing Data Security. And That’s a Good Thing.”
Gidi notes that the key question is changing from “Is this user authorized to access the data?” to “Should this AI system be allowed to use this data (or a secure version of it) in this context?” Bonfy exists to operationalize that distinction.
Bonfy is an AI Data Security platform that protects unstructured data across email, SaaS apps, collaboration tools, Copilot, AI agents, and custom AI workflows by governing how data is used by both humans and AI. Rather than enforcing policy only at access time, Bonfy applies contextual, entity-aware policy at the moment content is selected for AI use, when it is retrieved into a reasoning workflow, not just when it is stored or sent.
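To make the idea of a retrieval-time policy gate concrete, here is a minimal sketch in Python. Everything in it, the data structures, the `evaluate_retrieval` function, and the decision values, is hypothetical; Bonfy’s actual data model and APIs are not described in the source.

```python
from dataclasses import dataclass

# Hypothetical data model: Bonfy's actual schema and APIs are not public.
@dataclass
class ContentItem:
    source: str         # e.g. "sharepoint", "gmail", "fileshare"
    owner_entity: str   # the client or customer the content belongs to
    sensitivity: str    # e.g. "public", "internal", "restricted"

@dataclass
class AIContext:
    requesting_user: str
    target_entity: str  # the entity the AI output is being produced for
    destination: str    # e.g. "copilot_draft", "agent_tool_call"

def evaluate_retrieval(item: ContentItem, ctx: AIContext) -> str:
    """Decide, at the moment content is pulled into an AI workflow,
    whether it may be used in this context."""
    # Restricted content never enters an AI reasoning workflow.
    if item.sensitivity == "restricted":
        return "block"
    # Content owned by one entity being reused in work for another
    # crosses a trust boundary: pause and alert rather than allow.
    if item.owner_entity != ctx.target_entity:
        return "pause_and_alert"
    return "allow"

# Example: internal content belonging to one client, retrieved for another.
decision = evaluate_retrieval(
    ContentItem("sharepoint", "client_a", "internal"),
    AIContext("alice@example.com", "client_b", "copilot_draft"),
)
print(decision)  # -> "pause_and_alert"
```

The point of the gate is its placement: the check runs when content is selected for AI use, not when the file was stored or when a message is sent.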
A single AI prompt can now pull data from Microsoft 365, Google Workspace, file shares, and SaaS apps, assemble a transient working set in an LLM, reason over it, and generate outputs that may be stored or shared downstream. Much of that happens outside the enterprise-controlled planes where classic DLP, DSPM, and CASB tools have visibility.
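To see why that falls outside repository-level controls, consider a deliberately simplified sketch of the fan-out step; the connector functions below are invented stand-ins, not real APIs.

```python
# Hypothetical connectors: real ones would call the Microsoft Graph,
# Google Drive, and file-share APIs. The names are invented for illustration.
def search_m365(query: str) -> list[str]:
    return [f"m365 snippet matching {query!r}"]

def search_workspace(query: str) -> list[str]:
    return [f"workspace snippet matching {query!r}"]

def search_fileshare(query: str) -> list[str]:
    return [f"fileshare snippet matching {query!r}"]

def assemble_working_set(prompt: str) -> list[str]:
    """Fan one prompt out across sources into a single transient context."""
    chunks: list[str] = []
    for connector in (search_m365, search_workspace, search_fileshare):
        chunks.extend(connector(prompt))
    # The combined set exists only for the duration of the LLM call; the
    # at-rest controls guarding each individual repository never see it.
    return chunks
```

Each source may enforce its own permissions correctly, yet the combined working set, the thing the model actually reasons over, is never inspected as a whole by any of them.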
Bonfy addresses this by acting as a dedicated content-security layer across the full execution surface: the sources content is retrieved from, the AI workflows it flows through, and the outputs it lands in. This is the “next layer” Gidi hints at: a unified, AI-aware control plane focused on how data is actually used, not just where it resides.
Legacy tools struggle because they don’t understand who the data belongs to, which customer or consumer is referenced, or which trust boundary is being crossed. That makes it hard to decide whether something is safe to feed an AI assistant, or safe to appear in its response.
Bonfy’s approach is to resolve exactly that missing context: who each piece of content belongs to, which customers or consumers it references, and which trust boundary a given AI interaction would cross.
This allows Bonfy to enforce the difference between “user can open this file” and “AI may reuse this data here.” For example, Bonfy can detect when Copilot tries to reuse a clause tied to Client A in a draft for Client B, pause the action, and alert the user before anything leaves the organization.
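As an illustration of that output-side check, here is a simplified sketch. A real system would use entity resolution rather than the hard-coded regex markers assumed here, and the client names are invented.

```python
import re

# Illustrative only: hard-coded regexes stand in for real entity resolution,
# and "Acme Corp" / "Globex" are invented client names.
CLIENT_MARKERS = {
    "client_a": re.compile(r"\bAcme Corp\b"),
    "client_b": re.compile(r"\bGlobex\b"),
}

def entities_in(text: str) -> set[str]:
    """Return the known client entities referenced in the text."""
    return {name for name, pat in CLIENT_MARKERS.items() if pat.search(text)}

def check_draft(draft_text: str, draft_client: str) -> list[str]:
    """Flag entities in an AI-generated draft other than the draft's client."""
    return sorted(entities_in(draft_text) - {draft_client})

# Copilot reuses a clause tied to Client A while drafting for Client B:
leaks = check_draft("Per the indemnity clause agreed with Acme Corp...", "client_b")
if leaks:
    print(f"Pause and alert: cross-client content detected: {leaks}")
```

The decisive input is not the user’s permission on the source file, which may well be valid, but the mismatch between the entity the content belongs to and the entity the draft is for.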
Gidi stresses that the point is not to throw away existing controls but to question whether they are positioned correctly in the workflow. Bonfy is designed to extend and strengthen the stack you already have, adding an AI-aware usage layer alongside DLP, DSPM, and CASB rather than replacing them. Together, they survive the “pressure test” Claude, Copilot, and AI agents are now applying to traditional architectures.
Gidi frames a hard question: where should policy be enforced when access, reasoning, and generation span an execution surface the enterprise does not fully own? Bonfy turns that into a practical roadmap.
In other words, the same AI systems that are stress-testing your controls can also justify and accelerate the evolution of your architecture, if you have the right layer in place.
Claude, Copilot, and emerging agents will keep challenging the old assumptions. Bonfy’s role is to ensure that as the execution point moves, your ability to govern how data is used moves with it.