TL;DR: Gidi Cohen's latest Substack article argues that in connected AI environments, understanding whose data is involved, and the relationships and obligations behind it, is becoming part of the security policy itself, not just context around it. At Bonfy, we've been building for exactly this moment: the stage of AI adoption at which AI is so entrenched in the enterprise that it has access to the most business-critical apps and data. Our Contextual Data Enforcement capability sits between AI clients like Claude and enterprise data stores, applying entity-aware, relationship-driven controls to every retrieval and interaction, not just at the perimeter but throughout the agent's reasoning process. "The Who" isn't a metadata problem anymore. It's an enforcement problem. And we have the right architecture to solve it.
********************************************
Gidi Cohen published a thought-provoking piece this week on his Substack titled "Claude Enterprise Makes 'The Who' a Control Problem." It's worth reading carefully, because he articulates something that the data security industry has been slow to name clearly: in connected AI environments, understanding whose data is involved — the relationships, obligations, and business context surrounding it — is no longer just useful enrichment for security policies. It is increasingly part of the policy itself.
Gidi frames this as a shift in architecture. Traditional controls, he argues, were built to answer two questions: what is the classification and sensitivity of this data, and who and what has permission to access it? Connected AI introduces a third question that those controls weren't designed to answer: should this AI system be permitted to use this data in this specific interaction, given the relationships and obligations attached to it?
That third question is exactly what we've been building toward at Bonfy.
The Gap Gidi Is Describing Is Real — And We See It Every Day
The organizations adopting Claude Enterprise, Copilot Studio, and other connected AI systems are discovering a problem their existing tooling wasn't built to solve. When an AI agent connects to SharePoint, Google Drive, or any enterprise data repository, the native connectors those systems provide operate purely on user-level access control. If a user has access to a folder, for example, the AI can see everything in it. There is no content-level logic, no relationship awareness, no contextual enforcement.
We heard this directly from customers. One of our earliest enterprise prospects put it plainly: they were being pushed to adopt Claude, and they had no means to control which data the AI could actually use — beyond whatever access controls already existed in Microsoft 365. The only options were blunt ones: block the integration entirely, or accept that the AI had access to everything the connecting user had access to.
That is the gap Gidi is identifying. And he's right that it isn't primarily a classification problem. Two documents can share identical sensitivity labels and still carry very different risk depending on which customer's data they contain, which regulatory obligations apply to that relationship, and what the AI system is being asked to do with the information.
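To make that concrete, here is a toy sketch of the point. Everything in it (the Document type, the decide function, the obligation strings) is invented for illustration, not Bonfy's actual model:

```python
# A toy illustration: identical classification labels can still require
# different enforcement decisions once the entity behind the data is known.
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    label: str               # the classic classification label
    customer: str            # whose data this is: "The Who"
    obligations: frozenset   # duties attached to that relationship

def decide(doc: Document, action: str) -> str:
    """Allow or deny an AI *use* of a document, not just access to it."""
    if action == "ai_synthesis" and "no_ai_processing" in doc.obligations:
        return "deny"   # the relationship forbids this use, whatever the label
    return "allow"

a = Document("confidential", "Acme Corp", frozenset())
b = Document("confidential", "Globex Ltd", frozenset({"no_ai_processing"}))

print(decide(a, "ai_synthesis"))  # allow: same label...
print(decide(b, "ai_synthesis"))  # deny:  ...different obligations
```

Same label, opposite verdicts. The deciding signal lives in the relationship, not the classification.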
What We Built: A Contextual Enforcement Layer
Our answer to this problem is what we call Contextual Data Enforcement: a capability within the Bonfy platform that sits between the AI platform (Claude, Copilot, and others) and an enterprise data source, inspecting content in real time before it reaches the AI's reasoning process.
The mechanics are deliberately simple. Instead of using Claude's native Microsoft 365 connector, an organization deploys Bonfy's connector. From Claude's perspective, the experience is identical — the same tools, the same interface. But the traffic is now routed through Bonfy's inspection engine, which applies the same entity-aware, contextual analysis we use across email, file shares, and collaboration platforms. If the content contains data that shouldn't flow to this AI system — whether that's PII belonging to a specific customer, regulated content under a particular obligation, or information that violates an internal policy — the AI receives a denial instead of the document.
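A minimal sketch of that pattern, with hypothetical inspect_content and guarded_fetch functions standing in for Bonfy's actual engine, looks something like this:

```python
# A minimal sketch of the connector pattern described above. The function
# names and the toy rule are illustrative stand-ins, not Bonfy's engine.
from typing import Callable

def inspect_content(content: str) -> bool:
    """Placeholder for entity-aware contextual analysis: returns True only
    if this content may flow to the requesting AI system under policy."""
    return "ssn:" not in content.lower()   # toy rule for illustration only

def guarded_fetch(fetch: Callable[[str], str], doc_id: str) -> str:
    """What the AI sees: either the document or a policy denial, never the
    raw contents of everything the connecting user can read."""
    content = fetch(doc_id)                # retrieve from SharePoint, Drive, etc.
    if not inspect_content(content):
        return f"[denied by policy: {doc_id} contains restricted content]"
    return content

# A dict standing in for an enterprise repository the user has access to.
store = {"q3-report": "Quarterly revenue summary...",
         "hr-file":   "Employee record, SSN: 123-45-6789"}

print(guarded_fetch(store.get, "q3-report"))  # the content flows through
print(guarded_fetch(store.get, "hr-file"))    # a denial flows instead
```

The key property: the decision happens per retrieval, on the content itself, after permissions have already been satisfied.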
This is content-level enforcement, not access-level enforcement. And that distinction matters enormously in the context Gidi is describing. It is akin to giving the data being accessed an identity of its own, and then securing it accordingly.
A permission model tells you what a user can see. It says nothing about whether an AI system should synthesize that content across thousands of documents, surface it in a generated response, or retain it in an agent's memory. Those are use decisions, and they require a different kind of signal — exactly the kind of entity context and relationship awareness that Gidi argues is becoming central to policy, not peripheral to it.
Closing the Loop: Three Control Points, One Platform
One of the core insights we've built into Bonfy's architecture is that AI-mediated data risk doesn't happen at a single point — it happens across an entire workflow.
When an AI agent operates, there are multiple moments where sensitive data can travel inappropriately: in the initial prompt or grounding context, during retrieval from connected data sources, across calls to external MCP servers, and in the final output delivered via email, file, or another channel. Addressing only one of these leaves the others unprotected.
Bonfy provides enforcement at three control points that together cover that entire workflow:
Input control governs what data is available to the agent during grounding. Contextual classification ensures that retrieval is filtered before the AI ever sees the content.
Output control inspects what the agent produces before it leaves the organization — catching sensitive content in generated emails, files, or other downstream artifacts.
Data-in-use inspection is our newest and perhaps most architecturally interesting capability. Bonfy now offers its own MCP server that agents can consult during their reasoning process, not just at endpoints. An agent building a customer summary can call Bonfy mid-task to verify that the content it's about to use or transmit doesn't violate a policy. This turns Bonfy from a perimeter control into something more like an active participant in the agent's decision-making — which is, in essence, what Gidi is calling for when he argues that "The Who" needs to become part of the control plane.
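As a rough illustration of what such a mid-task check can look like, here is a sketch built on the FastMCP helper from the Python MCP SDK. The server name, tool signature, and verdict logic below are our assumptions for this example, not Bonfy's actual MCP interface:

```python
# A sketch of a mid-task policy check exposed as an MCP tool. The checking
# logic is a stand-in; a real engine would resolve which customers and
# obligations the text implicates before returning a verdict.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("policy-check")   # hypothetical server name

@mcp.tool()
def check_content(text: str, intended_action: str) -> str:
    """Let an agent ask, mid-reasoning: may I use or transmit this content?"""
    if "confidential" in text.lower() and intended_action == "external_email":
        return "deny: content implicates a restricted relationship"
    return "allow"

if __name__ == "__main__":
    mcp.run()   # serves over stdio, so an agent can call check_content mid-task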
Critically, all three of these control points run through the same Bonfy platform — the same intelligence, the same policies, the same entity-aware engine. There is no separate toolset for the AI use case, no parallel governance model. The architecture that already understands your customers, your counterparties, and your regulatory obligations simply extends to govern AI-mediated access and use.
Why Entity Awareness Is the Foundation
Gidi's central argument is that entity context — understanding the people, customers, and relationships behind data — is becoming material to whether a given AI use is permissible, not merely useful for tuning policies after the fact.
This is something we've believed since the beginning. Bonfy's detection engine was built around entity awareness from the start: understanding not just that a document contains sensitive content, but whose data it contains, what business relationship it represents, and what obligations attach to that relationship. That's what enables enforcement decisions that reflect real policy rather than generic classification.
When a customer imposes a contractual restriction on how their data can be used in AI systems, labeling the relevant documents "confidential" doesn't solve the problem. A label is static. Bonfy's knowledge graph is dynamic — it understands which customers are represented in which documents, which counterparty obligations apply, and how those signals should interact with the AI systems that want to access or reason over that content.
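A toy rendering of that idea, with an invented graph shape and obligation names, shows how walking the relationships, rather than reading a label, drives the decision:

```python
# The decision comes from walking document -> customers -> obligations,
# not from a static label. All names below are invented for illustration.
doc_customers = {
    "msa-globex.docx":    ["Globex Ltd"],
    "q3-board-deck.pptx": ["Acme Corp", "Globex Ltd"],
}

customer_obligations = {
    "Globex Ltd": {"no_ai_training", "eu_data_residency"},
    "Acme Corp":  set(),
}

def ai_use_permitted(doc_id: str, ai_use: str) -> bool:
    """Test an intended AI use against every relationship the document touches."""
    for customer in doc_customers.get(doc_id, []):
        if f"no_{ai_use}" in customer_obligations.get(customer, set()):
            return False   # one implicated relationship forbids this use
    return True

print(ai_use_permitted("q3-board-deck.pptx", "ai_training"))    # False: Globex appears
print(ai_use_permitted("q3-board-deck.pptx", "summarization"))  # True: nothing blocks it
```

Because the graph is maintained continuously, a new contractual restriction on one customer changes the verdict for every document that customer touches, with no relabeling required.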
That is precisely the architecture Gidi is pointing toward when he writes that the next evolution of data security may be "one in which protecting data is not only about identifying sensitive content or enforcing permissions, but about governing use through richer contextual understanding."
We agree. And we think that evolution is already underway.
The Practical Takeaway for Security and AI Teams
If your organization is adopting Claude Enterprise, Copilot Studio, or building internal AI workflows on top of enterprise data stores, the question isn't whether you need contextual enforcement — it's whether you have it yet.
The native connectors provided by AI platforms are access controls, not data controls. They reflect permissions, not policies. And as Gidi argues compellingly, in connected AI environments, that gap between access and appropriate use is where risk increasingly lives.
Bonfy closes that gap — not by adding complexity to your security stack, but by inserting a thin, transparent enforcement layer that applies everything you already know about your data and your relationships to every interaction between AI systems and enterprise content.
The "who" behind your data has always mattered. We just finally have the architecture to act on it.