Claude Enterprise Makes “The Who” a Control Problem

Written by Gidi Cohen | May 14, 2026 2:42:46 PM

Originally published on Substack on May 13, 2026.

Why connected AI is exposing the limits of traditional data classification

In the first article in this series, I argued that Claude Enterprise is pressure testing where data security controls operate. The underlying point was that connected AI shifts part of the execution surface into environments that many traditional controls were not designed to govern directly. That shift raises an architectural question about control placement.

It also raises a second question, which may be just as important.

What must those controls actually understand in order to make the right decisions?

That question becomes harder in connected AI environments because risk increasingly depends on more than the sensitivity of the data itself. It often depends on whose data is involved, the relationships surrounding that data, and the purpose for which an AI system is using it.

This is what I have previously referred to as “The Who.”

Historically, “The Who” has often been treated as contextual enrichment — useful for improving detection or refining policy outcomes, but secondary to the core machinery of classification and access control.

Connected AI begins to challenge that hierarchy.

As systems such as Claude Enterprise increasingly act as reasoning layers over enterprise data, entity context can become material to whether a given use of information should be permitted in the first place. In that sense, “The Who” starts to look less like auxiliary context and more like part of the control decision itself.

Classification Was Never the Whole Policy

Much of traditional data security has combined two powerful abstractions: classify the sensitivity of information, and control who can access it.

That model remains foundational. But connected AI begins to expose where it may be incomplete.

The reason is subtle but critical. In many enterprise policies, risk has never depended solely on what category of information exists. It has often depended on whose information it is, how it relates to a business relationship, and what obligations attach to that relationship.

Those distinctions are frequently implicit in how organizations think about acceptable use, even when they are only imperfectly represented in technical controls.

Connected AI makes them harder to treat as implicit.

Two documents may share the same sensitivity label and the same access permissions, yet present very different risk when introduced into an AI reasoning workflow. The difference may not lie in the content type itself, but in the entities and relationships represented within the content.

That is where traditional classification starts to show its limits.

Not because classification stops mattering, but because content sensitivity alone may not fully express the policy question at stake.

When Context Becomes Part of Control

This becomes more apparent when Claude is connected to enterprise repositories.

A retrieval event may appear, on the surface, to be an ordinary access decision. But once retrieved information becomes input to reasoning, synthesis, shared memory, and generated output, the policy question often changes.

The issue is no longer simply whether a user was authorized to access a document.

It may be whether an AI system should be permitted to use that document in this specific interaction.

That is a different kind of judgment.

And it is often shaped by context that traditional permissions do not capture at all.

A permission model may establish that a user can access customer information.

It does not necessarily determine whether Claude should combine that information with other customer context in a reasoning workflow, or use it in a generated response.

Those are use decisions, not merely access decisions.

And they may depend on understanding relationships, entities, and business context as much as content classification.
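To make the access-versus-use distinction concrete, here is a minimal, hypothetical sketch in Python. The names (`can_access`, `can_use_in_ai_workflow`, the `no-cross-customer-use` obligation) are illustrative assumptions of mine, not features of Claude Enterprise or of any real policy engine; the point is only that two documents can pass an identical classification-plus-permission check yet diverge once entity context enters the use decision.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """A document carrying a traditional sensitivity label plus entity context."""
    label: str                                      # classification, e.g. "confidential"
    owner_entity: str                               # whose data this is ("The Who")
    obligations: set = field(default_factory=set)   # hypothetical relational obligations

def can_access(user_clearance: str, doc: Document) -> bool:
    """Traditional control: classification and permission only."""
    order = ["public", "internal", "confidential"]
    return order.index(user_clearance) >= order.index(doc.label)

def can_use_in_ai_workflow(user_clearance: str, docs: list[Document]) -> bool:
    """Use decision: every access check must pass, AND entity context
    must permit combining these documents in one reasoning workflow."""
    if not all(can_access(user_clearance, d) for d in docs):
        return False
    entities = {d.owner_entity for d in docs}
    # Hypothetical relational rule: a document with a "no-cross-customer-use"
    # obligation may not be combined with another customer's data
    # in the same AI interaction.
    for d in docs:
        if "no-cross-customer-use" in d.obligations and len(entities) > 1:
            return False
    return True

# Two documents with the same label and the same access permissions...
a = Document("confidential", "customer-A", {"no-cross-customer-use"})
b = Document("confidential", "customer-B")

print(can_access("confidential", a), can_access("confidential", b))  # True True
print(can_use_in_ai_workflow("confidential", [a, b]))                # False
```

Both documents clear the access check, but the combined use fails on the relational obligation. That failure is invisible to labels and permissions alone, which is the sense in which "The Who" becomes part of the control decision rather than enrichment around it.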

This is why I believe connected AI is beginning to elevate “The Who” from a contextual enhancement into something closer to policy logic.

Not because entity context replaces classification.

But because increasingly it may help determine whether the use itself is acceptable.

That is a significant shift.

Why Claude Makes This Easier to See

This challenge is not unique to Claude, but Claude Enterprise makes it unusually visible because it operationalizes AI as a practical reasoning layer across enterprise systems.

That matters.

Because once AI mediates access and use, many enterprise policies reveal themselves to be more relational than static.

  • Customer-specific obligations.
  • Counterparty restrictions.
  • Context-dependent sharing boundaries.

These were always part of policy.

Connected AI simply makes them harder to enforce through labels and permissions alone.

And that may be one of the more important implications of this moment.

The pressure test is not only about whether existing controls sit at the right point in the architecture.

It is about whether they understand enough about the data and the relationships behind it to govern AI-mediated use appropriately.

That is a deeper challenge.

And perhaps a more interesting one.

When the Missing Dimension Becomes the Control Plane

In an earlier essay I described “The Who” as a missing dimension in data security.

Connected AI may be turning that missing dimension into part of the control plane.

That is a stronger claim, but I believe an increasingly defensible one.

Because if policy decisions about AI use depend materially on entity relationships and contextual obligations, then those signals are no longer merely enriching policy.

They are helping constitute it.

And that begins to suggest a different architecture.

One in which protecting data is not only about identifying sensitive content or enforcing permissions, but about governing use through richer contextual understanding.

That may prove central not only to securing Claude Enterprise, but to securing connected AI more broadly.

And it may be one of the places where the next evolution of data security begins.