Bonfy Blog

Trusting Claude Is the Easy Part. Governing What It Does With Your Data Is Not.

Written by Vishnu Varma | 5/7/26 3:45 PM

There's a line in Gidi's latest Substack piece that should stop every enterprise security leader mid-scroll:

"Authorized AI can still produce unauthorized outcomes."

Not because the platform is rogue. Not because someone bypassed a control. But because trust in a platform does not, by itself, govern how that platform uses data once reasoning and automation operate at scale.

That's the Shady AI problem, and it's a fundamentally different challenge from the Shadow AI problem most organizations have been trying to solve. Shadow AI is a visibility problem: unsanctioned tools operating beyond governance. Shady AI is a governance problem that operates inside trusted workflows. An organization can eliminate every unauthorized AI tool in its environment and still face the harder question: is our approved AI, deeply integrated, broadly connected, reasoning across sensitive enterprise data, actually operating within policy intent?

Claude Enterprise (or, for that matter, any agentic AI platform with access to enterprise data) makes this question impossible to ignore. Precisely because it is trusted, embedded in ordinary workflows, and designed to participate deeply in how work gets done, it creates a governance surface that traditional security controls weren't built for. As Gidi writes, Claude doesn't create the Shady AI problem. It exposes it.

Here’s how Bonfy sees it.

The Governance Gap That Approval Leaves Behind

There's a reason enterprises invest in platform approval processes. Sanctioned tools, governed access, documented policies: these are real and necessary steps. But there is a gap that approval doesn't close.

When an AI system operates across enterprise data, drawing from repositories, combining context, generating outputs, and triggering downstream actions, it does so through dynamic reasoning processes that don't look like individual policy events. Each step may be authorized. The aggregate outcome may still land outside what the policy intended. For example, a customer list and a price sheet may each be approved for internal retrieval, yet combined in a single generated briefing they can reveal deal terms no policy ever intended to put in one document.

This is what makes Shady AI so hard to govern through traditional controls. The issue isn't a bad actor or misconfigured permission. It's that sensitive information flows in ways that are technically permitted but insufficiently governed for AI-mediated use.

Approval tells you the platform can be trusted. It doesn't tell you whether every output that platform produces is operating within intended policy boundaries.

What Data-in-Use Inspection Actually Means

Most data security tools are built around two moments: before data is accessed, and after output is produced.

Bonfy covers both. But the problem Shady AI exposes lives in the middle, in the reasoning process itself, where an agent is combining information, preparing content, and deciding what to include and what to leave out as it works toward an outcome aligned with the original intent. That intent may never have anticipated which data the reasoning loop would touch along the way. That's where policy drift actually happens. And it's the moment that most security architectures don't reach.

This is why Bonfy now provides its own MCP server, a capability that lets AI agents call Bonfy during their reasoning process to verify content in real time, before decisions are finalized or outputs are sent.

The flow is simple in concept but significant in practice (a minimal sketch follows the list):

  • An agent is preparing to send a summary, generate a response, or pass information to another system
  • It calls Bonfy's MCP server ("Is this content safe to proceed with?"), or uses the data-access tools Bonfy's MCP server exposes for a given repository (e.g., M365)
  • Bonfy inspects against policy and responds
  • The agent uses that result in its next reasoning step
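
To make that loop concrete, here is a minimal agent-side sketch of the content-check path, written against the open-source MCP Python SDK. The tool name (check_content), its argument schema, the verdict format, and the bonfy-mcp-server launch command are illustrative assumptions, not Bonfy's documented interface:

    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def vet_draft(draft: str) -> bool:
        # Placeholder launch command; a real deployment would point at
        # Bonfy's actual MCP server and its credentials.
        params = StdioServerParameters(command="bonfy-mcp-server", args=[])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # Hypothetical tool name and argument schema.
                result = await session.call_tool(
                    "check_content", arguments={"content": draft}
                )
                # Assume a plain-text verdict the agent can fold into
                # its next reasoning step.
                verdict = result.content[0].text
                return verdict == "allow"

    async def main() -> None:
        draft = "Summary of the Q3 pipeline for the partner call ..."
        if await vet_draft(draft):
            print("Proceed: send the summary.")
        else:
            print("Revise: the policy check flagged this draft.")

    asyncio.run(main())

The sketch uses the SDK's stdio transport for brevity; a hosted deployment would more likely connect over HTTP, but the call pattern, and where it sits in the reasoning loop, stays the same.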

This isn't a checkpoint bolted onto the outside of a workflow. It's governance that operates where the data is actually in use.

Three Layers, One Platform

Bonfy's approach addresses AI data risk at every phase of the information flow:

  • Input control governs what data is available to an agent when it begins its work. Contextual classification determines what from SharePoint, Google Drive, or other repositories is even in scope — and at what sensitivity level.

  • Output control inspects what an agent produces before it leaves the system — through email, files, messages, or any downstream channel.

  • Data-in-use inspection is the new capability, and the one that speaks most directly to the Shady AI problem. It operates not at the entry or exit points, but inside the agent's reasoning, when content is being formed and decisions are being made.

These aren't separate tools requiring separate management. They run on the same platform, the same policy engine, the same intelligence. The same controls that govern your email and SharePoint security now power the compliance checks happening inside your AI workflows.
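
As a toy illustration of that single-engine shape (not Bonfy's actual API, and with a keyword match standing in for real contextual classification), one policy function can back all three checkpoints:

    from dataclasses import dataclass

    @dataclass
    class Verdict:
        allowed: bool
        reason: str = ""

    def policy_check(content: str, stage: str) -> Verdict:
        # Stand-in for the shared policy engine. A real engine would
        # apply contextual classification, not a keyword match.
        if "internal-only" in content:
            return Verdict(False, f"blocked at {stage}: internal-only marker")
        return Verdict(True)

    # Input control: decide what is in scope before the agent starts work.
    source_doc = "Q3 roadmap (internal-only)"
    if not policy_check(source_doc, "input").allowed:
        source_doc = "[withheld from agent context]"

    # Data-in-use inspection: the agent vets a draft mid-reasoning.
    draft = "Customer-facing summary of the Q3 roadmap ..."
    in_use = policy_check(draft, "in-use")

    # Output control: final inspection before content leaves the system.
    if in_use.allowed and policy_check(draft, "output").allowed:
        print("send:", draft)
    else:
        print("held for review:", in_use.reason or "output check failed")

The structure is the point: one engine, three call sites, so a policy change propagates to the input, in-use, and output checks at once.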

Why This Matters for Approved AI

The Shady AI framing is useful because it reframes where the governance challenge actually lives.

For years, the implicit assumption in enterprise security was that if a tool was approved and a user was authorized, the outcomes would be governed by default. Connected AI puts real pressure on that assumption, not because the tools are untrustworthy, but because authorization and governance are genuinely different things.

An employee working manually, step by step, produces decisions that can be reviewed and audited at a human pace. An AI system reasoning across dozens of data sources, producing outputs in seconds, and operating at scale across an organization is a different governance surface. And it requires data security that works at the speed and depth of the AI itself.

Bonfy was designed for that surface. The data-in-use capability isn't an add-on to a legacy architecture; it's the architecture operating in a new mode, one the AI era specifically requires.

TL;DR

Gidi's Substack argues that Claude Enterprise doesn't create the Shady AI problem; it makes it visible. That's right. And visibility is the first step toward governance.

The second step is having security that can reach the places where policy drift actually happens: not just at the edges of workflows, but inside the reasoning processes where data is being combined, assessed, and acted upon.

That's what Bonfy's platform now makes possible: systematic data security for AI agents, at input, at output, and in use.

Because governing what you've already approved turns out to be a different problem than approving it in the first place.