Traditional security stacks still look at slices of the problem (an endpoint, a SaaS app, a proxy, an LLM firewall), but agents don't respect those boundaries. A single agent task can traverse file repositories, SaaS tools, LLMs, MCP servers, and outbound channels in minutes, while most controls only see one hop of that flow.
Bonfy’s CEO and Co‑Founder, Gidi Cohen, recently outlined why data security for AI agents is now a multi‑dimensional problem in his Substack article, “Data Security for AI Agents: The Missing Dimension.” Building directly on that perspective, this post explains how Bonfy translates that problem statement into a concrete, enterprise-ready solution, closing the gaps across west–east data flows, north–south agent control planes, and the often overlooked realm of data in use inside agent reasoning loops.
Bonfy’s adaptive content security platform is built as a multi-channel engine that governs data at rest, in motion, and in use across email, files, SaaS apps, collaboration tools, AI systems, and AI agents. Instead of treating “agent security” as a new silo, Bonfy extends the same contextual, entity-aware controls you already use for human workflows into increasingly autonomous AI workflows.
West–East: Governing the Expanding Data Flow Surface
On the west–east axis, agents amplify the classic risks: overshared files in SharePoint or Google Drive, sensitive records scattered across CRMs and ITSM tools, and confidential content moving through email and collaboration channels. As Gidi notes, what used to be relatively simple, human-driven flows are now multi-hop journeys involving LLMs, RAG pipelines, internal automations, and agents that read, write, and send data autonomously.
Bonfy addresses this surface by:
- Connecting directly to major SaaS platforms, email systems, file repositories, web traffic, and AI tools to analyze content where it lives and moves, not just on a single endpoint or proxy.
- Automatically discovering and classifying sensitive data in places like SharePoint, Google Drive, S3, and on‑prem file stores, so security teams see which content is even eligible to be pulled into prompts, indexes, vector stores, or agent workflows.
- Applying granular, contextual labels (including publishing to Microsoft Purview) so downstream AI systems and agents only ground on content that meets your governance rules.
This gives organizations the west–east visibility Gidi calls for: understanding what data agents can touch, where it resides, and how it moves across channels long before it shows up inside an agent reasoning loop.
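As a rough illustration of the discover-classify-label flow described above, consider the sketch below. The rules, label names, and functions are hypothetical stand-ins for illustration only, not Bonfy's actual classifiers or API; a real deployment would use Bonfy's contextual, entity-aware analysis rather than simple patterns.

```python
import re

# Hypothetical sensitivity rules; purely illustrative.
RULES = {
    "PII": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # e.g. US SSN format
    "Financial": re.compile(r"\b(?:invoice|IBAN|routing)\b", re.I),
}

def classify(text: str) -> list[str]:
    """Return the sensitivity labels whose patterns match the content."""
    return [label for label, pattern in RULES.items() if pattern.search(text)]

def eligible_for_grounding(text: str, allowed: set[str]) -> bool:
    """Only content whose labels all fall inside the allowed set may be
    pulled into prompts, indexes, vector stores, or agent workflows."""
    return all(label in allowed for label in classify(text))

doc = "Invoice 2024-17: employee SSN 123-45-6789"
labels = classify(doc)                                    # ['PII', 'Financial']
ok = eligible_for_grounding(doc, allowed={"Financial"})   # False: PII present
```

The key idea is that eligibility for grounding is decided before any agent sees the content, so ineligible documents never enter a prompt or index in the first place.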
North–South: Making the Agent Control Plane Data-Aware
The harder dimension in Gidi’s article is the north–south control plane — how agents interpret instructions, assemble context, invoke tools, call MCP servers, and orchestrate multi-step workflows with delegated, privileged authority. That’s where traditional controls struggle, because the most sensitive handling happens in transient, in-memory contexts that never exist as a single file or network object.
Bonfy’s architecture is explicitly designed to instrument this plane without forcing you into a new agent framework or security model:
- Grounding & data access control. Bonfy’s entity-aware labeling and access policies control what data is even available to agents in the first place, aligning grounding decisions with business context, trust boundaries, and compliance rules.
- Unified control plane. A single policy and automation engine orchestrates classification, labeling, remediation, and enforcement across human and agentic actors, so the same rule set governs a lawyer sending an email and an agent summarizing a knowledge base.
- Entity risk management for agents. Bonfy extends its risk modeling to treat agents as first-class entities alongside humans, contractors, and service accounts, enabling risk scoring and behavioral insights for specific agents, not just generic “AI usage.”
In other words, Bonfy makes the north–south plane data‑aware rather than configuration-only, so you can reason about what agents actually see, use, and emit — not just how they were configured on paper.
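To make "agents as first-class entities" concrete, here is a minimal sketch of one shared risk model covering humans and agents alike. The entity kinds, event names, and additive weights are assumptions for illustration, not Bonfy's actual risk model.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """Humans, contractors, service accounts, and agents share one risk model."""
    name: str
    kind: str                      # "human" | "contractor" | "service" | "agent"
    events: list[str] = field(default_factory=list)

# Illustrative per-event risk weights (not real values).
EVENT_WEIGHTS = {
    "read_regulated_repo": 2,
    "call_external_tool": 3,
    "send_outbound_email": 1,
}

def risk_score(entity: Entity) -> int:
    """Score an entity by the events it performed, regardless of its kind."""
    return sum(EVENT_WEIGHTS.get(e, 0) for e in entity.events)

agent = Entity("summarizer-agent", "agent",
               ["read_regulated_repo", "call_external_tool", "call_external_tool"])
score = risk_score(agent)  # 8
```

Because the scoring function never branches on `kind`, a specific agent accumulates risk the same way a specific employee would, which is what enables per-agent scoring rather than generic "AI usage" metrics.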
Data in Use: Securing the Reasoning Loop with Bonfy’s MCP Server
Gidi highlights data in use as the missing dimension: the transient, token-level context that blends user prompts, retrieved documents, tool responses, and intermediate reasoning steps, often spanning multiple trust domains and surfaces. That is precisely the gap Bonfy’s MCP server is designed to close.
Bonfy delivers three complementary control layers for agent workflows:
- Input control (upstream). Bonfy inspects and governs what flows into AI systems and agents: prompts, retrieved documents, and upstream content pulled from email, SaaS, or file repositories, ensuring regulated or trust-boundary‑sensitive data is not silently fed into agent contexts.
- Output control (downstream). Before agent-generated content is sent via email, collaboration tools, or file shares, Bonfy analyzes the output and can block, modify, quarantine, relabel, or redirect based on policy — preventing hallucinated leakage or inadvertent disclosure as agents act on behalf of users or systems.
- Data‑in‑use inspection via Bonfy’s MCP server (reasoning loop). This is the breakthrough: Bonfy exposes its content security engine as an MCP server that agents can call during execution to ask, in effect, “Is this safe to proceed with?”
During a multi-step workflow, the agent can be instructed to:
- Call Bonfy’s MCP server with intermediate summaries, payloads, or tool inputs to get risk ratings, labels, and policy evaluations before sending data to an external service or user.
- Use Bonfy’s response inside its own reasoning — for example, to redact fields, change recipients, choose a different tool path, or escalate to a human if risk exceeds a threshold.
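From the agent's side, the reasoning-loop check described above could look roughly like the sketch below. `evaluate_payload` is a local stand-in for a tool exposed by Bonfy's MCP server, and the verdict fields and threshold are assumptions, not the actual interface; a real agent would invoke the tool over the MCP protocol.

```python
def evaluate_payload(payload: str) -> dict:
    """Stand-in for an MCP tool call to Bonfy that returns a policy verdict.
    The risk scale and label names here are invented for illustration."""
    risky = "ssn" in payload.lower()
    return {"risk": 9 if risky else 1, "labels": ["PII"] if risky else []}

def next_step(payload: str, threshold: int = 5) -> str:
    """Use Bonfy's verdict inside the agent's own reasoning: proceed when
    risk is acceptable, otherwise escalate to a human."""
    verdict = evaluate_payload(payload)
    if verdict["risk"] > threshold:
        return "escalate_to_human"
    return "send_to_external_service"

action = next_step("Summary: customer SSN included")  # 'escalate_to_human'
```

The point is the control-flow shape: the policy evaluation happens mid-workflow, before the payload leaves the agent's context, and its result steers the next tool choice.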
All three layers run on the same platform, with the same policies, knowledge graph, and explainability, so you don’t end up with one product for email, another for SaaS, and a third bolted onto your agent stack. This is exactly the multi-dimensional, real-time protection model Gidi argues is required when data in use extends beyond the model into MCP servers, APIs, and downstream systems.
Multi-Channel, Entity-Aware, Workflow-Aware Visibility
Gidi notes that effective protection for agents requires visibility that is multi-channel, multi-state, entity-aware, and workflow-aware — all at once. Bonfy is built so those characteristics are not add-ons, but core design principles:
- Multi-channel. Bonfy monitors email, SaaS apps, collaboration tools, web traffic (including Shadow AI usage), file shares, custom GenAI apps, and AI agents via connectors, a browser extension, APIs, and now the MCP server interface.
- Multi-state. One platform governs content at rest (data discovery and classification in repositories), in motion (email, web, SaaS transactions), and in use (agent prompts, MCP calls, and outputs), closing the gap between static DSPM and traditional DLP.
- Entity-aware. A self-supervised business context knowledge graph learns your organizational structure, customers, consumers, and relationships, allowing Bonfy to understand who the data belongs to — and which human or agent is putting it at risk.
- Workflow-aware. Because Bonfy correlates activities across channels and actors, it can reveal patterns like “this agent is repeatedly combining data from regulated repositories with unsanctioned external tools,” rather than treating each event as an isolated incident.
This directly addresses the gap Gidi highlights between local control coverage and end-to-end risk understanding: security teams stop seeing only fragments (an email alert here, a web event there) and start seeing the full multi-dimensional exposure pattern behind agent-driven workflows.
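The workflow-aware correlation described above can be sketched as follows. The event shape, actor names, and thresholds are illustrative assumptions, not Bonfy's actual data model; the point is flagging a repeated cross-channel pattern rather than scoring each event in isolation.

```python
from collections import defaultdict

# Cross-channel events keyed by actor; shape is illustrative only.
events = [
    {"actor": "report-agent", "action": "read", "target": "regulated_repo"},
    {"actor": "report-agent", "action": "call", "target": "unsanctioned_tool"},
    {"actor": "report-agent", "action": "read", "target": "regulated_repo"},
    {"actor": "report-agent", "action": "call", "target": "unsanctioned_tool"},
    {"actor": "alice", "action": "read", "target": "regulated_repo"},
]

def risky_pattern_actors(events: list[dict], min_repeats: int = 2) -> list[str]:
    """Flag actors that repeatedly pair regulated-repository reads with
    unsanctioned external tool calls across channels."""
    counts = defaultdict(lambda: {"regulated_repo": 0, "unsanctioned_tool": 0})
    for e in events:
        if e["target"] in counts[e["actor"]]:
            counts[e["actor"]][e["target"]] += 1
    return [actor for actor, c in counts.items()
            if min(c.values()) >= min_repeats]

flagged = risky_pattern_actors(events)  # ['report-agent']
```

A single regulated read (like alice's) is not flagged; only the repeated combination across both sides of the pattern is, which is what distinguishes workflow-level exposure from isolated incidents.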