In his recent Substack article, Gidi Cohen argues that AI turns customer data from a “what” problem into a “who” and “why” problem: who the data is about, who is using it, and whether that use fits the relationship. Bonfy exists to turn that idea into enforceable guardrails across the systems where work actually happens.

From sensitivity labels to trust boundaries

Traditional data security classifies content (PII, PHI, PCI) and applies static rules. That was barely sufficient when data lived in a few systems and humans stayed in the loop. AI changes that.

Customer information is now constantly retrieved, combined, and re‑expressed in copilots, prompts, RAG pipelines, and agents. The same facts can be:

  • Fine in an internal note
  • Partially acceptable in a customer update
  • Completely inappropriate inside a training set

The difference is not what the data is, but who it concerns and why it’s being used. Protecting customer data now means enforcing relationship boundaries, not just detecting sensitive strings.
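The contrast between a static classifier and a relationship-aware check can be sketched in a few lines. This is a hypothetical illustration, not Bonfy's actual engine: the `Use` record, the `ALLOWED` set, and the rule functions are invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Use:
    subject: str   # who the data is about (e.g. a customer)
    actor: str     # who or what is using it
    channel: str   # where the data will appear

# A static rule sees only the string, so it gives one answer everywhere,
# regardless of context.
def static_rule(text: str) -> bool:
    return "ACME-" not in text  # block anything resembling a customer ID

# A relationship-aware rule asks whether this subject/actor/channel
# combination fits the trust boundary. (Toy allow-list for illustration.)
ALLOWED = {
    ("acme", "account_team", "internal_note"),
    ("acme", "account_team", "customer_update"),
}

def relationship_rule(use: Use) -> bool:
    return (use.subject, use.actor, use.channel) in ALLOWED

note = Use("acme", "account_team", "internal_note")
training = Use("acme", "ml_pipeline", "training_set")
print(relationship_rule(note))      # the same facts are fine here...
print(relationship_rule(training))  # ...and inappropriate here
```

The point of the sketch: both uses involve identical content, and only the subject/actor/channel triple changes the decision.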

Where AI quietly breaks those boundaries

Gidi highlights two common failure modes: internal misuse and external mis‑exposure. We see both across modern AI adoption:

  • Productivity AI, from Copilot to copilots embedded in SaaS apps, surfacing customer‑specific documents or histories in contexts they were never meant for.
  • CRM, ITSM, and support teams turning real tickets into AI‑generated knowledge articles that still contain PII or client‑identifiable context.
  • Shadow AI in browsers becoming an unmonitored way to move sensitive content.
  • Growing use of AI agents orchestrating multi‑step workflows across systems, APIs, and MCP servers, with no content‑aware guardrails.

In each case, the AI finds something “relevant,” but the organization has no systemic way to ask: “Is this appropriate for this customer, this user, and this situation?”

How Bonfy makes the “who” enforceable

Bonfy is an AI data security platform that protects unstructured data across email, files, SaaS apps, collaboration tools, copilots, AI agents, and internal AI workflows. The core design choice: we treat entities and relationships as first‑class.

We do three things that map directly to Gidi’s thesis:

  1. Know who the data is about
    Bonfy builds a business‑context knowledge graph from your real systems, so it can tell generic content from customer‑ or consumer‑specific content, internal commentary from external commitments, and aggregate analytics from identifiable histories.
  2. Know who – or what – is acting
    We connect exposure to specific humans, service accounts, partners, and AI agents. That lets you see which actors actually generate risk and how their behavior evolves, even when the agent logic runs in external frameworks you don’t control directly.
  3. Decide based on the relationship, in real time
    Our policy engine encodes customer‑trust boundaries, not just “high/medium/low” sensitivity. Policies can say, for example, “This client’s data may be used for internal analytics but never appear in raw form in any external‑facing AI output,” or “EU PHI cannot flow into third‑party MCP tools.” When Bonfy evaluates an email, chat, Copilot output, or agent action, it looks at the content, the entities, the relationships, and the channel, then decides whether that specific use is appropriate.

Guardrails for humans, copilots, and agents

DSPM can tell you where sensitive data sits; legacy DLP can pattern‑match what’s leaving. Neither reliably governs how customer‑specific data is used in real time by AI systems. Bonfy fills that gap by:

  • Using one entity‑aware engine to analyze content across email, SaaS, collaboration, AI systems, and agents, so the same notion of “who” applies everywhere.
  • Protecting AI both upstream (which content can be used in grounding, prompts, embeddings, and indexes) and downstream (which outputs can reach customers or partners).
  • Providing a phased path: start with visibility, move to automation, and then to confident prevention once teams trust the signal.

For B2B and AI‑intensive organizations, this is no longer optional. Until you can reliably answer “Is this use of this customer’s data appropriate right now?”, AI will keep turning trust into your biggest blind spot. Bonfy’s job is to make that question both answerable and enforceable, by design.

 

Interested in an assessment of your data security status? Click here to complete one in 5 minutes.