Bonfy Blog

Scoring the Human and the Machine: Why Entity Risk Management is the New Frontier of Data Security

Written by Gidi Cohen | 1/27/26 3:15 PM

The Identity Crisis in Data Security

Historically, Data Loss Prevention (DLP) has focused on the "what" (for example, tools that work by detecting a credit card number or a sensitive keyword), while Insider Risk Management (IRM) has focused on the "who" (for example, by detecting a spike in file downloads). But in today’s world of AI Agents and Copilots, these two silos have failed. Traditional tools were never designed for today's intricate content systems.

We’ve learned that content doesn't move in a vacuum. Instead, it is created, transformed, and shared by a complex web of humans and machines. To achieve human-grade accuracy, security must pivot to Entity Risk Management (ERM), a system that scores the risk of every actor (human or agent) based on the context of the content they touch.

Beyond IRM: Defining Entity Risk Management for the AI Era 

IRM vs. ERM

IRM has historically been useful for detecting and mitigating certain risks related to data access and movement, but that approach is no longer adequate for content-related risks, and it leaves visibility gaps in the process. The two disciplines differ in essential ways.

Traditional IRM is activity-based (for instance, “User X logged in at 2 AM”), but it is often content-blind. Conversely, ERM is content-aware, linking data exposure to the specific identity and intent of the entity involved. The rise of the “machine entity” makes this identification critical.

New risks of agentic AI and machine entities continue to emerge and are growing more complex. A recent report from McKinsey notes that these new vulnerabilities linked to AI agents have the potential to disrupt operations, compromise sensitive data, or erode customer trust. 

AI agents “provide new external entry points for would-be attackers,” and because they “are able to make decisions without human oversight, they also introduce novel internal risks.” Acting as “digital insiders” with varying levels of authority and privilege, these entities can cause harm in a number of ways, either unintentionally or deliberately, the report said.

As a result, AI agents must be scored like employees because of the way they interact with content. Agents autonomously access, index, and generate content, creating new "agentic" leakage vectors that traditional tools cannot monitor. Using a dynamic, quantifiable risk score is a critical element of a framework that identifies and assesses the risk of each entity that either interacts with or creates content. This type of risk score evolves based on a combination of factors, including historical patterns, behavioral insights, and the sensitivity of the content with which an entity interacts.
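To make the idea concrete, here is a minimal sketch of such a dynamic risk score. The factor names, weights, and entity fields are illustrative assumptions, not Bonfy's actual scoring model.

```python
from dataclasses import dataclass

# Assumed factor weights for illustration only; a real system would
# learn these from historical patterns rather than hard-code them.
WEIGHTS = {"historical": 0.3, "behavioral": 0.4, "content_sensitivity": 0.3}

@dataclass
class Entity:
    name: str
    kind: str      # "human" or "agent" -- both are scored the same way
    factors: dict  # each factor normalized to the range 0.0-1.0

def risk_score(entity: Entity) -> float:
    """Combine normalized risk factors into a single 0-100 score."""
    raw = sum(WEIGHTS[f] * entity.factors.get(f, 0.0) for f in WEIGHTS)
    return round(raw * 100, 1)

# A hypothetical AI agent that touches highly sensitive content:
copilot = Entity(
    name="sales-copilot",
    kind="agent",
    factors={"historical": 0.2, "behavioral": 0.7, "content_sensitivity": 0.9},
)
print(risk_score(copilot))  # 61.0
```

The key point the sketch captures is that the score is recomputed from current factors every time the entity acts, so it evolves as behavior and content sensitivity change.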

The Engine Under the Hood: Adaptive Knowledge Graphs 

ERM isn’t based on static rules. Instead, the approach relies on a self-supervised knowledge graph that maps relationships by learning organizational structure, customer relationships, and “trust boundaries” directly from business apps (such as CRM, IAM, HRIS).

Understanding context and the “why” is an important element of ERM. For example, if a high-risk contractor interacts with sensitive IP, the score spikes. However, if a trusted employee shares the same data with a verified partner under an active NDA, the context keeps the interaction safe.
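The NDA example above can be sketched as a lookup over a toy relationship graph. The entity names, edge types, and the partner-plus-NDA rule are illustrative assumptions, not the product's actual graph schema.

```python
# Toy "trust boundary" graph: (entity, relationship) -> set of related entities.
# In practice these edges would be learned from CRM, IAM, and HRIS data.
graph = {
    ("acme_corp", "partner_of"): {"our_org"},
    ("acme_corp", "active_nda"): {"our_org"},
}

def within_trust_boundary(sender_org: str, recipient_org: str) -> bool:
    """Sharing stays in-boundary only if the recipient is a verified
    partner of the sender AND an active NDA is on file."""
    partners = graph.get((recipient_org, "partner_of"), set())
    ndas = graph.get((recipient_org, "active_nda"), set())
    return sender_org in partners and sender_org in ndas

print(within_trust_boundary("our_org", "acme_corp"))    # True: partner under NDA
print(within_trust_boundary("our_org", "unknown_llc"))  # False: outside the boundary
```

The same share action yields opposite verdicts depending on who the recipient is, which is exactly the context-awareness the paragraph describes.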

ERM also spots behavioral anomalies, or “intent shifts,” such as when a user’s interaction with a GenAI tool moves from internal research to potential public dissemination. Detecting that shift before a silent spill of data occurs is critical.
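One simple way to model an intent shift is to track the inferred audience of each event in a session and flag any escalation beyond where the session started. The audience labels and ordering below are illustrative assumptions.

```python
# Assumed audience tiers, ordered from least to most exposed.
AUDIENCE_ORDER = ["internal", "partner", "public"]

def intent_shift(session_events: list[str]) -> bool:
    """Return True if any later event escalates the audience
    beyond the session's starting tier."""
    start = AUDIENCE_ORDER.index(session_events[0])
    return any(AUDIENCE_ORDER.index(e) > start for e in session_events[1:])

# A GenAI session that stays internal vs. one that drifts toward publication:
print(intent_shift(["internal", "internal"]))            # False
print(intent_shift(["internal", "internal", "public"]))  # True
```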

Risk-Adjusted Automation: Turning Visibility into Enforcement 

With ERM in place, there is improved visibility and more precise identification, leading to less friction and frustration among teams. Because ERM is entity-aware, security teams can move beyond “block all” to risk-adjusted automation. This means that while high-risk actors are met with automated mitigation, such as blocking or quarantining, low-risk, valid business workflows can continue without friction.
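In code, risk-adjusted automation reduces to mapping the entity's score onto a graduated response. The thresholds and action names here are illustrative assumptions, not a documented policy.

```python
def enforce(score: float) -> str:
    """Map an entity risk score (0-100) to an automated response."""
    if score >= 80:
        return "block"       # high risk: stop the action outright
    if score >= 50:
        return "quarantine"  # elevated risk: hold the content for review
    return "allow"           # low risk: valid workflow continues without friction

print(enforce(92))  # block
print(enforce(61))  # quarantine
print(enforce(12))  # allow
```

The graduated ladder is what distinguishes this from “block all”: only the highest-risk entities are stopped, while most business activity proceeds untouched.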

ERM can also help to lower the total cost of ownership. Because ERM filters noise based on entity risk, security teams can drastically reduce false positives, which currently consume an average of 14.1 hours per week for SOC analysts, according to industry reports and estimates.

ERM also provides greater executive visibility when it’s fed into a “cockpit view” solution that allows CISOs to see exactly who (or what agent) represents the highest risk to the organization and why, providing an audit-ready governance trail.

TL;DR: Secure Humans and AI Agents with Human-Grade Accuracy

As the line between human and AI-generated content blurs, the only way to protect data is to manage the risk of the entities that handle it. ERM provides the context needed to uncage the AI safely by scoring every interaction in real-time.

Discover the Bonfy Advantage 

Bonfy Adaptive Content Security™ (Bonfy ACS™) delivers the industry’s first ERM for the data plane, securing humans and AI agents across the enterprise.

Use the Data Security Risk Assessment to assess your current data security posture and identify gaps across users, systems, and AI-driven workflows.