Security Models Were Built Around Human Behavior

Traditional security models were built for human users and rested on a set of assumptions. They relied on defined roles for access, inferring predictable intent from those roles, and they operated within sessions and systems that had distinct boundaries.

These models also assumed that users and actions are discrete and observable, and that workflows are linear and relatively predictable. In legacy systems with human actors, risk could therefore be evaluated at the moment data and systems were accessed.

But AI agents introduce a fundamentally different type of actor. Unlike human users, AI agents operate continuously across systems and at machine speed. They can execute complex tasks, sometimes without any step-by-step human direction or interaction. These dramatic differences in how AI agents access and interact with data challenge the core assumptions underlying traditional security models.

AI Agents Change How Data Is Accessed and Used

Let’s take a closer look at what AI agents are and how they operate.

AI agents are lightweight software systems powered by large language models (LLMs), such as those from OpenAI or Microsoft, that connect data sources, tools, and services to execute tasks autonomously.
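
As a concrete illustration, here is a minimal, vendor-neutral sketch of an agent loop in Python. The llm_decide function is a stub standing in for a real LLM call, and the two tools are hypothetical placeholders, not any specific framework's API.

```python
# Minimal sketch of an agent loop: an LLM repeatedly decides which tool to
# call next until the task is done. llm_decide is a stand-in for a real
# model call (e.g., an OpenAI request); the tools are illustrative stubs.

def search_crm(query: str) -> str:
    return f"CRM records matching '{query}'"          # placeholder data source

def send_email(to: str, body: str) -> str:
    return f"email sent to {to}"                      # placeholder downstream action

TOOLS = {"search_crm": search_crm, "send_email": send_email}

def llm_decide(task: str, history: list) -> dict:
    # Stub: a real agent would ask the LLM to pick the next step based on
    # the task and everything gathered so far.
    if not history:
        return {"tool": "search_crm", "args": {"query": task}}
    if len(history) == 1:
        return {"tool": "send_email",
                "args": {"to": "manager@example.com", "body": history[0]}}
    return {"tool": None}                             # agent decides it is done

def run_agent(task: str) -> list:
    history = []
    while True:
        step = llm_decide(task, history)
        if step["tool"] is None:
            return history
        # Each iteration can touch a different system with no human in the loop.
        result = TOOLS[step["tool"]](**step["args"])
        history.append(result)

print(run_agent("summarize Q3 renewals"))
```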

The complexity of AI agents introduces new and often significant layers of risk, particularly in how they access and interact with data. A single agent workflow creates multiple potential data leakage points at different stages: the initial prompt, unauthorized data retrieval, communication with external servers, and final outputs such as emails or files.
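
To make those four stages tangible, here is a toy sketch of them as inspection hooks in a single agent step. The scan_for_sensitive function and its hard-coded pattern are hypothetical stand-ins for a real classification or DLP service.

```python
import re

# The four leakage points named above, as inspection hooks in one agent step.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # toy sensitive-data pattern

def scan_for_sensitive(stage: str, content: str) -> None:
    if SSN.search(content):
        print(f"[leak risk] sensitive data observed at stage: {stage}")

def agent_step(prompt: str) -> str:
    scan_for_sensitive("initial prompt", prompt)       # 1. the prompt itself
    retrieved = "customer 123-45-6789 owes $400"       # 2. data the agent pulls in
    scan_for_sensitive("data retrieval", retrieved)
    external_request = prompt + " | " + retrieved      # 3. content sent to external servers
    scan_for_sensitive("external call", external_request)
    output = f"Draft email: {retrieved}"               # 4. final output (email, file, ...)
    scan_for_sensitive("final output", output)
    return output

agent_step("collect overdue balances")
```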

Furthermore, because AI agents are not static and perform actions across multiple systems, they inherently amplify risks throughout those interconnected systems. As a recent HBR report on AI agents and risk noted, "Standards bodies like the National Institute of Standards and Technology define AI agents as systems capable of taking autonomous actions that impact real-world systems or environments."

AI agents interact directly with enterprise data sources, APIs, external systems, SaaS applications, and various internal tools. Their function is to retrieve data, interpret it, combine it with other information to generate new outputs, and automatically trigger downstream actions and workflows.

This continuous, distributed, and context-dependent nature of agent-data interactions creates data flow challenges that legacy security models cannot handle. Unlike the session-based, contained access of the past, AI agents involve ongoing, multi-step data usage at far greater speed than human interaction, fundamentally shifting how data moves through the enterprise.

Why Traditional Security Models Cannot Keep Up

Traditional security systems were not designed for AI agents or the complex, multi-step workflows they run. For example, a traditional access control approach determines whether an action can begin, but it does not govern how data is used once access is granted.
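
A minimal sketch of that checkpoint model, with an illustrative role table: the role check fires once, at access time, and nothing afterward constrains what happens to the returned records.

```python
# Sketch of a classic checkpoint: a role check decides whether the call may
# begin, then the data flows with no further governance. The roles and
# records here are illustrative.

ROLE_GRANTS = {"analyst": {"read_customers"}}

def fetch_customers(role: str) -> list:
    if "read_customers" not in ROLE_GRANTS.get(role, set()):
        raise PermissionError("access denied at the checkpoint")
    return ["Alice, SSN 123-45-6789", "Bob, SSN 987-65-4321"]

records = fetch_customers("analyst")   # the only moment the model evaluates risk
# From here on, nothing in the model constrains how an agent combines,
# transforms, or forwards these records.
summary = " / ".join(records)
print(summary)
```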

Furthermore, detection models that identify security issues after activity has occurred are inadequate for AI agents, as data is often already exposed or transformed by the time an issue is flagged.

Static security policies also fail because they assume stable workflows and cannot adapt to evolving, multi-step processes characteristic of AI environments.

Consequently, traditional security models are insufficient in AI-agent-driven environments, leading to a loss of visibility into data usage across systems and workflows. These legacy systems cannot evaluate usage across interactions, creating significant gaps between the security policy's intent and the actual outcomes.

This leaves security teams with fragmented signals, delayed insights, and limited ability to proactively influence security outcomes.

Risk Emerges Through Data Usage, Not Just Access

In traditional environments, risk is evaluated at key checkpoints related to system and tool access, such as logins, access requests, and data transfers.

However, in AI-driven workflows, risk can arise and escalate as data is interpreted, combined, and reused. The agents themselves are a primary source of this risk: they hold access to vast data repositories across multiple systems, they can send content to unauthorized recipients, potentially exposing PII or other sensitive data, and they can generate 'hallucinations' (inaccurate outputs).

Other risks include context changes, such as applying data retrieved in one context to a different one, or generating outputs based on sensitive or entity-specific inputs. Additional risks can emerge as intermediate steps influence subsequent actions.

These critical moments in data workflows are often transient, context-dependent, and difficult to evaluate after they have occurred.

These changes and new risks shift the focus of security from controlling access to systems and data, to governing how that data is used during the execution of workflows.

Why Security Must Operate During Execution

Agent-driven workflows unfold across multiple systems and steps, often at significant speed and scale. This creates new risks that emerge during these actions, not just before they begin or after they are complete.

In the era of AI-agent workflows, security systems must adapt dynamically to how data is being used as actions unfold. Security must also be able to continuously evaluate whether these actions stay within acceptable boundaries and influence outcomes before they are finalized.
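
As a sketch of what execution-aware evaluation might look like, the following hypothetical policy_gate checks each proposed action before it runs, so an unacceptable outcome can be blocked mid-workflow. The allow-list rule and function names are illustrative, not any specific product's API.

```python
# Execution-aware governance: every proposed action passes through a policy
# gate *before* it runs, so the system can stop an outcome while the
# workflow is still unfolding. The policy rule is illustrative.

ALLOWED_DOMAINS = {"example.com"}

def policy_gate(action: str, args: dict) -> bool:
    if action == "send_email":
        domain = args["to"].split("@")[-1]
        if domain not in ALLOWED_DOMAINS:
            print(f"[blocked mid-workflow] email to {args['to']}")
            return False
    return True

def execute(action: str, args: dict) -> None:
    if policy_gate(action, args):            # decision happens during execution,
        print(f"[executed] {action} {args}") # not at login or after the fact

execute("send_email", {"to": "partner@example.com", "body": "Q3 summary"})
execute("send_email", {"to": "attacker@evil.test", "body": "Q3 summary"})
```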

Traditional security was not designed to operate during the execution phase, creating gaps between detecting malicious activity and governing it in real time. Closing these gaps is now essential for securing modern AI-driven environments.

TL;DR: AI Agents Break the Assumptions Security Was Built On

Traditional security models were built around human-driven workflows. However, AI agents continuously access, interpret, and act on data across systems. When AI agents are involved, risk arises from the usage of data, not solely at the points of access or transfer.

Existing security controls are unable to govern these real-time interactions. Therefore, securing AI-driven systems requires a shift from static, checkpoint-based controls to real-time, execution-aware governance across the entire enterprise environment.

See how organizations are governing data across AI agents, SaaS apps, and enterprise workflows in real time: https://www.bonfy.ai/use-case-agentic-data-security