Original article published on Substack on March 17, 2026.
How AI workflows are redefining what data security must control
For more than two decades, enterprise data security programs have been organized around a familiar and largely effective approach.
Most initiatives have focused on protecting data at rest, data in motion, and data in use.
Critically, the approach combined content awareness with identity and access controls, reflecting an environment where humans were the primary actors interacting with data.
This architecture made sense for the world it was designed to protect. Data was largely human-created, human-accessed, and human-shared. Information typically moved in discrete objects - files, records, messages - across relatively predictable workflows.
In that environment, improving visibility and tightening policy coverage delivered meaningful risk reduction.
But the underlying assumptions behind this approach are beginning to shift.
In the previous piece in this series, I described how enterprises are encountering new categories of AI risk, from Shadow AI to what might be called Shady AI, where approved systems behave in ways that violate policy or business intent. These emerging failure modes highlight something deeper: they are not simply operational issues. They reveal a structural mismatch between how traditional data security architectures evaluate risk and how AI-driven systems actually generate and assemble information.
Traditional data security implicitly assumed that humans were the primary actors in the data lifecycle.
Users created documents.
Users sent emails.
Users uploaded files.
Users made mistakes.
Today’s environments look different.
Enterprise data flows increasingly involve a mix of human users, copilots, agents, and autonomous services.
The data flow is no longer human-dominated.
Systems designed primarily for human users are now being asked to reason about copilots, agents, and autonomous services operating at machine speed. This shift begins to stress architectures optimized for user-driven events rather than machine-mediated decision loops.
In earlier architectures, “data in use” referred both to data being actively processed in application memory and to human interaction with information at endpoints.
This framing was appropriate for environments where processing was largely deterministic and predominantly human-driven.
A user editing a document at an endpoint, or an application transforming a record in memory, followed predictable patterns that policies could anticipate.
AI systems introduce a materially different pattern.
AI models and agentic systems now routinely retrieve information from multiple sources, recompose it, and generate new content in real time.
Data is no longer only accessed. It is continuously reconstructed during interactions between users, AI systems, and automated workflows.
As explored in the previous piece on the missing dimension of AI data security, sensitivity is increasingly contextual rather than purely intrinsic. The same underlying data may be safe in one interaction and problematic in another, depending on who the information refers to, who is requesting it, and how it is being assembled.
This dynamic behavior begins to expose structural limits in security approaches built primarily around static classification and discrete access events.
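To make the contrast concrete, here is a minimal sketch of what contextual sensitivity evaluation might look like, as opposed to a static label on the data itself. The field names, rule, and `AccessContext` structure are illustrative assumptions, not any particular product's model:

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    subject: str    # whom the information refers to
    requester: str  # who is requesting it
    purpose: str    # how the assembled output will be used

# Hypothetical rule set: intrinsic sensitivity alone is not decisive;
# the same field flips between safe and sensitive with the context.
INTRINSIC = {"salary", "health_record"}

def is_sensitive(field: str, ctx: AccessContext) -> bool:
    # Safe when the data subject views their own record;
    # sensitive when assembled for someone else.
    return field in INTRINSIC and ctx.requester != ctx.subject
```

Under this toy rule, `is_sensitive("salary", AccessContext("alice", "alice", "self_service"))` is false, while the identical field requested by a different person is flagged. A static classifier assigns one verdict to the data; a contextual one assigns a verdict to the interaction.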
Traditional DLP and related control architectures evolved around a practical balance: detect broadly, prevent coarsely, and accept a moderate level of false positives in exchange for coverage.
In largely human-paced workflows, this trade-off was often workable.
AI-driven environments compress that margin.
Generation is instantaneous.
Automation amplifies mistakes.
False positives break workflows.
False negatives can propagate at machine speed.
As AI systems increasingly participate directly in content creation and distribution, prevention decisions must become both more precise and closer to the moment of generation.
This is less about adding more policies and more about improving the fidelity of enforcement in highly dynamic flows.
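One way to picture "closer to the moment of generation" is a guard that wraps the generation call itself, so the decision happens before output ever leaves the workflow. This is a sketch under assumptions: the pattern, the redaction behavior, and the `guarded_generate` wrapper are all hypothetical, standing in for whatever high-precision detector a real deployment would use:

```python
import re
from typing import Callable

# Hypothetical high-precision detector: a card-number-shaped pattern
# rather than a broad keyword rule, to keep false positives low.
CARD_RE = re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b")

def guarded_generate(generate: Callable[[str], str], prompt: str) -> str:
    """Evaluate output at the moment of generation, before it leaves
    the workflow, instead of scanning stored artifacts afterwards."""
    output = generate(prompt)
    if CARD_RE.search(output):
        return "[withheld by inline policy]"
    return output
```

The design point is the placement, not the regex: the same check run nightly against a repository would catch the leak only after automation had already propagated it.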
Modern DSPM and next-generation DLP platforms have significantly advanced data visibility, coverage, and posture management across cloud and SaaS environments. These improvements have meaningfully strengthened many enterprise programs.
The next phase of AI-driven workflows, however, is beginning to stress even these improved security architectures.
The reason is structural.
These shifts can be understood by comparing how the design center of data security is evolving across three stages: traditional data security, modern cloud-era platforms, and what may be emerging as AI-native data security.
| Dimension | Traditional Data Security | Modern Data Security (DSPM and Next-Gen DLP) | AI-Native Data Security |
| --- | --- | --- | --- |
| Unit of protection | Files and repositories | Sensitive data elements and stores | Dynamically assembled information in context |
| Primary actors assumed | Human users | Humans plus SaaS applications | Humans, copilots, agents, and autonomous workflows |
| Meaning of data in use | Memory processing and user activity | Broader user and application access patterns | Real-time machine-driven generation and assembly |
| Data behavior model | Largely static | Continuously discovered and scanned | Continuously recomposed and generated |
| Role of context | Primarily content-centric | Improved metadata and ownership context | Deep entity, relationship, and intent context |
| Detection vs prevention | Detect-first with coarse prevention | Improved detection with selective inline controls | High-precision, in-flow decisioning required |
| Precision requirements | Moderate noise tolerance acceptable | Reduced false positives | Near-real-time, high-confidence decisions required |
| Risk creation point | When data moves or is accessed | During sharing and exposure events | During generation, recomposition, and agent execution |
| Control architecture | Channel-specific, siloed controls | Broader cross-SaaS visibility | Coordinated multi-channel, workflow-native enforcement |
This shift does not invalidate earlier approaches. It reflects a change in where risk is most likely to emerge.
The traditional model often treated controls for data at rest, in motion, and in use as largely separate domains.
In AI-driven environments, these boundaries are increasingly fluid.
A single workflow may span data at rest in a repository, data in motion across email and SaaS platforms, and data in use inside a copilot or agent.
Risk frequently emerges across these transitions, not neatly within one control plane.
As AI-driven workflows expand across email, SaaS platforms, copilots, and agent ecosystems, effective governance increasingly depends on coordinated multi-channel enforcement rather than isolated point controls.
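The difference between isolated point controls and coordinated enforcement can be sketched as a single, channel-agnostic decision point that every channel consults. The action names and verdicts below are illustrative assumptions; the point is the shape, not the rules:

```python
# A single policy decision point consulted by every channel,
# rather than per-channel rules that can drift apart over time.
def decide(action: str, sensitivity: str) -> str:
    if sensitivity == "high" and action == "share_external":
        return "block"
    if sensitivity == "high":
        return "allow_with_audit"
    return "allow"

CHANNELS = ["email", "saas", "copilot", "agent"]

# Every channel routes the same event through the same decision,
# so the verdict cannot diverge between silos.
verdicts = {ch: decide("share_external", "high") for ch in CHANNELS}
```

With siloed controls, the email gateway, the SaaS DLP, and the agent framework each encode their own version of this rule, and the versions drift; routing all of them through one decision function is what "workflow-native" coordination means in miniature.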
Taken together, these changes point to a deeper conclusion.
AI is not simply increasing data risk. It is changing where and how that risk is created.
Security models optimized for files, repositories, and discrete human access events are being asked to reason about dynamically assembled information, autonomous agents, and machine-speed generation.
This is the architectural break.
As AI adoption accelerates, the gap between traditional control models and real-world data behavior will become increasingly visible.
Forward-leaning security teams are already asking new questions: How should policy apply to information assembled at the moment of generation? How should controls extend to copilots and agents acting at machine speed? How can enforcement stay coordinated across email, SaaS, and agent ecosystems without breaking workflows?
Answering these questions will define the next phase of data security evolution.
The familiar categories of data at rest, in motion, and in use still matter. But what “in use” means is evolving rapidly.
In environments where humans, copilots, and autonomous systems continuously assemble and generate information, data security can no longer focus only on where data resides or where it moves.
The architectural center of gravity is shifting from protecting where data lives to governing how information is created.