
Shadow AI vs. Shady AI: The Elephant in the Room

Written by Gidi Cohen | May 15, 2026

Original article published on Substack on March 10, 2026.

Why the bigger AI risk is emerging inside approved systems

Security teams are right to worry about Shadow AI. But in many organizations, the more immediate risk is already showing up inside approved systems.

The pattern is familiar.
First came Shadow IT.
Now comes Shadow AI.

Employees experiment with external GenAI tools. Business units embed copilots into workflows. Teams move faster than governance processes can keep pace.

Security leaders are right to pay attention.

But in many conversations I’ve had recently with CISOs and data security teams, I’m seeing a growing imbalance:

We are heavily focused on Shadow AI.
Meanwhile, many organizations are already encountering Shady AI - whether they call it that or not.

And the distinction matters more than it may initially appear.

To understand the gap, it helps to separate three different risk patterns.

What Shadow AI Actually Is

Shadow AI is fundamentally a visibility and governance problem.

It refers to AI usage that occurs outside approved or monitored channels, such as:

  • Employees pasting sensitive data into unsanctioned GenAI tools
  • Business units adopting AI SaaS without security review
  • Developers embedding external models without proper approval

The core risks are straightforward:

  • The organization does not know where AI is being used
  • Policies cannot be enforced consistently
  • Sensitive data may flow through unmanaged pathways

This is real and important.

But it is also, in many ways, a familiar problem pattern. Security teams have spent two decades building muscles around discovery and governance.
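
To make "discovery" concrete, here is a minimal sketch, assuming you can export web-proxy logs and maintain a list of known GenAI endpoints (every domain below is a placeholder, not a real service):

    SANCTIONED_AI_DOMAINS = {"copilot.approved-vendor.example"}
    KNOWN_GENAI_DOMAINS = {
        "chat.genai-tool.example",
        "api.genai-tool.example",
        "copilot.approved-vendor.example",
    }

    def find_shadow_ai(proxy_log_lines):
        """Return unsanctioned GenAI domains observed in web-proxy logs."""
        seen = set()
        for line in proxy_log_lines:
            # assume the destination host appears as a whitespace-separated field
            for field in line.split():
                host = field.strip().lower()
                if host in KNOWN_GENAI_DOMAINS and host not in SANCTIONED_AI_DOMAINS:
                    seen.add(host)
        return seen

The mechanics are familiar: compare what is observed against what is sanctioned, then route the gap into a governance process.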

Shadow AI is difficult - but conceptually tractable.

The Risk Many Teams Are Already Seeing: Shady AI

Shady AI is different.

Shadow AI is about where AI is used.

Shady AI is about whether it behaves correctly once deployed.

It is not primarily about unauthorized usage.
It is about sanctioned AI behaving in ways that violate policy, business rules, or regulatory expectations.

In other words:

  • The AI system is approved
  • The deployment is intentional
  • The workflow is production

…and yet the outcome is still unacceptable.

What makes AI “shady” is not whether the system is sanctioned.
It is whether the resulting behavior - human or autonomous - aligns with organizational policy and business intent.

This can arise from:

  • Malicious use
  • Negligent prompting
  • Sloppy workflow design
  • Autonomous agent drift
  • High-confidence AI mistakes in real workflows

Shady AI begins to surface when AI systems:

  • Generate incorrect or misleading content tied to real customers
  • Include the wrong recipient or entity in communications
  • Produce outputs that violate compliance requirements
  • Mishandle sensitive data in downstream workflows
  • Take agent-driven actions that drift from intended policy

Here, the question is no longer:

“Where is AI being used?”

The real question becomes:

“Is the AI behaving correctly in our business context?”

For many organizations, this is no longer theoretical. It is already showing up in pilots and early production deployments.

A Critical Middle Layer: Insecure AI

Between Shadow AI and Shady AI sits a third category that also deserves attention.

Insecure AI refers to sanctioned AI systems that are misconfigured, over-permissioned, or insecurely implemented.

Examples include:

  • Copilots with excessive data access
  • Weak identity or permission boundaries
  • Improper data grounding controls
  • Overly broad retrieval scopes
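
To make the last point concrete, here is a minimal sketch of scoped retrieval, assuming a copilot that grounds its answers on retrieved documents (the Document shape, the search_index object, and its search() method are illustrative, not any particular product's API):

    from dataclasses import dataclass

    @dataclass
    class Document:
        doc_id: str
        acl: set    # groups allowed to read this document
        text: str

    def scoped_retrieval(query, user_groups, search_index):
        """Ground the copilot only on documents the requesting user could open directly."""
        candidates = search_index.search(query)    # broad, service-account-level search
        return [d for d in candidates if d.acl & user_groups]

    # An "overly broad retrieval scope" is the same flow without the ACL filter:
    # every document the service account can reach becomes potential grounding
    # for every prompt, regardless of who is asking.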

This is primarily an architecture and hardening problem, typically owned by engineering, IT, and security teams.

It is receiving increasing attention - and rightly so.

But even well-hardened systems can still produce risky outcomes once they are operating in real business workflows. That is where Shady AI becomes visible.

Different Risks, Different Security Muscles

These categories may sound similar, but they demand very different control approaches.

Shadow AI is primarily a discovery and governance problem.
Insecure AI is primarily a secure architecture and posture problem.
Shady AI, however, is different.

Addressing Shady AI requires semantic understanding of what the AI is actually doing in business context:

  • Who the data belongs to
  • Whether the right customer or entity is involved
  • Whether the output complies with policy and regulation
  • Whether an agent’s action aligns with business intent
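
In practice, that means the control looks less like a data classifier and more like a per-output check against the interaction itself. A minimal sketch, assuming the customers and recipients referenced in a draft have already been extracted by an upstream step (all names and fields here are illustrative):

    from dataclasses import dataclass, field

    @dataclass
    class InteractionContext:
        customer_id: str                  # the customer this interaction is about
        allowed_recipients: set           # who may receive this output
        policy_tags: set = field(default_factory=set)   # e.g. {"no_pricing_terms"}

    def check_output(draft, referenced_customers, recipients, ctx):
        """Return business-context violations found in a drafted AI output."""
        violations = []
        if referenced_customers - {ctx.customer_id}:
            violations.append("draft references customers outside this interaction")
        if recipients - ctx.allowed_recipients:
            violations.append("draft is addressed to an unapproved recipient")
        if "no_pricing_terms" in ctx.policy_tags and "price" in draft.lower():
            violations.append("draft discusses pricing where policy forbids it")
        return violations

Nothing in this sketch inspects sensitivity labels; every test compares the output against who and what the interaction is actually about.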

Visibility and hardening remain necessary.

But they are no longer sufficient on their own.

Why Shady AI Is Proving Harder

What makes Shady AI particularly challenging is that it occurs inside trusted, sanctioned environments.

Consider a common scenario: an AI assistant is authorized to generate customer responses inside a legitimate workflow. The tool is approved and the access is permitted - but the agent is prompted to aggregate customer details beyond what should be shared in that interaction. Nothing about the AI use is unauthorized. Yet the outcome is still a policy violation.

Traditional controls tend to focus on:

  • Where AI is used
  • Whether access is properly configured
  • Whether sensitive data is broadly classified

But Shady AI failures often emerge from contextual and semantic mismatches, such as:

  • the wrong customer referenced
  • the wrong recipient generated
  • the wrong relationship inferred
  • compliant data used in a non-compliant way

These are precisely the scenarios where systems can appear healthy from a posture perspective - while real business risk is quietly accumulating.

As agent-driven workflows scale, the surface area for these subtle failures will expand significantly.
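
One plausible shape for a guardrail here, sketched under the assumption that agent tool calls pass through a broker you control (the action names and identifiers are illustrative), is to check each proposed action against the scope of the specific interaction before it executes:

    ALLOWED_ACTIONS = {"send_email", "update_ticket"}   # per-workflow allowlist

    def approve_agent_action(action, target_customer_id, interaction_customer_id):
        """Allow an agent action only if it stays within this interaction's scope."""
        if action not in ALLOWED_ACTIONS:
            return False, f"action '{action}' is not permitted in this workflow"
        if target_customer_id != interaction_customer_id:
            return False, "action targets a customer outside this interaction"
        return True, "within scope"

    # Example: an agent handling customer C-100 tries to update a record for
    # customer C-230. The access may be technically permitted, but the action
    # drifts from the interaction's intent and is blocked.
    ok, reason = approve_agent_action("update_ticket", "C-230", "C-100")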

What Security Leaders Should Be Asking Now

This shift is already changing the questions forward-leaning teams are asking:

  • Where could approved AI systems generate customer-level mistakes?
  • How do we validate entity correctness in AI outputs?
  • Can we detect when AI creates policy-violating content?
  • What guardrails exist for agent-driven workflows?
  • How do we enforce context-aware controls, not just data classification?

These questions point directly at the Shady AI surface that many organizations are now encountering.

Final Thought

Shadow AI is the problem we can already see.

Insecure AI is the problem we are actively hardening.

Shady AI is the problem many teams are already running into - precisely because it happens inside systems they trust and have intentionally deployed.

The next phase of AI security will be defined less by who is using AI - and more by whether AI systems are behaving correctly in context.