If you've been trying to bolt traditional security tooling onto your AI infrastructure, you already know it doesn't fit. Generic SAST misses hallucinated dependencies in AI-generated code. Standard firewalls can't catch prompt injection or RAG leakage through egress. Your IAM team has no framework for managing credentials issued to autonomous agents at scale. That's the problem the AI Defense Matrix was built to solve—finally.

Why Existing Frameworks Come Up Short

The Cyber Defense Matrix, created by Sounil Yu back in 2015, mapped asset classes (Devices, Applications, Networks, Data, Users) against the NIST CSF functions (Identify, Protect, Detect, Respond, Recover) to expose coverage gaps, and it became a staple for security architects managing traditional infrastructure. But AI systems introduced attack surfaces that didn't fit any of the original rows: model poisoning, prompt injection, agent credential sprawl, and model-weight theft all demand AI-native defenses that standard tooling simply doesn't provide. The AI Defense Matrix preserves the familiar structure while adding eight asset classes purpose-built for the AI stack:

  • AI-Workload Platforms: inference servers, training pipelines, vector DB platforms
  • AI Orchestration Tools: agent frameworks, plugins, system prompts, MCP clients on endpoints
  • AI-Generated Code: AI-suggested code, vibe-coded apps bypassing CI/CD
  • AI Gateways and Routers: MCP proxies, LLM routers, outbound egress traffic, shadow AI detection
  • AI Model: self-hosted weights and fine-tuning checkpoints alongside consumed-as-a-service LLMs
  • Training Data: datasets used for training and continued learning
  • Runtime AI Data: user prompts, inference inputs, RAG content, vector DB data, persistent agent memory
  • AI Agent Identities: non-human principals, ephemeral credentials, delegation chains between agents
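For teams that want to track these rows programmatically, the taxonomy reduces to a simple lookup table. A minimal sketch in Python, using only the asset classes and representative assets listed above (the structure itself is illustrative, not prescribed by the framework):

```python
# The eight AI Defense Matrix rows, each with representative assets.
AI_ASSET_CLASSES = {
    "AI-Workload Platforms": ["inference servers", "training pipelines", "vector DB platforms"],
    "AI Orchestration Tools": ["agent frameworks", "plugins", "system prompts", "MCP clients on endpoints"],
    "AI-Generated Code": ["AI-suggested code", "vibe-coded apps bypassing CI/CD"],
    "AI Gateways and Routers": ["MCP proxies", "LLM routers", "outbound egress traffic", "shadow AI detection"],
    "AI Model": ["self-hosted weights", "fine-tuning checkpoints", "consumed-as-a-service LLMs"],
    "Training Data": ["training datasets", "continued-learning datasets"],
    "Runtime AI Data": ["user prompts", "inference inputs", "RAG content", "vector DB data", "persistent agent memory"],
    "AI Agent Identities": ["non-human principals", "ephemeral credentials", "delegation chains between agents"],
}

print(len(AI_ASSET_CLASSES))  # → 8
```

A table like this becomes the row index for the gap analysis in the next section, and a natural place to record which defender team owns each row.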

Mapping Coverage Gaps Across the Stack

The framework's real power is its gap analysis methodology. Security teams review every cell (the intersection of an asset class and a NIST CSF function) and mark it as covered, partial, or absent. The recommendation is to start at Govern, the leftmost column, to understand current AI ownership, risk appetite, and policy posture before working rightward across the maturity progression. That left-to-right pass tends to surface a consistent pattern: organizations have some governance-level awareness of their AI risks but lack detection and response capabilities for runtime threats like prompt injection or vector DB tampering. The matrix also gives an explicit test for when two asset classes should fold into one row: only when the same defender team handles both with identical tooling. If either the team or the tools differ, they deserve separate rows. Self-hosted and consumed-as-a-service models share enough model-layer trust concerns and defender tooling overlap to stay in a single AI Model row, but inference platforms and orchestration frameworks have distinct toolchains that warrant their own entries.
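The cell-by-cell review is easy to sketch in code. A minimal Python version, assuming the NIST CSF 2.0 function set with Govern leftmost; the `Coverage` levels mirror the covered/partial/absent marks above, and the sample assessment entries are hypothetical, not part of the framework:

```python
from enum import Enum

class Coverage(Enum):
    ABSENT = 0
    PARTIAL = 1
    COVERED = 2

# NIST CSF 2.0 functions, left to right; Govern is the recommended starting point.
FUNCTIONS = ["Govern", "Identify", "Protect", "Detect", "Respond", "Recover"]

# The eight AI Defense Matrix rows.
ASSET_CLASSES = [
    "AI-Workload Platforms", "AI Orchestration Tools", "AI-Generated Code",
    "AI Gateways and Routers", "AI Model", "Training Data",
    "Runtime AI Data", "AI Agent Identities",
]

def gap_report(assessment):
    """Walk every cell left to right; unassessed cells default to ABSENT."""
    gaps = []
    for asset in ASSET_CLASSES:
        for fn in FUNCTIONS:
            level = assessment.get((asset, fn), Coverage.ABSENT)
            if level is not Coverage.COVERED:
                gaps.append((asset, fn, level))
    return gaps

# Hypothetical partial assessment of a single row.
assessment = {
    ("Runtime AI Data", "Govern"): Coverage.COVERED,
    ("Runtime AI Data", "Detect"): Coverage.PARTIAL,  # e.g. no prompt-injection detection yet
}
print(len(gap_report(assessment)))  # 8 rows x 6 functions, minus the one covered cell → 47
```

The default-to-ABSENT behavior is deliberate: a cell nobody has assessed is a gap until someone claims otherwise, which matches the framework's bias toward exposing blind spots rather than assuming coverage.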

Bridging Into Existing Threat Intelligence

The framework doesn't reinvent threat classification; it maps directly into the existing AI security intelligence ecosystem. Each asset class connects to MITRE ATLAS techniques (AML.T0051 LLM Prompt Injection under Runtime AI Data, AML.T0024 Exfiltration via ML Inference API under AI Model, and so on) and to OWASP LLM Top 10 risks, including LLM01 Prompt Injection and LLM03 Supply Chain Vulnerabilities, as well as the December 2025 release of the OWASP Agentic Security Top 10 with its ASI06 Memory & Context Poisoning concern. It also aligns to CSA AICM's 18 control domains, ISO 42001 Annex A clauses for AI management systems, and Google SAIF's six-principle structure, with SAIF's explicit Focus on Agents section landing squarely in the AI Agent Identities row.
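That cross-walk lends itself to a simple per-row mapping table. A sketch limited to the identifiers named above; note that placing LLM01 and ASI06 under Runtime AI Data is my assumption (prompts and persistent agent memory live in that row), and the framework's full mappings cover far more techniques:

```python
# Partial cross-walk from matrix rows to external threat-intel identifiers.
# Rows and ATLAS/SAIF placements follow the text; the OWASP row placements
# marked below are assumptions for illustration.
THREAT_INTEL_MAP = {
    "Runtime AI Data": {
        "MITRE ATLAS": ["AML.T0051 LLM Prompt Injection"],
        "OWASP LLM Top 10": ["LLM01 Prompt Injection"],          # assumed placement
        "OWASP Agentic Security Top 10": ["ASI06 Memory & Context Poisoning"],  # assumed placement
    },
    "AI Model": {
        "MITRE ATLAS": ["AML.T0024 Exfiltration via ML Inference API"],
    },
    "AI Agent Identities": {
        "Google SAIF": ["Focus on Agents"],
    },
}

def external_refs(asset_class):
    """Return the external references recorded for one matrix row ({} if none)."""
    return THREAT_INTEL_MAP.get(asset_class, {})

print(sorted(external_refs("Runtime AI Data")))
```

A table like this gives incident responders a common language: a detection alert tagged to a matrix row immediately resolves to the ATLAS technique IDs and OWASP entries the rest of the program already uses.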

Key Takeaways

  • Eight AI-specific asset classes fill gaps that generic security frameworks completely ignore—no traditional tool catches hallucinated dependencies or model-weight theft
  • The framework maps directly to MITRE ATLAS, OWASP LLM Top 10, OWASP Agentic Security Top 10, CSA AICM, ISO 42001, and Google SAIF—making it immediately useful as a common language across existing AI security programs
  • Practitioners should start gap analysis at Govern (not Protect or Detect) to establish ownership and risk appetite before assessing technical controls

The Bottom Line

The AI Defense Matrix isn't trying to replace the Cyber Defense Matrix—it's extending it into territory that defenders have been navigating blind. If your organization is deploying agents, running self-hosted models, or accepting AI-generated code into CI/CD pipelines without a structured way to assess coverage across those attack surfaces, you're operating on vibes and hope. Zeltser and Yu just handed you a map. Use it.