In the past two weeks, four publicly documented security events made the AI agent attack surface concrete in a way vendor marketing usually obscures. Trend Micro reported that exposed MCP servers nearly tripled from 492 (July 2025) to 1,467 by April 2026, with 74% hosted on major cloud platforms. Akamai researcher Tomer Peled disclosed three vulnerabilities in database-wrapper MCP implementations on May 13. FastGPT patched a critical auth bypass (CVE-2026-42302, CVSS 9.8) affecting versions 4.14.10 through 4.14.12. And a US commercial bank self-disclosed to the SEC that employees had routed customer data—including Social Security numbers—into an unauthorized third-party AI application. These aren't theoretical attack paths. They're documented incidents with measurable consequences.

The MCP Server Population Explosion

Trend Micro's April 28, 2026 threat intelligence update reveals the structural problem at scale: exposed MCP servers nearly tripled in nine months as developers bound localhost-native services to public interfaces over a deprecated SSE transport. Seventy-four percent run on AWS, Azure, GCP, or Oracle Cloud Infrastructure. The attack chain is operational—not theoretical. A command-injection bug in aws-mcp-server (CVE-2026-5058, CVSS 9.8) lets an attacker execute as the EC2 instance, query the metadata service for temporary credentials, and pivot to S3, DynamoDB, Lambda, or IAM user creation. This is classic IMDS credential theft via a new entry point, not novel cloud-attack tradecraft. The underlying issue: MCP servers were designed for localhost/stdio communication and got exposed to 0.0.0.0 because that's what 'make it work over HTTP' looked like to deployment teams under deadline pressure.
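The IMDS pivot step in that chain leaves a recognizable signature in command audit logs. A minimal detection sketch, assuming you can tap the command strings an MCP tool executes; the indicator list is illustrative, not from Trend Micro's report:

```python
# Hedged sketch: flag command strings that touch the EC2 instance metadata
# service (IMDS), the credential-theft pivot in the chain described above.
# Indicator list is an illustrative assumption, not exhaustive.
IMDS_INDICATORS = (
    "169.254.169.254",                              # IMDS IPv4 endpoint
    "fd00:ec2::254",                                # IMDS IPv6 endpoint
    "/latest/meta-data/iam/security-credentials",   # temporary-credential path
    "x-aws-ec2-metadata-token",                     # IMDSv2 token header
)

def touches_imds(command: str) -> bool:
    """Return True if a command string references the metadata service."""
    lowered = command.lower()
    return any(indicator in lowered for indicator in IMDS_INDICATORS)
```

Alerting on any MCP-originated command that matches is a cheap tripwire: legitimate agent tooling rarely needs to query the metadata service directly, so a hit is worth a human look.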

Three Database-Wrapper Failures, One Structural Flaw

Peled's May 13 disclosures reveal a consistent pattern across three MCP implementations that wrap analytical databases with SQL execution surfaces. Apache Doris MCP (CVE-2025-66335) allows SQL injection through an unsanitized db_name parameter in the exec_query tool—a downstream validator inspects only the prefix, missing any payload injected after it. The vulnerability was patched in doris-mcp-server version 0.6.1. StarTree's mcp-pinot (issue #90, unpatched at disclosure) binds to 0.0.0.0 with OAuth disabled by default and validates queries with a single line checking for SELECT—trivially bypassed via UNION, stacked queries, or SQL comments. The third vulnerability sits in Alibaba Cloud RDS MCP: unauthenticated access to the RAG retrieval tool, which Alibaba classified as 'not applicable' for patching. Across all three, the failure mode is identical: the MCP tool inherits the AI agent's trust model instead of respecting the database's authorization boundaries. Validator-as-theatre (Doris), transport-without-auth (StarTree), and RAG-as-side-door (Alibaba) are different surface manifestations of the same structural error.
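The prefix-validator failure is reproducible in a few lines. A sketch of the single-line check pattern and, for contrast, an allowlist for identifier parameters like db_name that cannot be bound as query parameters; function names here are illustrative, not taken from either codebase:

```python
import re

def naive_prefix_check(query: str) -> bool:
    """The single-line 'is it a SELECT?' validator pattern: only the
    prefix is inspected, so everything after it rides along unexamined."""
    return query.lstrip().upper().startswith("SELECT")

# All three bypass styles named above sail through the prefix check:
assert naive_prefix_check("SELECT 1 UNION SELECT user, pass FROM admins")
assert naive_prefix_check("SELECT 1; DROP TABLE audit_log")       # stacked query
assert naive_prefix_check("SELECT/*x*/1 -- comment hides intent")  # comments

def safe_identifier(db_name: str) -> str:
    """Allowlist validation for identifiers such as db_name. Identifiers
    can't be parameterized like values, so the fix is rejection, not
    escaping: accept word characters only, raise on anything else."""
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]{0,63}", db_name):
        raise ValueError(f"rejected identifier: {db_name!r}")
    return db_name
```

The design point: a validator that inspects a prefix answers "does this start like a query I expect?" while the security question is "can this string do anything I don't expect?"—rejection against a tight allowlist answers the second.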

Sandbox Isolation as a Deployment Checkbox

CVE-2026-42302, disclosed May 8, is the cleanest single-CVE artifact of this period. FastGPT's agent-sandbox entrypoint.sh launches code-server with --auth none bound to 0.0.0.0:8080—any network-reachable attacker gets unauthenticated remote code execution at CVSS 9.8. The sandbox component existed because someone designed isolation into the product. The --auth none flag was a deployment choice that nullified it entirely. This is the checkbox security pattern in practice: a feature ships with its protective controls disabled by default, and operators inherit the risk when they deploy without auditing runtime configurations. Affected versions 4.14.10 through 4.14.12 were patched in release 4.14.13 (GHSA-34rc-438g-7w78).
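Auditing for this class of misconfiguration is scriptable. A hedged sketch of a deployment-tree scan; the risk patterns and file globs are assumptions about a typical setup, not FastGPT-specific:

```python
import re
from pathlib import Path

# Patterns matching the FastGPT-class misconfiguration: authentication
# disabled and a wildcard bind appearing in launch scripts or configs.
RISK_PATTERNS = {
    "auth_disabled": re.compile(r"--auth[=\s]+none"),
    "wildcard_bind": re.compile(r"0\.0\.0\.0"),
}

def audit_text(text: str) -> list[str]:
    """Return the names of risk patterns found in one file body."""
    return [name for name, pat in RISK_PATTERNS.items() if pat.search(text)]

def audit_tree(root: str, globs=("*.sh", "*.yml", "*.yaml")) -> dict[str, list[str]]:
    """Walk a deployment tree and report files matching any risk pattern."""
    findings = {}
    for pattern in globs:
        for path in Path(root).rglob(pattern):
            hits = audit_text(path.read_text(errors="ignore"))
            if hits:
                findings[str(path)] = hits
    return findings
```

A file that trips both patterns at once is the exact shape of the entrypoint.sh above: isolation designed in, nullified at deploy time.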

Shadow AI Hits the Regulatory Record

On May 12, The Register reported that a US commercial bank self-disclosed to the SEC: employees fed customer data—including Social Security numbers—into an unauthorized third-party AI application outside the bank's approved systems. This isn't a framework CVE or a misconfigured server. The attack surface here is structural: employees routed sensitive work to an unapproved tool because the sanctioned path was slower than their deadline. Shadow AI just entered regulatory disclosure records, which means every CISO at federally regulated institutions must now assume this access pattern exists within their organization. When exfiltration happens through unsanctioned tooling, the trust boundary that failed wasn't a technical control—it was the absence of a viable approved alternative.

What to Test Now

For MCP deployments: probe every tool wrapping SQL surfaces for parameter injection using both Doris and StarTree patterns—UNION SELECT, stacked queries, and comment-based bypasses. Audit whether tool registration accepts admin overrides without authentication on Alibaba-pattern implementations. Scan deployment configurations for --auth none flags, 0.0.0.0 bindings, and SSE transport usage at scale. For governance: inventory unapproved AI tools your workforce already uses—the number is non-zero. Map each sanctioned tool to its maximum permitted data classification; refuse SSN, PHI, or PCI exposure on uncertified systems. Treat shadow AI as a gap in approved tooling alternatives, not an employee discipline problem.
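The data-classification refusal in the last point can be enforced mechanically at the tool boundary rather than by policy memo. A minimal sketch, assuming a gate in front of outbound AI-tool calls; the SSN regex and classification labels are illustrative, not a production DLP engine:

```python
import re

# SSN-shaped token (AAA-GG-SSSS) with basic invalid-range exclusions:
# area can't be 000, 666, or 9xx; group can't be 00; serial can't be 0000.
SSN_RE = re.compile(r"\b(?!000|666|9\d\d)\d{3}-(?!00)\d{2}-(?!0000)\d{4}\b")

def allowed_for_tool(payload: str, tool_max_class: str) -> bool:
    """Permit the payload only if the destination tool is certified for
    the data class present. tool_max_class: 'public' | 'internal' | 'pii'.
    Labels and the single-pattern check are a sketch, not a full DLP policy."""
    if SSN_RE.search(payload):
        return tool_max_class == "pii"
    return True
```

The value of a gate like this isn't catching every exfiltration path—it's making the sanctioned tool's data-class ceiling machine-enforced, so the slow-approved-path incentive described above at least can't route SSNs silently.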

The Bottom Line

The four events of May 2026 make one fact citable: MCP database servers ship the database's blast radius with the agent's trust model. When vendors patch half the problem (StarTree), declare 'not applicable' on unauthenticated access to retrieval tools (Alibaba), or disable authentication by default in sandbox components (FastGPT), operators absorb the asymmetry—and the consequences show up in SEC filings, not changelogs. The response isn't to rotate credentials when there's nothing to rotate. It's to identify which workflows route through vulnerable surfaces and revoke the trust those workflows assumed they had.