The Model Context Protocol has gone from a niche developer tool to the backbone of AI agent tooling in under two years. According to DEV.to author Andrew (@yondoodx), MCP SDK downloads jumped from roughly 2 million per month at launch in November 2024 to 97 million by March 2026—a nearly 50x increase that reflects how seriously the industry is taking agent-to-tool connectivity. The public registry ballooned from 1,200 servers in Q1 2025 to over 9,400 by April 2026. If you haven't shipped an MCP server yet, you probably will soon. And if you've been following the standard tutorials, you're likely building a security liability without realizing it.

Where Tutorials Fall Short

Almost every publicly available MCP example shares the same gaps: one transport (usually stdio), a single tool with no real schema validation, no authentication layer, and error handling that amounts to throwing raw exceptions. This approach works fine for a local demo. It fails spectacularly when you deploy to Railway, Cloudflare Workers, or any shared infrastructure where strangers can hit your endpoints. The problem isn't that the tutorials are wrong—it's that they're optimized for learning, not production readiness. Andrew breaks down five specific gotchas he encountered shipping MCP servers for client work, each paired with a fix he has since standardized across all his projects.

Input Validation Happens Inside Your Handlers

The first misconception catches most developers off guard: registering a tool with a Zod schema doesn't automatically validate inputs at runtime. The schema exists as metadata for the client—the server handler still receives whatever the model decides to send, which could be null, wrong types, or injected keys. Andrew's fix is straightforward but essential: explicitly call your schema's parse() method inside every handler and return a structured MCP error if validation fails. This single line of defensive code prevents entire classes of runtime crashes and unexpected behavior that would otherwise confuse your AI client and generate opaque retry loops.

Path Traversal Deserves Priority, Not Neglect

File tools are among the most common MCP server use cases, and they're also the easiest to exploit. If your implementation accepts relative path traversal sequences like ../../../etc/passwd, you've turned your AI assistant into a data exfiltration vector. Andrew warns against sandboxing with regex patterns—it's the wrong tool for the job. Instead, leverage the filesystem's own path resolver: resolve both the base directory and the requested file, then verify the result starts with your base plus a separator character. The separator check is critical because /tmp/sandbox-evil/secret.txt technically starts with /tmp/sandbox. Test your implementation against .., ../.., absolute paths to system files, and symlink attacks before you ship anything that touches the filesystem.
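A minimal sketch of the resolver-based check, using only Node's path module; the base directory and function name are illustrative:

```typescript
import * as path from "node:path";

// Assumed example base directory for the sandbox.
const SANDBOX_DIR = path.resolve("/tmp/sandbox");

function resolveInsideSandbox(requested: string): string {
  // Let the path resolver normalize "..", ".", and absolute paths for us.
  const resolved = path.resolve(SANDBOX_DIR, requested);
  // The separator check matters: "/tmp/sandbox-evil/secret.txt" starts with
  // "/tmp/sandbox", but comparing against base + path.sep closes that hole.
  if (resolved !== SANDBOX_DIR && !resolved.startsWith(SANDBOX_DIR + path.sep)) {
    throw new Error(`path escapes sandbox: ${requested}`);
  }
  return resolved;
}
```

Note this handles traversal sequences and absolute paths but not symlinks; to cover the symlink attacks mentioned above, real code would additionally resolve the target with fs.realpath before the comparison.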

Dual Transport Support Is Non-Negotiable

stdio transport works beautifully for local development—it's what Claude Desktop and Cursor expect when you're iterating on a tool definition. But production deployments typically run over HTTP or SSE, especially on platforms like Railway, Cloudflare Workers, and Fly where stdio isn't an option. Andrew's pattern is elegant: maintain one tool registry with two transport entry points controlled by an environment variable. Your development workflow stays fast with local stdio connections while your CI/CD pipeline deploys the identical code base over HTTP transport to production. The tools themselves remain completely transport-agnostic, which is exactly how it should be.
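The shape of that pattern, sketched with stub transports so it stays self-contained: the MCP_TRANSPORT variable name and the stub functions are assumptions, and in a real server the stubs would be replaced by the SDK's stdio and HTTP/SSE transports wired to the same registry:

```typescript
type ToolHandler = (args: unknown) => { content: { type: string; text: string }[] };

// One registry, shared by every transport. Tools never know how they were reached.
const registry = new Map<string, ToolHandler>();
registry.set("echo", (args) => ({
  content: [{ type: "text", text: JSON.stringify(args) }],
}));

// Stand-in entry points; real code would construct the SDK's stdio transport
// here, or mount an HTTP/SSE transport behind your web framework.
function startStdio(tools: Map<string, ToolHandler>): string {
  return `stdio server with ${tools.size} tool(s)`;
}

function startHttp(tools: Map<string, ToolHandler>): string {
  return `http server with ${tools.size} tool(s)`;
}

// A single environment variable selects the transport; stdio is the default
// so local development needs no configuration.
function start(transport = process.env.MCP_TRANSPORT ?? "stdio"): string {
  return transport === "http" ? startHttp(registry) : startStdio(registry);
}
```

Locally you run the binary directly and Claude Desktop or Cursor speaks stdio to it; in CI/CD you set MCP_TRANSPORT=http and deploy the identical code.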

OAuth 2.1 Is Mandatory for HTTP Transport

If you're exposing an MCP server over HTTP without token validation, you have a wide-open API that anyone can call with whatever tools you've registered. The MCP specification moved to OAuth 2.1 Bearer tokens for remote servers, and this isn't optional—it's the difference between a controlled agent interface and an unauthenticated attack surface. Andrew implements middleware that extracts the Bearer token from the Authorization header, verifies it against a configurable token store, and returns a structured 401 response if validation fails. The beauty of this approach is that the tokenStore is an interface: swap in static tokens for development, JWT verification for staging, or a full authorization server for production without touching your handler code.

Structured Errors Prevent Retry Loops

AI models retry failed calls based on error responses. If you return errors as unstructured prose or raw exceptions, the model has no way to understand what went wrong and will happily retry the same broken operation indefinitely. Andrew's rule: always return errors as MCP content blocks with isError: true and a short, machine-readable reason field. The model reads this structure and adjusts its next attempt rather than spinning in circles. This single pattern can mean the difference between an agent that self-corrects and one that times out repeatedly while confusing everyone watching the logs.
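A minimal sketch of the pattern: a small helper that wraps a machine-readable reason into an MCP-style content block. The helper name, the reason/detail fields, and the divide tool are illustrative, not from the article:

```typescript
type McpToolResult = {
  isError?: boolean;
  content: { type: "text"; text: string }[];
};

// Wrap a short, stable reason code (plus optional human-readable detail)
// into a structured error the model can parse and act on.
function toolError(reason: string, detail?: string): McpToolResult {
  return {
    isError: true,
    content: [{ type: "text", text: JSON.stringify({ reason, detail }) }],
  };
}

// Usage inside a handler: return a structured error instead of throwing,
// so a failed call steers the model's next attempt rather than triggering
// a blind retry loop.
function divide(a: number, b: number): McpToolResult {
  if (b === 0) return toolError("division_by_zero", "b must be non-zero");
  return { content: [{ type: "text", text: String(a / b) }] };
}
```

The important design choice is keeping the reason field short and stable (an enum-like code), since that is what the model keys off when deciding how to adjust its next call.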

Key Takeaways

  • Always validate inputs explicitly inside handlers with Zod's parse() method—never assume the SDK does it for you
  • Use filesystem path resolution for sandboxing, not regex; include separator checks to prevent directory escape
  • Build dual transport support from day one: stdio for local dev, HTTP/SSE for production deployments
  • Implement OAuth 2.1 Bearer token validation for any HTTP-exposed MCP server—it's not optional
  • Return structured error responses with isError: true so AI models can self-correct rather than retry blindly

The Bottom Line

The gap between an MCP demo and a production-ready MCP server isn't wide—but it's full of subtle traps that tutorials happily teach the wrong way. These five patterns (input validation, path safety, dual transport, OAuth enforcement, and structured errors) are each 20-50 lines of code that take an afternoon to get right. The investment pays back every time your agent successfully handles a malformed request or blocks a path traversal attempt instead of leaking data. Start with these foundations. Your future self—and your users—will thank you.