

The Complicating Factors of Deploying MCP in the Enterprise

by Boris Kurktchiev Mar 20, 2026


About the author:

Boris Kurktchiev is a Field CTO at Teleport, known for his expertise in Zero-Trust identity solutions for cloud and AI, and for his contributions to the CNCF's Cloud Native AI working group.

Doyensec dropped a piece last week called The MCP AuthN/Z Nightmare, and I think anyone deploying MCP in production needs to read it.

Full disclosure: Teleport sponsored this research. We believe that MCP will be part of the core infrastructure for enterprise AI, and we want to invest in getting its security story right. Francesco Lacerenza and the Doyensec team put together what is probably the most technically rigorous public analysis of MCP's authentication architecture to date, complete with a massive sequence diagram mapping every injection point in the OAuth 2 flow.

Their conclusion? Certificate-based auth and mTLS are the path forward for enterprise MCP.

I obviously have opinions on that (I work at the company that builds exactly this), and we sponsored this research because the findings align with what I keep seeing in the field when companies try to deploy MCP today.

So instead of rehashing their article, I want to use it as a launching point to talk about something slightly different: what are the actual complicating factors when you try to deploy MCP in an enterprise? Not the theoretical attack taxonomy stuff, but the real friction you hit when you're sitting across from a CISO or an enterprise architect trying to get this into production.

MCP's auth story is still catching up

MCP launched in November 2024 with no authentication mechanism. Zero. It was stdio-only with implicit trust between client and server.

To be fair, the initial use case was local tooling: think Claude Desktop talking to a local filesystem. Nobody was putting this on a network.

Then it went remote, and things got interesting fast. The spec has been through five versions in about 13 months, each iteration adding significant security capabilities.

The March 2025 version introduced OAuth 2.1, but made the MCP server act as both the resource server and the authorization server (which, if you've done any enterprise IdP work, you know is a non-starter). The June 2025 version fixed that separation. Finally, the November 2025 version (the current stable) brought a whole new client registration mechanism, enterprise extensions, and mandatory PKCE.
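To make the "mandatory PKCE" point concrete, here is a minimal sketch of the PKCE S256 pairing that OAuth 2.1 requires of every client (per RFC 7636); the helper name is mine, not from the MCP spec:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> 43-char URL-safe verifier (within the spec's 43-128 range)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The client sends `challenge` (with code_challenge_method=S256) in the
# authorization request, then proves possession of `verifier` at token exchange,
# so an intercepted authorization code is useless on its own.
```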

The pace of change is genuinely impressive, and the MCP team deserves credit for how fast they've iterated. But rapid iteration also means the enterprise auth story is still maturing. There are real gaps, and the enterprises I work with are hitting them today.

The OAuth problem is structural, not an implementation issue

Here's the thing that the Doyensec article really nails: the problem isn't that people are implementing OAuth badly for MCP (though they are; 492 MCP servers were found on the open internet with zero auth, and Obsidian Security has reported multiple one-click account takeover vulnerabilities).

The problem is that OAuth was designed for a fundamentally different trust model.

OAuth assumes a human user who can read a consent screen, evaluate the requested scopes, and make an informed decision. MCP in enterprise contexts has a non-deterministic LLM deciding which scopes it needs at runtime.

Those are not the same thing. It's a genuinely hard design problem, and one that the spec is actively working through.

Enterprise-managed authorization still has open questions

The Enterprise-Managed Authorization Extension (JAG) aims to address this by decoupling consent from authorization, with enterprise policy deciding on behalf of the user.

Doyensec identified four specific problems with this approach that I think are worth paying attention to:

1. No access invalidation mechanism

When your agent starts misbehaving (prompt injection, scope abuse, whatever), there is no standardized way to revoke ID-JAG tokens. Every vendor ends up building their own recovery pattern. That's a nightmare for incident response.

2. LLM scope abuse without user consent

The LLM can autonomously request any scope permitted by enterprise policies, including scopes entirely irrelevant to the user's current task. In classic M2M environments, the human is directly mandating specific tasks. In MCP? The model decides. That is a fundamentally different trust posture.

3. Scope namespace collisions across IdPs

Multiple IdPs managing overlapping MCP Authorization Servers with identical scope names — such as files:read and create — create cross-access scenarios. If audience validation isn't strict (and the spec doesn't mandate it), a low-privilege token from Server B could get you into Server A.
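The fix on the resource-server side is strict audience validation. Here is a hedged sketch of the check (server URLs and claim shapes are illustrative, not from the spec) applied to an already-signature-verified token payload:

```python
# Hypothetical sketch: strict `aud` validation on a verified token payload.
# Server identifiers are illustrative.
EXPECTED_AUDIENCE = "https://mcp-server-a.internal"

def check_audience(claims: dict) -> bool:
    """Accept the token only if it was minted for *this* resource server."""
    aud = claims.get("aud")
    # Per RFC 7519, `aud` may be a single string or a list of strings
    audiences = [aud] if isinstance(aud, str) else (aud or [])
    return EXPECTED_AUDIENCE in audiences

# An identically named files:read scope minted for Server B
# must not unlock Server A:
token_for_b = {"aud": "https://mcp-server-b.internal", "scope": "files:read"}
token_for_a = {"aud": "https://mcp-server-a.internal", "scope": "files:read"}
assert not check_audience(token_for_b)
assert check_audience(token_for_a)
```

Because the spec doesn't mandate this check, every server that skips it becomes a potential cross-access path.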

4. ID-JAG replay amplification

A single ID-JAG can mint multiple access tokens. Each access token can invoke high-impact tools. The spec doesn't enforce single-use checks on the JTI claim, so the ID-JAG becomes a damage amplifier.
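What single-use JTI enforcement would look like, if the spec required it, is roughly this sketch (my own illustration; a real deployment would back it with a shared store like Redis with TTLs, not an in-process set):

```python
# Hypothetical sketch: single-use semantics on the JTI claim, which the
# spec does not currently enforce for ID-JAG token exchange.
class JtiReplayGuard:
    def __init__(self) -> None:
        self._seen: set[str] = set()

    def accept(self, claims: dict) -> bool:
        jti = claims.get("jti")
        if not jti or jti in self._seen:
            return False  # missing or replayed grant is rejected
        self._seen.add(jti)
        return True

guard = JtiReplayGuard()
grant = {"jti": "a1b2c3", "sub": "alice"}
assert guard.accept(grant)      # first exchange mints a token
assert not guard.accept(grant)  # replay refused: no amplification
```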

These aren't hypotheticals. These are open architectural decisions in a spec that's still being finalized, and they create real security gaps in the meantime.

Enterprise security lives in extensions, not the core spec

This is maybe the most important thing that doesn't get enough attention.

The 2026 MCP Roadmap, published last week by lead maintainer David Soria Parra at Anthropic, explicitly states that enterprise features "shouldn't make the base protocol heavier for everyone else" and that most enterprise capabilities will ship as extensions rather than core spec changes.

This is a reasonable design philosophy. You don't want to bloat a protocol with enterprise requirements that 90% of users don't need, and keeping the core spec lean makes adoption easier.

But the practical outcome is that the core spec has:

  • No mTLS or certificate-based auth: mTLS appears only in security best-practices guidance as a recommendation, not as a mechanism. It is not supported yet, though there is an open feature request tracking it.

  • No per-tool authorization: OAuth scopes authorize access to the MCP server, not to individual tools within it. If you can connect to the server, you can call any tool. In an enterprise with MCP servers exposing 20+ tools, this is way too coarse.

  • No audit trail specification: The roadmap acknowledges this as a gap. If you're in a regulated industry (and most of the enterprises I talk to are), you need structured audit logging for every MCP interaction. The spec doesn't define what that looks like.

  • No gateway behavior specification: Every enterprise deploying MCP behind an API gateway is rolling its own policy enforcement. Companies are cropping up daily to build MCP gateways, but there's no spec-level guidance on how they should behave.

Look at the OWASP MCP Top 10. Five of the ten identified risks are directly related to authentication and authorization. Half the threat model lives in areas the spec is still actively working to address.

The deployment brick wall

I spend most of my time talking to enterprise security and infrastructure teams about how to actually deploy this stuff.

Here’s the pattern I keep seeing. A team gets excited about MCP and builds a prototype with local servers where everything works great. Then they try to move to remote MCP servers in production and hit a brick wall of auth complexity.

Doyensec’s scary sequence diagram (the one where "every step is an injection point") is basically a visual representation of this wall.

Understanding Teleport’s approach to MCP deployment

Teleport's approach is not to play the OAuth game at all for MCP. Instead of trying to fix the token exchange chain, Teleport sits as a protocol-level proxy and handles identity, authentication, authorization, and audit at the infrastructure layer.

The practical mechanics work like this:

  1. MCP servers register as Teleport applications.
  2. The tbot agent authenticates using platform-signed identity documents (AWS IAM, Kubernetes API server, GCP, Azure, TPM), ensuring no shared secrets are used.
  3. After joining, it holds short-lived X.509 certificates that auto-rotate.

When a user or agent connects to an MCP server through Teleport, the proxy handles egress authentication using CA-signed JWTs that carry identity, roles, and traits. The client never directly negotiates OAuth with the MCP server.

That scary Doyensec sequence diagram? Most of it goes away.

The part I think matters most is the two-tier MCP RBAC model, where:

  • Tier 1 controls which MCP servers you can connect to.
  • Tier 2 controls which specific tools within that server you can invoke, using literal names, globs (read_*), or regex (^(get|query|list)_.*$).

This is the per-tool authorization that the MCP spec doesn't have.
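The Tier 2 matching logic can be sketched in a few lines. This is my own simplified illustration of literal/glob/regex rule evaluation with deny-by-default, not Teleport's actual role engine or rule syntax:

```python
import fnmatch
import re

# Hypothetical allow-list mirroring the rule styles above:
# literal names, globs (read_*), and anchored regex.
ALLOWED_TOOLS = ["read_file", "read_*", r"^(get|query|list)_.*$"]

def tool_allowed(tool: str, rules: list[str]) -> bool:
    for rule in rules:
        if rule == tool:                     # literal match
            return True
        if fnmatch.fnmatchcase(tool, rule):  # glob match (read_*)
            return True
        try:
            if re.fullmatch(rule, tool):     # regex match
                return True
        except re.error:
            continue
    # Deny by default: any tool not explicitly allowed is blocked
    return False

assert tool_allowed("read_config", ALLOWED_TOOLS)
assert tool_allowed("list_pods", ALLOWED_TOOLS)
assert not tool_allowed("slack_post_message", ALLOWED_TOOLS)
```

The important property is the final `return False`: an empty rule set allows nothing, which is exactly the onboarding posture described below.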

New MCP servers onboard in a deny-by-default posture. All tools start denied. You can see the server and list the tools, but you can't invoke anything until a role policy explicitly allows it. This is the inverse of how most MCP implementations work, where adding a server immediately exposes every tool to every connected client.

For the JAG consent problem — specifically, where the LLM non-deterministically selects scopes — tool-level RBAC renders it irrelevant. Even if a model requests slack_post_message, the request is denied unless the role policy explicitly permits it. The authorization decision is made at the proxy layer based on cryptographic identity, not at runtime based on what the LLM decides to ask for.

Every MCP JSON-RPC request is logged as a structured audit event, including tool name, input parameters, client identity, authorization decision, timestamp, and session context. That's not an extension or a recommendation. It's the default behavior.
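For a sense of what such an event carries, here is a hedged sketch of a structured audit record with the fields listed above; the field names and event type are illustrative, not Teleport's actual event schema:

```python
import json
from datetime import datetime, timezone

# Hypothetical audit-event shape for one MCP tool invocation.
def audit_event(tool: str, params: dict, identity: str,
                decision: str, session: str) -> str:
    event = {
        "event_type": "mcp.tool.call",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool_name": tool,
        "input_parameters": params,
        "client_identity": identity,
        "authorization_decision": decision,
        "session_id": session,
    }
    return json.dumps(event)  # one JSON line per request, ready for a SIEM

line = audit_event("read_file", {"path": "/etc/config"},
                   "alice@example.com", "allow", "sess-42")
```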

The honest version of where MCP stands

I want to be straight about something: MCP is still early.

The spec is actively evolving. The SEP pipeline has proposals for DPoP (proof-of-possession), workload identity federation, and attested client registration. Dick Hardt, co-author of OAuth 2.1, has an alternative proposal for server-side authorization management with HTTP message signatures. These could meaningfully improve the landscape.

But "could improve" and "is ready for enterprise production" are different things.

The MCP spec explicitly defers enterprise security to extensions with no dedicated working groups. Meanwhile, Teleport's 2026 survey of infrastructure and security leaders found that 67% still rely on static credentials for AI systems, and that over-privileged AI systems drive a 4.5x higher incident rate than least-privileged deployments.

The enterprises I work with can't wait for the spec to mature. They need to deploy MCP now with the security properties they already require for everything else: cryptographic identity, least-privileged access, comprehensive auditing, and centralized policy enforcement.

The Doyensec article's conclusion lands on certificate-based auth and mTLS as the answer. I think that's right — not because I work at Teleport, but because we've spent a decade building exactly this for SSH, Kubernetes, databases, and applications.

One more thing. I want to give credit where it's due.

The Anthropic team and the MCP maintainers have been genuinely responsive to community feedback throughout this process. The spec has improved dramatically in a short time; the SEP process is open and moving. Doyensec's PR to the ext-auth repo, with updated security considerations, was engaged with constructively. That kind of openness matters, especially for a protocol this early.

We sponsored the Doyensec research because we think MCP is important enough to invest in getting it right, and the maintainers' responsiveness is a big part of why we're confident in that bet.

Further reading:

Doyensec: The MCP AuthN/Z Nightmare
Teleport Agentic Identity Framework
Teleport MCP Access, Streamable HTTP
Enterprise-Managed Authorization Extension Draft
