
Model Context Protocol (MCP)

Learn key Model Context Protocol (MCP) definitions, how it works, personal and enterprise use cases, and critical security limitations.

AUTHOR: Jack Pitts, Teleport

What is Model Context Protocol (MCP)?

Model Context Protocol (MCP) is an open standard that allows large language models (LLMs) to securely connect to tools, services, and data. MCP facilitates interoperability between AI models and operational systems by standardizing how models invoke actions and retrieve information. 

Originally introduced by Anthropic in 2024, it has since evolved into an open-source specification with growing support across the AI and developer ecosystems. Dubbed the "USB-C for AI", MCP is designed to provide a universal, structured, and vendor-neutral interface for connecting models to data sources and systems.

 


Core Concepts of MCP

As the name suggests, Model Context Protocol is a protocol that enables large language models (LLMs) to securely and consistently access tools without requiring custom code or plugins for each model.

MCP solves the integration and orchestration challenges that arise when AI systems must interact with diverse tools, services, and environments. It replaces ad hoc, model-specific logic with a consistent and reusable framework, eliminating the inefficiencies of building and maintaining multiple one-off integrations.

Conceptually, MCP draws comparisons to an Application Programming Interface (API), though the two serve different purposes. While APIs facilitate communication between software components, MCP facilitates communication between AI models and external systems. Both, however, standardize integration and interoperability while abstracting away a degree of implementation complexity for simplified development.

This simplification is foundational for developing agentic AI systems, in which AI models are not limited to generating text but can also perform defined tasks with context from connected systems.

Architecture

MCP's architecture is based on a client-server model consisting of hosts, clients, and servers.

The host is the environment that runs the language model and MCP client, and can establish connections to multiple servers. Individual MCP clients maintain 1:1 connections to servers. Each server exposes structured, discoverable capabilities for the client to access and execute. This is illustrated in the following diagram.

Diagram of MCP host, client, and server architecture.

These capabilities are organized into resources, tools, and prompts.

Resources

Read-only interfaces that return structured data from internal or external systems (e.g., databases, file stores); they provide context but do not perform actions.

Tools

Executable endpoints that perform side effects, such as API calls, updates, or workflows, based on structured input provided by the model.

Prompts

Reusable templates or workflows that define how the model should interact with the server, enabling complex task orchestration and consistent input-output formatting.

These three types of capabilities enable models to dynamically reason about available options, retrieve relevant context, and invoke appropriate actions. 
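As a concrete sketch, a server's capability listing might look like the following structured data. The specific resource URI, tool, and prompt entries here are invented for illustration; only the three capability categories come from the protocol itself.

```python
import json

# Hypothetical capability listing an MCP server might expose.
# The "db://orders/recent" resource, "create_ticket" tool, and
# "triage_issue" prompt are made-up examples.
capabilities = {
    "resources": [
        {"uri": "db://orders/recent", "description": "Read-only view of recent orders"}
    ],
    "tools": [
        {
            "name": "create_ticket",
            "description": "Open a support ticket",
            "inputSchema": {
                "type": "object",
                "properties": {"title": {"type": "string"}},
                "required": ["title"],
            },
        }
    ],
    "prompts": [
        {"name": "triage_issue", "description": "Template guiding issue triage"}
    ],
}

print(json.dumps(capabilities, indent=2))
```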

How it works

When an AI model connects to an MCP server, the first thing it does is ask what tools are available. The server replies with a list of tool names, descriptions, and functions the model can call.

Once the model picks a tool, it sends a structured request using JSON. The server processes the request just like an API would, and sends back a response in the same format. 

This consistent flow makes it easier to monitor, debug, and secure compared to multiple ad hoc model integrations. A full operational sequence may consist of:

  1. Tool discovery: The client requests a list of available tools from the server.

  2. Schema evaluation: The server returns structured metadata describing each tool’s name, parameters, permissions, and expected outputs.

  3. Invocation: The client submits a JSON-formatted request for a selected tool.

  4. Response handling: The server processes the request and returns a response in a structured format for further reasoning or action.
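The steps above can be sketched as JSON-RPC 2.0 messages. The `tools/list` and `tools/call` method names follow MCP convention; the `get_weather` tool and its fields are invented for illustration.

```python
import json

# Step 1: the client asks the server what tools exist.
discovery_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Step 2: the server replies with structured metadata for each tool.
discovery_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Fetch current weather for a city",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

# Step 3: the model picks a tool and the client invokes it with
# structured arguments matching the declared input schema.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Oakland"}},
}

print(json.dumps(call_request))
```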


Implementing MCP

Implementation begins by defining tools with JSON schemas, exposing them via a lightweight server, and handling structured requests using JSON-RPC.

At its core, MCP relies on defining tools using JSON schemas. Each tool schema specifies the tool’s name, a description, input parameters, output format, and access scope. These schemas act as a contract between your tool and any language model that connects to it so the model knows exactly how to call the tool and what to expect in return.

To implement MCP, you’ll need to:

  1. Create an MCP server that exposes your tools in a discoverable format.

  2. Define your tools using MCP-compliant JSON schemas.

  3. Handle requests using JSON-RPC 2.0 over HTTP, STDIO, or server-sent events (SSE).

  4. Return structured responses that conform to your defined schema.
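The four steps above can be sketched as a single dispatch function. Real deployments would use an official SDK; this toy handler only shows the JSON-RPC 2.0 request/response shape, and the `echo` tool is hypothetical.

```python
# Toy registry of tools this server exposes in a discoverable format.
TOOLS = {
    "echo": {
        "description": "Return the input text unchanged",
        "inputSchema": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"],
        },
    }
}

def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC 2.0 request to tool discovery or invocation."""
    if request.get("method") == "tools/list":
        result = {"tools": [{"name": n, **meta} for n, meta in TOOLS.items()]}
    elif request.get("method") == "tools/call":
        params = request.get("params", {})
        if params.get("name") == "echo":
            result = {"content": params["arguments"]["text"]}
        else:
            return {"jsonrpc": "2.0", "id": request.get("id"),
                    "error": {"code": -32601, "message": "Unknown tool"}}
    else:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "Unknown method"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

resp = handle({"jsonrpc": "2.0", "id": 7, "method": "tools/call",
               "params": {"name": "echo", "arguments": {"text": "hi"}}})
print(resp["result"]["content"])  # hi
```

In practice the same `handle` function would sit behind whichever transport you choose (HTTP, STDIO, or SSE), since MCP keeps the message format identical across transports.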

MCP SDKs are available in popular languages like Python, TypeScript/JavaScript, Java, and C#. These libraries can help with registering tools, validating requests and responses, managing schema metadata, and implementing communication protocols.

Tool schemas can be versioned independently, which helps support long-term compatibility and reduces restrictive coupling between systems and models. You can organize tools into logical groups, attach metadata for discovery, and manage them through configuration files or runtime introspection.

While MCP defines the structure and flow of communication, it intentionally leaves security, access control, and observability to the implementation. This increases flexibility, but also means you’ll need to layer in authentication, permissions, and logging as needed for your use case.

Creating an MCP server

An MCP server is any service that implements the MCP schema contract and exposes callable tools in a discoverable format. Each tool is declared in a formal JSON schema that includes:

  • Tool name and description

  • Input parameters (type, constraints, optional/required)

  • Output format specification

  • Access scope (public, authenticated, role-restricted)
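A declaration covering the fields above might look like the following. Field names beyond `name`, `description`, and `inputSchema` (such as `version`, `outputSchema`, and `scope`) are assumptions for this sketch, not mandated names from the specification.

```python
# Illustrative tool declaration; the "restart_service" tool is hypothetical.
tool_schema = {
    "name": "restart_service",
    "description": "Restart a named background service",
    "version": "1.2.0",  # versioned independently of the hosting server
    "inputSchema": {
        "type": "object",
        "properties": {
            "service": {"type": "string"},                    # required
            "force": {"type": "boolean", "default": False},   # optional
        },
        "required": ["service"],
    },
    "outputSchema": {
        "type": "object",
        "properties": {"status": {"type": "string"}},
    },
    "scope": "role-restricted",  # public | authenticated | role-restricted
}
```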

Servers can be implemented as standalone processes, microservices, or modules integrated into existing systems. 

Tool schemas are versioned independently of the hosting server logic, enabling rapid iteration without coupling to LLM host logic. Tools can be automatically registered via configuration files, environment discovery, or runtime introspection.

Developers commonly wrap the following resource classes with MCP servers:

  • File systems (e.g., local disk access, document parsing)

  • Cloud APIs (e.g., AWS Lambda, GCP Cloud Functions)

  • SaaS platforms (e.g., Salesforce, Jira, GitHub)

  • Databases (e.g., PostgreSQL, MongoDB, Elasticsearch)

  • Infrastructure services (e.g., Kubernetes, CI/CD pipelines)

Deployment models

MCP is flexible in where and how you deploy it. 

It can run locally (e.g., as a background service on a developer machine) or in production environments (e.g., as a microservice behind an API gateway). The same protocol applies whether you're enabling a model to access a database, query logs, run shell scripts, or call external APIs.

In local deployments like personal computers or developer workstations, MCP servers often run as lightweight background services. This setup allows language models to perform tasks like reading documents, analyzing local code, or automating workflows without needing to connect to the internet.

For larger-scale deployments, MCP can run inside enterprise networks or cloud-based systems using containers, serverless functions, or traditional microservices. These servers may integrate with existing infrastructure like databases, cloud APIs, or internal platforms. Governing MCP with identity systems, logging tools, and access control layers can help manage security in these larger environments and track how tools are being used (and by whom).

This flexibility makes MCP a practical choice for both personal automation and complex enterprise applications. 

 


MCP Use Cases and Applications

MCP supports a wide spectrum of practical AI use cases, ranging from local automation to complex enterprise orchestration. Its design enables LLMs to move beyond passive interaction toward secure, actionable behavior across systems.

Personal and developer use

On individual machines, MCP allows models to work with local files, applications, and development tools without internet access. This gives users a way to automate everyday tasks or extend their workflows using natural language.

These actions are powered by lightweight MCP servers running locally, which translate model requests into system commands or API calls. This setup is simple, fast, and ideal for environments like IDEs, terminals, or offline workstations.

For example, using the MCP-wrapped resources mentioned previously, a developer can ask a model to perform actions like:

  • Search a folder for recently edited documents and summarize them.

  • Scan a codebase to find outdated dependencies.

  • Automatically update a spreadsheet with totals and charts.

  • Run a script or shell command to clean up files or install packages.
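The first task above, finding recently edited files, could be backed by a small local tool like this. The function name and behavior are invented for illustration; a real local MCP server would expose it through a tool schema.

```python
import os
import time

def recently_edited(directory: str, days: int = 7) -> list[str]:
    """Return paths under `directory` modified within the last `days` days."""
    cutoff = time.time() - days * 86400
    hits = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) >= cutoff:
                hits.append(path)
    return sorted(hits)
```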

Enterprise use

In larger environments, MCP enables AI agents to securely interact with multiple business systems at once. This is especially useful in enterprise settings where processes span across different teams and platforms.

Because enterprise tasks involve multiple steps and systems, MCP's structure simplifies coordination. This can include chaining outputs between tools, caching intermediate results, and managing permissions for each step. 

Real-world enterprise use cases of MCP

Customer support automation

An AI assistant can look up a customer’s account in a CRM, check their ticket history from a helpdesk tool, and draft a response as part of a single workflow.

Data analytics

A model can pull records from a database, perform analysis, and present summaries or charts directly to a dashboard or internal tool.

DevOps support

AI agents can inspect server health, suggest configuration updates, or monitor logs across environments like Kubernetes or CI/CD pipelines.

Security response

In a security incident, a model can review logs, identify suspicious activity, and trigger a containment workflow using predefined tools through MCP.
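The chaining pattern behind these workflows can be sketched in a few lines, with each function standing in for an MCP tool invocation. All function bodies, patterns, and log data here are invented for illustration.

```python
# Patterns a log-review tool might flag; purely illustrative.
SUSPICIOUS = ("failed login", "privilege escalation")

def review_logs(lines):
    """Stand-in for a log-query tool: flag lines matching known patterns."""
    return [line for line in lines if any(p in line.lower() for p in SUSPICIOUS)]

def contain(findings):
    """Stand-in for a containment tool: return the actions to trigger."""
    return [f"isolate-host:{i}" for i, _ in enumerate(findings)]

logs = ["user alice logged in", "Failed login for root x50"]
findings = review_logs(logs)     # output of one tool...
actions = contain(findings)      # ...feeds the next as structured input
print(actions)
```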

 


Security and Governance

While MCP defines the structure and flow of communication, it intentionally leaves the responsibilities of security and governance to the individual implementation. 

This design tradeoff is openly stated in the MCP specification. To remain transport-agnostic and model-neutral, MCP focuses on defining a common protocol for structured requests and responses, while leaving enforcement and governance concerns to external systems. 

This means developers must treat MCP as a low-level interface and explicitly architect security boundaries around it, much like exposing an internal API to external callers. And while this design flexibility can be beneficial for individual use cases, addressing these gaps is critical for enterprise MCP implementations.

The table below highlights the primary MCP control gaps, the risks they introduce, and provides recommendations for control layers to secure enterprise implementations.

 

| MCP Security Control Gap | Risk Introduced | Recommended Controls to Layer |
| --- | --- | --- |
| Servers often rely on hardcoded API keys or tokens for authentication. | Long-lived access that is hard to monitor or revoke; vulnerable to leaks or misuse. | Use short-lived, signed credentials (e.g., certificates, JWTs); implement token expiration and rotation. |
| MCP does not verify the identity of the calling model or client. | Any client with access can invoke tools; no way to verify or restrict based on source identity. | Enforce mutual TLS, API gateways with identity verification, or signed request tokens (e.g., OAuth, mTLS). |
| Access scopes like "role:admin" can be declared in schemas, but enforcement is left to the server. | Tools may be invoked by unauthorized clients; potential for privilege escalation and unintended behavior. | Apply external RBAC/ABAC engines; validate scope against caller identity at the server. |
| MCP has no built-in logging or telemetry for tool invocations. | No visibility into tool usage; difficult to detect misuse, perform forensics, or demonstrate compliance. | Integrate with observability stacks; log every invocation with context and identity. |
| The protocol lacks native support for time-based access, multi-party approval, or escalation processes. | High-risk actions can be triggered without oversight or escalation; lack of human-in-the-loop control. | Implement external approval workflows via access proxies, automation platforms, or policy engines. |
| Once a client has access, there's no built-in mechanism to revoke it or expire a session. | Persistent access once granted; no native way to revoke compromised clients or rotate secrets in real time. | Use session-bound tokens or certs with short lifetimes; integrate dynamic access provisioning systems. |
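The short-lived, signed-credential pattern recommended in the first and last rows can be sketched with standard-library primitives. This is a toy HMAC token with an embedded expiry, not Teleport's certificate mechanism or a full JWT implementation; the shared key and claim names are assumptions.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # assumption: a shared signing key for the sketch

def issue(client_id: str, ttl: int = 300) -> str:
    """Issue a signed token that expires after `ttl` seconds."""
    payload = json.dumps({"sub": client_id, "exp": time.time() + ttl}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify(token: str) -> bool:
    """Check the signature in constant time, then reject expired tokens."""
    body, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(body)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return json.loads(payload)["exp"] > time.time()

tok = issue("mcp-client-1")
print(verify(tok))  # True
```

Because every token carries its own expiry, a leaked credential ages out on its own instead of granting indefinite access, which is the core advantage over hardcoded API keys.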

 


How to Secure MCP

Teleport solves these security gaps by acting as an identity-aware access layer that governs how AI models and applications interact with MCP servers, applying zero-trust principles.

Short-lived, certificate-based credentials are issued to each client, and role-based access controls (RBAC) are applied to narrowly define which tools can be discovered and invoked, without disrupting the user experience. Detailed audit logs of every tool invocation, including the tool name, input parameters, and client identity, are captured for real-time monitoring and compliance.

Additionally, Teleport supports just-in-time access requests for tools that require elevated permissions, and integrates with policy engines such as OPA for fine-grained access decisions. This allows organizations to use MCP in environments that require traceability, policy enforcement, and minimal standing privileges.

Learn how Teleport is used to secure an enterprise MCP build for querying multiple databases in this real-world implementation example.

Frequently Asked Questions

What is MCP in simple terms?

Model Context Protocol (MCP) is a standard that lets AI models safely connect to tools and data. It helps models do real-world tasks by accessing external systems in a structured way.

What is the difference between an LLM and MCP?

An LLM is an AI model that generates text or answers based on input. MCP is a protocol that lets that model interact with external tools, services, and data.

What is model context?

Model context is the real-time information a model receives by calling external tools or retrieving structured data. It expands the model’s understanding beyond its static training.

Do I need to rewrite my existing APIs to use MCP?

No, you don’t need to rewrite your APIs. You can wrap existing APIs with MCP by describing them in a schema without changing their functionality.

What languages does MCP support?

MCP has official SDKs for Python, JavaScript/TypeScript, Java, and C#. Other languages are supported by community and open-source projects.

Does MCP handle security on its own?

MCP provides a structured communication layer but doesn’t enforce security by itself. Developers must implement identity checks, permissions, and audit logging separately.