MCP servers are programs designed to provide AI applications with a standardized way to interact with files, databases, APIs, and other systems.
A Model Context Protocol (MCP) server exposes structured capabilities (such as tools, resources, and prompts) to AI models through a predictable, schema-driven interface.
Like an API, an MCP server defines a contract for structured communication. However, MCP servers remove the complexity of bespoke connectors and fragile parsing logic by standardizing how models request and receive data or execute actions. This enables AI agents to work reliably with operational systems while maintaining a portable interface across hosts and environments.
MCP is often compared to APIs because both provide structured communication with external systems. However, while APIs are typically built for human- or service-driven requests, MCP servers are schema-bound, designed explicitly for AI-to-system interactions. Learn more about the differences between APIs and MCP in this article.
An MCP server is a program that provides AI applications with clearly defined capabilities through a standard protocol. It acts as a controlled gateway between the model and real-world systems, ensuring every action and piece of data is delivered in a structured, predictable, and safe way.
Think of an MCP server as a translator and gatekeeper for AI. It dictates exactly what the AI can do, what data it can see, and how to present that information so both sides understand it.
An MCP server declares what it can do when it starts, using three building blocks:
Tools: Actions the AI can perform, like “deploy a service,” “search flights,” or “send an email.”
Resources: Read-only data the AI can access for context, such as documents, logs, calendars, or databases.
Prompts: Predefined templates or workflows that guide the AI through complex tasks.
Each capability is described in a formal schema, so the AI knows exactly how to call it and what to expect in return.
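As a sketch, a tool declaration might look like the following. The tool name, description, and fields are illustrative (not from any real server), but the shape follows the schema-driven pattern described above.

```python
# Illustrative MCP tool declaration. The "search_flights" name and its
# fields are hypothetical; the structure mirrors how a server would
# describe a tool, including a JSON Schema for its inputs.
search_flights_tool = {
    "name": "search_flights",
    "description": "Search for flights between two airports.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "origin": {"type": "string"},
            "destination": {"type": "string"},
            "date": {"type": "string", "format": "date"},
        },
        "required": ["origin", "destination", "date"],
    },
}
```

Because the schema travels with the declaration, a host can tell the model exactly which arguments are required before any call is made.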
The server verifies that all requests comply with the declared input requirements and that all results are returned in the correct format. This ensures reliable interactions, prevents errors, and makes outputs predictable enough for automation and analytics.
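A minimal sketch of that verification step is shown below. Real servers typically run a full JSON Schema validator; this illustrative helper covers only two of its checks, required fields and undeclared fields.

```python
# Minimal, illustrative request check: verify that a call supplies every
# required argument and no undeclared ones. This is a sketch of the
# idea, not a complete JSON Schema validator.
def check_arguments(schema: dict, arguments: dict) -> list:
    errors = []
    for field in schema.get("required", []):
        if field not in arguments:
            errors.append(f"missing required argument: {field}")
    declared = schema.get("properties", {})
    for field in arguments:
        if field not in declared:
            errors.append(f"undeclared argument: {field}")
    return errors

schema = {
    "type": "object",
    "properties": {"customer_id": {"type": "string"}},
    "required": ["customer_id"],
}
print(check_arguments(schema, {"customer_id": "12345"}))  # []
print(check_arguments(schema, {"id": "12345"}))
```

A request that passes these checks can be dispatched with confidence that the handler will receive the arguments it declared.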
Only declared capabilities are accessible, and they can be limited by identity, role, or environment. Sessions are isolated, so different servers or users cannot see each other’s data. Sensitive information can be filtered, size-limited, or require explicit approval before access.
The server returns information in consistent formats, such as a list of flight options, a deployment summary, or a KPI report. This predictability makes results reliable and structured, so the AI can combine data from multiple sources without relying on guesswork.
All actions are processed through the MCP server, making it the single point of control for authentication, authorization, logging, and auditing. High-risk actions may require human approval, and every interaction can be recorded for compliance and investigation purposes.

MCP follows a host-client-server architecture to maintain clear and secure responsibilities.
The host is the AI application that launches or connects clients, aggregates context, and mediates consent. Each connection to a server uses one stateful client instance.
The client opens a session and negotiates capabilities like prompts, resources, tools, roots, and sampling. It maintains isolation across servers so one server cannot read unrelated context through the protocol. It also routes JSON-RPC requests, responses, and notifications.
The server provides capabilities through three primitives. Resources expose data via URIs, prompts publish reusable templates, and tools perform actions the model can call. The protocol requires explicit declaration of these capabilities during initialization.
The client sends an initialization request with the protocol version and capabilities.
The server responds with its own version and capabilities.
The client confirms readiness via an initialized notification.
This handshake ensures compatibility and sets the stage for the session to proceed.
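The three handshake steps can be sketched as JSON-RPC messages. The method names (`initialize`, `notifications/initialized`) come from the MCP specification; the protocol version string, ids, and capability sets here are example values.

```python
import json

# Illustrative MCP initialization handshake (JSON-RPC 2.0).
# Step 1: the client proposes a protocol version and its capabilities.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {"sampling": {}},
        "clientInfo": {"name": "example-host", "version": "1.0.0"},
    },
}
# Step 2: the server answers with its own version and capabilities.
initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2025-03-26",
        "capabilities": {"tools": {}, "resources": {}, "prompts": {}},
        "serverInfo": {"name": "example-server", "version": "1.0.0"},
    },
}
# Step 3: the client confirms readiness. Notifications carry no "id"
# because no response is expected.
initialized_notification = {"jsonrpc": "2.0", "method": "notifications/initialized"}

print(json.dumps(initialize_request, indent=2))
```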
Once the initial handshake between the client and server is complete, MCP defines a clear set of methods and message patterns for ongoing communication. This structure ensures that any compliant host and server can work together predictably, without the need for custom glue code.
These are the primary MCP methods that clients use to discover and interact with server capabilities:
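For instance, a client typically begins by enumerating what the server offers. The method names below are from the MCP specification; the request ids are arbitrary example values.

```python
import json

# Illustrative discovery requests: one list call per capability type.
# The method names follow the MCP spec; the ids are example values.
discovery_requests = [
    {"jsonrpc": "2.0", "id": i, "method": method, "params": {}}
    for i, method in enumerate(
        ["tools/list", "resources/list", "prompts/list"], start=1
    )
]
for request in discovery_requests:
    print(json.dumps(request))
```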
Servers can return plain text, structured JSON, media files, or links to resources. Returning structured content allows hosts to reliably parse and combine outputs without relying on brittle text parsing.
MCP tool calls follow a standardized JSON-RPC format, allowing clients to pass well-defined arguments and receive predictable outputs.
Here’s a basic example of calling a tool where:
name is the tool’s registered identifier.
arguments must match the tool’s JSON Schema.
The response can include isError: true to indicate a domain-specific error, even if the protocol exchange itself was valid.
{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "get_customer",
    "arguments": { "customer_id": "12345" }
  }
}

MCP defines consistent methods for discovering and retrieving read-only data and reusable workflows:
Resources can be static or dynamic. Some servers support subscriptions to notify clients when resource data changes.
Resource templates support parameterized URIs, allowing for argument completion in complex data paths.
Prompts may include embedded resources and typed arguments, letting hosts supply context directly to a model.
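As a sketch, retrieving a resource and a prompt might look like the messages below. The method names (`resources/read`, `prompts/get`) follow the MCP specification; the URI, prompt name, and arguments are hypothetical.

```python
# Illustrative retrieval messages. "resources/read" and "prompts/get"
# are MCP method names; the URI and prompt name are made-up examples.
read_resource = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "resources/read",
    "params": {"uri": "file:///logs/app.log"},
}
get_prompt = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "prompts/get",
    "params": {
        "name": "summarize_incident",
        "arguments": {"severity": "high"},
    },
}
```

Resources are addressed by URI, while prompts are addressed by name and accept typed arguments, which is what lets hosts inject context directly into a workflow.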
Structured responses in MCP let clients process results programmatically without guesswork:
Multiple content items can be returned for a single call.
A structuredContent object provides machine-readable data.
Optional annotations help hosts decide how to present or prioritize results.
Links can point to URIs for later retrieval, while embedded resources deliver small payloads directly in the response.
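Putting these pieces together, a tool result for the earlier `get_customer` call might look like this sketch: human-readable content, a machine-readable `structuredContent` object, and the `isError` flag for domain-level failures. All field values are illustrative.

```python
# Illustrative tool-call result combining the response features above.
# Field names follow the MCP result shape; the values are examples.
tool_result = {
    "jsonrpc": "2.0",
    "id": 42,
    "result": {
        # Human-readable content items; multiple items are allowed.
        "content": [
            {"type": "text", "text": "Customer 12345: Jane Doe, status=active"}
        ],
        # Machine-readable payload the host can merge programmatically.
        "structuredContent": {
            "customer_id": "12345",
            "name": "Jane Doe",
            "status": "active",
        },
        # True would signal a domain error even though the JSON-RPC
        # exchange itself succeeded.
        "isError": False,
    },
}
```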
MCP maintains session state to enable resumable and cancellable operations:
Streamable HTTP sessions use Mcp-Session-Id, allowing clients to resume after network interruptions.
Either side can send progress updates for long-running tasks, and requests can be cancelled via notifications/cancelled.
Sessions end cleanly: HTTP servers may return a 404 error for expired sessions, while STDIO sessions terminate when the client closes the server process.
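A sketch of the session mechanics: the `Mcp-Session-Id` header and the `notifications/cancelled` method are defined by the protocol, while the session id, request id, and reason below are example values.

```python
# Illustrative Streamable HTTP request headers for a resumed session.
# Mcp-Session-Id correlates requests to a session; the id value here
# is made up.
resume_headers = {
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
    "Mcp-Session-Id": "3f8a2c1e-example-session",
}

# A cancellation is a notification referencing the in-flight request id.
cancel_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/cancelled",
    "params": {"requestId": 42, "reason": "user aborted"},
}
```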
MCP defines two official transports to support both local and remote use cases:
STDIO: The server runs as a local subprocess, exchanging JSON-RPC messages over standard input/output. This is fast, secure, and avoids network dependencies—ideal for developer tools or offline workflows.
Streamable HTTP: Uses HTTPS for requests and Server-Sent Events (SSE) for streaming results. Supports session resumption, event replay, and requires origin validation. OAuth is recommended for authentication.
STDIO relies on process isolation and local credentials, while HTTP adds transport-layer security and works well for remote, multi-user deployments.
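Over STDIO, messages are newline-delimited JSON-RPC written to the server process's standard input and read from its standard output. The sketch below shows only that framing, not a real server; `ping` is an MCP method name, and the id is an example value.

```python
import json

# Illustrative STDIO framing: one JSON-RPC message per line, UTF-8
# encoded. A host would write frames to the subprocess's stdin and
# read frames back from its stdout.
def frame(message: dict) -> bytes:
    return (json.dumps(message) + "\n").encode("utf-8")

def unframe(line: bytes) -> dict:
    return json.loads(line.decode("utf-8"))

message = {"jsonrpc": "2.0", "id": 1, "method": "ping"}
assert unframe(frame(message)) == message  # round-trip is lossless
```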
An MCP server can front databases or internal APIs, allowing the host to retrieve or update records through a single, permissioned interface. Capability negotiation, per-call approval, and host-side logging replace the need for embedding static credentials directly into models, reducing the risk of credential leaks while centralizing access control.
A host can connect to multiple servers simultaneously, such as a file system server, a Git server, and a CRM server. Each connection is isolated and described by its capabilities, allowing the host to aggregate context from multiple sources without enabling one server to read another server’s data. This enables safe cross-system reasoning for complex workflows.
MCP servers can wrap third-party APIs as model-callable tools with typed inputs and outputs. Structured results and consistent error handling improve reliability, letting developers debug integrations without accounting for host-specific quirks.
A DevOps MCP server might provide tools for deploying services, scaling infrastructure, or running health checks. Hosts can enforce guardrails, such as role-based approvals, before any production-level change is executed.
Security-focused MCP servers can query SIEM logs, run threat-hunting queries, trigger endpoint isolation, or update firewall rules. All actions are schema-bound and logged, which helps meet compliance requirements for incident response and management.
An MCP server could connect to a ticketing platform, enabling AI to look up case histories, suggest responses, or open new tickets. Capability scopes ensure the assistant can’t close tickets or change account data without explicit human approval.
A business intelligence MCP server might expose resources for querying KPIs, downloading reports, or generating dashboards from a data warehouse. Typed outputs enable hosts to merge data from different reports into a single, coherent narrative for end-users.
Content-oriented MCP servers can interact with CMS platforms, digital asset managers, or design tools. This allows AI agents to retrieve templates, update articles, or request image assets within clearly defined schemas and access scopes.
An MCP server is effectively a new surface, so every capability it exposes must be treated as a potential attack vector. Without proper controls, tools could be invoked for unauthorized actions, resources could leak sensitive data, and prompts could unintentionally reveal private context.
The trust boundary between host and server is critical. If either side is compromised, the other could be abused to escalate privileges or exfiltrate data.
The MCP specification leaves most operational security to implementers: beyond recommending OAuth-based authorization for HTTP transports, it does not prescribe credential rotation, session revocation, logging, or approval workflows. This means implementers must design and enforce their own controls.
Reducing MCP server risks requires a layered security approach that addresses identity, validation, monitoring, and governance. Best practices typically include:
Grant each AI tool, resource, and integration only the access strictly necessary for its function. Limit sensitive operations to specific roles or approval workflows to enforce the principle of least privilege.
Implement mTLS or OAuth 2.1 with short-lived, scoped tokens. Require both client and server to verify identity before exchanging data.
Validate inputs and outputs against strict schemas to prevent malformed data, malicious payloads, or unapproved fields from passing through.
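A sketch of that validation in the outbound direction: type-check declared fields and reject anything undeclared. Production deployments would use a full JSON Schema validator; this illustrative helper shows the intent.

```python
# Illustrative strict-schema check for a tool's output: enforce declared
# types and reject unapproved fields. A sketch, not a full validator.
TYPE_MAP = {"string": str, "number": (int, float), "boolean": bool}

def validate_output(schema: dict, payload: dict) -> bool:
    declared = schema.get("properties", {})
    if set(payload) - set(declared):  # unapproved fields present
        return False
    for field, rules in declared.items():
        if field in payload and not isinstance(payload[field], TYPE_MAP[rules["type"]]):
            return False
    return True

schema = {"properties": {"status": {"type": "string"}, "count": {"type": "number"}}}
print(validate_output(schema, {"status": "ok", "count": 3}))    # True
print(validate_output(schema, {"status": "ok", "debug": "x"}))  # False
```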
Watch for unusual patterns in tool usage, data access, or execution timing. Flag anomalies for investigation before they cause damage.
Record every request, response, and approval decision, including timestamps, identities, and originating systems. Use logs for compliance verification and post-incident analysis.
Run high-risk or regulated operations in segregated environments or dedicated client instances, thereby limiting the potential impact if compromised.
Require explicit human approval for critical or irreversible actions, such as modifying production systems or changing security configurations.
Explore the MCP Access Getting Started Guide to learn how to use Teleport to provide secure connections to your MCP (Model Context Protocol) servers while improving both access control and visibility.
Are MCP servers necessary for AI?
MCP servers are not required for AI to function, but they provide a standardized way for models to interact with external systems. MCP servers remove the need to build custom connectors and improve the portability of AI integrations.
Is an MCP server an actual server?
An MCP server is not a physical server. Instead, it is a program that can run locally or remotely, acting as a controlled gateway to manage communication between AI clients and systems.
Are MCP servers safe to use?
MCP servers are secure when implemented with proper security measures, such as identity verification, schema validation, and session isolation. Similar to APIs, security depends on how capabilities are defined, constrained, and monitored.
What are the security risks of MCP servers?
MCP servers can expose security risks, including unauthorized tool execution, leakage of sensitive data from exposed resources, or misuse of prompts to reveal unintended information. If either the host or server environment is compromised, attackers may attempt to escalate privileges or exfiltrate data through the protocol.
How do you secure an MCP server?
Recognized best practices for securing MCP servers include enforcing least privilege, utilizing short-lived authentication mechanisms such as mTLS or OAuth 2.1, and validating all inputs and outputs against strict schemas. Recommended protections include real-time monitoring, audit logging, and human-in-the-loop approvals for sensitive actions.