What is MCP? The Model Context Protocol explained
Your AI tools are blind.
They can write emails. Summarize documents. Generate code. But ask them to pull your latest pipeline data from HubSpot, cross-reference it with Stripe billing, and flag accounts at churn risk?
Nothing. They can't see your tools.
That's the problem MCP fixes.
What is MCP?
MCP stands for Model Context Protocol. It's an open standard that connects AI models to external tools and data sources.
Think of it as USB-C for AI. Before USB-C, every device had its own charger, its own cable, its own port. MCP does the same thing USB-C did — one standard interface that works everywhere.
Anthropic released MCP in late 2024. Since then, it has become the de facto standard for connecting AI agents to real-world systems.
Here's the core idea: instead of building custom API integrations for every AI model and every tool, you build one MCP server. That server exposes your tool's capabilities in a format any AI model can understand.
One connection. Any model. Any tool.
How MCP works
MCP uses a client-server architecture. Three pieces:
MCP hosts
The AI application your team actually uses. Claude Desktop, Cursor, Claude Code, or your own custom AI agent. The host is where someone asks a question or gives an instruction.
MCP clients
The connectors that live inside the host. Each client maintains a one-to-one connection with a specific MCP server. The client handles the protocol — authentication, message formatting, error handling.
MCP servers
This is where it gets interesting. An MCP server wraps a specific tool or data source. It exposes three things:
- Tools — actions the AI can take. "Create a deal in HubSpot." "Send an invoice in Stripe." "Close a ticket in Zendesk."
- Resources — data the AI can read. "Show me all deals closing this month." "Pull the latest NRR number." "Get this customer's support history."
- Prompts — pre-built templates for common workflows. "Run the weekly pipeline review." "Generate a churn risk report."
The AI model doesn't need to know HubSpot's API. It doesn't need to understand Stripe's webhook format. It just talks to the MCP server, and the server handles everything else.
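The three capability types boil down to plain, structured declarations. Here's a minimal sketch of what a server might declare (names and schemas are illustrative; real servers build this with an MCP SDK):

```python
# A hypothetical MCP server's declared capabilities, shown as plain data.
# Tool names, resource URIs, and schemas below are illustrative only.
SERVER_CAPABILITIES = {
    "tools": [
        {
            "name": "create_deal",
            "description": "Create a deal in the CRM",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "amount": {"type": "number"},
                },
            },
        },
    ],
    "resources": [
        {"uri": "deals://closing-this-month",
         "description": "All deals closing this month"},
    ],
    "prompts": [
        {"name": "weekly_pipeline_review",
         "description": "Run the weekly pipeline review"},
    ],
}

def list_tool_names():
    """What a client sees when it asks this server which actions exist."""
    return [t["name"] for t in SERVER_CAPABILITIES["tools"]]

print(list_tool_names())  # ['create_deal']
```

The model never sees HubSpot's API surface, only this declaration. Everything behind it is the server's problem.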
A concrete example
Let's say you're a B2B founder. You use HubSpot for CRM, Stripe for billing, Intercom for support, and Mixpanel for product analytics.
Without MCP, these tools don't talk to each other. Your sales team can't see support tickets. Your CS team can't see billing data. Your product team can't see pipeline stage.
With MCP, you connect each tool through its own MCP server:
- HubSpot MCP server — exposes deals, contacts, companies, pipeline data
- Stripe MCP server — exposes subscriptions, invoices, revenue metrics
- Intercom MCP server — exposes conversations, tickets, customer satisfaction
- Mixpanel MCP server — exposes product usage, feature adoption, engagement
Now your AI agent can see everything. You can ask: "Which accounts expanded usage by 40% last month but haven't upgraded their plan?" The agent queries Mixpanel for usage data, cross-references Stripe for billing, and pulls the account details from HubSpot.
That question used to require a data analyst, three exports, and a spreadsheet. Now it takes ten seconds.
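Under the hood, the agent's cross-referencing step is just a join over the answers each server returns. A toy sketch with made-up data (in practice each structure would come from an MCP server call):

```python
# Made-up results standing in for MCP server responses.
usage_growth = {"acme": 0.45, "globex": 0.12, "initech": 0.52}  # from Mixpanel
upgraded_last_month = {"initech"}                               # from Stripe

# "Expanded usage by 40%+ but haven't upgraded their plan"
flagged = [
    account
    for account, growth in usage_growth.items()
    if growth >= 0.40 and account not in upgraded_last_month
]
print(flagged)  # ['acme']
```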
Why MCP matters for B2B teams
B2B companies run on disconnected tools. The average Series B startup uses 15-25 SaaS products. None of them share data natively.
This creates three problems MCP directly addresses.
Problem 1: Data silos kill AI usefulness
AI is only as good as the data it can access. If your AI agent can't see your CRM, it can't help with sales. If it can't see support tickets, it can't identify churn signals.
Most companies try to fix this with data warehouses. They pipe everything into Snowflake or BigQuery, run dbt models, and build dashboards.
That works for analytics. It doesn't work for AI agents that need to take action.
MCP gives AI agents direct access to your tools. Not a stale copy of your data. The live system. Read and write.
Problem 2: Custom integrations don't scale
Every time you connect a new tool to an AI model, you build a custom integration. Different auth. Different data formats. Different error handling.
With 20 tools and 3 AI models, that's 60 integrations to maintain. Every API change breaks something.
MCP standardizes this. Build the server once. It works with every MCP-compatible AI client.
Problem 3: AI agents need to act, not just analyze
Dashboards show you what happened. AI agents should do something about it.
MCP servers expose tools — actions the AI can take. Create a task. Send a message. Update a record. This is what turns AI from a fancy search engine into an actual team member.
What MCP servers look like in practice
An MCP server is a lightweight program. It can run locally or on a remote server. Most are built in Python or TypeScript.
Here's what a simple MCP server exposes:
Server: hubspot-mcp-server
Tools:
- search_contacts(query) → list of matching contacts
- create_deal(name, amount, stage) → new deal ID
- update_deal_stage(deal_id, stage) → confirmation
Resources:
- deals://closing-this-month → list of deals
- contacts://recently-created → new contacts
- pipeline://summary → pipeline metrics
The AI model discovers these capabilities automatically. When someone asks "show me deals closing this month," the model knows to call the deals://closing-this-month resource.
No prompt engineering. No manual tool configuration. The server declares what it can do, and the model figures out when to use it.
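Discovery works because MCP is built on JSON-RPC: the client sends a standard `tools/list` request and the server answers with its declared capabilities. A simplified sketch of that exchange (the method name is part of the MCP spec; the payloads here are abbreviated):

```python
import json

# The discovery handshake, simplified. "tools/list" is the standard MCP
# method; the response below is an abbreviated, illustrative payload.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {"name": "search_contacts", "description": "Find matching contacts"},
            {"name": "create_deal", "description": "Create a new deal"},
        ]
    },
}

# With no manual configuration, the model now knows which actions exist:
available = [tool["name"] for tool in response["result"]["tools"]]
print(json.dumps(available))  # ["search_contacts", "create_deal"]
```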
MCP transport layers: STDIO vs HTTP+SSE
Two transport mechanisms power MCP. Which one you use determines where your servers run, who can access them, and how you handle secrets.
STDIO transport runs the MCP server as a local process on the same machine as your AI client. Communication happens over stdin and stdout — no network, no ports, no infrastructure. If you're connecting Claude Desktop to your local Git history or a filesystem tool, STDIO is what's running under the hood. It's simple to set up and keeps everything on your machine. The tradeoff: it doesn't travel. A STDIO server can't be shared across a team, can't run in the cloud, and can't be accessed by a remote agent.
HTTP+SSE transport runs the MCP server as a remote service. Your AI client connects over HTTP and the server pushes updates back using Server-Sent Events — a lightweight mechanism for streaming data from server to client. This is how you run MCP at team scale. One server, multiple clients, cloud deployment. A shared HubSpot MCP server your whole revenue team queries.
Choosing between them comes down to two questions: where does your data need to stay, and who needs access?
Use STDIO when you're working with sensitive local data — internal databases, filesystem contents, credentials that should never leave the machine. Use HTTP+SSE when you need team-shared connections, cloud-hosted tools, or multi-agent workflows where multiple AI systems need to reach the same data source.
Authentication follows the same logic. STDIO servers typically use environment variables to pass API keys and secrets — they live on the local machine and never cross a network. HTTP+SSE servers use OAuth 2.0 or API key headers sent with each request.
Most pre-built MCP servers support both transports. Start with STDIO to validate a connection works. Move to HTTP+SSE when you need to scale access beyond one machine.
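For a sense of how simple the STDIO transport really is: messages are newline-delimited JSON-RPC written to stdout and read from stdin. A minimal sketch of that framing (a real server would loop over stdin; this just shows the round trip):

```python
import json

def encode_message(msg: dict) -> bytes:
    """Serialize one JSON-RPC message for stdout: one message per line."""
    return (json.dumps(msg) + "\n").encode("utf-8")

def decode_line(line: bytes) -> dict:
    """Parse one incoming stdin line back into a JSON-RPC message."""
    return json.loads(line.decode("utf-8"))

msg = {"jsonrpc": "2.0", "id": 7, "method": "tools/list"}
assert decode_line(encode_message(msg)) == msg  # round-trips cleanly
```

No sockets, no TLS, no ports: which is exactly why STDIO is easy locally and useless for shared, remote access.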
MCP security: what you need to know before connecting your tools
MCP gives AI agents direct access to your business tools. That's the point. It's also the risk.
A misconfigured MCP server isn't just a broken integration. It's a potential data exposure. Before you connect anything to production, understand what you're actually authorizing.
Start with least privilege. Scope every API token to the minimum access required. If your HubSpot MCP server only needs to read contacts, don't give it admin access to the account. If your Slack server only needs to read channels, don't grant it message deletion permissions. Use fine-grained OAuth scopes everywhere they're available.
What actually goes wrong in practice:
Overly broad permissions are the most common mistake. Teams connect a CRM server with admin-level OAuth scopes because it's easier than figuring out the right ones. Six months later, an AI agent modifies records it should never have touched.
Running community MCP servers without reviewing them is the second mistake. Before you run any open-source MCP server with real credentials, read the source. Know what data it reads, what it sends where, and what it logs.
Before connecting any MCP server to production:
- Review the source code if it's open source — confirm what data it reads and where it sends requests
- Check every OAuth scope the server requests and reject anything broader than necessary
- Run it against a restricted test token with synthetic data before touching production
- Never connect untrusted community servers directly to production credentials
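The scope review in the checklist above can be automated as a simple allowlist check. A sketch, with illustrative scope names, that flags anything broader than what you've approved:

```python
# Least-privilege check, sketched: compare the OAuth scopes an MCP server
# requests against an allowlist you control. Scope names are examples.
ALLOWED_SCOPES = {
    "crm.objects.contacts.read",
    "crm.objects.deals.read",
}

def excess_scopes(requested: set) -> set:
    """Return any requested scopes not on the allowlist; empty set = OK."""
    return requested - ALLOWED_SCOPES

requested = {"crm.objects.contacts.read", "crm.objects.deals.write"}
print(excess_scopes(requested))  # {'crm.objects.deals.write'}
```

If the result is non-empty, reject the connection and fix the token before it ever sees production data.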
At enterprise scale, log everything. Every MCP tool call should produce an audit record: which agent, which tool, which operation, which data, at what time. Without logging, you have a black box touching your business data.
MCP is as secure as the permissions you grant. Build least-privilege habits from day one and they'll hold as your MCP stack grows.
The MCP ecosystem today
MCP adoption has been fast. Here's where things stand:
- 2,500+ MCP servers are publicly available, covering CRM, billing, support, analytics, dev tools, databases, and more
- Major AI clients support MCP natively — Claude Desktop, Cursor, Windsurf, Claude Code, and others
- Enterprise tools are shipping their own MCP servers — Atlassian, Stripe, Slack, and dozens more
- 7,000+ tools are accessible through MCP connections
You can browse the full directory of MCP servers at shyft.ai/mcp-servers.
The ecosystem is growing because MCP solves a real problem. Every AI tool builder was reinventing the same integration wheel. Now there's a standard.
MCP vs. traditional API integrations
| | Traditional APIs | MCP |
|---|---|---|
| Discovery | Read docs, write code | AI discovers capabilities automatically |
| Auth | Different per tool | Standardized |
| Maintenance | Breaks on API changes | Server handles versioning |
| AI compatibility | Build adapters per model | Works with any MCP client |
| Bidirectional | Usually read-only | Read and write by default |
MCP doesn't replace APIs. It sits on top of them. The MCP server uses the tool's API internally. But it presents a standardized interface to AI models.
Common questions about MCP
Is MCP only for Anthropic/Claude?
No. MCP is an open standard. Any AI model can implement an MCP client. Claude was first, but the protocol is model-agnostic. OpenAI, Google, and others are adopting it.
Is MCP secure?
MCP servers handle their own authentication and authorization. You control what data is exposed and what actions are allowed. Most servers support OAuth, API keys, and role-based access.
The server runs in your infrastructure. Data doesn't pass through a third party.
Do I need to write code to use MCP?
Not necessarily. Thousands of pre-built MCP servers exist. If your tool has a popular API, there's probably an MCP server for it already.
For custom tools or specific workflows, you'll need to build a server. That's where our team comes in.
How is MCP different from function calling?
Function calling is model-specific. You define functions for GPT-4 differently than for Claude. MCP is model-agnostic — define capabilities once, use them everywhere.
MCP also handles resource access (reading data), not just tool execution (taking actions).
How to get started with MCP in one week
Most teams overthink the start. You don't need a production-ready MCP stack on day one. You need one working connection that answers a real question about your business.
Day 1-2: Install a client and run your first query
Start with Claude Desktop. It's free, has native MCP support, and requires zero infrastructure. Go to the MCP server directory and pick the server for the tool your team uses most.
Install it. Configure auth. Ask a question you'd normally have to log into the tool to answer.
Goal: one real business question answered through MCP. You're validating the connection works.
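"Install it, configure auth" concretely means adding an entry to Claude Desktop's `claude_desktop_config.json` under the `mcpServers` key. A sketch of the shape (the package name and token placeholder are hypothetical; check your server's README for the real command):

```python
import json

# Illustrative Claude Desktop config entry for one local STDIO server.
# "example-hubspot-mcp-server" is a placeholder package name, and the
# API key should be a narrowly scoped token, never an admin credential.
config = {
    "mcpServers": {
        "hubspot": {
            "command": "npx",
            "args": ["-y", "example-hubspot-mcp-server"],
            "env": {"HUBSPOT_API_KEY": "<your-scoped-token>"},
        }
    }
}
print(json.dumps(config, indent=2))
```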
Day 3-4: Add a second server and run a cross-tool query
Pick a second tool that touches the first. Billing and CRM. Support and product analytics.
Connect both. Then ask a question that requires both: "Which accounts in HubSpot had Stripe payment failures last week?"
When you see cross-tool data in one place without logging into two systems and pasting between tabs, the use cases become obvious.
Day 5: Evaluate, plan, and decide
Note what worked and what broke. Identify the 3-5 MCP servers that would cover 80% of your team's cross-tool questions.
Map your security model. Decide which servers run locally (STDIO) vs. remotely (HTTP+SSE). Define token scopes. Establish a rotation schedule.
Then make one decision: build this in-house or bring in help. If you want to skip the trial phase and go straight to production, our foundation build covers a typical B2B stack in 4-6 weeks.
What's next for MCP
MCP is still early. The spec is evolving. But the direction is clear.
Every SaaS tool will ship an MCP server. Just like every tool ships a REST API today. It's becoming table stakes.
The companies that connect their stack now will have a compounding advantage. Their AI agents will be smarter because they have access to more data. Their teams will move faster because the infrastructure is already in place.
The companies that wait will spend 2027 catching up.
Start here
- Browse the MCP server directory — see what's available for your tools
- Scan your current stack with our free AI scan — find the gaps
- Talk to us about building your unified data layer — we'll scope it in a week
MCP isn't a future technology. It's shipping today. The question isn't whether to adopt it. It's how fast you can connect your stack.