What is an MCP server and why it changes AI integrations
An MCP server is an open standard for connecting AI agents to a company's data and tools. Knowing what it is and what it changes shapes how you plan your next integration.
If you have seen the MCP acronym pop up in recent months, it is not by chance. The Model Context Protocol has become one of the most discussed topics among teams building AI agents, and it is starting to come up among companies considering commissioning one.
This article explains what an MCP server is, why it solves a real problem in connected AI, and above all, what it implies for an SMB that is considering integrating agents with its tools: ERP, CRM, email, internal files. No promises, no futurology.
What an MCP server is, in one sentence
An MCP server is a standardized bridge between an AI model and a data source or tool. It publishes the actions the tool can execute (read an email, create a contact in the CRM, search for a file) in a format that any agent compatible with the protocol understands, without the need for custom integration.
The protocol was defined by Anthropic at the end of 2024 as an open standard. Today it is natively supported by Claude (Anthropic), by GPT-based agents through community SDKs, and by a growing number of commercial products (Zed, Cursor, Sourcegraph, internal tools at larger companies). The core idea is simple: instead of writing a custom connector for each model + tool combination, the MCP server exposes a common interface and the models speak that same language.
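To make "publishes the actions the tool can execute" concrete, here is a minimal sketch in plain Python of the kind of tool catalogue an MCP server exposes to any client. This mimics the shape of the protocol's tool listing; it is not the official MCP SDK, and the tool names (`crm_create_contact`, `files_search`) are made up for the example.

```python
import json

# Conceptual sketch: an MCP server publishes a list of tools, each with
# a name, a human-readable description, and a JSON schema for its inputs.
# Any compatible client can read this catalogue without custom code.
TOOLS = [
    {
        "name": "crm_create_contact",
        "description": "Create a contact in the CRM",
        "inputSchema": {
            "type": "object",
            "properties": {"name": {"type": "string"}, "email": {"type": "string"}},
            "required": ["name", "email"],
        },
    },
    {
        "name": "files_search",
        "description": "Search internal files by keyword",
        "inputSchema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
]

def list_tools() -> str:
    """Return the tool catalogue as JSON, as a client would receive it."""
    return json.dumps({"tools": TOOLS})
```

The point is the shape, not the implementation: because every server describes its tools the same way, the model never needs to know whether it is talking to a CRM, an ERP, or a file store.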
The problem it solves
Until now, every time you wanted an agent to read your Holded, write to your HubSpot, or query an internal sheet, the same thing happened: a custom connector. If you switched models (from GPT to Claude, for instance), a good part of that connector had to be rewritten. If an internal tool got updated, the connector stopped working and no one knew until the agent failed.
The result, in practice, is that many AI projects in SMBs stayed at the pilot stage. The cost of maintaining twenty custom connectors against five different models was prohibitive, and no one wanted to be locked into a single AI provider out of fear of technical dependency.
MCP changes that equation. You build one MCP server for your Holded, once. It works with Claude, with GPT, with Gemini, and with whatever model ships next year, as long as the client speaks the protocol. Your integration no longer expires every six months at the marketing pace of the big AI platforms.
How it works, without going into architecture
Three pieces:
- MCP server. A small process (usually a Node, Python, or Go service) that knows how to talk to your specific tool: your CRM, your database, your ticketing system. It exposes “tools” (actions the agent can execute) and “resources” (data the agent can read).
- MCP client. The environment where the agent lives: Claude Desktop, Cursor, an internal application built on top of the Anthropic or OpenAI SDK. The client knows how to discover the available MCP servers and how to invoke their tools.
- Model. The AI model itself (Claude, GPT, Gemini, open-weights). The client passes the list of available tools, the model decides which to invoke based on the conversation or the workflow, and the server executes the action against the real tool.
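The three pieces above can be sketched in a few lines of plain Python. This is a conceptual sketch of the client's routing role, not real MCP plumbing: the two "servers" are stand-in dictionaries, and the tool names (`crm_lookup`, `erp_stock`) are illustrative.

```python
# Two fake "servers", each exposing one tool as a plain callable.
# In real MCP these would be separate processes speaking the protocol.
crm_server = {"crm_lookup": lambda args: {"client": args["email"], "tier": "gold"}}
erp_server = {"erp_stock": lambda args: {"sku": args["sku"], "units": 12}}

# The client's job: merge the catalogues into one routing table,
# hand the combined tool list to the model, and dispatch its choices.
routing = {}
for server in (crm_server, erp_server):
    routing.update(server)

def dispatch(tool_name: str, arguments: dict) -> dict:
    """Route a tool call chosen by the model to the server that owns it."""
    if tool_name not in routing:
        raise KeyError(f"unknown tool: {tool_name}")
    return routing[tool_name](arguments)
```

Notice that `dispatch` knows nothing about which model chose the call, which is exactly why swapping the model leaves the servers untouched.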
The key point is that the three components are interchangeable. Swap the model and the MCP servers keep working. Swap the client and the servers keep working. Swap a vendor's server (say you migrate from Holded to Sage) and you rewrite only that server; everything else stays the same.
A concrete example
Imagine an SMB with Holded as ERP, HubSpot as CRM, and an internal Drive with proposal templates. You want an agent that, when a client sends an email asking for a quote, looks up the client data in HubSpot, retrieves the latest conversation, checks the available products in Holded, and prepares a draft proposal using the Drive template.
Without MCP, this requires four custom integrations (email, HubSpot, Holded, Drive), each tied to the specific model the agent uses. If you change models in six months, you rewrite all four.
With MCP, you set up:
- An MCP server for Holded (or use a community one if it exists).
- An MCP server for HubSpot.
- An MCP server for Drive.
- An MCP server for your email system.
And you connect the agent to the MCP client of your choice. Any compatible model can run the complete workflow. The day the model of the moment improves enough, you swap it and the servers keep working.
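The workflow above can be sketched with stub functions standing in for the four MCP servers. Everything here is hypothetical: the function names (`hubspot_lookup`, `holded_products`, `drive_template`) and the returned data are invented for the example; real servers would expose their own tools.

```python
# Stubs standing in for three of the MCP servers in the example.
def hubspot_lookup(email: str) -> dict:
    """Stand-in for the HubSpot server's contact-lookup tool."""
    return {"name": "Acme S.L.", "last_contact": "2025-01-10"}

def holded_products() -> list[dict]:
    """Stand-in for the Holded server's product-catalogue tool."""
    return [{"sku": "CONS-10", "price": 950.0}]

def drive_template(name: str) -> str:
    """Stand-in for the Drive server's template-retrieval tool."""
    return "Proposal for {client}: {sku} at {price} EUR"

def draft_proposal(client_email: str) -> str:
    """Orchestration layer: the same steps the agent would chain via MCP."""
    client = hubspot_lookup(client_email)
    product = holded_products()[0]
    template = drive_template("proposal")
    return template.format(
        client=client["name"], sku=product["sku"], price=product["price"]
    )
```

The orchestration function is the part the model drives in a real deployment; the stubs are the part MCP standardizes, which is why each can be replaced independently.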
Why it could replace today’s integration systems
The reasonable hypothesis is that MCP, or an equivalent protocol that ends up winning, will replace three integration families that are today the norm:
- Proprietary plugins. The early versions of plugins for ChatGPT and similar tied each integration to a specific platform. MCP removes that coupling: the same integration serves any compatible client.
- Connectors in automation tools. Platforms like Zapier or Make offer thousands of connectors, but each one needs to be maintained by the platform itself. With MCP, a server can be maintained by the vendor of the original tool, by the community, or by the client.
- Specific SDK wrappers. Today, integrating OpenAI Function Calling with your internal tools means writing wrappers tied to its SDK. MCP standardizes the interface: the wrapper stops being part of the agent and becomes part of the tool.
It will not be immediate. Established platforms have commercial incentives to delay standard adoption. But the historical pattern is clear: when an open protocol that solves a real problem appears (HTTP, SMTP, OAuth in their day), proprietary options end up converging. MCP looks like that protocol for connected AI.
What MCP does not solve
It is worth putting optimism in its place. MCP is not:
- A recipe for building reliable agents. The protocol standardizes how the model talks to the tools. The reliability of the agent depends on the prompt, the model, the data quality, and above all the bounded workflow the agent is asked to execute. A perfect MCP server does not save a poorly defined agent.
- A guarantee of security. The server exposes tools to the model. If those tools include “delete CRM records” without human confirmation, the agent can execute them. The permission layer, kill-switch, and human validation remain the implementer’s responsibility.
- A replacement for business logic. The MCP server is a bridge. The rules about which clients are priority, which products are active, which cases require escalation still live in the server’s code or in the original tool.
- A shortcut to skip the discovery phase. Even though the technical integration is faster, the prior work of defining the workflow, the success metric, the kill-switch, and the human fallback remains the same.
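The permission layer and kill-switch mentioned above are simple to express in code, which is worth seeing precisely because MCP does not provide them. This is a minimal sketch under invented names (`KILL_SWITCH`, `crm_delete_record`); a real implementation would hook into the client's tool-dispatch path.

```python
# Safety wrapper the implementer must add around tool calls.
# MCP standardizes the call; it does not gate it.
KILL_SWITCH = {"active": False}          # global off-switch for the agent
DESTRUCTIVE = {"crm_delete_record"}      # tools that need a human in the loop

def guarded_call(tool_name, run_tool, args, confirmed_by_human=False):
    """Run a tool call only if the kill-switch is off and, for
    destructive tools, only with explicit human confirmation."""
    if KILL_SWITCH["active"]:
        raise RuntimeError("kill-switch active: all tool calls blocked")
    if tool_name in DESTRUCTIVE and not confirmed_by_human:
        raise PermissionError(f"{tool_name} requires human confirmation")
    return run_tool(args)
```

A read-only tool passes straight through; a destructive one fails closed until someone approves it, which is the behavior the article argues remains the implementer's responsibility.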
What changes for an SMB
Three concrete things, assuming the MCP ecosystem keeps growing at the current pace.
1. Lower integration cost. Connectors already written (published by the community or by the tool vendor itself) reduce custom work. This lowers the threshold at which an agent makes economic sense, especially for medium-volume processes.
2. Less dependency on the AI provider. An SMB that invests in an MCP architecture is not locked into Claude, GPT, or Gemini specifically. It can change models based on reliability or cost without rewriting the integration. The mechanical layer of the workflow (MCP server) survives the model changes.
3. Ability to combine tools from different sources. A single agent can query an MCP server for Holded, another for a custom internal system, and another for a public API, with minimal additional integration work. This opens up workflows that were previously prohibitive due to the complexity of combining several tools.
What does not change is the human part: defining the workflow, validating what the agent does, reviewing the success metric, keeping the kill-switch operational. MCP frees the mechanical layer of integrations; it does not replace the team that decides what to automate and how. The layer where the team adds value (judgment, ambiguous decisions, client relationships) stays untouched.
How we apply this at serpixel
serpixel (Clever European Business, S.L.) is a bespoke AI agent implementation agency for small and medium businesses in the Iberian market, registered in Spain. We design and implement specific agents for customer support, sales, and operations, integrated with the tools the client already uses (ERP, CRM, email, internal systems). The approach is model-agnostic (Claude, GPT, Gemini), the code and data stay with the client, and every implementation includes a kill-switch, a human fallback, and an evaluation harness.
In practice, this means the agents we build today already follow the spirit of MCP. We separate the integration layer (each tool, in its own module) from the orchestration layer (what the agent does with those tools). When a new client comes in with Holded, we reuse what we did with another client that had Holded, without tying the agent to a specific model. When a stable, mature MCP server for a specific tool arrives, migrating to it will be mechanical.
The advantage for the client, in any case, is the same: the investment in the agent does not expire every time a new model comes out. The mechanical layer of the process survives, the human team keeps doing what only people do well, and the maintenance cost stays predictable. serpixel supports that technical decision from the start of the project, before writing a single line of code.
What to do now
If you have a bounded workflow in mind and you wonder whether MCP changes the conversation, the short answer is: probably yes, in six to eighteen months. The useful conversation today is not “which protocol will we use in 2027”, it is “which specific workflow do you have and which tools does it touch”.
A 30-minute discovery session is enough to answer three questions: whether the workflow fits an agent, which tools would have to be integrated, and which success metric makes sense to measure before budgeting anything. If you want to have that conversation, let’s book 30 minutes on Calendly. No commercial pressure, no commitment to sign, just the space needed to understand whether the project makes sense and, if it does, where to start.