
What is an MCP server and why it shifts AI integrations

An MCP server implements the Model Context Protocol, an open standard for connecting AI agents to a company's data and tools. Knowing what it is and what it changes shapes how you plan your next integration.

serpixel

Key points

MCP is an open standard, not a product: Model Context Protocol was defined by Anthropic at the end of 2024 as an open protocol. Any AI model, any client, and any tool can speak the same language without being tied to a specific vendor.
One MCP server per tool, every model compatible: Instead of writing a custom connector for each model + tool combination, you set up one MCP server per tool (Holded, HubSpot, email, Drive). It works with Claude, GPT, and Gemini without rewriting the integration.
Solves the model lock-in trap: Many SMBs are postponing agents because they fear being tied to an AI provider. With MCP, the integration survives model changes: you swap the model, you don't rewrite the connectors.
MCP does not solve reliability or security on its own: The protocol standardizes the interface. The quality of the agent still depends on the bounded workflow, the prompt, the kill-switch, and the human fallback. A perfect MCP server does not save a poorly defined agent.
Replaces proprietary plugins and SDK wrappers: Closed integrations (plugins tied to specific platforms, wrappers locked to a single SDK) are the piece MCP is starting to replace. The mechanical layer of integrations is becoming standardized, the same way HTTP, SMTP, or OAuth did.
For an SMB, integration cost goes down: The cost of putting an agent into production drops because reusable connectors already exist or are published by the community. This lowers the threshold at which an agent makes economic sense for medium-volume processes.

If you have seen the MCP acronym pop up in recent months, it is not by chance. The Model Context Protocol has become one of the most discussed topics among teams building AI agents, and it is starting to appear in the conversations of those considering hiring one.

This article explains what an MCP server is, why it solves a real problem in connected AI, and above all, what it implies for an SMB that is considering integrating agents with its tools: ERP, CRM, email, internal files. No promises, no futurology.

What an MCP server is, in one sentence

An MCP server is a standardized bridge between an AI model and a data source or tool. It publishes the actions the tool can execute (read an email, create a contact in the CRM, search for a file) in a format that any agent compatible with the protocol understands, without the need for custom integration.

The protocol was defined by Anthropic at the end of 2024 as an open standard. Today it is natively supported by Claude (Anthropic), by GPT-based agents through community SDKs, and by a growing number of commercial products (Zed, Cursor, Sourcegraph, internal tools at larger companies). The core idea is simple: instead of writing a custom connector for each model + tool combination, the MCP server exposes a common interface and the models speak that same language.

The problem it solves

Until now, every time you wanted an agent to read your Holded, write to your HubSpot, or query an internal sheet, the same thing happened: a custom connector. If you switched models (from GPT to Claude, for instance), a good part of that connector had to be rewritten. If an internal tool got updated, the connector stopped working and no one knew until the agent failed.

The result, in practice, is that many AI projects in SMBs stayed at the pilot stage. The cost of maintaining twenty custom connectors against five different models was prohibitive, and no one wanted to be locked into a single AI provider out of fear of technical dependency.

MCP changes that equation. You build one MCP server for your Holded, once. It works with Claude. It works with GPT. It works with Gemini. It works with whatever model comes out next year. Your integration does not expire every six months at the pace of the marketing of the big AI platforms.

How it works, without going into architecture

Three pieces:

  1. MCP server. A small process (usually a Node, Python, or Go service) that knows how to talk to your specific tool: your CRM, your database, your ticketing system. It exposes “tools” (actions the agent can execute) and “resources” (data the agent can read).
  2. MCP client. The environment where the agent lives: Claude Desktop, Cursor, an internal application built on top of the Anthropic or OpenAI SDK. The client knows how to discover the available MCP servers and how to invoke their tools.
  3. Model. The AI model itself (Claude, GPT, Gemini, open-weights). The client passes the list of available tools, the model decides which to invoke based on the conversation or the workflow, and the server executes the action against the real tool.
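To make the three pieces concrete, here is a simplified, stdlib-only sketch of the server side of that exchange: the server advertises its tools in response to a `tools/list` request and dispatches a `tools/call` request to the real implementation, both as JSON-RPC 2.0 messages, which is the wire format MCP uses. This is an illustration, not the official MCP SDK; the tool name `crm_lookup` and its payloads are hypothetical.

```python
import json

# Hypothetical tool implementation the server wraps (a stand-in for a real CRM call).
def crm_lookup(email: str) -> dict:
    return {"name": "Acme S.L.", "email": email, "tier": "priority"}

# The server's registry: tool name -> description, input schema, and handler.
TOOLS = {
    "crm_lookup": {
        "description": "Look up a client record in the CRM by email address",
        "inputSchema": {"type": "object",
                        "properties": {"email": {"type": "string"}},
                        "required": ["email"]},
        "handler": crm_lookup,
    },
}

def handle_request(req: dict) -> dict:
    """Dispatch a JSON-RPC 2.0 request the way an MCP server does."""
    if req["method"] == "tools/list":
        # Self-description: any compatible client can discover the tools.
        result = {"tools": [{"name": n,
                             "description": t["description"],
                             "inputSchema": t["inputSchema"]}
                            for n, t in TOOLS.items()]}
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = tool["handler"](**req["params"]["arguments"])
    else:
        return {"jsonrpc": "2.0", "id": req["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": req["id"], "result": result}

# The client first discovers the tools, then invokes one on the model's behalf.
listing = handle_request({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
call = handle_request({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
                       "params": {"name": "crm_lookup",
                                  "arguments": {"email": "ops@acme.example"}}})
print(json.dumps(call["result"]))
```

Because the interface is the discovery step plus the call step, nothing in this exchange depends on which model or which client is on the other end.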

The key point is that the three components are interchangeable. You swap the model, the MCP servers keep working. You swap the client, the servers keep working. You swap one vendor's server (for example, you migrate from Holded to Sage), you rewrite only that server, and everything else stays the same.

A concrete example

Imagine an SMB with Holded as ERP, HubSpot as CRM, and an internal Drive with proposal templates. You want an agent that, when a client sends an email asking for a quote, looks up the client data in HubSpot, retrieves the latest conversation, checks the available products in Holded, and prepares a draft proposal using the Drive template.

Without MCP, this requires four custom integrations (email, HubSpot, Holded, Drive), each tied to the specific model the agent uses. If you change models in six months, you rewrite all four.

With MCP, you set up:

  • An MCP server for Holded (or use a community one if it exists).
  • An MCP server for HubSpot.
  • An MCP server for Drive.
  • An MCP server for your email system.

And you connect the agent to the MCP client of your choice. Any compatible model can run the complete workflow. The day the model of the moment improves enough, you swap it and the servers keep working.
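The workflow above can be sketched as plain orchestration code. Every function here stands in for a tool call routed through the corresponding MCP server; none of the names (`hubspot_find_contact`, `holded_list_products`, `drive_get_template`) come from a real SDK, and the data is invented for illustration.

```python
# Hypothetical stand-ins for tool calls, one per MCP server.

def hubspot_find_contact(email):        # MCP server: HubSpot
    return {"name": "Acme S.L.", "last_thread": "asked about volume pricing"}

def holded_list_products(query):        # MCP server: Holded
    return [{"sku": "SRV-01", "price": 1200.0}]

def drive_get_template(name):           # MCP server: Drive
    return "Proposal for {client}: {sku} at {price} EUR"

def draft_quote(sender_email: str) -> str:
    """The agent's workflow: look up, retrieve, check, draft."""
    contact = hubspot_find_contact(sender_email)
    products = holded_list_products("standard service")
    template = drive_get_template("proposal-v2")
    # The agent fills the template; a human reviews the draft before it is sent.
    return template.format(client=contact["name"],
                           sku=products[0]["sku"],
                           price=products[0]["price"])

print(draft_quote("ops@acme.example"))
# → Proposal for Acme S.L.: SRV-01 at 1200.0 EUR
```

Swapping the model changes nothing in this sketch: the orchestration only knows about tools, and the tools only know about the protocol.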

Why it could replace today’s integration systems

The reasonable hypothesis is that MCP, or an equivalent protocol that ends up winning, will replace three integration families that are today the norm:

  • Proprietary plugins. The early versions of plugins for ChatGPT and similar tied each integration to a specific platform. MCP removes that coupling: the same integration serves any compatible client.
  • Connectors in automation tools. Platforms like Zapier or Make offer thousands of connectors, but each one needs to be maintained by the platform itself. With MCP, a server can be maintained by the vendor of the original tool, by the community, or by the client.
  • Specific SDK wrappers. Today, integrating OpenAI Function Calling with your internal tools means writing wrappers tied to its SDK. MCP standardizes the interface: the wrapper stops being part of the agent and becomes part of the tool.

It will not be immediate. Established platforms have commercial incentives to delay standard adoption. But the historical pattern is clear: when an open protocol that solves a real problem appears (HTTP, SMTP, OAuth in their day), proprietary options end up converging on it. MCP looks like that protocol for connected AI.

What MCP does not solve

It is worth putting optimism in its place. MCP is not:

  • A recipe for building reliable agents. The protocol standardizes how the model talks to the tools. The reliability of the agent depends on the prompt, the model, the data quality, and above all the bounded workflow the agent is asked to execute. A perfect MCP server does not save a poorly defined agent.
  • A guarantee of security. The server exposes tools to the model. If those tools include “delete CRM records” without human confirmation, the agent can execute them. The permission layer, kill-switch, and human validation remain the implementer’s responsibility.
  • A replacement for business logic. The MCP server is a bridge. The rules about which clients are priority, which products are active, which cases require escalation still live in the server’s code or in the original tool.
  • A shortcut to skip the discovery phase. Even though the technical integration is faster, the prior work of defining the workflow, the success metric, the kill-switch, and the human fallback remains the same.
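The permission layer mentioned above can be as simple as a wrapper that refuses to run a destructive tool without explicit approval. This is a minimal sketch under that assumption; the approval callback is a stand-in for whatever review UI or queue a real implementation would use, and `delete_crm_record` is hypothetical.

```python
def requires_confirmation(tool, approve):
    """Wrap a destructive tool so it never runs without human approval."""
    def gated(*args, **kwargs):
        if not approve(tool.__name__, args, kwargs):
            return {"status": "blocked", "reason": "human confirmation denied"}
        return tool(*args, **kwargs)
    return gated

def delete_crm_record(record_id: str) -> dict:   # hypothetical destructive tool
    return {"status": "deleted", "id": record_id}

# Deny everything by default; a real deployment would prompt a person instead.
deny_all = lambda name, args, kwargs: False
safe_delete = requires_confirmation(delete_crm_record, deny_all)

print(safe_delete("crm-4821"))   # blocked: the record is untouched
```

The point is that this gate lives in the implementer's code, not in the protocol: MCP will happily expose the unwrapped tool if you let it.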

What changes for an SMB

Three concrete things, assuming the MCP ecosystem keeps growing at the current pace.

1. Integration cost going down. Connectors already written (published by the community or by the tool vendor itself) reduce custom work. This lowers the threshold at which an agent makes economic sense, especially for medium-volume processes.

2. Less dependency on the AI provider. An SMB that invests in an MCP architecture is not locked into Claude, GPT, or Gemini specifically. It can change models based on reliability or cost without rewriting the integration. The mechanical layer of the workflow (MCP server) survives the model changes.

3. Ability to combine tools from different sources. A single agent can query an MCP server for Holded, another for a custom internal system, and another for a public API, with no additional integration cost. This opens up workflows that were previously prohibitive due to the complexity of combining several tools.

What does not change is the human part: defining the workflow, validating what the agent does, reviewing the success metric, keeping the kill-switch operational. MCP frees the mechanical layer of integrations; it does not replace the team that decides what to automate and how. The layer where the team adds value (judgment, ambiguous decisions, client relationships) stays untouched.

How we apply this at serpixel

serpixel (Clever European Business, S.L.) is a bespoke AI agent implementation agency for small and medium businesses in the Iberian market, registered in Spain. We design and implement specific agents for customer support, sales, and operations, integrated with the tools the client already uses (ERP, CRM, email, internal systems). The proposal is model-agnostic (Claude, GPT, Gemini), the code and data stay with the client, and every implementation includes a kill-switch, a human fallback, and an evaluation harness.

In practice, this means the agents we build today already follow the spirit of MCP. We separate the integration layer (each tool, in its own module) from the orchestration layer (what the agent does with those tools). When a new client comes in with Holded, we reuse what we did with another client that had Holded, without tying the agent to a specific model. When a stable, mature MCP server for a specific tool arrives, migrating to it will be mechanical.

The advantage for the client, in any case, is the same: the investment in the agent does not expire every time a new model comes out. The mechanical layer of the process survives, the human team keeps doing what only people do well, and the maintenance cost stays predictable. serpixel accompanies that technical decision from the start of the project, before writing a single line of code.

What to do now

If you have a bounded workflow in mind and you wonder whether MCP changes the conversation, the short answer is: probably yes, in six to eighteen months. The useful conversation today is not “which protocol will we use in 2027”, it is “which specific workflow do you have and which tools does it touch”.

A 30-minute discovery session is enough to answer three questions: whether the workflow fits an agent, which tools would have to be integrated, and which success metric makes sense to measure before budgeting anything. If you want to have that conversation, let’s book 30 minutes on Calendly. No commercial pressure, no commitment to sign, just the space needed to understand whether the project makes sense and, if it does, where to start.

Tags

MCP protocol · MCP server · connected AI · AI business integration · model context protocol · SMB AI agent

Frequently asked questions

How does an MCP server actually work?

An MCP server is a small process that exposes, in a standardized format, the actions a tool can execute and the data it can read. The MCP client (where the agent lives) discovers those servers and passes the list of available tools to the model. When the model decides to execute one, the server translates it into the real call against the original tool. The key point is that the interface is the same regardless of the model or the client.

How is an MCP server different from a traditional API?

A traditional API is consumed by a specific client that knows it in advance. An MCP server publishes its actions in a self-describing way: any client compatible with the protocol can discover what tools it offers without being programmed specifically for that tool. This lets the same model invoke tools from servers that did not exist when it was trained. The underlying API is still there; MCP is the layer that makes it consumable by AI agents in a generic way.

Do I need MCP to build an AI agent?

Not strictly. An agent can work with custom integrations without going through MCP. But if you expect to change models in one or two years, or to add more tools over time, an architecture inspired by MCP (separating the integration layer from the orchestration layer) reduces future cost. If the project is a closed pilot, a direct integration can be reasonable while the MCP ecosystem matures.

Which tools can be connected through MCP?

Any tool with an API or a way to expose data: ERPs like Holded, Sage, Odoo, SAP Business One; CRMs like HubSpot, Pipedrive, Salesforce, Zoho; ticketing systems, calendars, file systems, internal databases, public APIs. There are community MCP servers for many common tools and they grow every month. For internal or very specific tools, the MCP server is built custom, the same way a classic connector would be, with the difference that it is only built once.

Which AI models support MCP today?

The protocol was defined by Anthropic, so the most mature native support today is Claude (Claude Desktop, Anthropic SDK). For GPT and Gemini there are implementations through community SDKs and wrappers that are maturing quickly. The reasonable hypothesis is that, if MCP settles as the standard, the major AI providers will eventually offer first-class support. In the meantime, an MCP architecture is viable with any of the three at the maturity level of each ecosystem.

Is an MCP server secure by default?

The server exposes tools the model can invoke. If those tools include destructive actions (deleting records, sending emails on behalf of the user, modifying invoices) without a human validation layer, the agent can execute them. The permission layer, the kill-switch, and human confirmation before sensitive operations remain the implementer's responsibility. MCP standardizes how tools are invoked, not what is allowed to execute. A serious implementation defines which tools require human confirmation before touching real data.

When does an MCP architecture make sense for an SMB?

It makes sense if the agent will touch more than two tools, if you expect the underlying model to change in the next two years, or if the process is critical enough that technical dependency on a single AI provider is not an acceptable risk. For small, very bounded pilots, a direct integration can be faster. The useful conversation is to discuss the specific workflow, not to decide the protocol upfront.

How does serpixel apply MCP?

serpixel implements custom AI agents for SMBs across three lines (customer support, sales, operations), always with a kill-switch, human fallback, and evaluation harness. The internal architecture already separates the integration layer from the orchestration layer, in line with the spirit of MCP. When stable MCP servers for each specific tool reach the reliability standard required for production, the migration will be mechanical. The client, meanwhile, is not tied to a specific model and the investment in the agent survives the changes in the ecosystem.

Related posts


AI agent vs chatbot: why they are not the same thing

A chatbot answers within fixed rules. An AI agent executes processes with judgment and integrates with company tools. Knowing the difference decides what you actually buy.


Spain's Kit Digital: what to ask before signing with an agente digitalizador

Spain's Kit Digital subsidises SMB websites through accredited agencies called agentes digitalizadores. Accreditation is not a quality mark. Here is what to check before you sign.
