The rise of the Model Context Protocol in the age of agency



We’ve all heard of the Model Context Protocol (MCP) in the context of artificial intelligence. In this article, we will look at what MCP is and why it is becoming more important by the day. Why do we need MCP when APIs already exist, and is there real substance behind this rapidly growing protocol? In part one, we’ll look at the parallels between APIs and MCP, and then begin to explore what makes MCP different.

From APIs to model context protocol

A single, isolated computer is limited in the data it can access, and this directly limits its usefulness. APIs were created to enable data transfer between systems. Like APIs, the Model Context Protocol (MCP) is a communication protocol, but one designed for AI agents built on large language models (LLMs). APIs are written primarily for developers, while MCP servers are created for AI agents (Johnson, 2025).

What is MCP?

MCP was introduced by Anthropic on November 25, 2024 as an open-source standard for communication between AI assistants and external data sources. Without it, AI agents are limited by data fragmented across isolated systems (Anthropic, 2024). The protocol defines how agents can interact with external systems, request input from users, and run automated workflows.

At its core, MCP uses a client-server model and defines three main features each for servers and clients:

  • MCP servers: tools, resources, and prompts
  • MCP clients: elicitation, roots, and sampling
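As an illustration of the server-side features, here is a sketch of what a server might advertise to a client during initialization. This is a hypothetical, simplified structure, not the official SDK; the field names loosely follow the spec's JSON Schema style (`inputSchema`), but everything else is illustrative.

```python
# Hypothetical sketch of what an MCP server advertises to a client.
# Names and values are illustrative, not taken from the official SDK.
server_capabilities = {
    "tools": [
        {
            "name": "search_flights",
            "description": "Search flights between two airports",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "origin": {"type": "string"},
                    "destination": {"type": "string"},
                },
                "required": ["origin", "destination"],
            },
        },
    ],
    "resources": [
        {"uri": "calendar://events", "name": "User calendar"},
    ],
    "prompts": [
        {"name": "plan_trip", "description": "Template for trip planning"},
    ],
}

def tool_names(capabilities: dict) -> list[str]:
    """Return the names of all tools the server exposes."""
    return [t["name"] for t in capabilities["tools"]]

print(tool_names(server_capabilities))  # ['search_flights']
```

The agent reads this listing to decide which tool, resource, or prompt fits the user's request.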

To keep this article short, the focus will be on the most important feature on each side. Tools are the primary way MCP servers perform complex tasks, and clients use elicitation to provide two-way communication between the agent and the user.

Rather than calling APIs explicitly, agents select and invoke the appropriate tools (functions) based on the input they receive from the user. If a tool requires parameters the user has not supplied, the agent uses elicitation to request that information. This enables a more responsive workflow, with two-way communication between the LLM and the user.
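The elicitation flow described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not the official SDK): when a required tool parameter is missing, the call pauses and asks the user for the value instead of failing.

```python
# Minimal sketch of elicitation: filling a tool's required parameters
# by asking the user for anything missing. Hypothetical, not the SDK.
REQUIRED = {"origin", "destination"}

def run_tool(args: dict, ask_user) -> str:
    """Call a flight-search tool, eliciting any missing parameters."""
    for param in sorted(REQUIRED - args.keys()):
        # Elicitation: pause and ask the client/user for the missing
        # value instead of failing the tool call outright.
        args[param] = ask_user(f"Please provide '{param}':")
    return f"searching flights {args['origin']} -> {args['destination']}"

# Simulated user answer for the demo: the user supplies the origin.
answers = iter(["SFO"])
result = run_tool({"destination": "NRT"}, lambda prompt: next(answers))
print(result)  # searching flights SFO -> NRT
```

In a real deployment, `ask_user` would round-trip through the MCP client to the person driving the agent; here it is stubbed with a lambda for demonstration.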

Why do you need MCP now?

A fair question to ask is: if APIs already exist, why is there a need for MCP? APIs were designed to connect fragmented data systems, and SaaS applications already allow two-way communication with the user. So why MCP, and why now?

The main driver for MCP is that the consumer of external data has changed from developers to AI agents. A developer typically programs an application against APIs that behave deterministically, whereas an AI agent takes a user request and makes autonomous decisions to fulfill it. By its very nature, workflow execution by an AI agent is non-deterministic.

APIs are machine-executable contracts that act deterministically; they work when the caller knows what action to take next (Posta, 2025). AI agents, by contrast, run on probabilistic LLMs that do not produce consistently reproducible results across tasks (Atil, 2024). Variation in an LLM’s responses is expected, and this poses a challenge for autonomous execution.

MCP to the rescue

MCP addresses this variation in agent execution by providing a higher-level abstraction that covers more functionality than individual API endpoints. Tools let LLMs perform operations such as searching for flights, booking calendar events, and more (Understanding MCP Servers, 2026).

A common misconception is that tools are simply thin wrappers over existing API calls. Tools are designed as abstractions over functionality, not abstractions over API endpoints. Exposing many APIs one-to-one as tools inflates token cost and context size, leading to non-ideal agent behavior (Johnson, 2025).

A tool may make multiple API calls in its implementation to achieve the desired result. The agent automatically reviews the list of available tools, selects the most appropriate ones, and determines the order of execution.
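To make the point concrete, here is a hedged sketch of one tool whose implementation composes several backend API calls. The endpoints and data are hypothetical stand-ins; the shape, one capability built from multiple API round trips, is what matters.

```python
# One MCP-style "tool" that chains multiple (stubbed) API calls.
# All endpoints and data below are hypothetical.

def api_search_flights(origin: str, dest: str) -> list[dict]:
    """Stand-in for a real flight-search API call."""
    return [{"flight": "XY123", "price": 420},
            {"flight": "XY456", "price": 380}]

def api_get_seat_availability(flight: str) -> int:
    """Stand-in for a real seat-availability API call."""
    return {"XY123": 0, "XY456": 5}[flight]

def book_cheapest_available(origin: str, dest: str) -> str:
    """The tool: search flights, drop sold-out ones via a second
    API call per flight, then return the cheapest remaining option."""
    flights = api_search_flights(origin, dest)
    available = [f for f in flights
                 if api_get_seat_availability(f["flight"]) > 0]
    best = min(available, key=lambda f: f["price"])
    return best["flight"]

print(book_cheapest_available("SFO", "NRT"))  # XY456
```

From the agent's perspective this is a single tool call; the multi-step API orchestration stays hidden inside the server, which keeps the agent's context small.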

MCP adoption boom

Since its release in 2024, MCP has seen a steady rise in popularity. The following chart from Google Trends demonstrates the relative interest in MCP since its launch.

Many companies have launched their own MCP servers to make it easier to build autonomous agents. As of February 2026, over 6,400 MCP servers are already registered in the official MCP registry, and that number is only expected to grow. The official registry is still in preview, yet the ecosystem has grown massively in less than a year.

Other major players in the market have adopted MCP and added support for their customers. OpenAI added MCP support to ChatGPT in March 2025, and Google followed a few weeks later in April 2025. This demonstrates both the protocol’s validity and the rapid pace of adoption.

What lies ahead?

MCP is still in the early stages of widespread adoption, and many applications have yet to mature into production. Leonardo Pineryo of Pento AI summed it up best: “The first year of the MCP changed how AI systems connect to the world. Its second year will change what they can achieve” (2025).

Safeguards around tools are an area that will see further development, as trust is one of the biggest concerns with AI agents. With better safeguards in place, AI agents can be allowed to operate with greater autonomy. Over the next year, MCP is sure to see continued growth in both its capabilities and its adoption.


