The MCP Protocol Will Change How AI Agents Work Together
What the Model Context Protocol is, why it matters for multi-agent systems, and what I learned building MCP-compatible tools at the Lost & Found hackathon.
The Interoperability Problem No One Talks About
AI agents are proliferating. Every SaaS product is adding an "AI agent" that can take actions on behalf of users. But here is the problem no one is solving well: these agents cannot talk to each other. Your coding agent cannot ask your research agent for context. Your calendar agent cannot check with your project management agent before scheduling. Each agent is a silo, and the integration burden falls on the user or the developer writing brittle glue code.
The Model Context Protocol (MCP), introduced by Anthropic, is the first serious attempt at solving this. After spending a weekend building MCP-compatible tools at the Lost & Found hackathon, I am convinced this protocol will fundamentally change how we build and compose AI systems.
What Is MCP?
MCP is an open protocol that standardizes how AI models interact with external tools, data sources, and services. Think of it as USB-C for AI: a universal interface that any model can use to connect to any tool, without custom integration code for each model-tool pair.
The protocol defines three primitives:
1. Tools
Tools are functions that the AI can invoke. MCP standardizes how tools declare their capabilities, input schemas, and output formats. A tool server exposes a set of tools via a well-defined JSON-RPC interface:
from mcp.server import MCPServer
from mcp.types import Tool, TextContent

server = MCPServer("sarathi-tools")

@server.tool()
async def search_government_schemes(
    state: str,
    category: str,
    income_bracket: str | None = None,
) -> list[TextContent]:
    """Search for government welfare schemes available in a given state.

    Args:
        state: Indian state name (e.g., "Assam", "Kerala")
        category: Scheme category (e.g., "housing", "education", "agriculture")
        income_bracket: Optional income bracket filter
    """
    schemes = await scheme_database.search(
        state=state, category=category, income_bracket=income_bracket
    )
    return [TextContent(type="text", text=format_schemes(schemes))]

Any MCP-compatible model -- Claude, GPT, Gemini, or a local model -- can discover and invoke this tool without any model-specific integration code. The tool server does not need to know which model is calling it.
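Under the hood, discovery and invocation happen over JSON-RPC 2.0. A client lists a server's tools with a `tools/list` request and invokes one with `tools/call`; the method names below follow the MCP spec, while the request ids and arguments are illustrative:

```python
import json

# JSON-RPC 2.0 messages a client sends to an MCP tool server.
# "tools/list" and "tools/call" are the spec's method names;
# ids and arguments here are made up for illustration.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_government_schemes",
        "arguments": {"state": "Assam", "category": "housing"},
    },
}

wire_payload = json.dumps(call_request)
```

Because every server answers the same `tools/list` request with the same schema format, a client can enumerate capabilities at runtime instead of baking them in at build time.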
2. Resources
Resources are data that the AI can read. Unlike tools (which perform actions), resources provide context. An MCP server can expose files, database records, API responses, or any other data as resources:
import json

from mcp.types import Resource

@server.resource("scheme://{scheme_id}")
async def get_scheme_details(scheme_id: str) -> Resource:
    """Get detailed information about a specific government scheme."""
    scheme = await scheme_database.get(scheme_id)
    return Resource(
        uri=f"scheme://{scheme_id}",
        name=scheme.name,
        description=scheme.summary,
        mime_type="application/json",
        text=json.dumps(scheme.to_dict()),
    )

3. Prompts
Prompts are reusable templates that guide the AI's behavior for specific tasks. They are optional but powerful for standardizing how agents interact with domain-specific tools.
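As a sketch, here is what a prompt template for the verification flow described later in this post might look like. The function name and wording are invented; on a real server it would be registered with a prompt decorator, analogous to the tool decorator above, so clients can list and fetch it:

```python
# Hypothetical prompt template. A real MCP server would register this
# (e.g. via a prompt decorator) so any client can discover it by name.
def verify_match_prompt(lost_description: str, found_description: str) -> str:
    """Guide the model through confirming a lost/found match with a user."""
    return (
        "You are helping a user confirm whether a found item matches their lost item.\n"
        f"Lost item report: {lost_description}\n"
        f"Found item report: {found_description}\n"
        "Ask up to three follow-up questions about details only the owner would know, "
        "then state a confidence level (high/medium/low) for the match."
    )

prompt = verify_match_prompt("black leather wallet", "dark wallet found near gate 3")
```

The value is standardization: every agent that verifies matches uses the same vetted wording instead of each developer improvising their own.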
Why This Matters for Multi-Agent Systems
The real power of MCP emerges when you have multiple agents that need to collaborate. Consider a scenario from the hackathon project we built -- a lost-and-found system where multiple agents coordinate:
- Intake Agent: Processes reports of lost items (via text, voice, or image)
- Matching Agent: Compares lost item reports against found item reports
- Notification Agent: Contacts users when potential matches are found
- Verification Agent: Helps users confirm matches through follow-up questions
Without MCP, you would need to write custom integration code for each agent-to-agent interaction. With MCP, each agent exposes its capabilities as tools that other agents can discover and invoke:
# MCP server configuration for the Lost & Found system
servers:
  intake:
    command: python
    args: ["-m", "lost_found.intake_server"]
    tools:
      - report_lost_item
      - report_found_item
      - get_report_status
  matching:
    command: python
    args: ["-m", "lost_found.matching_server"]
    tools:
      - find_matches
      - get_match_confidence
      - update_match_status
  notification:
    command: python
    args: ["-m", "lost_found.notification_server"]
    tools:
      - send_match_notification
      - get_notification_history
      - update_contact_preferences

The matching agent can call the intake agent's get_report_status tool to check if a report is still active before processing. The notification agent can call the matching agent's get_match_confidence to decide whether to send an immediate notification or queue it for human review. All through standardized MCP tool calls, with no custom glue code.
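That coordination flow can be sketched as follows. All helper names are invented, and `call_tool` is stubbed with canned responses so the sketch runs standalone; in the actual system the equivalent decisions were made by the orchestrating model's tool-use loop rather than hardcoded like this:

```python
import asyncio

# Stub for an MCP client's tools/call; returns canned responses so the
# flow below is runnable without real servers. (Illustrative only.)
async def call_tool(server: str, tool: str, arguments: dict) -> dict:
    stub_responses = {
        ("intake", "get_report_status"): {"status": "active"},
        ("matching", "get_match_confidence"): {"confidence": 0.93},
        ("notification", "send_match_notification"): {"sent": True},
    }
    return stub_responses[(server, tool)]

async def process_match(report_id: str, match_id: str) -> str:
    # Only act on reports that are still open.
    status = await call_tool("intake", "get_report_status", {"report_id": report_id})
    if status["status"] != "active":
        return "skipped"
    # High-confidence matches notify immediately; the rest queue for review.
    conf = await call_tool("matching", "get_match_confidence", {"match_id": match_id})
    if conf["confidence"] >= 0.8:
        await call_tool("notification", "send_match_notification", {"match_id": match_id})
        return "notified"
    return "queued_for_review"

result = asyncio.run(process_match("rep-42", "match-7"))
```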
What I Built at the Hackathon
At the Lost & Found hackathon, our team built a complete MCP-based lost-and-found system in 36 hours. The key architectural decisions:
Each agent is an independent MCP server. This means any agent can be replaced, upgraded, or scaled independently. We swapped the matching algorithm three times during the hackathon without touching any other component.
Agent communication happens through MCP tool calls, not direct API calls. The orchestrating agent (Claude, in our case) decides which tools to call and in what order. This means the coordination logic is in the model's reasoning, not hardcoded in our application code.
We used MCP resources for shared state. Instead of a shared database that all agents query directly, each agent exposes relevant state as MCP resources. The matching agent exposes match://pending as a resource that other agents can read to see the current queue of unresolved matches.
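A minimal sketch of how the matching agent might expose that queue, with invented names and data; on a real server the function would be registered as the match://pending resource handler rather than called directly:

```python
import asyncio
import json

# Hypothetical in-memory queue of unresolved matches.
pending_matches = [
    {"match_id": "match-7", "lost_report": "rep-42", "found_report": "rep-99"},
]

# On a real server this would be registered as the handler for
# match://pending; shown unregistered here so it runs standalone.
async def get_pending_matches() -> str:
    """Serialize the current queue so other agents can read it as a resource."""
    return json.dumps({"pending": pending_matches, "count": len(pending_matches)})

queue_snapshot = json.loads(asyncio.run(get_pending_matches()))
```

Agents that read the resource get a point-in-time JSON snapshot; only the matching agent mutates the underlying queue, which keeps ownership of each piece of state unambiguous.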
The most interesting outcome was that we could swap Claude for a local Llama model mid-demo and the entire system continued to work because the tool interfaces were model-agnostic.
Building MCP-Compatible Tools: Practical Advice
After building several MCP servers, here is what I have learned:
Keep tools atomic. A tool should do one thing. search_items and create_report are good. search_and_create_if_not_found is bad. Let the model compose atomic tools into complex workflows.
Invest in tool descriptions. The model uses tool descriptions to decide when and how to use each tool. A vague description leads to incorrect tool selection. Include examples of when to use the tool and when NOT to use it.
Return structured data, not prose. Models work better with structured JSON responses that they can reason about, rather than pre-formatted text that limits how they can use the information.
Handle errors gracefully. Return error information in a structured format that the model can interpret and recover from, rather than throwing exceptions that crash the agent loop.
@server.tool()
async def search_items(query: str, category: str | None = None):
    """Search for lost or found item reports matching a description.

    Use this tool when:
    - A user wants to check if their lost item has been found
    - A user has found an item and wants to check for matching lost reports

    Do NOT use this tool for:
    - Creating new reports (use report_lost_item or report_found_item)
    - Checking the status of an existing report (use get_report_status)
    """
    try:
        results = await item_index.search(query, category=category, limit=10)
        return [TextContent(type="text", text=json.dumps({
            "matches": [r.to_dict() for r in results],
            "total_count": len(results),
            "query": query,
        }))]
    except SearchError as e:
        return [TextContent(type="text", text=json.dumps({
            "error": str(e),
            "error_type": "search_failed",
            "suggestion": "Try a broader search query or different category",
        }))]

The Bigger Picture
MCP is still early. The ecosystem of MCP servers is growing but remains small compared to traditional API integrations. The protocol itself is evolving -- authentication, authorization, and streaming are areas where the spec is still maturing.
But the direction is clear. Just as REST APIs standardized how web services communicate, MCP will standardize how AI agents interact with tools and with each other. The composability this enables is transformative. Instead of building monolithic AI applications, we can build ecosystems of specialized agents that discover and use each other's capabilities dynamically.
At Sarathi Studio, we are already building all new tool integrations as MCP servers. When a client needs a new capability, we build an MCP server for it once, and every agent in their system can use it immediately. The days of writing custom integration code for every model-tool combination are numbered.