Build effective agents with Model Context Protocol using simple, composable patterns.
mcp-agent is a simple, composable framework for building AI agents using the Model Context Protocol (MCP). It implements Anthropic’s Building Effective Agents patterns in a model-agnostic way, allowing you to create robust agent applications that can leverage any MCP-compatible tools and services.
mcp-agent bridges the gap between the low-level Model Context Protocol and high-level agent patterns. It handles the complex mechanics of managing MCP server connections, tool orchestration, and workflow patterns so you can focus on building your application logic.
- MCP Integration: Seamless connection to any MCP server with tools, resources, prompts, and file system roots.
- Agent Patterns: Pre-built implementations of proven agent workflow patterns.
- Model Agnostic: Works with OpenAI, Anthropic, Azure, AWS Bedrock, Google, and Ollama.
- Composable: Chain workflow patterns together for complex behaviors (a sketch follows this list).
- Human-in-the-Loop: Built-in support for human input and workflow signaling.
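Composability in practice means that a workflow pattern wraps ordinary agents and can itself be driven, or nested inside another pattern, like any single LLM. Here is a minimal sketch of the parallel fan-out/fan-in pattern; it assumes the ParallelLLM class at the module path shown, and the agent instructions are illustrative, so verify the names against your installed version:

import asyncio

from mcp_agent.app import MCPApp
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM
from mcp_agent.workflows.parallel.parallel_llm import ParallelLLM

app = MCPApp(name="grader_app")

async def main():
    async with app.run():
        # Fan-out: each reviewer examines the text independently, in parallel
        proofreader = Agent(
            name="proofreader",
            instruction="Review the text for grammar and spelling.",
            server_names=[],
        )
        fact_checker = Agent(
            name="fact_checker",
            instruction="Check the text for factual errors.",
            server_names=[],
        )
        # Fan-in: one agent aggregates the parallel reviews into a report
        grader = Agent(
            name="grader",
            instruction="Combine the reviews into a single graded report.",
            server_names=[],
        )

        parallel = ParallelLLM(
            fan_in_agent=grader,
            fan_out_agents=[proofreader, fact_checker],
            llm_factory=OpenAIAugmentedLLM,
        )

        # A ParallelLLM exposes the same generate_str interface as a
        # single agent's LLM, so it can be driven (or nested inside
        # another pattern) like any other AugmentedLLM
        result = await parallel.generate_str("Grade this short story: ...")
        print(result)

if __name__ == "__main__":
    asyncio.run(main())

Because the pattern exposes the same interface as a single agent's LLM, the output of one pattern can feed directly into another, which is what makes the patterns chainable.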
As more services become MCP-aware, you can use mcp-agent to build agents that leverage these tools out-of-the-box. mcp-agent is the only framework purpose-built for MCP, making it the simplest way to build robust agent applications.
Here’s a simple agent that can read files and fetch web content:
import asyncio

from mcp_agent.app import MCPApp
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM

app = MCPApp(name="finder_agent")

async def main():
    async with app.run() as mcp_agent_app:
        # Create an agent with access to fetch and filesystem servers
        finder_agent = Agent(
            name="finder",
            instruction="You can read local files or fetch URLs. Return the requested information when asked.",
            server_names=["fetch", "filesystem"],
        )

        async with finder_agent:
            # Attach an LLM to the agent
            llm = await finder_agent.attach_llm(OpenAIAugmentedLLM)

            # Ask the agent to read a file
            result = await llm.generate_str("Show me what's in README.md")
            print(result)

            # Multi-turn conversation
            result = await llm.generate_str("Summarize that in a tweet")
            print(result)

if __name__ == "__main__":
    asyncio.run(main())
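The names in server_names resolve through the app's configuration: mcp-agent reads an mcp_agent.config.yaml (plus an mcp_agent.secrets.yaml for API keys) alongside your script, or you can construct settings in code. Below is a minimal programmatic sketch; it assumes the Settings classes exported by mcp_agent.config and uses uvx/npx to launch the reference fetch and filesystem servers, so adjust the commands to your environment:

from mcp_agent.app import MCPApp
from mcp_agent.config import MCPServerSettings, MCPSettings, Settings

# Map the names agents use ("fetch", "filesystem") to launch commands
settings = Settings(
    execution_engine="asyncio",
    mcp=MCPSettings(
        servers={
            "fetch": MCPServerSettings(
                command="uvx",
                args=["mcp-server-fetch"],
            ),
            "filesystem": MCPServerSettings(
                command="npx",
                args=["-y", "@modelcontextprotocol/server-filesystem", "."],
            ),
        }
    ),
)

# Passing settings directly takes the place of mcp_agent.config.yaml
app = MCPApp(name="finder_agent", settings=settings)

The same decoupling applies to models: to target a different provider, attach a different AugmentedLLM class (for example, AnthropicAugmentedLLM from mcp_agent.workflows.llm.augmented_llm_anthropic) instead of OpenAIAugmentedLLM; the rest of the agent code is unchanged.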