Make sure you have installed mcp-agent before proceeding.

Overview

We'll create a "finder" agent that has access to two MCP servers:
  • Fetch Server: For retrieving web content
  • Filesystem Server: For reading local files

Step 1: Set Up Your Project

1. Create a new directory:

mkdir my-first-agent
cd my-first-agent

2. Install dependencies:

uv init
uv add "mcp-agent[openai]"
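
Optionally, sanity-check that the dependency resolved. This is a hypothetical helper script (not part of mcp-agent); it only relies on the same import that main.py uses later:

check_install.py

# check_install.py — optional: confirm mcp-agent is importable in this project
from mcp_agent.app import MCPApp  # same import used in main.py below

print("mcp-agent is importable")

Run it with uv run check_install.py and delete it afterwards if you like.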

Step 2: Configure Your Agent

Create two configuration files:

mcp_agent.config.yaml

execution_engine: asyncio
logger:
  transports: [console]
  level: info

mcp:
  servers:
    fetch:
      command: "uvx"
      args: ["mcp-server-fetch"]
      description: "Fetch content from URLs"
    filesystem:
      command: "npx"
      args: ["-y", "@modelcontextprotocol/server-filesystem", "."]
      description: "Read and write local files"

openai:
  default_model: gpt-4o

mcp_agent.secrets.yaml

openai:
  api_key: "your-openai-api-key-here"

Replace "your-openai-api-key-here" with your actual OpenAI API key. You can get one from the OpenAI platform.
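
Before running anything, you can confirm the key was actually filled in. This is a hypothetical helper (not part of mcp-agent); it assumes PyYAML is importable in the project environment, which it normally is since mcp-agent reads these YAML files itself:

check_secrets.py

# check_secrets.py — hypothetical helper: fail fast if the API key is missing
import yaml

with open("mcp_agent.secrets.yaml") as f:
    secrets = yaml.safe_load(f) or {}

key = (secrets.get("openai") or {}).get("api_key", "")
if not key or key == "your-openai-api-key-here":
    raise SystemExit("Add your OpenAI API key to mcp_agent.secrets.yaml first.")
print("OpenAI API key is set.")

Run it with uv run check_secrets.py.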

Step 3: Create Your Agent

Create a file called main.py:
import asyncio
from mcp_agent.app import MCPApp
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM

# Create the MCP app
app = MCPApp(name="finder_agent")

async def main():
    async with app.run() as mcp_agent_app:
        logger = mcp_agent_app.logger

        # Create an agent with access to fetch and filesystem servers
        finder_agent = Agent(
            name="finder",
            instruction="""You can read local files or fetch URLs.
                Return the requested information when asked.""",
            server_names=["fetch", "filesystem"]
        )

        async with finder_agent:
            # List available tools
            list_tools_result = await finder_agent.list_tools()
            logger.info("Available tools:", data=[tool.name for tool in list_tools_result.tools])

            # Attach an OpenAI LLM to the agent
            llm = await finder_agent.attach_llm(OpenAIAugmentedLLM)

            # Example 1: Create and read a local file
            print("\n🔍 Reading local file...")
            result = await llm.generate_str(
                "Create a simple README.md file, then show me its contents"
            )
            print(f"📄 Result: {result}")

            # Example 2: Fetch web content
            print("\n🌐 Fetching web content...")
            result = await llm.generate_str(
                "Fetch the first two paragraphs from https://www.anthropic.com/research/building-effective-agents"
            )
            print(f"📰 Result: {result}")

            # Example 3: Multi-turn conversation (reuses the same llm, so context carries over)
            print("\n💬 Multi-turn conversation...")
            result = await llm.generate_str(
                "Summarize that content in a 140-character tweet"
            )
            print(f"🐦 Tweet: {result}")

if __name__ == "__main__":
    asyncio.run(main())

Step 4: Run Your Agent

uv run main.py

Expected Output

You should see output similar to this:
  πŸ” Reading local file...
  πŸ“„ Result: I've created a README.md file with basic project information. Here's its contents:

  # My First Agent

  This is a simple mcp-agent that can read files and fetch web content.

  ## Features
  - File system access
  - Web content fetching
  - Multi-turn conversations

  🌐 Fetching web content...
  πŸ“° Result: According to Anthropic's research on building effective agents,
  there are several key patterns for creating robust AI systems...

  πŸ’¬ Multi-turn conversation...
  🐦 Tweet: Anthropic's research reveals key patterns for building effective AI agents:
  parallel processing, routing, and human feedback loops. #AI #Agents

What Just Happened?

  • Agent Creation: You created an agent with specific instructions and access to two MCP servers.
  • Tool Discovery: The agent automatically discovered the available tools from the connected MCP servers.
  • LLM Integration: You attached an OpenAI LLM that can call those discovered tools.
  • Multi-turn Chat: The agent maintained conversation context across multiple interactions (see the sketch below).
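
The last point is worth unpacking: in main.py, the tweet prompt never mentions the article, yet it works because the same llm object that fetched the content also handles the follow-up, so the earlier exchange is still in its conversation history. Here is a minimal sketch of that pattern on its own, reusing only the APIs already shown above (the agent name and prompts are made up for illustration):

history_demo.py

# history_demo.py — minimal sketch: multi-turn context lives on the attached LLM
import asyncio

from mcp_agent.app import MCPApp
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM

app = MCPApp(name="history_demo")

async def main():
    async with app.run():
        agent = Agent(
            name="chatty",
            instruction="Answer concisely.",
            server_names=["fetch"],
        )
        async with agent:
            llm = await agent.attach_llm(OpenAIAugmentedLLM)

            # First turn: the agent can call the fetch server's tools.
            await llm.generate_str(
                "Fetch https://example.com and describe it in one sentence."
            )

            # Second turn: no URL is given; the prompt relies on the history
            # kept by this llm instance from the first turn.
            print(await llm.generate_str("Now rephrase that sentence as a question."))

if __name__ == "__main__":
    asyncio.run(main())

If you attached a fresh LLM (or created a new agent) for the second prompt, that history would be empty and the model would have nothing to rephrase.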

Next Steps

Check out the examples directory for 30+ working examples covering different use cases and patterns.