This documentation was generated by CodeBoarding to provide comprehensive insights into the LLM and external tooling layer.

Overview

The LLM & External Tooling component is central to the mcp_agent project, providing the foundational capabilities for interacting with various Large Language Models (LLMs) and embedding services. It embodies the project’s architectural bias towards abstraction, extensibility, and modularity, allowing the framework to seamlessly integrate with diverse AI providers and leverage their unique strengths.

Core Components

AugmentedLLM

This is the core abstract interface for all Large Language Model (LLM) interactions. It defines a unified API for sending requests to and receiving responses from various LLM providers, abstracting away provider-specific details. Key features:
  • Unified LLM API
  • Provider abstraction
  • Request/response handling
  • Cross-provider compatibility
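The pattern above can be sketched as an abstract base class that concrete providers subclass. This is a minimal illustration; the class and method names (`generate`, `EchoLLM`) are assumptions, not the actual mcp_agent API.

```python
from abc import ABC, abstractmethod


class AugmentedLLM(ABC):
    """Hypothetical unified interface for LLM providers."""

    @abstractmethod
    def generate(self, prompt: str, **kwargs) -> str:
        """Send a request to the underlying provider and return its text response."""


class EchoLLM(AugmentedLLM):
    """Trivial concrete implementation used only for illustration."""

    def generate(self, prompt: str, **kwargs) -> str:
        return f"echo: {prompt}"
```

Callers depend only on the abstract `generate` signature, so swapping providers requires no changes to calling code.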

EmbeddingModel

An abstract interface for generating numerical embeddings from text. It provides a consistent way to interact with different embedding service providers. Key features:
  • Text embedding generation
  • Provider-agnostic interface
  • Consistent API across providers
  • Numerical representation of text
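A provider-agnostic embedding interface might look like the following sketch. The `embed` signature and the toy implementation are illustrative assumptions; a real provider would call its API instead of hashing characters.

```python
from abc import ABC, abstractmethod


class EmbeddingModel(ABC):
    """Hypothetical abstract interface for text-embedding providers."""

    @abstractmethod
    def embed(self, texts: list[str]) -> list[list[float]]:
        """Return one fixed-dimension vector per input text."""


class ToyEmbedding(EmbeddingModel):
    """Deterministic toy embedding, for illustration only."""

    dim = 4

    def embed(self, texts: list[str]) -> list[list[float]]:
        vectors = []
        for text in texts:
            v = [float(ord(c) % 7) for c in text[: self.dim]]
            v += [0.0] * (self.dim - len(v))  # pad to a fixed dimension
            vectors.append(v)
        return vectors
```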

ModelSelector

Responsible for dynamically selecting the most appropriate LLM based on predefined criteria such as cost, latency, or specific model capabilities. Key features:
  • Dynamic model selection
  • Cost optimization
  • Latency optimization
  • Capability-based selection
  • Multi-criteria decision making
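Multi-criteria selection can be sketched as a weighted score over normalized cost and latency, filtered by required capabilities. The `ModelInfo` fields and weights here are assumptions for illustration, not ModelSelector's actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class ModelInfo:
    name: str
    cost_per_1k: float  # USD per 1k tokens (illustrative unit)
    latency_ms: float
    capabilities: set = field(default_factory=set)


def select_model(models, required_caps, cost_weight=0.5, latency_weight=0.5):
    """Pick the model with the lowest weighted cost/latency score
    among those that satisfy every required capability."""
    candidates = [m for m in models if required_caps <= m.capabilities]
    if not candidates:
        raise ValueError("no model satisfies the required capabilities")
    max_cost = max(m.cost_per_1k for m in candidates)
    max_lat = max(m.latency_ms for m in candidates)

    def score(m):
        # Normalize each criterion to [0, 1] before weighting.
        return (cost_weight * (m.cost_per_1k / max_cost)
                + latency_weight * (m.latency_ms / max_lat))

    return min(candidates, key=score)
```

Shifting the weights changes the winner: with `cost_weight=1.0, latency_weight=0.0` the cheapest capable model is chosen regardless of speed.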

Multipart Converters

These modules convert diverse content types (e.g., text, images, tool calls) between the internal Model Context Protocol (MCP) format and the specific input/output formats required by different LLM providers. Key features:
  • Content type conversion
  • MCP format standardization
  • Provider-specific formatting
  • Multi-modal content handling
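A converter of this kind is essentially a per-content-type mapping. The sketch below translates hypothetical MCP-style parts into an OpenAI-chat-style content list; the exact field names (`mimeType`, `image_url`) are assumptions about the two formats, not a verified mapping.

```python
def mcp_to_openai_style(parts: list[dict]) -> list[dict]:
    """Convert MCP-style content parts (assumed shape) to an
    OpenAI-chat-style content list (assumed shape)."""
    converted = []
    for part in parts:
        if part["type"] == "text":
            converted.append({"type": "text", "text": part["text"]})
        elif part["type"] == "image":
            # Images are assumed to arrive as base64 data plus a MIME type.
            url = f"data:{part['mimeType']};base64,{part['data']}"
            converted.append({"type": "image_url", "image_url": {"url": url}})
        else:
            raise ValueError(f"unsupported content part type: {part['type']}")
    return converted
```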

Provider-Specific AugmentedLLM Implementations

Concrete implementations of the AugmentedLLM abstract class for specific LLM providers (e.g., Anthropic, OpenAI, Google, Azure, Bedrock, Ollama). These classes contain the actual logic for making API calls to their respective LLM services. Key features:
  • Provider-specific API integration
  • Authentication handling
  • Request formatting
  • Response parsing
  • Error handling
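A provider-specific implementation typically bundles authentication, request formatting, response parsing, and error handling, as in this sketch. The endpoint, payload fields, and class name are all invented for illustration; no real provider API is being described.

```python
class ExampleProviderLLM:
    """Illustrative provider subclass; endpoint and field names are assumptions."""

    def __init__(self, api_key: str, transport=None):
        self.api_key = api_key
        # The transport is injectable so the class can be tested without a network.
        self.transport = transport or self._http_post

    def generate(self, prompt: str) -> str:
        # Request formatting: build the provider's expected payload.
        payload = {"prompt": prompt, "max_tokens": 256}
        # Authentication handling: attach credentials as a bearer token.
        headers = {"Authorization": f"Bearer {self.api_key}"}
        raw = self.transport("https://api.example.com/v1/generate", payload, headers)
        # Error handling: surface provider-reported failures.
        if "error" in raw:
            raise RuntimeError(raw["error"])
        # Response parsing: extract the text field from the raw response.
        return raw["text"]

    @staticmethod
    def _http_post(url, payload, headers):
        raise NotImplementedError("wire up a real HTTP client here")
```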

Provider-Specific EmbeddingModel Implementations

Concrete implementations of the EmbeddingModel abstract class for specific embedding providers (e.g., Cohere, OpenAI). They handle the actual API calls to generate embeddings. Key features:
  • Provider-specific embedding APIs
  • Vector generation
  • Batch processing
  • Dimension handling
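Batch processing in an embedding client usually means splitting the input list into chunks before each API call, since providers commonly cap inputs per request. A minimal sketch, with the helper name and batch size as assumptions:

```python
def embed_in_batches(texts: list[str], embed_fn, batch_size: int = 16) -> list[list[float]]:
    """Call a provider embedding function in fixed-size batches,
    preserving the original input order in the returned vectors."""
    vectors = []
    for start in range(0, len(texts), batch_size):
        batch = texts[start:start + batch_size]
        vectors.extend(embed_fn(batch))
    return vectors
```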

Integration Components

Agent

Represents the core intelligent entity within the framework, responsible for understanding tasks, making decisions, and executing actions, often by interacting with LLMs and external tools.

Orchestrator

Manages and coordinates complex, multi-step workflows, often involving multiple LLM calls, tool uses, and interactions between different agents.
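At its simplest, such coordination is a loop that threads a shared context through an ordered list of steps, where each step may call an LLM, invoke a tool, or hand off to another agent. A minimal sketch (the step/context shapes are assumptions, not the Orchestrator's real interface):

```python
def run_workflow(steps, context: dict) -> dict:
    """Execute workflow steps in order; each step receives the accumulated
    context and returns an updated context for the next step."""
    for step in steps:
        context = step(context)
    return context
```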

Router

Intelligently directs incoming requests or internal queries to the most appropriate LLM or embedding model/service based on context, intent, or other routing criteria.
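One common embedding-based routing strategy is to compare the query's embedding against a precomputed embedding per route and pick the most similar. A minimal sketch of that idea, assuming vectors are already available:

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def route(query_vec: list[float], route_vecs: dict[str, list[float]]) -> str:
    """Return the route whose embedding is most similar to the query's."""
    return max(route_vecs, key=lambda name: cosine(query_vec, route_vecs[name]))
```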

IntentClassifier

Determines the underlying intent of a user query or system state, leveraging either LLMs or embedding models for classification.