The LLM & External Tooling component is central to the mcp_agent project, providing the foundational capabilities for interacting with various Large Language Models (LLMs) and embedding services. It embodies the project’s architectural bias towards abstraction, extensibility, and modularity, allowing the framework to seamlessly integrate with diverse AI providers and leverage their unique strengths.
AugmentedLLM is the core abstract interface for all Large Language Model (LLM) interactions. It defines a unified API for sending requests to and receiving responses from various LLM providers, abstracting away provider-specific details.
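A minimal sketch of what such an interface might look like (the method names and the toy `EchoLLM` provider below are illustrative assumptions, not the project's actual API):

```python
import asyncio
from abc import ABC, abstractmethod


class AugmentedLLM(ABC):
    """Provider-agnostic LLM interface (hypothetical sketch)."""

    @abstractmethod
    async def generate(self, prompt: str) -> str:
        """Send a request to the underlying provider and return its reply."""


class EchoLLM(AugmentedLLM):
    """Toy concrete provider, showing how an implementation slots in."""

    async def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"


result = asyncio.run(EchoLLM().generate("hello"))
print(result)  # echo: hello
```

Callers program against the abstract class, so swapping providers is a constructor change rather than a rewrite.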
EmbeddingModel is an abstract interface for generating numerical embeddings from text. It provides a consistent way to interact with different embedding service providers.
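A sketch of the same pattern for embeddings, with a deterministic stand-in implementation so the example runs offline (class and method names are assumptions, not the real API):

```python
from abc import ABC, abstractmethod
from typing import List


class EmbeddingModel(ABC):
    """Abstract text-embedding interface (hypothetical sketch)."""

    @abstractmethod
    def embed(self, texts: List[str]) -> List[List[float]]:
        """Return one embedding vector per input string."""


class HashEmbedding(EmbeddingModel):
    """Deterministic stand-in: buckets character codes into a small vector."""

    def __init__(self, dim: int = 4) -> None:
        self.dim = dim

    def embed(self, texts: List[str]) -> List[List[float]]:
        vectors = []
        for text in texts:
            vec = [0.0] * self.dim
            for i, ch in enumerate(text):
                vec[i % self.dim] += ord(ch)
            vectors.append(vec)
        return vectors


vecs = HashEmbedding().embed(["abc", "de"])
print(vecs[0])  # [97.0, 98.0, 99.0, 0.0]
```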
The model selector dynamically chooses the most appropriate LLM based on predefined criteria such as cost, latency, or specific model capabilities.
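The selection logic can be sketched as a scored lookup over model metadata (the model names, prices, and latencies below are made-up illustrations):

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ModelInfo:
    name: str
    cost_per_1k_tokens: float  # USD, illustrative numbers only
    latency_ms: float


MODELS = [
    ModelInfo("small-fast", 0.05, 120.0),
    ModelInfo("medium", 0.20, 80.0),
    ModelInfo("large-smart", 0.60, 900.0),
]


def select_model(models: List[ModelInfo], prefer: str = "cost") -> ModelInfo:
    """Pick the cheapest or lowest-latency model from a candidate list."""
    key = (lambda m: m.cost_per_1k_tokens) if prefer == "cost" else (lambda m: m.latency_ms)
    return min(models, key=key)


print(select_model(MODELS, prefer="cost").name)     # small-fast
print(select_model(MODELS, prefer="latency").name)  # medium
```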
These modules convert diverse content types (e.g., text, images, tool calls) between the internal Model Context Protocol (MCP) format and the specific input/output formats required by each LLM provider.
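The conversion step can be sketched as a per-part translation; both message shapes below are simplified assumptions, not the real MCP or provider schemas:

```python
from typing import Dict, List


def mcp_to_provider(messages: List[Dict]) -> List[Dict]:
    """Translate simplified MCP-style content parts into a hypothetical
    provider chat format, one content part at a time."""
    converted = []
    for msg in messages:
        parts = []
        for part in msg["content"]:
            if part["type"] == "text":
                parts.append({"type": "text", "text": part["text"]})
            elif part["type"] == "image":
                parts.append({"type": "image_url",
                              "image_url": {"url": part["uri"]}})
        converted.append({"role": msg["role"], "content": parts})
    return converted


mcp_msgs = [{"role": "user",
             "content": [{"type": "text", "text": "describe this"},
                         {"type": "image", "uri": "https://example.com/cat.png"}]}]
out = mcp_to_provider(mcp_msgs)
print(out[0]["content"][1]["type"])  # image_url
```

A symmetric function in the other direction lets provider responses flow back into the MCP-internal representation.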
Concrete implementations of the AugmentedLLM abstract class for specific LLM providers (e.g., Anthropic, OpenAI, Google, Azure, Bedrock, Ollama). These classes contain the logic for making API calls to their respective LLM services.
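A provider subclass typically translates the generic call into the provider's request shape and hands it to an SDK or HTTP client. In this sketch the transport is stubbed so it stays runnable; the class name, payload fields, and model string are all invented for illustration:

```python
from abc import ABC, abstractmethod


class AugmentedLLM(ABC):
    @abstractmethod
    def generate(self, prompt: str) -> str: ...


class FakeProviderLLM(AugmentedLLM):
    """Illustrative provider subclass. A real one would invoke the
    provider's SDK; here the transport is a stub returning a canned reply."""

    def __init__(self, model: str, transport=None):
        self.model = model
        # `transport` stands in for an HTTP client or SDK session.
        self.transport = transport or (
            lambda payload: {"completion": f"reply to {payload['prompt']}"})

    def generate(self, prompt: str) -> str:
        payload = {"model": self.model, "prompt": prompt}  # provider-specific shape
        return self.transport(payload)["completion"]


llm = FakeProviderLLM(model="fake-model")
print(llm.generate("hi"))  # reply to hi
```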
Concrete implementations of the EmbeddingModel abstract class for specific embedding providers (e.g., Cohere, OpenAI). They handle the API calls that generate embeddings.
The agent is the core intelligent entity within the framework, responsible for understanding tasks, making decisions, and executing actions, often by interacting with LLMs and external tools.
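The decide-then-act loop can be sketched as follows; the naive keyword dispatch and the stubbed LLM callable are assumptions made purely for illustration:

```python
from typing import Callable, Dict


class Agent:
    """Hypothetical sketch: answer via a matching tool when one exists,
    otherwise fall back to a (stubbed) LLM call."""

    def __init__(self, llm: Callable[[str], str],
                 tools: Dict[str, Callable[[str], str]]):
        self.llm = llm
        self.tools = tools

    def run(self, task: str) -> str:
        for name, tool in self.tools.items():
            if name in task:  # naive intent check, for illustration only
                return tool(task)
        return self.llm(task)


agent = Agent(
    llm=lambda q: f"llm-answer({q})",
    tools={"add": lambda q: str(sum(int(t) for t in q.split() if t.isdigit()))},
)
print(agent.run("add 2 and 3"))  # 5
print(agent.run("tell a joke"))  # llm-answer(tell a joke)
```

In practice the tool-selection step would itself be delegated to the LLM rather than a substring match.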
The orchestrator manages and coordinates complex, multi-step workflows, often involving multiple LLM calls, tool uses, and interactions between different agents.
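At its simplest, orchestration is a pipeline where each step consumes the previous step's output; this linear sketch (step names are placeholders) omits the branching, retries, and parallel fan-out a real orchestrator would add:

```python
from typing import Callable, List


def orchestrate(task: str, steps: List[Callable[[str], str]]) -> str:
    """Run a linear workflow: each step (an LLM call, tool use, or agent)
    receives the previous step's output."""
    result = task
    for step in steps:
        result = step(result)
    return result


pipeline = [
    lambda t: f"plan({t})",      # planner agent
    lambda t: f"execute({t})",   # worker agent / tool call
    lambda t: f"review({t})",    # reviewer LLM pass
]
print(orchestrate("ship feature", pipeline))
# review(execute(plan(ship feature)))
```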
The router intelligently directs incoming requests or internal queries to the most appropriate LLM or embedding model/service based on context, intent, or other routing criteria.
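Routing reduces to classify-then-dispatch. In this sketch the classifier is a keyword check for simplicity, but the same shape works when the classifier is itself an LLM or embedding model (handler names are invented):

```python
from typing import Callable, Dict


def route(query: str, handlers: Dict[str, Callable[[str], str]],
          classify: Callable[[str], str]) -> str:
    """Dispatch a query to the handler chosen by a classification function."""
    category = classify(query)
    return handlers.get(category, handlers["default"])(query)


handlers = {
    "code": lambda q: f"code-model({q})",
    "default": lambda q: f"general-model({q})",
}


def keyword_classify(q: str) -> str:
    return "code" if "python" in q.lower() else "default"


print(route("Write Python to sort a list", handlers, keyword_classify))
# code-model(Write Python to sort a list)
```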
The intent classifier determines the underlying intent of a user query or system state, leveraging either LLMs or embedding models for classification.
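The embedding-based variant can be sketched as nearest-neighbor search over example intents: embed the query, then pick the intent whose reference embedding has the highest cosine similarity (the two-dimensional vectors below stand in for real embeddings):

```python
import math
from typing import Dict, List


def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def classify_intent(query_vec: List[float],
                    intent_vecs: Dict[str, List[float]]) -> str:
    """Return the intent whose reference embedding is closest to the query."""
    return max(intent_vecs, key=lambda name: cosine(query_vec, intent_vecs[name]))


# Pretend these vectors came from an EmbeddingModel implementation.
intents = {"greeting": [1.0, 0.0], "farewell": [0.0, 1.0]}
print(classify_intent([0.9, 0.1], intents))  # greeting
```

The LLM-based variant replaces the similarity search with a prompt asking the model to pick one label from a fixed set.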