Marcus Kazmierczak


Create an IMDb Query Agent with PydanticAI

After building my IMDb MCP server using FastMCP, I wanted to compare it to another approach using PydanticAI. The MCP architecture is great for creating reusable tools that multiple agents can access, but what if you just want a simple, self-contained application? PydanticAI fit the bill nicely.

What is PydanticAI?

PydanticAI is a Python framework from the makers of Pydantic for building production-grade AI agents. It provides a simple, type-safe interface for creating agents with tools (functions the LLM can call) and handles all the orchestration internally. PydanticAI can also connect to MCP servers, or be used within them, but the goal here was to keep things simple.

For standalone apps that need tool calling, a PydanticAI agent is simpler: no external coordinators, no configuration files, just Python.

The Same Tool, Two Approaches

Both implementations provide the same six tools for querying IMDb data:

  1. query_movies_by_director - Find movies by director
  2. query_movies_by_actor - Find movies by actor/actress
  3. search_movies - Search with filters (year, rating, genre)
  4. get_movie_details - Get comprehensive movie info
  5. top_rated_movies - Get highest rated movies
  6. execute_sql_query - Run custom SQL queries

Same database, same queries, different architecture. Let's compare.
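Under the hood, each tool boils down to a SQL query against the SQLite database. Here's a self-contained sketch of the kind of query `query_movies_by_director` runs, using a hypothetical, simplified `movies(title, director, rating)` schema (the real database layout differs):

```python
import sqlite3

# Hypothetical, simplified schema for illustration only;
# the real IMDb database is more normalized.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE movies (title TEXT, director TEXT, rating REAL)")
conn.executemany(
    "INSERT INTO movies VALUES (?, ?, ?)",
    [
        ("Inception", "Christopher Nolan", 8.8),
        ("Tenet", "Christopher Nolan", 7.3),
        ("Heat", "Michael Mann", 8.3),
    ],
)

def query_movies_by_director(director_name: str, limit: int = 20) -> list[tuple]:
    """Return (title, rating) rows for a director, best-rated first."""
    return conn.execute(
        "SELECT title, rating FROM movies"
        " WHERE director = ? ORDER BY rating DESC LIMIT ?",
        (director_name, limit),
    ).fetchall()

print(query_movies_by_director("Christopher Nolan"))
# [('Inception', 8.8), ('Tenet', 7.3)]
```

Both implementations share this query layer; the architectures differ only in how the LLM gets to call it.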

Code Comparison

Tool Definition: MCP vs PydanticAI

FastMCP approach:

from fastmcp import FastMCP

mcp = FastMCP("imdb-server")

@mcp.tool()
def query_movies_by_director(
    director_name: str,
    order_by: str = "rating_desc",
    limit: int = 20
) -> str:
    """Find movies directed by a specific director, optionally ordered by rating

    Args:
        director_name: Name of the director to search for
        order_by: How to order the results
        limit: Maximum number of results to return
    """
    conn = get_db_connection()
    # ... SQL query code ...
    return formatted_results

PydanticAI approach:

from pydantic_ai import Agent, RunContext
from pydantic import BaseModel

class DatabaseDeps(BaseModel):
    db_path: str = "imdb.db"

agent = Agent(
    "anthropic:claude-sonnet-4-0",
    deps_type=DatabaseDeps,
    system_prompt="You are an IMDb movie database expert..."
)

@agent.tool
def query_movies_by_director(
    ctx: RunContext[DatabaseDeps],
    director_name: str,
    order_by: str = "rating_desc",
    limit: int = 20,
) -> str:
    """Find movies directed by a specific director."""
    conn = get_db_connection(ctx.deps.db_path)
    # ... SQL query code ...
    return formatted_results

The main differences:

  • Decorator: @mcp.tool() vs @agent.tool
  • Dependencies: PydanticAI uses RunContext for dependency injection
  • Model: PydanticAI requires specifying the LLM model upfront

Running the Agent

MCP Server (requires mcphost):

# Install the Go-based coordinator
go install github.com/mark3labs/mcphost@latest

# Create mcp_config.json configuration
# Run with mcphost coordinating between server and LLM
mcphost -m openai-gpt4 --config mcp_config.json

PydanticAI (standalone):

# Set API key
export ANTHROPIC_API_KEY="your-key"

# Run directly
python agent.py "What are Christopher Nolan's top movies?"

The PydanticAI version is refreshingly simple. No configuration files, no external tools written in other languages, just Python and your API key.

Architecture Differences

MCP: Client-Server Model

The MCP approach creates a server that exposes tools. Any MCP-compatible client can connect to your server and use those tools. A separate coordinator, mcphost, must also run to broker between the MCP server and the LLM.

Advantages:

  • Reusable across multiple agents/applications
  • Tools can be discovered and used by any MCP client
  • Good separation of concerns (tool server vs LLM client)
  • Can serve multiple clients simultaneously

Disadvantages:

  • More complex setup
  • Requires external coordinator (mcphost)
  • Configuration overhead
  • Overkill for simple standalone apps

PydanticAI: Monolithic Application

PydanticAI creates a self-contained application where the agent, tools, and LLM integration are all in one place.

Advantages:

  • Simpler setup and deployment
  • Pure Python (no external tools needed)
  • Easier to debug and reason about
  • Better for standalone applications
  • Type-safe dependency injection

Disadvantages:

  • Tools aren't reusable by other agents
  • Can't easily share tools across projects
  • Monolithic (agent and tools are coupled)

When to Use Each Approach

Choose MCP Server when:

  • You want to create reusable tools for multiple agents
  • You're building a tool ecosystem that different LLMs/applications will access
  • You need tools to run as a separate service
  • You want Claude Code, other IDEs, or multiple applications to use your tools

Choose PydanticAI when:

  • You're building a standalone application
  • You want the simplest possible setup
  • You don't need to share tools with other agents
  • You want everything in pure Python
  • You're prototyping or building personal tools

For my use case—a personal IMDb query tool—PydanticAI is the clear winner. It does exactly what I need with minimal complexity.

Dependency Injection in PydanticAI

One nice feature of PydanticAI is its dependency injection system. You define a Pydantic model for your dependencies:

class DatabaseDeps(BaseModel):
    db_path: str = "imdb.db"

Then every tool gets a RunContext[DatabaseDeps] parameter that provides type-safe access to those dependencies:

@agent.tool
def query_movies_by_director(
    ctx: RunContext[DatabaseDeps],
    director_name: str,
    ...
) -> str:
    conn = get_db_connection(ctx.deps.db_path)
    # Use the connection...

This is cleaner than global variables or closures, and you can get full type checking from mypy/pyright.
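Because DatabaseDeps is a plain Pydantic model, swapping dependencies at run time is just a matter of constructing a different instance, which is handy for pointing the agent at a test database (the `test.db` path here is purely illustrative):

```python
from pydantic import BaseModel


class DatabaseDeps(BaseModel):
    db_path: str = "imdb.db"


prod_deps = DatabaseDeps()                   # uses the default imdb.db
test_deps = DatabaseDeps(db_path="test.db")  # point the tools at a test database

print(prod_deps.db_path)  # imdb.db
print(test_deps.db_path)  # test.db

# At run time you'd hand the instance to the agent:
# agent.run_sync("What are Christopher Nolan's top movies?", deps=test_deps)
```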

The Verdict

Both versions call the appropriate tools and return similar results. The PydanticAI version might be slightly faster since there's no mcphost intermediary, but the difference is negligible.

For reusable tool ecosystems: Use MCP. The extra complexity pays off when you want Claude Code, multiple agents, or different applications accessing your tools.

For standalone applications: Use PydanticAI. It's simpler, pure Python, and gets out of your way.

I'm keeping both implementations in the mymdb repository:

  • Main branch: PydanticAI implementation (current default)
  • mcp-server branch: FastMCP implementation

Both solve the same problem differently. Choose the one that fits your use case.