Lesson 9: Introduction to MCP

Topics Covered
  • Tool Calling vs MCP: What's the difference and why does it matter?
  • Which LLMs Support Tools: Not all models can call functions—here's how to check.
  • What is MCP: The protocol that standardizes tool connections.
  • MCP Architecture: Servers, clients, and transports.
  • Building MCP Servers: Create tools that work everywhere.
  • Using MCP Clients: Connect to existing MCP servers.
  • Real-World Examples: File system, databases, APIs.

In Lesson 8, you learned to give LLMs the ability to call functions. But those tools were tied to your specific application. What if you could write a tool once and use it in Claude Desktop, VS Code, your custom app, and anywhere else? That's what Model Context Protocol (MCP) enables.

1. First: Can Your LLM Even Call Tools?

Before we dive into MCP, let's address a fundamental question: not all LLMs can call tools.

What Makes an LLM "Tool-Capable"?

Tool calling is a trained capability. The model must have been specifically trained to:

  1. Recognize when a tool would help
  2. Output structured tool call requests (not just text)
  3. Incorporate tool results into responses
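
Point 2 is the crux: a tool-capable model responds with a structured request instead of prose. In OpenAI's chat format, that looks roughly like this (the id and arguments here are illustrative):

{
  "role": "assistant",
  "content": null,
  "tool_calls": [
    {
      "id": "call_abc123",
      "type": "function",
      "function": {
        "name": "get_weather",
        "arguments": "{\"city\": \"Tokyo\"}"
      }
    }
  ]
}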

How to Check If a Model Supports Tools

Method 1: Check the API documentation

# OpenAI - look for "tools" or "function_calling" in docs
# If the model supports it, you can pass the tools parameter

# Claude - check "tool_use" capability
# https://docs.anthropic.com/en/docs/build-with-claude/tool-use

# Ollama - check model card for "Tools" tag
# ollama show llama3.1

Method 2: Try it and check for errors

from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "test_tool",
        "description": "A test tool",
        "parameters": {"type": "object", "properties": {}}
    }
}]

try:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # Try different models
        messages=[{"role": "user", "content": "Hello"}],
        tools=tools,
    )
    print("✅ Model supports tools")
except Exception as e:
    if "tools" in str(e).lower() or "function" in str(e).lower():
        print("❌ Model does NOT support tools")
    else:
        print(f"Other error: {e}")

Method 3: Check Ollama model capabilities

# List models with their capabilities
ollama list

# Check specific model
ollama show llama3.1:70b

# Look for "Tools" in the model's capabilities
# Models like llama3.1, mistral-large, command-r+ support tools
# Models like phi, tinyllama typically don't

Tool-Capable Models Reference

| Provider      | Tool-Capable Models                           | Notes               |
|---------------|-----------------------------------------------|---------------------|
| OpenAI        | GPT-4o, GPT-4o-mini, GPT-4 Turbo              | All current models  |
| Anthropic     | Claude 4, Claude 3.5, Claude 3 (all variants) | Opus, Sonnet, Haiku |
| Google        | Gemini 1.5 Pro, Gemini 1.5 Flash              | Native support      |
| Meta (Ollama) | Llama 3.1 (8B, 70B, 405B), Llama 3.2          | Requires 3.1+       |
| Mistral       | Mistral Large, Mistral Medium, Mixtral        | Not Mistral 7B      |
| Cohere        | Command R, Command R+                         | Native support      |

Not All Models Are Equal

Even among tool-capable models, quality varies significantly. Larger models (GPT-4o, Claude Sonnet, Llama 70B+) are much more reliable at choosing the right tool and providing correct arguments than smaller models.

2. Tool Calling vs MCP: What's the Difference?

Now for the key distinction that confuses many developers:

Tool Calling (Lesson 8)

What it is: A capability of LLMs to request function executions.

How it works:

  1. You define tools in YOUR application code
  2. You pass them to the LLM API
  3. The LLM requests tool calls
  4. YOUR code executes the tools
  5. Results go back to the LLM

The limitation: Tools are bound to your specific application.

# Your app defines tools
tools = [{"type": "function", "function": {"name": "get_weather", ...}}]

# Your app implements tools
def get_weather(city): ...

# Your app runs the loop
response = client.chat.completions.create(tools=tools, ...)

MCP (Model Context Protocol)

What it is: A protocol/standard for exposing tools (and data) to LLMs.

How it works:

  1. Tools are defined in MCP Servers (separate processes)
  2. MCP Clients (apps like Claude Desktop) discover and connect to servers
  3. The client translates MCP tools into the LLM's native tool format
  4. When tools are called, the client routes to the appropriate server

The benefit: Write once, use everywhere.
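
On the wire, client and server exchange JSON-RPC messages. Tool discovery, for instance, is a tools/list request (a simplified sketch; real responses carry more metadata):

// client → server
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

// server → client (abridged)
{"jsonrpc": "2.0", "id": 1, "result": {"tools": [
  {"name": "get_weather", "description": "Get weather for a city",
   "inputSchema": {"type": "object", "properties": {"city": {"type": "string"}}}}
]}}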

Side-by-Side Comparison

| Aspect                      | Tool Calling (Lesson 8) | MCP                                    |
|-----------------------------|-------------------------|----------------------------------------|
| What is it?                 | LLM capability          | Protocol/standard                      |
| Where are tools defined?    | In your app code        | In MCP servers (separate processes)    |
| Reusability                 | Per-application         | Cross-application                      |
| Discovery                   | Hardcoded               | Dynamic (servers can be added/removed) |
| Who executes tools?         | Your app directly       | MCP client routes to MCP server        |
| Requires tool-capable LLM?  | Yes                     | Yes (MCP doesn't change this)          |
| Standard format?            | Varies by provider      | Yes (MCP specification)                |

The Key Insight

┌──────────────────────────────────────────────────────────────┐
│                                                              │
│  MCP does NOT replace tool calling.                          │
│                                                              │
│  MCP STANDARDIZES how tools are defined and connected,       │
│  but the LLM still uses its native tool calling to           │
│  interact with those tools.                                  │
│                                                              │
│  Tool Calling = The LLM's ability to request actions         │
│  MCP          = A protocol for organizing and sharing tools  │
│                                                              │
└──────────────────────────────────────────────────────────────┘

3. What is MCP?

Model Context Protocol (MCP) is an open standard created by Anthropic for connecting LLMs to external data and tools. Think of it as "USB for AI"—a standard interface that lets any compatible tool work with any compatible application.

MCP Provides Three Things

| Capability | Description                | Example                                       |
|------------|----------------------------|-----------------------------------------------|
| Tools      | Functions the LLM can call | read_file(), query_database(), send_email()   |
| Resources  | Data the LLM can access    | Files, database records, API responses        |
| Prompts    | Reusable prompt templates  | "Summarize this document", "Review this PR"   |
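
Tools get most of this lesson's attention, and Section 8 shows resources. For a taste of prompts, here is a hedged sketch using the same Python SDK that appears in Section 7 (the summarize prompt and its argument are illustrative):

from mcp.server import Server
from mcp.types import GetPromptResult, Prompt, PromptArgument, PromptMessage, TextContent

server = Server("prompts-demo")


@server.list_prompts()
async def list_prompts() -> list[Prompt]:
    """Advertise the reusable prompt templates this server offers."""
    return [
        Prompt(
            name="summarize",
            description="Summarize a document",
            arguments=[PromptArgument(name="text", description="Document text", required=True)],
        )
    ]


@server.get_prompt()
async def get_prompt(name: str, arguments: dict | None) -> GetPromptResult:
    """Expand a prompt template into concrete messages for the LLM."""
    text = (arguments or {}).get("text", "")
    return GetPromptResult(
        messages=[
            PromptMessage(
                role="user",
                content=TextContent(type="text", text=f"Summarize this document:\n\n{text}"),
            )
        ]
    )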

Why MCP Matters

Before MCP:

  • Every app implements its own tool system
  • Tools written for App A don't work in App B
  • No standard way to discover available tools
  • Duplicated effort across the ecosystem

With MCP:

  • Write a tool once, use it in any MCP-compatible app
  • Standard discovery mechanism
  • Growing ecosystem of pre-built servers
  • Separation of concerns (tools vs. application logic)

4. MCP Architecture

MCP follows a client-server architecture:

Components

| Component | Role                                   | Examples                                       |
|-----------|----------------------------------------|------------------------------------------------|
| Host      | The application users interact with    | Claude Desktop, VS Code, custom apps           |
| Client    | Maintains connections to MCP servers   | Built into the host                            |
| Server    | Exposes tools, resources, and prompts  | Filesystem server, GitHub server, Slack server |
| Transport | Communication channel                  | stdio (local), HTTP+SSE (remote)               |
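
How the pieces fit together:

Host (e.g., Claude Desktop)
 ├── LLM (uses its native tool calling)
 └── MCP Client
      ├── stdio ───────► MCP Server A (filesystem, local subprocess)
      └── HTTP+SSE ────► MCP Server B (team database, remote service)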

Transport Types

stdio (Standard I/O):

  • For local servers running on the same machine
  • Server is spawned as a subprocess
  • Communication via stdin/stdout
  • Most common for desktop applications

HTTP + Server-Sent Events (SSE):

  • For remote servers
  • Server runs as a web service
  • Client connects via HTTP
  • Good for shared/cloud servers
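
For stdio, the client simply spawns the server process (you'll see this in Sections 6 and 9). For SSE, the client connects to a URL. A hedged sketch with the Python client SDK (the endpoint is a placeholder):

# Sketch: connecting to a remote MCP server over HTTP+SSE
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main():
    # Placeholder URL; point this at a real SSE-hosted MCP server
    async with sse_client("https://example.com/mcp/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])


asyncio.run(main())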

5. Using Existing MCP Servers

Before building your own, let's use existing servers. The MCP ecosystem already has servers for many common use cases.

| Server       | What It Does                          | Repository                                  |
|--------------|---------------------------------------|---------------------------------------------|
| Filesystem   | Read/write files, search directories  | @modelcontextprotocol/server-filesystem     |
| GitHub       | Manage repos, issues, PRs             | @modelcontextprotocol/server-github         |
| PostgreSQL   | Query databases                       | @modelcontextprotocol/server-postgres       |
| Slack        | Send messages, read channels          | @modelcontextprotocol/server-slack          |
| Google Drive | Access Drive files                    | @modelcontextprotocol/server-gdrive         |
| Brave Search | Web search                            | @modelcontextprotocol/server-brave-search   |

Setting Up Claude Desktop with MCP

Claude Desktop is the easiest way to use MCP servers.

Step 1: Install an MCP server

# Install the filesystem server globally (optional: the config below
# uses `npx -y`, which can fetch the package on demand)
npm install -g @modelcontextprotocol/server-filesystem

Step 2: Configure Claude Desktop

Edit ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/yourname/Documents"
      ]
    }
  }
}

Step 3: Restart Claude Desktop

Now Claude can read and write files in your Documents folder!

Adding Multiple Servers

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/files"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_xxxxxxxxxxxx"
      }
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "DATABASE_URL": "postgresql://user:pass@localhost/mydb"
      }
    }
  }
}

6. Building Your First MCP Server

Let's build a simple MCP server that provides weather information.

Setup

mkdir weather-mcp-server
cd weather-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod

The Server Code

src/index.ts
/**
 * Weather MCP Server
 * ==================
 * A simple MCP server that provides weather tools.
 */

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Create the MCP server
const server = new McpServer({
  name: "weather-server",
  version: "1.0.0",
});

// Simulated weather data
const weatherData: Record<string, { temp: number; condition: string }> = {
  london: { temp: 12, condition: "rainy" },
  tokyo: { temp: 22, condition: "sunny" },
  "new york": { temp: 18, condition: "cloudy" },
  paris: { temp: 15, condition: "partly cloudy" },
  sydney: { temp: 25, condition: "sunny" },
};

// ─────────────────────────────────────────────────────────────────────────────
// Define Tools
// ─────────────────────────────────────────────────────────────────────────────

// Tool 1: Get current weather
server.tool(
  "get_weather",
  "Get the current weather for a city. Use this when asked about weather conditions, temperature, or forecasts.",
  {
    city: z.string().describe("City name (e.g., 'London', 'Tokyo')"),
    units: z.enum(["celsius", "fahrenheit"]).default("celsius").describe("Temperature units"),
  },
  async ({ city, units }) => {
    const cityLower = city.toLowerCase();
    const data = weatherData[cityLower];

    if (!data) {
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify({
              error: `Weather data not available for ${city}`,
              available_cities: Object.keys(weatherData),
            }),
          },
        ],
      };
    }

    const temp = units === "fahrenheit" ? (data.temp * 9) / 5 + 32 : data.temp;

    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            city,
            temperature: temp,
            units,
            condition: data.condition,
          }),
        },
      ],
    };
  }
);

// Tool 2: List available cities
server.tool(
  "list_cities",
  "List all cities with available weather data.",
  {},
  async () => {
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            cities: Object.keys(weatherData),
            count: Object.keys(weatherData).length,
          }),
        },
      ],
    };
  }
);

// Tool 3: Compare weather between cities
server.tool(
  "compare_weather",
  "Compare weather between two cities.",
  {
    city1: z.string().describe("First city"),
    city2: z.string().describe("Second city"),
  },
  async ({ city1, city2 }) => {
    const data1 = weatherData[city1.toLowerCase()];
    const data2 = weatherData[city2.toLowerCase()];

    if (!data1 || !data2) {
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify({
              error: "One or both cities not found",
              available: Object.keys(weatherData),
            }),
          },
        ],
      };
    }

    const warmer = data1.temp > data2.temp ? city1 : city2;
    const diff = Math.abs(data1.temp - data2.temp);

    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            city1: { name: city1, temp: data1.temp, condition: data1.condition },
            city2: { name: city2, temp: data2.temp, condition: data2.condition },
            comparison: {
              warmer_city: warmer,
              temperature_difference: diff,
            },
          }),
        },
      ],
    };
  }
);

// ─────────────────────────────────────────────────────────────────────────────
// Start the Server
// ─────────────────────────────────────────────────────────────────────────────

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  // Log to stderr: stdout is reserved for JSON-RPC protocol messages
  console.error("Weather MCP server running on stdio");
}

main().catch(console.error);

Package Configuration

package.json
{
  "name": "weather-mcp-server",
  "version": "1.0.0",
  "type": "module",
  "main": "dist/index.js",
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.0.0",
    "zod": "^3.22.0"
  },
  "devDependencies": {
    "typescript": "^5.3.0",
    "@types/node": "^20.0.0"
  }
}

tsconfig.json

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src/**/*"]
}

Build and Test

# Build
npm run build

# Test manually (type JSON-RPC messages)
node dist/index.js

# Configure in Claude Desktop
# Add to claude_desktop_config.json:
{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["/path/to/weather-mcp-server/dist/index.js"]
    }
  }
}
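
Two ways to poke at the server before wiring it into Claude. The MCP Inspector gives you an interactive UI; the raw pipe below spells out the "type JSON-RPC messages" step (message shapes follow the MCP spec, but treat this as a sketch):

# Option A: interactive testing with the MCP Inspector
npx @modelcontextprotocol/inspector node dist/index.js

# Option B: hand-rolled smoke test: send the initialize handshake,
# the initialized notification, then a tools/list request over stdio
(
  echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.0"}}}'
  echo '{"jsonrpc":"2.0","method":"notifications/initialized"}'
  echo '{"jsonrpc":"2.0","id":2,"method":"tools/list"}'
) | node dist/index.js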

7. Building an MCP Server in Python

Python is also well-supported:

mkdir weather-mcp-python
cd weather-mcp-python
uv init
uv add mcp

server.py
"""
Weather MCP Server (Python)
===========================
"""

import json
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent

# Create server
server = Server("weather-server")

# Weather data
WEATHER_DATA = {
"london": {"temp": 12, "condition": "rainy"},
"tokyo": {"temp": 22, "condition": "sunny"},
"paris": {"temp": 15, "condition": "cloudy"},
}


@server.list_tools()
async def list_tools() -> list[Tool]:
"""Return available tools."""
return [
Tool(
name="get_weather",
description="Get weather for a city",
inputSchema={
"type": "object",
"properties": {
"city": {"type": "string", "description": "City name"},
},
"required": ["city"],
},
),
Tool(
name="list_cities",
description="List available cities",
inputSchema={"type": "object", "properties": {}},
),
]


@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
"""Handle tool calls."""

if name == "get_weather":
city = arguments.get("city", "").lower()
data = WEATHER_DATA.get(city)

if data:
result = {"city": city, "temperature": data["temp"], "condition": data["condition"]}
else:
result = {"error": f"No data for {city}", "available": list(WEATHER_DATA.keys())}

return [TextContent(type="text", text=json.dumps(result))]

elif name == "list_cities":
return [TextContent(type="text", text=json.dumps({"cities": list(WEATHER_DATA.keys())}))]

else:
return [TextContent(type="text", text=json.dumps({"error": f"Unknown tool: {name}"}))]


async def main():
async with stdio_server() as (read_stream, write_stream):
await server.run(read_stream, write_stream, server.create_initialization_options())


if __name__ == "__main__":
import asyncio
asyncio.run(main())

Configure for Claude Desktop

{
  "mcpServers": {
    "weather-python": {
      "command": "uv",
      "args": ["--directory", "/path/to/weather-mcp-python", "run", "server.py"]
    }
  }
}

The --directory flag points uv at the project folder so the server runs inside the right virtual environment.

8. MCP Resources: Exposing Data

Tools let LLMs take actions. Resources let LLMs read data.

Adding resources to your server
import { McpServer, ResourceTemplate } from "@modelcontextprotocol/sdk/server/mcp.js";

const server = new McpServer({
  name: "docs-server",
  version: "1.0.0",
});

// Static resource
server.resource(
  "readme",
  "file://readme.md",
  async () => ({
    contents: [
      {
        uri: "file://readme.md",
        mimeType: "text/markdown",
        text: "# My Project\n\nThis is the readme content...",
      },
    ],
  })
);

// Dynamic resource (URI template; the SDK extracts {userId} for us)
server.resource(
  "user-profile",
  new ResourceTemplate("user://{userId}/profile", { list: undefined }),
  async (uri, { userId }) => {
    return {
      contents: [
        {
          uri: uri.href,
          mimeType: "application/json",
          text: JSON.stringify({
            id: userId,
            name: `User ${userId}`,
            // Fetch from database in real implementation
          }),
        },
      ],
    };
  }
);
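
On the client side, resources are listed and read rather than called. A minimal sketch with the Python client SDK (the same one used in the next section; the server path is a placeholder and error handling is omitted):

"""Sketch: discovering and reading MCP resources from a client."""

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def read_docs():
    # Assumes the docs-server above was compiled to dist/index.js
    params = StdioServerParameters(
        command="node",
        args=["/path/to/docs-server/dist/index.js"],
    )

    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover what the server exposes
            listed = await session.list_resources()
            print([str(r.uri) for r in listed.resources])

            # Read a specific resource by URI
            result = await session.read_resource("file://readme.md")
            print(result.contents[0].text)


asyncio.run(read_docs())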

9. MCP in Your Own Application

You can also be an MCP client in your own applications:

mcp_client_example.py
"""
Using MCP Servers in Your Own Application
=========================================
Connect to MCP servers and use their tools with any LLM.
"""

import asyncio
import json
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from openai import OpenAI

openai_client = OpenAI()


async def run_with_mcp():
"""Connect to MCP server and use its tools with OpenAI."""

# Define server connection
server_params = StdioServerParameters(
command="node",
args=["/path/to/weather-mcp-server/dist/index.js"],
)

async with stdio_client(server_params) as (read, write):
async with ClientSession(read, write) as session:
# Initialize the connection
await session.initialize()

# Discover available tools
tools_result = await session.list_tools()

# Convert MCP tools to OpenAI format
openai_tools = []
for tool in tools_result.tools:
openai_tools.append({
"type": "function",
"function": {
"name": tool.name,
"description": tool.description,
"parameters": tool.inputSchema,
}
})

print(f"Discovered {len(openai_tools)} tools from MCP server")

# Now use them with OpenAI
messages = [{"role": "user", "content": "What's the weather in Tokyo?"}]

response = openai_client.chat.completions.create(
model="gpt-4o-mini",
messages=messages,
tools=openai_tools,
)

assistant_message = response.choices[0].message

if assistant_message.tool_calls:
messages.append(assistant_message)

for tool_call in assistant_message.tool_calls:
# Route tool call to MCP server
result = await session.call_tool(
tool_call.function.name,
json.loads(tool_call.function.arguments)
)

# Extract text content
result_text = result.content[0].text if result.content else "{}"

messages.append({
"role": "tool",
"tool_call_id": tool_call.id,
"content": result_text,
})

# Get final response
final = openai_client.chat.completions.create(
model="gpt-4o-mini",
messages=messages,
)

print(f"Assistant: {final.choices[0].message.content}")


if __name__ == "__main__":
asyncio.run(run_with_mcp())

10. When to Use MCP vs Direct Tool Calling

| Scenario                         | Recommendation                 |
|----------------------------------|--------------------------------|
| Quick prototype, single app      | Direct tool calling (Lesson 8) |
| Tools used across multiple apps  | MCP                            |
| Team sharing tools               | MCP                            |
| Integration with Claude Desktop  | MCP                            |
| Custom UI with specific tools    | Direct tool calling            |
| Building a tool marketplace      | MCP                            |
| Maximum control over execution   | Direct tool calling            |
| Want to use pre-built tools      | MCP                            |

11. Common Pitfalls

| Symptom                               | Cause                          | Fix                            |
|---------------------------------------|--------------------------------|--------------------------------|
| "MCP server not found"                | Wrong path in config           | Use absolute paths             |
| Server starts but no tools            | list_tools not implemented     | Check server logs              |
| Tools work locally but not in Claude  | Config not reloaded            | Restart Claude Desktop         |
| "Model doesn't support tools"         | Using non-tool-capable model   | Switch to GPT-4o, Claude, etc. |
| Slow tool responses                   | Server doing heavy work        | Add async, caching             |
| JSON parse errors                     | Tool returning invalid format  | Return TextContent properly    |

12. The MCP Ecosystem

Official Servers

Anthropic maintains several official MCP servers:

  • Filesystem - File operations
  • GitHub - Repository management
  • Google Drive - Cloud file access
  • PostgreSQL - Database queries
  • Slack - Messaging
  • Memory - Persistent key-value store

Community Servers

The community is building servers for many more services and tools; the official MCP servers repository links to an up-to-date list of community servers.
13. Try It Yourself

Challenge 1: Build a Todo MCP Server

Create a server with tools to:

  • add_todo(title, due_date)
  • list_todos(filter)
  • complete_todo(id)
  • delete_todo(id)

Store todos in a JSON file for persistence.

Challenge 2: Database Explorer

Build an MCP server that:

  • Connects to a SQLite database
  • Exposes list_tables, describe_table, query tools
  • Includes resources for table schemas

Challenge 3: Multi-Server App

Create an application that connects to multiple MCP servers simultaneously and routes tool calls appropriately.

14. Key Takeaways

  1. Not all LLMs support tool calling. Check before building—use GPT-4o, Claude 3+, Llama 3.1+, or similar.

  2. Tool calling ≠ MCP. Tool calling is an LLM capability; MCP is a protocol for standardizing tools.

  3. MCP enables reusability. Write tools once, use them in any MCP-compatible application.

  4. Start with existing servers. The ecosystem has pre-built servers for common use cases.

  5. MCP uses JSON-RPC over stdio or HTTP. Local servers use stdio; remote servers use HTTP+SSE.

  6. Tools, Resources, and Prompts. MCP provides all three capabilities.

  7. You can be both client and server. Build servers for your tools, use client SDK to connect to others.

15. What's Next

Congratulations! You've completed Part 2: Building Your First AI Features. You now know how to:

  • Generate text and stream responses
  • Extract structured data from documents
  • Process images and multimodal inputs
  • Give LLMs the ability to call functions
  • Use MCP to standardize and share tools

In Lesson 10: LangChain Agents, we'll build on these foundations using LangChain's battle-tested abstractions for agent orchestration, tool management, and memory.

16. Additional Resources