Lesson 10: LangChain Agents

Topics Covered
  • Why LangChain: The industry standard for agent development.
  • Agent Architecture: AgentExecutor, tools, and the reasoning loop.
  • Built-in Tools: Search, calculators, SQL, and API integrations.
  • Custom Tools: Creating tools with the @tool decorator.
  • Memory Patterns: Conversation buffer, summary, and entity memory.
  • Putting It Together: Building a research assistant agent.

LangChain is the most widely adopted framework for building LLM applications. While you learned raw function calling in Lesson 1, LangChain provides battle-tested abstractions that handle the boilerplate: tool management, memory, prompt templates, and the agent reasoning loop. In this lesson, you'll build agents that can search the web, query databases, and maintain conversation context.

Synopsis

1. Why LangChain for Agents

  • The problem with raw function calling: boilerplate, no memory, manual orchestration
  • LangChain's value proposition: abstractions that scale
  • When to use LangChain vs raw APIs
  • The LangChain ecosystem: core, community, integrations

2. Agent Architecture

  • The AgentExecutor: the main orchestration loop (see the sketch after this list)
  • How agents decide which tool to use (ReAct pattern under the hood)
  • Agent types: OpenAI Functions, ReAct, Structured Chat
  • The input → reasoning → action → observation cycle
  • Configuring max iterations and early stopping
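To preview what this section builds, here is a minimal sketch of wiring a ReAct agent into an AgentExecutor. It assumes the langchain, langchain-openai, langchain-community, langchainhub, and duckduckgo-search packages are installed and that OPENAI_API_KEY is set; the model name is illustrative, and exact import paths shift slightly between LangChain versions.

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [DuckDuckGoSearchRun()]  # one tool is enough to see the reasoning loop

# A published ReAct prompt with the Thought -> Action -> Observation scaffolding
prompt = hub.pull("hwchase17/react")

agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,                   # print each reasoning step
    max_iterations=5,               # cap the input -> reasoning -> action -> observation cycle
    early_stopping_method="force",  # return a best-effort answer when the cap is hit
)

result = agent_executor.invoke({"input": "Who founded LangChain, and in what year?"})
print(result["output"])
```

With verbose=True, each iteration of the cycle described above is printed as a Thought, Action, and Observation line before the final answer.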

3. Tools: Giving Agents Capabilities

  • What is a LangChain Tool (name, description, function)
  • Built-in tools: DuckDuckGo search, Wikipedia, calculators
  • Loading tool collections: load_tools() (example after this list)
  • Tool input schemas and validation
  • Tool error handling and fallbacks
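The snippet below shows both ways of getting built-in tools: instantiating them directly and loading a named collection with load_tools(). It assumes the duckduckgo-search, wikipedia, and numexpr helper packages are installed; the query strings are illustrative.

```python
from langchain.agents import load_tools
from langchain_community.tools import DuckDuckGoSearchRun, WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Instantiate individual tools directly...
search = DuckDuckGoSearchRun()
wikipedia = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper(top_k_results=2))

# ...or load a named collection; "llm-math" wraps the model in a calculator chain
tools = load_tools(["llm-math", "wikipedia"], llm=llm)
tools.append(search)

# Every tool exposes the name and description the agent uses to choose between them
for t in tools:
    print(t.name, "->", t.description[:70])

# Tools can also be invoked directly, outside any agent
print(search.invoke("LangChain AgentExecutor"))
```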

4. Creating Custom Tools

  • The @tool decorator for simple functions (see the sketch after this list)
  • StructuredTool for complex inputs with Pydantic
  • Async tools for non-blocking operations
  • Tool descriptions that guide agent behavior
  • Best practices: clear names, explicit descriptions, constrained inputs
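As a preview, here are the two patterns side by side: the @tool decorator for a simple function, and StructuredTool.from_function with a Pydantic schema for multi-field input. The weather lookup is a hypothetical stand-in, and on older LangChain versions BaseModel and Field come from langchain_core.pydantic_v1 rather than pydantic.

```python
from langchain_core.tools import StructuredTool, tool
from pydantic import BaseModel, Field


@tool
def word_count(text: str) -> int:
    """Count the number of words in a piece of text."""
    # The docstring above becomes the tool description the agent reads
    return len(text.split())


class WeatherInput(BaseModel):
    city: str = Field(description="City name, e.g. 'Berlin'")
    unit: str = Field(default="celsius", description="'celsius' or 'fahrenheit'")


def get_weather(city: str, unit: str = "celsius") -> str:
    # Hypothetical implementation; a real tool would call a weather API here
    return f"It is 21 degrees {unit} in {city}."


weather_tool = StructuredTool.from_function(
    func=get_weather,
    name="get_weather",
    description="Look up the current weather for a city.",
    args_schema=WeatherInput,
)

print(word_count.invoke("LangChain agents use tools"))             # -> 4
print(weather_tool.invoke({"city": "Berlin", "unit": "celsius"}))
```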

5. Memory: Maintaining Context

  • Why agents need memory (multi-turn conversations)
  • ConversationBufferMemory: full history (see the example after this list)
  • ConversationSummaryMemory: compressed history for long conversations
  • ConversationBufferWindowMemory: sliding window approach
  • EntityMemory: tracking entities across conversations
  • Choosing the right memory type for your use case
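A short sketch of the three buffer-style memory classes; all of them live in langchain.memory and share the memory_key / return_messages configuration that agents expect.

```python
from langchain.memory import (
    ConversationBufferMemory,
    ConversationBufferWindowMemory,
    ConversationSummaryMemory,
)
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Full history: simplest option, but grows without bound
buffer = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# Sliding window: keep only the last k exchanges
window = ConversationBufferWindowMemory(k=5, memory_key="chat_history", return_messages=True)

# Compressed history: an LLM summarizes older turns as the conversation grows
summary = ConversationSummaryMemory(llm=llm, memory_key="chat_history", return_messages=True)

# Reading and writing work the same way for all of them
buffer.save_context({"input": "My name is Ada."}, {"output": "Nice to meet you, Ada!"})
print(buffer.load_memory_variables({}))  # -> {'chat_history': [HumanMessage(...), AIMessage(...)]}
```

Entity memory follows the same save_context / load_memory_variables interface while additionally maintaining a per-entity store of facts.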

6. Building a Research Assistant

  • Combining search, Wikipedia, and calculator tools (sketched after this list)
  • Adding conversation memory
  • System prompt design for agent behavior
  • Handling tool failures gracefully
  • Testing and iterating on agent behavior
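Condensed into one sketch, the research assistant might look like the following: search, Wikipedia, and calculator tools, a sliding-window memory, and a system prompt that steers tool use. The model name, prompt wording, and example questions are illustrative.

```python
from langchain.agents import AgentExecutor, create_openai_functions_agent, load_tools
from langchain.memory import ConversationBufferWindowMemory
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = load_tools(["wikipedia", "llm-math"], llm=llm) + [DuckDuckGoSearchRun()]

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a careful research assistant. Use the search and wikipedia tools "
     "for facts, the calculator for arithmetic, and say which tool you used."),
    MessagesPlaceholder("chat_history"),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),
])

memory = ConversationBufferWindowMemory(k=5, memory_key="chat_history", return_messages=True)

agent = create_openai_functions_agent(llm, tools, prompt)
assistant = AgentExecutor(
    agent=agent,
    tools=tools,
    memory=memory,
    verbose=True,
    max_iterations=8,
    handle_parsing_errors=True,  # recover from malformed tool calls instead of crashing
)

print(assistant.invoke({"input": "How old was Ada Lovelace when she died?"})["output"])
print(assistant.invoke({"input": "And in what year was that?"})["output"])  # relies on memory
```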

7. Common Pitfalls and Debugging

  • Agent stuck in loops: max iterations and better prompts
  • Wrong tool selection: improving tool descriptions
  • Memory overflow: choosing appropriate memory types
  • Cost management: tracking token usage
  • Verbose mode for debugging agent reasoning (see the snippet after this list)
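The same knobs in code: verbose tracing, iteration and wall-clock caps, parsing-error recovery, and token/cost tracking with the OpenAI callback (its import path varies a little across LangChain versions). The query is illustrative.

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_community.callbacks.manager import get_openai_callback
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [DuckDuckGoSearchRun()]
agent = create_react_agent(llm, tools, hub.pull("hwchase17/react"))

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,                # print every Thought / Action / Observation step
    max_iterations=5,            # hard cap so a confused agent cannot loop forever
    max_execution_time=60,       # wall-clock cap in seconds
    handle_parsing_errors=True,  # feed parsing errors back to the model instead of raising
)

# Track token usage and (for OpenAI models) estimated cost of one run
with get_openai_callback() as cb:
    result = agent_executor.invoke({"input": "What is LangChain?"})
    print(result["output"])
    print(f"tokens: {cb.total_tokens}, cost: ${cb.total_cost:.4f}")
```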

8. When LangChain Isn't Enough

  • Limitations of AgentExecutor: a single fixed loop with limited control flow and no custom branching
  • Preview: LangGraph for stateful workflows
  • Preview: Pydantic AI for type safety
  • Making the decision: LangChain vs alternatives

Additional Resources