# How-to guides

These guides answer "How do I...?" questions with practical solutions to specific problems.

Note: Many guides are still being written. Want to help? See our documentation contribution guide!
## LLMs and chat models

### Basic configuration

### Advanced features
- How to handle API rate limits and retries
- How to stream responses from LLMs
- How to use function calling with OpenAI
- How to implement custom LLM providers
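Until the rate-limit guide lands, here is a minimal sketch of the retry-with-exponential-backoff pattern it covers. It uses only the standard library and a hypothetical `callWithRetry` helper; it is independent of LangChainGo's own client API.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errRateLimited stands in for a provider's HTTP 429 error.
var errRateLimited = errors.New("rate limited")

// callWithRetry invokes fn, retrying with exponential backoff when it
// reports a rate-limit error. maxRetries bounds the number of attempts.
func callWithRetry(fn func() (string, error), maxRetries int) (string, error) {
	backoff := 100 * time.Millisecond
	var lastErr error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		out, err := fn()
		if err == nil {
			return out, nil
		}
		if !errors.Is(err, errRateLimited) {
			return "", err // not retryable
		}
		lastErr = err
		time.Sleep(backoff)
		backoff *= 2 // exponential backoff: 100ms, 200ms, 400ms, ...
	}
	return "", fmt.Errorf("giving up after %d retries: %w", maxRetries, lastErr)
}

func main() {
	calls := 0
	// Fails twice with a rate-limit error, then succeeds.
	flaky := func() (string, error) {
		calls++
		if calls < 3 {
			return "", errRateLimited
		}
		return "ok", nil
	}
	out, err := callWithRetry(flaky, 5)
	fmt.Println(out, err)
}
```

A production version would also honor the provider's `Retry-After` header and add jitter so concurrent clients don't retry in lockstep.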
## Prompts and templates

### Template creation
- How to create dynamic prompt templates
- How to implement few-shot prompting
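The core of few-shot prompting is interleaving labeled examples before the real query. A stdlib-only sketch with `text/template` (the `Example` type and sentiment task are illustrative, not part of any LangChainGo API):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Example is one demonstration pair for few-shot prompting.
type Example struct {
	Input, Output string
}

// The template interleaves labeled examples before the real query.
const fewShot = `Classify the sentiment of each sentence.
{{range .Examples}}Sentence: {{.Input}}
Sentiment: {{.Output}}
{{end}}Sentence: {{.Query}}
Sentiment:`

// renderFewShot fills the template with examples and a query.
func renderFewShot(examples []Example, query string) (string, error) {
	t, err := template.New("fewshot").Parse(fewShot)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	err = t.Execute(&buf, struct {
		Examples []Example
		Query    string
	}{examples, query})
	return buf.String(), err
}

func main() {
	prompt, _ := renderFewShot([]Example{
		{"I love this!", "positive"},
		{"Terrible service.", "negative"},
	}, "It was fine, I guess.")
	fmt.Println(prompt)
}
```

Ending the prompt with `Sentiment:` nudges the model to complete the pattern rather than explain it.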
### Output processing
- How to parse structured output from LLMs
- How to validate and sanitize LLM outputs
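Models often wrap JSON in prose or markdown fences, so parsing structured output usually means locating the JSON object first, then decoding into a typed struct. A hedged sketch (the `Answer` shape and `parseJSONOutput` helper are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// Answer is the structure we ask the model to emit as JSON.
type Answer struct {
	Title string   `json:"title"`
	Tags  []string `json:"tags"`
}

// parseJSONOutput extracts the first {...} span from raw model text
// (models often wrap JSON in prose or code fences) and decodes it.
func parseJSONOutput(raw string) (Answer, error) {
	var a Answer
	start := strings.Index(raw, "{")
	end := strings.LastIndex(raw, "}")
	if start == -1 || end < start {
		return a, fmt.Errorf("no JSON object found in output")
	}
	err := json.Unmarshal([]byte(raw[start:end+1]), &a)
	return a, err
}

func main() {
	raw := "Sure! Here is the result: {\"title\": \"Intro\", \"tags\": [\"go\", \"llm\"]} Hope that helps."
	a, err := parseJSONOutput(raw)
	fmt.Println(a.Title, a.Tags, err)
}
```

Decoding into a struct doubles as validation: unexpected types fail loudly instead of propagating bad data downstream.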
## Memory and conversation

### Memory management
- How to implement conversation memory
- How to persist conversation history
- How to implement context windowing
- How to handle long conversations
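Context windowing boils down to keeping the most recent turns that fit a size budget. A minimal sketch, using character counts as a stand-in for real token counting (the `Message` type is illustrative):

```go
package main

import "fmt"

// Message is one turn of a conversation.
type Message struct {
	Role, Content string
}

// windowMessages keeps the most recent messages whose total length fits
// within budget (a stand-in for a real token count), preserving order.
func windowMessages(history []Message, budget int) []Message {
	total := 0
	// Walk backwards from the newest message until the budget is spent.
	i := len(history)
	for i > 0 && total+len(history[i-1].Content) <= budget {
		total += len(history[i-1].Content)
		i--
	}
	return history[i:]
}

func main() {
	history := []Message{
		{"user", "first long question about setup"},
		{"assistant", "a long detailed answer"},
		{"user", "short follow-up"},
	}
	kept := windowMessages(history, 40)
	fmt.Println(len(kept))
}
```

For very long conversations, the dropped prefix is typically summarized and re-inserted as a single system message rather than discarded outright.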
## Agents and tools

### Tool development
- How to create custom tools for agents
- How to handle tool execution errors
### Agent optimization
- How to implement multi-step reasoning
- How to optimize agent performance
## Production and deployment

### Project structure
- How to structure LangChainGo projects
- How to handle secrets and configuration
### Monitoring and scaling
- How to implement logging and monitoring
- How to deploy with Docker
- How to implement health checks
- How to scale LangChainGo applications
## Testing and debugging

### Testing strategies
- How to write tests for LangChainGo components
- How to mock LLM responses for testing
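Mocking LLM responses starts with depending on a narrow interface instead of a concrete client, then substituting canned responses in tests. A sketch (the `Generator` interface and `MockLLM` type are illustrative, not LangChainGo's own):

```go
package main

import (
	"fmt"
	"strings"
)

// Generator is the narrow interface our code depends on; production code
// satisfies it with a real LLM client, tests with the mock below.
type Generator interface {
	Generate(prompt string) (string, error)
}

// MockLLM returns canned responses keyed by a substring of the prompt,
// so tests are deterministic and never call the network.
type MockLLM struct {
	Responses map[string]string
}

func (m MockLLM) Generate(prompt string) (string, error) {
	for key, resp := range m.Responses {
		if strings.Contains(prompt, key) {
			return resp, nil
		}
	}
	return "", fmt.Errorf("mock: no canned response for %q", prompt)
}

// summarize is the unit under test: it only sees the Generator interface.
func summarize(g Generator, text string) (string, error) {
	return g.Generate("Summarize: " + text)
}

func main() {
	mock := MockLLM{Responses: map[string]string{"Summarize": "a short summary"}}
	out, err := summarize(mock, "some long document")
	fmt.Println(out, err)
}
```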
### Debugging and performance
- How to debug chain execution
- How to benchmark performance
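A cheap way to debug chain execution is to wrap each stage so it logs its own duration, which makes slow stages easy to spot. A minimal sketch (the `step` type is an illustrative stand-in for a real chain stage):

```go
package main

import (
	"fmt"
	"time"
)

// step is one stage of a chain: it transforms an input string.
type step func(string) string

// timed wraps a step so each invocation logs its duration,
// without changing what the step returns.
func timed(name string, s step) step {
	return func(in string) string {
		start := time.Now()
		out := s(in)
		fmt.Printf("step %s took %s\n", name, time.Since(start))
		return out
	}
}

func main() {
	chain := []step{
		timed("retrieve", func(s string) string { return s + " +docs" }),
		timed("generate", func(s string) string { return s + " +answer" }),
	}
	out := "query"
	for _, s := range chain {
		out = s(out)
	}
	fmt.Println(out)
}
```

For proper benchmarking, Go's built-in `testing.B` benchmarks are the idiomatic next step.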
## Integration patterns

### Web applications
- How to integrate with web frameworks (Gin, Echo)
- How to implement background processing
### Data integration
- How to integrate with databases
- How to implement caching strategies
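The simplest caching strategy for LLM workloads is memoizing responses by prompt hash, so identical prompts skip the slow, paid model call. A stdlib sketch (the `responseCache` type is illustrative; a production cache would add TTLs and eviction):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sync"
)

// responseCache memoizes generations keyed by a hash of the prompt.
type responseCache struct {
	mu    sync.Mutex
	store map[string]string
}

func newResponseCache() *responseCache {
	return &responseCache{store: make(map[string]string)}
}

func key(prompt string) string {
	sum := sha256.Sum256([]byte(prompt))
	return hex.EncodeToString(sum[:])
}

// Generate returns a cached response when available, otherwise calls
// generate and stores the result. The bool reports a cache hit.
func (c *responseCache) Generate(prompt string, generate func(string) string) (string, bool) {
	k := key(prompt)
	c.mu.Lock()
	defer c.mu.Unlock()
	if v, ok := c.store[k]; ok {
		return v, true // cache hit: no model call
	}
	v := generate(prompt)
	c.store[k] = v
	return v, false
}

func main() {
	cache := newResponseCache()
	calls := 0
	gen := func(p string) string { calls++; return "answer to " + p }
	cache.Generate("q1", gen)
	out, hit := cache.Generate("q1", gen)
	fmt.Println(out, hit, calls)
}
```

Note that holding the mutex across `generate` serializes misses; a production cache would use per-key locking or singleflight-style deduplication instead.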