Stack traces: the unsung villains of debugging. Why wrestle with a wall of cryptic error messages when you could let LLM Catcher do the heavy lifting?
LLM Catcher is your debugging sidekick—a Python library that teams up with Large Language Models (Ollama or OpenAI) to decode those pesky exceptions. Instead of copy-pasting a stack trace into your LLM chat and then shuffling back to your IDE, LLM Catcher delivers instant, insightful fixes right in your logs.
Stop debugging the old-fashioned way. Catch the errors, not the headaches!
⚠️ Note: This project is under active development and may include breaking changes. See Version Notice for details.
- Exception diagnosis using LLMs (Ollama or OpenAI)
- Support for local LLMs through Ollama
- OpenAI integration for cloud-based models
- Multiple error handling approaches:
  - Function decorators for automatic diagnosis
  - Try/except blocks for manual control
  - Global exception handler for unhandled errors from imported modules
- Both synchronous and asynchronous APIs
- Flexible configuration through environment variables or config file
- Install LLM Catcher:
pip install llm-catcher
- Install Ollama (recommended default setup):
  - macOS or Windows: download and install from Ollama.com
  - Linux: curl -fsSL https://ollama.com/install.sh | sh
- Pull the default model:
ollama pull qwen2.5-coder
That's it! You're ready to use LLM Catcher with the default local setup.
from llm_catcher import LLMExceptionDiagnoser
# Initialize diagnoser (uses Ollama with qwen2.5-coder by default)
diagnoser = LLMExceptionDiagnoser()
try:
    result = 1 / 0  # This will raise a ZeroDivisionError
except Exception as e:
    diagnosis = diagnoser.diagnose(e)
    print(diagnosis)
@diagnoser.catch
def risky_function():
    """Errors will be automatically diagnosed."""
    return 1 / 0

@diagnoser.catch
async def async_risky_function():
    """Works with async functions too."""
    import nonexistent_module
# Synchronous
try:
    result = risky_operation()
except Exception as e:
    diagnosis = diagnoser.diagnose(e)
    print(diagnosis)

# Asynchronous
try:
    result = await async_operation()
except Exception as e:
    diagnosis = await diagnoser.async_diagnose(e)
    print(diagnosis)
By default, LLM Catcher catches all unhandled exceptions. You might want to disable this when:
- You have other error handling middleware or global handlers
- You want to handle exceptions only in specific try/except blocks
- You're using a framework with its own error handling
- You want more control over which exceptions get diagnosed
# Disable global handler for more specific exception handling
diagnoser = LLMExceptionDiagnoser(global_handler=False)
from fastapi import FastAPI
from llm_catcher import LLMExceptionDiagnoser
app = FastAPI()
diagnoser = LLMExceptionDiagnoser()
@app.get("/error")
async def error():
    try:
        1 / 0
    except Exception as e:
        diagnosis = await diagnoser.async_diagnose(e, formatted=False)
        return {"error": str(e), "diagnosis": diagnosis}
The diagnosis output can be formatted in two ways:
- formatted=True (default): Returns the diagnosis with clear formatting and boundaries
- formatted=False: Returns plain text, suitable for JSON responses
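For example, here is a minimal sketch of both modes. It assumes the synchronous diagnose() accepts the same formatted flag shown in the FastAPI example above:

from llm_catcher import LLMExceptionDiagnoser

diagnoser = LLMExceptionDiagnoser()

try:
    1 / 0
except Exception as e:
    # Default: human-readable output with clear formatting and boundaries
    print(diagnoser.diagnose(e))
    # Plain text, e.g. for embedding in a JSON response or a structured log
    plain = diagnoser.diagnose(e, formatted=False)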
For detailed diagnostic information:
DEBUG=true python your_script.py
LLM Catcher can be configured in multiple ways, with the following precedence (highest to lowest):
- Local config files:
  - ./llm_catcher_config.json
  - ./config.json
- User home config:
  - ~/.llm_catcher_config.json
- Environment variables
- Default values
Create a JSON config file in your project or home directory:
{
    "provider": "openai",
    "llm_model": "gpt-4",
    "temperature": 0.2,
    "openai_api_key": "sk-your-api-key"
}

The temperature setting is only used with the OpenAI provider.
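For the default local setup, an equivalent config is just the provider and model. This is a minimal sketch; no API key is needed for Ollama:

{
    "provider": "ollama",
    "llm_model": "qwen2.5-coder"
}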
Environment variables can be set directly or through a .env file:
# Required for OpenAI provider
LLM_CATCHER_OPENAI_API_KEY=sk-your-api-key
# Optional settings
LLM_CATCHER_PROVIDER=openai # or 'ollama'
LLM_CATCHER_LLM_MODEL=gpt-4 # or any supported model
LLM_CATCHER_TEMPERATURE=0.2 # Only used with OpenAI
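As a rough sketch (assuming the environment is read when the diagnoser is initialized), you can also set these values programmatically before constructing the diagnoser; the values below are examples only:

import os

# Example values; see the variable list above
os.environ["LLM_CATCHER_PROVIDER"] = "ollama"
os.environ["LLM_CATCHER_LLM_MODEL"] = "qwen2.5-coder"

from llm_catcher import LLMExceptionDiagnoser

diagnoser = LLMExceptionDiagnoser()  # picks up the environment settings above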
Ollama models:
- qwen2.5-coder (default): Optimized for code understanding and debugging
- Any other Ollama model can be used
OpenAI models:

GPT-4o Series (Recommended):
- gpt-4o: Advanced multimodal model with superior reasoning capabilities
- gpt-4o-mini: Cost-effective version with excellent performance

o1 Series (Recommended for Complex Code):
- o1: Specialized for coding, science, and mathematical reasoning
- o1-mini: Faster variant with similar capabilities

GPT-4 Series:
- gpt-4 (default for OpenAI): Strong general-purpose model
- gpt-4-turbo: Latest version with improved performance

GPT-3.5 Series (Economy Option):
- gpt-3.5-turbo: Good balance of performance and cost
- gpt-3.5-turbo-16k: Extended context version for longer stack traces

Note: When using OpenAI, if no model is specified, gpt-4 will be used as the default.
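For example, to switch to one of the recommended models, set the provider and model through the environment variables described above (or the equivalent config file keys):

LLM_CATCHER_PROVIDER=openai
LLM_CATCHER_LLM_MODEL=gpt-4o-mini
LLM_CATCHER_OPENAI_API_KEY=sk-your-api-key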
The examples/ directory contains several examples demonstrating different use cases:
- minimal_example.py: Basic usage with try/except and global handler
- decorator_example.py: Using the function decorator with both sync and async functions
- fastapi_example.py: Integration with FastAPI
Check out these examples to see LLM Catcher in action with different patterns and frameworks.
Set the DEBUG environment variable to see detailed diagnostic information:
DEBUG=true python your_script.py
- Ollama must be installed and running for local LLM support (default)
- OpenAI API key is required only when using OpenAI provider
- Settings are validated on initialization
- Stack traces are included in LLM prompts for better diagnosis
- Changed default provider from OpenAI to Ollama
- Added global exception handler (enabled by default)
- Added function decorator support
If you're upgrading from an earlier version, please review these changes. We recommend pinning to a specific version in your dependencies until we reach 1.0.0:
pip install llm-catcher==0.3.5
This project is licensed under the GNU General Public License v3.0. See the LICENSE file for more details.
Run the test suite:
./scripts/test.sh
Check code style:
./scripts/lint.sh
See CONTRIBUTING.md for more detailed development instructions.