This guide provides comprehensive information for developers working on the CodeGate project.
CodeGate is a configurable generative AI gateway designed to protect developers from potential AI-related security risks. Key features include:
- Secrets exfiltration prevention
- Secure coding recommendations
- Prevention of AI recommending deprecated/malicious libraries
- Modular system prompts configuration
- Multiple AI provider support with configurable endpoints
To work on CodeGate you will need:

- Python 3.12 or higher
- Poetry for dependency management
- Docker or Podman (for containerized deployment)
- Visual Studio Code (recommended IDE)
To set up your development environment:

- Clone the repository:

  ```bash
  git clone https://github.com/stacklok/codegate.git
  cd codegate
  ```

- Install Poetry following the official installation guide.

- Install project dependencies:

  ```bash
  poetry install --with dev
  ```
The CodeGate UI is developed in a separate repository. Clone it:

```bash
git clone https://github.com/stacklok/codegate-ui
cd codegate-ui
```

To install all dependencies for your local development environment, run:

```bash
npm install
```

Run the development server using:

```bash
npm run dev
```

Run the build command:

```bash
npm run build
```

Preview the production build:

```bash
npm run preview
```
The project is structured as follows:

```
codegate/
├── pyproject.toml            # Project configuration and dependencies
├── poetry.lock               # Lock file (committed to version control)
├── prompts/                  # System prompts configuration
│   └── default.yaml          # Default system prompts
├── src/
│   └── codegate/             # Source code
│       ├── __init__.py
│       ├── cli.py            # Command-line interface
│       ├── config.py         # Configuration management
│       ├── exceptions.py     # Shared exceptions
│       ├── codegate_logging.py  # Logging setup
│       ├── prompts.py        # Prompts management
│       ├── server.py         # Main server implementation
│       └── providers/        # External service providers
│           ├── anthropic/    # Anthropic provider implementation
│           ├── openai/       # OpenAI provider implementation
│           ├── vllm/         # vLLM provider implementation
│           └── base.py       # Base provider interface
├── tests/                    # Test files
└── docs/                     # Documentation
```
Poetry commands for managing your development environment:
- `poetry install`: install project dependencies
- `poetry add package-name`: add a new package dependency
- `poetry add --group dev package-name`: add a development dependency
- `poetry remove package-name`: remove a package
- `poetry update`: update dependencies to their latest versions
- `poetry show`: list all installed packages
- `poetry env info`: show information about the virtual environment
The project uses several tools to maintain code quality:
- Black for code formatting:

  ```bash
  poetry run black .
  ```

- Ruff for linting:

  ```bash
  poetry run ruff check .
  ```

- Bandit for security checks:

  ```bash
  poetry run bandit -r src/
  ```
To run the unit test suite with coverage:

```bash
poetry run pytest
```

Tests are located in the `tests/` directory and follow the same structure as the source code.
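For example, a minimal test might look like the following sketch (a hypothetical file; the `Config` constructor and attribute names are assumptions based on the defaults documented below, so check `src/codegate/config.py` for the real interface):

```python
# tests/test_config_example.py -- hypothetical test mirroring src/codegate/config.py.
from codegate.config import Config


def test_default_server_settings():
    # Assumes the documented defaults are applied on construction.
    config = Config()
    assert config.port == 8989
    assert config.host == "localhost"
```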
To run the integration tests, create a `.env` file in the repo root directory and add the following properties to it:

```
ENV_OPENAI_KEY=<YOUR_KEY>
ENV_VLLM_KEY=<YOUR_KEY>
ENV_ANTHROPIC_KEY=<YOUR_KEY>
```
Next, run `import_packages` to ensure integration test data is created:

```bash
poetry run python scripts/import_packages.py
```
Next, start the CodeGate server:

```bash
poetry run codegate serve --log-level DEBUG --log-format TEXT
```
Then the integration tests can be executed by running:

```bash
poetry run python tests/integration/integration_tests.py
```
You can include additional properties to specify test scope and other information. For instance, to execute the tests for the Copilot provider, run:

```bash
CODEGATE_PROVIDERS=copilot CA_CERT_FILE=./codegate_volume/certs/ca.crt poetry run python tests/integration/integration_tests.py
```
The project includes a Makefile for common development tasks:
- `make install`: install all dependencies
- `make format`: format code using black and ruff
- `make lint`: run linting checks
- `make test`: run tests with coverage
- `make security`: run security checks
- `make build`: build distribution packages
- `make all`: run all checks and build (recommended before committing)
To run CodeGate in a container, first build the image:

```bash
make image-build
```

Then start a container:

```bash
# Basic usage with local image
docker run -p 8989:8989 -p 9090:9090 codegate:latest

# With pre-built pulled image
docker pull ghcr.io/stacklok/codegate:latest
docker run --name codegate -d -p 8989:8989 -p 9090:9090 ghcr.io/stacklok/codegate:latest

# Mount a volume at /app/codegate_volume to persist data:
# Llama CPP models are stored under the /models subdirectory,
# and a SQLite DB with the messages and alerts under the /db subdirectory.
docker run --name codegate -d -v /path/to/volume:/app/codegate_volume -p 8989:8989 -p 9090:9090 ghcr.io/stacklok/codegate:latest
```
The container can be configured through the following environment variables:

- `CODEGATE_VLLM_URL`: URL for the vLLM inference engine (defaults to https://inference.codegate.ai)
- `CODEGATE_OPENAI_URL`: URL for the OpenAI inference engine (defaults to https://api.openai.com/v1)
- `CODEGATE_ANTHROPIC_URL`: URL for the Anthropic inference engine (defaults to https://api.anthropic.com/v1)
- `CODEGATE_OLLAMA_URL`: URL for the Ollama inference engine (defaults to http://localhost:11434/api)
- `CODEGATE_APP_LOG_LEVEL`: log level for the CodeGate server (defaults to WARNING; one of ERROR/WARNING/INFO/DEBUG)
- `CODEGATE_LOG_FORMAT`: log format for the CodeGate server (defaults to TEXT; one of JSON/TEXT)

For example:

```bash
docker run -p 8989:8989 -p 9090:9090 -e CODEGATE_OLLAMA_URL=http://1.2.3.4:11434/api ghcr.io/stacklok/codegate:latest
```
CodeGate uses a hierarchical configuration system with the following priority (highest to lowest):
- CLI arguments
- Environment variables
- Config file (YAML)
- Default values (including default prompts)
The main configuration options are:

- Port: server port (default: `8989`)
- Host: server host (default: `"localhost"`)
- Log level: logging verbosity level (`ERROR`|`WARNING`|`INFO`|`DEBUG`)
- Log format: log format (`JSON`|`TEXT`)
- Prompts: system prompts configuration
- Provider URLs: AI provider endpoint configuration
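As an illustration, the resolved configuration can be read back programmatically. This is a minimal sketch: `Config.load` appears later in this guide with a `prompts_path` argument, but the `config_path` keyword and the attribute names below are assumptions:

```python
# Minimal sketch of reading resolved configuration (names assumed).
from codegate.config import Config

config = Config.load(config_path="config.yaml")

# CLI arguments and environment variables, if set, override these
# values per the priority order above.
print(config.port, config.host, config.log_level, config.log_format)
```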
See Configuration system for detailed information.
CodeGate supports multiple AI providers through a modular provider system.
- vLLM provider
  - Default URL: `http://localhost:8000`
  - Supports OpenAI-compatible APIs
  - Automatically adds the `/v1` path to the base URL
  - Model names are prefixed with `hosted_vllm/`
- OpenAI provider
  - Default URL: `https://api.openai.com/v1`
  - Standard OpenAI API implementation
- Anthropic provider
  - Default URL: `https://api.anthropic.com/v1`
  - Anthropic Claude API implementation
- Ollama provider
  - Default URL: `http://localhost:11434`
  - Endpoints:
    - Native Ollama API: `/ollama/api/chat`
    - OpenAI-compatible: `/ollama/chat/completions`
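For example, with CodeGate serving on its default port, the OpenAI-compatible Ollama route can be exercised with the standard `openai` Python client. This is a sketch: the model name is a placeholder for whatever your local Ollama instance serves, and the dummy API key is only there because the client requires a value:

```python
# Minimal sketch: point an OpenAI-compatible client at CodeGate's Ollama route.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8989/ollama",  # client appends /chat/completions
    api_key="unused",  # Ollama needs no key; the client requires a value
)

response = client.chat.completions.create(
    model="llama2",  # placeholder: use a model your Ollama instance has pulled
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```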
Provider URLs can be configured through:
- Config file (`config.yaml`):

  ```yaml
  provider_urls:
    vllm: "https://vllm.example.com"
    openai: "https://api.openai.com/v1"
    anthropic: "https://api.anthropic.com/v1"
    ollama: "http://localhost:11434"  # /api path added automatically
  ```

- Environment variables:

  ```bash
  export CODEGATE_PROVIDER_VLLM_URL=https://vllm.example.com
  export CODEGATE_PROVIDER_OPENAI_URL=https://api.openai.com/v1
  export CODEGATE_PROVIDER_ANTHROPIC_URL=https://api.anthropic.com/v1
  export CODEGATE_PROVIDER_OLLAMA_URL=http://localhost:11434
  ```

- CLI flags:

  ```bash
  codegate serve --vllm-url https://vllm.example.com --ollama-url http://localhost:11434
  ```
To add a new provider:
- Create a new directory in `src/codegate/providers/`
- Implement the required components:
  - `provider.py`: main provider class extending `BaseProvider`
  - `adapter.py`: input/output normalizers
  - `__init__.py`: exports the provider class
Example structure:
```python
from codegate.providers.base import BaseProvider


class NewProvider(BaseProvider):
    def __init__(self, ...):
        super().__init__(
            InputNormalizer(),
            OutputNormalizer(),
            completion_handler,
            pipeline_processor,
            fim_pipeline_processor,
        )

    @property
    def provider_route_name(self) -> str:
        return "provider_name"

    def _setup_routes(self):
        # Implement route setup
        pass
```
Default prompts are stored in `prompts/default.yaml`. These prompts are loaded automatically when no other prompts are specified.
To create and use custom prompts:

- Create a new YAML file following this format:

  ```yaml
  prompt_name: "Prompt text content"
  another_prompt: "More prompt text"
  ```

- Use the prompts file:

  ```
  # Via CLI
  codegate serve --prompts my-prompts.yaml

  # Via config.yaml
  prompts: "path/to/prompts.yaml"

  # Via environment
  export CODEGATE_PROMPTS_FILE=path/to/prompts.yaml
  ```

- View loaded prompts:

  ```bash
  # Show default prompts
  codegate show-prompts

  # Show custom prompts
  codegate show-prompts --prompts my-prompts.yaml
  ```

- Write tests for prompt functionality:

  ```python
  def test_custom_prompts():
      config = Config.load(prompts_path="path/to/test/prompts.yaml")
      assert config.prompts.my_prompt == "Expected prompt text"
  ```
The main command-line interface is implemented in `cli.py`. Basic usage:

```bash
# Start server with default settings
codegate serve

# Start with custom configuration
codegate serve --port 8989 --host localhost --log-level DEBUG

# Start with custom prompts
codegate serve --prompts my-prompts.yaml

# Start with custom provider URL
codegate serve --vllm-url https://vllm.example.com
```
See CLI commands and flags for detailed command information.