feat: setup nginx + haproxy to 4 devnet validators
oyyblin committed Feb 19, 2025
1 parent 3bcd6bb commit 6c5e744
Showing 8 changed files with 290 additions and 1 deletion.
3 changes: 3 additions & 0 deletions .dockerignore
@@ -0,0 +1,3 @@
.git
*.log
*.md
61 changes: 61 additions & 0 deletions .github/workflows/ci.yml
@@ -0,0 +1,61 @@
name: ci

on:
push:
branches:
- "main"
tags:
- "v*"
pull_request:
branches:
- "main"

env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}

jobs:
build-and-push-image:
runs-on: ubuntu-latest
# Sets the permissions granted to the `GITHUB_TOKEN` for the actions in this job.
permissions:
contents: read
packages: write
attestations: write
id-token: write
steps:
- name: Checkout repository
uses: actions/checkout@v4

- name: Log in to the Container registry
uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}

# This step uses [docker/metadata-action](https://github.com/docker/metadata-action#about) to extract tags and labels that will be applied to the specified image. The `id` "meta" allows the output of this step to be referenced in a subsequent step. The `images` value provides the base name for the tags and labels.
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@9ec57ed1fcdbf14dcef7dfbe97b2010124a938b7
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

# This step uses the `docker/build-push-action` action to build the image, based on your repository's `Dockerfile`. If the build succeeds, it pushes the image to GitHub Packages.
# It uses the `context` parameter to define the build's context as the set of files located in the specified path. For more information, see "[Usage](https://github.com/docker/build-push-action#usage)" in the README of the `docker/build-push-action` repository.
# It uses the `tags` and `labels` parameters to tag and label the image with the output from the "meta" step.
- name: Build and push Docker image
id: push
uses: docker/build-push-action@f2a1d5e99d037542a71f64918e516c093c6f3fc4
with:
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}

# This step generates an artifact attestation for the image, which is an unforgeable statement about where and how it was built. It increases supply chain security for people who consume the image. For more information, see "[Using artifact attestations to establish provenance for builds](/actions/security-guides/using-artifact-attestations-to-establish-provenance-for-builds)."
- name: Generate artifact attestation
uses: actions/attest-build-provenance@v1
with:
subject-name: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
subject-digest: ${{ steps.push.outputs.digest }}
push-to-registry: true
4 changes: 4 additions & 0 deletions .gitignore
@@ -0,0 +1,4 @@
*.log
*.swp
.DS_Store
.env
24 changes: 24 additions & 0 deletions Dockerfile
@@ -0,0 +1,24 @@
# Use Debian slim as the base image for smaller size
FROM debian:bookworm-slim

# Install dependencies (HAProxy + Nginx) and clean up in a single layer
# to minimize image size
RUN apt-get update && \
apt-get install -y --no-install-recommends \
nginx \
haproxy && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*

# Copy configuration files into the container
COPY haproxy.cfg /etc/haproxy/haproxy.cfg
COPY nginx.conf /etc/nginx/nginx.conf
COPY entrypoint.sh /entrypoint.sh

# Make entrypoint script executable
RUN chmod +x /entrypoint.sh

EXPOSE 8080

# Set the entrypoint script to handle service startup
ENTRYPOINT ["/entrypoint.sh"]
91 changes: 90 additions & 1 deletion README.md
@@ -1,2 +1,91 @@
# gravity-devnet-rpc-lb
A lightweight RPC Load Balancer for the Gravity Devnet. This project leverages HAProxy & Nginx to distribute RPC traffic across multiple blockchain nodes with sticky session support to ensure consistent client connections. Designed to run efficiently in Kubernetes, this solution helps optimize node performance and network reliability.

## Features

- Load balancing of RPC requests across multiple blockchain nodes
- Sticky sessions for consistent client connections
- Health checking and automatic failover
- Kubernetes-ready configuration
- WebSocket support
- Efficient request routing and connection management

## Architecture

The solution uses a two-tier architecture:

1. **Nginx (Front Tier)**

- Handles incoming HTTP/HTTPS traffic
- Provides initial request routing
- Manages SSL/TLS termination
- Forwards requests to HAProxy

2. **HAProxy (Back Tier)**

- Manages load balancing across RPC nodes
- Implements sticky sessions
- Performs health checks
- Handles WebSocket connections
- Provides failover capabilities

## Prerequisites

- Docker
- Kubernetes cluster (for production deployment)
- Access to Gravity Devnet RPC nodes

## Quick Start

1. **Clone the repository:**

```bash
git clone https://github.com/yourusername/gravity-devnet-rpc-lb.git
cd gravity-devnet-rpc-lb
```

2. **Build the Docker image:**

```bash
docker build -t gravity-devnet-rpc-lb .
```

3. **Run the container:**

```bash
docker run -d -p 8080:8080 gravity-devnet-rpc-lb
```
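
Once the container is up, you can verify that traffic flows through both tiers. The commands below are a sketch: the health endpoints come from the shipped `nginx.conf`, while the sample RPC call assumes the devnet validators expose a standard Ethereum-style JSON-RPC interface on port 8545.

```bash
# Liveness endpoint, answered directly by Nginx - should print "OK"
curl http://localhost:8080/health/alive

# Readiness endpoint, proxied through Nginx to HAProxy's health frontend
curl http://localhost:8080/health/ready

# Sample RPC request routed through the load balancer
# (assumes an Ethereum-compatible JSON-RPC API on the backend nodes)
curl -X POST http://localhost:8080/ \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```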

## Configuration

### HAProxy Configuration

The HAProxy configuration (`haproxy.cfg`) includes:

- TCP mode for WebSocket support
- Round-robin load balancing
- Sticky sessions based on client IP
- Health checks for RPC nodes
- Configurable timeouts and connection limits
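
For reference, the sticky-session and health-check directives from the `haproxy.cfg` shipped in this commit look like this (excerpted; `stick on src` pins each client IP to the node it first reached, and `check inter 3s fall 2 rise 2` marks a node down after two failed probes and back up after two successful ones):

```
backend rpc_backend
    mode tcp
    balance roundrobin
    option tcp-check
    option redispatch
    # Pin each client IP to the backend node it first reached
    stick-table type ip size 200k expire 30m
    stick on src
    # One of the four devnet validators; the remaining server lines are identical in form
    server node1 35.89.200.123:8545 check inter 3s fall 2 rise 2
```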

### Nginx Configuration

The Nginx configuration (`nginx.conf`) provides:

- Reverse proxy setup
- Header forwarding
- Health check endpoint
- Connection pooling

## Health Checks

The service includes two health check endpoints:

- `/health/ready` - Readiness probe endpoint; proxied through Nginx to HAProxy's internal health check (port 8546), so it reflects HAProxy's status
- `/health/alive` - Liveness probe endpoint; always returns `200 OK` while Nginx itself is running
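
These paths are designed to be wired into Kubernetes probes. Below is a minimal sketch of the corresponding probe stanza; the image reference and timing values are illustrative assumptions, not part of this commit:

```yaml
# Illustrative pod-spec fragment; image name and probe timings are assumptions to tune per cluster.
containers:
  - name: rpc-lb
    image: ghcr.io/your-org/gravity-devnet-rpc-lb:latest  # hypothetical image reference
    ports:
      - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /health/alive
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /health/ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
```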

## Security Considerations

- Health check endpoints are only accessible from localhost
- Proper header forwarding for client identification
- Connection limits and timeouts are configured for protection
- No direct external access to HAProxy management interface
14 changes: 14 additions & 0 deletions entrypoint.sh
@@ -0,0 +1,14 @@
#!/bin/bash
set -e

# Start HAProxy in background
echo "Starting HAProxy..."
/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg &

# Start Nginx as a shell background job, with nginx's own daemonization disabled
echo "Starting Nginx..."
/usr/sbin/nginx -g "daemon off;" &

# Wait for all background processes to complete
# This keeps the container running
wait
51 changes: 51 additions & 0 deletions haproxy.cfg
@@ -0,0 +1,51 @@
# Global settings for HAProxy instance
global
# Send logs to stdout for container logging
log stdout format raw local0
# Maximum concurrent connections
maxconn 4096

# Default settings applied to all sections unless overridden
defaults
log global
# Connection timeouts for safety
timeout connect 5s
timeout client 50s
timeout server 50s

# Main RPC frontend - handles incoming blockchain RPC requests
frontend rpc_frontend
# Listen on all interfaces for external access
bind 0.0.0.0:8545
# TCP mode for WebSocket support
mode tcp
default_backend rpc_backend

# Backend configuration for RPC nodes
backend rpc_backend
mode tcp
# Round robin load balancing between nodes
balance roundrobin
# Enable TCP health checks
option tcp-check
# Allow server retry on failure
option redispatch
# Sticky sessions configuration
stick-table type ip size 200k expire 30m
stick on src
# RPC nodes with health checks
server node1 35.89.200.123:8545 check inter 3s fall 2 rise 2
server node2 18.236.243.137:8545 check inter 3s fall 2 rise 2
server node3 54.212.168.35:8545 check inter 3s fall 2 rise 2
server node4 54.202.224.179:8545 check inter 3s fall 2 rise 2

# Health check endpoint for monitoring HAProxy status
# Only accessible from localhost for security
frontend health_check
# Bind only to localhost (127.0.0.1) to prevent external access
bind 127.0.0.1:8546
# Use HTTP mode since health checks are HTTP requests
mode http
# Create a simple health endpoint at /health
# Returns 200 OK if HAProxy is running
monitor-uri /health
43 changes: 43 additions & 0 deletions nginx.conf
@@ -0,0 +1,43 @@
events {
# Maximum number of simultaneous connections
worker_connections 1024;
}

http {
# Upstream definition for HAProxy RPC load balancer
upstream rpc_nodes {
server 127.0.0.1:8545;
}

server {
listen 8080;

# Main location for RPC traffic
location / {
# Forward requests to HAProxy
proxy_pass http://rpc_nodes;
# Pass important headers for proper client identification
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

# Health check endpoint for Kubernetes/monitoring
# Liveness probe endpoint - always returns OK
location /health/alive {
access_log off;
return 200 "OK";
add_header Content-Type text/plain;
}

# Readiness probe endpoint - proxies to HAProxy's health check on port 8546
location /health/ready {
access_log off;
# Short timeouts for quick health check responses
proxy_connect_timeout 2s;
proxy_read_timeout 2s;
proxy_pass http://127.0.0.1:8546/health;
add_header Content-Type text/plain;
}
}
}
