Deployment Guide

Deploy SkysPy anywhere - from local development to production Raspberry Pi installations. This guide covers every deployment scenario with step-by-step instructions.


Quick Navigation

| Section | Description |
| --- | --- |
| Quick Start | Get running in 5 minutes |
| Docker Deployment | Development & testing environments |
| Production | Full production setup |
| Raspberry Pi | Optimized RPi5 deployment |
| Scaling | High-traffic configurations |
| Security | Production security checklist |

Architecture Overview

SkysPy is a real-time ADS-B aircraft tracking system built on modern, battle-tested technologies.

Technology Stack

| Component | Technology | Purpose |
| --- | --- | --- |
| Backend | Django 5.0+ | REST API & Socket.IO server |
| ASGI Server | Daphne | Socket.IO support |
| Task Queue | Celery + gevent | Background processing |
| Database | PostgreSQL 16 | Primary data store |
| Cache/Broker | Redis 7 | Message broker & cache |
| Frontend | React + Vite | Dashboard UI |

System Architecture Diagram

flowchart TB
    subgraph External["External Data"]
        UF[Ultrafeeder<br/>ADS-B Data]
        D978[Dump978<br/>UAT Data]
        ACARS[ACARS/VDL2<br/>Messages]
    end

    subgraph Frontend["Frontend Layer"]
        NGINX[Nginx<br/>Reverse Proxy]
        REACT[React/Vite<br/>Dashboard]
    end

    subgraph Application["Application Layer"]
        DAPHNE[Daphne<br/>ASGI Server]
        CELERY[Celery Worker<br/>gevent pool]
        BEAT[Celery Beat<br/>Scheduler]
    end

    subgraph Data["Data Layer"]
        REDIS[(Redis<br/>Cache & Broker)]
        POSTGRES[(PostgreSQL<br/>Database)]
    end

    UF --> DAPHNE
    D978 --> DAPHNE
    ACARS --> DAPHNE

    REACT <--> NGINX
    NGINX <--> DAPHNE

    DAPHNE <--> REDIS
    DAPHNE <--> POSTGRES

    CELERY <--> REDIS
    CELERY <--> POSTGRES

    BEAT --> REDIS

    style UF fill:#e1f5fe
    style D978 fill:#e1f5fe
    style ACARS fill:#e1f5fe
    style NGINX fill:#fff3e0
    style REACT fill:#fff3e0
    style DAPHNE fill:#e8f5e9
    style CELERY fill:#e8f5e9
    style BEAT fill:#e8f5e9
    style REDIS fill:#ffebee
    style POSTGRES fill:#ffebee

Service Ports

| Service | Port | Protocol | Description |
| --- | --- | --- | --- |
| api | 8000 | HTTP/Socket.IO | Django API server (Daphne ASGI) |
| celery-worker | - | - | Background task processing |
| celery-beat | - | - | Periodic task scheduler |
| redis | 6379 | TCP | Message broker and cache |
| postgres | 5432 | TCP | PostgreSQL database |
| acars-listener | 5555/5556 | UDP | ACARS/VDL2 listener (optional) |

Prerequisites

System Requirements

| Environment | CPU | Memory | Storage |
| --- | --- | --- | --- |
| Development | 2+ cores | 4GB+ | 10GB |
| Production | 4+ cores | 8GB+ | 50GB+ |
| Raspberry Pi 5 | 4 cores | 8GB | 32GB+ SD/NVMe |

Software Requirements

Required Software

  • Docker 24.0+ and Docker Compose v2
  • Git

Optional Software

  • Nginx (reverse proxy)
  • Certbot (SSL certificates)

Verify Docker Installation

Info: Run these commands to ensure Docker is properly installed.

# 1. Check Docker version
docker --version
# Expected: Docker version 24.0.0 or higher

# 2. Check Docker Compose version
docker compose version
# Expected: Docker Compose version v2.0.0 or higher

# 3. Verify Docker is running
docker info

Quick Start (Development)

Time to deploy: ~5 minutes

Step-by-Step Guide

Step 1: Clone the Repository - Get the latest SkysPy source code.

git clone https://github.com/your-org/skyspy.git
cd skyspy

Step 2: Create Environment File - Copy the example and configure your settings.

cp .env.example .env

Edit .env with the minimum required settings:

# Security (REQUIRED)
DJANGO_SECRET_KEY=your-super-secret-key-change-this

# Your Location (REQUIRED)
FEEDER_LAT=47.9377
FEEDER_LON=-121.9687

# Data Source (REQUIRED)
ULTRAFEEDER_HOST=ultrafeeder
ULTRAFEEDER_PORT=80

Step 3: Start Services - Launch all containers with Docker Compose.

# Start all services
docker compose up -d

# View logs
docker compose logs -f api

Step 4: Access the Dashboard - Open your browser and navigate to your SkysPy instance.

| Endpoint | URL | Description |
| --- | --- | --- |
| Dashboard | http://localhost:8000 | Main web interface |
| API Docs | http://localhost:8000/api/docs/ | Interactive API documentation |
| Health Check | http://localhost:8000/health/ | System health status |

Docker Compose Deployment

Development Environment

The docker-compose.test.yaml provides a complete development environment with hot-reload and mock data sources.

flowchart LR
    subgraph Dev["Development Stack"]
        API[API<br/>Hot Reload]
        WORKER[Celery Worker]
        BEAT[Celery Beat]
        VITE[Vite Dev<br/>Port 3000]
    end

    subgraph Mock["Mock Services"]
        UF[Ultrafeeder Mock]
        D978[Dump978 Mock]
    end

    subgraph Data["Test Data"]
        PG[(PostgreSQL<br/>tmpfs)]
        PGBOUNCER[PgBouncer]
        REDIS[(Redis)]
    end

    Mock --> API
    API --> PGBOUNCER
    PGBOUNCER --> PG
    API --> REDIS
    WORKER --> REDIS
    WORKER --> PGBOUNCER

    style API fill:#c8e6c9
    style VITE fill:#c8e6c9
    style UF fill:#ffe0b2
    style D978 fill:#ffe0b2

Info: Launch the full development stack with hot-reload enabled.

# Start development environment
docker compose -f docker-compose.test.yaml --profile dev up -d

# Services started:
# - api (Django with hot reload)
# - celery-worker
# - celery-beat
# - postgres (with tmpfs for speed)
# - pgbouncer
# - redis
# - ultrafeeder (mock)
# - dump978 (mock)
# - adsb-dashboard (Vite dev server on port 3000)

Running Tests

# Run the test suite
docker compose -f docker-compose.test.yaml --profile test run --rm api-test

# Test results are saved to ./test-results/

Service Profiles

| Profile | Description | Use Case |
| --- | --- | --- |
| dev | Full development environment with hot reload | Local development |
| test | Test runner with mock services | CI/CD pipelines |
| acars | Include ACARS listener service | ACARS message capture |

# Start with ACARS listener
docker compose --profile acars up -d

# Start development with ACARS
docker compose -f docker-compose.test.yaml --profile dev --profile acars up -d

Production Deployment

Deployment Roadmap

flowchart LR
    A[1. Prepare<br/>Environment] --> B[2. Configure<br/>Variables]
    B --> C[3. Generate<br/>Secrets]
    C --> D[4. Start<br/>Services]
    D --> E[5. Initial<br/>Setup]
    E --> F[Verify<br/>Health]

    style A fill:#e3f2fd
    style B fill:#e3f2fd
    style C fill:#fff3e0
    style D fill:#e8f5e9
    style E fill:#e8f5e9
    style F fill:#c8e6c9

1. Prepare the Environment

# Create application directory
sudo mkdir -p /opt/skyspy
cd /opt/skyspy

# Clone repository
sudo git clone https://github.com/your-org/skyspy.git .

# Create secure environment file
sudo cp .env.example .env
sudo chmod 600 .env

2. Configure Environment Variables

Warning: Never commit your .env file to version control. Generate unique secrets for each deployment.

Edit /opt/skyspy/.env with production settings:

# =============================================================================
# Django Settings (REQUIRED)
# =============================================================================
DEBUG=False
DJANGO_SECRET_KEY=generate-a-secure-64-character-key-here
ALLOWED_HOSTS=skyspy.example.com,192.168.1.100

# Superuser (auto-created on startup)
DJANGO_SUPERUSER_USERNAME=admin
[email protected]
DJANGO_SUPERUSER_PASSWORD=your-secure-password

# =============================================================================
# Database (REQUIRED)
# =============================================================================
POSTGRES_USER=skyspy_prod
POSTGRES_PASSWORD=generate-a-secure-database-password
POSTGRES_DB=skyspy_prod

# =============================================================================
# Authentication
# =============================================================================
# Options: public, private, hybrid
AUTH_MODE=hybrid

# JWT Configuration
JWT_SECRET_KEY=generate-a-separate-jwt-secret-key
JWT_ACCESS_TOKEN_LIFETIME_MINUTES=60
JWT_REFRESH_TOKEN_LIFETIME_DAYS=7

# =============================================================================
# ADS-B Data Sources (REQUIRED)
# =============================================================================
ULTRAFEEDER_HOST=192.168.1.50
ULTRAFEEDER_PORT=80
DUMP978_HOST=192.168.1.50
DUMP978_PORT=8978

# Feeder Location (REQUIRED)
FEEDER_LAT=47.9377
FEEDER_LON=-121.9687

# =============================================================================
# Polling Configuration
# =============================================================================
POLLING_INTERVAL=2
DB_STORE_INTERVAL=5
SESSION_TIMEOUT_MINUTES=30

# =============================================================================
# Notifications (Optional)
# =============================================================================
# Apprise URLs for notifications
# Format: service1://...,service2://...
APPRISE_URLS=telegram://123456:ABC-DEF1234/987654321
NOTIFICATION_COOLDOWN=300

# =============================================================================
# Safety Monitoring
# =============================================================================
SAFETY_MONITORING_ENABLED=True
SAFETY_VS_CHANGE_THRESHOLD=2000
SAFETY_VS_EXTREME_THRESHOLD=6000
SAFETY_PROXIMITY_NM=0.5

# =============================================================================
# Photo Cache
# =============================================================================
PHOTO_CACHE_ENABLED=True
PHOTO_AUTO_DOWNLOAD=True

# =============================================================================
# Radio/Audio
# =============================================================================
RADIO_ENABLED=True
RADIO_MAX_FILE_SIZE_MB=50
RADIO_RETENTION_DAYS=7

# =============================================================================
# S3 Storage (Optional)
# =============================================================================
S3_ENABLED=False
S3_BUCKET=skyspy-photos
S3_REGION=us-east-1
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
# For MinIO: http://minio:9000
S3_ENDPOINT_URL=

# =============================================================================
# Monitoring (Optional)
# =============================================================================
SENTRY_DSN=https://[email protected]/project-id
SENTRY_ENVIRONMENT=production
PROMETHEUS_ENABLED=True

# =============================================================================
# CORS (adjust for your domain)
# =============================================================================
CORS_ALLOWED_ORIGINS=https://skyspy.example.com

3. Generate Secure Keys

Danger: Always generate unique, cryptographically secure keys for production deployments.

# Generate Django secret key
python3 -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"

# Generate database password
openssl rand -base64 32

# Generate JWT secret key
openssl rand -base64 48

4. Start Production Services

# Build and start services
docker compose up -d --build

# Verify services are running
docker compose ps

# Check service health
docker compose exec api curl -f http://localhost:8000/health/

5. Initial Setup Commands

# Create superuser (if not auto-created)
docker compose exec api python manage.py createsuperuser

# Populate aviation data
docker compose exec api python manage.py populate_data

# Sync Celery tasks to database
docker compose exec api python manage.py sync_celery_tasks

Environment Variables Reference

Core Django Settings

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| DEBUG | Optional | False | Enable debug mode (never in production) |
| DJANGO_SECRET_KEY | Required | - | Secret key for cryptographic signing |
| ALLOWED_HOSTS | Optional | * | Comma-separated list of allowed hostnames |
| DJANGO_LOG_LEVEL | Optional | INFO | Logging level |

Authentication

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| AUTH_MODE | Optional | hybrid | public, private, or hybrid |
| JWT_SECRET_KEY | Recommended | DJANGO_SECRET_KEY | Separate JWT signing key |
| JWT_ACCESS_TOKEN_LIFETIME_MINUTES | Optional | 60 | Access token validity |
| JWT_REFRESH_TOKEN_LIFETIME_DAYS | Optional | 7 | Refresh token validity |
| LOCAL_AUTH_ENABLED | Optional | True | Enable local username/password auth |
| API_KEY_ENABLED | Optional | True | Enable API key authentication |

OIDC Configuration (SSO)

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| OIDC_ENABLED | Optional | False | Enable OIDC authentication |
| OIDC_PROVIDER_URL | Required if OIDC | - | OIDC provider base URL |
| OIDC_PROVIDER_NAME | Optional | SSO | Display name on login button |
| OIDC_CLIENT_ID | Required if OIDC | - | OIDC client ID |
| OIDC_CLIENT_SECRET | Required if OIDC | - | OIDC client secret |
| OIDC_SCOPES | Optional | openid profile email groups | Requested scopes |
| OIDC_DEFAULT_ROLE | Optional | viewer | Default role for new users |

Database

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| POSTGRES_USER | Optional | adsb | PostgreSQL username |
| POSTGRES_PASSWORD | Required | adsb | PostgreSQL password |
| POSTGRES_DB | Optional | adsb | Database name |
| DATABASE_URL | Optional | - | Full database URL (auto-constructed) |
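
When DATABASE_URL is left unset, it is assembled from the individual POSTGRES_* values. A minimal sketch of that construction, assuming the default compose service name `postgres` and port 5432 (the function name is illustrative, not SkysPy's actual code):

```python
import os

def build_database_url(host: str = "postgres", port: int = 5432) -> str:
    """Return DATABASE_URL if set, otherwise assemble a PostgreSQL URL
    from the POSTGRES_* environment variables and their defaults."""
    explicit = os.environ.get("DATABASE_URL")
    if explicit:
        return explicit
    user = os.environ.get("POSTGRES_USER", "adsb")
    password = os.environ.get("POSTGRES_PASSWORD", "adsb")
    db = os.environ.get("POSTGRES_DB", "adsb")
    return f"postgresql://{user}:{password}@{host}:{port}/{db}"
```

Setting DATABASE_URL explicitly (for example, to point at an external or pooled database) always wins over the assembled form.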

Redis

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| REDIS_URL | Optional | redis://redis:6379/0 | Redis connection URL |

ADS-B Sources

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| ULTRAFEEDER_HOST | Required | ultrafeeder | Ultrafeeder hostname |
| ULTRAFEEDER_PORT | Optional | 80 | Ultrafeeder port |
| DUMP978_HOST | Optional | dump978 | Dump978 hostname |
| DUMP978_PORT | Optional | 80 | Dump978 port |
| FEEDER_LAT | Required | 47.9377 | Antenna latitude |
| FEEDER_LON | Required | -121.9687 | Antenna longitude |

Polling & Sessions

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| POLLING_INTERVAL | Optional | 2 | Seconds between ADS-B polls |
| DB_STORE_INTERVAL | Optional | 5 | Seconds between database writes |
| SESSION_TIMEOUT_MINUTES | Optional | 30 | Minutes before aircraft session ends |
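
To illustrate how SESSION_TIMEOUT_MINUTES behaves, here is a simplified sketch of the session-expiry check (the function name is hypothetical, not SkysPy's actual implementation):

```python
from datetime import datetime, timedelta

def session_expired(last_seen: datetime, now: datetime,
                    timeout_minutes: int = 30) -> bool:
    """An aircraft session ends once no message has been received
    for SESSION_TIMEOUT_MINUTES."""
    return now - last_seen > timedelta(minutes=timeout_minutes)
```

Raising the timeout keeps intermittently-received aircraft in one session at the cost of holding session state open longer.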

Safety Monitoring

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| SAFETY_MONITORING_ENABLED | Optional | True | Enable safety event detection |
| SAFETY_VS_CHANGE_THRESHOLD | Optional | 2000 | Vertical speed change threshold (ft/min) |
| SAFETY_VS_EXTREME_THRESHOLD | Optional | 6000 | Extreme vertical speed (ft/min) |
| SAFETY_PROXIMITY_NM | Optional | 0.5 | Proximity alert distance (nm) |
| SAFETY_ALTITUDE_DIFF_FT | Optional | 500 | Altitude difference for proximity (ft) |
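
To make the proximity thresholds concrete: a pair of aircraft triggers an event only when they are close both laterally and vertically. A simplified sketch (helper names are illustrative, not SkysPy's actual implementation):

```python
import math

def distance_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles (haversine formula)."""
    R_NM = 3440.065  # mean Earth radius in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R_NM * math.asin(math.sqrt(a))

def proximity_event(a, b, max_nm=0.5, max_alt_diff_ft=500):
    """True when two aircraft are within SAFETY_PROXIMITY_NM laterally
    and SAFETY_ALTITUDE_DIFF_FT vertically."""
    close_lateral = distance_nm(a["lat"], a["lon"], b["lat"], b["lon"]) <= max_nm
    close_vertical = abs(a["alt_ft"] - b["alt_ft"]) <= max_alt_diff_ft
    return close_lateral and close_vertical
```

Widening either threshold increases alert volume; both conditions must hold, so a 1,000 ft vertical separation suppresses the alert even at zero lateral distance.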

ACARS/VDL2

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| ACARS_ENABLED | Optional | True | Enable ACARS message processing |
| ACARS_PORT | Optional | 5555 | ACARS UDP listen port |
| VDLM2_PORT | Optional | 5556 | VDL Mode 2 UDP listen port |

Transcription

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| TRANSCRIPTION_ENABLED | Optional | False | Enable audio transcription |
| WHISPER_ENABLED | Optional | False | Enable local Whisper |
| WHISPER_URL | Optional | http://whisper:9000 | Whisper service URL |

LLM Integration

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| LLM_ENABLED | Optional | False | Enable LLM for transcript analysis |
| LLM_API_URL | Required if LLM | https://api.openai.com/v1 | OpenAI-compatible API URL |
| LLM_API_KEY | Required if LLM | - | API key |
| LLM_MODEL | Optional | gpt-4o-mini | Model to use |

S3 Storage

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| S3_ENABLED | Optional | False | Enable S3 storage |
| S3_BUCKET | Required if S3 | - | Bucket name |
| S3_REGION | Optional | us-east-1 | AWS region |
| S3_ACCESS_KEY | Required if S3 | - | Access key |
| S3_SECRET_KEY | Required if S3 | - | Secret key |
| S3_ENDPOINT_URL | Optional | - | Custom endpoint (for MinIO) |

Monitoring

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| SENTRY_DSN | Optional | - | Sentry error tracking DSN |
| SENTRY_ENVIRONMENT | Optional | production | Environment name |
| PROMETHEUS_ENABLED | Optional | True | Enable Prometheus metrics |

Database Setup

PostgreSQL Configuration

The default docker-compose.yml includes a PostgreSQL container with:

  • PostgreSQL 16 Alpine image
  • Persistent volume for data
  • Health checks enabled
  • Automatic restart

Production Database Tuning

# docker-compose.override.yml for production PostgreSQL tuning
services:
  postgres:
    command: >
      postgres
      -c shared_buffers=256MB
      -c effective_cache_size=768MB
      -c maintenance_work_mem=128MB
      -c checkpoint_completion_target=0.9
      -c wal_buffers=16MB
      -c default_statistics_target=100
      -c random_page_cost=1.1
      -c effective_io_concurrency=200
      -c work_mem=16MB
      -c min_wal_size=1GB
      -c max_wal_size=4GB
      -c max_worker_processes=4
      -c max_parallel_workers_per_gather=2
      -c max_parallel_workers=4

External PostgreSQL

# In .env
DATABASE_URL=postgresql://username:[email protected]:5432/skyspy

# Remove postgres service from docker-compose
docker compose up -d api celery-worker celery-beat redis

Database Migrations

# Run migrations
docker compose exec api python manage.py migrate

# Check migration status
docker compose exec api python manage.py showmigrations

Database Backups

# Create backup
docker compose exec postgres pg_dump -U $POSTGRES_USER $POSTGRES_DB > backup_$(date +%Y%m%d_%H%M%S).sql

# Restore backup
cat backup.sql | docker compose exec -T postgres psql -U $POSTGRES_USER $POSTGRES_DB

Redis Setup

Default Configuration

Redis is configured with:

  • Persistence enabled (appendonly yes)
  • Memory limit of 256MB
  • LRU eviction policy

# Redis configuration in docker-compose.yml
command: redis-server --appendonly yes --maxmemory 256mb --maxmemory-policy allkeys-lru

Production Redis Recommendations

# docker-compose.override.yml for production Redis
services:
  redis:
    command: >
      redis-server
      --appendonly yes
      --maxmemory 512mb
      --maxmemory-policy allkeys-lru
      --tcp-keepalive 300
      --save 900 1
      --save 300 10
      --save 60 10000

External Redis

# In .env
REDIS_URL=redis://:[email protected]:6379/0

# Remove redis service from docker-compose
docker compose up -d api celery-worker celery-beat postgres

Nginx / Reverse Proxy

Basic Nginx Configuration

Create /etc/nginx/sites-available/skyspy:

upstream skyspy_api {
    server 127.0.0.1:8000;
    keepalive 32;
}

server {
    listen 80;
    server_name skyspy.example.com;

    # Redirect HTTP to HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name skyspy.example.com;

    # SSL Configuration
    ssl_certificate /etc/letsencrypt/live/skyspy.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/skyspy.example.com/privkey.pem;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # Modern SSL configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # HSTS
    add_header Strict-Transport-Security "max-age=63072000" always;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;

    # Logging
    access_log /var/log/nginx/skyspy_access.log;
    error_log /var/log/nginx/skyspy_error.log;

    # Max upload size for audio files
    client_max_body_size 100M;

    # API and static files
    location / {
        proxy_pass http://skyspy_api;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 300s;
        proxy_connect_timeout 75s;
    }

    # Socket.IO endpoint
    location /socket.io/ {
        proxy_pass http://skyspy_api;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 86400;
        proxy_send_timeout 86400;
    }

    # Health check (no logging)
    location /health/ {
        proxy_pass http://skyspy_api;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        access_log off;
    }

    # Metrics endpoint (restrict access)
    location /api/v1/system/metrics {
        proxy_pass http://skyspy_api;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        allow 127.0.0.1;
        allow 10.0.0.0/8;
        allow 172.16.0.0/12;
        allow 192.168.0.0/16;
        deny all;
    }
}

Enable Site

# Enable site
sudo ln -s /etc/nginx/sites-available/skyspy /etc/nginx/sites-enabled/

# Test configuration
sudo nginx -t

# Reload nginx
sudo systemctl reload nginx

SSL with Certbot

# Install certbot
sudo apt install certbot python3-certbot-nginx

# Obtain certificate
sudo certbot --nginx -d skyspy.example.com

# Auto-renewal is configured automatically
sudo systemctl status certbot.timer

Raspberry Pi Deployment

Optimized for Raspberry Pi 5 - SkysPy includes special settings for resource-constrained environments.

Prerequisites

| Requirement | Recommended | Minimum |
| --- | --- | --- |
| Hardware | Raspberry Pi 5 (8GB) | Raspberry Pi 4 (4GB) |
| Storage | NVMe SSD | 32GB+ high-speed SD card |
| OS | Raspberry Pi OS Lite (64-bit) | Ubuntu Server 24.04 |

1. Installation

# Update system
sudo apt update && sudo apt upgrade -y

# Install Docker
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER

# Reboot to apply group changes
sudo reboot

2. Deploy SkysPy

# Clone repository
cd /opt
sudo git clone https://github.com/your-org/skyspy.git
sudo chown -R $USER:$USER skyspy
cd skyspy

# Create environment file
cp .env.example .env

# Edit with RPi-specific settings
nano .env

3. Raspberry Pi Environment Configuration

# =============================================================================
# RPi-Optimized Settings
# =============================================================================
DEBUG=False
DJANGO_SECRET_KEY=your-secure-key
ALLOWED_HOSTS=*

# Use RPi-optimized Django settings
DJANGO_SETTINGS_MODULE=skyspy.settings_rpi

# Database
POSTGRES_USER=skyspy
POSTGRES_PASSWORD=secure-password
POSTGRES_DB=skyspy

# Reduced polling for lower CPU usage
POLLING_INTERVAL=3
DB_STORE_INTERVAL=10

# Disable resource-intensive features
TRANSCRIPTION_ENABLED=False
WHISPER_ENABLED=False
LLM_ENABLED=False
PHOTO_AUTO_DOWNLOAD=False

# Your ADS-B source
ULTRAFEEDER_HOST=192.168.1.50
ULTRAFEEDER_PORT=80

# Your location
FEEDER_LAT=47.9377
FEEDER_LON=-121.9687

# Notifications (lightweight)
APPRISE_URLS=
NOTIFICATION_COOLDOWN=600

# Monitoring (optional - disable if low on resources)
PROMETHEUS_ENABLED=False
SENTRY_DSN=

RPi-Optimized Settings

The settings_rpi.py module provides these optimizations:

| Setting | Default | RPi Value | Benefit |
| --- | --- | --- | --- |
| POLLING_INTERVAL | 2s | 3s | 33% CPU reduction |
| DB_STORE_INTERVAL | 5s | 10s | 50% fewer DB writes |
| CACHE_TTL | 5s | 10s | Doubled cache effectiveness |
| CONN_MAX_AGE | 60s | 120s | Fewer DB connections |
| WEBSOCKET_CAPACITY | 1500 | 1000 | Lower memory usage |
| ACARS_BUFFER_SIZE | 50 | 30 | Smaller memory footprint |
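
In structure, such a profile is simply the base settings with a handful of overrides. A sketch of the idea, with values taken from the table above (treat the import path and the module body as illustrative rather than the verbatim contents of settings_rpi.py):

```python
# skyspy/settings_rpi.py - Raspberry Pi profile (illustrative sketch)
from skyspy.settings import *  # noqa: F401,F403 - start from the base settings

# Poll and persist less often to cut CPU and disk I/O
POLLING_INTERVAL = 3        # seconds (base: 2)
DB_STORE_INTERVAL = 10      # seconds (base: 5)

# Cache longer and hold database connections open longer
CACHE_TTL = 10              # seconds (base: 5)
CONN_MAX_AGE = 120          # seconds (base: 60)

# Trim in-memory buffers
WEBSOCKET_CAPACITY = 1000   # base: 1500
ACARS_BUFFER_SIZE = 30      # base: 50
```

Selecting the profile is just a matter of pointing DJANGO_SETTINGS_MODULE at it, as shown in the .env example above.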

4. RPi Docker Compose Override

Create docker-compose.override.yml:

services:
  api:
    environment:
      - DJANGO_SETTINGS_MODULE=skyspy.settings_rpi
    deploy:
      resources:
        limits:
          memory: 1G

  celery-worker:
    environment:
      - DJANGO_SETTINGS_MODULE=skyspy.settings_rpi
    command: >
      celery -A skyspy worker
      --loglevel=info
      --concurrency=20
      --queues=polling,default,database,notifications
      --pool=gevent
    deploy:
      resources:
        limits:
          memory: 512M

  postgres:
    command: >
      postgres
      -c shared_buffers=128MB
      -c effective_cache_size=256MB
      -c maintenance_work_mem=32MB
      -c work_mem=4MB
    deploy:
      resources:
        limits:
          memory: 512M

  redis:
    command: redis-server --appendonly yes --maxmemory 128mb --maxmemory-policy allkeys-lru
    deploy:
      resources:
        limits:
          memory: 192M

5. Start Services on RPi

# Start with resource limits
docker compose up -d

# Monitor resource usage
docker stats

Performance Monitoring

# Check CPU and memory
htop

# Check disk I/O
iotop

# Check container stats
docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"

# Check API response time
time curl -s http://localhost:8000/health/ | jq .

Scaling Considerations

Horizontal Scaling Architecture

flowchart TB
    subgraph LB["Load Balancer"]
        NGINX[Nginx]
    end

    subgraph API["API Servers"]
        API1[API Server 1]
        API2[API Server 2]
        API3[API Server 3<br/>backup]
    end

    subgraph Workers["Celery Workers"]
        W1[Polling Worker<br/>concurrency: 50]
        W2[Default Worker<br/>concurrency: 20]
        W3[Transcription Worker<br/>concurrency: 2]
    end

    subgraph Data["Data Layer"]
        PGBOUNCER[PgBouncer]
        PG[(PostgreSQL)]
        REDIS[(Redis)]
    end

    NGINX --> API1
    NGINX --> API2
    NGINX --> API3

    API1 --> PGBOUNCER
    API2 --> PGBOUNCER
    API3 --> PGBOUNCER
    PGBOUNCER --> PG

    API1 --> REDIS
    API2 --> REDIS
    API3 --> REDIS

    W1 --> REDIS
    W2 --> REDIS
    W3 --> REDIS
    W1 --> PGBOUNCER
    W2 --> PGBOUNCER
    W3 --> PGBOUNCER

    style NGINX fill:#fff3e0
    style API1 fill:#e8f5e9
    style API2 fill:#e8f5e9
    style API3 fill:#ffebee
    style W1 fill:#e3f2fd
    style W2 fill:#e3f2fd
    style W3 fill:#e3f2fd

Multiple Celery Workers

# docker-compose.override.yml
services:
  celery-worker-polling:
    extends:
      service: celery-worker
    container_name: skyspy-celery-polling
    command: >
      celery -A skyspy worker
      --loglevel=info
      --concurrency=50
      --queues=polling
      --pool=gevent
      --hostname=polling@%h

  celery-worker-default:
    extends:
      service: celery-worker
    container_name: skyspy-celery-default
    command: >
      celery -A skyspy worker
      --loglevel=info
      --concurrency=20
      --queues=default,database,notifications
      --pool=gevent
      --hostname=default@%h

  celery-worker-transcription:
    extends:
      service: celery-worker
    container_name: skyspy-celery-transcription
    command: >
      celery -A skyspy worker
      --loglevel=info
      --concurrency=2
      --queues=transcription
      --pool=prefork
      --hostname=transcription@%h

Task Queue Configuration

SkysPy uses multiple Celery queues for task prioritization:

| Queue | Priority | Tasks |
| --- | --- | --- |
| polling | High | Aircraft polling, stats updates |
| default | Normal | General background tasks |
| database | Normal | Database sync, cleanup |
| transcription | Low | Audio transcription |
| notifications | Normal | Alert notifications |
| low_priority | Low | Analytics, cleanup |
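
Mapping tasks onto these queues is standard Celery routing configuration. A hedged sketch of what such a routing table might look like (the task module paths are hypothetical; SkysPy's actual task names may differ):

```python
# Illustrative Celery task routing - actual task paths may differ
task_routes = {
    "skyspy.tasks.poll_aircraft": {"queue": "polling"},
    "skyspy.tasks.update_stats": {"queue": "polling"},
    "skyspy.tasks.store_sessions": {"queue": "database"},
    "skyspy.tasks.transcribe_audio": {"queue": "transcription"},
    "skyspy.tasks.send_alert": {"queue": "notifications"},
}

def queue_for(task_name: str, default: str = "default") -> str:
    """Resolve which queue a task name is routed to; unrouted
    tasks fall through to the default queue."""
    route = task_routes.get(task_name)
    return route["queue"] if route else default
```

Dedicated workers then subscribe to specific queues via `--queues=...`, as in the override file above, so a backlog of slow transcription jobs cannot starve high-priority polling.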

Load Balancing Multiple API Instances

upstream skyspy_api {
    least_conn;
    server 10.0.0.10:8000 weight=5;
    server 10.0.0.11:8000 weight=5;
    server 10.0.0.12:8000 backup;
    keepalive 32;
}

Database Connection Pooling

For high-traffic deployments, use PgBouncer:

services:
  pgbouncer:
    image: edoburu/pgbouncer:latest
    environment:
      - DB_USER=${POSTGRES_USER}
      - DB_PASSWORD=${POSTGRES_PASSWORD}
      - DB_HOST=postgres
      - DB_NAME=${POSTGRES_DB}
      - POOL_MODE=transaction
      - MAX_CLIENT_CONN=1000
      - DEFAULT_POOL_SIZE=20
    depends_on:
      - postgres

  api:
    environment:
      - DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@pgbouncer:5432/${POSTGRES_DB}
    depends_on:
      - pgbouncer

Health Checks and Monitoring

Health Check Endpoints

| Endpoint | Description | Auth | Status |
| --- | --- | --- | --- |
| /health/ | Basic health check | None | Public |
| /api/v1/system/status | Detailed system status | None | Public |
| /api/v1/system/info | API information | None | Public |
| /api/v1/system/metrics | Prometheus metrics | None | Restrict in prod |

Health Check Response

curl http://localhost:8000/health/ | jq .

{
  "status": "healthy",
  "services": {
    "database": {
      "status": "up",
      "latency_ms": 1.23
    },
    "cache": {
      "status": "up"
    },
    "celery": {
      "status": "up"
    },
    "libacars": {
      "status": "up",
      "circuit_state": "closed",
      "healthy": true
    }
  },
  "timestamp": "2024-01-15T10:30:00Z"
}
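
For scripted monitoring it is often enough to reduce that payload to a single pass/fail. A small helper along these lines (the payload shape matches the example above; the function name is hypothetical):

```python
def overall_status(payload: dict) -> bool:
    """Return True when the top-level status is healthy and every
    reported service is up."""
    if payload.get("status") != "healthy":
        return False
    services = payload.get("services", {})
    return all(svc.get("status") == "up" for svc in services.values())
```

Feed it the parsed JSON from the health endpoint, e.g. `overall_status(json.loads(response_body))`, and exit non-zero when it returns False.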

Service Health Status Indicators

flowchart LR
    subgraph Health["Health Status"]
        DB[Database<br/>up - 1.23ms]
        CACHE[Cache<br/>up]
        CELERY[Celery<br/>up]
        ACARS[libacars<br/>closed]
    end

    API[API] --> DB
    API --> CACHE
    API --> CELERY
    API --> ACARS

    style DB fill:#c8e6c9
    style CACHE fill:#c8e6c9
    style CELERY fill:#c8e6c9
    style ACARS fill:#c8e6c9
    style API fill:#e8f5e9

Prometheus Configuration

# prometheus.yml
scrape_configs:
  - job_name: 'skyspy'
    static_configs:
      - targets: ['skyspy-api:8000']
    metrics_path: '/api/v1/system/metrics'
    scrape_interval: 15s

Docker Health Checks

# Check container health
docker compose ps

# View health check logs
docker inspect --format='{{json .State.Health}}' skyspy-api | jq .

Sentry Error Tracking

# Configure Sentry in .env
SENTRY_DSN=https://[email protected]/project-id
SENTRY_ENVIRONMENT=production
SENTRY_TRACES_SAMPLE_RATE=0.1
SENTRY_PROFILES_SAMPLE_RATE=0.1

Log Monitoring

# View all logs
docker compose logs -f

# View specific service logs
docker compose logs -f api

# View logs with timestamps
docker compose logs -f --timestamps api

# Tail last 100 lines
docker compose logs --tail=100 api

Backup and Recovery

Automated Backup Script

Create /opt/skyspy/backup.sh:

#!/bin/bash
# SkysPy Backup Script

set -e

BACKUP_DIR="/opt/skyspy/backups"
DATE=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=7

# Create backup directory
mkdir -p "$BACKUP_DIR"

# Backup PostgreSQL
echo "Backing up PostgreSQL..."
docker compose exec -T postgres pg_dump -U ${POSTGRES_USER:-adsb} ${POSTGRES_DB:-adsb} | gzip > "$BACKUP_DIR/postgres_$DATE.sql.gz"

# Backup Redis
echo "Backing up Redis..."
docker compose exec -T redis redis-cli BGSAVE
sleep 5
docker compose cp redis:/data/dump.rdb "$BACKUP_DIR/redis_$DATE.rdb"

# Backup environment file
echo "Backing up environment..."
cp /opt/skyspy/.env "$BACKUP_DIR/env_$DATE.backup"

# Backup photo cache (optional, can be large)
if [ "${BACKUP_PHOTOS:-false}" = "true" ]; then
    echo "Backing up photo cache..."
    docker compose cp api:/data/photos "$BACKUP_DIR/photos_$DATE"
fi

# Cleanup old backups
echo "Cleaning up backups older than $RETENTION_DAYS days..."
find "$BACKUP_DIR" -type f -mtime +$RETENTION_DAYS -delete

echo "Backup complete: $BACKUP_DIR"
ls -lh "$BACKUP_DIR"/*_$DATE*

Schedule Automated Backups

# Make script executable
chmod +x /opt/skyspy/backup.sh

# Add to crontab (daily at 2 AM)
(crontab -l 2>/dev/null; echo "0 2 * * * /opt/skyspy/backup.sh >> /var/log/skyspy-backup.log 2>&1") | crontab -

Recovery Procedures

Restore PostgreSQL

# Stop services
docker compose stop api celery-worker celery-beat

# Restore database
gunzip -c backups/postgres_20240115_020000.sql.gz | docker compose exec -T postgres psql -U ${POSTGRES_USER:-adsb} ${POSTGRES_DB:-adsb}

# Run migrations (in case of schema changes)
docker compose exec api python manage.py migrate

# Restart services
docker compose start api celery-worker celery-beat

Restore Redis

# Stop Redis
docker compose stop redis

# Copy backup file
docker compose cp backups/redis_20240115_020000.rdb redis:/data/dump.rdb

# Start Redis
docker compose start redis

Full System Recovery

# 1. Clone fresh repository
cd /opt
git clone https://github.com/your-org/skyspy.git skyspy-new
cd skyspy-new

# 2. Restore environment file
cp /path/to/backup/env_20240115_020000.backup .env

# 3. Start infrastructure services
docker compose up -d postgres redis

# 4. Wait for PostgreSQL to be ready
sleep 10

# 5. Restore database
gunzip -c /path/to/backup/postgres_20240115_020000.sql.gz | docker compose exec -T postgres psql -U ${POSTGRES_USER:-adsb} ${POSTGRES_DB:-adsb}

# 6. Start remaining services
docker compose up -d

# 7. Verify health
curl http://localhost:8000/health/

Disaster Recovery Checklist

  • Verify backups exist and are valid
  • Test restore in staging environment
  • Document recovery time objective (RTO): ~15 minutes
  • Document recovery point objective (RPO): Up to 24 hours
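
The first checklist item can be automated. A sketch that checks the newest PostgreSQL dump actually decompresses (paths follow the backup script above; the function name is hypothetical):

```python
import glob
import gzip
import os

def latest_backup_valid(backup_dir: str = "/opt/skyspy/backups") -> bool:
    """Find the newest postgres_*.sql.gz backup and confirm it
    decompresses cleanly - a cheap 'backup is valid' smoke test."""
    candidates = glob.glob(os.path.join(backup_dir, "postgres_*.sql.gz"))
    if not candidates:
        return False
    newest = max(candidates, key=os.path.getmtime)
    try:
        with gzip.open(newest, "rb") as fh:
            while fh.read(1024 * 1024):  # stream through the whole file
                pass
        return True
    except OSError:  # includes gzip.BadGzipFile
        return False
```

Run it from the same cron schedule as the backup script and alert when it returns False; a decompression check catches truncated or corrupted dumps long before a real restore is needed.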

Upgrading

Standard Upgrade Procedure

cd /opt/skyspy

# 1. Pull latest changes
git fetch origin
git checkout main
git pull origin main

# 2. Backup database
./backup.sh

# 3. Pull new images
docker compose pull

# 4. Rebuild custom images
docker compose build

# 5. Apply database migrations
docker compose exec api python manage.py migrate

# 6. Restart services with new images
docker compose up -d

# 7. Verify health
curl http://localhost:8000/health/
docker compose logs -f api

Zero-Downtime Upgrade (Advanced)

# 1. Build new images without stopping services
docker compose build

# 2. Start new API container alongside old one
docker compose up -d --scale api=2 --no-recreate

# 3. Run migrations with the new image (exec may land on the old container)
docker compose run --rm api python manage.py migrate

# 4. Gradually shift traffic to new container
# (Requires external load balancer configuration)

# 5. Stop old containers
docker compose up -d --scale api=1

# 6. Verify
curl http://localhost:8000/health/

Troubleshooting

Common Issues

Error: API Not Starting - Check logs and verify database connectivity.

# Check logs
docker compose logs api

# Verify PostgreSQL is ready
docker compose exec postgres pg_isready

# Generate new secret key if invalid
python3 -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"

# Run migrations manually
docker compose exec api python manage.py migrate

Warning: Socket.IO Connection Issues - Verify Daphne is running and nginx is configured correctly.

# Check Daphne is running
docker compose exec api ps aux | grep daphne

# Verify Socket.IO endpoint
curl -i "http://localhost:8000/socket.io/?EIO=4&transport=polling"

# Ensure nginx proxy_read_timeout is high enough (86400 for 24 hours)
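
The nginx side of the Socket.IO path might look like the following. This is a sketch only — the upstream address and port are assumptions based on this guide's defaults:

```nginx
# Proxy Socket.IO long-polling and WebSocket traffic to Daphne
location /socket.io/ {
    proxy_pass http://127.0.0.1:8000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_read_timeout 86400s;   # keep idle WebSockets open for 24 hours
    proxy_send_timeout 86400s;
}
```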

Warning: Celery Tasks Not Running - Check worker status and Redis connectivity.

# Check Celery worker status
docker compose exec celery-worker celery -A skyspy inspect active

# Check Celery beat is dispatching (beat does not answer inspect; read its logs)
docker compose logs --tail=50 celery-beat

# Verify Redis connectivity
docker compose exec celery-worker python -c "import redis; r = redis.from_url('redis://redis:6379/0'); print(r.ping())"

Warning: Database Connection Issues - Verify PostgreSQL is running and connections are not exhausted.

# Check PostgreSQL is running
docker compose exec postgres pg_isready -U ${POSTGRES_USER:-adsb}

# Check connection from API
docker compose exec api python -c "
from django.db import connection
cursor = connection.cursor()
cursor.execute('SELECT 1')
print('Database OK')
"

# Check for connection pool exhaustion
docker compose exec postgres psql -U ${POSTGRES_USER:-adsb} -c "SELECT count(*) FROM pg_stat_activity;"

Info: High Memory Usage - Reduce worker concurrency and cache limits.

# Check container memory usage
docker stats --no-stream

# Reduce Celery concurrency in docker-compose.override.yml:
# command: celery -A skyspy worker --concurrency=20 ...

# Reduce Redis memory:
# command: redis-server --maxmemory 128mb ...
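
Both adjustments can live in one override file. A hedged docker-compose.override.yml sketch (service names and base commands are assumptions — check them against your compose file before use):

```yaml
# docker-compose.override.yml -- reduce memory footprint on small hosts
services:
  celery-worker:
    command: celery -A skyspy worker --pool=gevent --concurrency=20 --loglevel=info
  redis:
    command: redis-server --maxmemory 128mb --maxmemory-policy allkeys-lru
```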


Security Considerations

Production Security Checklist

Danger: Complete ALL items before deploying to production.

Authentication & Secrets

  • Set DEBUG=False
  • Generate strong DJANGO_SECRET_KEY (64+ characters)
  • Generate separate JWT_SECRET_KEY
  • Use strong database passwords
  • Configure ALLOWED_HOSTS properly
  • Configure AUTH_MODE=private or hybrid for sensitive deployments
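
As one illustration, the corresponding .env entries might look like the following. The values are placeholders, not working secrets — generate your own keys (for example with get_random_secret_key(), shown in Troubleshooting):

```env
# .env -- production hardening (placeholder values, replace every one)
DEBUG=False
DJANGO_SECRET_KEY=replace-with-64-plus-char-random-string
JWT_SECRET_KEY=replace-with-a-separate-random-string
POSTGRES_PASSWORD=replace-with-a-strong-unique-password
ALLOWED_HOSTS=skyspy.example.com
AUTH_MODE=private
```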

Network Security

  • Enable HTTPS with valid SSL certificate
  • Configure firewall rules
  • Set up fail2ban for brute force protection
  • Enable rate limiting in nginx
  • Restrict access to /api/v1/system/metrics
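
Rate limiting and the metrics restriction can both be expressed in nginx. A sketch (zone name, rate, burst, and the allowed CIDR are assumptions to tune for your site):

```nginx
# Shared-memory zone keyed by client IP (goes in the http{} context)
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    # Throttle the API, allowing short bursts
    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://127.0.0.1:8000;
    }

    # Restrict metrics to the local network only
    location /api/v1/system/metrics {
        allow 192.168.0.0/16;
        deny all;
        proxy_pass http://127.0.0.1:8000;
    }
}
```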

Container Security

  • Use non-root user in containers (already configured)
  • Keep Docker and dependencies updated
  • Scan images for vulnerabilities

Firewall Configuration

# Allow only necessary ports
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable

Container Security Features

The Docker images are built with security in mind:

| Feature | Status | Description |
|---------|--------|-------------|
| Non-root user | Enabled | Runs as skyspy:skyspy (UID/GID 1000) |
| Minimal base | Enabled | Python 3.12-slim images |
| No shell access | Enabled | Production containers |
| Read-only volumes | Enabled | Code volumes where possible |

Platform Deployment Options

| Platform | Best For | Resources |
|----------|----------|-----------|
| Docker - Full containerized deployment with Docker Compose. | Development, testing, CI/CD | 4GB RAM, 2 cores |
| Raspberry Pi - Optimized for edge deployment on Pi 5. | Home users, remote stations | 8GB RAM, NVMe SSD |
| Cloud - Scalable production deployment. | High-traffic, enterprise | 8GB+ RAM, 4+ cores |

Next Steps: After deployment, check out the API Reference and Socket.IO Guide to start building integrations.