Docker Deployment

Deploy thinnestAI using Docker and Docker Compose for local development or small-scale production use.

Docker is the fastest way to get thinnestAI running on your own machine. This guide covers the Dockerfile, Docker Compose setup, and building images.

Prerequisites

  • Docker and Docker Compose installed
  • Git, for cloning the repository
  • An API key for at least one supported LLM provider (for example, OpenAI)

Quick Start

# Clone the repository
git clone https://github.com/thinnestai/agno-platform.git
cd agno-platform

# Copy and configure environment
cp .env.example .env
# Edit .env with your API keys and settings

# Start all services
docker-compose up -d

# Check that everything is running
docker-compose ps

# View logs
docker-compose logs -f backend

The API will be available at http://localhost:8000. Check health with:

curl http://localhost:8000/api/health

Docker Compose Configuration

The docker-compose.yml file defines all services:

version: '3.8'

services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile.backend
    ports:
      - "8000:8000"
    env_file:
      - .env
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped

  workers:
    build:
      context: .
      dockerfile: Dockerfile.workers
    env_file:
      - .env
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped

  postgres:
    image: pgvector/pgvector:pg15
    environment:
      POSTGRES_USER: agno
      POSTGRES_PASSWORD: your_secure_password
      POSTGRES_DB: agno
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U agno"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:
  redis_data:
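
Note that the backend service above defines no healthcheck of its own. A sketch of one that probes the /api/health endpoint using the Python interpreter already in the image (interval values are illustrative, adjust to taste):

```yaml
services:
  backend:
    healthcheck:
      # Reuse the interpreter in the image rather than relying on curl being installed
      test: ["CMD-SHELL", "python -c \"import urllib.request; urllib.request.urlopen('http://localhost:8000/api/health')\""]
      interval: 10s
      timeout: 5s
      retries: 5
```

With this in place, other services can depend on the backend with condition: service_healthy, the same pattern used for postgres and redis above.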

Backend Dockerfile

The backend Dockerfile builds the FastAPI application:

# Dockerfile.backend
FROM python:3.11-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libpq-dev \
    && rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Expose the API port
EXPOSE 8000

# Run database migrations and start server
CMD ["sh", "-c", "alembic upgrade head && python main.py"]

Workers Dockerfile

Background workers handle email, billing, campaigns, and auto-topup:

# Dockerfile.workers
FROM python:3.11-slim

WORKDIR /app

RUN apt-get update && apt-get install -y \
    build-essential \
    libpq-dev \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Start all workers via supervisord (expects supervisor in requirements.txt)
CMD ["supervisord", "-c", "supervisord.conf"]
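
The supervisord.conf referenced above ships with the repository. As a rough illustration of its shape only, with hypothetical program names and module paths based on the worker types listed earlier:

```ini
[supervisord]
nodaemon=true

[program:email_worker]
command=python -m workers.email
autorestart=true

[program:billing_worker]
command=python -m workers.billing
autorestart=true
```

Each [program:*] section runs one worker process; supervisord restarts any worker that exits.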

Building Images

Build All Services

docker-compose build

Build Individual Services

# Backend only
docker-compose build backend

# Workers only
docker-compose build workers

Build with No Cache

docker-compose build --no-cache

Environment Configuration

Create your .env file from the example:

cp .env.example .env

At minimum, configure these variables:

# Database (matches docker-compose postgres service)
PG_DB_URL=postgresql://agno:your_secure_password@postgres:5432/agno

# Redis (matches docker-compose redis service)
REDIS_URL=redis://redis:6379

# Encryption key (generate a random 32-char string)
ENCRYPTION_KEY=your-32-character-encryption-key-here

# LLM Provider (at least one)
OPENAI_API_KEY=sk-your-openai-api-key

# Environment
ENVIRONMENT=development
DEV_MODE=true
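
One way to produce the 32-character ENCRYPTION_KEY above is openssl, which most systems ship; this is only a sketch, and any 32-character secret works:

```shell
# 16 random bytes hex-encode to exactly 32 characters
key=$(openssl rand -hex 16)
echo "ENCRYPTION_KEY=${key}"
```

Paste the printed line into your .env file.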

See the full Environment Variables reference for all options.

Database Migrations

Migrations run automatically on startup. To run them manually:

# Apply all pending migrations
docker-compose exec backend alembic upgrade head

# Check current migration status
docker-compose exec backend alembic current

# Create a new migration
docker-compose exec backend alembic revision --autogenerate -m "description"

# Rollback one migration
docker-compose exec backend alembic downgrade -1

Common Operations

View Logs

# All services
docker-compose logs -f

# Specific service
docker-compose logs -f backend
docker-compose logs -f workers
docker-compose logs -f postgres

Restart Services

# Restart everything
docker-compose restart

# Restart one service
docker-compose restart backend

Stop and Remove

# Stop services (preserves data)
docker-compose down

# Stop and remove volumes (deletes all data)
docker-compose down -v

Access the Database

docker-compose exec postgres psql -U agno -d agno

Access Redis

docker-compose exec redis redis-cli

Updating

To deploy a new version:

# Pull latest code
git pull origin main

# Rebuild and restart
docker-compose build
docker-compose up -d

# Migrations run automatically on startup

Production Considerations

Docker Compose is great for development and small deployments. For production, consider:

  • Use managed databases: Cloud-hosted PostgreSQL and Redis for reliability and backups.
  • Set up reverse proxy: Use Nginx or Caddy in front of the backend for SSL termination.
  • Configure backups: Automate PostgreSQL backups.
  • Set resource limits: Add mem_limit and cpus to your Docker Compose services.
  • Use Docker secrets: Don't store sensitive values in .env files in production.
  • Enable health checks: The backend exposes /api/health for monitoring.
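
The resource-limit suggestion above can be sketched as a docker-compose.override.yml; the values here are placeholders, not recommendations:

```yaml
services:
  backend:
    mem_limit: 1g
    cpus: 1.0
  workers:
    mem_limit: 512m
    cpus: 0.5
```

Compose merges an override file with docker-compose.yml automatically, so limits can be tuned per environment without editing the base file.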

For a fully managed production setup, see GCP Deployment.

Troubleshooting

  • Backend can't connect to database: ensure PG_DB_URL uses postgres (the Compose service name), not localhost.
  • Migrations fail: check that PostgreSQL is healthy with docker-compose ps.
  • Out of memory: increase Docker's memory allocation in Docker Desktop settings.
  • Port conflicts: change the port mappings in docker-compose.yml.
  • Workers not processing: check Redis connectivity and the worker logs.
