# Docker Deployment

Deploy thinnestAI using Docker and Docker Compose for local development or small-scale production use.
Docker is the fastest way to get thinnestAI running on your own machine. This guide covers the Dockerfile, Docker Compose setup, and building images.
## Prerequisites
- Docker 20.10+
- Docker Compose v2+
- At least one LLM API key (OpenAI, Google, or Anthropic)
## Quick Start

```bash
# Clone the repository
git clone https://github.com/thinnestai/agno-platform.git
cd agno-platform

# Copy and configure environment
cp .env.example .env
# Edit .env with your API keys and settings

# Start all services
docker-compose up -d

# Check that everything is running
docker-compose ps

# View logs
docker-compose logs -f backend
```

The API will be available at http://localhost:8000. Check health with:

```bash
curl http://localhost:8000/api/health
```
## Docker Compose Configuration

The `docker-compose.yml` file defines all services:

```yaml
version: '3.8'

services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile.backend
    ports:
      - "8000:8000"
    env_file:
      - .env
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped

  workers:
    build:
      context: .
      dockerfile: Dockerfile.workers
    env_file:
      - .env
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped

  postgres:
    image: pgvector/pgvector:pg15
    environment:
      POSTGRES_USER: agno
      POSTGRES_PASSWORD: your_secure_password
      POSTGRES_DB: agno
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U agno"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:
  redis_data:
```
## Backend Dockerfile

The backend Dockerfile builds the FastAPI application:

```dockerfile
# Dockerfile.backend
FROM python:3.11-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libpq-dev \
    && rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

EXPOSE 8000

# Run database migrations and start server
CMD ["sh", "-c", "alembic upgrade head && python main.py"]
```
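Because the Dockerfile copies the entire build context (`COPY . .`), a `.dockerignore` file keeps local artifacts and secrets out of the image. The entries below are typical examples, not the repository's actual file:

```
# .dockerignore (illustrative entries -- adjust to the repo)
.git
.env
__pycache__/
*.pyc
.venv/
```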
## Workers Dockerfile

Background workers handle email, billing, campaigns, and auto-topup:

```dockerfile
# Dockerfile.workers
FROM python:3.11-slim

WORKDIR /app

RUN apt-get update && apt-get install -y \
    build-essential \
    libpq-dev \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Start all workers via supervisord
CMD ["supervisord", "-c", "supervisord.conf"]
```
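The `supervisord.conf` itself is not shown above. A minimal sketch of what such a file might look like, with hypothetical worker module names (the real program entries live in the repository's `supervisord.conf`):

```ini
[supervisord]
nodaemon=true

; One [program:*] entry per worker; the module paths below are illustrative
[program:email_worker]
command=python -m workers.email
autorestart=true

[program:billing_worker]
command=python -m workers.billing
autorestart=true
```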
## Building Images

### Build All Services

```bash
docker-compose build
```

### Build Individual Services

```bash
# Backend only
docker-compose build backend

# Workers only
docker-compose build workers
```

### Build with No Cache

```bash
docker-compose build --no-cache
```
## Environment Configuration

Create your `.env` file from the example:

```bash
cp .env.example .env
```

At minimum, configure these variables:

```bash
# Database (matches docker-compose postgres service)
PG_DB_URL=postgresql://agno:your_secure_password@postgres:5432/agno

# Redis (matches docker-compose redis service)
REDIS_URL=redis://redis:6379

# Encryption key (generate a random 32-char string)
ENCRYPTION_KEY=your-32-character-encryption-key-here

# LLM provider (at least one)
OPENAI_API_KEY=sk-your-openai-api-key

# Environment
ENVIRONMENT=development
DEV_MODE=true
```

See the full Environment Variables reference for all options.
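One quick way to generate a 32-character `ENCRYPTION_KEY`, assuming any 32-character string is acceptable (the exact format the application expects is not specified here):

```shell
# Print 32 hex characters (16 random bytes) for ENCRYPTION_KEY
# (openssl users can run the equivalent: openssl rand -hex 16)
python3 -c "import secrets; print(secrets.token_hex(16))"
```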
## Database Migrations

Migrations run automatically on startup. To run them manually:

```bash
# Apply all pending migrations
docker-compose exec backend alembic upgrade head

# Check current migration status
docker-compose exec backend alembic current

# Create a new migration
docker-compose exec backend alembic revision --autogenerate -m "description"

# Roll back one migration
docker-compose exec backend alembic downgrade -1
```
## Common Operations

### View Logs

```bash
# All services
docker-compose logs -f

# Specific service
docker-compose logs -f backend
docker-compose logs -f workers
docker-compose logs -f postgres
```

### Restart Services

```bash
# Restart everything
docker-compose restart

# Restart one service
docker-compose restart backend
```

### Stop and Remove

```bash
# Stop services (preserves data)
docker-compose down

# Stop and remove volumes (deletes all data)
docker-compose down -v
```

### Access the Database

```bash
docker-compose exec postgres psql -U agno -d agno
```

### Access Redis

```bash
docker-compose exec redis redis-cli
```
## Updating

To deploy a new version:

```bash
# Pull latest code
git pull origin main

# Rebuild and restart
docker-compose build
docker-compose up -d

# Migrations run automatically on startup
```
## Production Considerations

Docker Compose is great for development and small deployments. For production, consider:

- Use managed databases: Cloud-hosted PostgreSQL and Redis for reliability and backups.
- Set up a reverse proxy: Put Nginx or Caddy in front of the backend for SSL termination.
- Configure backups: Automate PostgreSQL backups.
- Set resource limits: Add `mem_limit` and `cpus` to your Docker Compose services.
- Use Docker secrets: Don't store sensitive values in `.env` files in production.
- Enable health checks: The backend exposes `/api/health` for monitoring.
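As one example, the resource-limit and health-check suggestions could be added to the backend service in `docker-compose.yml` like this (values are illustrative, and the curl-based check assumes curl is present in the image, which the slim base image does not include by default):

```yaml
services:
  backend:
    mem_limit: 1g
    cpus: "1.0"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/api/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```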
For a fully managed production setup, see GCP Deployment.
## Troubleshooting

| Issue | Solution |
|---|---|
| Backend can't connect to database | Ensure `PG_DB_URL` uses `postgres` (the service name), not `localhost` |
| Migrations fail | Check PostgreSQL is healthy: `docker-compose ps` |
| Out of memory | Increase Docker's memory allocation in Docker Desktop settings |
| Port conflicts | Change port mappings in `docker-compose.yml` |
| Workers not processing | Check Redis connectivity and worker logs |
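The most common misconfiguration is pointing `PG_DB_URL` at `localhost`, which inside a Compose container refers to the container itself. A small sanity-check sketch (the `check_db_host` helper is hypothetical, not part of the repository):

```shell
# Warn when a database URL targets localhost instead of the Compose
# service name; inside a container, localhost is the container itself.
check_db_host() {
  case "$1" in
    *@localhost*|*@127.0.0.1*)
      echo "WARN: use the service name 'postgres', not localhost"
      return 1 ;;
    *)
      echo "OK: host looks container-friendly"
      return 0 ;;
  esac
}

check_db_host "postgresql://agno:pw@localhost:5432/agno"  # prints WARN: ...
check_db_host "postgresql://agno:pw@postgres:5432/agno"   # prints OK: ...
```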
## Next Steps
- GCP Deployment — Scale to production on Google Cloud.
- Environment Variables — Complete configuration reference.
- Monitoring — Set up health checks and observability.