Deployment

Deploy thinnestAI to your own infrastructure. Self-host the full platform on Docker, GCP, or your preferred cloud provider.

thinnestAI can be self-hosted on your own infrastructure. This gives you full control over your data, compliance, and scaling.

Deployment Options

| Option | Best For | Complexity |
| --- | --- | --- |
| Docker | Local development, small teams | Low |
| GCP Cloud Run | Production workloads | Medium |
| Custom | Advanced users with existing infrastructure | High |

Architecture Overview

A full thinnestAI deployment consists of:

| Component | Purpose | Required |
| --- | --- | --- |
| Backend API | FastAPI server handling all API requests | Yes |
| PostgreSQL | Primary database with pgvector for embeddings | Yes |
| Redis | Caching, rate limiting, and task queues | Yes |
| Background Workers | Email, billing, campaigns, auto-topup | Yes (for full features) |
| Frontend | Next.js dashboard and management UI | Optional (use hosted dashboard) |

Minimum Requirements

| Resource | Minimum | Recommended |
| --- | --- | --- |
| CPU | 2 vCPU | 4 vCPU |
| Memory | 4 GB | 8 GB |
| Storage | 20 GB | 50 GB+ |
| PostgreSQL | 15+ with pgvector | Managed service preferred |
| Redis | 6.0+ | Managed service preferred |
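If you plan to point the backend at an existing PostgreSQL instance rather than the bundled container, it can be worth confirming the version and pgvector availability up front. A sketch, assuming `psql` is installed locally; the connection string is a placeholder, not a real credential:

```shell
# Placeholder connection string -- substitute your own credentials.
DB_URL="postgresql://user:password@localhost:5432/agno"

# Confirm the server is 15+ ...
psql "$DB_URL" -c "SHOW server_version;"

# ... and that the pgvector extension can be enabled
# (requires the pgvector package on the server).
psql "$DB_URL" -c "CREATE EXTENSION IF NOT EXISTS vector;"
```

Most managed PostgreSQL services (Cloud SQL, RDS, Neon) ship pgvector but still require the `CREATE EXTENSION` step per database.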

Quick Start

The fastest way to get running is with Docker:

```shell
git clone https://github.com/thinnestai/agno-platform.git
cd agno-platform
cp .env.example .env
# Edit .env with your configuration
docker-compose up -d
```

See Docker Deployment for the full guide.
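Once the containers are up, a quick sanity check helps catch misconfiguration early. A sketch; the API port and `/health` path are assumptions, so match them to your `.env` and the compose file:

```shell
# Confirm every service started and is not restart-looping.
docker-compose ps

# Probe the backend API. Port 8000 and the /health path are
# assumptions -- adjust to your deployment.
curl -fsS http://localhost:8000/health
```

If a service shows as restarting, `docker-compose logs <service>` is usually the fastest way to find the failing environment variable.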

Configuration

All configuration is done through environment variables. See the Environment Variables reference for the complete list.

The minimum required variables are:

```shell
# Database
PG_DB_URL=postgresql://user:password@localhost:5432/agno

# Encryption
ENCRYPTION_KEY=your-32-character-encryption-key

# At least one LLM provider
OPENAI_API_KEY=sk-your-openai-key
```
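`ENCRYPTION_KEY` should be a random 32-character value, not a guessable string. One way to generate one, assuming `openssl` is available:

```shell
# 16 random bytes, hex-encoded = a 32-character key for ENCRYPTION_KEY.
openssl rand -hex 16
```

Generate the key once, store it in a secrets manager, and keep it stable: rotating it invalidates any data already encrypted with it.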
