Summary: I've been deploying n8n for production workloads for the past two years. Here's everything I learned about the different deployment methods, with actual code examples and configurations that work in the real world.

Why Deploy n8n Yourself?
Before we dive into the how, let's talk about the why. n8n is self-hosted by design, which means you run it on your own infrastructure. If you're coming from tools like Zapier or Make.com, this might seem like extra work — and honestly, it is. But there are compelling reasons:
- Data stays on your servers — no third-party access to sensitive business data
- No execution limits — run as many workflows as your server can handle
- Custom nodes — build integrations for internal tools or niche services
- Cost predictability — pay for infrastructure, not per-workflow execution
The trade-off? You're responsible for deployment, maintenance, security, and scaling. That's what this guide is for.
Before We Start
This isn't a "click next to install" tutorial. We're going to set up production-ready n8n deployments. You'll need basic familiarity with command line tools, Docker, and server administration. If that sounds intimidating, skip to the AutomateSpot section at the end.
Method 1: Docker Self-Hosting
This is the "roll your sleeves up" approach. You're going to run n8n in Docker containers on your own server. I've been running my production n8n instance this way for over a year, and while it requires more setup, the control is worth it.
What You'll Need
- A Linux server (I use Ubuntu 22.04, but any recent distro works)
- Docker and Docker Compose
- A domain name pointed to your server
- Basic comfort with command line
Server Sizing
For a single-user n8n instance with moderate workflow complexity, I recommend starting with 2GB RAM and 2 CPU cores. You can always scale up later. My production instance runs on a small Hetzner VPS (costs are broken down in the Real-World Costs section below) and handles hundreds of workflow executions daily.
Setting Up the Environment
First, let's get Docker installed and create our project structure:
# Install Docker (Ubuntu/Debian)
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER
# Log out and back in (or run `newgrp docker`) so the group change takes effect
# Install Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# Create our n8n directory
mkdir ~/n8n-production && cd ~/n8n-production
The Docker Compose Stack
Here's the docker-compose.yml I use in production. It includes PostgreSQL for data persistence and Traefik for automatic SSL:
version: '3.8'

services:
  traefik:
    image: traefik:v3.0
    command:
      - --api.dashboard=true
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.letsencrypt.acme.tlschallenge=true
      - --certificatesresolvers.letsencrypt.acme.email=your-email@domain.com
      - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped

  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: n8n
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped
    healthcheck:
      test: pg_isready -U n8n -d n8n
      interval: 10s
      timeout: 5s
      retries: 5

  n8n:
    image: n8nio/n8n:latest
    environment:
      - N8N_HOST=${N8N_HOST}
      - N8N_PROTOCOL=https
      - N8N_EDITOR_BASE_URL=https://${N8N_HOST}/
      - WEBHOOK_URL=https://${N8N_HOST}/
      - GENERIC_TIMEZONE=America/New_York
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - N8N_USER_MANAGEMENT_DISABLED=true
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=${N8N_BASIC_AUTH_USER}
      - N8N_BASIC_AUTH_PASSWORD=${N8N_BASIC_AUTH_PASSWORD}
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy
    labels:
      - traefik.enable=true
      - traefik.http.routers.n8n.rule=Host(`${N8N_HOST}`)
      - traefik.http.routers.n8n.tls=true
      - traefik.http.routers.n8n.tls.certresolver=letsencrypt
      - traefik.http.services.n8n.loadbalancer.server.port=5678
    restart: unless-stopped

volumes:
  postgres_data:
  n8n_data:
Environment Configuration
Create a .env file with your configuration. Keep this file secure — it contains sensitive data:
# Your domain
N8N_HOST=n8n.yourdomain.com
# Database password (generate a strong one)
POSTGRES_PASSWORD=your-super-secure-password-here
# Basic auth credentials
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=another-secure-password
# Encryption key (generate with: openssl rand -base64 32)
N8N_ENCRYPTION_KEY=your-generated-encryption-key-here
Critical Security Note
The N8N_ENCRYPTION_KEY encrypts stored credentials in your workflows. If you lose this key, you'll lose access to all saved credentials. Back it up securely and never change it after the initial setup.
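One low-tech way to back it up is to keep an encrypted copy of the whole .env off the server. A minimal sketch using gpg with a passphrase (file names are just examples):
# Encrypt a copy of .env (it contains the encryption key)
gpg --symmetric --cipher-algo AES256 --output n8n-env-backup.env.gpg .env
# Store n8n-env-backup.env.gpg somewhere off the server; restore later with:
# gpg --decrypt n8n-env-backup.env.gpg > .env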
Launching Your n8n Instance
# Start everything
docker-compose up -d
# Check that services are running
docker-compose ps
# Follow the logs to see what's happening
docker-compose logs -f n8n
If everything went well, you should be able to access n8n at https://n8n.yourdomain.com. Traefik will automatically handle the SSL certificate from Let's Encrypt.
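If the certificate doesn't appear within a minute or two, these checks usually point at the problem (service names match the compose file above):
# Should respond over HTTPS with a valid certificate
curl -I https://n8n.yourdomain.com
# Watch Traefik's ACME activity for DNS or rate-limit issues
docker-compose logs traefik | grep -i acme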
Essential Maintenance Tasks
Here are the maintenance commands I run regularly:
# Update n8n to the latest version
docker-compose pull n8n
docker-compose up -d n8n
# Backup your database (run this weekly)
docker-compose exec postgres pg_dump -U n8n n8n | gzip > backup-$(date +%Y-%m-%d).sql.gz
# Clean up old Docker images
docker image prune -f
# Check disk usage
df -h
docker system df
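Rather than remembering the backup by hand, I'd put it on a schedule. A minimal cron sketch (paths are examples; note that % has to be escaped in crontab):
# crontab -e, then add a weekly backup at 03:00 on Sundays:
# 0 3 * * 0 cd /home/n8nuser/n8n-production && docker-compose exec -T postgres pg_dump -U n8n n8n | gzip > /home/n8nuser/backups/backup-$(date +\%Y-\%m-\%d).sql.gz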
Real-World Costs
My production setup runs on a Hetzner CPX21 VPS (2 vCPU, 4GB RAM, 40GB disk) for €4.15/month (~$4.50). Add domain costs (~$12/year) and you're looking at about $5.50/month total. Your costs may vary based on provider and resource needs.
When to choose Docker self-hosting:
- You're comfortable with command line and Docker
- You need custom configurations or integrations
- You want predictable, low costs
- Data sovereignty is important to you
When to skip it:
- You just want to get started quickly
- Server maintenance sounds like a chore
- You don't have experience with containerization
Advanced Docker Configuration
For production deployments, you'll want to implement additional configurations:
Performance Optimization
For high-volume workflows, optimize your Docker configuration:
# Add to the n8n service in docker-compose.yml
environment:
  # Worker Configuration
  - EXECUTIONS_MODE=queue
  - QUEUE_BULL_REDIS_HOST=redis
  - N8N_WORKERS=4
  # Memory Management
  - NODE_OPTIONS=--max-old-space-size=2048
  # Execution Timeout (seconds)
  - EXECUTIONS_TIMEOUT=300
  - EXECUTIONS_TIMEOUT_MAX=600
  # File Storage Optimization
  - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
  - N8N_BINARY_DATA_TTL=24
Add Redis for queue management:
  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes --maxmemory 512mb --maxmemory-policy allkeys-lru
    volumes:
      - redis_data:/data
    restart: unless-stopped

volumes:
  postgres_data:
  n8n_data:
  redis_data:
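Queue mode only pays off with at least one worker process running; in practice, execution capacity comes from how many workers you run and their concurrency. Here's a minimal worker service sketch. The service name n8n-worker is my own choice, and the environment mirrors the main service:
  n8n-worker:
    image: n8nio/n8n:latest
    command: worker
    environment:
      # Same DB, Redis and encryption settings as the main n8n service
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
    depends_on:
      - redis
      - postgres
    restart: unless-stopped
Workers pull jobs from the Redis queue, so the main instance mostly handles the editor and incoming webhooks.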
Security Hardening
Implement additional security measures for production:
# Security environment variables for the n8n service
environment:
  # Disable user registration
  - N8N_USER_MANAGEMENT_DISABLED=true
  # Restrict allowed browser origins
  - N8N_CORS_ORIGIN=https://n8n.yourdomain.com
  # Secure cookie handling
  - N8N_SECURE_COOKIE=true
  - N8N_COOKIE_SAME_SITE_POLICY=strict
  # Default caller policy: only workflows from the same owner may call a workflow
  - N8N_WORKFLOW_CALLER_POLICY_DEFAULT_OPTION=workflowsFromSameOwner
  # Keep production webhook executions on the main process
  - N8N_DISABLE_PRODUCTION_MAIN_PROCESS=false
Monitoring and Logging
Add comprehensive monitoring to your deployment:
# Add to docker-compose.yml
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    restart: unless-stopped

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
    volumes:
      - grafana_data:/var/lib/grafana
    restart: unless-stopped

# Remember to declare prometheus_data and grafana_data in the top-level volumes block
Create a prometheus.yml configuration:
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'n8n'
    static_configs:
      - targets: ['n8n:5678']
    metrics_path: '/metrics'
  - job_name: 'postgres'
    static_configs:
      - targets: ['postgres_exporter:9187']
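Two things this scrape config assumes that the compose file above doesn't provide yet: n8n only exposes /metrics when N8N_METRICS=true is set, and the postgres job needs an exporter container. A sketch of the missing pieces (the exporter image and service name are my assumptions):
# In the existing n8n service, enable the metrics endpoint:
#   - N8N_METRICS=true
# Then add an exporter service for the 'postgres' scrape job:
  postgres_exporter:
    image: prometheuscommunity/postgres-exporter:latest
    environment:
      - DATA_SOURCE_NAME=postgresql://n8n:${POSTGRES_PASSWORD}@postgres:5432/n8n?sslmode=disable
    restart: unless-stopped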
Method 2: VPS Hosting Solutions
VPS hosting offers a middle ground between full self-management and managed services. Here's a comprehensive guide to deploying n8n on various VPS providers.
Provider Comparison and Setup
I've tested n8n deployments across major VPS providers. Here's the detailed breakdown:
Provider | 2GB RAM / 2 vCPU | 4GB RAM / 2 vCPU | Ease of Setup (5 = easiest) | n8n Performance |
---|---|---|---|---|
Hetzner | €3.29/month | €4.15/month | 4/5 | Excellent |
DigitalOcean | $12/month | $24/month | 5/5 | Very Good |
Vultr | $6/month | $12/month | 4/5 | Good |
Linode | $10/month | $20/month | 4/5 | Very Good |
AWS Lightsail | $10/month | $20/month | 3/5 | Good |
Note: Prices and performance ratings are based on my own testing and provider data as of October 2025. Actual results and pricing may vary.
Hetzner Cloud Setup (Recommended)
Based on 18 months of production use, here's my battle-tested Hetzner setup:
Server Provisioning
# Create server via Hetzner Cloud CLI
hcloud server create --type cpx21 --image ubuntu-22.04 --name n8n-prod --ssh-key my-key
# Or use the web interface:
# Location: Nuremberg (eu-central)
# Image: Ubuntu 22.04
# Type: CPX21 (2 vCPU, 4GB RAM, 40GB SSD)
# Networking: IPv4 + IPv6
Initial Server Security
# SSH into your server
ssh root@your-server-ip
# Update system
apt update && apt upgrade -y
# Create non-root user
adduser n8nuser
usermod -aG sudo n8nuser
# The docker group doesn't exist yet; add n8nuser to it after installing Docker (next step):
# usermod -aG docker n8nuser
# Setup SSH key authentication
mkdir -p /home/n8nuser/.ssh
cp ~/.ssh/authorized_keys /home/n8nuser/.ssh/
chown -R n8nuser:n8nuser /home/n8nuser/.ssh
chmod 700 /home/n8nuser/.ssh
chmod 600 /home/n8nuser/.ssh/authorized_keys
# Configure firewall
ufw allow OpenSSH
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
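While you're at it, it's worth locking down SSH itself. A minimal sketch; make sure key-based login works in a second terminal before restarting the SSH service:
# Disable root login and password authentication
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart ssh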
Docker Installation & Optimization
# Install Docker with optimization
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Configure Docker daemon for performance
cat << EOF | sudo tee /etc/docker/daemon.json
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"default-ulimits": {
"nofile": {
"Name": "nofile",
"Hard": 64000,
"Soft": 64000
}
},
"storage-driver": "overlay2"
}
EOF
# Restart the daemon so the storage-driver and ulimit changes take effect
sudo systemctl restart docker
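A quick sanity check that the daemon actually picked up the new settings:
docker info --format 'storage driver: {{.Driver}}  log driver: {{.LoggingDriver}}'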
Advanced Nginx Configuration
For maximum performance, I recommend using Nginx instead of Traefik:
# /etc/nginx/sites-available/n8n
server {
    listen 80;
    server_name n8n.yourdomain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name n8n.yourdomain.com;

    # SSL Configuration
    ssl_certificate /etc/letsencrypt/live/n8n.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/n8n.yourdomain.com/privkey.pem;
    ssl_session_timeout 1d;
    ssl_session_cache shared:MozTLS:10m;
    ssl_session_tickets off;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers off;

    # HSTS
    add_header Strict-Transport-Security "max-age=63072000" always;

    # Security headers
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";

    # Proxy settings
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # WebSocket support
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # Timeouts
    proxy_connect_timeout 60s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;

    location / {
        proxy_pass http://127.0.0.1:5678;
    }

    # Webhook optimization
    location /webhook/ {
        proxy_pass http://127.0.0.1:5678;
        proxy_buffering off;
        proxy_request_buffering off;
    }
}
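The config above assumes the certificates already exist and that n8n's port 5678 is published to 127.0.0.1 on the host rather than routed through Traefik. The remaining steps look roughly like this; I use certbot's standalone mode to sidestep the ordering problem of nginx refusing to start before the certificate files exist:
# Install nginx and certbot
sudo apt install -y nginx certbot
# Issue the certificate before enabling the SSL site (port 80 must be free briefly)
sudo systemctl stop nginx
sudo certbot certonly --standalone -d n8n.yourdomain.com
# Enable the site, test the config, and start nginx
sudo ln -s /etc/nginx/sites-available/n8n /etc/nginx/sites-enabled/n8n
sudo nginx -t && sudo systemctl start nginx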
Security Best Practices
Security is critical for n8n deployments since they often handle sensitive data and API keys.
Environment Variable Security
Secure your environment configuration:
# Use Docker secrets for sensitive data
# Note: `docker secret create` requires Swarm mode (run `docker swarm init` once);
# with plain docker-compose you can use file-based secrets instead.
echo "your-encryption-key" | docker secret create n8n_encryption_key -
echo "your-db-password" | docker secret create postgres_password -
# Updated docker-compose.yml with secrets
version: '3.8'

services:
  n8n:
    image: n8nio/n8n:latest
    secrets:
      - n8n_encryption_key
      - postgres_password
    environment:
      - N8N_ENCRYPTION_KEY_FILE=/run/secrets/n8n_encryption_key
      - DB_POSTGRESDB_PASSWORD_FILE=/run/secrets/postgres_password

secrets:
  n8n_encryption_key:
    external: true
  postgres_password:
    external: true
Network Security
Implement network-level security:
# Create an isolated Docker network
docker network create n8n-network --driver bridge

# Add to docker-compose.yml
networks:
  n8n-network:
    external: true

services:
  n8n:
    networks:
      - n8n-network
  postgres:
    networks:
      - n8n-network
    # Only expose postgres to n8n, not externally:
    # remove any 'ports' section from the postgres service
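A quick way to verify the isolation worked as intended:
# Both containers should be attached to the isolated network
docker network inspect n8n-network --format '{{range .Containers}}{{.Name}} {{end}}'
# Postgres should show no published host ports
docker-compose ps postgres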
Backup and Disaster Recovery
Implement comprehensive backup strategies:
#!/bin/bash
# Automated backup script; run it from the compose project directory (e.g. ~/n8n-production)
BACKUP_DIR="/home/n8nuser/backups"
DATE=$(date +%Y%m%d_%H%M%S)

# Create backup directory
mkdir -p $BACKUP_DIR

# Database backup
docker-compose exec -T postgres pg_dump -U n8n n8n | gzip > $BACKUP_DIR/db_backup_$DATE.sql.gz

# n8n data backup
# Note: Compose prefixes volume names with the project name; check the real name
# with `docker volume ls` (e.g. n8n-production_n8n_data) and adjust if needed.
docker run --rm -v n8n_data:/data -v $BACKUP_DIR:/backup alpine tar czf /backup/n8n_data_$DATE.tar.gz -C /data .

# Keep only the last 7 days of backups
find $BACKUP_DIR -name "*.gz" -mtime +7 -delete

# Upload to cloud storage (optional)
# aws s3 cp $BACKUP_DIR s3://your-backup-bucket/n8n-backups/ --recursive
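Backups are only half of disaster recovery, so it's worth rehearsing the restore path too. A rough sketch that assumes a fresh, empty database and the backup file names produced by the script above:
#!/bin/bash
# Restore a specific backup, e.g.: ./restore.sh 20250101_030000
DATE=$1
BACKUP_DIR="/home/n8nuser/backups"
VOLUME=n8n_data   # check the real volume name with: docker volume ls

# Stop n8n while restoring
docker-compose stop n8n

# Restore the database dump
gunzip -c $BACKUP_DIR/db_backup_$DATE.sql.gz | docker-compose exec -T postgres psql -U n8n -d n8n

# Restore the n8n data volume
docker run --rm -v $VOLUME:/data -v $BACKUP_DIR:/backup alpine sh -c "cd /data && tar xzf /backup/n8n_data_$DATE.tar.gz"

# Bring n8n back up
docker-compose start n8n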
Performance Optimization
Optimize your n8n deployment for high-volume workflows:
Database Optimization
PostgreSQL tuning for n8n workloads:
# postgresql.conf optimizations
shared_buffers = 256MB
effective_cache_size = 1GB
maintenance_work_mem = 64MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
# Connections
max_connections = 200
shared_preload_libraries = 'pg_stat_statements'
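Since PostgreSQL runs in a container here, one way to apply these settings without baking a custom postgresql.conf into the image is to pass them as flags in docker-compose.yml. A sketch with a subset of the values above:
  postgres:
    image: postgres:15
    # Values mirror the postgresql.conf tuning above
    command: >
      postgres
      -c shared_buffers=256MB
      -c effective_cache_size=1GB
      -c max_connections=200
      -c shared_preload_libraries=pg_stat_statements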
Scaling Strategies
Scale n8n for enterprise workloads:
Scaling Approach | Complexity | Cost Impact | Performance Gain | Best For |
---|---|---|---|---|
Vertical Scaling | Low | Linear | Good | < 1000 executions/day |
Worker Nodes | Medium | Moderate | Excellent | 1000-10000 executions/day |
Queue + Redis | Medium | Low | Very Good | High concurrency |
Multiple Instances | High | High | Excellent | 10000+ executions/day |
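If you've defined a worker service as sketched earlier, horizontal scaling is mostly a one-liner (n8n-worker is the service name I assumed in that sketch):
# Run four worker containers pulling from the same Redis queue
docker-compose up -d --scale n8n-worker=4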
Monitoring and Alerting
Set up comprehensive monitoring for production n8n deployments:
Metrics Collection
Key metrics to monitor:
#!/bin/bash
# Custom monitoring script: /usr/local/bin/n8n-metrics.sh
# Check n8n health
n8n_status=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:5678/healthz)
# Database connections (trim whitespace from psql output)
db_connections=$(docker-compose exec -T postgres psql -U n8n -d n8n -t -c "SELECT count(*) FROM pg_stat_activity;" | tr -d '[:space:]')
# Disk usage
disk_usage=$(df -h /var/lib/docker | awk 'NR==2 {print $5}' | sed 's/%//')
# Memory usage
memory_usage=$(free | grep Mem | awk '{printf "%.2f", $3/$2 * 100.0}')
# Log to InfluxDB or similar
echo "n8n_status,host=$(hostname) value=${n8n_status}"
echo "db_connections,host=$(hostname) value=${db_connections}"
echo "disk_usage,host=$(hostname) value=${disk_usage}"
echo "memory_usage,host=$(hostname) value=${memory_usage}"
Alerting Configuration
Set up alerts for critical issues:
# Prometheus alerting rules
groups:
  - name: n8n
    rules:
      - alert: N8NDown
        expr: up{job="n8n"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "n8n instance is down"
      - alert: HighDiskUsage
        expr: disk_usage > 85
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Disk usage is above 85%"
      - alert: DatabaseConnectionsHigh
        expr: db_connections > 180
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "Database connections approaching limit"
Advanced Troubleshooting
Common issues and their solutions:
Performance Issues
Symptom | Likely Cause | Solution | Prevention |
---|---|---|---|
Slow workflow execution | Database bottleneck | Optimize queries, add indexes | Regular DB maintenance |
Memory leaks | Large data sets in memory | Enable binary data mode | Monitor memory usage |
Webhook timeouts | Network latency | Increase timeout values | Use CDN for assets |
Queue backlog | Insufficient workers | Add more worker nodes | Monitor queue depth |
Debugging Commands
# Check n8n logs
docker-compose logs -f n8n --tail=100
# Database connection test
docker-compose exec postgres psql -U n8n -d n8n -c "SELECT version();"
# Check resource usage (the container name may differ; list it with `docker ps`)
docker stats n8n-production_n8n_1
# Analyze slow queries (requires the pg_stat_statements extension:
#   CREATE EXTENSION IF NOT EXISTS pg_stat_statements;)
docker-compose exec postgres psql -U n8n -d n8n -c "SELECT query, mean_exec_time FROM pg_stat_statements ORDER BY mean_exec_time DESC LIMIT 10;"
# Check disk space by container
docker system df
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"
Ready to Start Your n8n Journey?
You now have all the technical knowledge needed to deploy n8n successfully. The choice between self-hosting and managed solutions depends on your team's expertise and time constraints. What matters most is taking that first step toward automating your workflows.
Get Started with AutomateSpot
By: Lahoucine Taqi, Full-Stack Developer