# Kubernetes Deployment

This guide covers deploying AutoCom to Kubernetes with production-grade configurations, including high availability and autoscaling.
## Prerequisites

- Kubernetes 1.28+
- kubectl configured
- Docker images built
- metrics-server installed (for HPA)
## Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                     Kubernetes Cluster                      │
│                                                             │
│  ┌─────────────┐   ┌─────────────┐   ┌─────────────┐        │
│  │    Nginx    │   │  Frontend   │   │    Docs     │        │
│  │ LoadBalancer│   │ LoadBalancer│   │ LoadBalancer│        │
│  │     :80     │   │     :80     │   │     :80     │        │
│  └──────┬──────┘   └─────────────┘   └─────────────┘        │
│         │                                                   │
│         ▼                                                   │
│  ┌─────────────┐   ┌─────────────┐                          │
│  │     API     │   │   Horizon   │                          │
│  │  (2+ pods)  │   │   (1 pod)   │                          │
│  │    :9000    │   │    Queue    │                          │
│  └──────┬──────┘   └──────┬──────┘                          │
│         │                 │                                 │
│         ▼                 ▼                                 │
│  ┌─────────────┐   ┌─────────────┐                          │
│  │ PostgreSQL  │   │    Redis    │                          │
│  │ StatefulSet │   │ StatefulSet │                          │
│  │    :5432    │   │    :6379    │                          │
│  └─────────────┘   └─────────────┘                          │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```
## Manifest Structure

```
k8s/dev/
├── namespace.yaml     # Namespace definition
├── configmap.yaml     # ConfigMaps for app and nginx
├── secrets.yaml       # Secrets (credentials)
├── storage.yaml       # PersistentVolumeClaims
├── postgres.yaml      # PostgreSQL StatefulSet
├── redis.yaml         # Redis StatefulSet
├── api.yaml           # API Deployment + Service
├── nginx.yaml         # Nginx Deployment + LoadBalancer
├── frontend.yaml      # Frontend Deployment + LoadBalancer
├── docs.yaml          # Docs Deployment + LoadBalancer
├── horizon.yaml       # Horizon Deployment
├── hpa.yaml           # HorizontalPodAutoscalers
├── pdb.yaml           # PodDisruptionBudgets
├── ingress.yaml       # Ingress rules
├── migrations.yaml    # Database migration Job
└── deploy.sh          # Deployment script
```
## Quick Start

```bash
# Navigate to the k8s directory
cd k8s/dev

# Deploy everything
./deploy.sh deploy

# Check status
./deploy.sh status

# View logs
./deploy.sh logs api

# Destroy everything
./deploy.sh destroy
```
## Core Components

### Namespace

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: autocom
  labels:
    app.kubernetes.io/name: autocom
```
### ConfigMap

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: autocom-config
  namespace: autocom
data:
  APP_NAME: "AutoCom"
  APP_ENV: "production"
  APP_DEBUG: "false"
  DB_CONNECTION: "pgsql"
  DB_HOST: "postgres"
  DB_PORT: "5432"
  REDIS_HOST: "redis"
  CACHE_DRIVER: "redis"
  QUEUE_CONNECTION: "redis"
```
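Workloads pick these values up as environment variables via `envFrom` (the migration Job later in this guide does exactly this); a minimal container-spec fragment:

```yaml
# Container spec fragment: inject all ConfigMap keys (and Secret keys) as env vars
envFrom:
  - configMapRef:
      name: autocom-config
  - secretRef:
      name: autocom-secrets
```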
### Secrets

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: autocom-secrets
  namespace: autocom
type: Opaque
stringData:
  APP_KEY: "base64:your-app-key-here"
  DB_USERNAME: "autocom"
  DB_PASSWORD: "your-secure-password"
  REDIS_PASSWORD: "your-redis-password"
```
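The `APP_KEY` placeholder must be replaced with a real Laravel application key before applying. One way to generate one locally (a sketch using coreutils; `php artisan key:generate --show` works too):

```shell
# Generate a Laravel-style APP_KEY: "base64:" prefix + 32 random bytes, base64-encoded
APP_KEY="base64:$(head -c 32 /dev/urandom | base64 | tr -d '\n')"
echo "$APP_KEY"
```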
## API Deployment

The API deployment includes:

- Init containers for dependency checks
- Health probes (startup, liveness, readiness)
- Resource limits for autoscaling
- Graceful shutdown handling

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: autocom
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app.kubernetes.io/component: api
  template:
    metadata:
      labels:
        app.kubernetes.io/component: api
    spec:
      terminationGracePeriodSeconds: 60
      initContainers:
        - name: wait-for-postgres
          image: postgres:16-alpine
          command: ["sh", "-c", "until pg_isready -h postgres; do sleep 2; done"]
        - name: wait-for-redis
          image: redis:7-alpine
          env:
            - name: REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: autocom-secrets
                  key: REDIS_PASSWORD
          command: ["sh", "-c", "until redis-cli -h redis -a $REDIS_PASSWORD ping; do sleep 2; done"]
      containers:
        - name: api
          image: autocom-app:latest
          ports:
            - containerPort: 9000
          resources:
            requests:
              cpu: 200m
              memory: 384Mi
            limits:
              cpu: 1000m
              memory: 768Mi
          startupProbe:
            tcpSocket:
              port: 9000
            failureThreshold: 30
            periodSeconds: 5
          livenessProbe:
            exec:
              command: ["/usr/local/bin/health-check.sh"]
            periodSeconds: 15
          readinessProbe:
            tcpSocket:
              port: 9000
            periodSeconds: 5
```
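The CPU request above is what the HorizontalPodAutoscaler scales against. A sketch of what the `hpa.yaml` listed in the manifest structure might contain (the replica bounds and target utilization are assumptions, not values from this repository):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
  namespace: autocom
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # percent of the CPU *request* (200m), not the limit
```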
## Nginx Configuration

High-performance nginx configuration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: autocom
data:
  nginx.conf: |
    worker_processes auto;
    worker_rlimit_nofile 65535;

    events {
        worker_connections 4096;
        use epoll;
        multi_accept on;
    }

    http {
        # Performance optimizations
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 30;
        keepalive_requests 1000;

        # Gzip compression
        gzip on;
        gzip_comp_level 5;
        gzip_types text/plain application/json application/javascript;

        # Rate limiting
        limit_req_zone $binary_remote_addr zone=api_limit:10m rate=100r/s;
        limit_req_zone $binary_remote_addr zone=login_limit:10m rate=5r/s;

        # Upstream with keepalive
        upstream api_backend {
            server api:9000 max_fails=3 fail_timeout=30s;
            keepalive 32;
        }

        server {
            listen 80;

            # Security headers
            add_header X-Frame-Options "SAMEORIGIN" always;
            add_header X-Content-Type-Options "nosniff" always;
            add_header X-XSS-Protection "1; mode=block" always;

            location / {
                limit_req zone=api_limit burst=30 nodelay;
                try_files $uri $uri/ /index.php?$query_string;
            }

            location ~ \.php$ {
                include fastcgi_params;
                # SCRIPT_FILENAME must point at the app path inside the api
                # container; adjust to the image's actual docroot
                fastcgi_param SCRIPT_FILENAME /var/www/html/public$fastcgi_script_name;
                fastcgi_pass api_backend;
                fastcgi_buffers 32 32k;
                fastcgi_buffer_size 64k;
                fastcgi_keep_conn on;
            }
        }
    }
```
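The `server api:9000` upstream entry resolves through a ClusterIP Service named `api`, defined in `api.yaml` alongside the Deployment. A sketch, assuming the API pods carry the `app.kubernetes.io/component: api` label that the PDB section later in this guide also selects on:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: autocom
spec:
  selector:
    app.kubernetes.io/component: api
  ports:
    - port: 9000
      targetPort: 9000
```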
## StatefulSets

### PostgreSQL

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: autocom
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: postgres
  template:
    metadata:
      labels:
        app.kubernetes.io/component: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16-alpine
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: "autocom"
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: autocom-secrets
                  key: DB_USERNAME
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: autocom-secrets
                  key: DB_PASSWORD
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: postgres-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```
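The `serviceName: postgres` field refers to a headless Service, which is what gives the pod the stable `postgres` DNS name that `DB_HOST` points at; a minimal sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: autocom
spec:
  clusterIP: None   # headless: DNS resolves directly to the pod
  selector:
    app.kubernetes.io/component: postgres
  ports:
    - port: 5432
```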
### Redis

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  namespace: autocom
spec:
  serviceName: redis
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: redis
  template:
    metadata:
      labels:
        app.kubernetes.io/component: redis
    spec:
      containers:
        - name: redis
          image: redis:7-alpine
          env:
            # Required for the $(REDIS_PASSWORD) expansion in the command below
            - name: REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: autocom-secrets
                  key: REDIS_PASSWORD
          command:
            - redis-server
            - --requirepass
            - $(REDIS_PASSWORD)
            - --appendonly
            - "yes"
          volumeMounts:
            - name: redis-data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: redis-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi   # example size; adjust to expected AOF growth
```
## Pod Disruption Budgets

Ensure high availability during maintenance:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api-pdb
  namespace: autocom
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: api
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: postgres-pdb
  namespace: autocom
spec:
  maxUnavailable: 0  # Never voluntarily disrupt the database
  selector:
    matchLabels:
      app.kubernetes.io/component: postgres
```
## Running Migrations

Migrations are run as a Kubernetes Job:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: migrations
  namespace: autocom
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrations
          image: autocom-app:latest
          command:
            - php
            - artisan
            - migrate
            - --force
          envFrom:
            - configMapRef:
                name: autocom-config
            - secretRef:
                name: autocom-secrets
```

Run it manually:

```bash
# Delete any existing job (job specs are immutable)
kubectl delete job migrations -n autocom --ignore-not-found

# Apply the job
kubectl apply -f migrations.yaml

# Watch logs
kubectl logs -f job/migrations -n autocom
```
## Accessing Services

### Port Forwarding

```bash
# API via nginx
kubectl port-forward svc/nginx -n autocom 8080:80

# Frontend
kubectl port-forward svc/frontend -n autocom 3000:80

# PostgreSQL (for debugging)
kubectl port-forward svc/postgres -n autocom 5432:5432
```

### LoadBalancer URLs

On Docker Desktop or cloud providers:

| Service  | URL                    |
|----------|------------------------|
| API      | http://localhost:80    |
| Frontend | http://localhost:31030 |
| Docs     | http://localhost:30425 |
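As an alternative to one LoadBalancer per service, the `ingress.yaml` listed in the manifest structure can route everything through a single entry point. A sketch, assuming an NGINX ingress controller is installed and using a hypothetical `autocom.local` host:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: autocom
  namespace: autocom
spec:
  ingressClassName: nginx
  rules:
    - host: autocom.local   # hypothetical host; replace with a real domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
```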
## Monitoring

### Check Pod Status

```bash
kubectl get pods -n autocom -o wide
```

### View Events

```bash
kubectl get events -n autocom --sort-by='.lastTimestamp'
```

### Describe Resources

```bash
kubectl describe deployment api -n autocom
kubectl describe pod <pod-name> -n autocom
```

### View Logs

```bash
# Follow logs from the deployment
kubectl logs -f deployment/api -n autocom

# Specific container
kubectl logs -f deployment/api -n autocom -c api

# Previous container (after a crash)
kubectl logs deployment/api -n autocom --previous
```
## Troubleshooting

### Pod Won't Start

```bash
# Check events
kubectl describe pod <pod-name> -n autocom

# Check init container logs
kubectl logs <pod-name> -n autocom -c wait-for-postgres
```

### Database Connection Issues

```bash
# Verify postgres is running
kubectl exec -n autocom postgres-0 -- pg_isready

# Test the connection from an API pod
kubectl exec -n autocom deployment/api -- php artisan db:monitor
```

### Redis Connection Issues

```bash
# Test Redis ping (expects REDIS_PASSWORD exported in your local shell)
kubectl exec -n autocom redis-0 -- redis-cli -a "$REDIS_PASSWORD" ping
```

### Restart Deployments

```bash
# Restart a single deployment
kubectl rollout restart deployment/api -n autocom

# Restart all deployments
kubectl rollout restart deployment -n autocom

# Watch rollout status
kubectl rollout status deployment/api -n autocom
```
## Next Steps

- Configure Autoscaling for automatic scaling
- PHP Optimization for performance tuning