# K8s Wildcard Subdomain Setup
Production tenants are routed through one wildcard DNS record, one wildcard TLS certificate, and one wildcard Ingress rule. That means onboarding a new tenant is a pure application-layer event (insert a Domain row) — no DNS provisioning, no cert renewal, no ingress update per tenant.
This page covers the one-time infra setup.
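For orientation, the per-tenant event the rest of this page enables is nothing more than a row insert. A minimal sketch, assuming the Tenant model exposes a Stancl-style `domains()` relation (the tenant ID and hostname here are illustrative):

```sh
kubectl exec -n autocom deploy/api -- php artisan tinker --execute='
  // Illustrative: attach a hostname to an existing tenant.
  // The domains() relation is assumed; match your app model.
  \App\Models\Tenant::find("acme")?->domains()->create(["domain" => "acme.autocom.app"]);
'
```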
## Prerequisites
- A k8s cluster running AutoCom (one of the `k8s/overlays/rings/*` overlays) with the ingress controller reachable at a stable IP or load-balancer DNS name.
- An ingress controller installed. Examples below assume `ingress-nginx`, but cert-manager works with any controller that supports `Ingress` resources.
- A domain you control (`autocom.app` in examples) with access to its DNS provider's API (Cloudflare, Route53, Google DNS, etc.).
- cert-manager installed in the cluster (`kubectl get crd | grep cert-manager.io` should show CRDs). A quick combined check follows this list.
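To confirm the cluster-side prerequisites in one pass (a sketch; it assumes the default `ingress-nginx` namespace and controller service name, so adjust if yours differ):

```sh
# cert-manager CRDs present?
kubectl get crd | grep cert-manager.io

# External address of the ingress controller (the Value for the DNS records below)
kubectl get svc -n ingress-nginx ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}{.status.loadBalancer.ingress[0].hostname}{"\n"}'
```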
## Step 1 — Wildcard DNS record
In your DNS provider, add:
```
Record: *.autocom.app
Type:   A (or CNAME if your LB returns a DNS name)
Value:  <ingress-controller-LB-IP>
TTL:    300
```
Also add a bare-domain record for the landlord / platform UI:
```
Record: autocom.app
Type:   A
Value:  <ingress-controller-LB-IP>
```
Verify propagation:
```sh
dig +short acme.autocom.app      # any subdomain → LB IP
dig +short whatever.autocom.app  # still LB IP (proof of wildcard)
```
From this point on, any new subdomain is already routable. You never touch DNS again when provisioning tenants.
## Step 2 — cert-manager ClusterIssuer with DNS-01
Wildcard certs require DNS-01 challenges (HTTP-01 doesn't work with wildcards). This example uses Cloudflare; adapt the solvers block to your DNS provider.
### 2a. Create the API token secret
In Cloudflare, create an API token with `Zone.DNS:Edit` on the `autocom.app` zone. Then:
```sh
kubectl create secret generic cloudflare-api-token \
  --namespace cert-manager \
  --from-literal=api-token=<your-token>
```
### 2b. ClusterIssuer
```yaml
# cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod-dns
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@autocom.app   # ← real, monitored address
    privateKeySecretRef:
      name: letsencrypt-prod-dns-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
        selector:
          dnsZones:
            - "autocom.app"
```
```sh
kubectl apply -f cluster-issuer.yaml
kubectl get clusterissuer letsencrypt-prod-dns
# READY should be True within ~30 seconds
```
### 2c. Certificate resource
In whichever namespace your ingress lives (e.g., `autocom` for the stable ring):
```yaml
# wildcard-cert.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: autocom-wildcard
  namespace: autocom
spec:
  secretName: autocom-wildcard-tls
  issuerRef:
    name: letsencrypt-prod-dns
    kind: ClusterIssuer
  commonName: autocom.app
  dnsNames:
    - "autocom.app"     # bare domain
    - "*.autocom.app"   # all subdomains
```
```sh
kubectl apply -f wildcard-cert.yaml

# Monitor issuance
kubectl get certificate -n autocom autocom-wildcard -w
# READY=True usually takes 1-3 min (DNS-01 propagation)

kubectl get secret -n autocom autocom-wildcard-tls
# NAME                   TYPE                DATA   AGE
# autocom-wildcard-tls   kubernetes.io/tls   2      2m
```
cert-manager auto-renews 30 days before expiry. No operator action needed.
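To see the dates cert-manager is working from (both fields appear on the Certificate status once issuance succeeds):

```sh
# notAfter = cert expiry; renewalTime = when cert-manager will re-issue
kubectl get certificate -n autocom autocom-wildcard \
  -o jsonpath='notAfter={.status.notAfter}{"\n"}renewalTime={.status.renewalTime}{"\n"}'
```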
## Step 3 — Wildcard ingress
A single Ingress resource with a `*.autocom.app` host rule:
```yaml
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: autocom
  namespace: autocom
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
    # Preserve the Host header — Stancl's domain-based tenancy reads it.
    nginx.ingress.kubernetes.io/upstream-vhost: "$host"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "autocom.app"
        - "*.autocom.app"
      secretName: autocom-wildcard-tls
  rules:
    # Bare domain → landlord / platform UI (served by nginx → frontend)
    - host: "autocom.app"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port: { number: 80 }
    # Any subdomain → tenant-scoped (same backend, Host header disambiguates)
    - host: "*.autocom.app"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port: { number: 80 }
```
```sh
kubectl apply -f ingress.yaml
kubectl describe ingress -n autocom autocom
# Expect: no events about cert errors; TLS block shows autocom-wildcard-tls
```
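You can also confirm the wildcard cert is the one actually served for an arbitrary subdomain (requires OpenSSL 1.1.1+ for `-ext`):

```sh
# SNI set to a subdomain; the presented cert should cover both names
openssl s_client -connect autocom.app:443 -servername acme.autocom.app </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -ext subjectAltName
# Expect: subject CN = autocom.app, SANs: autocom.app, *.autocom.app
```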
## Step 4 — Smoke-test the full chain
From outside the cluster:
```sh
# Existing landlord tenant (platform UI)
curl -sS -I https://autocom.app/                                # expect 200 or 302

# Existing provisioned tenant
curl -sS https://acme.autocom.app/api/v1/install/check          # expect JSON

# Non-existent tenant → backend returns 404 / tenant-not-found
curl -sS https://doesnotexist.autocom.app/api/v1/install/check  # expect 404 or tenant error
```
All three return via the same wildcard cert. The first two succeed; the third fails cleanly at the tenancy middleware (nothing in the domains table).
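If your local resolver hasn't picked up the wildcard record yet, you can pin resolution to the load balancer and still exercise the full TLS + routing path:

```sh
# --resolve bypasses DNS but keeps SNI and Host intact; LB IP as in step 1
curl -sS -I --resolve acme.autocom.app:443:<ingress-controller-LB-IP> https://acme.autocom.app/
```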
## Step 5 — Onboard a new tenant
With the infra above in place, onboarding is now application-layer only:
```sh
curl -sS -X POST https://autocom.app/api/v1/reseller-register/apply \
  -H "Content-Type: application/json" \
  -d '{
    "referral_code": "WELCOME-2026",
    "name": "Jane",
    "email": "jane@example.com",
    "password": "...",
    "password_confirmation": "...",
    "business_name": "Jane'\''s Shop",
    "preferred_domain": "janes-shop"
  }'
```
After admin approval, `janes-shop.autocom.app` is live. No DNS change, no cert change, no ingress change.
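A quick check once approval completes (hostname from the example above):

```sh
curl -sS -I https://janes-shop.autocom.app/   # expect 200 or 302, under the same wildcard cert
```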
## Other DNS providers
The `solvers` block in the ClusterIssuer changes per provider. Commonly used:
| Provider | cert-manager provider | Docs |
|---|---|---|
| Cloudflare | `cloudflare` | https://cert-manager.io/docs/configuration/acme/dns01/cloudflare/ |
| AWS Route 53 | `route53` | https://cert-manager.io/docs/configuration/acme/dns01/route53/ |
| Google Cloud DNS | `clouddns` | https://cert-manager.io/docs/configuration/acme/dns01/google/ |
| DigitalOcean | `digitalocean` | https://cert-manager.io/docs/configuration/acme/dns01/digitalocean/ |
| Webhook (anything else) | `webhook` | https://cert-manager.io/docs/configuration/acme/dns01/webhook/ |
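As a concrete example, here is a Route 53 variant pointed at the Let's Encrypt staging server (a sketch: the issuer name, secret name, key names, and region are illustrative; see the linked docs for the IAM permissions the credentials need):

```sh
kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging-route53   # illustrative name
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: ops@autocom.app
    privateKeySecretRef:
      name: letsencrypt-staging-route53-key
    solvers:
      - dns01:
          route53:
            region: us-east-1
            accessKeyIDSecretRef:          # secret name/keys are illustrative
              name: route53-credentials
              key: access-key-id
            secretAccessKeySecretRef:
              name: route53-credentials
              key: secret-access-key
        selector:
          dnsZones:
            - "autocom.app"
EOF
```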
## Troubleshooting
### Cert stays `READY=False` for more than 5 minutes
```sh
kubectl describe certificate -n autocom autocom-wildcard
kubectl describe challenge -n autocom
kubectl describe order -n autocom
kubectl logs -n cert-manager -l app=cert-manager --tail=200
```
Most common causes:
- DNS-01 TXT record not propagating (Cloudflare proxy on? Wait 60s; a direct check follows this list)
- API token missing the `Zone.DNS:Edit` scope
- Rate-limited by Let's Encrypt (5 failed validations per account, per hostname, per hour — use their staging server first)
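To watch the challenge record directly while the order is pending (the apex and wildcard names both validate against the same `_acme-challenge` label, so up to two TXT values can appear):

```sh
dig +short TXT _acme-challenge.autocom.app
```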
### Subdomain resolves but returns landlord content
Means the tenant's Domain row doesn't exist. Check:
```sh
# From a working api pod
kubectl exec -n autocom deploy/api -- php artisan tinker --execute='
  echo \App\Models\Tenant::find("new-tenant-id")?->name ?? "not found";
'
```
If the tenant exists but the subdomain doesn't resolve to it, the Domain row is missing — not the tenant itself.
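A direct check for the Domain row (a sketch; `\App\Models\Domain` is an assumption, since Stancl ships its own Domain model by default, so match whatever class your app actually uses):

```sh
kubectl exec -n autocom deploy/api -- php artisan tinker --execute='
  // \App\Models\Domain is hypothetical here; swap in your Domain model class
  echo \App\Models\Domain::where("domain", "new-tenant.autocom.app")->exists()
      ? "domain row present" : "domain row missing";
'
```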
### Host header stripped at ingress
Rare, but some custom ingress controllers drop or rewrite the Host header. Verify:
```sh
kubectl logs -n autocom deploy/api --tail=100 | grep "Host:"
```
If the Host header is the service name (`nginx`) instead of the external domain, add `nginx.ingress.kubernetes.io/upstream-vhost: "$host"` to the Ingress annotations.
## What this doesn't cover (yet)
- Custom domains (`app.bigcorp.com` → tenant `bigcorp`). Needs per-customer cert issuance and a separate Ingress rule. Not wired in.
- Regional routing (`acme.us.autocom.app` vs `acme.eu.autocom.app` serving different data regions). Needs a regional prefix in the hostname parsing — not implemented.
- Multiple rings sharing a wildcard cert. The cert is per-namespace right now. If you're running multi-ring production (stable + edge + canary as separate namespaces), each ring's namespace needs its own Certificate resource, or you need to share the secret across namespaces via the kubed / reflector operator.