personal-presence-os

Going Live

  • shipped
  • deployment
  • infrastructure

The push to ship

When I started building this engine, I wanted it live immediately. Not "deployed to a staging environment" live — actually serving real pages on real domains. The instinct came from watching builders like Levelsio, who treats shipping as a reflex. The idea isn't to wait until everything is polished. It's to get the feedback loop running as fast as possible: build, push, see it break, fix it, push again.

That urgency shaped every decision in this deployment.

Recognising the Levelsio instinct

There's a pattern among indie builders who ship fast: they don't separate "building" from "deploying." The deploy is part of the build. Levelsio runs production on a single VPS, keeps the stack minimal, avoids infrastructure complexity that doesn't serve the product. It's not laziness — it's discipline. Every layer of abstraction you add is a layer you maintain.

personal-presence-os follows the same instinct. No container orchestration. No CI/CD pipeline (yet — just a webhook). No managed hosting platform with its own opinions about how your app should work. A VPS, a process, a CDN in front. Done.

The cost analysis

Before choosing this stack, I evaluated AWS Amplify Hosting seriously. Here's how it compared:

                      Hetzner VPS + Cloudflare    AWS Amplify (3 apps)
Hosting               ~$4/month                   $0 (free tier)
DNS                   $0 (Cloudflare free)        $0–1/month (Route 53)
CDN / TLS             $0 (Cloudflare free)        $0 (included)
Domain registration   ~$2/month amortized         ~$2/month amortized
Total                 ~$6/month                   $2–3/month
Server ops            ~15 min/month               0
Architecture match    Perfect                     Compromised

The extra $3–4/month buys architectural coherence. Amplify would have required three separate apps — one per domain — because it serves a single baseDirectory per app and can't route by Host header. That breaks the engine's core design: one process, many domains, sites.yaml as the single source of truth.

Next.js on Amplify could have solved this with middleware-based Host routing, but that would mean adopting React, a bundler, and a framework — all explicitly forbidden by the project's principles. The juice wasn't worth the squeeze.

The stack

Getting this running required a few specific pieces:

Hetzner CX22 VPS — Ubuntu, $4/month. Hardened with SSH on a non-standard port (key-only, no root login), fail2ban for brute-force protection, UFW firewall allowing only SSH, HTTP, and HTTPS.
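
The hardening above amounts to a handful of config lines and commands. A sketch with placeholder values (2222 is not the actual port; the real one stays undisclosed):

```
# /etc/ssh/sshd_config (fragment)
Port 2222                       # non-standard port; placeholder value
PermitRootLogin no
PasswordAuthentication no       # key-only auth

# firewall + brute-force protection (run as root)
ufw default deny incoming
ufw allow 2222/tcp              # SSH on the chosen port
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
apt install fail2ban            # Ubuntu's default config includes an sshd jail
systemctl enable --now fail2ban
```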

Bun as runtime — The engine uses Bun for both building (bun run build prerenders all sites to dist/) and serving (engine/src/server.js runs a Hono HTTP server). One quirk: Bun auto-detects export default on Hono apps and tries to call Bun.serve() itself, conflicting with the @hono/node-server listener. Fixed by switching to a named export (export { app }).
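
The fix, sketched (not the actual engine/src/server.js; the route and port are illustrative):

```javascript
import { Hono } from "hono";
import { serve } from "@hono/node-server";

const app = new Hono();
app.get("/", (c) => c.text("hello"));

// With `export default app`, Bun sees a default export carrying a fetch
// handler and calls Bun.serve() on it automatically, fighting the
// @hono/node-server listener below. A named export opts out of that.
export { app };

serve({ fetch: app.fetch, port: 80 });
```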

Cloudflare as the TLS and CDN layer — Free tier. DNS managed in Cloudflare, A records pointing to the VPS IP with proxy enabled (orange cloud). SSL mode set to Flexible — Cloudflare terminates HTTPS at the edge and connects to the origin over HTTP on port 80. The *.kda.zone wildcard certificate covers all subdomains automatically, so lego-submarine.kda.zone worked without any extra config.

systemd for process management — The Hono server runs as a systemd service with AmbientCapabilities=CAP_NET_BIND_SERVICE, which lets a non-root user bind to port 80. Auto-restarts on crash, starts on boot.
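
A unit file along these lines (service name, user, and paths are illustrative, not the project's actual values):

```
# /etc/systemd/system/presence.service  (illustrative name and paths)
[Unit]
Description=personal-presence-os server
After=network.target

[Service]
User=deploy
WorkingDirectory=/home/deploy/personal-presence-os
ExecStart=/usr/local/bin/bun engine/src/server.js
Restart=always
RestartSec=2
# lets the non-root User bind to port 80
AmbientCapabilities=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target
```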

GitHub webhook for deploys — A lightweight webhook listener on port 9000 receives push events from GitHub, pulls the latest code, runs bun run build, and restarts the service. No CI pipeline, no build minutes, no third-party dependency.
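
Whatever shape the listener takes, it should verify GitHub's HMAC signature before pulling and rebuilding. A minimal sketch of that check using only node:crypto (the function name is mine, not the project's):

```javascript
import { createHmac, timingSafeEqual } from "node:crypto";

// GitHub sends X-Hub-Signature-256: "sha256=<hex HMAC-SHA256 of the raw body>".
function verifySignature(secret, rawBody, signatureHeader) {
  const expected =
    "sha256=" + createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader ?? "");
  // timingSafeEqual throws on length mismatch, so check lengths first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Only requests that pass this check should trigger the pull, the build, and the service restart.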

What's live

The engine now serves its sites from the VPS behind Cloudflare, kda.zone subdomains included. wardleymaps.com runs on a separate stack and wasn't part of this deployment.

What's next

The webhook auto-deploy needs testing under real conditions. The DNS sync script — reading sites.yaml and creating Cloudflare A records via API — is designed but not built yet. And the engine itself still needs work: search indexing, web components for interactive content, and more sites.
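
The pure half of that DNS sync script is easy to sketch: mapping the domain list from sites.yaml into the record payloads Cloudflare's v4 dns_records endpoint accepts. The IP below and the way the list is read are assumptions, not the eventual implementation:

```javascript
const SERVER_IP = "203.0.113.10"; // placeholder VPS address (TEST-NET range)

// One proxied A record per domain, in the shape the Cloudflare v4
// dns_records endpoint accepts.
function recordsFor(domains) {
  return domains.map((name) => ({
    type: "A",
    name,
    content: SERVER_IP,
    proxied: true, // orange cloud: Cloudflare terminates TLS at the edge
  }));
}

// Each payload would then be POSTed to
// https://api.cloudflare.com/client/v4/zones/<zoneId>/dns_records
// with an Authorization: Bearer <token> header.
```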

But it's live. That's the part that matters.