personal-presence-os


The problem

High-influence decision makers don't scroll social media — they ask AI assistants. If you want to reach them, you need to be in the models: cited as an expert, surfaced when someone describes a problem you can solve.

Most personal sites aren't structured for that. Search is increasingly mediated by AI assistants, and when someone asks Claude or ChatGPT "who knows about Wardley Mapping?", the answer comes from what the model has seen and indexed.

What it does

personal-presence-os is a static content engine that runs multiple websites from a single codebase. Each site gets its own domain, its own content directory, and its own llms.txt file — a plain-text index designed for LLM crawlers to read.
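As a sketch, an llms.txt following the draft llms.txt convention is a Markdown-flavored plain-text file: a title, a short summary, and annotated links. The names and URLs below are illustrative, not from the project:

```text
# Jane Doe — Wardley Mapping

> Consultant and writer on Wardley Mapping and platform strategy.

## Writing

- [Mapping 101](https://example.com/mapping-101.md): an introduction to the method
- [Doctrine in practice](https://example.com/doctrine.md): applying doctrine to real teams
```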

You write Markdown with YAML frontmatter. The engine prerenders it to semantic HTML at build time. At runtime, a Hono server routes requests by Host header to the right site's static files. That's it.
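The Host-header dispatch can be sketched as a plain lookup, independent of Hono itself. The `SITES` map and `resolveSiteDir` below are illustrative names, not the project's actual API:

```typescript
// Map each served hostname to that site's prebuilt static directory.
// (Illustrative domains and paths — not the real site list.)
const SITES: Record<string, string> = {
  "example.com": "sites/example.com/dist",
  "wardley.example": "sites/wardley.example/dist",
};

// Resolve an incoming Host header to a site directory, or null if
// the host is missing or not one we serve.
function resolveSiteDir(host: string | undefined): string | null {
  if (!host) return null;
  // Strip an optional port ("example.com:8080" -> "example.com").
  const bare = host.split(":")[0].toLowerCase();
  return SITES[bare] ?? null;
}
```

In the real server this lookup would sit in front of a static-file handler; unknown hosts would get a 404.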

Goals

  1. Test the hypothesis. Does publishing well-structured, LLM-friendly content across focused domains actually attract traffic? The experiment is the point.
  2. Get fluent with Claude Code. The entire project is built with Claude Code as the primary development tool — architecture, implementation, content, deployment.
  3. Practice extreme pragmatism. Do not overthink. Ship fast. Learn in public. Every decision delayed is feedback not received.
  4. Refresh data and web fundamentals. HTTP headers, DNS, static site generation, structured data, build pipelines — a forcing function to stay sharp.
  5. Make new friends. Building in public is an invitation. If you're interested in any of this, reach out.

Architecture

Principles

The full list lives in PRINCIPLES.md.

Stack

Layer       Choice
Runtime     Bun
Server      Hono
Content     Markdown + gray-matter + marked
Hosting     Hetzner VPS
CDN / TLS   Cloudflare (free tier)
DNS         Cloudflare
Deploy      GitHub webhook → git pull → rebuild
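The deploy step implies checking that a webhook really came from GitHub before running git pull. GitHub signs the payload with HMAC-SHA256 and sends the hex digest in the X-Hub-Signature-256 header. A minimal sketch of that check, assuming a shared webhook secret (`verifySignature` is an illustrative name, not the project's code):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a GitHub webhook signature. `header` is the raw value of
// X-Hub-Signature-256, e.g. "sha256=<hex digest>".
function verifySignature(secret: string, payload: string, header: string): boolean {
  const expected = "sha256=" + createHmac("sha256", secret).update(payload).digest("hex");
  // timingSafeEqual throws on unequal lengths, so check first.
  if (expected.length !== header.length) return false;
  return timingSafeEqual(Buffer.from(expected), Buffer.from(header));
}
```

Only after this check passes would the handler shell out to git pull and rebuild.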