Mermaid diagrams in Markdown, without handing the page to a CDN
What shipped
Any fenced code block tagged mermaid in a Markdown file now renders as a diagram in the browser. Flowcharts, sequence diagrams, state machines — whatever Mermaid's syntax supports. The worked example sits in the Agency log entry on negative loops, which leans on five small flowcharts to carry structure that prose alone would blur.
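The authoring side is a standard fenced block tagged mermaid, exactly as it appears in the Markdown source (the node labels here are illustrative, not from the negative-loops entry):

````markdown
```mermaid
flowchart LR
    A[Write once] --> B[Engine renders]
    B --> C[Every consumer reads]
```
````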
The full mechanism is documented as a docs page. This entry is the story of the trade-offs behind it, not the manual.
The constraint that shaped the design
Mermaid is a client-side renderer — it ships as JavaScript that finds <pre class="mermaid"> blocks in the DOM and replaces each one with an inline SVG. That works fine for a human on a modern browser. It fails quietly for the three other audiences the engine has to serve:
- LLM crawlers. After Mermaid runs, the original diagram source is gone from the DOM — replaced by a tree of SVG path elements. A bot that scrapes the rendered page sees opaque vector graphics where there used to be readable text. The whole "write once, let every consumer understand it" principle collapses.
- Readers without JavaScript. Privacy extensions, hardened browsers, reader mode: any of these strips the script and leaves a blank container where the diagram should be.
- Search crawlers and social preview generators that execute partial or no JavaScript. Same problem, different bot.
The fix is to write every diagram three times into a single container. One layer is what Mermaid operates on. One is a <noscript> fallback with the raw source. And one is a visually hidden <div> that keeps the source in the DOM after Mermaid replaces the visible <pre> with an SVG — so automated consumers still find readable text on the rendered page, not just vector paths. Three layers, one source of truth per diagram. The Markdown author writes the block once; the engine handles the rest.
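The engine's actual templating isn't shown here, but the triple-layer container can be sketched as a small helper that takes the raw diagram source as a string (the function names and the `visually-hidden` class are illustrative; only `class="mermaid"` and `<noscript>` come from the description above):

```javascript
// Escape the diagram source so it is safe to embed as HTML text.
function escapeHtml(s) {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

// Wrap one Mermaid source block in the three-layer container:
// layer 1: the <pre class="mermaid"> that Mermaid's runtime replaces with an SVG;
// layer 2: a <noscript> fallback showing the raw source when JS never runs;
// layer 3: a visually hidden copy that survives Mermaid's DOM replacement,
//          so crawlers scraping the rendered page still find readable text.
function tripleLayerContainer(source) {
  const escaped = escapeHtml(source);
  return [
    '<div class="diagram">',
    `  <pre class="mermaid">${escaped}</pre>`,
    `  <noscript><pre>${escaped}</pre></noscript>`,
    `  <div class="visually-hidden" aria-hidden="true"><pre>${escaped}</pre></div>`,
    "</div>",
  ].join("\n");
}
```

One source string in, three copies out, and only the first is ever touched by the renderer.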
The CDN trap
The first version pulled Mermaid from jsDelivr via a dynamic import(). Small, cheap, no build-step impact, and it worked on every browser I opened. It then failed in an automated build environment where outbound requests to the CDN were blocked, which meant a deploy could succeed with broken diagrams and nobody would notice until a human loaded the page.
That is the kind of dependency that compounds badly. A CDN outage, a firewall rule, a future CSP tightening — any one of them turns a diagram into a blank box, on someone else's schedule. The engine's whole point is that nothing about the runtime depends on a third party staying up or staying reachable.
So Mermaid is now vendored: engine/vendor/mermaid.esm.min.js committed to the repo, served from the engine's own /vendor/:filename route, copied into every site's dist/<domain>/vendor/ at build time. The npm mermaid dependency in package.json exists only so the bundle can be regenerated from a known version — runtime never reaches into node_modules. Deterministic builds, no external runtime dependency, no CSP exception.
Lazy, but only where it matters
Mermaid is not small. Dropping it on every page would be a tax paid by readers who never see a diagram — the log index, the about page, the decision records. So the layout checks the rendered body for class="mermaid" before deciding whether to emit the loader script at all. Pages without diagrams don't carry even a single byte of Mermaid.
Pages that do have diagrams ship a tiny inline module script that dynamically imports the full bundle only after confirming there are blocks to render. The import is cached, so a second diagram on the same page costs a DOM query, not another network round trip.
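Under the assumptions stated above (the bundle path comes from the vendoring section; the selector and Mermaid calls follow Mermaid v10's ESM API and should be checked against the vendored version), the emitted loader could look roughly like:

```html
<!-- Emitted only when the rendered body contains class="mermaid". -->
<script type="module">
  const blocks = document.querySelectorAll("pre.mermaid");
  if (blocks.length > 0) {
    // Dynamic import: the full bundle is fetched once and cached by the
    // module loader, so additional diagrams cost only the DOM query above.
    const { default: mermaid } = await import("/vendor/mermaid.esm.min.js");
    mermaid.initialize({ startOnLoad: false });
    await mermaid.run({ nodes: blocks });
  }
</script>
```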
What this is good for — and what it isn't
Mermaid earns its keep when the shape of a relationship is the thing worth communicating: a feedback loop, a sequence of messages, a decision tree. Prose describing "A goes to B, which conditions C" is heavier than a six-node flowchart that shows the same thing.
It is the wrong tool for precise architecture diagrams (use a hand-drawn SVG), for anything with more than about fifteen nodes (the auto-layout stops helping), and for visuals that need to survive outside the site — cross-posted copies on Substack or LinkedIn will show the raw source unless the target platform also runs Mermaid. Canonical diagrams stay on the domain; platform copies link back, per the distribution strategy.
Why three layers, once more, with feeling
The durable bit of this change is the triple-layer container, not the Mermaid integration. Any future client-side renderer — KaTeX for math, a custom chart component, whatever — will face the same problem: the browser turns source into pixels, and every non-browser consumer loses the source in the process. Keeping the raw source in a visually hidden element alongside the rendered output is a small pattern that preserves the engine's core promise across all of them: write once, let every consumer — human, search engine, LLM — read the same thing.