<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Self-Hosting on Andrei Mahalean&#39;s Weblog</title>
    <link>https://blog.maha.nz/tags/self-hosting/</link>
    <description>Recent content in Self-Hosting on Andrei Mahalean&#39;s Weblog</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-US</language>
    <copyright>Andrei Mahalean (CC BY 4.0)</copyright>
    <lastBuildDate>Sun, 05 Apr 2026 00:00:00 +1300</lastBuildDate>
    <atom:link href="https://blog.maha.nz/tags/self-hosting/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Goodbye GitHub</title>
      <link>https://blog.maha.nz/posts/goodbye-github/</link>
      <pubDate>Sun, 05 Apr 2026 00:00:00 +1300</pubDate>
      <guid>https://blog.maha.nz/posts/goodbye-github/</guid>
      <description>&lt;p&gt;I migrated all my personal repositories off GitHub and onto a self-hosted &lt;a href=&#34;https://forgejo.org&#34;&gt;Forgejo&lt;/a&gt; instance running on my homelab NixOS machine. Here&amp;rsquo;s why.&lt;/p&gt;&#xA;&lt;h2 id=&#34;the-tipping-point&#34;&gt;The tipping point&lt;/h2&gt;&#xA;&lt;p&gt;GitHub used to be the obvious place to put code. It had a reputation that was genuinely earned. Good tooling, a huge community, reliable enough that you didn&amp;rsquo;t think about it.&lt;/p&gt;&#xA;&lt;p&gt;That reputation has taken a beating. GitHub has had multiple outages over the last year, the kind that block pushes and CI runs at the worst times. And then there&amp;rsquo;s the Copilot training on public repositories which I&amp;rsquo;m not a big fan of.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I migrated all my personal repositories off GitHub and onto a self-hosted <a href="https://forgejo.org">Forgejo</a> instance running on my homelab NixOS machine. Here&rsquo;s why.</p>
<h2 id="the-tipping-point">The tipping point</h2>
<p>GitHub used to be the obvious place to put code. It had a reputation that was genuinely earned. Good tooling, a huge community, reliable enough that you didn&rsquo;t think about it.</p>
<p>That reputation has taken a beating. GitHub has had multiple outages over the last year, the kind that block pushes and CI runs at the worst times. And then there&rsquo;s the Copilot training on public repositories which I&rsquo;m not a big fan of.</p>
<p>Neither of those things alone would have moved me. What changed the calculus was Forgejo. It&rsquo;s a mature, actively maintained Git hosting platform with a straightforward migration path. When the alternative is good enough, the reasons to stay somewhere that&rsquo;s quietly gotten worse start to feel thin.</p>
<p>So I moved.</p>
<h2 id="what-i-moved">What I moved</h2>
<p>All my personal repositories, public and private. The goal was to own my own stuff. It was also a useful forcing function to clean out old repos that had been sitting there doing nothing.</p>
<p>Forgejo is running on my NixOS homelab box. It&rsquo;s LAN-only for now. I&rsquo;m not exposing it publicly. That&rsquo;s a deliberate choice. If I want to share something, I&rsquo;ll figure that out when it comes up.</p>
<p>The GitHub repos are deleted.</p>
<h2 id="how-the-migration-went">How the migration went</h2>
<p>Easier than expected. Forgejo has a built-in migration tool that imports a GitHub repo including its history, issues, and pull requests. If you want to keep pushing to GitHub in parallel, it supports setting the old repo as a push mirror.</p>
<p>The only thing that needed reworking was CI. Forgejo Actions uses the same syntax as GitHub Actions, but with one immediate difference: action references need a full <code>https://</code> URL instead of the shorthand <code>owner/repo@sha</code> format. So <code>actions/checkout@abc123</code> becomes <code>https://github.com/actions/checkout@abc123</code>. Mechanical, but you have to do it.</p>
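<p>That rewrite is easy to script. A <code>sed</code> one-liner along these lines can do the bulk of it (the workflow path is an assumption; adjust it to wherever your YAML lives):</p>

```shell
# Prefix shorthand action references with the full GitHub URL, e.g.
# "uses: actions/checkout@abc123" -> "uses: https://github.com/actions/checkout@abc123".
# Lines that already contain a full URL are skipped, so re-running is safe.
sed -i -E '/uses:[[:space:]]*https?:\/\//!s|(uses:[[:space:]]*)([A-Za-z0-9_.-]+/[A-Za-z0-9_.-]+@)|\1https://github.com/\2|' .forgejo/workflows/*.yml
```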
<p>That said, I&rsquo;m using the migration as an opportunity to rethink CI entirely. Rather than stitching together marketplace actions, I&rsquo;m moving toward a flake-based Nix approach where the pipeline drops into a Nix dev shell and runs commands from there. The goal is that local and CI environments are identical, down to the commit hash of each package. The pre-commit workflow for <code>tf-infra</code> already looks like this:</p>





<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl">- <span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l">https://github.com/DeterminateSystems/nix-installer-action@ef8a148080ab6020fd15196c2084a2eea5ff2d25</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">with</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">extra-conf</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;sandbox = false&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl">- <span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="l">nix develop --command pre-commit run</span></span></span></code></pre></div><p>One less thing that only works on GitHub.</p>
<h2 id="was-it-worth-it">Was it worth it?</h2>
<p>Yes. Not because anything dramatic changed day-to-day. The repos are still there, git still works the same way. But there&rsquo;s something satisfying about knowing the authoritative copy of my code sits on hardware I own, in my house, running software I can inspect and modify.</p>
<p>It&rsquo;s part of the same impulse that got me into self-hosting everything else. I&rsquo;d rather manage the complexity myself than rent the illusion of simplicity from a platform that can change its terms, train on my data, or simply disappear.</p>
]]></content:encoded>
    </item>
    <item>
      <title>NixOS Part 1</title>
      <link>https://blog.maha.nz/posts/nixos-part-1/</link>
      <pubDate>Mon, 30 Mar 2026 00:00:00 +1300</pubDate>
      <guid>https://blog.maha.nz/posts/nixos-part-1/</guid>
      <description>&lt;h1 id=&#34;why-nixos&#34;&gt;Why NixOS?&lt;/h1&gt;&#xA;&lt;blockquote&gt;&#xA;&lt;p&gt;&lt;strong&gt;Series:&lt;/strong&gt; My NixOS Homelab: Part 1 of 10&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Traditional Linux servers accumulate changes over time until no one really knows what they&amp;rsquo;re running. NixOS fixes this by making your entire system configuration a file you can read, commit to git, and roll back at will. The learning curve is steep and the docs aren&amp;rsquo;t great, but it&amp;rsquo;s worth it. This series is the story of how I got here.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="why-nixos">Why NixOS?</h1>
<blockquote>
<p><strong>Series:</strong> My NixOS Homelab: Part 1 of 10</p>
<p><strong>TL;DR:</strong> Traditional Linux servers accumulate changes over time until no one really knows what they&rsquo;re running. NixOS fixes this by making your entire system configuration a file you can read, commit to git, and roll back at will. The learning curve is steep and the docs aren&rsquo;t great, but it&rsquo;s worth it. This series is the story of how I got here.</p>
</blockquote>
<hr>
<h2 id="how-i-got-here">How I Got Here</h2>
<p>My homelab went through a few iterations before landing on NixOS.</p>
<p>First it was the obvious thing: install Ubuntu, <code>apt install</code> everything, configure it by hand. It worked. For a while.</p>
<p>Then I moved to Docker Compose. Services were more portable, easier to version, easier to reproduce. Better, but the host OS was still a snowflake. Every tweak I&rsquo;d made to the underlying system was stored only in my memory and a pile of shell history.</p>
<p>Then I tried <a href="https://github.com/davestephens/ansible-nas">ansible-nas</a>, which at least acknowledged the problem. The idea was right: describe your homelab as code, run a playbook, get a configured system. But it still felt wrong. Ansible describes <em>actions</em> to take, not the <em>state</em> you want. Run the same playbook twice and you&rsquo;re hoping the idempotency checks hold. The underlying system could still drift between runs.</p>
<p>The moment that crystallised it: a disk died, and I had to rebuild from scratch using ansible-nas. The playbook ran. But it wasn&rsquo;t a complete picture of what I&rsquo;d had. Things needed fixing manually. I can&rsquo;t even remember exactly what. Which is precisely the problem. If you can&rsquo;t remember what your system needs to be correct, you don&rsquo;t have a declarative system. You have a script with amnesia.</p>
<p>There had to be a better way.</p>
<hr>
<h2 id="i-tried-nixos-before-it-didnt-stick">I Tried NixOS Before. It Didn&rsquo;t Stick.</h2>
<p>I&rsquo;d actually tried NixOS years earlier, on a laptop. It did not go well.</p>
<p>The learning curve was brutal. The documentation was (and honestly, still is) not great. Nix the language has a unique syntax that nothing prepares you for. And the whole Flakes situation (the feature that makes NixOS properly reproducible) was labelled &ldquo;experimental&rdquo;, which at the time made me nervous enough to avoid it. I gave up and went back to something that just worked.</p>
<p>What changed was a <a href="https://news.ycombinator.com/item?id=47479751">HN thread</a> where a bunch of people made the case that NixOS pairs particularly well with LLMs. The argument: because NixOS config is a declarative language with a massive, well-structured package repository, an LLM can actually reason about it (I am aware LLMs don&rsquo;t actually <em>think</em> but for the sake of the conversation, we&rsquo;ll go with that word). It can look up available options in nixpkgs, write correct module config, and catch mistakes in ways that are much harder with imperative shell scripts.</p>
<p>I decided to try again, this time leaning into that. The workflow I landed on: plan changes with Claude Opus (for the design thinking), implement with Claude Sonnet (for the actual Nix config), review the diff myself before applying. This is not a post about AI-assisted infrastructure, but it&rsquo;s worth being honest that the tooling is part of why it finally clicked.</p>
<h2 id="nixos-your-entire-system-is-a-file">NixOS: Your Entire System Is a File</h2>
<p>NixOS takes a radically different approach to the problem. Instead of running commands that mutate system state, you declare what your system should look like in a set of configuration files. Then you apply that configuration, and NixOS makes it so.</p>
<p>The entire state of my server (every installed package, every service, every systemd unit, every user, every open port) is described in a git repository. To understand what my server is running, I read the config. To change something, I edit a file and run <code>nixos-rebuild switch</code>. To undo that change, I run <code>nixos-rebuild switch --rollback</code> and I&rsquo;m back to exactly where I was before.</p>
<p>It sounds almost too good, so let me be clear: NixOS is not easy. The learning curve is real. The Nix language takes getting used to. The ecosystem has rough edges. There are times you&rsquo;ll spend an afternoon solving a problem that would&rsquo;ve taken five minutes on Ubuntu.</p>
<p>But once it clicks, it changes how you think about infrastructure. Going back to imperative config feels like giving up something important.</p>
<h2 id="the-mental-model-shift">The Mental Model Shift</h2>
<p>Before NixOS makes sense, you need to internalise one idea:</p>
<blockquote>
<p><strong>NixOS doesn&rsquo;t modify your system. It builds a new one and switches to it.</strong></p>
</blockquote>
<p>When you run <code>nixos-rebuild switch</code>, Nix evaluates your configuration, builds every package and config file from scratch in an immutable store (<code>/nix/store</code>), and then atomically switches the running system to that new generation. Your previous configuration still exists. It&rsquo;s still in the boot menu. You can switch back to it instantly.</p>
<p>This has a few profound consequences:</p>
<p><strong>There is no &ldquo;system state&rdquo; outside your config.</strong> Anything you set up by hand outside of your Nix config doesn&rsquo;t survive a rebuild. This sounds punishing at first. It&rsquo;s actually liberating. It forces you to commit changes to config rather than letting them drift into the void.</p>
<p><strong>Every package is content-addressed and isolated.</strong> Two packages can depend on different versions of a library, both installed simultaneously, with zero conflict. The infamous &ldquo;dependency hell&rdquo; problem is just&hellip; gone.</p>
<p><strong>Rollbacks are instantaneous.</strong> Because each <code>nixos-rebuild switch</code> produces a new system generation and the old one is kept, rolling back means rebooting and picking the previous entry from GRUB, or running <code>nixos-rebuild switch --rollback</code> without even rebooting.</p>
<p><strong>Reproducibility is built in.</strong> With Nix Flakes (which we&rsquo;ll use from day one in this series), every input to your system (nixpkgs itself, every community module, every tool) is pinned to an exact commit hash. Your <code>flake.lock</code> is a complete bill of materials for your system. Check it into git and you can rebuild the exact same system six months from now on entirely different hardware.</p>
<hr>
<h2 id="why-not-just-use-ansible">Why Not Just Use Ansible?</h2>
<p>Fair question. I tried that (via ansible-nas, specifically).</p>
<p><strong>Ansible</strong> brings idempotence and repeatability, but it&rsquo;s still fundamentally imperative. It describes <em>actions</em> to take, not the desired <em>state</em> of the system. Run the same playbook twice and you&rsquo;re hoping the authors wrote proper idempotency checks everywhere. The underlying system packages are still managed by the distro&rsquo;s package manager and can drift between runs. When I had to rebuild after a disk failure, ansible-nas got me close, but not all the way there.</p>
<p><strong>Docker Compose / Podman</strong> solves the &ldquo;which version of this app&rdquo; problem for containerised workloads, but your host system is still a snowflake. The kernel, system packages, networking configuration, secrets management. All of that lives outside the containers and can drift.</p>
<p><strong>NixOS</strong> manages everything: the host OS, the kernel, system services, user packages, dotfiles via Home Manager, and containers too if you want them. The whole stack.</p>
<hr>
<h2 id="what-were-building-meet-bunk">What We&rsquo;re Building: Meet <code>bunk</code></h2>
<p>Throughout this series, I&rsquo;ll be working with a real machine: a homelab server called <code>bunk</code>.</p>
<p><strong>Hardware:</strong></p>
<ul>
<li>AMD Ryzen CPU</li>
<li>Gigabyte GA-B450M-S2H motherboard</li>
<li>465 GB NVMe OS drive</li>
<li>4 × spinning HDDs in two ZFS mirror pools (~6 TB usable)</li>
</ul>
<p><strong>Services running (all declared in Nix config):</strong></p>
<table>
  <thead>
      <tr>
          <th>Category</th>
          <th>Services</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td><strong>Media</strong></td>
          <td>Jellyfin, Sonarr, Radarr, Prowlarr, Bazarr, NZBGet, Transmission</td>
      </tr>
      <tr>
          <td><strong>Photos</strong></td>
          <td>Immich (with ML face recognition)</td>
      </tr>
      <tr>
          <td><strong>Documents</strong></td>
          <td>Paperless-ngx</td>
      </tr>
      <tr>
          <td><strong>Books</strong></td>
          <td>Calibre-Web, Shelfmark</td>
      </tr>
      <tr>
          <td><strong>Finance</strong></td>
          <td>Actual Budget</td>
      </tr>
      <tr>
          <td><strong>Productivity</strong></td>
          <td>Vikunja, Mealie, n8n</td>
      </tr>
      <tr>
          <td><strong>Monitoring</strong></td>
          <td>Uptime Kuma, Scrutiny</td>
      </tr>
      <tr>
          <td><strong>Infrastructure</strong></td>
          <td>Caddy, Authelia, Podman, NFS, Restic, BorgBackup</td>
      </tr>
  </tbody>
</table>
<p>That&rsquo;s 30+ services, all declared in Nix, all backed up to two separate cloud providers. The system configuration lives in a git repository and rebuilding from scratch is one command.</p>
<p>One honest caveat: the <em>system</em> is declarative, but application-level configuration is not always. Some services can be configured entirely through NixOS module options. Some accept environment variables. Some require you to click through a setup wizard on first run and store their state in a database. I think of it in three tiers: NixOS module options where possible, environment variables for secrets and simple settings, and Click-Ops as a last resort. The Click-Ops stuff is backed up offsite and I can roll back the system around it, so I&rsquo;m comfortable with that trade-off. I revisit it once a year to see if anything can move up the chain.</p>
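<p>In Nix terms, the first two tiers look something like this (the service names are real NixOS modules, but the secret path is a placeholder for illustration):</p>

```nix
# Tier 1: the service has a proper NixOS module, so it is fully declarative.
services.uptime-kuma.enable = true;

# Tier 2: secrets and simple settings injected via an environment file;
# the file itself is provisioned outside the world-readable Nix store.
systemd.services.n8n.serviceConfig.EnvironmentFile = "/run/secrets/n8n.env";

# Tier 3 (Click-Ops) has no Nix representation: it lives in the app's own
# database and is only covered by backups.
```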
<p>Not all services support SSO either. Some won&rsquo;t let you disable their built-in auth at all, and others lack OIDC support entirely. For those, Authelia sits out and the app handles its own authentication. It&rsquo;s not perfectly uniform, but it works.</p>
<hr>
<h2 id="the-core-concepts-youll-need">The Core Concepts You&rsquo;ll Need</h2>
<p>Before diving into installation in Part 2, here are the key ideas that will come up repeatedly:</p>
<h3 id="nix-the-language">Nix (the language)</h3>
<p>A purely functional, lazily-evaluated language used to describe packages and configurations. It has an unusual syntax that takes some getting used to. You don&rsquo;t need to master it to be productive, but you do need to be comfortable reading it.</p>





<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-nix" data-lang="nix"><span class="line"><span class="cl"><span class="c1"># A simple NixOS service declaration</span>
</span></span><span class="line"><span class="cl"><span class="n">services</span><span class="o">.</span><span class="n">caddy</span> <span class="o">=</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">  <span class="n">enable</span> <span class="o">=</span> <span class="no">true</span><span class="p">;</span>
</span></span><span class="line"><span class="cl">  <span class="n">virtualHosts</span><span class="o">.</span><span class="s2">&#34;jellyfin.maha.nz&#34;</span><span class="o">.</span><span class="n">extraConfig</span> <span class="o">=</span> <span class="s1">&#39;&#39;
</span></span></span><span class="line"><span class="cl"><span class="s1">    reverse_proxy localhost:8096
</span></span></span><span class="line"><span class="cl"><span class="s1">  &#39;&#39;</span><span class="p">;</span>
</span></span><span class="line"><span class="cl"><span class="p">};</span></span></span></code></pre></div><h3 id="nixpkgs">Nixpkgs</h3>
<p>One of the largest package repositories in existence, with over 100,000 packages. It&rsquo;s also the source of NixOS module definitions: those <code>services.*</code> and <code>programs.*</code> options that let you configure system services declaratively.</p>
<h3 id="nix-flakes">Nix Flakes</h3>
<p>An experimental (but widely adopted) feature that pins all your Nix inputs to exact versions, making builds fully reproducible. We&rsquo;ll use flakes from day one. Think of <code>flake.nix</code> as <code>package.json</code> and <code>flake.lock</code> as <code>package-lock.json</code>, but for your entire operating system.</p>
<h3 id="home-manager">Home Manager</h3>
<p>A Nix-based tool for managing user-level configuration (dotfiles, user packages, shell config). We&rsquo;ll use it as a NixOS module so user and system config live in the same git repo.</p>
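<p>Wired in as a NixOS module, it looks roughly like this (the flake input plumbing is omitted, and the username is a placeholder):</p>

```nix
{ ... }:
{
  # Assumes home-manager.nixosModules.home-manager is already in the
  # module list via the flake inputs.
  home-manager.users.andrei = {
    home.stateVersion = "25.05";

    # User-level packages and dotfiles live here, next to the system config.
    programs.git.enable = true;
  };
}
```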
<h3 id="generations">Generations</h3>
<p>Each time you run <code>nixos-rebuild switch</code>, NixOS creates a new <strong>generation</strong> (a snapshot of your system configuration). You can list them with <code>nix-env --list-generations</code>, boot into any of them from the bootloader, and roll back instantly. Old generations are garbage-collected when you run <code>nix-collect-garbage</code>.</p>
<hr>
<h2 id="the-trade-offs-lets-be-honest">The Trade-offs (Let&rsquo;s Be Honest)</h2>
<p>NixOS is not for everyone. Here&rsquo;s what you&rsquo;re signing up for:</p>
<p><strong>The Nix language is genuinely weird.</strong> It&rsquo;s not Python or YAML or HCL. It&rsquo;s a functional language with lazy evaluation, and some concepts (like <code>lib.mkIf</code>, <code>lib.optionalAttrs</code>, overlays) took real effort for me to understand.</p>
<p><strong>Error messages can be cryptic.</strong> When your config fails to evaluate, the error output is sometimes helpful and sometimes looks like it was generated by a Turing machine having an existential crisis.</p>
<p><strong>The ecosystem moves fast.</strong> Nixpkgs is massive and well-maintained, but NixOS-specific modules vary in quality. Some services have beautifully designed modules with every option exposed. Others have bare-bones modules that still require manual workarounds.</p>
<p><strong>First-time setup is slower.</strong> Doing what takes ten minutes on Ubuntu might take an hour on NixOS the first time, as you figure out the right module option or why your service won&rsquo;t start.</p>
<p><strong>The payoff is real.</strong> After the initial investment, day-to-day operations become remarkably smooth. Adding a new service is adding a few lines to a config file. Upgrading the whole system is <code>nix flake update &amp;&amp; nixos-rebuild switch</code>. Breaking something is a five-second rollback.</p>
<hr>
<h2 id="whats-coming-in-this-series">What&rsquo;s Coming in This Series</h2>
<p>Here&rsquo;s the roadmap:</p>
<ol>
<li><strong>Part 1 (this post):</strong> Why NixOS?</li>
<li><strong>Part 2:</strong> Installing NixOS with Flakes and ZFS from day one</li>
<li><strong>Part 3:</strong> Structuring a config that doesn&rsquo;t fall apart</li>
<li><strong>Part 4:</strong> Secrets management with agenix (encrypted config in git)</li>
<li><strong>Part 5:</strong> Caddy + Authelia: SSO for every service</li>
<li><strong>Part 6:</strong> Self-hosting a media stack the NixOS way</li>
<li><strong>Part 7:</strong> Running containers when NixOS modules don&rsquo;t exist</li>
<li><strong>Part 8:</strong> Backups with Restic + BorgBackup</li>
<li><strong>Part 9:</strong> Monitoring, alerting, and knowing when things break</li>
<li><strong>Part 10:</strong> Deploying, updating, and rolling back without fear</li>
</ol>
<p>Each article is written to be useful on its own, but they build on each other. If you&rsquo;re starting from scratch, I&rsquo;d recommend reading in order. If you&rsquo;re already running NixOS and just want to know how I&rsquo;ve handled secrets or backups, jump ahead.</p>
<p>The config repo is private, but all the relevant examples will be in the posts themselves. I&rsquo;ll include real snippets throughout the series.</p>
<hr>
<h2 id="should-you-use-nixos">Should You Use NixOS?</h2>
<p>If any of these sound familiar, NixOS is probably worth your time:</p>
<ul>
<li>You&rsquo;ve ever broken a server and thought &ldquo;I have no idea how I set this up 2 years ago.&rdquo; (Maybe that says something about my appetite for documenting my own stuff)</li>
<li>You&rsquo;ve ever avoided upgrading a server because you were scared of what might break</li>
<li>You care about reproducibility and want your infrastructure to be code</li>
<li>You want to rebuild a machine from scratch in one command and have it come out identical</li>
<li>You enjoy learning fundamentally different approaches to old problems</li>
</ul>
<p>If you just want to run a few containers and don&rsquo;t want to invest in learning a new paradigm, stick with Docker Compose on Ubuntu. That&rsquo;s a completely valid choice.</p>
<p>A word on the popular alternatives. Kubernetes comes up a lot in homelab circles. I have no interest in running it at home. It&rsquo;s a single host, I don&rsquo;t need high availability, and I don&rsquo;t want to spend a Sunday afternoon debugging a pod networking issue when I just want to watch a movie. Proxmox I&rsquo;ve used before and it&rsquo;s genuinely good, but it doesn&rsquo;t solve the idempotency problem. You still end up with VMs and containers that drift over time. Same goes for TrueNAS and similar. I also prefer staying close to the metal. I&rsquo;m comfortable with Linux and I&rsquo;d rather own the whole stack than add an abstraction layer I have to understand on top of it.</p>
<p>But if you&rsquo;re ready to think about your homelab the way a software engineer thinks about code (versioned, tested, reviewable, rollbackable), NixOS is worth every frustrating moment of the learning curve.</p>
<p>See you in <a href="../nixos-part-2/">Part 2</a>, where we&rsquo;ll boot the installer and write our first <code>flake.nix</code>.</p>
<hr>
<h2 id="further-reading">Further Reading</h2>
<ul>
<li><a href="https://nixos.org/manual/nixos/stable/">NixOS Manual</a>: the authoritative reference</li>
<li><a href="https://nix.dev">nix.dev</a>: excellent learning-oriented guides</li>
<li><a href="https://zero-to-nix.com">Zero to Nix</a>: a modern introduction to Nix concepts</li>
<li><a href="https://search.nixos.org/packages">Nixpkgs Search</a>: find packages and NixOS options</li>
<li><a href="https://github.com/nix-community/awesome-nix">awesome-nix</a>: curated list of community resources</li>
<li><a href="https://leanpub.com/nixos-in-production">NixOS In Production</a>: The NixOS handbook for professional use</li>
</ul>
<hr>
<p><em>Written in March 2026. NixOS 25.11 (unstable channel at time of writing), Nix 2.24.</em></p>
]]></content:encoded>
    </item>
    <item>
      <title>Blog Setup</title>
      <link>https://blog.maha.nz/posts/blog-config/</link>
      <pubDate>Fri, 16 Aug 2024 00:00:00 +1200</pubDate>
      <guid>https://blog.maha.nz/posts/blog-config/</guid>
      <description>&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://blog.maha.nz/posts/blog-config/#infrastructure&#34;&gt;Infrastructure&lt;/a&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://blog.maha.nz/posts/blog-config/#infra-deployment&#34;&gt;Infra Deployment&lt;/a&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://blog.maha.nz/posts/blog-config/#authoring&#34;&gt;Authoring&lt;/a&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://blog.maha.nz/posts/blog-config/#content-deployment&#34;&gt;Content Deployment&lt;/a&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://blog.maha.nz/posts/blog-config/#local&#34;&gt;Local&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://blog.maha.nz/posts/blog-config/#github-workflow&#34;&gt;Github Workflow&lt;/a&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://blog.maha.nz/posts/blog-config/#analytics&#34;&gt;Analytics&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://blog.maha.nz/posts/blog-config/#monitoring&#34;&gt;Monitoring&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://blog.maha.nz/posts/blog-config/#optimizations&#34;&gt;Optimizations&lt;/a&gt;&lt;/li&gt;&#xA;&lt;li&gt;&lt;a href=&#34;https://blog.maha.nz/posts/blog-config/#future-improvements&#34;&gt;Future improvements&lt;/a&gt;&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;In this post, I explore the technology stack powering this blog, detailing the tools and processes used for both authoring content and deploying the site.&lt;/p&gt;&#xA;&lt;p&gt;Requirements which I have considered that led me to my final choice:&lt;/p&gt;&#xA;&lt;ul&gt;&#xA;&lt;li&gt;&lt;strong&gt;Simple&lt;/strong&gt;: I have a full time job and a family. The last thing I want to do is spend time troubleshooting a k8s cluster if my blog is down. 
I want to take complexity out of the equation and keep tech friction to a minimum, so I can focus on the content.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Cheap&lt;/strong&gt;: Predictable cost, no surprises. A static monthly fee is ideal.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Reproducible&lt;/strong&gt;: A side-effect of simplicity: if something is not working and I have spent more than 10 minutes troubleshooting it, I can blow the whole thing away and redeploy it easily.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Fast&lt;/strong&gt;: Minimalistic theme, fast build time, quick deployments. Keep it light (but in dark mode of course).&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Secure&lt;/strong&gt;: Use HTTPS, ensure HSTS and other security headers can be easily set.&lt;/li&gt;&#xA;&lt;li&gt;&lt;strong&gt;Governance&lt;/strong&gt;: I know I could just do this in GitHub Pages, or some S3/Azure Storage static hosting, but I want to have control over the webserver configuration.&lt;/li&gt;&#xA;&lt;/ul&gt;&#xA;&lt;p&gt;With this in mind, I have decided to host the blog on the smallest DigitalOcean (DO) droplet ($4 USD/month), which comes with 512MB of memory and a 10GB disk, in the Sydney region.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<ul>
<li><a href="/posts/blog-config/#infrastructure">Infrastructure</a>
<ul>
<li><a href="/posts/blog-config/#infra-deployment">Infra Deployment</a></li>
</ul>
</li>
<li><a href="/posts/blog-config/#authoring">Authoring</a>
<ul>
<li><a href="/posts/blog-config/#content-deployment">Content Deployment</a>
<ul>
<li><a href="/posts/blog-config/#local">Local</a></li>
<li><a href="/posts/blog-config/#github-workflow">Github Workflow</a></li>
</ul>
</li>
</ul>
</li>
<li><a href="/posts/blog-config/#analytics">Analytics</a></li>
<li><a href="/posts/blog-config/#monitoring">Monitoring</a></li>
<li><a href="/posts/blog-config/#optimizations">Optimizations</a></li>
<li><a href="/posts/blog-config/#future-improvements">Future improvements</a></li>
</ul>
<p>In this post, I explore the technology stack powering this blog, detailing the tools and processes used for both authoring content and deploying the site.</p>
<p>The requirements that led me to my final choice:</p>
<ul>
<li><strong>Simple</strong>: I have a full time job and a family. The last thing I want to do is spend time troubleshooting a k8s cluster if my blog is down. I want to take complexity out of the equation and keep tech friction to a minimum, so I can focus on the content.</li>
<li><strong>Cheap</strong>: Predictable cost, no surprises. A static monthly fee is ideal.</li>
<li><strong>Reproducible</strong>: A side-effect of simplicity: if something is not working and I have spent more than 10 minutes troubleshooting it, I can blow the whole thing away and redeploy it easily.</li>
<li><strong>Fast</strong>: Minimalistic theme, fast build time, quick deployments. Keep it light (but in dark mode of course).</li>
<li><strong>Secure</strong>: Use HTTPS, ensure HSTS and other security headers can be easily set.</li>
<li><strong>Governance</strong>: I know I could just do this in GitHub Pages, or some S3/Azure Storage static hosting, but I want to have control over the webserver configuration.</li>
</ul>
<p>With this in mind, I have decided to host the blog on the smallest DigitalOcean (DO) droplet ($4 USD/month), which comes with 512MB of memory and a 10GB disk, in the Sydney region.</p>
<h2 id="infrastructure">Infrastructure</h2>
<p>The nameservers for my domain were already hosted in DO, so I have configured the DNS hostname for the site <code>blog.maha.nz</code> to point to the reserved (floating) IP which is attached to the droplet. The DO firewall is allowing inbound ports for SSH and HTTP(S) and blocking everything else.</p>
<p><img src="/posts/blog-config/infra.png" alt="Infra"></p>
<p>Normally my pick of server OS would be the latest Ubuntu LTS, but I have been unhappy with Canonical forcing snaps down our throats, so I have decided to go back to good ol&rsquo; Debian, which will not let me down.</p>
<p>NGINX is usually my webserver of choice, but I had heard good things about Caddy and was willing to give it a try. Caddy has a very simple configuration file (the Caddyfile) and automatically sorts out the TLS certificate via Let&rsquo;s Encrypt. Nice and simple.
For this scenario, Caddy simply serves the <code>/var/www/html</code> folder, where all the Hugo static content is uploaded to.</p>
<p>For the time being, a very basic uptime monitor is configured in DigitalOcean, which automatically emails me when the site is down. I plan to expand on this later and add more checks, both for the OS &amp; the services.</p>
<p><img src="/posts/blog-config/uptime.png" alt="Uptime check"></p>
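<p>Functionally, that uptime check amounts to something like the shell sketch below. The helper names, URL, and timeout are my own assumptions; the real check is a managed DigitalOcean feature, not a script:</p>

```shell
#!/bin/sh
# Rough stand-in for the DigitalOcean uptime check (hypothetical helper
# names; the real check is a managed DO feature, not this script).

probe() {
  # Print only the HTTP status code; "000" when nothing answers at all.
  curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$1" || echo 000
}

alert_needed() {
  # Anything other than a 200 at the root counts as "site down".
  [ "$1" != "200" ]
}

# Example cron usage:
# alert_needed "$(probe https://blog.maha.nz/)" && echo "blog is down"
```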
<p>By now, you may be thinking <em>&ldquo;This sure sounds like a lot of manual configuration&rdquo;</em>. Let me reassure you: all of this is deployed via Terraform, and cloud-init user data then configures the OS and webserver to the point where Caddy serves an empty folder.</p>
<p>The Terraform code looks like this:</p>
<ul>
<li>First I create the cloud init configuration, which comes in two stages:</li>
</ul>





<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-hcl" data-lang="hcl"><span class="line"><span class="cl"><span class="k">data</span> <span class="s2">&#34;cloudinit_config&#34; &#34;blog&#34;</span> {
</span></span><span class="line"><span class="cl"><span class="n">  gzip</span>          <span class="o">=</span> <span class="kt">false</span>
</span></span><span class="line"><span class="cl"><span class="n">  base64_encode</span> <span class="o">=</span> <span class="kt">false</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">  <span class="k">part</span> {
</span></span><span class="line"><span class="cl"><span class="n">    filename</span>     <span class="o">=</span> <span class="s2">&#34;cloud-config.yaml&#34;</span>
</span></span><span class="line"><span class="cl"><span class="n">    content_type</span> <span class="o">=</span> <span class="s2">&#34;text/cloud-config&#34;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="n">    content</span> <span class="o">=</span> <span class="k">file</span><span class="p">(</span><span class="s2">&#34;${path.module}/cloud-config.yaml&#34;</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">  }
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">  <span class="k">part</span> {
</span></span><span class="line"><span class="cl"><span class="n">    filename</span>     <span class="o">=</span> <span class="s2">&#34;setup-caddy.sh&#34;</span>
</span></span><span class="line"><span class="cl"><span class="n">    content_type</span> <span class="o">=</span> <span class="s2">&#34;text/x-shellscript&#34;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="n">    content</span> <span class="o">=</span> <span class="k">file</span><span class="p">(</span><span class="s2">&#34;${path.module}/setup-caddy.sh&#34;</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">  }
</span></span><span class="line"><span class="cl">}</span></span></code></pre></div><p>In the first stage, <code>cloud-config.yaml</code> installs some base packages via apt, creates the caddy user that the webserver runs as, and adds the public key for this user so I can deploy over SSH with key authentication. Finally, it adds the official Caddy repository and installs the Caddy webserver.</p>





<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="c">#cloud-config</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="nt">package_update</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="nt">package_upgrade</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="nt">packages</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">curl</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">debian-keyring</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">debian-archive-keyring</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">apt-transport-https</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">gnupg</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">rsync</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">lsof</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="nt">users</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">caddy</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">ssh_authorized_keys</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="s1">&#39;ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICB5N1kyv35KTvDXBrqDs4n1x/mQPxk2eC/h7/htnyOx caddy@blog.maha.nz&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="nt">runcmd</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="c"># Add Caddy official repository</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">curl -1sLf &#39;https://dl.cloudsmith.io/public/caddy/stable/gpg.key&#39; | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">curl -1sLf &#39;https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt&#39; | tee /etc/apt/sources.list.d/caddy-stable.list</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="c"># Update package list</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">apt update</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="c"># Install Caddy</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">apt install -y caddy</span></span></span></code></pre></div><ul>
<li>In the next stage, the <code>setup-caddy.sh</code> script is executed. This creates the public www folder, sets up the permissions for it and applies the Caddyfile configuration:</li>
</ul>





<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl"><span class="cp">#!/bin/sh
</span></span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="nv">PUBLIC</span><span class="o">=</span>/var/www/html
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">mkdir -p <span class="si">${</span><span class="nv">PUBLIC</span><span class="si">}</span>
</span></span><span class="line"><span class="cl">chown -R caddy:caddy <span class="si">${</span><span class="nv">PUBLIC</span><span class="si">}</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">cat <span class="s">&lt;&lt; EOF &gt; /etc/caddy/Caddyfile
</span></span></span><span class="line"><span class="cl"><span class="s">{
</span></span></span><span class="line"><span class="cl"><span class="s">  email REDACTED
</span></span></span><span class="line"><span class="cl"><span class="s">}
</span></span></span><span class="line"><span class="cl"><span class="s">
</span></span></span><span class="line"><span class="cl"><span class="s">blog.maha.nz {
</span></span></span><span class="line"><span class="cl"><span class="s">  root * ${PUBLIC}
</span></span></span><span class="line"><span class="cl"><span class="s">  file_server
</span></span></span><span class="line"><span class="cl"><span class="s">
</span></span></span><span class="line"><span class="cl"><span class="s">  # Add multiple headers
</span></span></span><span class="line"><span class="cl"><span class="s">  header {
</span></span></span><span class="line"><span class="cl"><span class="s">      X-Frame-Options &#34;deny&#34;
</span></span></span><span class="line"><span class="cl"><span class="s">      X-XSS-Protection &#34;1; mode=block&#34;
</span></span></span><span class="line"><span class="cl"><span class="s">      Content-Security-Policy &#34;default-src &#39;none&#39;; manifest-src &#39;self&#39;; font-src &#39;self&#39;; img-src &#39;self&#39;; style-src &#39;self&#39;; form-action &#39;none&#39;; frame-ancestors &#39;none&#39;; base-uri &#39;none&#39;&#34;
</span></span></span><span class="line"><span class="cl"><span class="s">      X-Content-Type-Options &#34;nosniff&#34;
</span></span></span><span class="line"><span class="cl"><span class="s">      Strict-Transport-Security &#34;max-age=31536000; includeSubDomains; preload&#34;
</span></span></span><span class="line"><span class="cl"><span class="s">      Cache-Control &#34;max-age=31536000, public&#34;
</span></span></span><span class="line"><span class="cl"><span class="s">      Referrer-Policy no-referrer
</span></span></span><span class="line"><span class="cl"><span class="s">      Feature-Policy &#34;microphone &#39;none&#39;; payment &#39;none&#39;; geolocation &#39;none&#39;; midi &#39;none&#39;; sync-xhr &#39;none&#39;; camera &#39;none&#39;; magnetometer &#39;none&#39;; gyroscope &#39;none&#39;&#34;
</span></span></span><span class="line"><span class="cl"><span class="s">  }
</span></span></span><span class="line"><span class="cl"><span class="s">}
</span></span></span><span class="line"><span class="cl"><span class="s">EOF</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">systemctl restart caddy</span></span></code></pre></div><p>This data block is passed as the <code>user_data</code> attribute when we create the droplet:</p>





<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-hcl" data-lang="hcl"><span class="line"><span class="cl"><span class="k">resource</span> <span class="s2">&#34;digitalocean_droplet&#34; &#34;web&#34;</span> {
</span></span><span class="line"><span class="cl"><span class="n">  image</span>  <span class="o">=</span> <span class="s2">&#34;debian-12-x64&#34;</span>
</span></span><span class="line"><span class="cl"><span class="n">  name</span>   <span class="o">=</span> <span class="s2">&#34;do-web-1&#34;</span>
</span></span><span class="line"><span class="cl"><span class="n">  region</span> <span class="o">=</span> <span class="s2">&#34;syd1&#34;</span>
</span></span><span class="line"><span class="cl"><span class="n">  size</span>   <span class="o">=</span> <span class="s2">&#34;s-1vcpu-512mb-10gb&#34;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="n">  user_data</span> <span class="o">=</span> <span class="k">data</span><span class="p">.</span><span class="k">cloudinit_config</span><span class="p">.</span><span class="k">blog</span><span class="p">.</span><span class="k">rendered</span>
</span></span><span class="line"><span class="cl">}</span></span></code></pre></div><p>Now the webserver is up and Caddy has provisioned a TLS certificate, but no content has been uploaded to the public folder yet. At this point we start getting a <code>404</code> status code for the root <code>https://blog.maha.nz/</code>; before that, there is no HTTP response at all because the web server is still launching.</p>
<p>The content can be deployed now, and I will cover that process in the <a href="/posts/blog-config/#content-deployment">Authoring &gt; Deployment</a> section.</p>
<h3 id="infra-deployment">Infra Deployment</h3>
<p>I use HCP Terraform free tier to deploy all DigitalOcean infrastructure, including my DNS records.
For deploying I use a VCS workflow, where a push to my private infra GitHub repo will trigger a Terraform plan &amp; apply.</p>
<h2 id="authoring">Authoring</h2>
<p>The blog is hosted as a <a href="https://github.com/mahalel/blog-maha-nz">public GitHub repository</a>. The source directory for Hugo is the <code>./src</code> folder; new posts are added as Markdown files under <code>./src/content</code>. The theme is the <a href="https://github.com/clente/hugo-bearcub">Hugo Bear Cub theme</a>, which is added as a Git submodule.</p>
<p>Dependencies are installed via the Nix flake at the root of the repository. It can be loaded manually with <code>nix develop</code>, or via a <a href="https://github.com/direnv/direnv">direnv</a> configuration that applies the flake when you enter the root folder.</p>
<p>I edit the <code>md</code> files with my primary editor, <a href="https://helix-editor.com/">Helix</a>, or with Visual Studio Code as a backup if that is ever needed.</p>
<h3 id="content-deployment">Content Deployment</h3>
<h4 id="local">Local</h4>
<p>My SSH config contains the user, host &amp; identity file for the blog. With this in place I can simply run the <a href="https://github.com/mahalel/blog-maha-nz/blob/main/deploy.sh">deploy.sh</a> script locally and it will build the site, then rsync it over.</p>
<p>This is simple and fast enough for my needs; I may wrap it up in a Makefile or <a href="https://taskfile.dev/">Taskfile</a> later on.</p>
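<p>For illustration, the local deploy boils down to roughly the two commands below. This is a sketch under assumed paths; the exact flags live in deploy.sh, and <code>blog</code> is the assumed SSH-config alias:</p>

```shell
#!/bin/sh
# Sketch of the local deploy: build, then sync. Printed rather than
# executed so it doubles as a dry run; pipe into `sh` to run for real.

deploy_cmds() {
  printf '%s\n' \
    'hugo --minify --source src' \
    'rsync -az --delete src/public/ blog:/var/www/html/'
}

deploy_cmds
```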
<h4 id="github-workflow">GitHub Workflow</h4>
<p>A GitHub Actions workflow has been set up to deploy the site contents. It can be triggered on push or manually, and it also runs on a schedule every 5 minutes. The 5 minutes is best effort though, as GitHub can <a href="https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows#schedule">delay your schedule</a> during periods of high load.</p>
<p>The workflow first checks the status code returned by a <code>curl</code> GET request to the root of the blog. A <code>404</code> means I have redeployed the infra and the droplet has been rebuilt: the webserver responds to HTTP traffic but there is no content at the root, so the site is ready to have its content re-deployed and the workflow proceeds.</p>
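<p>That gate amounts to something like the following sketch (my own helper names, not the actual workflow YAML):</p>

```shell
#!/bin/sh
# Sketch of the workflow's deploy gate. Only a 404 at the root means
# "droplet rebuilt, web root empty, content redeploy needed".

root_status() {
  # "000" covers the window where the droplet is still booting.
  curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$1" || echo 000
}

needs_redeploy() {
  # 200 = content present, 000 = server not answering yet, 404 = go.
  [ "$1" = "404" ]
}

# needs_redeploy "$(root_status https://blog.maha.nz/)" && ./deploy.sh
```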
<p>The deployment is a recreation of the rsync deploy script as a GitHub Actions workflow. Hugo is pinned to a specific version which <em>should</em> match the version retrieved via the Nix flake.</p>
<p>The Ed25519 key that allows the rsync command to succeed is read from a GitHub secret. We need to disable strict host key checking and ignore the known-hosts signature, because each droplet rebuild gives us a different host key.</p>
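<p>The relevant SSH options look something like this sketch; the key path and destination are assumptions, not the actual workflow:</p>

```shell
#!/bin/sh
# SSH options for the CI rsync step: host-key verification is relaxed on
# purpose, since every droplet rebuild generates a fresh host key.
# The key path and destination below are assumptions.

SSH_OPTS='-i ./deploy_key -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'

# rsync -az --delete -e "ssh $SSH_OPTS" public/ caddy@blog.maha.nz:/var/www/html/
printf '%s\n' "$SSH_OPTS"
```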
<p>I will consider adding a GitHub self-hosted runner in the future if I want to reduce the time the site is unavailable between rebuilds, but at this point I am ok with this tradeoff.</p>
<h2 id="analytics">Analytics</h2>
<p>None. I was toying with the idea of using Matomo, but then I realised that focusing on the numbers would be the wrong incentive for writing. I decided to proceed without any analytics.</p>
<h2 id="monitoring">Monitoring</h2>
<p>For now, I am using a DigitalOcean HTTP uptime check which emails me when there is no response to an HTTPS request. After I rebuild my home server I will switch it over to <a href="https://github.com/louislam/uptime-kuma">Uptime Kuma</a>.</p>
<h2 id="optimizations">Optimizations</h2>
<p>HTTP Observatory is a free online tool that scans websites for security vulnerabilities and best practices. I used it to improve the site&rsquo;s web security; it provides detailed reports and recommendations covering security headers, SSL/TLS configuration, and other critical security measures.</p>
<p>Based on the recommendations, I have managed to <a href="https://developer.mozilla.org/en-US/observatory/analyze?host=blog.maha.nz">achieve an A+ score</a> with the following Caddyfile configuration:</p>





<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-txt" data-lang="txt"><span class="line"><span class="cl">blog.maha.nz {
</span></span><span class="line"><span class="cl">  root * ${PUBLIC}
</span></span><span class="line"><span class="cl">  file_server
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">  # Add multiple headers
</span></span><span class="line"><span class="cl">  header {
</span></span><span class="line"><span class="cl">      X-Frame-Options &#34;deny&#34;
</span></span><span class="line"><span class="cl">      X-XSS-Protection &#34;1; mode=block&#34;
</span></span><span class="line"><span class="cl">      Content-Security-Policy &#34;default-src &#39;none&#39;; manifest-src &#39;self&#39;; font-src &#39;self&#39;; img-src &#39;self&#39;; style-src &#39;self&#39;; form-action &#39;none&#39;; frame-ancestors &#39;none&#39;; base-uri &#39;none&#39;&#34;
</span></span><span class="line"><span class="cl">      X-Content-Type-Options &#34;nosniff&#34;
</span></span><span class="line"><span class="cl">      Strict-Transport-Security &#34;max-age=31536000; includeSubDomains; preload&#34;
</span></span><span class="line"><span class="cl">      Cache-Control &#34;max-age=31536000, public&#34;
</span></span><span class="line"><span class="cl">      Referrer-Policy no-referrer
</span></span><span class="line"><span class="cl">      Feature-Policy &#34;microphone &#39;none&#39;; payment &#39;none&#39;; geolocation &#39;none&#39;; midi &#39;none&#39;; sync-xhr &#39;none&#39;; camera &#39;none&#39;; magnetometer &#39;none&#39;; gyroscope &#39;none&#39;&#34;
</span></span><span class="line"><span class="cl">  }
</span></span><span class="line"><span class="cl">}</span></span></code></pre></div><h2 id="future-improvements">Future improvements</h2>
<ul>
<li>Update monitoring to Uptime Kuma</li>
<li>Wrap up the deploy in a taskfile</li>
<li>Add self-hosted runner for shorter downtime</li>
<li>Convert all images to WebP automatically when Hugo generates the site</li>
</ul>
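<p>The WebP item could be sketched as a small post-build step using <code>cwebp</code> from libwebp. The directory, quality setting, and helper names below are all assumptions on my part:</p>

```shell
#!/bin/sh
# Hypothetical post-build step for the WebP idea: list raster images in
# Hugo's output, then convert each with cwebp (libwebp).

webp_targets() {
  # Images that would get a .webp sibling.
  find "$1" -type f \( -name '*.png' -o -name '*.jpg' -o -name '*.jpeg' \)
}

convert_all() {
  webp_targets "$1" | while read -r img; do
    cwebp -quiet -q 80 "$img" -o "${img%.*}.webp"
  done
}

# Usage after `hugo` has built the site:
# convert_all public
```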
]]></content:encoded>
    </item>
  </channel>
</rss>
