Personal Cloud

June 2025 · 8 minute read

LLMs have made starting and shipping a small software project really cheap and a whole lot of fun. I can have a silly idea late one afternoon and be ready to ship it in just a few hours.

But the shipping… Ship where? How? This is still a struggle. I wrote on X regarding HistoryMan:

For reference this took about 1.5 hours for functionality (30 minutes being figuring how to get api access to google Gemini), 1 hour of CSS fine tuning and 3 hours of figuring out how to host it.

The struggle for me is a combination of two very dangerous factors: hard-won opinions about deployment, and very small scale.

I’ve built several deployment systems in my career. I’ve used even more. They all suck in one way or another. But the problem I’m facing today is very distinct from all of those: Small Scale.

The apps I’m shipping are personal or, at most, hobby-scale. If I built something a bunch of people wanted to use, that would be great, but that’s rarely the point right now. What matters for these projects is speed and simplicity.

With this in mind, here is how I’ve been approaching deployment.

Infrastructure

In the basement of my house I have a tiny Intel NUC running Ubuntu. It runs silly things like Home Assistant, which powers the complex web of mesh networks that lets me turn the lights on and off in my office. As a modern-ish computer, it’s actually incredibly powerful compared to what I could afford in the cloud: 16 cores, 32 GB of RAM, and 1 TB of NVMe disk. That can run a lot of low-traffic projects at a very low marginal cost.

Public-ish Services

For HistoryMan, I wanted the service to be publicly available in the sense that anyone I give the URL can find it. That presents more of a security risk than a capacity concern, so my focus was keeping it somewhat isolated: if it gets hacked, I don’t want it to trivially have access to my entire network or to the other projects on the host.

I got thinking about an old technology I hadn’t thought about in ages: Vagrant. Before Docker containerized our world, Vagrant made virtualization the hot new thing. It made spinning up a virtual machine super easy:

$ vagrant init generic/ubuntu2210
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
$ vagrant up
Bringing machine 'default' up with 'libvirt' provider...
...

And you have an empty Linux server ready to work. Oh, but it’s slow, right? That’s why we moved to containers? On my server this takes 15 seconds, and I haven’t even looked into optimizing anything.

This new VM does need to be configured. I know I should be pulling out Puppet or whatever other configuration management software is cool nowadays (is there one?), but this should be a pretty simple setup and I like bash. So of course I rolled my own: VagrantInit.

Vagrant.configure("2") do |config|
  config.vm.box = "generic/ubuntu2210"
  config.vm.hostname = "vinit-demo"

  # Unidirectional rsync folder
  config.vm.synced_folder "./vinit", "/var/vinit", type: "rsync"
  config.vm.provision "shell", inline: "/var/vinit/vinit.sh"
end

I add a couple of bash scripts to my repository and bam: a repeatably configured server.
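The real vinit.sh varies per project, but the shape is always the same: an idempotent bash script that installs packages and drops config into place. Here is a minimal sketch; the package list, the deploy user, and the environment file are illustrative rather than the actual HistoryMan setup:

#!/bin/bash
# vinit.sh - runs inside the VM on every vagrant up / vagrant provision
set -euo pipefail

# Install whatever the app needs from the Ubuntu repos (illustrative list)
apt-get update
apt-get install -y git python3-venv

# Create an unprivileged deploy user if it doesn't exist yet
id deploy >/dev/null 2>&1 || useradd --create-home --shell /bin/bash deploy

# Drop config files rsynced alongside this script into place (example file)
install -m 640 /var/vinit/historyman.env /etc/default/historyman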

To deploy the application itself, I got so wrapped up in the ancient joy of virtual machines that I decided to skip Docker entirely and just deploy it old school… but with a twist: Heroku-style deploy on push.

I created a bare repository on the deployment machine:

$ git init --bare /data/historyman.git

I also have a /data/historyman directory, which holds a current symlink and all of my git worktrees.
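Concretely, after a few deploys the layout on the host looks something like this (the release IDs here are just examples):

/data/historyman.git                    # bare repo, the push target
/data/historyman/
  release-20250608211502/               # one git worktree per deploy
  release-20250609002427/
  current -> release-20250609002427     # symlink to the live release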

I created a hook in this remote repository that handles the deployment:

#!/bin/bash
set -e

while read oldrev newrev ref
do
  if [[ "$ref" = "refs/heads/main" ]]; then
    RELEASE_ID=$(date +%Y%m%d%H%M%S)

    echo "===> Creating Release $RELEASE_ID ($newrev)"
    cd /data/historyman.git
    git worktree add "../historyman/release-$RELEASE_ID" $newrev
    cd /data/historyman/release-$RELEASE_ID/

    echo "===> Initializing"
    source /etc/default/historyman
    /home/deploy/.cargo/bin/uv sync
    ln -sfn /data/historyman/release-$RELEASE_ID /data/historyman/current

    echo "===> Restarting services"
    sudo systemctl restart historyman
  fi
done
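A couple of details the script glosses over: it reads oldrev/newrev/ref triplets from stdin, which is the post-receive hook interface, so it goes in the bare repo at hooks/post-receive and needs the executable bit. And the historyman systemd unit it restarts always runs whatever the current symlink points at, which is why flipping the symlink and restarting is the whole deploy. A minimal sketch of such a unit, with an illustrative ExecStart rather than the app’s real entry point:

# /etc/systemd/system/historyman.service
[Unit]
Description=HistoryMan
After=network.target

[Service]
User=deploy
# Assumes /etc/default/historyman contains plain KEY=value lines
EnvironmentFile=/etc/default/historyman
WorkingDirectory=/data/historyman/current
# Launch out of the symlinked release's virtualenv (illustrative entry point)
ExecStart=/data/historyman/current/.venv/bin/python main.py
Restart=on-failure

[Install]
WantedBy=multi-user.target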

You might be concerned that we have to create a whole Python virtualenv on every deploy. Certainly this must be slow and wasteful!

$ time git push production
Enumerating objects: 21, done.
Counting objects: 100% (21/21), done.
Delta compression using up to 8 threads
Compressing objects: 100% (7/7), done.
Writing objects: 100% (11/11), 1.18 KiB | 1.18 MiB/s, done.
Total 11 (delta 3), reused 0 (delta 0), pack-reused 0 (from 0)
remote: Received push to refs/heads/main
remote: ===> Creating Release 20250609002427 (450c61cd5b0ecaaa98fdb25174f70680814b8b77)
remote: Preparing worktree (detached HEAD 450c61c)
remote: HEAD is now at 450c61c new deployment infrastructure
remote: ===> Initializing
remote: Using Python 3.12.5
remote: Creating virtualenv at: .venv
remote: Resolved 63 packages in 0.74ms
remote: Prepared 61 packages in 1ms
remote: Installed 61 packages in 30ms
... snip ...
remote: ===> Restarting services
To historyman:/data/historyman.git
   8295884..450c61c  main -> main

real    0m0.423s
user    0m0.026s
sys     0m0.033s

uv is Dope

Finally, once my application is running, how can I expose this server to the public internet? Well, there is one cloud-ish service that I really love: Cloudflare.

Specifically, Cloudflare Tunnels. I install their cloudflared agent on this host, authenticate it with my account, and then I can route whatever domain name I want to it. As a bonus, I can trivially use Cloudflare to block robots, certain IPs, entire countries, or whatever else. I get all this at my scale for… free?
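The hostname and port below are made up, but the flow is roughly: authenticate the agent, create a named tunnel, point a DNS record at it, and tell it where to send traffic.

$ cloudflared tunnel login
$ cloudflared tunnel create historyman
$ cloudflared tunnel route dns historyman historyman.example.com
$ cloudflared tunnel run --url http://localhost:8000 historyman

In practice the tunnel gets installed as a service on the host (cloudflared service install) so it survives reboots.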

Private Services

Most projects do not need to be exposed to the public Internet. The solution above actually works reasonably well here too, because (not mentioned so far) I run Tailscale and Caddy on these hosts as well. That makes it trivial to expose port 443 (HTTPS) on the host to my tailnet with a valid certificate. With this setup I can create a VM named squatchy and very quickly access the service at https://squatchy.<my tailnet name>.ts.net.
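The Caddy side is tiny. Caddy can fetch the certificate for a ts.net name from the local tailscaled, so the whole Caddyfile is a reverse proxy; the backend port here is hypothetical:

# /etc/caddy/Caddyfile
squatchy.<my tailnet name>.ts.net {
    # Caddy obtains the HTTPS certificate for this ts.net name via tailscaled
    reverse_proxy localhost:8000
}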

Why Stop There?

I’m always looking for simpler ways to deploy to my private cloud. My latest project is a personal finance application for my kids named "piggybank". It’s built on Rails, and I’ve been hearing about Kamal, the deployment system that ships with Rails 8.

Kamal is Docker-based. Rails itself generates a Dockerfile you can use to package up your application and ship it off to production. But as you might guess, the documentation is geared to more normal scenarios where you have a dedicated server in a datacenter with an IP address. How can I use this to serve an application on my Tailnet?

The most obvious solution would be to again create a VM, one named "piggybank", which would show up on my tailnet. Kamal would deploy to the "piggybank" host, set up all the Docker machinery, and off we go. But now I’ve got to deal with VMs and Docker, which feels rather extra. Instead I embarked on a stupidly convoluted scheme to avoid the separate VM entirely.

Tailscale can run in a Docker container. There is even a published base image (tailscale/tailscale) that, given an authentication token, simply starts up a container with the given name and joins your tailnet.

To understand the options for integrating with Kamal, though, you first need to know a few things about its architecture. As I said, it’s Docker-based. One of its critical features is zero-downtime deployments. This means that during a deploy, two versions of your app will be running as traffic is migrated from one to the other. This is handled by a component called kamal-proxy, which itself runs as a Docker container. The kamal command, when booting the deployment environment, creates a Docker network named kamal which the kamal-proxy container and all instances of the deployed application join. This allows the proxy to direct traffic to the different backends.
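You can see that wiring on the host after the first deploy; something like this (output omitted) shows the kamal network and which containers have joined it:

$ docker network ls --filter name=kamal
$ docker network inspect kamal --format '{{range .Containers}}{{.Name}} {{end}}'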

The question is how to expose the proxy to the tailnet. By default the proxy will try to grab ports 80 and 443 on the host itself. But this server is running more than just this one application.

I solved this by running per-application tailscale/tailscale containers. Each of these joins the tailnet with a specific name (like "piggybank") and then proxies incoming traffic to kamal-proxy using Tailscale serve. I configure them in the host’s docker-compose.yaml like this:

  ts-kamal-proxy-piggybank:
    image: tailscale/tailscale:latest
    hostname: piggybank
    networks:
      - kamal
    environment:
      - TS_AUTHKEY=<auth key>
      - TS_STATE_DIR=/var/lib/tailscale
      - TS_SERVE_CONFIG=/config/serve.json
      - TS_CERT_DOMAIN=piggybank.<tailnet name>.ts.net
    volumes:
      - tailscale-kamal-proxy-state:/var/lib/tailscale
      - ./ts-kamal-proxy:/config
    devices:
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - net_admin
      - sys_module
    restart: unless-stopped
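That fragment sits under services: and leans on two top-level definitions not shown above: the kamal network, which Kamal creates itself (so it is external as far as compose is concerned), and the named volume for Tailscale state:

networks:
  kamal:
    external: true

volumes:
  tailscale-kamal-proxy-state: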

The serve config referenced by TS_SERVE_CONFIG looks like this (it can be shared between all the ts- containers):

{
  "TCP": {
    "443": {
      "HTTPS": true
    }
  },
  "Web": {
    "${TS_CERT_DOMAIN}:443": {
      "Handlers": {
        "/": {
          "Proxy": "http://kamal-proxy:80"
        }
      }
    }
  }
}

There is one more trick, though. By default, the kamal-proxy container will still try to grab the ports on the host. To disable that, we use kamal proxy boot_config to tell Kamal how the proxy container should be started:

$ kamal proxy boot_config set --no-publish

Kamal itself is configured (in its config/deploy.yml) very simply:

proxy:
  ssl: false
  forward_headers: true

With this pattern I can deploy as many Kamal apps as I like to my single server and expose them individually with friendly names on my tailnet.

This isn’t as fast as my VM+git method, sadly. A no-op deployment where everything is cached takes 8 seconds, and I’m even using a private Docker registry to avoid the hit of going out to the cloud. This is the tradeoff for zero-downtime deployments and shipping Docker images.

What’s Next

I try to remind myself that the goal here is to make deployment frictionless so I can simply Build. I hope to get there one day. It feels as though I’m circling in on something that will make me happy, but I fully expect my next deployment to nerd-snipe me into trying something else. My wish list is: