Adding a .com Domain to a Running Self-Hosted Stack Without Downtime

Introduction

I run a self-hosted platform on a single Hetzner VPS — Caddy as the reverse proxy, Docker Compose managing all services, and a handful of subdomains pointing at different internal containers. The whole stack had been running happily under a .ca domain for months.

When I decided to register a .com and run everything simultaneously, I also ran into a few registrar surprises along the way — if you want the full story on domain add-on dark patterns, I covered that separately in Pre-Checked Add-Ons and Domain Registrar Dark Patterns.


The Setup

The two public-facing pieces of the stack are:

Subdomain            Service
[DOMAIN].ca          Static site (file_server)
blog.[DOMAIN].ca     Ghost (reverse proxy → port 2368)

Beyond those, several internal service subdomains sit behind authentication. They follow the exact same pattern, so I'll leave them out of the migration story.

All DNS records were managed in the Hetzner DNS console. Caddy handled automatic HTTPS via Let's Encrypt for every subdomain.

The goal: add [DOMAIN].com and all matching subdomains, pointing at the same services, running simultaneously.


Step 1 — Register the .com and Point Nameservers at Hetzner

The .com domain was registered at WHC.ca. Rather than managing DNS in two places, I pointed the .com nameservers directly at Hetzner — the same place the .ca DNS already lived.

In WHC's control panel, I set custom nameservers:

helium.ns.hetzner.de
hydrogen.ns.hetzner.com
oxygen.ns.hetzner.com

Then in the Hetzner DNS console, I created a new zone for [DOMAIN].com and added A records:

A    @        [SERVER_IP]
A    www      [SERVER_IP]
A    blog     [SERVER_IP]
# ...repeat for any other subdomains in your stack

Keeping both domains' DNS in one place means one dashboard, one set of records to maintain, and no risk of the two falling out of sync.
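You don't have to wait on propagation to sanity-check the new zone. As a sketch, you can query one of the Hetzner nameservers directly, which bypasses propagation and cache entirely (the hostnames and [SERVER_IP] placeholder match the records above):

```shell
# Ask Hetzner's authoritative nameserver directly, bypassing propagation
dig @helium.ns.hetzner.de A blog.[DOMAIN].com +short
# If the zone is set up correctly, this returns [SERVER_IP] immediately,
# even while the rest of the internet still sees the old answer
```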


Step 2 — Update the Caddyfile

This is where most of the work happens. Each .ca block needed a matching .com block with identical config, just a different hostname.

For the main site and www:

[DOMAIN].com, www.[DOMAIN].com {
  root * /var/www/[PROJECT]
  file_server
  encode gzip
}

For the blog:

blog.[DOMAIN].com {
  reverse_proxy ghost:2368
}
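One alternative worth noting: when the two configs really are byte-for-byte identical, Caddy lets you list both hostnames in a single site block instead of duplicating it. A sketch:

```caddyfile
blog.[DOMAIN].ca, blog.[DOMAIN].com {
  reverse_proxy ghost:2368
}
```

Separate blocks, as used here, are easier to diverge later, e.g. for services whose headers must differ per domain.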

For any service subdomains that have CORS headers configured, make sure to update the Access-Control-Allow-Origin value to match the new domain — this is the most common mistake when duplicating Caddy blocks:

# .ca version
[SERVICE].[DOMAIN].ca {
  handle {
    reverse_proxy [container]:[port] {
      header_down -Access-Control-Allow-Origin
    }
    header Access-Control-Allow-Origin "https://[DOMAIN].ca"
  }
}

# .com version — origin must be updated
[SERVICE].[DOMAIN].com {
  handle {
    reverse_proxy [container]:[port] {
      header_down -Access-Control-Allow-Origin
    }
    header Access-Control-Allow-Origin "https://[DOMAIN].com"
  }
}

Step 3 — Deploy the Updated Caddyfile

My deployment workflow for Caddy config changes:

# Copy from host to container
docker cp /root/Caddyfile root-caddy-1:/etc/caddy/Caddyfile

# Reload without restarting
docker exec root-caddy-1 caddy reload --config /etc/caddy/Caddyfile

caddy reload is graceful: existing connections stay alive, Caddy picks up the new config, and it immediately starts requesting TLS certificates for the new domains from Let's Encrypt via ACME.
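One safeguard worth adding to this workflow: caddy validate checks a config for syntax and provisioning errors without touching the running server, so a typo in a new block can't take anything down. A sketch using the same container name as above:

```shell
# Copy the new config in, validate it, and only reload if validation passes
docker cp /root/Caddyfile root-caddy-1:/etc/caddy/Caddyfile
docker exec root-caddy-1 caddy validate --config /etc/caddy/Caddyfile \
  && docker exec root-caddy-1 caddy reload --config /etc/caddy/Caddyfile
```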


What I Hit Along the Way

DNS hadn't propagated yet

After updating the nameservers at WHC, visiting [DOMAIN].com still showed WHC's default parking page instead of my site. The nameserver change itself was correct; the culprit was local DNS cache.

To verify the nameservers were actually propagated at the authoritative level:

dig NS [DOMAIN].com +short

Output confirmed Hetzner nameservers were live. The parking page was a cached response in the local browser. Running ipconfig /flushdns on Windows and clearing the browser's DNS cache resolved it immediately.
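Another quick way to rule out local cache: query a public resolver directly, which ignores your machine's stub resolver entirely. A sketch:

```shell
# Ask Cloudflare's resolver instead of the local DNS cache
dig @1.1.1.1 A [DOMAIN].com +short
# If this returns [SERVER_IP] while the browser still shows the parking page,
# the problem is local cache, not propagation
```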

Two containers weren't running

After the Caddy reload, Caddy logs showed errors like:

dial tcp: lookup [service] on 127.0.0.11:53: server misbehaving

These errors mean Caddy is trying to resolve a container name via Docker's internal DNS resolver (127.0.0.11), but the lookup is failing — usually because the container isn't running or isn't on the same Docker network.

docker ps | grep [service-name]
# no output

Some containers had stopped. A simple restart fixed it:

docker compose -f /root/docker-compose.yml up -d [service-name]
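If a container is running but Caddy still can't resolve it, the other common cause named above is a network mismatch. A quick check, using this stack's container names (adjust to yours):

```shell
# List the Docker networks each container is attached to.
# Caddy can only resolve the names of containers that share a network with it.
docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' root-caddy-1
docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' [service-name]
```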

SSL certificates provisioned automatically

One thing worth highlighting: Caddy handles SSL for all new domains automatically. As soon as DNS propagated and Caddy had the new blocks in its config, it began the ACME challenge flow for each new hostname. No manual cert requests, no certbot commands, no cron jobs. Within a minute or two of DNS going live, every .com subdomain had a valid certificate.
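To confirm a freshly issued certificate from the command line (hostname is a placeholder), a sketch:

```shell
# Print the subject and validity window of the certificate Caddy obtained
echo | openssl s_client -connect blog.[DOMAIN].com:443 -servername blog.[DOMAIN].com 2>/dev/null \
  | openssl x509 -noout -subject -dates
```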


Verification

After DNS propagation and Caddy reload, I tested the public subdomains:

https://[DOMAIN].com     → static site ✅
https://blog.[DOMAIN].com → Ghost blog ✅
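These checks can also be scripted. A rough loop over the public hostnames, printing the HTTP status for each (a 200 means DNS, TLS, and the backing service all work):

```shell
# Curl each new hostname and report its HTTP status code
for host in [DOMAIN].com www.[DOMAIN].com blog.[DOMAIN].com; do
  printf '%-22s %s\n' "$host" "$(curl -s -o /dev/null -w '%{http_code}' "https://$host")"
done
```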

All .ca subdomains continued working without interruption throughout.


Key Lessons

1. Keep DNS for all domains in one place. Pointing the .com nameservers at Hetzner (where .ca DNS already lived) means one dashboard to manage everything. No risk of records drifting out of sync across two providers.

2. Update CORS origins when duplicating Caddy blocks. The Access-Control-Allow-Origin headers are hardcoded to a specific domain. Copying a block without updating these will cause CORS failures on the new domain.

3. caddy reload is zero-downtime. You don't need to restart the Caddy container to pick up config changes. caddy reload is safe to run on a live server.

4. DNS propagation and local cache are different things. dig NS [DOMAIN].com tells you the truth about propagation. Your browser showing the wrong page is almost always local cache — flush it before assuming DNS hasn't propagated.

5. Docker container name resolution requires containers to be on the same network. Caddy resolves service names via Docker's internal DNS. If a container isn't running or isn't on the same Docker network, you'll get server misbehaving errors — not a clean "connection refused."

6. Take a snapshot before major changes. Before touching DNS, Caddyfiles, or docker-compose configs on a production server, take a snapshot. On Hetzner this is one click and the VPS stays live during the process.


Part of a series on running a self-hosted stack. Also read: Why MicroBin's Uploader Password Silently Does Nothing and Pre-Checked Add-Ons and Domain Registrar Dark Patterns.
