A Domain-Addressable Home PaaS | Transform a Raspberry Pi into a public, domain-backed, TLS-terminated platform

December 2024 · 10 min read

<h1 data-number="1" id="the-physical-node-hardware-and-operating-system"><span class="header-section-number">1</span> The Physical Node: Hardware and Operating System</h1> <p>A Raspberry Pi 4 with Alpine Linux serves as the foundation. Alpine is specifically chosen for ARM systems because it offers a minimal userland, low memory footprint, and fast boot times. The combination matters: Alpine uses musl libc instead of glibc, and busybox instead of GNU coreutils. These choices reduce complexity and resource consumption, critical on ARM hardware with limited memory and storage.</p> <p>Alpine replaces systemd with OpenRC, a lighter init system. OpenRC manages services through simple shell scripts in <code>/etc/init.d/</code>, avoiding the complexity and resource overhead of systemd. For a single-node home platform, this simplicity is an advantage. You trade some convenience tooling for predictability and lower overhead.</p> <p>The tradeoff is real: some software expects glibc or systemd. You may encounter incompatibilities or need to use musl-compatible builds. But for Docker, where each container brings its own libc and init, Alpine excels as a minimal host OS.</p> <h1 data-number="2" id="installing-docker-on-alpine"><span class="header-section-number">2</span> Installing Docker on Alpine</h1> <p>Docker integrates cleanly with Alpine through the standard <code>apk</code> package manager:</p> <pre><code>apk add docker docker-compose</code></pre> <p>Enable and start Docker via OpenRC:</p> <pre><code>rc-service docker start
rc-update add docker default</code></pre> <p>Alpine presents one critical consideration: ARM64 image compatibility. Docker on ARM automatically selects architecture-specific images when pulled from registries that support multi-arch manifests (Docker Hub, Quay, etc.). 
However, if an image is only built for x86-64, the pull succeeds but the container fails at runtime with an <q>exec format error.</q></p> <p>To verify an image supports ARM64 before pulling, inspect its manifest:</p> <pre><code>docker manifest inspect image-name:tag</code></pre> <p>The JSON output lists every platform the tag is built for; look for an entry with <code>"architecture": "arm64"</code>. When building custom images on Alpine for this platform, ensure your Dockerfile and dependencies are ARM64-compatible.</p> <p>Docker’s networking model isolates containers by default into a bridge network. Services do not bind to host ports unless explicitly configured. This isolation is a security feature: containers are not directly exposed to the host network or internet unless you intentionally forward traffic to them.</p> <h1 data-number="3" id="making-a-private-node-public-port-443-only"><span class="header-section-number">3</span> Making a Private Node Public: Port 443 Only</h1> <p>Your Raspberry Pi sits behind residential NAT (Network Address Translation) provided by your router. NAT blocks unsolicited inbound traffic from the internet. To make services public, you configure port forwarding on your router: route external TCP 443 to the Pi’s internal IP address on port 443.</p> <p>Only forward port 443. Not 80. Not SSH on 22. Minimizing exposed ports reduces the attack surface. Every open port is a potential vector for scanning, exploitation, or misconfiguration. By exposing only 443, you force all traffic through a single TLS-encrypted channel.</p> <p>The packet flow looks like this: a browser request reaches your public IP address on port 443. The router forwards it to the Pi’s internal IP on port 443. A reverse proxy on the Pi (Caddy, in this case) terminates TLS and routes traffic to internal containers. Containers never see unencrypted internet traffic directly.</p> <p>Forcing TLS-only interaction means plaintext protocols (HTTP, FTP, etc.) cannot be reached from outside, and services such as SSH are simply not exposed at all. 
You defend against eavesdropping and MITM attacks at the network boundary.</p> <h1 data-number="4" id="domain-name-and-dynamic-dns-automation"><span class="header-section-number">4</span> Domain Name and Dynamic DNS Automation</h1> <p>To access your Pi by domain name instead of IP address, you need two pieces:</p> <ol type="1"> <li>A domain registered at a registrar (Namecheap, name.com, etc.)</li> <li>An A record in DNS pointing that domain to your public IP address</li> </ol> <p>The problem: residential internet providers assign dynamic IP addresses. Your public IP changes periodically, sometimes weekly, sometimes after a reboot. When it changes, your A record becomes stale, and your domain stops resolving.</p> <p>The solution is a dynamic DNS updater. This is a small background process that runs on your Pi:</p> <ol type="1"> <li>Every 5-10 minutes, query an external API to detect your current public IP (e.g., <code>https://ifconfig.me</code>)</li> <li>Compare it to the IP in your domain’s A record</li> <li>If they differ, call your registrar’s DNS API to update the A record</li> <li>Log the change</li> </ol> <p>This creates a reconciliation loop. The desired state is <q>my domain’s A record points to my current public IP.</q> The updater continuously observes reality and corrects divergence.</p> <p>Consider DNS TTL (time-to-live): a short TTL lets a changed record propagate quickly, because resolver caches expire sooner, at the cost of more queries reaching the authoritative servers. A long TTL reduces that query load but delays propagation when the IP changes. The updater’s polling interval is a separate knob: polling more often means faster detection but more registrar API calls. For a home setup, a TTL of 5-10 minutes (300-600 seconds) balances responsiveness and churn.</p> <p>If the updater fails, your DNS record eventually becomes stale. The impact depends on how long your IP happens to stay unchanged after the failure. 
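</p> <p>As a concrete sketch of that loop, here is a minimal shell updater. The registrar endpoint, token, and JSON payload are hypothetical placeholders; every registrar’s API differs, so only the shape of the reconciliation is meant literally:</p>

```shell
#!/bin/sh
# Minimal dynamic-DNS reconciliation sketch. RECORD_API and TOKEN are
# hypothetical placeholders -- substitute your registrar's real API.
DOMAIN="example.com"
RECORD_API="https://api.registrar.example/v1/records/${DOMAIN}/A"
TOKEN="changeme"

# Pure decision helper: prints "update" when the observed public IP is
# non-empty and differs from the IP currently in the A record.
decide() {
    current="$1"
    observed="$2"
    if [ -n "$observed" ] && [ "$observed" != "$current" ]; then
        echo update
    else
        echo ok
    fi
}

# One reconciliation pass: observe, compare, correct, log.
reconcile() {
    observed=$(curl -fsS https://ifconfig.me) || return 1
    # Note: a cached resolver answer may lag by up to the record's TTL.
    current=$(dig +short "$DOMAIN" A | head -n1)
    if [ "$(decide "$current" "$observed")" = "update" ]; then
        curl -fsS -X PUT -H "Authorization: Bearer $TOKEN" \
            -d "{\"content\": \"$observed\"}" "$RECORD_API" &&
            logger -t ddns "updated $DOMAIN: $current -> $observed"
    fi
}
```

<p>Invoke <code>reconcile</code> from a cron entry every five minutes; keeping the comparison in the pure <code>decide</code> helper makes that step trivially testable.</p> <p>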
A monitoring alert is wise: notify yourself if the updater hasn’t run in 15 minutes.</p> <h1 data-number="5" id="caddy-as-the-edge-gateway"><span class="header-section-number">5</span> Caddy as the Edge Gateway</h1> <p>Caddy is a reverse proxy that handles three critical functions:</p> <ol type="1"> <li>TLS termination: Caddy accepts encrypted connections on 443, decrypts them, and forwards to internal services</li> <li>Automatic Let’s Encrypt certificates: Caddy provisions and renews TLS certificates without manual intervention</li> <li>Routing: Caddy reads a Caddyfile and routes requests by domain or path to internal services</li> </ol> <p>Example Caddyfile:</p> <pre><code>example.com {
    reverse_proxy web:5000
}

api.example.com {
    reverse_proxy api:5001
}

admin.example.com {
    reverse_proxy admin:5002
}</code></pre> <p>Each <code>reverse_proxy</code> directive points to an internal service by its container name. Note that <code>localhost</code> inside the Caddy container would refer to the Caddy container itself, not to the host or to other containers. When Caddy and your application containers share a Docker network, Docker’s embedded DNS resolves each container’s name (here <code>web</code>, <code>api</code>, and <code>admin</code>) to its address on that network.</p> <p>Automatic certificate provisioning works because Caddy detects that it’s handling HTTPS traffic for a domain, then automatically requests a certificate from Let’s Encrypt using the ACME protocol. Renewal happens automatically, roughly 30 days before expiration for Let’s Encrypt’s 90-day certificates.</p> <p>Compare this to Nginx: Nginx is mature and similarly lightweight, but out of the box it requires external certificate management (certbot, cron jobs, etc.). Caddy’s automatic ACME is a significant operational win on a home platform where manual intervention is inconvenient.</p> <p>Request flow: DNS resolves <code>example.com</code> to your public IP. Browser connects to port 443. Router forwards to Pi:443. Caddy receives the encrypted connection, terminates TLS, and forwards plaintext traffic to the application container running on the internal Docker network. 
The response returns through the same path, encrypted by Caddy, and sent back to the browser.</p> <h1 data-number="6" id="internal-services-behind-the-proxy"><span class="header-section-number">6</span> Internal Services Behind the Proxy</h1> <p>Your application containers (web services, APIs, databases) run entirely on the internal Docker network. They do not bind to host ports. Only Caddy binds to the host’s port 443.</p> <p>For example, a Node.js API might run in a container and listen on port 3000, but only on the Docker bridge network. Caddy, also on that network, can reach it via the container’s hostname. Because port 3000 is never published on the host, nothing outside the Docker network can reach it. This is intentional: it prevents accidental exposure and enforces all traffic through Caddy’s TLS termination.</p> <p>A database like PostgreSQL runs in another container, listening on port 5432, but again only on the internal network. Application containers can connect to it, but it’s completely isolated from external traffic.</p> <p>Optional components include a private Docker registry for storing custom images, or Portainer for a web UI to manage containers. These are implementation choices, not mandatory.</p> <p>The critical principle: do not bind application services to host ports directly. 
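</p> <p>As an illustrative sketch, a Compose file encodes this principle: only the Caddy service publishes a host port, and everything else is reachable solely over a shared internal network. Service names and images here are hypothetical:</p>

```yaml
# docker-compose.yml sketch: only caddy publishes a host port
services:
  caddy:
    image: caddy:2-alpine
    ports:
      - "443:443"              # the single exposed port
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data       # persists ACME certificates across restarts
    networks: [edge]

  api:
    image: my-api:latest       # hypothetical Node.js API listening on 3000
    networks: [edge]           # no "ports:" -- reachable only as api:3000

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: changeme
    networks: [edge]           # no "ports:" -- only containers reach db:5432

volumes:
  caddy_data:

networks:
  edge:
```

<p>If a service ever gains a stray <code>ports:</code> entry, it bypasses Caddy entirely, so scanning the Compose file for port bindings is a quick audit of the whole exposure surface.</p> <p>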
This forces all traffic through the intentional, TLS-terminated edge.</p> <h1 data-number="7" id="minimal-security-envelope"><span class="header-section-number">7</span> Minimal Security Envelope</h1> <p>Security in this architecture flows from simplicity and constraint:</p> <ul> <li>Single exposed port (443) eliminates entire classes of vulnerabilities</li> <li>TLS encryption everywhere prevents passive eavesdropping</li> <li>No direct database exposure ensures applications mediate all data access</li> <li>Docker network isolation prevents one compromised container from easily accessing others</li> <li>Least privilege: containers run without unnecessary capabilities or elevated permissions</li> </ul> <p>Attack vectors still exist. An attacker could:</p> <ul> <li>Exploit a vulnerability in Caddy or an application</li> <li>Perform DNS hijacking (outside your control, but registrar account security matters)</li> <li>Attempt brute-force attacks on exposed services</li> <li>Exploit misconfigured application logic</li> </ul> <p>Hardening steps include:</p> <ul> <li>Running fail2ban to block repeated connection attempts</li> <li>Configuring firewall rules on the Pi to drop traffic not on port 443</li> <li>Implementing rate limiting in Caddy or the application</li> <li>Regular security updates for Alpine, Docker, and application dependencies</li> <li>Strong authentication (API keys, OAuth, etc.) for any user-facing service</li> </ul> <p>Compared to a typical VPS in a data center, your home Pi lacks DDoS protection, redundant networking, and professional security monitoring. A motivated attacker could overwhelm your residential ISP link or identify your physical location. 
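</p> <p>The firewall item above can be made concrete with nftables, which Alpine packages. This is a sketch, not a drop-in ruleset: applied as written it would also block LAN SSH, so add an accept rule for your management network first:</p>

```
# /etc/nftables.nft sketch: default-drop inbound, allow only TCP 443
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iif "lo" accept                       # loopback
        ct state established,related accept   # replies to outbound traffic
        tcp dport 443 accept                  # the single public port
        icmp type echo-request accept         # IPv4 ping
        icmpv6 type { nd-neighbor-solicit, nd-neighbor-advert, echo-request } accept
    }
}
```

<p>Load it with <code>nft -f /etc/nftables.nft</code> and enable the nftables OpenRC service so the rules survive reboots.</p> <p>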
Self-hosting trades convenience and infrastructure investment for autonomy and learning.</p> <h1 data-number="8" id="the-emergent-platform"><span class="header-section-number">8</span> The Emergent Platform</h1> <p>When you step back, these components form a cohesive platform:</p> <ul> <li>Services are deployed as Docker containers</li> <li>Services are made public via domain names routed through Caddy</li> <li>DNS automatically reconciles with IP changes</li> <li>TLS is transparent and automatic</li> <li>The entire system is ARM-native and runs on a low-power device</li> </ul> <p>This resembles a small Platform-as-a-Service. You don’t have the orchestration of Kubernetes or the convenience of Heroku, but you have the core primitives: ingress (Caddy), service routing (DNS + reverse proxy), TLS termination, and containerized applications.</p> <p>Full request lifecycle:</p> <ol type="1"> <li>Client resolves <code>api.example.com</code> via DNS, getting your public IP</li> <li>Client connects to your public IP:443</li> <li>Router port-forwards to Pi:443</li> <li>Caddy receives the connection, presents your TLS certificate to complete the handshake, and decrypts the request</li> <li>Caddyfile routing directs traffic to the correct internal container</li> <li>Container processes the request and returns a response</li> <li>Caddy re-encrypts and sends back to the client</li> </ol> <p>Scaling limitations are real. A single Pi has finite CPU, memory, and network bandwidth. Running ten high-traffic applications will overwhelm it. 
But for a personal project, a few lightweight services, or a staging environment, it’s sufficient.</p> <p>Incremental improvements without full orchestration include: adding a monitoring stack (Prometheus + Grafana) to track resource usage, implementing automated backups of Docker volumes, introducing a reverse tunnel (Cloudflare Tunnel, ngrok) as an alternative to port forwarding, or spreading workload across two Pis and a simple load balancer.</p> <h1 data-number="9" id="non-obvious-insights"><span class="header-section-number">9</span> Non-Obvious Insights</h1> <p><strong>Dynamic DNS as reconciliation:</strong> The updater doesn’t manage state; it continuously observes your public IP and corrects divergence from DNS. This mirrors controller patterns in distributed systems: desired state, observed state, action to narrow the gap. Understanding this perspective helps when you later add monitoring, backups, or multi-node setups.</p> <p><strong>Single-port exposure as design discipline:</strong> Constraints often improve design. Forcing all traffic through port 443 simplifies firewall rules, makes threat modeling tractable, and ensures TLS termination happens at a single point. Every system touching port 443 behaves the same way. Compare to systems with dozens of open ports, each with different expectations.</p> <p><strong>Alpine plus Docker synergy:</strong> Alpine’s philosophy aligns with containerization: minimal base layer, minimal dependencies, fast distribution. When your host OS and containers both embrace minimalism, resource consumption and attack surface shrink together.</p> <p><strong>Home ISP as unreliable upstream:</strong> Unlike cloud providers with SLAs, residential internet is best-effort. Your Pi may lose connectivity, your IP may change without warning, your bandwidth may be capped or throttled. 
Design for transience: the updater tolerates temporary failures, applications handle disconnection gracefully, and monitoring alerts you to persistent issues.</p> <h1 data-number="10" id="possible-visualizations"><span class="header-section-number">10</span> Possible Visualizations</h1> <p><strong>Network Flow:</strong> Browser -&gt; DNS Resolver -&gt; Public IP -&gt; Router (443) -&gt; Caddy (Host:443) -&gt; Internal Docker Network -&gt; Service Container</p> <p><strong>Docker Network Topology:</strong> Caddy container and service containers all connected to an internal bridge network. Caddy is the only container with a port binding to the host (443). Services bind only to the bridge network.</p> <p><strong>DNS Update Sequence:</strong> Pi queries external API for public IP -&gt; Compares to current A record -&gt; If different, calls registrar API with new IP -&gt; Registrar updates DNS -&gt; Propagates globally</p> <p><strong>Layer Stack:</strong> Hardware (Raspberry Pi) -&gt; Alpine Linux -&gt; Docker Engine -&gt; Caddy Container + Application Containers -&gt; User Services</p> <h1 data-number="11" id="what-comes-next"><span class="header-section-number">11</span> What Comes Next</h1> <p>From here, several directions branch:</p> <ul> <li>Add automated volume backups to an external storage service</li> <li>Introduce a monitoring stack (Prometheus scraping metrics, Grafana dashboards) to visualize CPU, memory, network usage</li> <li>Replace port forwarding with a reverse tunnel (Cloudflare Tunnel) for situations where port forwarding is blocked or unreliable</li> <li>Spread workload across two Pis with a simple TCP load balancer or DNS round-robin</li> <li>Implement GitOps: a Git repository that defines all containers and their configuration, with an automated pipeline that synchronizes deployed state to the Git source of truth</li> </ul> <p>Each adds complexity, but in measured steps. 
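</p> <p>The backup item above might start as a small script. This is a sketch under the assumption that archives land on external storage mounted at <code>/backup</code>; the volume names are whatever <code>docker volume ls</code> reports:</p>

```shell
#!/bin/sh
# Sketch: archive named Docker volumes to dated tarballs under /backup.
BACKUP_DIR="/backup"

# Pure helper: build the destination path for a volume and a timestamp.
archive_name() {
    printf '%s/%s_%s.tar.gz' "$BACKUP_DIR" "$1" "$2"
}

# Tar one volume via a throwaway container that mounts it read-only.
backup_volume() {
    vol="$1"
    stamp=$(date +%Y%m%d%H%M)
    dest=$(archive_name "$vol" "$stamp")
    docker run --rm \
        -v "${vol}:/data:ro" \
        -v "${BACKUP_DIR}:/backup" \
        alpine tar czf "/backup/$(basename "$dest")" -C /data .
}
```

<p>Loop it over <code>docker volume ls -q</code> from a nightly cron entry and prune old archives so the disk does not fill. For database volumes, prefer a logical dump (e.g. <code>pg_dump</code>) over a raw file copy, since tarring a live data directory may capture an inconsistent state.</p> <p>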
You retain the simplicity of the single-Pi setup while expanding capability.</p> <h1 data-number="12" id="conclusion"><span class="header-section-number">12</span> Conclusion</h1> <p>A Raspberry Pi running Alpine Linux, Docker, and Caddy becomes a domain-addressable platform for your own services. Residential networking constraints are solved with dynamic DNS automation and forced TLS termination on a single port. The result is a system simple enough to understand end-to-end, robust enough for personal projects, and minimal enough to run on hardware drawing a few watts of power. It demonstrates that <q>cloud native</q> principles can be rebuilt at home, at the edge, without surrendering security or usability.</p>
