
Building a Kubernetes Cluster on Raspberry Pi with Talos Linux

A step-by-step companion to Episode 1 of the Kubernetes on Raspberry Pi series. Covers hardware setup, Talos Linux installation, cluster bootstrapping, and network configuration.


Six Raspberry Pi 4s, a rack case, a switch, and some MicroSD cards. That's the entire hardware budget for a production-grade Kubernetes cluster: one control plane node and five workers running Talos Linux. This is the companion article to Episode 1 of the Kubernetes on Raspberry Pi series.

All YAML configs and scripts referenced here are in the kubernetes-series GitHub repo under video-01-cluster-build/.

Hardware

| Component | Model | Qty | Affiliate Link |
|---|---|---|---|
| Raspberry Pi 4 (4GB) | Raspberry Pi 4 Model B | 6 | Amazon |
| Cluster case | GeeekPi 6-Layer Rack | 1 | Amazon |
| USB power | Anker 60W 6-Port Charging Station | 1 | Amazon |
| Switch | TP-Link 16-Port Gigabit | 1 | Amazon |
| MicroSD | 32GB+ (Class 10) | 6 | Amazon |

Affiliate disclosure: Some links in the hardware table above are affiliate links. If you purchase through them I may earn a small commission at no extra cost to you. I only link to hardware I've actually used in this series.

Note: 4GB Pi 4s are sufficient for this series. You don't need 8GB models.

Why Talos Linux?

Talos is a minimal, immutable Linux OS designed specifically for Kubernetes. There's no SSH, no shell, and no package manager. You manage everything through a declarative API (talosctl). This might feel restrictive at first, but it's exactly what makes it production-grade: minimal attack surface, consistent and reproducible node state, and API-driven configuration that eliminates config drift.

The tradeoff is that Talos-specific quirks come up regularly throughout this series, especially around Pod Security Standards (PSS) and node configuration. We'll tackle those as they arise.

Network Layout

Before touching a node, plan your static IPs. Talos nodes need stable addresses. DHCP reservations or static assignment both work. Here's the layout used in this series:

| Node | IP |
|---|---|
| Control Plane | 10.51.50.200 |
| Worker 1 | 10.51.50.201 |
| Worker 2 | 10.51.50.202 |
| Worker 3 | 10.51.50.203 |
| Worker 4 | 10.51.50.204 |
| Worker 5 | 10.51.50.205 |

Adjust these to fit your network. The important thing is they don't change after bootstrapping.
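If you prefer static assignment over DHCP reservations, the addresses can be pinned directly in the Talos machine config. Here's a minimal sketch of the relevant section for the control plane; the interface name, gateway, and hostname are assumptions for illustration, so check the field paths against the machine config reference for your Talos version:

```yaml
# Hypothetical fragment of controlplane.yaml; eth0, the gateway address,
# and the hostname are placeholders for your environment.
machine:
  network:
    hostname: talos-cp-1
    interfaces:
      - interface: eth0
        addresses:
          - 10.51.50.200/24
        routes:
          - network: 0.0.0.0/0
            gateway: 10.51.50.1
```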

Installation

Start by flashing Talos to each MicroSD card. Download the Talos Raspberry Pi image from the Talos releases page, then flash it with dd:

# Flash with dd (replace /dev/sdX with your card)
xzcat talos-rpi4-arm64.img.xz | sudo dd of=/dev/sdX bs=4M status=progress && sync
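Since dd will overwrite whatever device you point it at, a small guard that refuses to touch a device with mounted partitions is cheap insurance. This is a hypothetical helper, not part of the series repo, and the final line only echoes the command as a dry run; remove the echo to actually flash:

```shell
# Hypothetical safety wrapper around the dd invocation above.
flash_card() {
  dev="$1"; img="$2"
  # Refuse to write if any partition of the device is currently mounted.
  if grep -q "^$dev" /proc/mounts 2>/dev/null; then
    echo "refusing: $dev has mounted partitions" >&2
    return 1
  fi
  # Echoed as a dry run; drop the echo to perform the real flash.
  echo "xzcat $img | sudo dd of=$dev bs=4M status=progress && sync"
}

flash_card /dev/sdX talos-rpi4-arm64.img.xz
```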

Repeat for all 6 cards. Once they're flashed and inserted, install talosctl and generate your cluster configs:

talosctl gen config my-cluster https://10.51.50.200:6443 \
  --output-dir ./video-01-cluster-build/

This produces three files: controlplane.yaml, worker.yaml, and talosconfig. Boot your control plane node with the flashed SD card and apply config to it:

talosctl apply-config \
  --nodes 10.51.50.200 \
  --file ./video-01-cluster-build/controlplane.yaml \
  --insecure

Then apply the worker config to each of the five worker nodes:

for ip in 10.51.50.201 10.51.50.202 10.51.50.203 10.51.50.204 10.51.50.205; do
  talosctl apply-config \
    --nodes $ip \
    --file ./video-01-cluster-build/worker.yaml \
    --insecure
done
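Because the five worker addresses are consecutive, you can also generate the list with seq instead of typing it out. A trivial variant of the loop above, echoed here as a dry run (swap the echo for the real talosctl invocation):

```shell
# The worker IPs 10.51.50.201-205 are consecutive, so seq can build the list.
for last in $(seq 201 205); do
  ip="10.51.50.$last"
  # In a real run this line would be:
  #   talosctl apply-config --nodes "$ip" --file ./video-01-cluster-build/worker.yaml --insecure
  echo "applying worker.yaml to $ip"
done
```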

With all nodes configured, bootstrap the cluster. This step runs once, on the control plane only:

talosctl bootstrap \
  --nodes 10.51.50.200 \
  --talosconfig ./video-01-cluster-build/talosconfig

Common mistake: Running bootstrap more than once will break the cluster. If something goes wrong, re-flash and start over.

Finally, retrieve the kubeconfig and verify all nodes are healthy:

talosctl kubeconfig \
  --nodes 10.51.50.200 \
  --talosconfig ./video-01-cluster-build/talosconfig \
  ./kubeconfig

export KUBECONFIG=./kubeconfig
kubectl get nodes

You should see all six nodes with Ready status within a few minutes. If a node shows NotReady, give it another minute. Talos takes a bit to fully initialize on Pi hardware.
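If you'd rather script the readiness check than eyeball it, counting Ready rows in kubectl's tabular output is enough. A sketch: the node names and versions in the canned sample below are made up for illustration; in practice you'd pipe `kubectl get nodes` into the function:

```shell
# Count nodes whose STATUS column reads Ready (skips the header row).
count_ready() {
  awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}

# Illustration with canned output; a real check is: kubectl get nodes | count_ready
count_ready <<'EOF'
NAME            STATUS     ROLES           AGE   VERSION
talos-cp-1      Ready      control-plane   5m    v1.30.0
talos-worker-1  Ready      <none>          4m    v1.30.0
talos-worker-2  NotReady   <none>          4m    v1.30.0
EOF
# prints 2
```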

What's Next

A cluster without workloads is just expensive blinking lights. In Episode 2 we set up NFS persistent storage so our applications have somewhere to keep their data.
