# Building a Raspberry Pi Cluster

Building a Kubernetes cluster using Raspberry Pis and k3s, to deploy small personal projects on.
## Hardware
- 3 x 4GB Raspberry Pi 4
- 3 x 32GB SD Cards
- 3 x Raspberry Pi Power Adapters
- TP-Link TL-WR802N Nano Router
- TP-Link TL-SG105S Network Switch
- 4 x Ethernet Cables
## Installing OS
- For this build, we are using the Ubuntu Server 20.04.5 LTS (64-bit) OS.
- There is a Raspberry Pi Imager tool that can be used to write the OS to an SD card.
- It gives you the option to set up WiFi automatically (we’re not doing this), and also to enable SSH access.
- At the moment we’re just using SSH password access.
## Networking
- This project uses the TP-Link TL-WR802N Nano Router so we can connect the Pis to the home network without relying on the Pis’ built-in WiFi.
- We are using Client mode on the nano router: it wirelessly connects to the main home network, and then connects to a network switch via Ethernet. The network switch is the TP-Link TL-SG105S, and this connects to the Pis via Ethernet. This means the Pis will be connected to the main home network.
- Within the main home router admin panel, it was found that the DHCP range was 192.168.0.10 - 192.168.0.254.
- During the nano router setup in Client mode, it lets you choose a ‘Smart IP’ mode, which effectively sets the nano router’s IP to the same as the main home router’s (192.168.0.1).
- From then onwards, the TP-Link admin panel can’t be accessed, because its IP will be the same as the main home router admin panel’s (192.168.0.1).
- When you place the SD cards into the Pis and boot them up, they probably won’t appear in the DHCP Clients list in the main home router admin panel.
- They should appear in an IP scanner such as Angry IP Scanner (or in a command-line sweep, shown after this list).
- Once you’ve found their IPs, SSH into them using `ssh <user>@<ip-address>` or `ssh <user>@<hostname>.local`.
- Once SSH’d in, use `ip addr show` to find each Pi’s MAC address.
- We want the Pis to have reserved (static) IPs so they don’t get new IPs every time they reboot.
- The reservation can be done in the main home router’s admin panel.
- For Virgin Media, this is in Advanced Settings -> DHCP -> Add reserved rule.
- Enter the IPs and MAC addresses found previously.
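If you prefer the command line to a GUI scanner, a quick ping sweep with nmap (assuming it’s installed, and that your home network is 192.168.0.0/24 as above) will also surface the Pis; running it with sudo lets nmap report MAC addresses too:

```
# Ping-scan the whole /24 and list the hosts (and MACs) that respond
sudo nmap -sn 192.168.0.0/24
```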
For this build, the MAC addresses, hostnames and static IP addresses of the Pis are:

| hostname | MAC Address | IP Address |
|---|---|---|
| mmiles-pi-master-00 | XX:XX:XX:XX:C4:70 | 192.168.0.46 |
| mmiles-pi-worker-00 | XX:XX:XX:XX:6C:67 | 192.168.0.45 |
| mmiles-pi-worker-01 | XX:XX:XX:XX:3C:37 | 192.168.0.47 |
## Installing k3s
Run the following on each of the Pis once SSH’d into them:

```
sudo apt upgrade -y
sudo apt install -y docker.io
sudo docker info    # sanity-check that Docker installed OK

# Append the cgroup settings k3s needs to the kernel boot parameters
sudo sed -i \
  '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1/' \
  /boot/firmware/cmdline.txt

sudo reboot
```
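After the reboot, it’s worth sanity-checking that the cgroup flags actually made it onto the kernel command line:

```
# Should print the running kernel command line, including the cgroup_* flags
grep cgroup /proc/cmdline
```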
Then on the Pi you want to be the ‘master’:

```
curl -sfL https://get.k3s.io | sh -
```
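Once the installer finishes, you can confirm k3s came up properly:

```
sudo systemctl status k3s      # service should be active (running)
sudo k3s kubectl get nodes     # the master should report Ready after a minute or so
```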
On the master Pi, you will also have to add its IP to /etc/systemd/system/k3s.service:

```
sudo vim /etc/systemd/system/k3s.service
```

ExecStart before:

```
ExecStart=/usr/local/bin/k3s \
    server \
```

ExecStart after:

```
ExecStart=/usr/local/bin/k3s \
    server --node-external-ip <master_node_ip> \
```
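systemd won’t pick up the edited unit file on its own, so reload it and restart k3s afterwards:

```
sudo systemctl daemon-reload
sudo systemctl restart k3s
```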
Then get a token from the master Pi:

```
sudo cat /var/lib/rancher/k3s/server/node-token
```
Then on the worker node, run the following (replace $YOUR_SERVER_NODE_IP with the master Pi’s IP and $YOUR_CLUSTER_TOKEN with the token from above):

```
curl -sfL https://get.k3s.io | K3S_URL=https://$YOUR_SERVER_NODE_IP:6443 K3S_TOKEN=$YOUR_CLUSTER_TOKEN sh -
```

Repeat this for each worker node you want to add.
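On the workers, the k3s agent runs as the k3s-agent service, so you can check that each one joined in the same way:

```
sudo systemctl status k3s-agent    # should be active (running) once joined
```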
## Kube Config File
Back on your master node, there should be some config stored in the k3s config file:

```
sudo k3s kubectl config view --raw
```

This probably won’t appear on the worker nodes.

We’re going to copy this into the standard ~/.kube/config file:

```
export KUBECONFIG=~/.kube/config
mkdir ~/.kube 2> /dev/null
sudo k3s kubectl config view --raw > "$KUBECONFIG"
chmod 600 "$KUBECONFIG"
```
We’ll also add `export KUBECONFIG=~/.kube/config` to ~/.profile and ~/.bashrc on the master node so it persists across reboots.
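That’s just a one-liner appended to each file:

```
echo 'export KUBECONFIG=~/.kube/config' >> ~/.profile
echo 'export KUBECONFIG=~/.kube/config' >> ~/.bashrc
```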
Now you should be able to run:

```
k3s kubectl get nodes
```

or just `kubectl get nodes`, and see the master and any worker nodes you’ve attached.
## Local Machine Access
Firstly, your local machine must be connected to the main home network.

Then install kubectl on your machine:

```
brew install kubernetes-cli
```
Then go into your master Pi and get the k3s config:

```
sudo k3s kubectl config view --raw
```

Copy it and put it in your local machine’s ~/.kube/config file.
You will probably need to replace the server field in the config from:

```
server: https://127.0.0.1:6443
```

to whatever the actual static IP of your master node is:

```
server: https://192.168.0.46:6443
```
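If you’d rather script that edit, a sed substitution does the same thing (the empty `''` after `-i` is the macOS/BSD sed syntax; drop it on Linux, and swap in your own master IP):

```
sed -i '' 's|https://127.0.0.1:6443|https://192.168.0.46:6443|' ~/.kube/config
```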
Then if you do a `kubectl get nodes` from your local machine, you should get the status of the Pi cluster:

```
NAME                  STATUS   ROLES                  AGE   VERSION
mmiles-pi-worker-00   Ready    <none>                 69m   v1.28.4+k3s2
mmiles-pi-master      Ready    control-plane,master   81m   v1.28.4+k3s2
mmiles-pi-worker-01   Ready    <none>                 13m   v1.28.4+k3s2
```
## Namespaces
- Namespace and other kube manifests are saved in the GitHub repo.
- There are currently `test` and `prod` namespaces.
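As a sketch, a namespace manifest is tiny; the `test` one would look something like this (applied here via a heredoc, though in practice the manifests live in the repo):

```
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: test
EOF
```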
## Backups
I did find that one of the SD cards seemed to become corrupted at one point, and the OS needed to be reinstalled. It’s therefore best to routinely back up the SD cards so they can be restored and rejoin the cluster easily.
- Take the SD card out of the Pi and insert it into your machine.
- Use `diskutil list` to find the name of the SD card disk; it will be something like `/dev/disk2`.
- Back up the SD card using `sudo dd if=/dev/disk2 of='/path/to/backup/location/image_name.dmg'` (this might take a while).
- Do this for each SD card / Pi.
Then if you need to restore an SD card using one of the image backups:

- Insert the SD card into your machine, and wipe it.
- Use `diskutil list` to find the name of the SD card disk; it will be something like `/dev/disk2`.
- Unmount it using `sudo diskutil unmountDisk /dev/disk2`.
- Restore from the image using `sudo dd if='/path/to/backup/location/image_name.dmg' of=/dev/disk2` (this might take a while).
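When dd finishes, flush writes and eject the card before pulling it out:

```
sync
sudo diskutil eject /dev/disk2
```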
## Next Steps
- Install Helm
- Buy a case!