NFS in the Raspberry Pi Cluster
Allowing a physical disk attached to one of the Pis to be shared across the Pi cluster, for persistent storage
Mount Physical Disk
- Firstly, choose a Pi node that you want to host the NFS server (this will be the one that has the external drive attached to it)
- For me, this was mmiles-pi-worker-00, which has an IP of 192.168.0.45
- Plug the drive into the Pi (for this example, I just used a small USB drive)
  - I had already formatted it with the MS-DOS (FAT) filesystem because it works with both macOS and Linux
- On Windows and macOS you can just open a drive from the file browser. Linux is different: you first need to mount it.
- SSH into the Pi where the drive is plugged in, and use sudo fdisk -l to find the name of the device
  - It will be called something like /dev/sda2
- Create a filesystem on the partition (this will wipe it and reformat it with the ext4 filesystem, so it might no longer appear on Mac/Windows machines):
sudo mkfs.ext4 /dev/sda2
- Create a folder to mount to (I am using /volumes/usb-drive):
sudo mkdir -p /volumes/usb-drive
- And then mount it:
sudo mount /dev/sda2 /volumes/usb-drive/
- Now your external drive is mounted at /volumes/usb-drive/
- We want this to be done automatically going forward, so find the unique ID of the drive using:
sudo blkid
- You should see something like this:
/dev/mmcblk0p1: LABEL_FATBOOT="system-boot" LABEL="system-boot" UUID="9FBA-406B" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="ef507fe6-01"
/dev/mmcblk0p2: LABEL="writable" UUID="f7380f37-4684-4f77-abe0-219400572e43" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="ef507fe6-02"
/dev/sda2: LABEL_FATBOOT="TESTDRIVE" LABEL="TESTDRIVE" UUID="07C8-1AF0" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="fcfe8b05-89f4-4b2d-b84e-139e86454449"
/dev/sda1: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="67E3-17ED" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="66d287e4-9e9a-41f7-9240-ed713a25b854"
/dev/loop1: TYPE="squashfs"
/dev/loop4: TYPE="squashfs"
/dev/loop2: TYPE="squashfs"
/dev/loop0: TYPE="squashfs"
/dev/loop5: TYPE="squashfs"
/dev/loop3: TYPE="squashfs"
- My external drive (/dev/sda2) has the unique ID 07C8-1AF0
- Edit the fstab file using sudo nano /etc/fstab
  - Add the following line, using your unique ID and mount location:
UUID=07C8-1AF0 /volumes/usb-drive/ vfat defaults 0 0
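- Before rebooting, you can sanity-check the new entry by asking the system to mount everything in fstab (a failed mount here is far easier to debug than a failed boot):
sudo mount -a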
- Then when you reboot using sudo reboot, you should find the drive is automatically mounted when you run df -h:
matt@mmiles-pi-worker-00:~$ df -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 185M 3.1M 182M 2% /run
/dev/mmcblk0p2 29G 6.9G 21G 26% /
tmpfs 923M 0 923M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/mmcblk0p1 505M 138M 367M 28% /boot/firmware
/dev/sda2 15G 24K 14G 1% /volumes/usb-drive
tmpfs 185M 4.0K 185M 1% /run/user/1000
- Note that after you add this line to /etc/fstab, the Pi might struggle to start if the drive isn't attached on startup (it is trying to mount a drive that isn't there). Connect the Pi to a monitor and keyboard to debug directly if needed.
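- One way to avoid this (I didn't use it in my setup, but it is a standard fstab mount option) is to add nofail, so boot continues even if the drive is absent:
UUID=07C8-1AF0 /volumes/usb-drive/ vfat defaults,nofail 0 0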
Setting Up NFS Server
- Now that the physical disk is mounted, you can set up the NFS server on the same Pi node
- There is a good page for reference, but I will summarise the highlights here:
- On your host (mmiles-pi-worker-00 in my case), install the nfs-kernel-server package:
sudo apt update
sudo apt install nfs-kernel-server
- On the other Pis, install the nfs-common package:
sudo apt update
sudo apt install nfs-common
- Fix the permissions for your mount directory:
sudo chown nobody:nogroup /volumes/usb-drive/
- Configure the NFS exports on the host Pi using sudo nano /etc/exports. You need to add the IPs of all the nodes in your cluster (including the host itself):
/volumes/usb-drive 192.168.0.45(rw,sync,no_root_squash,no_subtree_check)
/volumes/usb-drive 192.168.0.46(rw,sync,no_root_squash,no_subtree_check)
/volumes/usb-drive 192.168.0.47(rw,sync,no_root_squash,no_subtree_check)
- Then restart the server using sudo systemctl restart nfs-kernel-server
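- If you later change /etc/exports, you can also re-export without a full restart (exportfs ships with the NFS server tooling):
sudo exportfs -ra
sudo exportfs -v # lists the active exports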
- You might also need to adjust firewall settings on the host (see the reference article for more info; I didn't have to amend anything for my setup)
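- As a rough sketch, if you were running ufw, a rule scoped to the cluster subnet (assuming 192.168.0.0/24 here) might look like:
sudo ufw allow from 192.168.0.0/24 to any port nfs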
- You can test that it is working by SSH'ing onto another node and running:
sudo mkdir -p /tmp/nfs-test
sudo mount 192.168.0.45:/volumes/usb-drive /tmp/nfs-test
  - If the drive appears when you run df -h on the other node, then it has worked, and you can unmount it using sudo umount /tmp/nfs-test
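- For an end-to-end check, you can write a file through the mount and confirm it shows up on the host (the file name here is just an example):
sudo touch /tmp/nfs-test/hello-from-client
ls /volumes/usb-drive/ # run this on the host Pi; the file should appear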
Deploy NFS Provisioner in the Cluster
- To enable NFS in the cluster, we need to install the csi-driver-nfs helm chart:
helm install csi-driver-nfs /path/to/chart --kube-context mmiles-pi-cluster --namespace test
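- If you don't have the chart locally, the upstream project publishes a helm repo; at the time of writing, the kubernetes-csi/csi-driver-nfs README documents an install along these lines:
helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system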
Create StorageClass and PersistentVolumeClaims
- The StorageClass is a set of instructions about how to provision a volume
- Create a file called storage-class.yaml and populate it with:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi # call it whatever you want
provisioner: nfs.csi.k8s.io
parameters:
  server: 192.168.0.45 # needs to be the IP of your NFS server host Pi
  share: /volumes/usb-drive # needs to be the mount path of your drive on the host Pi
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - hard
  - nfsvers=4.1
- And then install using kubectl apply -f storage-class.yaml --context mmiles-pi-cluster --namespace test
- You should see a new StorageClass resource
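- A quick way to confirm (this is just the standard kubectl listing command; StorageClasses are cluster-scoped, so no namespace is needed):
kubectl get storageclass --context mmiles-pi-cluster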
- You can now create another file called pvc.yaml and populate it with:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc # name it whatever you want
  labels:
    app: plex-label # name it whatever you want
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-csi # should be the same as the StorageClass defined above
  resources:
    requests:
      storage: 1Gi # also configurable
- Deploy using kubectl apply -f pvc.yaml --context mmiles-pi-cluster --namespace test
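- You can check that the claim has been bound (the STATUS column should show Bound):
kubectl get pvc --context mmiles-pi-cluster --namespace test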
- This should create a PersistentVolumeClaim and a PersistentVolume resource in the cluster, which are provisioned/attached by the csi-nfs-controller that was installed in the step above
  - If you describe the volume, it will show that the VolumeHandle is 192.168.0.45#volumes/usb-drive
- You can now use the PVC in a kube Deployment, so that it persists data to the volume. You just need to configure the Deployment manifest to use the PVC (in the volumes section), as in the sketch below
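- As a minimal sketch (the names, container image, and mount path here are hypothetical, just for illustration), the relevant parts of a Deployment look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx # hypothetical image
          volumeMounts:
            - name: data
              mountPath: /data # hypothetical path inside the container
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: demo-pvc # the PVC created above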