Rebuilding pi-bernetes over and over again

While I use my homelab cluster for internal hosting and testing, I also spend significant time fixing and rebuilding it. Since I first posted about building the cluster, I’ve had to stop and rebuild it four or five times. I’ve made various improvements along the way and kept them documented in git, but at this point I still don’t have a repeatable build for the homelab cluster.

At the same time, my new job has led me to dust off ansible as an operational tool. I’ve used it in the past (and even ran meetups on it), but I hadn’t actually written any playbooks in years. This seemed like a good time to solve both problems at once!

Reacquainting with ansible & building the playbook

I remembered the syntax and logic of ansible, but a few things had changed since I last used it. Fortunately, one of those changes is a vscode extension that covers both ansible and ansible-lint! Most of my past playbooks targeted F5 and other network devices. Instead of hunting for a device to practice on, I started by building an inventory from the existing pi cluster and gathering facts about the hosts.
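
As a quick sanity check, an ad-hoc run of the ping and setup modules confirms ansible can reach every node and shows which facts it collects (the inventory filename here is just a placeholder for illustration):

# Confirm connectivity to the k3s group, then dump the gathered facts
ansible -i inventory.ini k3s -m ansible.builtin.ping --user pi --private-key ~/.ssh/pi_cluster
ansible -i inventory.ini k3s -m ansible.builtin.setup --user pi --private-key ~/.ssh/pi_cluster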

I found the official k3s-ansible playbook but didn’t want to start off using it. Ansible does a good job of abstracting away the mechanics and letting the end user declare their intent–but that’s not great for learning. I decided to start from scratch (for now) and create my own playbook based on my current installation with k3sup1. After many installations onto this same group of hardware, my current installation script looks like this:

k3sup install --host azeroth.local \
  --user pi \
  --ssh-key ~/.ssh/pi_cluster \
  --context azeroth \
  --cluster \
  --local-path ~/.kube/config \
  --merge \
  --k3s-extra-args '--flannel-backend=wireguard-native --disable=servicelb --disable=traefik' \
  --k3s-version=v1.28.2+k3s1
for host (brokenisles eastking kalimdor northrend pandaria)
do
  k3sup join --host ${host}.local \
    --server-host 10.20.40.100 \
    --user pi \
    --ssh-key ~/.ssh/pi_cluster \
    --k3s-version=v1.28.2+k3s1
done

With ansible, you need both a playbook (which contains plays and tasks) and an inventory file. To keep it simple, I wanted an inventory where I just list the hosts and ansible determines which one gets the control plane role. (Yes, my theme this time is World of Warcraft worlds/continents–Lok’tar ogar!).

[k3s]
azeroth
eastking
kalimdor
brokenisles
northrend
pandaria

For the playbook, I used the same strategy: I moved the arguments into a new playbook and ran the commands against the appropriate hosts.

- name: K3S control plane
  hosts: k3s[0]
  tasks:
    - name: Install K3S
      ansible.builtin.command:
        argv:
          - k3sup
          - install
          - --host={{ ansible_facts['hostname'] }}.local
          - --user
          - pi
          - --ssh-key
          - ~/.ssh/pi_cluster
          - --context
          - azeroth
          - --cluster
          - --local-path
          - ~/.kube/config
          - --merge
          - --k3s-extra-args
          - '--flannel-backend=wireguard-native --disable=servicelb --disable=traefik'
          - --k3s-version=v1.28.2+k3s1
      delegate_to: localhost
    - name: Record control plane IP
      ansible.builtin.set_fact:
        server_host: "{{ ansible_facts['default_ipv4']['address'] }}"
- name: K3S worker plane
  hosts: k3s[1:]
  tasks:
    - name: Host and IP (debug)
      ansible.builtin.debug:
        msg: "{{ ansible_facts['hostname'] }}: {{ ansible_facts['default_ipv4']['address'] }}"
    - name: Install K3S
      ansible.builtin.command:
        argv:
          - k3sup
          - join
          - --host={{ ansible_facts['hostname'] }}.local
          # server_host is a fact set on the first k3s host, so read it from that host's vars
          - --server-host={{ hostvars[groups['k3s'][0]]['server_host'] }}
          - --user
          - pi
          - --ssh-key
          - ~/.ssh/pi_cluster
          - --k3s-version=v1.28.2+k3s1
      delegate_to: localhost
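
Running this interim playbook looks something like the following (the playbook and inventory filenames are just placeholders):

ansible-playbook -i inventory.ini k3sup-install.yaml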

Breaking it down, this playbook repeats my custom installation, just wrapped in ansible. It’s not ideal, but it gave me enough exposure to ansible (again) to move on to my goal: using the k3s-ansible playbook.

Adding k3s-ansible to the project

After nuking the cluster once again…I cloned the project, changed my inventory to match the new format, and got the cluster up and running again pretty easily! I then tried moving the playbook into my homelab folder, ran it…and it broke!
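
For reference, the project uses a YAML inventory that groups hosts into server and agent roles under a k3s_cluster group. Roughly (variable names are from memory of the sample inventory and may differ between versions):

k3s_cluster:
  children:
    server:
      hosts:
        azeroth:
    agent:
      hosts:
        brokenisles:
        eastking:
        kalimdor:
        northrend:
        pandaria:
  vars:
    ansible_user: pi
    k3s_version: v1.28.2+k3s1
    token: changeme   # placeholder cluster token
    api_endpoint: "{{ hostvars[groups['server'][0]]['ansible_host'] | default(groups['server'][0]) }}"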

I had copied the playbooks but not the roles, so the directory structure wasn’t in proper order. I also knew that by copying files out of the project, I’d miss any future updates to the public repo. Since I wanted to be able to pull updates down, I instead imported the repo as a git submodule and symlinked the folders I needed into the right spots.

I wanted to hide the submodule(s) (anticipating more of this pattern) and symlink the parts I need out of a hidden folder. Thus, I created the folder .submodules and added the submodule there.

mkdir .submodules
git submodule add https://github.com/k3s-io/k3s-ansible.git .submodules/k3s-ansible
git submodule init

For the playbooks, I wanted a place where I could pull in the submodule’s playbooks but also store and create my own. I anticipate needing to add a few things to the cluster immediately after it’s built (a LoadBalancerClass, CSI drivers, etc.), and I want a single playbooks folder at the root of the project.

mkdir playbooks
# the symlink target is resolved relative to the link's directory (playbooks/)
ln -s ../.submodules/k3s-ansible/playbooks playbooks/k3s-cluster

I need the roles to make the playbooks work, but wanted to carry them over individually in case I add roles of my own.

mkdir roles
for role in $(ls .submodules/k3s-ansible/roles/)
do
    # again, the target is relative to the link's directory (roles/)
    ln -s ../.submodules/k3s-ansible/roles/$role roles/$role
done

I had to point the role and inventory lookups at the root of the project in ansible.cfg. I also enabled fact caching–for what I do, it doesn’t hurt.

[defaults]
roles_path = ./roles
inventory  = ./inventory.yaml
fact_caching = jsonfile
fact_caching_connection = ~/.ansible/cache

A repeatable working cluster

With all this done, I can run ansible-playbook playbooks/k3s-cluster/site.yaml and off we go!

PLAY [Cluster prep] ***************************************************************************************

TASK [Gathering Facts] ************************************************************************************
ok: [kalimdor]
ok: [northrend]
ok: [eastking]
ok: [brokenisles]
ok: [azeroth]
ok: [pandaria]

...

PLAY RECAP ************************************************************************************************
azeroth            : ok=31   changed=6    unreachable=0    failed=0    skipped=46   rescued=0    ignored=0
brokenisles        : ok=20   changed=3    unreachable=0    failed=0    skipped=38   rescued=0    ignored=0
eastking           : ok=20   changed=3    unreachable=0    failed=0    skipped=38   rescued=0    ignored=0
kalimdor           : ok=20   changed=3    unreachable=0    failed=0    skipped=38   rescued=0    ignored=0
northrend          : ok=20   changed=3    unreachable=0    failed=0    skipped=38   rescued=0    ignored=0
pandaria           : ok=20   changed=3    unreachable=0    failed=0    skipped=38   rescued=0    ignored=0

There’s still work to do. I need to add all the components and operators that I plan to use, and put my services back in a reusable (and backed-up) format. Stay tuned!2

  1. I used k3sup to build this cluster before (and during). I still think it’s a great project and makes it easy for someone playing around to get started. My needs have changed, and thus k3sup isn’t optimal for me right now. ↩︎
  2. …assuming I actually write those blog posts! Encouragement helps! ↩︎

Building PI-BERNETES: a home lab

I bought my first Raspberry Pi (B+) in 2014 when they first launched. I remember buying it because I was spending my time coding but wanted to do so on personal hardware that was accessible and replaceable, and the B+ was $35 USD at the time. I still have it, and it still works (though not in use today).

At the time of writing, I have 23 different single board computers (SBCs), but I was mostly intrigued by the Raspberry Pi 4 because of its arm64 architecture and 4 GB of available RAM. So I set out to build something completely unnecessary and yet fun–a Kubernetes cluster out of Raspberry Pies!

Design Phase

I turned to the one “true” source for inspiration: the internet. #100DaysOfHomeLab

  • A case I really like for how clean it looks!
  • A really neat project with some additional ideas on interfacing between the cluster and the environment.

I found a few ideas and started to figure out what my design considerations were.

  • Cable management and airflow are important. As an ex-Network Engineer (those skills have yet to leave me), I wanted to keep the boards running cool without a lot of noise, and that meant spending a little extra on power over ethernet (PoE).
  • Modular and expandable. I’ve seen the TuringPi boards, but they don’t fit my needs since I want to be able to remove or add boards without affecting the surrounding components.
  • Mix of compute and storage. I knew some workloads would need more storage than I’d (reasonably) want to fit on an SD card, so I wanted the cluster to support both compute units and storage units. In this case, that just means mounting hard drives in bays and attaching them to a Raspberry Pi.
  • Self-sustaining. I plan to use this cluster to operate my home automation and to run private services for projects and community contributions outside of work, so I don’t want to depend on any external services that I can’t swap out.

Hardware

Software

Selecting a container scheduler. Given my experience with containers, I knew that I wanted to run containers across these devices. With arm64 architectures being commercialized at scale through AWS Graviton, Apple silicon, Azure VMs, and GCP’s Tau series compute, I wanted an arm64-based distro capable of running containers. Since I wanted to keep the cluster self-sustained, I ruled out the typical AWS services like ECS Anywhere and EKS Anywhere because they have to communicate with the cloud on some level (plus EKS Anywhere doesn’t have arm64 support yet!). Given how much work I do with kubernetes, I wanted to select a k8s distro and ultimately chose K3s (pronounced “kates”) because it’s backed by SUSE (Rancher), is lightweight (which saves resources for running containers), and has packaging included.

Packaging with addons. Since kubernetes doesn’t provide a lot of services on its own (by design), there are a few things to include in this cluster build to offer the same services and kubernetes resources you would get from a cloud-based distribution. K3s includes helm, ServiceLB, and traefik–but the last two were hard to customize, so I disabled them and installed traefik on my own, plus MetalLB for load balancing. Since some of the nodes have extra storage, I also wanted a storage controller that could schedule pods needing hot storage onto the nodes with SSDs, and I selected longhorn.
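
For illustration (not the exact commands I ran), the bundled components can be disabled at install time by passing extra flags to the k3s install script:

# Illustrative only: skip the built-in ServiceLB and Traefik when installing k3s
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=servicelb --disable=traefik" sh -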

Customizing these addons wasn’t difficult, but as with many open source solutions, documentation that differs between versions can be a real problem. For example, MetalLB recently switched from a ConfigMap to CRDs for defining its resources, so it took some extra digging (and a few community write-ups) to get it running.
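
For reference, the CRD-based configuration ends up looking roughly like this (the pool name and address range below are placeholders, not my actual network):

# Placeholder address pool; use a free range from your own LAN
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.20.40.200-10.20.40.220
---
# Announce the pool on the local layer 2 segment
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool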

Traefik required some customization, mainly to its helm chart, to automatically use the MetalLB load balancer VIP and to enable IngressClass resources. I also added cert-manager to support encrypted endpoints using Let’s Encrypt.
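
A sketch of the kind of helm values involved, assuming the traefik/traefik chart (key names vary between chart versions, so treat this as illustrative):

# values.yaml sketch: pin the service to a MetalLB VIP and register an IngressClass
service:
  spec:
    loadBalancerIP: 10.20.40.200   # placeholder VIP from the MetalLB pool
ingressClass:
  enabled: true
  isDefaultClass: true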

Instead of trying to list every customization, I also spent some time making this process repeatable. I originally bought all this hardware in 2020 and built a cluster, but I ran into problems early and made too many changes to record. This time, I made sure I documented the process as I went. My manifests and notes will all end up in a GitHub repo (with the secrets removed) so anyone else can learn from my experiences.

What’s the point?

So far, other professionals would tell you that I have a working kubernetes cluster that does absolutely NOTHING. Why connect all of these nodes together? What can you do with it?

Since I’ve been an operator for most of my career, I tend to get everything ready for use before building a single thing. But I do have ideas for what to run on this cluster and how it will be used.

  • Home automation. I currently have Home Assistant running on its own Raspberry Pi (as one of the blades in the picture), but I’d like to move this to containers and work with that community on repeatable processes.
  • Git server. Sometimes, there are code projects you don’t want out on the public internet. I plan to run Gitea on this cluster and back it on the SSDs.
  • Home cloud. If you develop on AWS and haven’t seen LocalStack, I highly recommend checking it out. The idea started with tools like lambda-local and dynamodb-local but quickly expanded, and it has added arm64 support.
  • Minecraft server. Because I have kids, and one of them is learning to program.
  • Media server. I have a bunch of DVDs and Blu-rays that never get used because I’m too lazy to put a disc in the tray, so I’m going to digitize them and host them on Plex or something similar.
  • Code server. It’s been a dream of mine to work from a tablet, and coding always tends to be one of those misses. At least with code-server, I can make it easy to use an IDE (as long as there’s reliable internet).
  • Donate unused compute. There’s services like Folding@home and BOINC that allow scientific & academic communities to run their code on remote machines, and I can donate my “unused” CPU cycles to one of these programs. I’ll of course prioritize my own workloads, but if I’m not using those cycles then they might as well go to a good cause.
  • Random sparks or ideas. Because I had set most of this up before KubeCon North America 2022, I had a running cluster ready for coding challenges and for testing new projects and ideas, and I was able to complete most of the challenges on the showroom floor, during sessions, or back at the hotel.

Ultimately, having this cluster gives me the freedom to run side projects and test various ideas from my house. It’s not production-ready, but rather experimentation-ready!