Building a smart device without writing code

A while ago (2019 according to the repo), I was learning about the Internet of Things (IoT) and went through the process of prototyping a smart indicator light. I made it communicate with AWS IoT so that I could both change the color of the light on the device (and it would report to the cloud) and change the color in the cloud (and have the indicator light change).

Separately, I’ve also been spending more time on home automation. I have Home Assistant set up in my house and had been reading about ESPHome but hadn’t come up with a good test project–so I decided to repurpose my indicator light to work with ESPHome!

Original Hardware

I wanted to see whether I could use the original hardware without modifications–mostly because instead of recreating the board, I just dusted it off…

Original prototype on breadboard

As the name may imply, ESPHome requires hardware based on the ESP32 or ESP8266 (or RP2040, but that’s for another time) chipset. I originally made this prototype on a Raspberry Pi but wanted a smaller form factor for portability while retaining the ability to connect over Wi-Fi. I’ve been using the Adafruit Feather HUZZAH on a few projects–and had back then too–so I stuck with it.

Removing the Software

This project has been through a few iterations of software. I started with Python on the Raspberry Pi [code]. It used the GPIO to control each leg of the RGB LED and communicated with an IoT shadow in the cloud for the status. This meant AWS IoT was my interfacing layer, and I could build a web-based GUI, Alexa skill, or mobile app to control the light.

Switching to the Feather meant changing programming languages. While MicroPython was an option, it still required an interpreter at runtime and lacked the benefits of compiled code. I also wrote the program in C (using Arduino), but I never became proficient with C’s syntax and ended up using JavaScript (via Mongoose OS). The trouble with all of these approaches is that the intent is simple–apply power to a pin when a condition is met–but expressing it in code is not.

ESPHome has a different approach–you declare which components to use and provide the configuration for those components, then ESPHome compiles the modules and configuration together and produces an artifact that can be loaded onto the device. Anyone familiar with kubernetes will recognize this pattern: declare your intent in a resource file and let kubernetes build it. With ESPHome, I declare the light and which pins to use for output, and it builds the rest of it for me.

This is the configuration section for the LED in ESPHome:

light:
  - platform: rgb
    id: torch_led
    name: "torch_light"
    red: led_red
    green: led_green
    blue: led_blue

output:
  - id: led_red
    platform: esp8266_pwm
    pin: GPIO14
    inverted: true
  - id: led_green
    platform: esp8266_pwm
    pin: GPIO12
    inverted: true
  - id: led_blue
    platform: esp8266_pwm
    pin: GPIO13
    inverted: true

While this appears simple enough, I still had to spend time learning the different values, but I was able to piece it together by looking at the examples on ESPHome’s website.

There’s an added benefit to “no code” solutions like ESPHome: features such as Wi-Fi, Over The Air (OTA) updates, and API integration are included. In every programming language I tried, adding these features meant extra lines of code and setup, but ESPHome packages them as part of the configuration and build process. Much of the code in the earlier revisions was dedicated to Wi-Fi and API connectivity, with only a small section actually controlling the physical hardware. I added OTA when moving to ESPHome, and wrote fewer lines as a result of the switch!
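For reference, the connectivity pieces are just a few more blocks of YAML–a minimal sketch, assuming the credentials live in ESPHome’s secrets file (exact keys vary slightly by ESPHome version):

# Wi-Fi connectivity
wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

# Over The Air updates
ota:
  password: !secret ota_password

# Native API that Home Assistant connects to
api:

That’s the entire “networking stack” for the device–no connection handling or retry logic to write.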

Migrating from AWS IoT to Home Assistant

While I was able to build interfaces that worked with IoT and the cloud, I wanted something that was already interconnected and didn’t require me to build the integrations. I also prefer to keep control traffic as local as possible. While the cloud rarely goes offline, my internet connection is much more susceptible to outages, which would render the light inoperable. With a local brain, I don’t depend on either.

As with this project, I’ve built Home Assistant a few times over the years and have been slowly expanding it to incorporate the features I need. I have the Home Assistant Podcast in my feed, and it seems like everyone kept mentioning ESPHome and how it integrates with Home Assistant. Plus, ESPHome is easily run as an add-on for Home Assistant. However, the best part is that the interfacing is done for me! When I create the light in ESPHome, the device and entity show up in Home Assistant and include the interfacing.

The color and brightness controls come automatically in Home Assistant since I selected an RGB light as the platform in ESPHome.
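Once the entity exists, it behaves like any other Home Assistant light. A hypothetical automation to show what I mean–the sensor below and the exact entity IDs are assumptions, not pulled from my actual setup:

automation:
  - alias: "Torch red when motion is detected"
    trigger:
      - platform: state
        entity_id: binary_sensor.hallway_motion   # hypothetical sensor
        to: "on"
    action:
      - service: light.turn_on
        target:
          entity_id: light.torch_light            # assumed from the "torch_light" name
        data:
          rgb_color: [255, 0, 0]
          brightness_pct: 80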

Because I’ve integrated Home Assistant with Alexa, I also automatically get an Alexa interface through the Alexa app as well as voice control!

Alexa app also automatically can control the light.

Okay, now what?

What’s the point of an RGB light that’s “smart-controlled”? The device isn’t practical–but it’s one of the first small-board projects I built, and I’ve spent a lot of time with it. I’d already completed this project, but I was able to repurpose it and discover something new. So the point–is discovery.

That’s because great achievement has no road map. The X-Ray is pretty good, and so is penicillin, and neither were discovered with a practical objective in mind. I mean, when the electron was discovered in 1897, it was useless. And now we have an entire world run by electronics. Haydn and Mozart never studied the classics. They couldn’t. They invented them.

Dr. Dalton Milgate, excerpt from the fictional series The West Wing S3E16

The project itself is a learning tool–now that I’ve made this work, I’ve also been able to add smart controls to a LEGO set with lights. As I automate my house, if I need a random motion sensor that communicates with MQTT, I can build it and integrate it quickly!
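To give a sense of scale, that motion sensor is only a handful of lines of configuration–a sketch with placeholder values (the broker address and GPIO pin are assumptions, not actual wiring):

mqtt:
  broker: 192.168.1.10   # placeholder broker address

binary_sensor:
  - platform: gpio
    pin: GPIO5           # placeholder pin for the PIR sensor output
    device_class: motion
    name: "Hallway Motion"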

ESPHome also makes home automation more accessible to everyone. I speak more programming languages than human languages–but not everyone does. Writing a config file is much easier than writing code, and it cuts down on development time. Less time on software means I also get more time on hardware!

Image from Yarn

Cheap and quick Mastodon alias

EDIT: The format is JRD+JSON per RFC 7033. Changed the reference below, and thanks to mdaniel on HN.

With the uncertainty of Twitter looming over us, I did what everyone else in the community did and looked to alternatives, including Mastodon. The appeal of Mastodon is the distributed nature, but that’s also a pitfall to muggles (non-technicals).

I wanted a simple alias for finding my Mastodon name, so I went and purchased salvo.chat. I have a number of Twitter aliases (because of course I do!) so I wanted something incredibly simple. Unfortunately, most of the Mastodon hosting providers are completely overwhelmed right now as we all figure out how to deal with the influx of demand.

I did consider setting up a Mastodon server, but I definitely started to over-engineer it (was gonna host it in Kubernetes on my homelab) so instead I changed gears and said “what can I do fast that’s a temporary alias?”

There are a few technologies I’ve been wanting to try but hadn’t found a use case for. I needed something now that’s cost-predictable, simple, and easy to set up–so I put these together real quick:

  • DigitalOcean droplet (1 vCPU, 512 MB RAM, 10 GB SSD, $4/mo)
  • Caddy server

The droplet was simple enough, and I get a credit for two months (somehow I’ve never used DO before), which gives me time to customize and find a long-term solution. However, the exciting part is using Caddy. It’s written in Go and includes some nice features, notably automatic HTTPS. This means that with almost NO configuration I can have a secure website that serves as an alias for my Mastodon account.

I wasn’t really sure how to get it working, but fortunately I came across Mastodon on your own domain without hosting a server by Maarten Balliauw, which walked me through the technical details. I took his discovery and used it to set up my alias server.

Steps to recreate

First I had to get my webfinger details from the current provider–a simple cURL helps here.

$ curl https://mastodon.cloud/.well-known/webfinger?resource=acct:buzzsurfr@mastodon.cloud
{
    "subject": "acct:buzzsurfr@mastodon.cloud",
    "aliases": [
        "https://mastodon.cloud/@buzzsurfr",
        "https://mastodon.cloud/users/buzzsurfr"
    ],
    "links": [
        {
            "rel": "http://webfinger.net/rel/profile-page",
            "type": "text/html",
            "href": "https://mastodon.cloud/@buzzsurfr"
        },
        {
            "rel": "self",
            "type": "application/activity+json",
            "href": "https://mastodon.cloud/users/buzzsurfr"
        },
        {
            "rel": "http://ostatus.org/schema/1.0/subscribe",
            "template": "https://mastodon.cloud/authorize_interaction?uri={uri}"
        }
    ]
}

I also pre-built a droplet and set my DNS for the domain to point to the droplet.

I then saved this to a file on the droplet and moved on to installing Caddy. I went the package route so I could make quick updates if necessary, then had to find the Caddyfile (which was in /etc/caddy). The default Caddyfile has enough to launch a web server locally. The only changes I had to make were changing the listener to my domain (which enables automatic HTTPS) and adding a header so that the webfinger response would be served as JRD+JSON. I’m not sure the header was necessary, but when you’ve worked on load balancers as I have, you want to make sure.

# The Caddyfile is an easy way to configure your Caddy web server.
#
# Unless the file starts with a global options block, the first
# uncommented line is always the address of your site.
#
# To use your own domain name (with automatic HTTPS), first make
# sure your domain's A/AAAA DNS records are properly pointed to
# this machine's public IP, then replace ":80" below with your
# domain name.

salvo.chat {
	# Set this path to your site's directory.
	root * /usr/share/caddy

	# Enable the static file server.
	file_server

	# Another common task is to set up a reverse proxy:
	# reverse_proxy localhost:8080

	# Or serve a PHP site through php-fpm:
	# php_fastcgi localhost:9000

	route {
		header /.well-known/* Content-type application/jrd+json
	}
}

# Refer to the Caddy docs for more information:
# https://caddyserver.com/docs/caddyfile
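For reference, saving the webfinger document where Caddy’s file server can find it looks roughly like this–the path assumes the default /usr/share/caddy site root from the Caddyfile above:

$ sudo mkdir -p /usr/share/caddy/.well-known
$ sudo curl -o /usr/share/caddy/.well-known/webfinger "https://mastodon.cloud/.well-known/webfinger?resource=acct:buzzsurfr@mastodon.cloud"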

A quick restart, and my server was running. I tried cURL on the new URL:

$ curl https://salvo.chat/.well-known/webfinger?resource=acct:buzzsurfr@mastodon.cloud
{
    "subject": "acct:buzzsurfr@mastodon.cloud",
    "aliases": [
        "https://mastodon.cloud/@buzzsurfr",
        "https://mastodon.cloud/users/buzzsurfr"
    ],
    "links": [
        {
            "rel": "http://webfinger.net/rel/profile-page",
            "type": "text/html",
            "href": "https://mastodon.cloud/@buzzsurfr"
        },
        {
            "rel": "self",
            "type": "application/activity+json",
            "href": "https://mastodon.cloud/users/buzzsurfr"
        },
        {
            "rel": "http://ostatus.org/schema/1.0/subscribe",
            "template": "https://mastodon.cloud/authorize_interaction?uri={uri}"
        }
    ]
}

And that’s it! Now if you go to your Mastodon client and search for @theo@salvo.chat, my @buzzsurfr@mastodon.cloud account comes up!

Building PI-BERNETES: a home lab

I bought my first Raspberry Pi (B+) in 2014 when they first launched. I remember buying it because I was spending my time coding but wanted to do so on personal hardware that was accessible and replaceable, and the B+ was $35 USD at the time. I still have it, and it still works (though not in use today).

At the time of writing, I have 23 different single board computers (SBCs) but was mostly intrigued by the Raspberry Pi 4 because of the arm64 architecture and 4 GB of available RAM. So I set out to build something completely unnecessary and yet fun–a Kubernetes cluster out of Raspberry Pies!

Design Phase

I turned to the one “true” source for inspiration: the internet. #100DaysOfHomeLab

I really like this case and how clean it looks!
A really neat project with some additional ideas on interfacing between the cluster and the environment.

I found a few ideas and started to figure out what my design considerations were.

  • Cable management and airflow are important. Since I’m an ex-Network Engineer (though those skills have yet to leave me), I wanted to make sure I could keep them running cool without a lot of noise, and that means spending a little extra on Power over Ethernet (PoE).
  • Modular and expandable. I’ve seen the TuringPi boards, but they don’t fit my needs since I want to be able to remove or add boards without affecting the surrounding components.
  • Mix of compute and storage. I knew I had some workloads that would need more storage than I wanted to (reasonably) fit on an SD card, so I wanted the cluster to support both compute units and storage units. In this case, that just means mounting the hard drives as bays and attaching them to a Raspberry Pi.
  • Self-sustaining. I plan to use this cluster for operating my home automation and running private services for projects and community contributions outside of work, so I don’t want to depend on any external services that I can’t swap out.

Hardware

Software

Selecting a container scheduler. Given my experience with containers, I knew that I wanted to run containers across these devices. With the rise of arm64 architectures being massively commercialized through AWS Graviton, Apple silicon, Azure VMs, and GCP Tau series compute, I wanted to build on an arm64-based distro that was capable of running containers. Since I wanted to keep the cluster self-sustained, I ruled out the typical AWS services like ECS Anywhere and EKS Anywhere because they have to communicate with the cloud on some level (plus EKS Anywhere doesn’t have arm64 support yet!). Given how much work I do with Kubernetes, I wanted a k8s distro and ultimately selected K3s (pronounced “kates”) because it’s backed by SUSE (Rancher), is lightweight (which saves resources for running containers), and has packaging included.

Packaging with add-ons. Since Kubernetes doesn’t provide a lot of services on its own (by design), there are a few things to include in this cluster build to offer the same services and Kubernetes resources you would get from a cloud-based distribution. K3s includes Helm, ServiceLB, and Traefik–but the last two were hard to customize, so I disabled them and installed Traefik on my own plus MetalLB for load balancing. Since some of the nodes have extra storage, I wanted a storage controller that could schedule pods needing hot storage onto the nodes with SSDs, and selected Longhorn.
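Disabling the built-in ServiceLB and Traefik happens at install time. Per the K3s docs, the flags look roughly like this (a sketch, not my exact invocation):

$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable servicelb --disable traefik" sh -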

Customizing these add-ons wasn’t difficult, but as with many open source solutions, documentation for different versions can be a real problem. For example, MetalLB recently switched from a ConfigMap to CRDs for defining resources, so it took extra digging through the current docs to get it running.
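With the CRD-based configuration, the MetalLB setup boils down to two small resources–a sketch with a placeholder pool name and address range rather than my actual LAN values:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool              # placeholder name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # placeholder range carved out of the LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool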

Traefik required customizations, mainly to the Helm chart, to automatically use the MetalLB load balancer VIP and to enable IngressClass resources. I also added cert-manager to support encrypted endpoints using Let’s Encrypt.
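The cert-manager side is mostly a ClusterIssuer pointing at Let’s Encrypt–a sketch with a placeholder email address, assuming the HTTP-01 challenge is solved through the Traefik ingress class:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com            # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: traefik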

Instead of trying to list every customization, I also spent some time making this process repeatable. I originally bought all this hardware in 2020 and built a cluster, but I ran into problems early and made too many changes to record. This time, I made sure I documented the process from the start. My manifests and notes will all end up in a GitHub repo (with the secrets removed) so anyone else can learn from my experience.

What’s the point?

So far, other professionals would tell you that I have a working kubernetes cluster that does absolutely NOTHING. Why connect all of these nodes together? What can you do with it?

Since I’ve been an operator for most of my career, I tend to get everything ready for use before building a single thing. But I do have ideas of what to run on this cluster and how it’s used.

  • Home automation. I currently have Home Assistant running on its own Raspberry Pi (as one of the blades in the picture), but I’d like to move this to containers and work with that community on repeatable processes.
  • Git server. Sometimes, there are code projects you don’t want out on the public internet. I plan to run Gitea on this cluster and back it on the SSDs.
  • Home cloud. If you develop on AWS and haven’t seen LocalStack, I highly recommend checking it out. The idea started with lambda-local and dynamodb-local but quickly expanded and added arm64 support.
  • Minecraft server. Because I have kids, and one of them is learning to program.
  • Media server. I have a bunch of DVDs and Blurays that never get used because I’m too lazy to put the DVD in the tray, so I’m gonna digitize them and host on Plex or something similar.
  • Code server. It’s been a dream of mine to work from a tablet, and coding always tends to be one of those misses. At least with code-server, I can make it easy to use an IDE (as long as there’s reliable internet).
  • Donate unused compute. There are services like Folding@home and BOINC that allow scientific and academic communities to run their code on remote machines, and I can donate my “unused” CPU cycles to one of these programs. I’ll of course prioritize my own workloads, but if I’m not using those cycles, they might as well go to a good cause.
  • Random sparks or ideas. Because I had set most of this up before KubeCon North America 2022, I had a running cluster ready for running coding challenges and testing out new projects and ideas and was able to complete most of the challenges on the showroom floor, during sessions, or while at the hotel.

Ultimately, having this cluster gives me the freedom to run side projects and test various ideas from my house. It’s not production-ready, but rather experimentation-ready!

Day 5 – Cruising/Day at Sea

Floor calendar

Today was our day to do ABSOLUTELY NOTHING. We woke up when we wanted, ate breakfast, second breakfast, cheese (not me), lunch, snack, first dinner, second dinner, and snacks when we wanted, drank when we wanted (at least 15 minutes apart), and DID what we wanted!

So what did we do? We did start out with breakfast and had to get Marie her bougie coffee, but then decided to sit out on the deck and do nothing.

Well, that only works when the wind isn’t blowing and the temperature is nice. However, the solarium roof was closed so it felt amazing and we opted to lie around, read books, drink, (write blog posts,) and relax! I vividly remember going on cruises before and wanting to do ALL the activities—today I just wanted to sit down.

We all regrouped for lunch then headed down for Marvel trivia! Our group got 12/15–we didn’t win, but we had fun. A few questions brought out the “true fans” because the official answer wasn’t quite right, but there was a clear winner in the end.

I also have a new drink: the Dirty Martini. 🍸 There are a few ways to make it, but I don’t care, as I basically feel like I’m drinking olives! 🫒 It also has less sugar and doesn’t mess you up as fast as scotch. I’m definitely not pulling off the James Bond look, but I still feel fancy.

After trivia, we had a tight schedule—because we had a siesta planned. I don’t know that it was actually the plan…but it became the plan when I fell asleep. I was woken up for dinner, and I’m all about that!

One of the perks of a cruise is the restaurant staff will go out of their way to accommodate allergies and intolerances—they are committed to you having the best onboard experience possible. Because of that, I get a preview of tomorrow’s meal selection and talk with the maître d’ (Megan told me how to spell that—ask her for the meaning of the word) about tomorrow’s order. I get to help customize tomorrow’s meal, but when I arrived today they told me that they had to make another adjustment. When you have a dairy allergy that can really interrupt a vacation, you take special care to make the right food choices. However, I also LOVE that the restaurant staff is also looking out for me and makes adjustments to ensure I’m at peak performance. It takes some of the burden off of me—which is part of being on vacation!

After dinner we picked a group activity for the family–50’s and 60’s dancing in the Colony Club. I was 100% the type of person to watch others dance! 🤣 But it was a good “calm down” activity for the night before we all retire.

But of course, there’s one thing that we all come to a cruise for: the towel animals! Today’s post is brought to you by the towel BUNNY!

Sonobuoy – a simple, multi-protocol echo proxy

A few customers that use AWS App Mesh want a way to ensure that the Virtual Gateway is properly routing, not just up and available. The Envoy behind the Virtual Gateway provides a health check, but it requires in-depth knowledge and observability to determine whether the proxy is successfully routing traffic. A simpler approach is to create a route to /health or /ping and send it to a known, working service.

There’s a plethora of options for the backend. Some set up an nginx/Envoy proxy to respond to requests, while others use a clone of a microservice. Instead, I wrote my own.

Introducing sonobuoy. Written in Go, sonobuoy can be deployed as an agent or a sidecar container and supports TCP, HTTP, and gRPC protocols to provide the cleanest successful response to a request.

For example, the TCP listener can run on port 2869, the HTTP listener on port 2870, and the gRPC listener on port 2871.
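Deployed as a standalone backend in Kubernetes, that could look something like the following Pod spec–a hypothetical sketch where the image reference is an assumption and only the ports come from the description above:

apiVersion: v1
kind: Pod
metadata:
  name: echo-backend
spec:
  containers:
    - name: sonobuoy
      image: buzzsurfr/sonobuoy:latest   # hypothetical image reference
      ports:
        - containerPort: 2869   # TCP listener
        - containerPort: 2870   # HTTP listener
        - containerPort: 2871   # gRPC listener

A Virtual Gateway route for /ping or /health can then target this backend, so a successful response proves the gateway is routing traffic end to end.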

Automatically Deploy Hugo Blog to Amazon S3

I had grand aspirations of maintaining a personal blog on a weekly basis, but that isn’t always possible. I’ve been using my iPad and Working Copy to write posts, but had to use my regular computer to build and publish. CI/CD pipelines help, but I couldn’t find the right security and cost optimizations for my use case…until this year.

My prior model had my blog stored on GitLab because it enabled a free private repository (mainly to hide drafts and future posts). I was building locally using a Docker container and then uploading to Amazon S3 via a script.

At the beginning of the year, GitHub announced free private repositories (for up to 3 contributors), and I promptly moved my repo to GitHub. (NOTE: I don’t use CodeCommit because it’s more difficult to plumb together with Working Copy.)

I was now able to plumb together CodePipeline and CodeBuild to build my site, but fell short on deploying my blog to S3. I had to build a Lambda function to extract the artifact and upload it to S3. The function is only 20 lines of Python, so it wasn’t difficult.
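The build side is just a short buildspec for CodeBuild–a minimal sketch of what I mean, where the Hugo release URL and version are assumptions (pin whatever you actually build with):

version: 0.2

phases:
  install:
    commands:
      # Assumed Hugo release; substitute the version you actually use
      - curl -sL -o /tmp/hugo.tar.gz https://github.com/gohugoio/hugo/releases/download/v0.53/hugo_0.53_Linux-64bit.tar.gz
      - tar -xzf /tmp/hugo.tar.gz -C /usr/local/bin hugo
  build:
    commands:
      - hugo
artifacts:
  base-directory: public
  files:
    - '**/*'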

But then, AWS announced deploying to S3 from CodePipeline, meaning my Lambda function was useful for exactly 10 days!

Now, I can write a post from my iPad and publish it to my blog with a simple commit on the iPad! It’s a good start to 2019, with (hopefully) more topics coming soon…

Add Athena Partition for ELB Access Logs

If you’ve worked on a load balancer, then at some point you’ve witnessed the load balancer taking the blame for an application problem (it’s like a rite of passage). The load balancer used to be difficult to exonerate, but with AWS Elastic Load Balancing you can capture access logs (Classic and Application Load Balancers only) and quickly identify whether the load balancer contributed to the problem.

As with any log analysis, the volume of logs and the frequency of access are key to identifying the best solution. If you have a large store of logs but access them infrequently, then a low-cost option is Amazon Athena. Athena enables you to run SQL-based queries against your data in S3 without an ETL process. The data is durable, and you only pay for the volume of data scanned per query. AWS also includes documentation and templates for querying Classic Load Balancer logs and Application Load Balancer logs.

This is a great model, but it has a potential flaw–as the data set grows, the queries become slower and more expensive. To remediate this, Amazon Athena allows you to partition your data. This restricts the amount of data scanned, lowering costs and increasing query speed.

ELB Access Logs store the logs in S3 using the following format:

s3://bucket[/prefix]/AWSLogs/{{AccountId}}/elasticloadbalancing/{{region}}/{{yyyy}}/{{mm}}/{{dd}}/{{AccountId}}_elasticloadbalancing_{{region}}_{{load-balancer-name}}_{{end-time}}_{{ip-address}}_{{random-string}}.log

Since the prefix does not pre-define partitions, the partitions must be created manually. Instead of creating partitions ad hoc, create a CloudWatch Scheduled Event that runs daily and targets a Lambda function that adds the partition. To simplify the process, I created buzzsurfr/athena-add-partition.

This project contains both the Lambda function code and a CloudFormation template to deploy the Lambda function and the CloudWatch Scheduled Event. Logs are sent from the load balancer into an S3 bucket. Each day, the CloudWatch Scheduled Event invokes the Lambda function to add a partition to the Athena table.
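Assuming the table is defined with PARTITIONED BY (year string, month string, day string), the function effectively runs DDL along these lines each day:

ALTER TABLE logs.elb_logs ADD IF NOT EXISTS
  PARTITION (year = '2018', month = '07', day = '31')
  LOCATION 's3://bucket/prefix/AWSLogs/{{AccountId}}/elasticloadbalancing/{{region}}/2018/07/31/';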

Using the partitions requires modifying the SQL query used in the Athena console. Consider the basic query to return all records: SELECT * FROM logs.elb_logs. Add a WHERE clause (or append to an existing one) that includes the partition keys and values. For example, to query only the records for July 31, 2018, run:

SELECT *
FROM logs.elb_logs
WHERE
  (
    year = '2018' AND
    month = '07' AND
    day = '31'
  )

This query with partitions enabled restricts Athena to only scanning

s3://bucket/prefix/AWSLogs/{{AccountId}}/elasticloadbalancing/{{region}}/2018/07/31/

instead of

s3://bucket/prefix/AWSLogs/{{AccountId}}/elasticloadbalancing/{{region}}/

resulting in a significant reduction in cost and processing time.

Using partitions also makes it easier to enable other Storage Classes like Infrequent Access, where you pay less to store but pay more to access. Without partitions, every query would scan the bucket/prefix and potentially cost more due to the access cost for objects with Infrequent Access storage class.

This model can be applied to other logs stored in S3 that do not have pre-defined partitions, such as CloudTrail logs or CloudFront logs, or to other applications that export logs to S3 but don’t allow modifications to the organizational structure.

Blog Restart

It’s been over 10 years since I had a blog, or at least maintained one. I want to promote my personal brand but have often not put forth the effort. I have a significant amount of experience, so it’s just a matter of putting my experiences down “on paper”…and having the right tool to publish.

Enter Hugo. I’ve been a fan of Markdown for a while, and I make avid use of it for projects on GitHub or docs written for MkDocs. I wanted something that could deploy as a static site (since my actual code rarely changes) and save on overall costs. My current site is built on GitHub Pages, which doesn’t give me the capabilities I need; I wanted something similar to MkDocs where I could easily deploy a scaffold and get to work.

While I often travel with a laptop, I’ve also been looking at my mobile productivity, and I feel that I could accomplish more by using my mobile device. When I have an idea, I want to commit quickly. My tablet is an easy way to do so since it takes up less room and takes less time to boot, but it has lacked a sufficient productivity tool.

I’ve typed up this post partially using Working Copy. For me, it has the right blend of git integration and file editor (with Markdown syntax highlighting). You can’t push without the in-app purchase, but the free version plus a 10-day trial lets you test before buying, which let me make sure it fits my workflow.

For my blog content, I plan to document my experiences through my IT journey in hopes that it will also help others. I’ve always embraced the IT community, and a blog is my latest way of giving back. I’ve always struggled with trying to get the best structure and methods before pushing something new, and that’s always led to me never launching. This time, I’m accepting that the blog may not be perfect, but it’s out there and functional. I’ll be able to make improvements over time and grow this into a resource for all.