Day 2 – Underground Seattle/Pike Place Market

Today was a day to explore downtown Seattle with the group. Public transportation is big in Seattle, so after a brief walk/run and a stop at the local coffee shop (because, ☕️), we took a bus 🚌 downtown to Pioneer Square and got a cappuccino (with almond milk) before heading out on a tour of Underground Seattle.

Latte art for the morning

Underground Seattle

Many don’t know that there’s an entire underground system in Seattle. Without going through the full history: they originally built Seattle at one level, then decided to regrade the city to be flatter, which left the old street level underground. (Check the photo captions)

Pike Place Market

We then walked up to Pike Place Market. We walked around a few shops, saw the “flying fish”, and even had a quick bite at Lowell’s with an awesome view of the market below. After eating (four of us split two meals, and were still full) we went further into the market and came across a stand selling “BBQ Pork Sticks”…and got one!

Megan has also been talking about Piroshky Piroshky nonstop, so we stopped in there but they didn’t have any I could eat so I’ll get one next time. However, it smelled DELICIOUS! 🤤

“flying fish”

Evening: Uwajimaya Asian Market

After a quick (but much needed) siesta, we ventured out to the food court at Uwajimaya Asian Market so we could all pick out which food we wanted. We then walked around and found a boba tea place where I had my first boba tea! I’m amazed we ate at all considering how many times we stopped for food today, but nonetheless we ventured on. It was funny that, after walking out of the market, we saw a place selling “rice hot dogs” and had to seriously consider trying one. I’m convinced I could come here 20 times and never eat the same food twice!

We also set our plans for tomorrow. (Hint: 🍩🏛) Megan also got me hooked on Only Murders in the Building, and the second season just came out, so we’re ending the night with that and a dirty martini! 🍸

Day 1 – Flight to Seattle/Baseball Game

I started the day at 5:30 AM EDT, got dressed (I was already packed), and caught an Uber 🚕 to the airport. I had worn my boots (because my checked bag was already pretty heavy) and of course they set off the sensors. Despite having TSA PreCheck, it still took a whopping 6 minutes to get through security! 😀

Starting the trip…

My itinerary today took me to Seattle via Dallas. The first leg was uneventful (I did get a good shot of downtown Dallas), but we sat on the ground for an hour and a half at Dallas because of a scheduling issue…which is much better than a maintenance issue!

downtown Dallas from the air

We had done some prudent planning—my flight was (originally) scheduled to arrive at the same time as Cassidy & Owen’s flight so that we could make a single airport run, which oddly still worked in our favor! After making it to Seattle, we arrived at our first destination—Megan’s apartment! 🏠

We quickly settled in, took a siesta, and then went down to catch the ferry ⛴ to get to T-Mobile Park for the Orioles vs. Mariners game! ⚾️

The ferry ⛴ from West Seattle to Seattle
downtown Seattle from aboard the ferry ⛴

Because of all the travel and a theoretically short layover, I didn’t have a chance to eat lunch or dinner…well, I had the chance, but passed on it. When we got through the gate, I was on a mission—🦀🍟 CRAB FRIES 🦀🍟

It’s just fries, seasoned with Old Bay, topped with crab meat!!!

🦀🍟

The game wasn’t very exciting for 8 1/2 innings until we rallied (rally caps 🧢 and all) for the Mariners to win 2-0!

4 out of 5 vacationers recommend a pre-vacation baseball game!

After the game, we stopped by the market to pick up a few things, and headed back to the apartment—and finally went to sleep around 12:30 AM PDT—a 22-hour day! 🥱

Alaska Trip 2022 – Day 0 (Prologue)

The past few years have been full of challenges, ranging from health issues, to family health issues, to spousal health issues, to spouse issues…it’s been a rollercoaster 🎢, and in that time I haven’t been able to relax and focus on myself. I decided to start 2022 by taking better care of myself…and actually stuck to it! I’ve been eating better and exercising, and I’ve reached the point where I’m taking a 15-day trip centered around an Alaskan cruise! 🛳

Why an Alaska trip? Because when the opportunity arises—you take it! My friend Megan booked a family cruise, but was married to a
cheap, lying, no-good, rotten, four-flushing, low-life, snake-licking, dirt-eating, inbred, overstuffed, ignorant, blood-sucking, dog-kissing, brainless, dickless, hopeless, heartless, fat-ass, bug-eyed, stiff-legged, spotty-lipped, worm-headed sack of monkey shit he is!

(Hallelujah! Holy shit! Where’s the Tylenol?)
#iykyk

Fortunately for me, I get to take his place…which works because Megan and I both need a friend and I’ve known her family for YEARS! Our plan is to make this the most relaxing vacation any of us have taken in a long time!

So why write about it? Well, I took a trip to Italy with my mom and dad back in 2005, and my mom made me keep a journal. I’m glad she did, because we were able to share it with my grandparents so they could experience the trip through our writing. While I’ve thought about journaling my adventures before, I haven’t felt like doing so until this one. So I am.

(Plus, my mom told me to. Thanks mom!)

I found my transcript of the Italy journal I wrote—which only solidifies my place as a nerd 🤓 because I literally wrote in metadata notes like “insert map of flight” or “attach picture of statue”. Thanks to the power of technology, I can now do that in real time!

My goal is to document my journey as it’s happening. This way, my enduring fans (again, probably just my mother) can read about my trip and see the beautiful sights and experiences.

Speaking of the trip…

Itinerary

This started as a 7-day Alaskan cruise. When I came onboard (to the planning)—the second-funnest detail was that I didn’t know it was an Alaskan cruise! Megan’s family all live in Florida so I had assumed that we’d leave out of Florida. NOPE!

Why second-funnest? Because the funnest detail is that it’s a one-way cruise! Which meant (after a brief moment of panic) that we needed to expand the trip. Then flight prices rose again, and before you know it—we’re at 15 days!

I couldn’t sleep last night, which means I’m REALLY excited to begin! I got up at 5:30 AM (EDT), caught a ride to the airport, and am now waiting on my plane to depart!

Sonobuoy – A Simple, Multi-Protocol Echo Proxy

A few customers who use AWS App Mesh want a way to ensure that the Virtual Gateway is properly routing, not just up and available. The Envoy proxy for the Virtual Gateway provides a health check, but it requires in-depth knowledge and observability to determine whether the proxy is successfully routing traffic. Instead, you can create a simple route to /health or /ping and send it to a known, working service.

There are plenty of options for the backend. Some set up an nginx/envoy proxy to respond to requests, while others use a clone of a microservice. Instead, I wrote my own.

Introducing sonobuoy. Written in Go, sonobuoy can be deployed as an agent or a sidecar container and supports TCP, HTTP, and gRPC protocols to provide the cleanest successful response to a request.

Here’s an example, with the TCP listener on port 2869, the HTTP listener on port 2870, and the gRPC listener on port 2871:
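
Sonobuoy itself is written in Go, but the pattern is simple enough to sketch: accept a request on each listener and return a clean, successful response. Here’s a minimal Python illustration of the TCP and HTTP listeners on those same ports (not sonobuoy’s actual code; the gRPC listener is omitted since it requires generated protobuf stubs):

import socketserver
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class TCPEcho(socketserver.BaseRequestHandler):
    """Echo whatever the probe sends, proving the route end-to-end."""
    def handle(self):
        data = self.request.recv(4096)
        if data:
            self.request.sendall(data)

class HTTPPing(BaseHTTPRequestHandler):
    """Always answer 200 OK so a /health or /ping route succeeds."""
    def do_GET(self):
        body = b"pong\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # TCP listener on 2869; HTTP on 2870 (a gRPC server would take 2871)
    tcp = socketserver.ThreadingTCPServer(("", 2869), TCPEcho)
    threading.Thread(target=tcp.serve_forever, daemon=True).start()
    HTTPServer(("", 2870), HTTPPing).serve_forever()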

Match Containers to Host Processes

During my presentation Securing Container Workloads on AWS Fargate, I built a demo environment where I could build and run various containers and show the effect they had on the host. While my demo went well, a key piece of feedback was that customers liked how the demo environment presented the containers and their host processes together on one side of the screen. To that end, I’ll show you how I built it.

Containers Pane

To show the currently running containers on a given host, use docker ps. The normal format (for v18.09.1) looks like:

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
525a7b49ef67        nginx               "nginx -g 'daemon of…"   About an hour ago   Up About an hour    80/tcp                   tender_shirley

However, for this demo, I was only concerned with the name, image, command, and current status (which includes how long the container has been running), so I formatted the output using the --format flag, and stuck it inside watch to update every second.

Command
watch -n 1 "docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Command}}\t{{.Status}}'"
Output
Every 1.0s: docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Command}}\t{{.Status}}'        localhost.localdomain: Sat Feb 23 13:44:58 2019

NAMES               IMAGE               COMMAND                  STATUS
tender_shirley      nginx               "nginx -g 'daemon of…"   Up About an hour

Host Processes Pane

Getting the host processes (and a way to map them to containers) was more difficult. The best tool in Linux for looking at processes is ps (which is where Docker gets the name for docker ps), but this doesn’t give us all the information about a container.

When a container starts, its main process spawns with a specific process identifier (PID) on the host, but inside the container that process sees its PID as 1. The process can also spawn other processes, which reference the parent process’s PID as their PPID. Subprocesses therefore show up on the host with a PPID equal to the main process’s host PID, but inside the container with a PPID of 1. For my demo, I wanted to show both the processes and subprocesses on the host, and include information about the user running each process.
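
To see this mapping on a live host, you can start from a container’s main host PID (docker inspect exposes it as .State.Pid) and scan /proc for its children. Here’s a minimal Python sketch of that idea (the container name comes from the docker ps output above; this is only an illustration, not the demo script itself):

import os
import subprocess

def main_host_pid(container):
    """Ask Docker for the container's main process PID as the host sees it."""
    out = subprocess.check_output(
        ["docker", "inspect", "--format", "{{.State.Pid}}", container]
    )
    return int(out.decode().strip())

def child_pids(ppid):
    """Scan /proc for processes whose PPID matches the given PID."""
    children = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open("/proc/%s/stat" % entry) as f:
                stat = f.read()
        except OSError:
            continue
        # The comm field is wrapped in parens and may contain spaces, so
        # parse the fields after the closing paren: state, ppid, ...
        if int(stat.rsplit(")", 1)[1].split()[1]) == ppid:
            children.append(int(entry))
    return children

pid = main_host_pid("tender_shirley")
print(pid, child_pids(pid))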

Thus, I built a script called watchpids.sh. This script gathered the containers’ host PIDs, found all of their child PIDs, and then fed the list of PIDs to ps, formatting the output to show the running time of each process, its PID, its PPID, the user associated with the process, and the command being run. Again, execution of the script was wrapped in watch.

Script

[gist https://gist.github.com/buzzsurfr/ad3d29da6324cc290a7ead4270ad38f8 /]

With both the containers and processes displayed, map the container STATUS to the host process ELAPSED time to see what processes show up on the host whenever a new container is started.

Terminal Window

Tying it all together, I used tmux to build the container and host process panes on the right, and an area to type commands on the left.

tmux uses either keyboard shortcuts or commands inside the session to change the environment; since I was going for a “scripting” approach, I chose the latter.

Commands
# start a detached session named builder_demo
tmux new-session -d -s builder_demo
# split the window into left and right panes
tmux split-window -h
# split the right pane top/bottom; the new bottom pane runs the containers watch
tmux split-window -dv "watch -n 1 \"docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Command}}\t{{.Status}}'\""
# move focus to the left pane for typing commands
tmux select-pane -t 0
# start the host processes watch in the top-right pane
tmux send-keys -t 1 'watch -n 1 ./watchpids.sh' C-m
# attach to the session (forcing 256 colors)
tmux -2 attach-session -d
Screenshot

Securing Container Workloads on AWS Fargate

When containers first became mainstream (think PyCon 2013 with Solomon Hykes on stage), everyone thought they had potential and began testing containers on their own, but almost no one set out to put containers in production that day. They wanted to see the technology battle-tested…which has happened over time. Containers have matured from an emerging technology to production-ready, where they’re generally considered safe, but there’s a new problem: now we need our business processes, tools, and architecture models to mature as well.

The top ask I hear about containers comes down to security. Containers were built as a way to isolate workloads from one another, but many of the security models from virtual machines do not work for containers, and thus we must evolve our thought process.

To that end, I presented an AWS Online Tech Talk about how to secure container workloads using AWS Fargate (though many of the lessons also apply generally across containers). I demonstrated some quick steps to make your containers more secure during the build process as well as how to enhance visibility and security around containers running in AWS Fargate.

What’s Your Exit Strategy?

Why are we afraid of “lock in”? Typically we hear the term and automatically assume it’s bad. It certainly can be, but that doesn’t mean every situation you’re in is a bad one.

On February 8, 2019, I gave an Ignite talk regarding Exit Strategies and “lock in” at DevOpsDays Charlotte. We broke down “lock in” and the varying degrees of it, then talked about how you can use it to your advantage by having an Exit Strategy (which is exactly as it sounds).

“lock in” isn’t exclusive to technology–what about your current employer?

Automatically Deploy Hugo Blog to Amazon S3

I had grand aspirations of maintaining a personal blog on a weekly basis, but that isn’t always possible. I’ve been using my iPad and Working Copy to write posts, but had to use my regular computer to build and publish. CI/CD pipelines help, but I couldn’t find the right security and cost optimizations for my use case…until this year.

My prior model had my blog stored on GitLab because it enabled a free private repository (mainly to hide drafts and future posts). I was building locally using a Docker container and then uploading to Amazon S3 via a script.

At the beginning of the year, GitHub announced free private repositories (for up to 3 contributors), and I promptly moved my repo to GitHub. (NOTE: I don’t use CodeCommit because it’s more difficult to plumb together with Working Copy.)

I was now able to plumb CodePipeline and CodeBuild together to build my site, but fell short on deploying my blog to S3. I had to build a Lambda function to extract the artifact and upload it to S3. The function is only 20 lines of Python, so it wasn’t difficult.
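
The function looked roughly like this (a from-memory sketch rather than the original; the destination bucket name is a placeholder, and the event shape is CodePipeline’s standard Lambda invoke payload):

import io
import mimetypes
import zipfile
import boto3

SITE_BUCKET = "example-blog-bucket"  # placeholder for the website bucket
s3 = boto3.client("s3")
codepipeline = boto3.client("codepipeline")

def handler(event, context):
    job = event["CodePipeline.job"]
    location = job["data"]["inputArtifacts"][0]["location"]["s3Location"]
    # Download the zipped build artifact produced by CodeBuild
    artifact = s3.get_object(
        Bucket=location["bucketName"], Key=location["objectKey"]
    )["Body"].read()
    # Unzip in memory and upload each file to the website bucket
    with zipfile.ZipFile(io.BytesIO(artifact)) as archive:
        for name in archive.namelist():
            s3.put_object(
                Bucket=SITE_BUCKET,
                Key=name,
                Body=archive.read(name),
                ContentType=mimetypes.guess_type(name)[0] or "binary/octet-stream",
            )
    # Tell CodePipeline the deploy stage succeeded
    codepipeline.put_job_success_result(jobId=job["id"])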

But then, AWS announced deploying to S3 from CodePipeline, meaning my Lambda function was useful for exactly 10 days!

Now, I can write a post from my iPad and publish it to my blog with a simple commit on the iPad! It’s a good start to 2019, with (hopefully) more topics coming soon…

Rotate IAM Access Keys

How often do you change your password?

Within AWS is a service called Trusted Advisor. Trusted Advisor runs checks in an AWS account looking for best practices around Cost Optimization, Fault Tolerance, Performance, and Security.

In the Security section, there’s a check (Business and Enterprise Support only) for the age of an Access Key attached to an IAM user. The Trusted Advisor check will warn for any key older than 90 days and alert for any key older than 2 years. AWS recommends rotating the access keys for each IAM user in the account.

From Trusted Advisor Best Practices (Checks):

Checks for active IAM access keys that have not been rotated in the last 90 days. When you rotate your access keys regularly, you reduce the chance that a compromised key could be used without your knowledge to access resources. For the purposes of this check, the last rotation date and time is when the access key was created or most recently activated. The access key number and date come from the access_key_1_last_rotated and access_key_2_last_rotated information in the most recent IAM credential report.

The reason for these times is the mean time to crack an access key. Using today’s standard processing unit, an AWS Access Key could take xxx to crack, and users should rotate their Access Key before that time.

Yet in my experience, this often goes unchecked. I’ve come across an Access Key that was 4.5 years old! I asked why not change it, and the answer was mostly the same–the AWS Administrators and Security teams do not own and manage the credential, and the user doesn’t want to change the credential for fear it will break their process.

Rotating an AWS Access Key is not difficult. It’s a few simple commands to the AWS CLI (which you presumably have installed if you have an Access Key).

  1. Create a new access key (CreateAccessKey API)
  2. Configure AWS CLI to use the new access key (aws configure)
  3. Disable the old access key (UpdateAccessKey API)
  4. Delete the old access key (DeleteAccessKey API)
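
In boto3, those steps map to three API calls (step 2 happens outside the API, via aws configure). A minimal sketch, where the user name and old key ID are placeholders:

import boto3

iam = boto3.client("iam")
USER = "example-user"    # placeholder IAM user name
OLD_KEY_ID = "AKIA..."   # placeholder: the access key being replaced

# 1. Create a new access key (an IAM user can only have two at once)
new_key = iam.create_access_key(UserName=USER)["AccessKey"]
print("New key:", new_key["AccessKeyId"], new_key["SecretAccessKey"])

# 2. Reconfigure your CLI/SDK with the new key (aws configure) and
#    verify everything still works before continuing.

# 3. Disable the old access key
iam.update_access_key(UserName=USER, AccessKeyId=OLD_KEY_ID, Status="Inactive")

# 4. Delete the old access key
iam.delete_access_key(UserName=USER, AccessKeyId=OLD_KEY_ID)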

Instead of requiring each user to remember the correct API calls and parameters for each step, I’ve created a script in buzzsurfr/aws-utils called rotate_access_key.py that orchestrates the process. Written in Python (a dependency of the AWS CLI, so it should also already be present), the script minimizes the number of parameters and removes the undifferentiated heavy lifting associated with selecting the correct key. The user’s access is confirmed to be stable by using the new access key to remove the old access key. The script can be scheduled using cron or Scheduled Tasks, and supports CLI profiles.

usage: rotate_access_key.py [-h] --user-name USER_NAME
                            [--access-key-id ACCESS_KEY_ID]
                            [--profile PROFILE] [--delete] [--no-delete]
                            [--verbose]

optional arguments:
  -h, --help            show this help message and exit
  --user-name USER_NAME
                        UserName of the AWS user
  --access-key-id ACCESS_KEY_ID
                        Specific Access Key to replace
  --profile PROFILE     Local profile
  --delete              Delete old access key after inactivating (Default)
  --no-delete           Do not delete old access key after inactivating
  --verbose             Verbose

In order to use the script, the user must have the right set of permissions for their IAM user. This template is an example and only grants the IAM user permissions to change their own access keys.

From IAM: Allows IAM Users to Rotate Their Own Credentials Programmatically and in the Console:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:ListUsers",
                "iam:GetAccountPasswordPolicy"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:*AccessKey*",
                "iam:ChangePassword",
                "iam:GetUser",
                "iam:*ServiceSpecificCredential*",
                "iam:*SigningCertificate*"
            ],
            "Resource": ["arn:aws:iam::*:user/${aws:username}"]
        }
    ]
}

This script is designed for users to rotate their own credentials. It does not apply to “service accounts” (where the credential is configured on a server or unattended machine). If the machine is an EC2 Instance or ECS Task, then attaching an IAM Role to the instance or task will automatically handle rotating the credential. If the machine is on-premises or hosted elsewhere, then adapt the script to work unattended (I’ve thought about coding that as well).

As an AWS Administrator, you can’t simply pass out the script and expect all users to rotate their access keys on time. Remember to build the system around it. Periodically query the Trusted Advisor check for access keys older than 90 days (warned), and send those users a reminder to rotate their access keys. Take it a step further by automatically disabling access keys older than 120 days (warn them in the reminder). Help create a good security posture and a good experience for your users, and make your account more secure at the same time!
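
A sketch of that reminder loop with boto3 and the AWS Support API (which backs Trusted Advisor and requires Business or Enterprise Support). The check’s display name and the layout of the flagged-resource metadata are my assumptions here, so verify both against your own account:

import boto3

# The Support API is only available in us-east-1
support = boto3.client("support", region_name="us-east-1")

# Look up the access key rotation check by name (name is an assumption)
checks = support.describe_trusted_advisor_checks(language="en")["checks"]
check = next(c for c in checks if c["name"] == "IAM Access Key Rotation")

# Flagged resources carry a status plus metadata columns (see check["metadata"])
result = support.describe_trusted_advisor_check_result(
    checkId=check["id"], language="en"
)["result"]
for resource in result.get("flaggedResources", []):
    if resource["status"] in ("warning", "error"):
        # In a real system: look up the user here and send the reminder
        print("Remind:", resource["metadata"])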

F5 Archive

Back in 2013, I led a “proof of concept” test for an enterprise-grade load balancing solution. We evaluated many products, narrowed the field to a shortlist of 4 vendors, and ultimately selected F5 Networks. While the official selection criteria covered much more, I personally liked F5’s extensibility. I continued to work with F5 for a few years, earning my professional-level certification and engaging with the DevCentral community.

Management API

While many network professionals grew up on CLI-based tools, at that time I knew the importance of having an API for managing devices. While CLI-based tools work, they offer very little in programmability and orchestration. Any orchestrated solution using a CLI has to account for the various ways of connecting to the CLI–which are always subject to change by the vendor. APIs offer a standard interface for connecting to and managing a device, and are often themselves extended by a provided CLI or SDK that communicates with the API.

F5’s original “iControl” API was SOAP-based. Anyone who has written a SOAP API call knows why they stopped, but F5 also provided bigsuds, a Python library that wraps the API. bigsuds made it easy to programmatically connect to any F5 and accomplish almost any goal.
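
For example, connecting to a device and listing its pools and iRules took only a few lines (the hostname and credentials below are placeholders):

import bigsuds

# Connect to the BIG-IP's iControl (SOAP) management interface
b = bigsuds.BIGIP(
    hostname="bigip.example.com",  # placeholder device address
    username="admin",
    password="admin",
)

print(b.LocalLB.Pool.get_list())  # all pool names
print(b.LocalLB.Rule.get_list())  # all iRule names

Diffing that iRule list against the rules referenced by each Virtual Server is the core of the orphaned-iRule check mentioned below.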

I created a set of bigsuds scripts and published them to buzzsurfr/f5-bigsuds-utils and DevCentral. They range from connecting to the active device in an HA pair, to locating orphaned iRules (iRules not associated with a Virtual Server), to finding a Pool/Virtual Server based on a Node IP Address.

In 2013, F5 also released the first version of iControl REST, a REST-based API, followed later by the f5-sdk, which offered a cleaner interface and object-oriented code for maintaining an F5 device. I converted some of my scripts to use the f5-sdk and again pushed them to buzzsurfr/f5-sdk-utils and DevCentral.

Programmable Logic

Hardware vendors have historically struggled to keep up with the pace of innovation in technology. One time, we were evaluating a core network refresh. Instead of discussing what the products could do, we spent more time discussing what they would do in the future. I recall a colleague asking all the major vendors when they would support TRILL (don’t judge 😃). Almost always, the answer required new hardware, and it would be no sooner than 18 months away.

While I understand the need to put this type of logic directly into hardware, why not have a stopgap? Put a process in place to code the feature in software, then promote it to hardware at a later date. F5 was the first place I saw this business model, and I was immediately drawn to it. If the F5 didn’t have a feature I needed, then I just wrote the logic in an iRule, and the F5 processed that logic as part of its routing. Suddenly, I stopped asking my F5 representatives when a feature would be released and instead asked how I could program that feature myself.

F5s come with preloaded iRules, but over time I had to create my own, which I collected in buzzsurfr/f5-iRules. A few examples:

  • One time, I had a customer with a broken app that would intermittently respond with multiple Content-Length headers (which breaks RFC 2616). They weren’t sure why, but it needed to be fixed. We fixed it with an iRule until they could find and resolve the bug in the application. This wasn’t a load balancing problem, but we still used the load balancer to work around the problem and remove customer pain.
  • I had a need to implement Content Switching, which wasn’t natively supported by F5 at the time. With iRules, I was able to implement content switching on both host and path until F5 added native support.

I don’t spend much time with F5 products these days, but I still use the programmable logic model. In my current role at AWS, I often find gaps in features that are needed by my customers, and many times we’re able to develop a Lambda function to fill the gap until the feature is released. I’ve watched this same model serve both F5 Networks and AWS well, and I hope the trend continues with other products as we continue to evolve.

My F5-based repositories