
isn't quite ashamed enough to present

jr conlin's ink stained banana

:: Asteroids and Papercuts

i work on long lived projects. These are projects that tend to run for years, and might even be considered tech debt magnets. That’s pretty natural, but it’s interesting considering that the rest of the company tends not to think that way. They are instead focused on “the next version” of a product.

Since i’ve got services that need to last for years, i’ve got to manage the tech debt that accumulates, some of it for internal reasons, a lot of it for external ones. i also have to balance working on those services with the new services we’re rolling out, because we’re generally understaffed and have lots of priorities.

To that end, i’ve started classifying concerns into “Asteroids and Papercuts”, and i’m wondering if others might find the framework useful.

Asteroids

An Asteroid is a big, potentially civilization ending event that will arrive at a known time. It’s something you need to pay attention to now, but the date is still (hopefully) a way off. This can include things like:

  • The language your service is built on is no longer actively supported or receiving security updates.
  • You need to move your data from one cloud-based, proprietary data store to another cloud-based, proprietary store for reasons.
  • A key individual who has deep domain expertise is leaving the company.

(All of those things happened at least once with projects i worked on, so yay!)

When dealing with an incoming asteroid, your priorities are:

  1. Knowing the date
  2. Knowing the damage
  3. Knowing the mitigations: Short term and long term
  4. Putting together a workload estimate based on worst-case scenarios

That last one is the trickiest, and kinda requires your most pessimistic attitude. Basically, try to factor in the other things that can and will go wrong. It’s important to avoid jargon, undefined TLAs (Three Letter Acronyms), and presumed understanding when writing each of those up.

For instance, you’d want to write up a summary that clearly specifies the problem, includes dates, and provides a reasonably terrifying explanation of why this is really important. For something like a language going EOL, that write-up alone will probably require at least a day or two of focused effort.

If you can’t get the time to do that from your management, be sure to get that in an email or document so that when the service fails, you have proof that you were told what you work on isn’t critical. (You don’t have to be snippy, just send a quick email confirming the decision.)

If the manager insists on doing an in person meeting, take notes and feel free to send a follow-up email that includes the points of discussion and resolution, and ask that they confirm that this is correct.

Also start looking around for a different team/org/company because this is bullshit politics and your manager is setting you up to be the sacrificial offering when it all goes to hell.

Once you have a plan, treat it as a high priority task and get your various product people aware of it and working toward it. Make it a banner line on your weekly reports. Time is your asset and enemy because it will go faster than you expect, particularly if you have other priorities, and you will have other priorities.

Papercuts

Papercuts are smaller annoyances. Things like blocked library updates or significant bits of notable tech debt. The thing about papercuts is that while they’re small, if you get enough of them, they will kill you. (e.g. Death by 1,000 Papercuts)

While these tend to sit in the backlog forever, it’s important to track them because they can fester and turn into significant events. Each papercut can be a unique thing, so it can be hard to come up with as clear a strategy as for an asteroid, but you should have one in any case. Fill out the bug/issue/ticket with details for future you. Note the relationships a given papercut has within the project or across the org. Show not only that it’s important, but how a delay in fixing it impacts the bottom line. Basically, present it so that someone who has no understanding of the tech can still see why it matters.

Of course, there are lots of other issues that you can address, and lots of ways to categorize things. Not everything is or should be an Asteroid or Papercut. There will be some things in your backlog that are there to die, neglected and alone, but there will always be things that are more critical that you need to pay attention to. Your team and mileage will vary, but hopefully you now have a framework to help present critical issues up the chain if you didn’t already.

:: Getting HomeAssistant 2021 Running on Docker and a Raspberry Pi 4

Home Assistant is a marvelous app that makes your home smarter. It’s also a raging pain in the ass if you’re an early adopter and actually have set things up already. This post is not only a helpful guide for how to update and use the latest flavor of Home Assistant, it’s a lovely well for me to scream into instead of yelling obscenities at the squirrels in the backyard.

A lot of this is going to be date dependent, so denizens of the future you’re probably going to have an easier time of things.

Recently, HomeAssistant has gotten a lot of work done (i’ll blame the pandemic and idle developers, which are sometimes the devil’s playground). To that end, it requires Python 3.8+.

Problem #1: Debian & Python 3.7

As of this date, if you’re running Raspbian/Raspberry Pi OS on your Raspberry Pi, the underlying system you’re running is Debian Buster[1]. That means the Python you’ve got installed is Python 3.7.3. As of December 2020, HomeAssistant considers Python 3.7 obsolete and has stopped supporting it[2]. You could download the source for Python and compile it locally. You could also walk from Utqiagvik, Alaska to Tierra del Fuego. You might want to consider not doing that, though.

When a version of python becomes obsolete, it’s not the main program that breaks. No, that honor goes to a percentage of the many, many small libraries that the software now uses and drags in like a hoarder at an unguarded Costco. For me, it was about two months in when i discovered that one of my webcams no longer worked.

Fortunately, there is a solution, kind of. The HomeAssistant folks offer multiple ways to install it, including a full SD image. Or, if you’re like me and have other programs running on your Pi (because, well, you can, and HomeAssistant isn’t THAT big of a pig), you can run it inside of a Docker container.

Docker is free (regardless of the impression you get from the site). And while you can install it using apt, i’d actually encourage you to install it using the (ugh[3]) docker install script at https://get.docker.com. If you prefer a few more steps, you can also follow the Debian instructions.

Problem #2: Docker

Docker is clever because it uses layered images to run applications in a sandbox. It makes up for that cleverness by eating disk space and being horrifically obtuse about how it should be used. Suffice to say that you have images, which are the bits of stuff that get run to do things, and containers, which are the actual, running programs. That will become important in a bit, but it’s also worth noting that you really need to keep an eye on how much space the images are eating up. There are lots of documents you can read, but suffice to say that

$ docker container prune
$ docker image prune

are your bestest friends[4].
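For the impatient, a minimal cleanup pass looks something like this (a sketch, assuming a reasonably recent Docker):

```shell
# See how much space images, containers, and volumes are actually eating.
docker system df

# Remove all stopped containers, then any dangling (untagged) images.
# -f skips the "are you sure?" prompt.
docker container prune -f
docker image prune -f

# List everything, including the intermediate layers docker normally hides.
docker image ls -a
```

docker system prune rolls the two prune calls into one shot, but it’s also more enthusiastic about deleting things, so read its warning first.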

Problem #3: node

Ah, but wait! There may be something else you’ll need before you can get going. HomeAssistant has moved on to a more modern (pronounced: “sə-pôrt′-ed”) Z-Wave integration system called “ZWave-js”. This is a node.js app. Again, the Debian default is going to be old. So, instead, grab a copy from the download page (for Raspberry Pi, you want the ARMv7 one). Once you have it, you can tar -xvf node-*.tar.xz which will extract node into its own directory. You might also need to sudo apt install xz-utils to get the xz decompression tool for tar. You can move that wherever (i usually keep those in $HOME/app). You may also want to add $HOME/node-v<version>-linux-armv7l/bin to your $PATH, since you’re going to need those in a bit.
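Those steps sketch out to something like this (the version number here is just an example — grab whatever the current release is from the download page):

```shell
# Get the xz decompression support that tar needs for .tar.xz archives.
sudo apt install xz-utils

# Download and unpack a Node.js ARMv7 build (example version; check the
# nodejs.org download page for the current one).
cd $HOME
wget https://nodejs.org/dist/v14.16.0/node-v14.16.0-linux-armv7l.tar.xz
tar -xvf node-v14.16.0-linux-armv7l.tar.xz

# Put node and npm on your PATH (add this line to ~/.bashrc to keep it).
export PATH="$HOME/node-v14.16.0-linux-armv7l/bin:$PATH"
node --version
```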

Node helpfully includes npm or the Node Package Manager. This nifty little tool allows you to install packages. What it doesn’t make frightfully clear is that the packages npm installs go into `pwd`/node_modules so you can wind up creating lots of node_modules directories as you try to figure out where the hell things are installed. For now, go to your home directory and run these there.


$ npm install zwave-js
$ npm install @zwave-js/server

This should install some programs and in node_modules/.bin/ there should be a zwave-server.

Yay.

Well, sort of “yay”.

zwave-server needs the device address for your ZWave USB dongle. You can get this from your old .homeassistant/configuration.yaml file. For me, it’s under

zwave:
  usb_path: /dev/serial/by-id/usb-0658_0200-1f00

Yours is probably under something like /dev/ttyUSB0 or /dev/ttyAMA0 or something more sane.
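If you’re not sure what device your dongle landed on, the by-id names are the stable ones to use (a sketch; the exact names will vary with your hardware):

```shell
# List the stable, by-id names for any attached USB serial devices.
# These survive reboots and re-plugs, unlike /dev/ttyUSB0-style names.
ls -l /dev/serial/by-id/ 2>/dev/null || echo "no USB serial devices attached"
```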

Once you have the device path, you can then fire up

$ node_modules/.bin/zwave-server /dev/serial/by-id/usb-0658_0200-1f00[5]

and be awestruck by just how chatty this thing is. You’ll then kill it and add > /dev/null 2>&1 & to the command so that it runs in the background and all the chatter goes to /dev/null, because there’s a lot of it, -h and --help do nothing, and i just want to get things running.
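As a sketch (using the same example device path as above), pointing the chatter at a log file instead of /dev/null makes later head-scratching a little easier:

```shell
# Start zwave-server in the background; all output goes to a log file
# you can tail when something inevitably gets weird.
node_modules/.bin/zwave-server /dev/serial/by-id/usb-0658_0200-1f00 \
    > $HOME/zwave-server.log 2>&1 &

# Remember the PID so the server is easy to stop later.
echo $! > $HOME/zwave-server.pid
```

kill $(cat $HOME/zwave-server.pid) shuts it back down when you need to.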

Problem #4: ZWave

Wait, didn’t we just solve that? We solved about half of that.

The old ZWave config system is deprecated. While zwave-js does an amazing job recovering and loading devices so you don’t have to re-sync them, there are a few things you still need.

  1. Your system security key. If you have a security key (because you have locks or garage door openers or something) you’re going to need the code for it. Hopefully you have it, still. You can sometimes also get it out of the zwave logs.
  2. Your friendly device names. HomeAssistant’s integration doesn’t use the friendlier device names when listing things out. You will probably have to reset them based on the device node-ids or device ids. Both of these are in the old configuration data. You can also fire up the old version of HomeAssistant, grab a notepad and take notes.

Once you’ve got your list of ZWave things, drop the old, deprecated ZWave integration. Comment out the old zwave: section from the configuration.yaml file.

Now you should be able to get HomeAssistant started.

To start the HomeAssistant docker container, run

$ docker run \
  --rm \
  -d \
  --name="home-assistant" \
  -v $HOME/.homeassistant:/config \
  -v /etc/localtime:/etc/localtime:ro \
  --net=host \
  homeassistant/home-assistant:stable

where the flags, in order:

  • --rm: Remove the container once it exits
  • -d: Run in the background (daemon)
  • --name: Name the container “home-assistant”
  • -v $HOME/.homeassistant:/config: Link the config directory as /config
  • -v /etc/localtime: Link the time to the system time (read-only)
  • --net=host: Use host networking
  • homeassistant/home-assistant:stable: What you want to run

Give that a few minutes and you should be able to bring up the admin panel on port :8123 like before. You’ll need to enable the ZWave-JS integration under /config/integrations. If the zwave-server above is running, you should be able to just connect to the default websocket port. Once that’s done and the device list is loaded, simply walk the displayed list, setting each entity id back to whatever you had originally. They should show up on the Lovelace UI, work in scripts, and all the other joy, just like before.

There may be a few other things you’ll do, but it’s getting late and my tequila bottle is empty and i need to go make some tacos for dinner, so you’re on your own.

Footnotes & snark

[1] Debian is stable. It wants to be very stable. i’m talking “fixed to the bedrock” stable, and much like the bedrock, it tends to move at a geological scale. This means that stuff on Debian tends to be a bit “legacy”, and they don’t release new versions very often. It’s rumored that they only do so once the magic smoke released from overclocking a VT-100 terminal is white.

[2] One might ask “wait, why did HomeAssistant basically drop support for Raspberry Pi?” It’s a good question, but it basically works out that 3.7.3 was released in mid-2018 and there’s a fair bit of cruft in it compared with later releases. i’ll note that 3.7 is still supported until 2023, and just dumping support for it is kinda rude to folks, but they do suggest docker, and docker adds a fairly minimal amount of overhead. At this point, it’s looking more and more likely that running apps in some sort of sandbox, be it docker, flatpak, snap or something else, will probably keep OSes secure enough from devs that want to play with all the shiny, new toys.

[3] Yes, i get it. Giving folks a clever shell script they can run with sudo sure is a fun and easy way to get things done. It’s also like tossing a stranger the keys to your car so he can go get your take-away order. It’s usually safe, but there’s the off chance that he runs over a kindergarten class or uses it in a bank heist.

[4] i am absolutely not kidding about this. Docker’s default setting is to fill your disk with old crap. Calling prune goes through and can delete gigabytes of old image data. You want to do that before your computer locks up because / is out of space. Hell, you want to keep a close eye on things using docker image ls -a, because docker doesn’t always show you everything, and docker can keep lots of old versions of packages lying around for reasons.

[5] So, yeah, fun fact. If you try to later switch to a different ZWave controller USB, it won’t work because the devices and protocol specify the controller, not the local settings. Oh, but it may screw up your local /dev list forcing you to do crap like this because the /dev/ttyUSB0 device is now pointing to an invalid endpoint and something keeps resetting it. It’s super annoying.

So, one of the HomeAssistant folks reached out and asked why i’m not running the Home Assistant Operating System, which handles a fair bit of this for me.

The short answer is mostly FUD on my part, since i’ve not dug super deep into the OS option and been burned by similar things in the past, but a bit more than that too.

i may convert everything over to HAOS (trying to backronym a C to the start of that for some reason), but there are a few personal caveats:

  1. This would basically be a from scratch rebuild. Much of my HASS is still YAML based, so to do this right, i’d have to rebuild it all in the native UI. Not impossible, but a lot of work.
  2. i run a few things on this machine. It’s my understanding that HAOS is basically Alpine running a slew of Docker images, and it’s fairly easy to convert things into Dockers to run on it. This may mean rebuilding stuff like PiHole, as well as my semi-hacky python scripts, and whatever other bits i might need. Again, not impossible, but a lot of work.
  3. Much of the logging does not get written to the SD Card. i’ve gone in and softlinked much of it to write to a USB3 drive attached to the Pi. This is because Linux is “write heavy” and that can burn through SD card write cycles pretty quickly. Again, not impossible (since i can probably mount the USB drive and alter the Docker config files to use that drive), but … well, you can see the pattern emerging.

i have no doubt that things would be a lot easier if i were to have started from scratch yesterday. i didn’t. There’s definitely sunk cost at play here, but sadly, i can’t ignore it because it’s actually functional.

:: Ec-COVID-Nomics

SARS-CoV-2, the virus behind COVID-19, causes a terrible disease, on a lot of fronts. The thing i really can’t get over is folks that say stuff like this:


[Image: A NextDoor post where someone proudly claims they’re going to a 40-50 person gathering because the disease has a high survival rate.]

i mean, sure? If you’re young-ish and fortunate, you do have a fairly good chance of getting through it alive. Hooray?

Of course, that’s not really the problem, at least, that’s not the most significant problem you face in the US.

Let’s consider what you face if you get the disease.

First off, there’s dealing with the disease itself. For some folks, it’s nothing. As in they have no symptoms whatsoever. Other folks require hospitalization. How your body reacts to COVID is pretty much anywhere in-between, and there’s no knowing what it will be. There are also potential long term considerations, since it’s still quite a new disease and nobody is quite sure how it will impact everyone.

You may only be “sick” for a few days, get “better” and, since you got the ‘rona, feel you don’t need to worry about wearing a mask. You’re now a spreader, because you’re still contagious; the virus is still very much present in your system. (You could also be asymptomatic, which means you have the disease but aren’t showing or feeling any symptoms. Feel free to read up about “Typhoid Mary” if you want a nice, historical record of how this could happen.)

But let’s say you’re unfortunate enough to actually require hospitalization. Because we’re America, once you’re released, you’re looking at a bill of anywhere from $32,000 to $73,000 (depending on how good your coverage is). It can also be a whole lot more than that, depending on where you get your care.

i don’t know if you’re able to buy a car right now out of pocket, but that’s kind of the numbers you’re looking at. If you’re not, you’re going to have to figure out where to get that money. Again, since we’re America, you’ll probably turn to the age old practice of finding someone to sue. If not you, don’t worry, your insurance company will probably do it for you. They don’t want to spend that sort of money either, so if they can find someone who exhibited clear, reckless behavior, you bet they’ll be right on top of that.

Of course, if you’re in the clear and someone you’ve contacted afterwards develops COVID, well, let’s just say that announcing your open defiance of strongly suggested health guidelines may not be quite as bold as you had thought.

(i honestly believe that this is the major reason that the US has not implemented Contact Tracing like Canada has. i’m pretty sure someone figured out that having a clear path between plaintiff and defendant may not be fantastic.)

What’s more, again, since we’re America, and our health providers don’t like pre-existing conditions, this is something that could actually come back to haunt you years from now.

So, yeah, that’s why i have zero intention of going to large gatherings so long as COVID is still very much a thing.

:: Why Are You Doing That?

i’ve been doing a fair bit of mentoring lately. i guess because i’m obviously old and folks think i’ve got some wisdom about anything. To be fair, i am old.

Anyway, recently i got into a discussion with someone who’s been thinking a fair bit about her career. She started off doing data work, then did a bit of UI/Front end stuff, and just didn’t find it super fun or compelling. Honestly, very understandable.

i’ve always hated the semi-utopian thing about “Find a job doing what you love”. i’ll just come out and say that’s incredibly rare. There’s a reason that they can do shows about folks who manage to do that: they’re unique enough to be interesting. The rest of us? Yeah, we’re not as fortunate.

Don’t get me wrong. i have training for programming and some level of skill at it, but what makes me happy is not making servers go ping, but fixing problems and clearing tasks off of lists. i could do that anywhere and feel just as much sense of accomplishment. What makes me excited is not what i do, but why i do it.

Call me an idealist, but i actually really do want to make things better for people. To that end, i view personal privacy and security as currently woefully lacking. It’s out there, but it’s not the path of least resistance, and so folks tend to skip over it. Working for mozilla gives me the daily opportunity to make things easier, more secure, and more private. That’s the reason i’m still working there and not, i dunno, CTO of some ad network that resells organs harvested from orphans or something.

What i do is not glamorous. My peers and i keep a bunch of back-end services running. We’re not going to be top of HackerNews. Heck if we get double digit count of stars on github, we’ll wonder what the hell happened. Still, we have around half a billion active connections and deliver messages in less than 100ms, juggle nearly a petabyte of encrypted user data, and write in the latest version of Rust because it’s the most performant and cheapest option for doing all of that.

Still, i’m working for a company that’s philosophically aligned with my interests, so yeah, i’ll deal with the frustrations and stress just fine, thank you.

Of course, there are downsides. i will never be invited onto the stage to talk about what i do. That means that promotions and bonuses are rare events. You’ll never know if i do my job right, but you’ll sure as hell know when i do it wrong.

:: The Internet Hates Long Lived Things

First off, this is not about ageism. i’m talking about long lived connections. There are a few folk out there that believe that you can hold a connection between two devices open forever. This is not the case. There are a lot of reasons that a great many things will actively fight your long lived connection. So, here are a few insights from someone who has dealt with Very Long Connections in Webpush and was once naive like you.

Why does the internet hate long lived connections?

Short answer: Money.

Longer answer:
The internet is not free.

Everything about the internet costs money, because everything requires either power or devices. Devices are way more costly because you not only need to buy and power them, you need to shelter, maintain, inspect, and eventually replace them. This includes everything from colocation farms to servers to cables to the conduits that carry the cables, and the folks whose jobs it is to do all that sheltering, maintaining and inspecting. The costs may be near infinitesimal for a 10 byte ping, but they’re there, and they add up surprisingly fast.

i’ll also add that connections between devices have a software cost. Turns out, there are a limited number of connections that a given computer can accept. There are also constraints depending on the language you use, how much memory you have installed, how fast your CPU is, and how many files you need to have open. There are fun ways to tweak that number and get really high counts, but if you’re doing any actual work with those connections, you’re going to hit that upper limit. If you’re doing real, serious work (like running TLS so things are secure), boy golly are you going to hit that number, and it’s not going to be anywhere near that 10 million connection mark someone hit with Erlang.
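If you want to see where your own box tops out, the relevant knobs are visible from the shell (a sketch; the /proc paths are Linux-specific):

```shell
# Per-process limit on open file descriptors. Each TCP connection costs
# at least one, so this caps how many sockets one server process can hold.
ulimit -n

# System-wide ceiling on open files across all processes.
cat /proc/sys/fs/file-max
```

Those numbers are why “just hold every connection open forever” stops being a plan somewhere well short of infinity.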

So, in that sort of world, connections that are basically doing nothing but tying up resources are not going to stick around. You may not want to pay for them, and neither do any of the dozens of intermediary companies that want to maximize profits. They’ll spot a connection as being underused and will simply drop it, since there is probably some other company that wants to use it to send lots of capital-producing data.

There are tons of reasons a connection could be killed at any time, and a whole lot of incentive to ignore any requests you might make to keep a low bandwidth connection up. This includes the various “Keep Alive” packets helpfully provided by protocol authors. Those tend to be very lightweight, dedicated Ping/Ack packets that are sent on a regular cycle. They’re useful if you’ve got a lull for a few minutes, but anything longer than that and the connection is toast. You’re better off crafting a NoOp type message that you fire off regularly. Granted, i fully expect that those will be dropped in the future too, once providers use stuff like machine-learning packet inspection to further reduce costs and free up “idle” connections.

Well, what about using stateless UDP instead of stateful TCP?

It’s not a bad idea, really. It’s the reason that QUIC is the base for HTTP/3, and it’s very clever about making sure that packets get handled correctly. Packets are assigned Connection IDs, and cryptography is isolated so data corruption doesn’t cause blockages. Even so, if the connection is severed, it’s still dependent on the Client getting back to the Server, and the server needs to be at a known, fixed address. That’s neato for things like HTTP, but less so for things like WebPush, where the client could be waiting hours or days for a response. Unless the client is actively monitoring the connection (remember, built-in KeepAlive packets ain’t enough), it’s basically doing long polling, so you’re kind of back at square one.

(There’s definitely something to be said about that for things like WebPush. WebPush’s “Immediate receipt” requirement, like relativistic travel, depends a great deal on the perspective of the parties involved. That’s a topic for another post.)

So, be mindful, young protocol developer/designer. The internet is out to get your long lived connection dream and will dance on its grave at every opportunity.
