Goodbye Daisuke Satō

We had some sad news this week in the manga world, as translated by the Anime News Network:

Writer and manga creator Daisuke Satō passed away due to ischemic heart disease on March 22. He was 52. His immediate family have already held a funeral service, and his younger sister Yūko Shinmyō was the chief mourner.

This hit home pretty hard. My dad is still in recovery having just had surgery for this last month. My thoughts are with his family who didn’t get to feel the relief and joy that my sister and I did.

Satō was a prolific author of fascinating alternate-history books, and the writer behind the Imperial Guards manga. I’ve been meaning to check out more of his stuff; now I will.

For most anime and manga fans though, he’ll go down in history as the literary genius behind Highschool of the Dead, the obsessively over-the-top apocalyptic epidemic zombie mutation virus horror series without regard for decency or physics!

Joking aside, it was a friggin’ amazing series; equal parts Shaun of the Dead and serious commentary. It remains the only horror (and ecchi, let’s be honest!) anime series I’ve ever finished and enjoyed.

I'm 31

A pretty low-key birthday today, but that’s always been my style. I was terrified about turning 30, but recent family adventures render me happy, relieved and feeling very fortunate for making it this far ^_^.

Here, have a figlet:

 _____ _
|___ // |
  |_ \| |
 ___) | |
|____/|_|

Or this one!

  ****   ** 
 */// * *** 
/    /*//** 
   ***  /** 
  /// * /** 
 *   /* /** 
/ ****  ****
 ////  //// 

As far as years go, my 30th on this planet was amazing. I met so many of my foreign friends in New York, Philly and New Jersey; made progress on several personal fronts; advanced in my career at a company with colleagues I’d consider friends. After so many years of veritable shit, I have much to be thankful for now.

I’m looking forward to next year, when I become fully 32-bit compliant. This whole time I’ve been backporting binaries from future me, and recompiling them for my insufficient 16-bit brain.

ECC in AMD Ryzen

There was that period in the 2000s when anyone worth their salt built their game machines with AMD. Athlons were faster, cheaper, and had that underdog status.

I’ve been looking for an excuse to build another AMD machine since my last one a decade ago. My first game machine in years almost had an FX, but I got an i5 when it became obvious Intel had a clear thermal advantage; an important consideration for Mini-ITX builds.

Fast forward to this year, and I’d decided on the budget Xeon E3-1220 v5 for my Microserver replacement NAS. And lo and behold, AMD threw this down:

ECC is not disabled. It works, but not validated for our consumer client platform.

Validated means run it through server/workstation grade testing. For the first Ryzen processors, focused on the prosumer/gaming market, this feature is enabled and working but not validated by AMD. You should not have issues creating a whitebox homelab or NAS with ECC memory enabled.

yes, if you enable ECC support in the BIOS so check with the MB feature list before you buy.

In the words of Spock: “fascinating.” Provided your board has support, ECC memory is within the reach of consumer tech for the first time. I’ve always wondered why ECC was limited to high-end workstation and server rigs.

Now I’m considering a Ryzen for this NAS tower!

Keep it simple, Ansible

In case my love for Ansible weren’t obvious, I thought this line in their best practices section was great:

Keep It Simple

When you can do something simply, do something simply. Do not reach to use every feature of Ansible together, all at once. Use what works for you. For example, you will probably not need vars, vars_files, vars_prompt and --extra-vars all at once, while also using an external inventory file.

If something feels complicated, it probably is, and may be a good opportunity to simplify things.

One of my old programming lecturers once said clear code was better than clever code. I think that applies to sysadmins as well.
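To make that concrete, here’s a hypothetical minimal playbook (the host group and package are invented for illustration): one play, one task, and none of those variable mechanisms in sight.

```yaml
# Clear beats clever: a complete, useful playbook in a handful of lines.
- hosts: webservers
  become: yes
  tasks:
    - name: ensure nginx is present
      apt:
        name: nginx
        state: present
```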

Sila’s gamble

Below is a list of names from spam email over the last week, along with the first association that came to mind.

  • Silas Gamble: I hope they didn’t lose much.
  • Nila Martin: I wonder if they’re also short.
  • Terrell Cain: You’ve done it again.
  • Vicente Paul: And he’ll make you pay for it!
  • Claudine Russell: Floral teapots.
  • Violet Russo: A pilot who thinks women aren’t developers.
  • Elmer Watson: A clumsy hunter in a Harry Potter film.
  • Earl Foster: Classy name can’t save bad beer.
  • Maude Guzman: And then there’s Guzman.
  • Ryan Smith: Saving someone going to Washington.
  • Mr Business School: He must be huge.
  • Clement Carney: Pope on a ferris wheel.
  • Sue Nunez: Why, what did Nunez do?
  • Dean Collins: Su-Su-Ssudio!

Python interpreter for Ansible

Last week I talked about using Ansible for FreeBSD automation, but forgot to address the other elephant in the room: Ansible can’t find Python on FreeBSD hosts.

The problem

As per its Linux heritage, Ansible defaults to the following Python path:

/usr/bin/python

FreeBSD (and NetBSD, and Solaris, and macOS with Homebrew) put Python elsewhere, which Ansible can’t find. It’s a curious design decision, given we’ve had the following portable shebang recommended for years:

#!/usr/bin/env python
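The env trick works because env(1) searches PATH for its argument. For illustration only, here’s a rough Python sketch of that lookup (not Ansible’s actual code):

```python
import os
import sys

def resolve(cmd, path_dirs):
    """Mimic env(1): return the first executable named cmd on the path list."""
    for d in path_dirs:
        candidate = os.path.join(d, cmd)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

# The running interpreter resolves from its own directory:
exe = sys.executable
print(resolve(os.path.basename(exe), [os.path.dirname(exe)]))
```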

This issue thread suggests the Ansible developers don’t see this as a bug. So it’s up to us to work around it for platform-agnostic playbooks.

Solution 1: Symlink

As part of your bootstrapping process, you can install Python from pkgng or ports, then symlink it into the place Ansible expects:

# ln -s /usr/local/bin/python /usr/bin/python

This works, but is fragile and nasty. We can do better.

Solution 2: Grouped FreeBSD hosts

In your Ansible hosts or inventory files, group your FreeBSD hosts and apply a var to them:

[freebsd]
bsd_host

[freebsd:vars]
ansible_python_interpreter=/usr/local/bin/python

Solution 3: All hosts

If you only target FreeBSD hosts, you can set the var for all hosts:

[all:vars]
ansible_python_interpreter=/usr/local/bin/python

Solution 4: All hosts for env Python

Hey wait a minute, I’ve got an idea. You can do the same thing as above, even if you have a mix of different hosts:

ansible_python_interpreter="/usr/bin/env python"

Huzzah, it works! This will now become part of my Ansible boilerplate.
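Pulling it together, here’s a sketch of a mixed inventory (hostnames are hypothetical):

```ini
[linux]
web01.example.internal

[freebsd]
nas01.example.internal

; env resolves whichever Python each platform provides
[all:vars]
ansible_python_interpreter="/usr/bin/env python"
```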

Guaranteed market share

Galen Gruman wrote this interesting tidbit at the end of a CIO review for Microsoft Teams:

Microsoft is the underdog here, and relying on its installed base is a dangerous strategy—as Microsoft should know from its Yammer, Windows 8, and Windows Phone debacles. Microsoft’s imprimatur no longer guarantees a product’s adoption. It needs to actually be good. Yes, there are IT shops that will give Microsoft years to get things right—they prefer it to relying on a small company like Slack or Atlassian—but that’s the same miscalculation Microsoft made with Windows Phone and Windows 8.

It’s amazing how this has changed from the 1990s and 2000s.

I also had to look up what imprimatur means:

An official license to publish or print something, especially when censorship applies. [..] (by extension) Any mark of official approval.

It’s also a noun in Czech, French and Latin, for those who wanted to know. Imprimatur, not Microsoft Teams.

Safari in 2017

I used Safari as my default browser for a week in 2015, but went back to Firefox. I wondered if much had changed in the intervening time, so I tried again.

The bad

  • The lack of favicons still makes horizontal tabs difficult to differentiate.

  • The narrow address bar persists, with a huge waste of space either side. I appreciate the Apple UX teams are trying to reduce the need for user-facing URLs, but we’re not there yet, and we need to see them.

  • No sidebar tabs, or extensions to get them. Once you’ve used a browser with stacked tabs, shoehorning everything into a thin horizontal bar feels like madness.

But the good

  • Ellis Tsung’s uBlock Origin for Safari has mitigated many of the problems I had with Safari two years ago. For privacy and ad whitelisting, you needn’t look further.

  • It’s still smooth as silk; definitely the fastest browser on the Mac.

The Zip Insider

I’m a bit of an Iomega aficionado. They were the quintessential 1990s consumer IT company, and even had coloured peripherals before the original iMac.

They made a name for themselves with their early Bernoulli boxes, but the Zip drive was their breakout device. It never reached critical mass like the floppies they attempted to usurp, but they had the creative and business markets cornered until writable CDs came around.

Most of their devices came in external and internal versions, with parallel port, SCSI, IDE/ATAPI and later USB. They had some pretty clever tricks; the parallel port Zip could pass through a printer connector, and the external SCSI Jaz drive could be connected to a parallel port with an active adaptor if needed.

My first Iomega device was an internal, 100MB ATAPI Zip drive. The width of the disks meant the drive fit (barely!) into the spare 3.5” bay I had. I loved that little beige drive, but was a bit envious of friends who had the external blue one. It looked so cool.

A year later, I loaded up the “Iomega Tour” included in the Zip drive’s driver CD, and saw this slide for an internal Zip drive that looked nothing like the one I had:

So cool! Granted it took up an entire, larger 5.25” drive bay, but it had the same colours and lines of their external drive. It was so much cooler than the tiny beige box that I had.

Problem was, nobody seemed to stock this obscure drive. In one of my nerdier escapades, I even took the step of printing out the slide and showing it to bemused Sim Lim Square and Funan Centre staff, without success. It never appeared in eBay searches, not for want of trying. I couldn’t find any reference to it online. Reverse image searches didn’t show up anything. After a while, I assumed it must have been a prototype.

That is, until today! The unit above is an original Zip Insider, bought from eBay almost 20 years after I got my first drive. Not to get all Malcolm Gladwell on you, but it turns out this was their earliest internal SCSI variant. They must not have sold well, given SCSI was limited to Macs and high-end PCs at the time while IDE was everywhere.

I only have a small problem right now; I don’t have a spare machine to put this in. I’m thinking it’ll end up in the Supermicro homelab box I’m building, just because.

Ansible with FreeBSD

I use Ansible where possible at work; it’s really wonderful stuff for Linux. Unfortunately, its support for the BSDs has never been fantastic, evidenced by their zero-dependency claim when a Python interpreter is required!

Given the dearth of BSD Ansible material online, I thought I’d share some tips I’ve learned since trying it out. This is all valid as of Ansible

Bootstrapping a fresh FreeBSD install

Since I wrote my first playbooks, the Ansible BSD docs now list a process using the “raw” method to bootstrap dependencies on a fresh FreeBSD install:

ansible -m raw -a "pkg install -y python" bsd_host

This itself makes some assumptions. The pkgng binary package manager is only available by default on 10.x and above, and requires bootstrapping with the “pkg” command first.
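Since a stock install needs pkg bootstrapped before it can install anything, the raw module can do both steps in one play. A sketch, with an assumed freebsd inventory group:

```yaml
# gather_facts must be off: fact gathering itself requires Python on the host.
- hosts: freebsd
  gather_facts: no
  tasks:
    - name: bootstrap pkg and install Python using only raw SSH commands
      raw: env ASSUME_ALWAYS_YES=YES pkg bootstrap && pkg install -y python
```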

The Joviam Cloud makes it trivial to create a base FreeBSD image with required packages (such as Python, the Saltstack client, etc), clone from it as a template, and inject your SSH keys on start. I’ll probably stick with this approach, but it’s good to know we can get closer to starting from scratch.

Using pkgng

Pkgng has been the default FreeBSD package manager since 10.0-RELEASE. Ansible includes a module for it, albeit with less support than the standard Linux tools. It should look familiar to apt and yum users:

- name: install/upgrade/confirm figlet package is installed
  become: yes
  become_method: sudo
  pkgng:
    name: figlet
    state: latest

Unfortunately, it doesn’t have a provision for package pinning. This is important so your custom builds from ports don’t get clobbered by newer, generic builds in pkgng.
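pkg(8) can pin packages itself with pkg lock, so one workaround (sketched here with the command module, since there’s no dedicated Ansible feature for it) is to lock your custom build after installing it:

```yaml
# Locked packages are skipped by pkg upgrade. Note pkg lock isn't naturally
# idempotent; in practice you'd add changed_when/failed_when tuning.
- name: lock custom nginx build so pkg upgrade won't clobber it
  become: yes
  become_method: sudo
  command: pkg lock -y www/nginx-devel
```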

Using Portinstall

These days I try to use binary packages where possible. nginx-devel is the exception, because the binary package doesn’t include the headers_more extension, which is all but mandatory now for privacy and SSL headers.

The portinstall module gives you:

- name: verify nginx package is installed
  become: yes
  become_method: sudo
  portinstall:
    name: www/nginx-devel
    state: present

What’s not clear is how to define custom build options. You can drop to a shell to define them during the make process, but this isn’t idempotent: regardless of whether it’s installed or running, you’ll be building it each time.
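One partial workaround, assuming your options were already saved with make config, is to guard the shell step so it only runs when the binary is absent. A sketch, not full idempotency:

```yaml
# "creates" skips the task when the file already exists, so the port is only
# built once; upgrades still need the guard removed or pointed elsewhere.
- name: build nginx-devel from ports with previously saved options
  become: yes
  become_method: sudo
  command: make -C /usr/ports/www/nginx-devel install clean BATCH=yes
  args:
    creates: /usr/local/sbin/nginx
```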

As I said above, because the pkgng module doesn’t have a provision for pinning, a pkg upgrade will potentially overwrite your custom ports when a new version comes out. It should be easy enough to drop to a shell to do this, but it’s part of the workflow that still needs to be done manually.


Ansible is a cinch on Debian, and I want to use it on my personal FreeBSD boxes as well. Provided you only use binary packages and bootstrap it using the first process above, it works great. For custom ports, things get complicated quickly.

When I have more answers to these ambiguous cases, I’ll share them here.