A social network’s inaccuracy

I got this message from The Facepalm Book, a social network I log into at least once a year, maybe even twice:

Join groups to connect with people who share your interests. RECOMMENDED FOR YOU: [..] GET COLLEGE HOMEWORK HELP [..] Canberra Local Meetup Group [..] PENRITH REGION: “Buy, Sell, Swap, Free, Trade”

I haven’t been in COLLEGE for years, I’m not a Canberra Local, and I don’t live in the PENRITH REGION. You’d think this would be basic, entry-level stuff they could glean even from the precious little data they have on me, and what they creepily pieced together from friends and family.

One thing that will save us from malevolence is incompetence.


Journalism: make them care

Adam Davidson wrote this in the context of American politics, but I think it just as easily applies to discussions on online privacy and security, and to every country in the world:

Any reporter who thought it was important to point out that most Americans don’t know $THING are in a different position from the one I’ve been in. Our core job is to report on things people don’t know they should care about. And to make them care.

I even feel that responsibility as a silly guy with a blog.


Firefox’s situation reminds me of OpenSSL

In 2014 a simple but critical OpenSSL vulnerability, Heartbleed, was disclosed, affecting the security of hundreds of millions of websites. We rapidly realised the entire industry had come to depend on this one underfunded, understaffed, and underappreciated community to maintain a critical piece of Internet infrastructure. Companies and the wider community committed to funding its future development, and new projects such as LibreSSL forked its codebase.

The latest layoffs at Mozilla hint at a similar situation, though fewer people are talking about it.

How we got here

It’s hard to overstate Firefox’s role in the creation of the modern web. Mozilla didn’t just offer an alternative browser born from the ashes of Netscape Navigator and the Mozilla Suite; it ended the dark, monoculture days of IE. This had two effects:

  1. Broad web standards could be proposed and put into practice because no one player owned or dominated the conversation.

  2. Developers got back into the habit of testing sites against multiple rendering engines, and justifying to their managers that it was necessary.

Now we’re faced with every major browser being built upon WebKit and its derivatives like Blink (referred to as WebKit here for the sake of brevity). Microsoft went as far as saying they weren’t judging their Edge browser on standards compliance, but on whether it rendered the same as WebKit. Edge has since moved to Blink itself, the same engine behind Chromium, Chrome, Opera, Naver Whale, Vivaldi, and others.

In other words, we’re perilously close to a monoculture again. There are a few important differences this time, but I fear they’re overstated.

Why it’s the same, again

The most commonly argued point is that WebKit is a better shepherd of open standards than IE was. This is true, but while Google and Apple weren’t pushing us to use ActiveX or Silverlight, they aren’t above unilateral action either. AMP, mandatory HTTPS, shorter certificate durations, and platform exclusivity have all been dictated outside independent standards bodies, thanks to their market reach and clout. And let’s not forget Google bafflingly tried to argue that Android was more open because it included Flash, another proprietary browser plugin. The use of this tech as a marketing bullet point set back its deprecation by years.

But what about innovation and compatibility? WebKit has dozens of browsers targeting multiple platforms, whereas IE kept so many sites tied to Windows. I remember the dark days when I had to keep a Windows 2000 VM just so I could log into my Singaporean banking site. Yet these companies also have their own ambitions: Google’s ad-driven business model gives it a powerful ulterior motive to hamper meaningful progress in online privacy.

Cross-engine testing is also slowly, but noticeably, starting to feel like a lost cause. It’s as though I’ve gone back twenty years, only this time I’m told to “just” download Chrome instead of IE. Or worse, sites simply don’t work. I’m right back there again spoofing my user agent, with memories of Lou Bega playing through Winamp, and my Palm Treo buzzing on the desk next to my school work. This isn’t progress.

The web depends on Firefox more than it realises, just as it did and does with OpenSSL. And we risk forgetting again at our own peril.


Rubenerd Show 412: The wandering mug episode

47:35 – An impromptu discussion on everyone’s favourite comestible beverage conveyance, among several other unrelated topics. Recorded July 2020, and only now getting around to producing it... gulp! Hey, that’s the sound someone drinking from a mug makes.

Recorded in Sydney, Australia. Licence for this track: Creative Commons Attribution 3.0. Attribution: Ruben Schade.

Released August 2020 on The Overnightscape Underground, an Internet talk radio channel focusing on a freeform monologue style, with diverse and fascinating hosts; this one notwithstanding.

Subscribe with iTunes, Pocket Casts, Overcast or add this feed to your podcast client.


Banpresto’s 1994 Sailor Mercury poster

You know that feeling upon seeing something so specific from your childhood that you hadn’t thought about in years? After what could be described as a challenging week, it was such a happy surprise to see an ultra-high resolution scan of this old graphic appear on a ton of the ’booru image boards!

For some context, is a phrase with three words. My sister and I grew up watching Sailor Moon like so many kids in the 1990s. I think we liked how flawed and real the characters were allowed to be; the DIC-dubbed version made them out to be superhero role models with life lessons, but they dealt with the same mundane stuff we all had to. It also subtly introduced us to Japanese tropes, art, and culture.

The cute, blue-haired bookworm Mizuno Ami was easily my favourite character. She was intelligent, careful, and shy, but fiercely loyal to her friends and could summon tremendous courage when push came to shove. She also carried a portable computer before the age of smartphones, back when I coveted such devices above everything else. She was so cool.

I got into the original Japanese versions of Sailor Moon in my first couple of years of university, and soon realised how much more mature it was compared to the sanitised versions DIC produced for the west. I also had enough disposable income to trawl through eBay, and found the above poster for my dorm wall. Even during my darkest moments in those years, she was there offering encouragement with her smile and slightly-awkward pose that so typified her character. Come on Ruben, if I can do it, you can too!

Alas, while I was on a trip back home to Singapore, water leaked from the roof of the dorm building and down the sides of the wall, destroying most of the posters I’d put up. I peeled it off but it disintegrated in my hands. I won’t lie—as opposed to all the other times?—I was gutted.

Now I can print it again, and maybe this time frame it! Clara and I were thinking of putting her by the door to offer us encouragement before we leave to tackle the outside world.


Goodbye to the Three Beans in North Sydney

Coffee shops are where I get much of my writing and work done. I tend to find a handful of them wherever I live and type away in them for hours at a time. For me there are no environments more conducive to thought, enthusiasm, and energy, and I don’t think it’s just the caffeine! It also means I develop quite an attachment to them, which makes their closure all the more sad when I revisit an old suburb or city where I lived.

Earlier this week I discovered the Three Beans in Greenwood Plaza was gone. Clara and I got our first apartment together up the street, so I used to spend a lot of time sitting there writing many of the posts you probably read a few years ago.

Photo of the Greenwood Plaza atrium, showing the skylight and empty space where the coffee shop used to be.

The staff were friendly, and had my order prepped the moment they saw me standing there. The coffees were also among the better brews in the area. I thoroughly enjoyed sitting under the huge glass dome of the shopping centre atrium, which let in plenty of warm, natural light. All that’s left now are the curved floorboards where the counter used to be, some spare chairs, and a billboard in the background perhaps explaining why it had to close up.

North Sydney is overwhelmingly a commercial area, so I’m not at all surprised the lockdowns have taken their toll. I hope the owners and staff are okay.


Is it the tool, or the people using it?

If everyone uses a tool a specific way that ends up causing problems, is it the fault of the tool, or the operators?

The temptation is there to place the blame squarely on users. It’s why we use phrases like idiot-proof, or the poor worker blames their tools, or PEBKAC. Certainly I’ve shaken my head at those hammering square pegs into circular holes, then blaming the peg.

What do you mean your platform doesn’t support RDP for your FreeBSD templates? Okay fine, as long as Linux does it.

But it’s not always clear cut. Ill-conceived or poorly-implemented software will necessarily attract bad uses. A lock-picking set resembling a potato peeler will be used to prepare crisps, despite frustrated responses from its designer. You don’t want it used that way? Well, then why does it look like a potato peeler?

Slack is a perfect example, though it applies to many other chat applications. So much electronic ink has been spilled saying the software isn’t the problem, it’s that people use it to replace email, or don’t set boundaries, or create too many rooms, or post too frequently, or that it’s merely dysfunctional company culture writ large. Medium writers seem especially enamoured with this concept.

That all may be true, but if the tool happily accommodates people projecting their problems onto it, doesn’t it share some of the blame? If not, should it?

Good software is improved when its operation causes problems, even if they weren’t foreseen, or even the fault of the designers. Bad software is defended by only blaming users.


Technical support airing personal details in public

Customer service and support is hard, and don’t believe anyone who claims otherwise. Shows like Thank You For Calling have built entire series on how it can go wrong from both sides, and how it should be done. I’ve been on the giving and receiving end of that transaction many times, and it’s rarely a fun exercise. My heart goes out to support staff who deal with awful people, and customers who go for weeks without answers to their problems.

There’s an infinitely broader discussion to be had here about expectations, training, remuneration, the viability of SLAs, and basic human decency. This post is specific to one thing that irks me about so much infocomm support: how cavalier they often are with personal details in public.

This is how it goes:

  1. Someone who’s felt slighted or ignored will take to a forum saying customer service didn’t meet their expectations, with the hopes that the company will take the hit to their reputation more seriously.

  2. The company will reach out with an apology, a request for follow-up, and an assurance that customer support is a priority and that they’ll be learning from the experience, etc.

  3. The customer either thanks them, or goes nuclear saying it’s too little too late, that it shouldn’t have taken airing dirty laundry in public to get concrete action, and that they’ll move to another provider.

The company is now in a tough spot. Any act of assertiveness in defence of their position will almost certainly be construed as covering up or making excuses, whether or not they’re in the right. Contrition is rarely seen as genuine. Ignoring the issue will just generate more heat. And matching the customer’s aggressiveness will only end in disaster. We’ve all seen each of these outcomes.

(This is why public relations exists as an occupation, and why I’m self-aware enough to know I shouldn’t be allowed anywhere near it!)

But where I draw the empathy line is when a customer’s private information gets used as a public defence. I routinely see providers comment on the dates and times of calls, what was said in emails, and the names of specific staff they spoke to inside the customer’s company. I understand these are seen as a way to rebuke misleading or false claims made by customers, and quite often the quotes are paraphrased, but it still strikes me as a tremendous breach of confidence, and one of the more unbecoming, unprofessional behaviours I see. It’s the electronic equivalent of losing your cool, and it shows.

But it’s even more than that. Divulging this information doesn’t help their cause; it broadcasts to the world that they don’t take information security seriously. I’ve only been in the industry for a decade, but I know this is usually emblematic of a wider cultural issue within companies, one that goes beyond a single customer’s complaint on a web forum. A whirlpool, you could say. That alone should ring alarm bells.


You lock your data with us, we cannot fail

One of the most important things I’ve heard in the last few years was Michael Dexter’s comment about OpenZFS at AsiaBSDCon 2019. I’ve paraphrased it here a few times:

Once we lose data once, on any platform, people won’t trust us again.

Bruce Momjian also wrote about this in the context of databases last week, emphasis added:

Having worked with databases for over three decades, I have found there are three aspects of database software that make it unique: Variable workloads, performance requirements, and durability.

Most other software do not have these requirements. Because of them, databases typically need more tuning, monitoring, and maintenance than other software, like Java applications or even operating systems.

He linked to an article he wrote in 2012: You Lock Your Data in Postgres — We Cannot Fail. It’s a good read, and also introduces the responsibility of supplying reliable hardware. But the stakes are still fantastically high to get this right on the software side, and to tolerate the real world conditions in which it’s expected to work.

I’d trust Postgres over other DBs for mission critical work in a similar vein to what Michael and Bruce raise above: it’s proven itself durable and trustworthy. Trust is hard to earn, and easy to lose.


Bruce Schneier on blockchain tech

This article from Wired resurfaced recently on The Twitters. I was ready to go into it with a grain of salt the size of a skeptical journalist, until I saw it was by Bruce Schneier. He’s more than earned all our trust and respect, and he wears the same flat caps I do.

He didn’t pull any punches on the tech’s necessity, emphasis added:

Do you need a public blockchain? The answer is almost certainly no. A blockchain probably doesn’t solve the security problems you think it solves. The security problems it solves are probably not the ones you have. (Manipulating audit data is probably not your major security risk.) A false trust in blockchain can itself be a security risk. The inefficiencies, especially in scaling, are probably not worth it. I have looked at many blockchain applications, and all of them could achieve the same security properties without using a blockchain—of course, then they wouldn’t have the cool name.

He expands on the emphasised point:

Does the blockchain change the system of trust in any meaningful way, or just shift it around? Does it just try to replace trust with verification? Does it strengthen existing trust relationships, or try to go against them? How can trust be abused in the new system, and is this better or worse than the potential abuses in the old system? And lastly: What would your system look like if you didn’t use blockchain at all?


Uncommon Advansys Iomega Jaz Jet SCSI-II cards

Before the advent of modern serial interfaces in the 1990s, you had the option of using parallel ports or SCSI for connecting external PC drives. Parallel ports were ubiquitous and supported printer pass-through, but they were slow even by the standards of the day. SCSI was daisy-chainable and could surpass the performance of an internal drive, but far fewer people had SCSI cards. Including me, until we got an EPSON scanner back in the day.

Iomega offered a solution in their Jaz Jet PCI SCSI Accelerator card. This allowed people to connect the Jaz drive, which only came in SCSI, and the Zip Plus, which offered higher speeds than their standard parallel port Zip drives.

But here’s where things got a bit weird. Look on eBay and on driver sites for this card, and you’ll invariably be told they were rebadged Adaptec AHA-2930Us. Windows 95’s Add New Hardware wizard even installed a driver for this specific card when I selected the Jaz Jet PCI SCSI Accelerator, which proceeded not to work.

What was going on!? I checked the underside of this card I got from a family friend years ago, and realised it said ABP-960. The primary silicon also didn’t have Adaptec branding anywhere; it said Advansys.

View of the card showing the model number and Iomega logo

I did a search for both and came across valvestate’s comment on the Vintage Computer Federation forums, emphasis added:

Hi Everyone. New guy here. Found this place just after I got mostly everything working – setting up DOS 6 on a scuzzy removable disk on a Pentium 3 machine I have lying around. Don’t ask why, I have a masochistic fascination with older removable storage. Figured my first post can be a contribution with all the hair I pulled over it.

You and me both! There are few things as fascinating as vintage computer and Hi-Fi disk systems.

So after surfing the net trying to find DOS drivers and trying them all for the Iomega Jaz Jet SCSI PCI card, I kept getting an error with the ASPI drivers saying that no adapters were found. Supposedly it was a rebadged Adaptec 2930U, but mine was not, it was an Advanced System Products card, labelled as ABP-960U. The card worked in Windows with some TLC, but still wouldn’t in DOS.

I had a hunch and pulled the PCI ID in Windows, looked it up online, and found my card had a slightly different Subsystem ID from what the database showed for an ABP-960U. I hexedited the driver, advaspi.sys at byte 8D83, changing 10 to 30, and it worked. If you ever run into this, just search for 1310 and change it to 1330. In my file, there was only one instance of the word 1310.
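
If you ever need to apply the same patch from a modern Unix shell, something like the following would do it. This is a sketch only, using the offset and byte values from valvestate’s comment, so take a backup and check the bytes yourself first:

# Keep a copy of the original driver before touching it
cp advaspi.sys advaspi.sys.bak

# The two subsystem ID bytes should read 13 10 before the patch
xxd -s 0x8D82 -l 2 advaspi.sys

# Overwrite the 0x10 at offset 0x8D83 with 0x30 (octal 060), in place
printf '\060' | dd of=advaspi.sys bs=1 seek=$((0x8D83)) conv=notrunc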

I fortunately didn’t have to go that far to get mine to work. Using the Iomega Tools floppy that came specifically with the Jaz drive, then choosing the specific model in the Windows 95 Add New Hardware wizard, did the trick:

Screenshot of Windows 95 C showing the Iomega Properties pane for the Jaz drive

And people say I do pointless things on my day off.


Coalgirls signing off, and toxic people didn’t help

I definitely wouldn’t have ever downloaded anything from fansubs, or contributed to them in any way. But Coalgirls were arguably one of the more famous groups I wouldn’t have downloaded from in the 2010s. This was their last post, written around this time in 2017:

It has been over a year since I last subbed anything, and as a result, my mental health has improved a ton. I’m sure you know, but I have paranoid personality disorder, which I may have actually developed from fansubbing. But whether it came from fansubbing, something else, or if I was just born with it, I’ve managed to get it under control over the last year by engaging in other activities and with other people.

I don’t want to re-agitate it with all the progress I’ve made, so I am going to officially drop any open project I still have. I apologize to those waiting patiently for me.

This has become a recurring theme here lately, but I’m really starting to tire of trolls, snarks, and shitposters. They’re not half as witty or funny as they think they are, and for each of them having misplaced fun in a vain attempt to fill a void left by a lack of empathy, a short attention span, and an inability to connect intelligently with others, there’s a victim suffering quietly in the darkness.

This person did something, tirelessly, for free. Questionable copyright and legality aside, these fansub communities were for years the first and only exposure so many people had to anime, especially in the early days when official English subtitles and releases didn’t exist. This spawned entire conventions, merchandise exports, and eventually convinced distribution companies that there was a market for anime in the West beyond Astroboy, Sailor Moon, and Pokémon. We owe a debt of gratitude to these communities.

The fact others felt entitled not only to critique, but to make this volunteer’s life miserable, is pathetic on their part, and had real-world consequences.

But ever in search of a silver lining, I found one here too. The free speech these people erroneously invoke as justification to be free from consequences can just as easily be applied to writing Codes of Conduct. The mere mention of such a thing seems to repel these people like kryptonite, which is a brilliant side effect.


Today’s word: Allocable

I learned a new word today from a spelling suggestion. Here’s the definition from WordNet, which is still up:

S (adj) allocable. (capable of being distributed)

I’ve been using allocatable in all my technical documentation since I started my career. I’m torn; allocable is succinct and can be found in more dictionaries, but allocatable sounds closer to allocate, which renders the language more approachable. Or is it appropable?

Which do you think is better? Let me know.


The Galaxy Towers

Photo of the tower complex overlooking the Hudson river

Today I learned of the Galaxy Towers in New Jersey, in the eastern United States:

Galaxy Towers [..] are a trio of 415 feet (126 m) octagonal towers located at 7000 Kennedy Boulevard East in the southeastern corner of Guttenberg, New Jersey, United States, overlooking the Hudson River. The towers were built in 1976 [..]

I love Brutalist architecture. As opposed to so many modern buildings that pretend to be interesting with painted-on lines and coloured cladding panels—many of which are a fire hazard—Brutalist buildings and those fashioned in a similar style were unabashedly utilitarian. They’re also such a snapshot in time now.

Thanks to Alexander Krivenyshev for the photo and uploading it to the Commons.


To static site, or not, again

It’s that time of year when I reassess whether I want to keep using a static-site generator, or go back to maintaining a hosted CMS. Those of you who’ve read my silly blog here over the years have witnessed me thinking out loud about this many times.

It was an easier decision to move back to a CMS when I generated the site with Jekyll, on account of it taking half an hour to build. Optimisations and simpler themes cut this down, but it was still a barrier to me writing, which defeats the entire point of having a blog. Hugo now generates the entire site and any pushed updates in a matter of seconds, so that concern is moot.
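
If you’re curious, timing a full rebuild is a one-liner, and Hugo prints its own build time too. A sketch, with the flag only there to clear out stale files first:

# Rebuild the whole site from scratch and time it
time hugo --cleanDestinationDir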

Ongoing maintenance is the other consideration. I shepherd enough servers at work and home to not want to be on the hook for another stack, even though running one did let me mess around with more frontend web development back in the day. In the last year though I’ve ended up running some Ghost blogs and Lychee photo hosting for friends and family, all of which could easily host my blog here too with minimal extra resources and time.

Ghost uses Node, which I have reservations about. But the UI and experience are slick, and having SQL makes batch editing posts fast. I log onto sites I maintain and immediately want to jump in and use it myself. That’s the real reason for this post, if I’m being honest!

So really the only things keeping me here are momentum, and the fact my posts and themes are in version control. That last point is nice, not only so I can roll back mistakes and keep a nice history of the site, but because it also acts as a backup. But even then, I have backup scripts for the Ghost blogs I host that dump the databases and write out changed Markdown and XHTML files, written back when I wanted insurance in case I ever returned to a static site.
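
Those scripts are nothing fancy either. Here’s a minimal sketch of the database and content half, assuming a MySQL-backed Ghost install; the database name and paths are examples, and the Markdown/XHTML export is a separate step I’ll spare you:

#!/bin/sh
# Nightly Ghost backup sketch: dump the database, then sync the content directory
mysqldump --single-transaction ghost_prod > "/backups/ghost-$(date +%F).sql"
rsync -a --delete /var/www/ghost/content/ /backups/ghost-content/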

I’m not sure. Which probably means everything will stay the same. Isn’t that how it always goes?


When the industry dismisses qualitative metrics

My post yesterday about computers not feeling faster, despite performing better in certain benchmarks, made me think about qualitative metrics in IT in a broader sense. Things like usability, accessibility, even attractiveness and whether a system is nice to use. Barring very specific circumstances, I consider these as core to a system’s success as whether it technically functions.

I’m glad to see these metrics getting more attention, especially in free and open source software communities. But go to any technical forum, bug tracker, or mailing list, and they’re still often seen as fringe ideas to be ridiculed, discounted, or dismissed. And we all end up paying for it in engineering time and in helping confused people down the line.

I can empathise to an extent. Qualitative metrics are, by their very definition, hard to measure. You can apply a rational, scientific method to verify and improve them, but it takes far more work and the outcome can still be frustratingly ambiguous. It’s easy then to dismiss the entire exercise as an open-ended waste of time, and an expensive opportunity cost when technical features could be worked on instead. Throw into the mix that so many of these projects are volunteer efforts, and an outsider coming in saying the usability and appearance of their software could be improved is a recipe for resentment.

But this cuts both ways. Qualitative metrics are still routinely seen as a bolt-on you can add later, like deciding where to put the steering wheel after you’ve made the dashboard. Unless you’re thinking about how the system will be used when designing it, you end up with software like PGP that’s technically excellent and entirely unusable by anyone in the real world. Dismissing people for not reading the manual or the Wikipedia article on public key cryptography isn’t just unhelpful, it’s how you breed resentment on the other side. It also holds back meaningful progress, in this case with decentralised, secure communications.

Accessibility is fortunately now being taken more seriously, and is becoming harder to defend when your software ignores it. But other metrics like ease of use and appearance are still routinely dismissed as shallow and unimportant. Spend five minutes talking to an industrial engineer about SAP and see how far you get with saying such concerns are foofoo, and limited to those you deem unintelligent. Because that’s really what so much of this comes down to: I can use the system, so therefore anyone else who struggles is a luddite who only appreciates form over function. Lulz Apple, amirite?

We’ll never convince some people that making the world a nicer place is a worthwhile endeavour. So at the very least, a starting point should be that improving qualitative metrics leads to better use of the system. Because if you can’t use it, or don’t want to, what’s the point?


New computers don’t feel faster

Weird Al’s It’s All About the Pentiums was such a fun song when it came out, and now it’s a beautiful time capsule of the late 1990s. This part has stuck with me:

My new computer’s got the clocks, it rocks
But it was obsolete before I opened the box
You say you’ve had your desktop for over a week?
Throw that junk away, man, it’s an antique!
Your laptop is a month old, well that’s great
If you could use a nice, heavy paperweight

Remember when this was a thing? The euphoria at discovering how much faster your new computer felt compared to your old one? Then the craving for your next fix when a new machine came out to trounce it less than a year later? It was exciting. People called it planned obsolescence; it was really just the tremendous pace of technological progress.

(Perhaps it didn’t feel so egregious back then because you could upgrade things. Even my iBook could have its RAM and AirPort Wi-Fi card upgraded. Smartphones today are sealed units designed to be thrown away).

I remember going from the first machine I built as a kid with a Pentium MMX, to an HP Brio BAx with a Pentium 3, then a blueberry iMac DV with a PowerPC G3, then a DIY tower with an AMD Athlon XP, then an iBook. Each one looked faster on paper, ran games better, and booted Connectix Virtual PC machines more smoothly. But, more importantly, they felt faster. You could feel the money you spent.

Modern machines do better in specific benchmarks, like generating my seven thousand blog posts with Hugo in fewer seconds. But in day to day use, this current MacBook Pro feels no faster than the 2015 or 2012 models I used. This isn’t rose-tinted glasses, I booted and used them before writing this. I could not have said that about the difference between our family PC in 1992, and my iMac in 2000.

We’ve seemingly eked out all the performance improvements we can from silicon, and horrible Electron applications are more than happy to erase a decade of RAM upgrades, performance improvements, and energy efficiency. But it also just comes down to the maturity of the industry. We’ve largely decided what a desktop computer should look like. Even innovative companies like Apple have only been able to think of the widely-panned Touch Bar, and a new keyboard mechanism that was a step backwards in accessibility.

(All the innovation in the 2020s is happening on phones, with tablets like the iPad benefitting by proxy. We all know why, but it doesn’t change things for fans of desktop computing).

But there’s a silver lining to all of this. I haven’t bought a new computer since 2006; everything since my first-generation MacBook Pro has been refurbished or second-hand. It doesn’t fulfil my inner child like unboxing a new piece of kit, but I’ve saved thousands of dollars over the years, and as far as I can perceive they perform as well as anything I could buy new.

I’ve always supplemented my Macs with second-hand ultraportables, whether they be ThinkPad X series machines, or more recently the Panasonic Let’s Note. They’re cheaper than some dinners I’ve had, and desktop FreeBSD and Linux positively scream on them. In these cases I upgrade not for better performance, but if a newer model is significantly lighter, cuter (cough), or has measurably-better battery life. And for the few times where I need more grunt, I can ship it back to my Microservers at home, or spin up a cloud VM for an hour.

My rational side loves this about contemporary computing. But my inner child longs for the days when personal computers were exciting.


Type-Moon Racing Umu fig announced

It’s been a difficult week in many ways. Kris Delmhorst’s new album hugely helped, as did playing the silly summer event in everyone’s favourite mobile game.

But now Good Smile Racing and Type-Moon Racing have announced another figure of everyone’s favourite emperor from Fate/Grand Order, the Fate/EXTRA series, and Fate/EXTELLA… am I missing any? I’ve been in this franchise almost from the start and even I find it hard to keep track of it all.

The sculptor captured her subtly-cheeky expression and confident pose so well, and the detailing on her eyes really does Wadarco’s indelible art style justice. That single strand of hair angled off her ponytail is probably my favourite feature.

Photos of the new fig by Good Smile Racing and Type-Moon Racing showing Umu in a jumpsuit and a gigantic lance!

She won’t be shipping until September… next year. I can only imagine the disruption COVID has caused to the supply chains at the headquarters of these evil, wallet-parting companies. But if there’s anyone else who could get us through this bleak period, it would be Umu. Or at least, the promise of future Umu. Umu!


Working from home still tenuous sometimes

Yesterday Clara and I were having no end of trouble setting up and using our respective video conferencing sessions, web demos, and company chat. It was enough that I gave up trying to video in, and had to succumb to these newfangled things called phones to dial a string of numbers and only get audio out of it. It was weird, and very low quality.

We got this email about an hour into our adventures:

There is a service disruption in your area
Performance issues and packet loss affecting some services

Disruption started
Affected service: $username

Window Start Thu 20 Aug 2020 11:05AM AEST
Window End No ETA

I’m relieved when a telco at home, or an upstream provider at work, reports problems like this. It means the problem has been acknowledged, and presumably a NOC somewhere is investigating the cause and fixing it. Ambiguity is the most frustrating part of IT, especially when it comes to networks.

(This is one of the few ways I can empathise with doctors and car mechanics. They must constantly get asked by family and friends for advice on specific problems with insufficient information. Why does my car make this BBZZT sound? Why do I have this specific pain?)

I can tell immediately when our FTTB connection fails, because it feels as though everyone in our apartment building instinctively reaches for their phones to tether. Hundreds of people across dozens of floors drag the whole network down to a crawl, which I suppose is tempting to do when we’re all stuck in our carpeted homes.

We’re not in a higher-stage lockdown here, so Clara and I donned our masks and headed to a coffee shop in the local shopping centre to tether off another cell tower instead! Call it forced exercise.

Still though, the experience reminded me that despite our Malcolm Turnbull’d Internet in Australia, network infrastructure is one of the precious few things keeping us and the economy going at this stage. These implausibly tiny little wires strung through the ground into our buildings have replaced direct human interaction. You could say they’re the ultimate masks.


You don’t need tmux or screen for ZFS

Back in January I mentioned how to add redundancy to a ZFS pool by adding a mirrored drive. Someone with a private account on Twitter asked me why FreeBSD—and NetBSD!—doesn’t ship with a tmux or screen equivalent in base, in order to daemonise the process and let it run in the background.

ZFS already does this for its internal commands. For example, I used zpool replace to swap in a drive and fix an older RAIDZ2 pool, so I can just run:

# zpool status

Under status:

status: One or more devices is currently being resilvered.
The pool will continue to function, possibly in a degraded state.

The pool will also continue to be available, though potentially with reduced IO performance while it resilvers.

The only time I use tmux is during a large ZFS send/receive operation between machines over SSH. At that stage we’ve introduced networks into the mix, which even the most robust, trustworthy storage system in the world can’t guarantee will stay up!
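
For completeness, that workflow looks something like this, with hypothetical pool, dataset, and host names:

# Start a named tmux session so the transfer survives a dropped SSH connection
tmux new -s zfs-send

# Inside the session: snapshot the dataset, then stream it to the remote pool
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | ssh backup-host zfs receive backup/data

# Detach with Ctrl-b d, and reattach later to check on it
tmux attach -t zfs-send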