Apple’s rumoured 16-inch MacBook Pro

Dave Lee had an interesting take on why Apple is rumoured to be releasing a 16-inch laptop: they’re in the United States and still inexplicably use imperial measurements like the inch and furlong. But the other reason: they’ve painted themselves into a thermal corner with the current MacBook Pro hardware design.

Video: 16” MacBook Pro - Apple’s Last Chance

There’s precedent for this idea. Apple came out and said as much about their Mac Pro, and for similar reasons to those Dave discusses. Like the Mac Pro and its dual GPUs, Apple’s current laptops were designed with a strict thermal ceiling that looked reasonable against Intel and AMD’s roadmaps a few years ago. But the latest high-end crop of chips will use far more power.

I’ll almost certainly be wrong about this, but I don’t think beefing up a laptop to let it dissipate more heat sounds like Apple. They’d rather sacrifice performance to achieve the best battery life, weight, and thinness. Whether you and I think that’s a good idea is another story.

I can’t remember where I read the factoid that Apple sells more laptops used as desktop replacements than iMacs and Mac Pros combined. By which I mean laptops that spend their lives plugged in at a desk with an external display. From that vantage point, a beefy MacBook Pro with awesome performance would be a boon.

I’d give up a performance boost for a usable keyboard, mind. I’ve been using this 13” work MacBook Pro for a few months and I’m seriously contemplating carrying an external keyboard around to overcome the dull throbbing in my knuckles and finger joints. The butterfly key mechanism isn’t so much a bad keyboard as it is user-hostile. Even my GPD Pocket feels better.


Pop Up Parade anime figs

Anime fig collecting is a time-consuming hobby that sucks up your wallet and all your available free space, and it’s absolutely worth it. So much of our lives sits in digital realms that it’s nice to have physical anchors in the real world, even if in this case it’s a cute character from a series or game we love.

(Admittedly Clara and I have had to pare back our collections since we started living in studio apartments and reducing our amount of stuff, but we still keep an eye open for fun new ones).

Good Smile Company is one of the better known manufacturers, perhaps most famous for their line of Nendoroids and Good Smile Racing merchandise. Now the evil geniuses are capitalising on the public perception that figs are getting too expensive with a new fig line:

POP UP PARADE is a new series of figures that are easy to collect with affordable prices and releases planned just four months after preorders begin! Each figure stands around 17-18cm in height and the series features a vast selection of characters from popular anime and game series, with many more to be added soon!

Their first one is virtual idol Hatsune Miku, understandable given how iconic she’s become over the years. Both this series, and this new fig, present severe challenges to our decluttering and limited living space.


Homebrew no longer accepts options

I moved to Homebrew from MacPorts in 2011, and pkgsrc in 2008. I liked how it kept brews in their own directory trees, that brew definitions were easy to read Ruby files, and the tooling was simple.

But then today I was trying to install PerlMagick, like a gentleman, and saw this error for the first time:

$ brew install ImageMagick --with-perl
==> Error: invalid option: --with-perl

I’d been building with this for years. But sure enough, the options command returned nothing:

$ brew options ImageMagick
==> [crickets]

Something funky was going on. I did some digging, and in August last year options were removed from Homebrew. The justification was:

Options in formulae don’t produce a good user experience because they have to be built from source, we don’t test them in CI and each combination of options provides a new chance for new failures to occur.

I’m ${ADJECTIVE} disappointed, but I can empathise. Testing every permutation of options for every package isn’t feasible, especially for a bootstrapped open source project. Failures would likely also be blamed on Homebrew rather than the upstream software. To a user experience engineer or designer, removing this arguably lesser-used feature is an obvious and easy win.

But that’s not an issue with Homebrew, or package managers in general. It should be expected that people take on more responsibility for a package when they build it with custom options. That’s why binary packages are installed by default; providing build options is an explicit directive to override what the maintainers have chosen.

This change effectively renders Homebrew a binary-only installer, in line with the Mac App Store. They’re free to do this, but for those of us using Homebrew as the missing package manager for macOS, as per their own slogan, it renders it significantly less useful.

There will doubtlessly be well-intentioned workarounds suggested that are more complex than what we had before. Some of them may work great! But the message being sent is clear: package build options are a hindrance to user experience, and features critical to certain workflows may be revoked at any time.

I’ll see if I can work around this, but it’s also time to keep an open mind about other options. I appreciate all the hard work the Homebrew team have done over the years, but it’s not as good a fit for my use case as it once was.
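
As a first pass at that workaround, and I stress this is an untested sketch on my part, I might take the plain binary bottle and pull the Perl bindings from CPAN instead:

$ brew install imagemagick
$ cpanm Image::Magick    # assumes cpanminus; may need help finding Homebrew's ImageMagick headers

Whether CPAN’s Image::Magick builds cleanly against whatever ImageMagick version Homebrew currently ships is another question entirely.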


When client-side validation attacks

Step one: Being asked for an address by StarTrack.

Step two: Entering a postcode, and getting a list of suburbs.

Step three: Selecting and autofilling a suburb from the dropdown.

Step four: Being unable to progress because the autofilled value doesn’t pass client-side validation, and using Inspect Element to override it so the form would submit!

I understand the need for validation, and appreciate that these auto-fill address fields must have drastically reduced errors. But wow, are they brittle.


Our moving NBN adventure

Australia’s National Broadband Network (NBN) was envisaged with the same foresight that gave us a phone system and power grid. Other jurisdictions were able to pull this off with economies of scale and fibre optics, but it was politicised in Australia into an expensive hodgepodge with electronic gauge breaks, three-word slogans, and greenfield copper.

But I digress! This is my page to track the progress of moving our NBN connection to our new address.

  • 2019-02-05: Lodged ticket to transfer connection.
  • 2019-02-06: ISP asked for details. I responded.
  • 2019-02-07: ISP asked for more details. I responded.
  • 2019-02-11: Crickets. I poked the ticket.
  • 2019-02-12: Crickets. I poked the ticket.
  • 2019-02-13: Crickets. I poked the ticket, and emailed sales.
  • 2019-02-14: Response from sales with a phone number. Called.
  • 2019-02-15: Notice from NBNco saying technician booked for the 21st.
  • 2019-02-21: Might be installed.

Time to transfer as of the 19th February: two weeks.

Update: @Zoomosis pointed out my dates all said 2018, not 2019. The NBN may take a significant amount of time to provision, but at least it hasn’t been an entire year.


FreeBSD shared object libssl.so.8 not found

I was helping a client with their FreeBSD install on the weekend, and she was having trouble with pkg. We sorted it out, and I was granted permission to share.

When she attempted to update a package:

==> ld-elf.so.1: Shared object "libssl.so.8" not found, required by "pkg".

This is usually indicative of a system version mismatch, such as running FreeBSD 11.x packages on a system upgraded to 12 with freebsd-update. The recommended solution is to reinstall pkgng with:

# pkg-static install -f pkg

If this still doesn’t work, you can force pkgng to bootstrap:

# pkg-static bootstrap -f
==> Major OS version upgrade detected. Running "pkg-static install -f pkg" recommended.
==> pkg(8) is already installed. Forcing reinstallation through pkg(7).

It’s also worth checking your /etc/pkg/FreeBSD.conf to see whether it’s been switched from latest back to quarterly, if you’d elected to use the former before.
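
For what it’s worth, on a machine set to use latest the relevant line should look something like this; if quarterly has crept back in, that’s your answer:

$ grep url /etc/pkg/FreeBSD.conf
==> url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest",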

Now you can update your repo, then upgrade and install packages as before:

# pkg update
==> All repositories are up to date
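
From there, upgrading installed packages should behave as it did before:

# pkg upgrade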

Baby Shark on Wikipedia

Music Monday! This made my day: the song Baby Shark is categorised on Wikipedia under fictional sharks. Fictional sharks, doo doo d-doo doo doo.


The unfortunately-named Eero, bought

In 2016 I noted the explosion of tech and engineering podcasts being sponsored by a WiFi equipment manufacturer called Eero, as opposed to a mattress.

I thought it was surprising and a tad funny that their marketing department didn’t check whether it was a common abbreviation for an unfortunate term in this context. I chalked it up to them having the same self-deprecating humour I do. Or maybe they figured you can use Eeros to download ero. While you’re on a mattress.

Now a large and very well-known third party has bought them. I know the big fish eating smaller ones is increasingly—some may say depressingly—inevitable, but it’s not what people signed up for when buying these devices.

(If the argument is that people should have known they’d be bought, surely that’s a broader indictment of the entire consumer technology industry right now).

Eero made a big deal about protecting privacy. We’ll see how their new corporate stewards handle it; we’ve all learned to be skeptical when reading assurances that nothing will change. Because they’ll know whether you’re downloading ero with your Eero.


144dpi images in ImageMagick

The srcset HTML5 attribute, which delivers images based on display DPI, is so brilliant and elegant that it finally convinced me to move off the otherwise-superior XHTML+RDFa a few years ago. Here’s the syntax with two resolutions:

<img src="$NORMAL" srcset="$NORMAL 1x, $RETINA 2x"
    alt="Caption" style="width:$NORMAL-WIDTH" />

(I’ve moved to Markdown for most of my editing, but I generate this code myself. Markdown deliberately has no provision to specify image widths or alternative versions, so its output would always look awful now, unless you use site-wide CSS, which doesn’t carry through to RSS).

To feed into this, I convert each image bound for this blog into a Retina/HiDPI version and a regular version. Presumably I’ll eventually have to do 3x as well.
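
If and when that happens, srcset takes a 3x descriptor in the same way; $SUPER here is a hypothetical triple-resolution version:

<img src="$NORMAL" srcset="$NORMAL 1x, $RETINA 2x, $SUPER 3x"
    alt="Caption" style="width:$NORMAL-WIDTH" />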

This is how I create the Retina images in ImageMagick at 144dpi. This should work with GraphicsMagick as well, but I haven’t tested it:

$ magick convert "file.ext"  \
    -units PixelsPerInch    \
    -resample 144           \
    -resize 1000x           \
    "file@2x.ext"

Note the option order; ImageMagick chains these so changing the order will result in different output. Most images also start at 72dpi by default, so the smaller size can usually be accomplished with just a -resize.
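
For completeness, the regular version is just that plain resize. Something like the below, assuming the 1000 pixel wide Retina image above; the output filename is my own made-up convention:

$ magick convert "file.ext" \
    -resize 500x            \
    "file-1x.ext"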

The irony isn’t lost on me that a post about images doesn’t include one. Pretend I’ve inserted a work of visual brilliance here to demonstrate.


Delivering blog post text in RSS

RSS may have been done and dusted a decade or more ago, but I’ve been thinking a lot about RDF again of late, and have been exploring some of its implementations.

RSS 2.0—referred to as RSS henceforth for brevity—has a few methods for delivering textual payloads:

The <description> element

This comes by default in RSS, and is defined in the specification as containing an “item synopsis”. Most blogs I encounter include the entire text, including Dave Winer’s, who wrote the spec.

I follow the RDF/XML standard and include CDATA-escaped content as WordPress does, because it’s cleaner:

<description><![CDATA[<p>Here’s an entire post.</p>]]></description>

But Dave Winer and other authors escape their HTML instead:

<description>&lt;p&gt;Here&#x2019;s an entire post.&lt;/p&gt;</description>

The <content:encoded> element

This comes from an RSS 1.0 module namespace, which you declare with:

xmlns:content="http://purl.org/rss/1.0/modules/content/"

Its spec defines it as “an element whose contents are the entity-encoded or CDATA-escaped version of the content of the item.” This renders it the same as <description> in practice.

<content:encoded><![CDATA[<p>Here’s an entire post.</p>]]></content:encoded>

I used to include a CDATA-escaped version of blog post text using this, but removed it because it duplicated how I used <description>.

The <excerpt:encoded> element

This comes to us via WordPress, and is declared with the namespace below. The URL no longer resolves, though RSS only needs it as an identifier.

xmlns:excerpt="http://wordpress.org/export/1.2/excerpt/"

This allows a post excerpt to be included alongside, and distinct from, the full blog post content. In practice it carries a CDATA-escaped payload, even if it rarely has more than plain text.

<excerpt:encoded><![CDATA[It's about zettai ryouiki.]]></excerpt:encoded>

I use it alongside <description> to provide an abstract, which I also feed into Schema, OpenGraph and Twitter description fields.
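
Put together, a trimmed item using both ends up looking something like this:

<item>
    <title>An example post</title>
    <description><![CDATA[<p>Here’s an entire post.</p>]]></description>
    <excerpt:encoded><![CDATA[It's about zettai ryouiki.]]></excerpt:encoded>
</item>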

The <dc:description> element

This comes from the venerable Dublin Core, and is also referenced in the broader dcterms namespace. You include it with:

xmlns:dc="http://purl.org/dc/elements/1.1/"

The spec makes it clear it “may include but is not limited to: an abstract, a table of contents, a graphical representation, or a free-text account of the resource.” So arguably it overlaps the above.

<dc:description><![CDATA[It's about zettai ryouiki.]]></dc:description>

I leaned heavily towards using this, especially given I already use Dublin Core to ascribe authorship without leaking my email address, as the RSS specification’s own <author> element would require. But <excerpt:encoded> appears to be more widely supported in feed readers, at least from my own testing.
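
For reference, whichever combination you settle on gets declared together on the feed’s root element, something like:

<rss version="2.0"
    xmlns:excerpt="http://wordpress.org/export/1.2/excerpt/"
    xmlns:dc="http://purl.org/dc/elements/1.1/">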

Other namespaces

Podcast-specific namespaces, including iTunes and Yahoo’s abandonware MediaRSS, define their own description elements, presumably so an episode’s description can be distinct from the blog post used to describe and deliver the content. These are beyond the scope of this post, but worth keeping in mind if you’re specifically delivering podcasts or mixed content.