That complexity inflection point


This thought is still only half-baked, but I haven’t been able to get it out of my head all week. I’m not sure how else to describe it without launching into an example.

Almost all cars produced since their invention have consisted of the same rough components. They were made bigger to accommodate more people and goods, and their engines became more powerful and more efficient at extracting energy from dinosaur juice in the same physical space.

At some point all the gains that physics would allow had been wrung out, and engineers had to look elsewhere to supplement performance. Turbochargers were early additions that used existing energy in new ways. These days we have hybrid cars, which an engineer from the era of those early Model Ts could look at and not have the foggiest idea how they work. They wouldn’t call them “cars” the way they understand the term.

The same thing has happened with computers. Journalists attribute it to Moore’s Law, but there’s more to it than that. Computers, like cars, have had the same rough components, even if individually they’d be unrecognisable. Think storage, compute, memory, registers, etc.

When engineers started hitting the limits of silicon, we branched out into multiple cores and multi-threaded software to eke out more performance. Then we started offloading compute to remote clouds, mimicking the mainframes of yore. Apple’s huge performance gains over x86 with their M1 chip came from integrating previously disparate components into the ultimate system on a chip, reducing latency between memory and compute.

(Apple weren’t the first to do a system on a chip. Commodore’s TED incorporated sound, video, and IO onto one IC in the 1980s. There were plenty more before and since.)

But then we’re left with the same question. Is an Apple M1 chip a computer? It fits the classical definition of Turing completeness, and it’s fit for purpose. Yet those of us who grew up upgrading and replacing components in our own systems would see it more as an appliance. Then again, if it does the same thing as, and objectively better than, the tinkerable systems that came before, does it matter?

I suppose it’s the natural evolution of things. Something is invented, it’s improved until it physically can’t be any more, and then it has to evolve in different ways to continue being improved. Then the taxonomy changes.

Adding complexity to overcome physics is necessary, but it’s definitely one of the reasons I love retro computing, just as I’m sure vintage car buffs love their old machines. There’s a golden period with these things, before physics rears its head.

Author bio and support


Ruben Schade is a technical writer and infrastructure architect in Sydney, Australia who refers to himself in the third person in bios. Hi!

The site is powered by Hugo, FreeBSD, and OpenZFS on OrionVM, everyone’s favourite bespoke cloud infrastructure provider.

If you found this post helpful or entertaining, you can shout me a coffee or send a comment. Thanks ☺️.