The Internet’s short memory for retrocomputing
We’ve all been trained over the last two decades to solve problems by performing web searches. I have reference books, canonical documentation, and gigs of PDFs, but I’m just as guilty as everyone else of searching for a specific error message or function, especially when I’m in a hurry.
There’s a self-deprecating joke that much of the Internet is indirectly written by StackOverflow, just as we used to say half of it was glued together with Perl and shell scripts cribbed from woodcut-covered O’Reilly books. It’s likely true.
This works fine for contemporary systems, but the Web is a young and forgetful place. Information about systems that predate it tends to be sparse; much of what did exist has since been lost; and what remains is buried under similar-sounding material for newer systems.
There are a few reasons for this:
- More people means more attention, so there’s an incentive to write and document things. This does result in large quantities of low-quality information being churned out, but the law of large numbers still works in our favour.
- Businesses are contractually obligated to support their current software and systems, or at the very least provide documentation about how they work. Theoretically, maybe, hopefully.
- The people who wrote, maintained, and were interested in such software have either retired or moved on.
My hope is that as we trend back up the bathtub curve, so too will the amount of information about any particular piece of vintage computing. Not to mention all the new information about using old systems in a contemporary setting; I’m sure the original designers of Hercules ISA cards weren’t worried about upscaling their output and correcting aspect ratios for 4K widescreen displays.
I see three lessons here:
- I need more books from the time period, and to get used to referring to them again when I have issues.
- Comes after one.
- I’m rapidly realising that if I care about this kind of information, I should be archiving it and making it available too.