This is one of the better retorts to “the cloud is just someone else’s computer” I’ve seen; a few different posts mentioned it.

The web didn’t shut down recently because one of Amazon’s US East data centres went offline. It shut down because architects didn’t build redundancy into their stacks. Putting all your eggs in one basket, regardless of what that basket is made from, is a bad idea.

I don’t often talk about my work here, but I have meetings with clients whose self-hosted colo stacks go down more often than these well-publicised outages, and with others whose cloud deployments survive them without customer-facing downtime. Then you have people like Jason Tubnor, who’s run FreeBSD bhyve stacks on his own tin without problems for years, and McDonald’s, which hosted all their kiosk images on S3 without redundancy.

Maturity comes from realising that everything breaks, everything goes down, and nothing can be relied upon forever. How you deal with it, as Keanu would say, is up to you. How does your stack handle a subset of it disappearing? Does it have any form of failover? What technical and human contingencies do you have in place for when it needs to run in a degraded state? Because it will, someday, for reasons that may be entirely outside your control. It’s turtles all the way down.
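To make those questions concrete, here’s a minimal sketch (in Python, with entirely hypothetical backend names) of one basic form of failover: try a primary backend, fall back to a secondary, and tell the caller the response came via a degraded path so the rest of the stack can react accordingly.

```python
# Minimal failover sketch. Backend names and the fetch function are
# hypothetical stand-ins for whatever your stack actually talks to.

def fetch_with_failover(backends, fetch):
    """Try each backend in order; return (result, degraded).

    degraded is True when the result came from anything other than
    the primary (first) backend.
    """
    errors = []
    for i, backend in enumerate(backends):
        try:
            return fetch(backend), i > 0
        except ConnectionError as exc:
            errors.append((backend, exc))
    # Every backend failed: surface all the errors at once.
    raise RuntimeError(f"all backends failed: {errors}")

# Simulated outage: the primary is unreachable, the secondary works.
def fake_fetch(backend):
    if backend == "primary.example.net":
        raise ConnectionError("primary unreachable")
    return f"payload from {backend}"

result, degraded = fetch_with_failover(
    ["primary.example.net", "secondary.example.net"], fake_fetch)
print(result, degraded)  # payload from secondary.example.net True
```

The point isn’t the dozen lines of code; it’s that the degraded flag exists at all. Knowing you’re running on the spare tyre, and having decided in advance what to do about it, is the contingency planning most stacks skip.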

I do agree though that centralisation of so much Internet infrastructure in the hands of a couple of providers is a bad thing. If only there were alternatives to them!