My own drive reliability stats

Hardware

Backblaze publishes drive reliability data at scale, but what about a random person on the Internet with his small homelab setup? For all my talk about drives over the years, I’ve never shared my own broad experience.

These are the drive manufacturers I’ve had since 2017, with a failure defined as either multiple ZFS scrubs detecting bad sectors, or the drive outright refusing to power on.

  • 8× Western Digital Red and Red Pros, no failures
  • 5× Seagate, 3 failures, including 1 DOA
  • 8× HGST and WD Ultrastars, 2 failures
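
For what it’s worth, that failure criterion comes down to watching scrub results. Here’s a rough sketch of the check; the pool name tank is a placeholder, not anything from my actual setup:

    # Kick off a scrub, then review the result once it finishes
    zpool scrub tank
    zpool status -v tank

    # READ, WRITE, or CKSUM counts that keep climbing across repeated
    # scrubs are what I count as a failing drive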

The odd Seagate out of that five was an external unit I originally intended to shuck, but it ended up as scratch space for Clara’s Mac and is still going strong. I also haven’t bought a Toshiba since the IDE days, if only because nobody has ever had much stock in Australia or Singapore, though I’d be tempted to try one.

There’s no variable control here whatsoever. Two of the HGST drives were in my Debian Xen test boxes with XFS and DRBD over InfiniBand, but the rest were in various FreeBSD towers with OpenZFS, including a Microserver at my dad’s place that I do ZFS send/receives of family photos to. Two WD drives were also briefly in a NetBSD box when I was testing version 9.0’s ZFS support.
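
Those send/receives are nothing exotic. A minimal sketch would look something like this, where the dataset, snapshot names, and the backupbox host are illustrative placeholders rather than my real layout:

    # Take a new snapshot of the photos dataset
    zfs snapshot tank/photos@2024-06-01

    # Send it incrementally from the previous snapshot to the remote box,
    # receiving it unmounted on the other end
    zfs send -i tank/photos@2024-05-01 tank/photos@2024-06-01 | \
        ssh backupbox zfs receive -u tank/photos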

Therefore, these stats are almost completely pointless! I just thought it was interesting that while my experience with Seagate broadly correlates with what others have said, the former industry darling HGST has stung me too. The sound of their 8 TB drives failing harked back to the IBM DeathStar days.

I’m weird in that I’ve always ended up running personal drives in ZFS mirrors or RAID1. It just makes buying and replacing drives easier, especially when you can only afford a new drive every few months. That way, a dead drive is an inconvenience rather than a disaster while I deal with warranties, returns, and replacements. But it’s still a pain, which is why I value reliable drives.
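
The mirror-and-replace workflow is what makes that true. A minimal sketch, with the pool name and FreeBSD device names purely illustrative:

    # Create a two-way mirror from a pair of drives
    zpool create tank mirror /dev/ada1 /dev/ada2

    # When one half dies, swap in the new drive and let it resilver
    zpool replace tank ada1 ada3
    zpool status tank

One side of the mirror can fail outright and the pool keeps serving data while the replacement resilvers, which is the whole appeal.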

I haven’t been impressed with WD’s SMR shenanigans, but they’ve earned my trust when it comes to reliability. Maybe I can ask work one day if we can publish some of our larger cluster data next time we do a fleet upgrade.
