@Zoomosis on ZFS, tape, and external drives


A chat about LTO and Blu-ray burners last night led to a discussion on The Bird Site about backups, something I thought was especially timely given my post yesterday. I’m shamelessly quoting @Zoomosis’ entire thread here:

Tape storage is something I’ve always been curious about, but it doesn’t seem practical for home users: if the tape drive dies, it could be hard to fix or find a replacement. Blu-ray may be a bit similar; this is the only SATA BD drive I have, though they aren’t hard to find.

True, I hadn’t thought about that. I’ve got an Iomega Ditto drive out of pure nostalgic pointlessness, and I’ve used LTO at work, but I’d risk losing access if the drive itself vanished. The other issue is the steep barrier to entry: I’d probably need SAS, not SATA, and the cost of even getting a drive to read and write the carts discourages me, even if the carts themselves are affordable per gig.

Lately I’ve been pondering the idea of mirrored ZFS pools over USB 3.0 using inexpensive Seagate external drives. I’m not sure what USB 3.0 is like on FreeBSD but it’s worked well for me in Ubuntu.

USB 3 has worked great for me in FreeBSD for years, so I don’t think that’d be a problem. Whether it’s kosher or even a good idea though, I’ve also built what I called a “stonehenge” of external drives with mirrored zpools on top. I remember a ZFS storage expert at a conference whispering to me that it’s not an ideal use case, but that given its copy-on-write design it’s likely still more robust and reliable than using external drives with any other file system. Then they said not to quote them by name… read into that what you will!
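If you want to try a drive stonehenge yourself, it’s only a couple of commands. This is a minimal sketch, not a recommendation: the pool name and device nodes below are placeholders, so check `geom disk list` on FreeBSD (or `lsblk` on Linux) for your actual USB drives first, and note that creating the pool destroys whatever is on them.

```shell
# Create a two-way ZFS mirror from two USB external drives.
# da1/da2 are hypothetical FreeBSD device names; on Linux they
# might be /dev/sdb and /dev/sdc. This wipes both drives!
zpool create stonehenge mirror /dev/da1 /dev/da2

# Both sides of the mirror should report ONLINE.
zpool status stonehenge

# If a drive gets bumped or unplugged, bring it back and
# ZFS will resilver the mirror:
zpool online stonehenge /dev/da1
```

The win over a single external drive is that every block is checksummed, and a scrub or resilver can repair one side of the mirror from the other.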

(As an aside, I miss talking about servers over a Premium Malt’s).

I ended up replacing my HPE MicroServer companion cubes with a dedicated Supermicro box with 8x SATA ports because I was tired of dealing with external drives, their various power bricks, and all the extra cabling. But whether I should have or not, I had them running ZFS for years with few problems. I say few; the ones I had were entirely my own fault when I bumped a drive or the power board to which it was attached. But again, being in a ZFS mirror meant I was able to resilver after the fact. Performance was okay, though I never had active workloads running on them.

I’ve also been known to use USB3 headers on motherboards and SATA to USB adaptors to gain a few extra internal storage devices. I can feel the second-hand worry and angst among the FreeBSD GEOM developers from here.

The biggest problem may be if the drives are SMR. You might’ve written about this already, but a ZFS resilver or zpool scrub will tend to hammer both drives in the mirror, which SMR is particularly allergic to, potentially causing SATA/USB timeouts. I’ll just have to experiment.

Yeah, it sucks. I talked about it at a high level last year, but truth be told I’ve never actually used an SMR drive (inadvertently or otherwise). I was tempted by an earlier Seagate unit that was advertised as SMR to use for WORM applications like Plex, but gave up when I heard about its performance in RAID arrays.

At the moment I just use XFS and manually rsync from one drive to the other, which works but isn’t ideal.

What is it they say, the cobbler’s child walks barefoot? XFS can at least have metadata verification enabled, and rsync does checksumming. If everyone I knew, family or otherwise, even had a backup regime like that, I’d feel infinitely calmer. We all know most people don’t even have a single backup drive.
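For what it’s worth, that XFS-plus-rsync regime is also only a couple of commands. A minimal sketch, with hypothetical device and mount paths; adjust for your own drives:

```shell
# Format the backup drive with XFS metadata checksums enabled.
# crc=1 has been the mkfs.xfs default in recent xfsprogs releases,
# but it doesn't hurt to be explicit. /dev/sdc1 is a placeholder!
mkfs.xfs -m crc=1 /dev/sdc1

# Mirror the primary drive onto the backup. -a preserves
# permissions, ownership, and timestamps; --delete removes files
# that no longer exist on the source; --checksum forces a full
# content comparison instead of trusting size and mtime.
rsync -a --delete --checksum /mnt/primary/ /mnt/backup/
```

You don’t get ZFS-style self-healing of file data, but you do get verifiable metadata and an independent second copy, which is already more than most people have.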

Author bio and support


Ruben Schade is a technical writer and infrastructure architect in Sydney, Australia who refers to himself in the third person. Hi!

The site is powered by Hugo, FreeBSD, and OpenZFS on OrionVM, everyone’s favourite bespoke cloud infrastructure provider.

If you found this post helpful or entertaining, you can shout me a coffee or send a comment. Thanks ☺️.