Expanding our FreeBSD home file server


This is what I’d call a thinking-out-loud-about-personal-circumstances post, rather than anything prescriptive or useful for discerning computators in general. You’ve been warned!

Clara and I are running low on drive space on our OpenZFS file server, once again. We have a running joke that driveageddon seems to rear its fragmented head every August. Maybe it’s a self-fulfilling prophecy, though it’s files doing all the filling on these implausibly fast spinning platters of metal.

(Has someone made a discus anime?)

Our FreeBSD server is the centre of our world. It uses a combination of NetBSD and Debian VMs running in Xen (to be replaced with bhyve at some point) and FreeBSD jails to serve and delegate anything we can offload from our personal and work machines. I have other boxes for tinkering and testing, but this one runs the latest -RELEASE with as unexotic a configuration as I can make it. Vim is saying unexotic isn’t a word. It’s probably right.

My attitude for at least the last six years (possibly longer) has been to buy a pair of the largest drives I can afford, and to cycle out the oldest pair. 2019 was the year I finally said goodbye to a pair of HGST 3 TB units that had performed flawlessly for almost a decade. They’re now in anti-static bags in a safe-deposit box, acting as a cold backup for our most critical family photos and documents.

There’s a thought there: I haven’t had to replace a hard drive due to outright failure in a long time. But I dare not mention that here, lest I invoke the wrath of Murphy’s Law. Good thing I didn’t.

But here’s the thing. This time I’m not faced with the same space or chipset constraints, so I could add more drives instead of swapping. Last year I replaced our workhorse HPE Microserver with a refurbished Supermicro workstation board with 8× SATA and 2× NVMe (albeit one on a PCI-E daughterboard) and an old Antec 300 case with 8 LFF drive bays. I even considered getting an additional RAID controller, provided I could use it in JBOD mode for ZFS. That was an unconscionable number of abbreviations and acronyms, and I’m not even a network engineer.

You could argue the timing is great. Chia has driven up the cost of drives, meaning this year I won’t be getting as much of a capacity jump as I have in previous years. Granted, going from 4 TB to 10 TB drives would be nice, but because I insist on using ZFS mirrors for redundancy and ease of replacements/upgrades, that’s still only 6 TB of effective extra space for many hundreds of dollars. Adding a pair of drives instead would give me all of their capacity on top of what we already have.
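For the curious, the two approaches look something like this. This is only a sketch; the pool name `tank` and the `ada*` device names are hypothetical, and you’d substitute whatever `zpool status` reports on your own system.

```shell
# The old approach: replace each half of an existing mirror in turn,
# letting it resilver in between. The vdev only grows to the new size
# once BOTH disks have been swapped (with the autoexpand property on).
zpool set autoexpand=on tank
zpool replace tank ada1 ada3   # wait for resilver to finish
zpool replace tank ada2 ada4   # capacity grows after this one completes

# The alternative: add a second mirror vdev. ZFS stripes writes across
# both mirrors, and the whole new pair's capacity is usable immediately.
zpool add tank mirror ada5 ada6
```

The trade-off is that `zpool add` is effectively permanent on older OpenZFS releases; you can’t shrink the pool back down later, which is partly why swapping pairs has been my habit until now.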

It all makes sense, but my main concerns are still noise and heat. Clara and I live in a one-bedroom apartment now, which is much nicer than sleeping in a studio while the computer at the other end of the room loudly seeks and scrubs its ZFS pools on a recurring basis. But we work from home now, and I remember certain WD drives in my bedroom growing up, an experience I don’t want to inadvertently repeat. I’d likely tolerate it, but it’s not fair to Clara having something clicking and buzzing away within earshot all day.

We’ve lucked out thus far with our current HGSTs, WDs, and Seagates. The read/write heads on the SSDs are also so silent as to be practically non-existent (cough)! But I’ve read reviews of current larger drives from people complaining about noise; the WD Golds and Toshibas seem to draw frequent ire.

This post was as open-ended as the bag of kettle chips I regret eating. Maybe I need to do some Acoustic Research.

Author bio and support


Ruben Schade is a technical writer and infrastructure architect in Sydney, Australia who refers to himself in the third person. Hi!

The site is powered by Hugo, FreeBSD, and OpenZFS on OrionVM, everyone’s favourite bespoke cloud infrastructure provider.

If you found this post helpful or entertaining, you can shout me a coffee or send a comment. Thanks ☺️.