smokey7722
User

4 posts

Posted on 4 June 2016 @ 00:27 (edited 01:25)
I'm looking to build a new system and could use a hand designing the best drive configuration. I am planning to purchase 24x 8TB HGST HUH728080AL4200 drives connected via an HBA, and the chassis will most likely have 256GB of RAM as well. I am looking for a level of protection that isn't going to cost me 50% of the usable storage. My original thought was 21 drives in a raidz3 with 3 hot spares, though that's pretty risky and, from what I understand, not the best design for performance. I could instead do two 10-drive raidz2 vdevs with 4 hot spares, if that would be better overall. I am still somewhat new to ZFS and learning, so I was hoping I could lean on some veterans here.

Single 21x8TB in raidz3, 129TB total usable, 3 hot spares (loss of 6 drive capacities to parity/spares)
Two 10x8TB in raidz2, 57.5TB each, 115TB total usable, 4 hot spares (loss of 8 drive capacities to parity/spares) - rough zpool commands for both are sketched below
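
Roughly what those two options would look like as zpool commands, as far as I understand the syntax (the pool name "tank" and the da0-da23 device names are just placeholders for whatever the HBA presents):

    # Option 1: single 21-disk raidz3 vdev plus 3 hot spares
    zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 \
        da11 da12 da13 da14 da15 da16 da17 da18 da19 da20
    zpool add tank spare da21 da22 da23

    # Option 2: two 10-disk raidz2 vdevs plus 4 hot spares
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
        raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19
    zpool add tank spare da20 da21 da22 da23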

Given the large quantity of RAM in the machine I was told not to bother with an L2ARC, though I would be using a pair of mirrored SSDs as a SLOG for the ZIL. Also, as these are 8TB drives, should I be concerned about rebuild times in a raidz2 configuration versus the benefit of the third parity drive in a raidz3 configuration?
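
In case it matters, this is roughly how I'd expect to attach the SLOG once the pool exists (again, the ada0/ada1 names are placeholders for the two SSDs):

    # attach the two SSDs as a mirrored log device for the ZIL
    zpool add tank log mirror ada0 ada1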

Thanks!!
Aro
User

20 posts

Posted on 4 June 2016 @ 18:05
Very good question. I'd say go with 3x8 z2 and add 1-2 spares via your motherboard connectors if you need them (a hot spare, or any disk in that role, can be SATA if SAS is not supported). What interests me more is how to configure them. I have 6x 8TB HGST HUH728080AL5200 (SAS) in a z2 now and they are not fully supported - I get no SMART information in the web interface. What I'm interested in is the best pool and filesystem configuration for these drives. I selected 4k, but I'm not sure it was the right choice. The HGSTs are 512e, so the physical sectors are 4k.
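
In case comparing helps, this is roughly how I check what my pool actually got; whether the alignment can be forced at creation varies by platform, so treat the second command as a sketch with placeholder names:

    # show the ashift the vdevs were created with (12 = 4k sectors)
    zdb -C tank | grep ashift

    # on OpenZFS-based systems the 4k alignment can be forced at pool creation;
    # on Solaris/illumos it is normally taken from the sector size the drive reports
    zpool create -o ashift=12 tank raidz2 da0 da1 da2 da3 da4 da5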
smokey7722
User

4 posts

Posted on 5 June 2016 @ 16:58
Physically there is no space for more than 24 drives in the chassis (AIC RSC-4ETS). The OS drives will be using the only two 2.5" slots (mirrored), which leaves 24x 3.5" slots for these drives. So that would leave 3x7 z2 (36TB usable each, totaling 108TB usable) and 3 hot spares.
Aro
User

20 posts

Posted on 6 June 2016 @ 15:22
Why not use 3x7 z3? As far as I understand, a hot spare needs to be replaced anyway. z2 likes even disk counts and z3 likes odd, so you would get better protection (43 percent of a vdev's disks can fail vs 28.5 percent with z2). You could even put in a PCIe SSD card and use all 24 bays in a 3x8 z2 config if you don't need the extra protection. And a hot spare is not permanent - it only covers you until you replace the original broken drive, and then another 30h+ rebuild starts. So unless you can't get to the server within 12h of a drive failing, I see no point in a hot spare. Just keep one extra next to the enclosure.
smokey7722
User

4 posts

Posted on 6 June 2016 @ 17:34
Going that route with one HSP per vdev ends up with 12 drives lost, which means I might as well go with mirroring at that point. My whole goal was to come up with a solution that did not result in a 50% loss. Hot spares are very much critical when using an OS that can handle them, like Solaris. I travel quite a bit and there are times when I may be gone for a week or even more. Not being able to put in a cold spare would mean either risking a failure or forcing a remote power-down to reduce the chance of one - neither of which is really an option.
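
For my own notes, this is roughly what I expect the spare setup to look like (placeholder pool and device names; whether a spare kicks in automatically depends on the platform's fault management, e.g. FMA on Solaris):

    # add three hot spares to the pool
    zpool add tank spare c0t21d0 c0t22d0 c0t23d0

    # optionally, auto-replace a failed disk with a new one inserted in the same slot
    zpool set autoreplace=on tank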
bitmap
User

26 posts

Posted on 10 July 2016 @ 22:38
Theories have been made that ZFS is more efficient (less space lost to padding blocks) with 11 disks than with 7 disks in raidz3, and you also get more usable, non-parity space from 11x+11x z3 than from 7x+7x+7x z3. If access speed matters more than the parity and padding overhead, then the three-vdev 7x layout should perform better than the two-vdev 11x one.

Don't worry about having to use every disk slot in the chassis. Having an empty slot or two will be quite handy if you need to replace a failing disk without removing the old one first: the rebuild can still read and checksum-verify the good parts of the old disk, which is much faster than reverse-calculating them from parity. As for the number of hot spares, since FreeBSD doesn't yet auto-rebuild onto spares, you might be better off leaving the spares mounted in trays but slightly ejected from the chassis (cold spares) - unless, as you say, the machine is remote and you can't quickly get hands on. That way you don't put wear on the disks you will eventually rebuild onto; spares that spin the whole time end up the same age as the rest and risk starting to fail along with the other disks. You could insert the disks before you travel without needing to configure anything, so they are available if needed.
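
To put a command to it, rebuilding onto a disk in a previously empty slot while the failing disk is still present is a single step (placeholder device names):

    # rebuild onto da24 while the failing da5 stays in the pool and readable
    zpool replace tank da5 da24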

If you have some time to test, build it a few different ways, test the performance, then wipe and try a different configuration. No one configuration is going to be wrong as long as it meets your desired performance and reliability.
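
A rough test loop, just to illustrate (placeholder pool name; the dd runs are only a crude sequential test, and compression or ARC caching will skew the numbers):

    # crude sequential write and read test on the candidate pool
    dd if=/dev/zero of=/tank/testfile bs=1024k count=32768
    dd if=/tank/testfile of=/dev/null bs=1024k

    # wipe the pool and try the next layout
    zpool destroy tank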

One thing I can't recall whether you mentioned is where your boot device would be.
If you have the space in the chassis, a small mirrored SSD pair just for the OS might be useful (I think I saw Intel still makes a 56GB device) in addition to the SLOG SSDs, rather than putting your OS filesystem in the same pool as all that data. If you ever need to completely rebuild the OS, you don't touch the important data disks.

Finally, if this massive amount of data is mission critical, start thinking about your backup strategy before you build. You might consider ZFS replication to a similar server - maybe with less RAM, no SLOG, and fewer disks in raidz1 or z2, since it would only be for backup. If it's within the same building, a dedicated 10G Ethernet link will permit frequent syncs and a smaller delta of data loss should the master be totally lost. My backup server uses half as many SATA disks, slower but twice the size of the SAS disks in the master, though that obviously isn't possible when you're already on 8TB drives.
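
A minimal sketch of what that replication could look like (pool, snapshot, and host names are placeholders, and in practice you'd script the incremental part):

    # initial full copy: recursive snapshot, then send it to the backup box
    zfs snapshot -r tank@2016-07-10
    zfs send -R tank@2016-07-10 | ssh backuphost zfs receive -uF backup/tank

    # later syncs only send the changes since the previous snapshot
    zfs snapshot -r tank@2016-07-11
    zfs send -R -I tank@2016-07-10 tank@2016-07-11 | ssh backuphost zfs receive -uF backup/tank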
bitmap
User

26 posts

Posted on 10 July 2016 @ 23:00
Ahh, I see you did mention the OS on SSDs in the only two 2.5" slots. (That looks like a sweet chassis, BTW)

Something like this would permit you to externally hot-swap ssds using empty PCI slots, and get a few more inside the chassis:
https://www.amazon.com/StarTech-com-2-5in-Removable-Drive-Expansion/dp/B002MWDRD6/

DVD_Chef
User

128 posts

Posted on 11 July 2016 @ 19:41
bitmap wrote:
Something like this would permit you to externally hot-swap ssds using empty PCI slots, and get a few more inside the chassis:
https://www.amazon.com/StarTech-com-2-5in-Removable-Drive-Expansion/dp/B002MWDRD6/


I have successfully used these in the past to hold the mirrored boot drives for a couple of storage boxes. Much better than just using Velcro or zip ties to "secure" the bare drives in the case, and also allows easy swapping without opening the case.