Pete-L
User

3 posts

Posted on 12 August 2016 @ 12:08
Firstly, as this is my first post, I'd like to say hello to everyone here!

Right, down to business: I'm building a Debian 8 x64 box and was hoping to get some configuration advice from everyone here. Here is my hardware:

2x Xeon E5-2620 v4 @ 2.10GHz
128GB Memory
Supermicro X10DAI
2x LSI 9361-8i PCIe 3.0 Controllers (1 full and 1 empty)
8x 1TB Samsung 850 Pro SSD (RAID5 on one of the controllers, no hot spare)
2x 512GB Samsung 950 Pro NVMe (PCIe 3.0)

The server is running VMware ESXi 6.0 to separate the hardware from the guest OS, allowing easier upgrades; however, it will be a dedicated host for this one Debian guest. The Debian server will be running Elasticsearch for indexing, where read performance is the priority rather than write performance.

Inside my Debian VM I have added a drive backed by the RAID5 volume and two drives that map to the two NVMe cards. My initial thought was to create a standard pool on the RAID5 volume and then add the two NVMe cards as cache devices, but I wanted to know whether anyone has suggestions for getting every last bit of performance out of this setup.

# Create the pool on the RAID5 volume, mounted at /zfs
zpool create -f -m /zfs pool0 /dev/sdb
# Add both NVMe devices as L2ARC (read cache)
zpool add -f pool0 cache /dev/sdc /dev/sdd
# Treat all writes as asynchronous
zfs set sync=disabled pool0
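
A quick way to sanity-check the layout afterwards and see how much read traffic the cache devices are actually absorbing (same pool and device names as above, figures will obviously vary):

zpool status pool0
zpool iostat -v pool0 5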

Any guidance much appreciated!
DVD_Chef
User

128 posts

Posted on 12 August 2016 @ 16:56
Are you saying that you will be using the LSI controller to create that RAID5 group? Normally when using ZFS you want to give it as direct a connection to the drives as possible, or it cannot properly perform its error-correction magic. All of my builds have the RAID cards flashed to act as dumb HBAs, or, if that is not possible, configured in pass-through mode instead.
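
As a rough sketch, assuming the eight SSDs were presented individually to the guest (hypothetical names /dev/sdb through /dev/sdi), the ZFS-native equivalent of that RAID5 group would be a single raidz vdev, so ZFS handles the parity and checksumming itself:

# Hypothetical device names; substitute whatever the HBA/pass-through actually exposes
zpool create -f -m /zfs pool0 raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi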
Pete-L
User

3 posts

Posted on 23 August 2016 @ 11:08
Hi, sorry for the slow reply. I don't believe the card supports pass-through/JBOD.