BIBIN
User

7 posts

Posted on 6 January 2016 @ 18:22
Hello,

We are currently testing ZFS on Linux as a storage platform for our VPS nodes, but we don't seem to be getting the performance figures we expected. Can you suggest what we should be tweaking to reach higher IOPS?

The hardware is SuperMicro with the MegaRAID 2108 chipset on a daughter card in each server. We tested three servers: pure SSD with 4 x 480GB Chronos drives, 4 x 600GB SAS 10k drives with a 480GB SSD cache, and 4 x 1TB SAS 7.2k drives with a 480GB SSD cache.

We set the onboard RAID controller to essentially JBOD (a single-drive RAID0 per disk, with the controller cache turned off). We got the best performance using RAID-Z2 with LZ4 compression. Here are the results we saw (a rough sketch of how the pools were created follows the table):

Server | RAID | Filesystem | Read speed | Write speed | Read IOPS | Write IOPS
4 x 480GB Chronos SSD (pure SSD) | Soft Z2 | ZFS without compression | 4.1 GB/s | 778 MB/s | 23025 | 7664
4 x 480GB Chronos SSD (pure SSD) | Soft Z2 | ZFS with LZ4 compression | 4.6 GB/s | 1.8 GB/s | 47189 | 15715
4 x 600GB SAS 10k + 480GB SSD cache | Soft Z2 | ZFS without compression | 4.0 GB/s | 486 MB/s | 10234 | 3413
4 x 600GB SAS 10k + 480GB SSD cache | Soft Z2 | ZFS with LZ4 compression | 4.8 GB/s | 2.2 GB/s | 51056 | 17077
4 x 1TB SAS 7.2k + 480GB SSD cache | Soft Z2 | ZFS without compression | 4.1 GB/s | 1.4 GB/s | 53486 | 17840
4 x 1TB SAS 7.2k + 480GB SSD cache | Soft Z2 | ZFS with LZ4 compression | 4.4 GB/s | 1.7 GB/s | 37803 | 12594

It doesn't seem like there is a big difference between the pure SSD setup and the others, even without the SSD cache on the other setups. Is there something we are missing here or something we should be looking into? We were expecting iops to be a lot higher than the results.
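For reference, the pools were created roughly along these lines; this is a sketch, and the pool name and device paths are placeholders rather than our exact commands:

import subprocess

POOL = "tank"                                              # placeholder pool name
DISKS = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]   # the 4 data drives
CACHE = "/dev/sde"                                         # the 480GB SSD on the HDD servers

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

# One 4-disk RAID-Z2 vdev, ashift=12 for 4K-sector drives
run(["zpool", "create", "-o", "ashift=12", POOL, "raidz2"] + DISKS)

# LZ4 compression for the "with lz4 compression" rows
run(["zfs", "set", "compression=lz4", POOL])

# Add the SSD as a read cache (L2ARC) on the HDD-based servers
run(["zpool", "add", POOL, "cache", CACHE])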

Thank you for your help!
CiPHER
Developer

1199 posts

Posted on 7 January 2016 @ 12:49
How did you perform those benchmarks?

Because HDDs can only process about 60 random IOps at 5400rpm and 90 IOps at 7200rpm, and a RAID-Z1/2/3 vdev delivers roughly the IOps of a single disk. So your 4-disk array should be doing 60 - 90 IOps, not 12,500 or 17,800. Those IOps are being served primarily by your RAM cache (ARC), not by the disks.
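As a back-of-the-envelope check of that reasoning (the 5400/7200rpm figures are the rules of thumb above; the 10k SAS figure is a common ballpark, not something measured in this thread):

# Rough raw random IOps you can expect from the disks themselves.
# A RAID-Z1/2/3 vdev delivers roughly the random IOps of one member disk.
PER_DISK_IOPS = {"5400rpm": 60, "7200rpm": 90, "10k_sas": 140}

def vdev_iops(disk_type, vdevs=1):
    """Expected random IOps: about one disk's worth per RAID-Z vdev."""
    return PER_DISK_IOPS[disk_type] * vdevs

# One 4-disk RAID-Z2 vdev of 7.2k drives -> ~90 IOps from the platters,
# nowhere near the ~17,800 measured, so the benchmark is mostly hitting ARC.
print(vdev_iops("7200rpm"))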

Benchmarking with L2ARC usually doesn't tell you much either, because a benchmark issues I/O in a random fashion rather than the repetitive access patterns where L2ARC shines.

I think your benchmark is not very good. Realistic performance should be MUCH lower for the HDD setups and much higher for the L2ARC and pure SSD setups.

That said, 50,000 read IOps is not bad even when RAM caching is involved. What figures were you expecting?
BIBIN
User

7 posts

Posted on 8 February 2016 @ 14:54
Hi,

We tested on a bare-metal server, so it may be that the reported IOPS were served by the physical RAM cache (ARC) rather than by the disks. We are using the fio tool to measure IOPS. We are planning to create a KVM VPS on a dataset in the zpool to test the disk IOPS. Please advise us.
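For example, something like this might take ARC largely out of the measurement before we move to the KVM test; a rough sketch, where the dataset name, file size and fio options are placeholders rather than a fixed plan:

# Benchmark a scratch dataset with ARC data caching disabled, so the
# numbers reflect the vdevs rather than RAM.
import subprocess

DATASET = "tank/fio-test"          # placeholder scratch dataset
MOUNTPOINT = "/tank/fio-test"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

run(["zfs", "create", DATASET])
# Cache only metadata in ARC so test reads actually hit the disks.
run(["zfs", "set", "primarycache=metadata", DATASET])

run([
    "fio", "--name=randrw",
    "--directory=" + MOUNTPOINT,
    "--rw=randrw", "--rwmixread=70",
    "--bs=4k", "--size=8G",          # use something larger than RAM on the real box
    "--ioengine=libaio", "--iodepth=32",
    "--numjobs=4", "--runtime=120", "--time_based",
    "--group_reporting",
])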
Nietschy
User

8 posts

Posted on 21 February 2016 @ 15:02
I watched a discussion about that matter recently.
https://www.youtube.com/watch?v=B_OEUfOmU8w
It starts at about 28 minutes in: a storage architect joins the discussion and goes into a lot of in-depth detail, including your question.

Generally you have to tune the system-wide "queue depth" parameter to get SSD performance (around 1h03m). A high queue depth kills hard drives but really favours SSD storage. The sad thing is that you can then only run SSD pools on that system, because the parameter applies to the whole system, not per pool.
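For what it's worth, on ZFS on Linux the knobs in that area are the per-vdev active I/O limits exposed as module parameters under /sys/module/zfs/parameters. A small sketch for inspecting them; the values shown are purely illustrative, not a recommendation, and as said they apply system-wide:

# Inspect (and optionally raise) the ZFS on Linux per-vdev queue depth limits.
from pathlib import Path

PARAMS = Path("/sys/module/zfs/parameters")
TUNABLES = {
    "zfs_vdev_sync_read_max_active": 32,    # illustrative value
    "zfs_vdev_async_read_max_active": 32,   # illustrative value
    "zfs_vdev_async_write_max_active": 32,  # illustrative value
}

for name, new_value in TUNABLES.items():
    param = PARAMS / name
    print(name, "=", param.read_text().strip())
    # Uncomment to actually change it (needs root, resets on module reload;
    # put it in /etc/modprobe.d/zfs.conf to persist):
    # param.write_text(str(new_value))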

By the way, he talks about your RAID controller approach at around 55 minutes...

Another very informative article about SSD and trim (linux): https://blog.algolia.com/when-solid-state-drives-are-not-that-solid/