zsozso
User

13 posts

Posted on 2 April 2012 @ 16:48 (edited 16:49)
Hi,

It's my first post in this forum, although I've been reading it for quite a while now.
I've made a 10-disk raidz2 pool with 8 drives (the last 2 drives held the data from my previous pool) and 2 md devices.
(The pool was made using the gnop trick, so it should not be an alignment problem.)
After creating the pool I removed the md devices, so the pool became degraded but was still usable. I copied all my data over from the remaining two disks, then used them to replace the 2 unavailable devices. The pool resilvered in about 2x5h and all was fine.
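(For reference, the gnop trick I'm talking about goes roughly like this; the device names below are just placeholders for my GPT labels, not the exact commands I ran:)

# create a fake 4K-sector provider on one member so ZFS chooses ashift=12
gnop create -S 4096 gpt/tank1
zpool create tank raidz2 gpt/tank1.nop gpt/tank2 gpt/tank3 gpt/tank4 gpt/tank5 \
    gpt/tank6 gpt/tank7 gpt/tank8 gpt/tank9 gpt/tank10
# remove the gnop layer again; the ashift=12 setting stays with the pool
zpool export tank
gnop destroy gpt/tank1.nop
zpool import tank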

I tried benchmarking the pool in the degraded state with only 8 drives and got ~800 MB/s read and 600 MB/s write. After rebuilding the pool I now get 300-600 MB/s read and 500-600 MB/s write, which is not too terrible, but it should be faster than in the degraded state.
I checked with gstat and my suspicion was right: the last two drives are 99% busy while the rest of the drives are 40-60% busy.
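(In case anyone wants to repeat the check: something like the following shows the per-disk busy percentages; the refresh interval and the da filter pattern are just examples for my setup:)

# one-second refresh, only show the raw da disks
gstat -I 1s -f '^da[0-9]+$'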
My configuration:
  pool: tank
 state: ONLINE
  scan: resilvered 48K in 0h0m with 0 errors on Mon Apr 2 18:37:06 2012
config:

        NAME            STATE     READ WRITE CKSUM
        tank            ONLINE       0     0     0
          raidz2-0      ONLINE       0     0     0
            gpt/tank3   ONLINE       0     0     0
            gpt/tank5   ONLINE       0     0     0
            gpt/tank6   ONLINE       0     0     0
            gpt/tank9   ONLINE       0     0     0
            gpt/tank10  ONLINE       0     0     0
            gpt/tank7   ONLINE       0     0     0
            gpt/tank8   ONLINE       0     0     0
            gpt/tank4   ONLINE       0     0     0
            gpt/tank1   ONLINE       0     0     0
            gpt/tank2   ONLINE       0     0     0

zdb tank | grep ashift
ashift: 12

CPU: Intel Celeron G530
MB: Gigabyte Z68
RAM: 16 GB
Controller: IBM M1015 (IT firmware)
Disks: 10x Samsung F4 HD204UI

I'm kind of running out of ideas; if someone could point some out, I would be thankful.

Best regards,
zso
zsozso
User

13 posts

Posted on 3 April 2012 @ 05:32
Hmm, this is rather strange; just for fun I gave it another spin:

ZFSguru 0.2.0-beta5 pool benchmark
Pool : tank (18.1T, 27% full)
Test size : 64 GiB
Data source : /dev/zero
Read throughput : 911.2 MB/s = 869 MiB/s
Write throughput: 559.1 MB/s = 533.2 MiB/s

I'm not sure what happened, but it seems the bottleneck is gone.
The_Dave
User

221 posts

Posted on 3 April 2012 @ 12:26
Nice speeds!
zsozso
User

13 posts

Posted on 3 April 2012 @ 13:24
At first I got ~300 MB/s read, which was quite slow compared to the ~800 MB/s in degraded mode.
I'm happy with the speed now.
zsozso
User

13 posts

Posted on 4 April 2012 @ 03:33
OK, now this is strange.

ZFSguru 0.2.0-beta5 pool benchmark
Pool : tank (18.1T, 27% full)
Test size : 64 GiB
Data source : /dev/zero
Read throughput : 577 MB/s = 550.3 MiB/s
Write throughput: 537.7 MB/s = 512.7 MiB/s

I had to reboot, and now I get these results.
I'm pretty confused; I didn't change anything.
zsozso
User

13 posts

Posted on 7 April 2012 @ 22:20
Well, it seems I'm not getting any further in finding a solution.

I've tried almost everything now, recreating the pool in all kinds of ways, tuning and such, but I still cannot reproduce those sweet numbers.

I even tried installing OpenIndiana, but it only brought me bitter disappointment (much slower than BSD).

The only somewhat effective tuning options are the min/max pending and aggregation knobs (see the loader.conf sketch just below).
I've tried making two RAID0 (striped) pools with 5 disks in each just to find out whether there is a bad disk, but it seems the disks are almost identical in speed.
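(To be concrete, the pending/aggregation knobs I mean are the vdev loader tunables below; the values shown are only placeholders for illustration, not settings I'm recommending or the exact ones I used:)

# /boot/loader.conf -- ZFS vdev queue and aggregation tunables (FreeBSD 8/9 era names)
vfs.zfs.vdev.min_pending="4"             # minimum I/Os queued per vdev
vfs.zfs.vdev.max_pending="10"            # maximum I/Os queued per vdev
vfs.zfs.vdev.aggregation_limit="131072"  # largest aggregated I/O, in bytes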

Raid0 pool1:
ZFSguru 0.2.0-beta5 pool benchmark
Pool : storage1 (9.06T, 0% full)
Test size : 64 GiB
Data source : /dev/zero
Read throughput : 551.6 MB/s = 526 MiB/s
Write throughput: 481.7 MB/s = 459.3 MiB/s

Raid0 pool2:
ZFSguru 0.2.0-beta5 pool benchmark
Pool : storage2 (9.06T, 0% full)
Test size : 64 GiB
Data source : /dev/zero
Read throughput : 567.7 MB/s = 541.4 MiB/s
Write throughput: 545.7 MB/s = 520.4 MiB/s

My main problem is the read performance; sometimes the write is faster than the read.
AFAIK the read should be higher than the write when using raidz2.

Now testing RAIDZ2 configuration with 10 disks:
READ: 590 MiB/sec = 590 MiB/sec avg
WRITE: 576 MiB/sec = 576 MiB/sec avg
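(As one more way to rule out a single slow member, a raw sequential read of each disk and a comparison of the rates dd reports should make an outlier obvious; the device names here are placeholders for my disks:)

# read 4 GiB from the start of each member and compare the reported rates
for d in da0 da1 da2 da3 da4 da5 da6 da7 da8 da9; do
    echo "=== $d ==="
    dd if=/dev/$d of=/dev/null bs=1m count=4096
done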

Any ideas would help me greatly,
zsozso
The_Dave
User

221 posts

Posted on 8 April 2012 @ 15:14
Have you checked your CPU utilization? Also, I know the soon-to-be-released ZFSguru system image update, due any day now, will include new drivers for your HBA card. I am looking forward to the new drivers to see if they bring any more speed.
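(Something like this should show whether one core or a kernel thread is pegged while the benchmark runs; as far as I know these are standard FreeBSD top flags:)

# per-CPU usage, with system processes and their threads shown
top -SHP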
zsozso
User

13 posts

Posted on 9 April 2012 @ 09:20
The CPU still has 20-40% idle left. And it was working before at 900+ MB/s.
zsozso
User

13 posts

Posted on 9 April 2012 @ 09:34
Okay, I'm really not sure what is going on.

ZFSguru 0.2.0-beta5 pool benchmark
Pool : tank (18.1T, 15% full)
Test size : 64 GiB
Data source : /dev/zero
Read throughput : 902.8 MB/s = 860.9 MiB/s
Write throughput: 535.4 MB/s = 510.6 MiB/s

I didn't even touch the system for two days, and before that I recreated and tested the array 40-60 times.
The_Dave
User

221 posts

Posted on 9 April 2012 @ 10:42
. <-- Grain of salt... take the benchmarks with that in mind. No matter what, you are easily maxing out multiple GigE connections, so what does it matter in the end, really? Are you running VMs off the storage?
zsozso
User

13 posts

Posted on 9 April 2012 @ 15:15
I'm just trying to get the maximum out of the system; I will be using 10GbE.
I've tried with bonnie as well: 350/700 MB/s.
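(For what it's worth, the run was roughly along these lines, assuming bonnie++ rather than classic bonnie; the dataset path, test size, and user are placeholders:)

# sequential-only bonnie++ run on the pool, skipping per-character and file-creation tests
bonnie++ -d /tank/bench -s 32g -n 0 -f -u root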
aaront
User

75 posts

Posted on 20 May 2013 @ 14:24
Could be an unbalanced pool; check with
zpool iostat -v tank

If it's trying to write to only some of the disks because they have less data on them, you will see the drop in throughput.