DVD_Chef
User

128 posts

Posted on 13 February 2013 @ 11:07
I need to build a box with better random-access read speed than my current RAIDZ2-based setup provides. In this application a lot of raw data is captured that must be processed by multiple operators, but 80% of it will be looked at once and never again (just archived). Because of that access pattern I do not think adding cache drives will speed things up, so I was looking at RAID10 instead.

What is the difference in fault tolerance going from a multiple-vdev RAIDZ2 setup to a RAID10 design? The box allocated for this contains 15 2TB drives, currently configured as a striped pool of two 7-drive RAIDZ2 vdevs plus a spare. That theoretically allows the loss of two drives from each vdev without losing data. With a setup of 7 striped pairs I can only lose one drive from each mirror, but each mirror holds less data, so a single drive failure puts less data in jeopardy. The raw capacity drops from 20TB to 12TB, so one would think the fault tolerance is better, but is that actually true? The working dataset is 7-9TB, so going to 3-disk mirrors would be tight. Would pool performance degrade at 90% capacity if I went with 3-disk mirrors?

From reading what aaront has posted here, RAID10 should give me a significant performance boost over the RAIDZ2 setup.

Thoughts?
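The raw-capacity arithmetic in the question can be sanity-checked with a quick sketch (assumptions: 2 TB is the drives' decimal capacity, one drive is reserved as a spare in both layouts; ZFS tools report binary TiB, which is why 14 TB of mirrors shows up as roughly "12.7T" later in this thread):

```python
# Usable-capacity sketch for the 15-drive box described above.
TB, TIB = 1000**4, 1024**4
DRIVE = 2 * TB  # assumed decimal 2 TB drives

def raidz2_usable(vdevs, width):
    # each RAIDZ2 vdev gives up two drives' worth of space to parity
    return vdevs * (width - 2) * DRIVE

def mirror_usable(pairs):
    # each 2-way mirror contributes a single drive's worth of space
    return pairs * DRIVE

z2 = raidz2_usable(vdevs=2, width=7)   # two 7-drive RAIDZ2 vdevs
m10 = mirror_usable(pairs=7)           # seven striped mirror pairs
print(z2 / TB, m10 / TB)               # 20.0 14.0 (decimal TB)
print(round(m10 / TIB, 1))             # 12.7 (binary TiB, as zpool reports it)
```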
flanigan
User

18 posts

Posted on 19 February 2013 @ 12:32
I'm no expert, but I believe RAIDZ2 will always provide better fault tolerance than RAID10. You touched on it in your post: with RAIDZ2 you can lose any combination of two drives within the vdev and your data will be safe, whereas with RAID10, if you lose the wrong two drives, your data is toast. So RAID10 falls somewhere between RAIDZ1 and RAIDZ2 in terms of fault tolerance.

Of course, RAID10 should offer much better performance, so it's a tradeoff.

You could also go with RAID10 plus offsite storage like Crashplan (google Crashplan on FreeBSD), so that if you did end up losing your pool, you'd have a second copy of the data.
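The "wrong two drives" point can be made concrete with a small enumeration over the 14 data drives (a sketch; the drive numbering and vdev grouping are assumed purely for illustration):

```python
from itertools import combinations

# 14 data drives, grouped two ways: 7 mirror pairs vs. two 7-wide RAIDZ2 vdevs.
mirrors  = [{i, i + 1} for i in range(0, 14, 2)]
z2_vdevs = [set(range(0, 7)), set(range(7, 14))]

def mirrors_survive(failed):
    # a mirror pool dies as soon as both halves of any one pair fail
    return all(len(failed & m) < 2 for m in mirrors)

def z2_survives(failed):
    # each RAIDZ2 vdev tolerates up to two failed members
    return all(len(failed & v) <= 2 for v in z2_vdevs)

two_failures = [set(c) for c in combinations(range(14), 2)]
print(sum(mirrors_survive(f) for f in two_failures), "of", len(two_failures))  # 84 of 91
print(sum(z2_survives(f) for f in two_failures), "of", len(two_failures))      # 91 of 91
```

So the 2x RAIDZ2 layout survives every possible 2-drive failure, while the striped mirrors survive 84 of the 91 possible pairs: the 7 "wrong" pairs are exactly the mirrors themselves.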
aaront
User

75 posts

Posted on 20 February 2013 @ 16:20
I only do RAID10. The performance is so much better than Z or Z2, and rebuild times are tiny. However, I also have hourly, daily, and weekly snapshots, all saved and sent to another setup, so worst case I lose less than 60 minutes of data. NOTHING in ZFS is considered a backup. RAID is not a backup. Always have a backup.
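The snapshot-and-send pattern aaront describes boils down to a periodic snapshot plus an incremental `zfs send` piped to the remote box. A hypothetical sketch of the commands involved (the dataset names, snapshot naming scheme, and host are made-up placeholders, not his actual script):

```python
# Build the shell commands for one replication cycle: snapshot the dataset,
# then send only the delta since the previous snapshot to another machine.
def replication_cmds(dataset, prev_snap, stamp, host="backupbox"):
    new_snap = f"{dataset}@auto-{stamp}"
    target = f"backup/{dataset.split('/')[-1]}"   # assumed receive-side layout
    return [
        f"zfs snapshot {new_snap}",
        f"zfs send -i {prev_snap} {new_snap} | ssh {host} zfs recv -F {target}",
    ]

for cmd in replication_cmds("tank/data", "tank/data@auto-1100", "1200"):
    print(cmd)
```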
DVD_Chef
User

128 posts

Posted on 22 February 2013 @ 11:44
Thanks for the responses guys.

I forgot to mention that the data is replicated to another box with rsync daily, so backup is covered. It is also cached for a few days on the capture devices that feed this box.

I am currently testing the box in the Z2 configuration before destroying the pool and trying the RAID10 setup.

Results to follow.
DVD_Chef
User

128 posts

Posted on 12 August 2013 @ 15:57 (edited 16:02)
Here are the quick benchmark results from the "RAID10" setup, for those that asked.

ZFSguru 0.2.0-beta8 (9.1-006) pool benchmark
Pool : RAID10 (12.7T, 0% full)
Test size : 128 GiB
normal read : 509 MB/s
normal write : 100 MB/s
I/O bandwidth : 7 GB/s

And here is the pool info.

zfsguru.bsd$ zpool status
pool: RAID10
state: ONLINE
scan: none requested
config:

NAME STATE READ WRITE CKSUM
RAID10 ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
label/T8 ONLINE 0 0 0
label/T0 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
label/T1 ONLINE 0 0 0
label/T9 ONLINE 0 0 0
mirror-2 ONLINE 0 0 0
label/T2 ONLINE 0 0 0
label/T10 ONLINE 0 0 0
mirror-3 ONLINE 0 0 0
label/T3 ONLINE 0 0 0
label/T11 ONLINE 0 0 0
mirror-4 ONLINE 0 0 0
label/T4 ONLINE 0 0 0
label/T12 ONLINE 0 0 0
mirror-5 ONLINE 0 0 0
label/T5 ONLINE 0 0 0
label/T13 ONLINE 0 0 0
mirror-6 ONLINE 0 0 0
label/T6 ONLINE 0 0 0
label/T14 ONLINE 0 0 0
logs
gpt/SSD01_Log01 ONLINE 0 0 0
cache
gpt/SSD01_Cache01 ONLINE 0 0 0

errors: No known data errors

DVD_Chef
User

128 posts

Posted on 12 August 2013 @ 17:09
Thought the writes were a little slow, so I moved the drives to a newer 15-bay Supermicro server. Reads are up a little, but writes almost tripled.

ZFSguru 0.2.0-beta8 (9.1-006) pool benchmark
Pool : RAID10 (12.7T, 0% full)
Test size : 128 GiB
normal read : 539 MB/s
normal write : 281 MB/s
I/O bandwidth : 7 GB/s

I think I will use this one instead.
aaront
User

75 posts

Posted on 13 August 2013 @ 17:29
@dvd_chef
I'm working on cleaning up all my scripts for posting, but I can send you what I have if you want to do automatic snapshots/sending to the other box instead of rsync.
DVD_Chef
User

128 posts

Posted on 14 August 2013 @ 17:06
aaront wrote: @dvd_chef
I'm working on cleaning up all my scripts for posting, but I can send you what I have if you want to do automatic snapshots/sending to the other box instead of rsync.

The server this is replacing is not currently running ZFS, so rsync is my best option. I would definitely like to see them when they are ready, as this data will be mirrored to another ZFS box. My goal this year is to upgrade/migrate all 60TB of data here to FreeBSD-based systems running ZFS!
zfsnewbie
User

2 posts

Posted on 8 July 2014 @ 20:54 (edited 20:55)
DVD_Chef wrote: Here are the quick benchmark results from the "RAID10" setup, for those that asked.

ZFSguru 0.2.0-beta8 (9.1-006) pool benchmark
Pool : RAID10 (12.7T, 0% full)
Test size : 128 GiB
normal read : 509 MB/s
normal write : 100 MB/s
I/O bandwidth : 7 GB/s

And here is the pool info.

zfsguru.bsd$ zpool status
pool: RAID10
state: ONLINE
scan: none requested
config:

<<<< Clipped for Brevity >>>>

errors: No known data errors

Hi DVD_Chef,

First, thank you very much for publishing your performance benchmarks (and the later update as well!).

I was wondering, though: these are the 'after' numbers from when you changed from RAIDZ2 to RAID10, and then a second set for RAID10 on newer hardware. Do you have the 'before' numbers, so we can compare the performance of RAIDZ2 versus RAID10 on the same hardware?

Thanks so much.

Hope you have a great day.

-- Newbie
DVD_Chef
User

128 posts

Posted on 15 July 2014 @ 17:57
zfsnewbie

Thanks for the response, and I will dig through my notes to see if I still have that data.

Those systems have actually been replaced again with new 16-bay units from Aberdeen stuffed with 4TB SAS drives, running NexentaStor. Nice software and features, but expensive for 128TB of licenses. I was "forced" by management to replace most of my ZFSguru-based storage boxes with hardware/software from vendors that offer support contracts. At least I got shiny new hardware and an upgrade to enterprise SAS drives.
DVD_Chef
User

128 posts

Posted on 17 July 2014 @ 19:51
Here is the result of the advanced benchmark script.

ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 2 seconds
Sector size override: default (no override)
Number of disks: 14 disks
disk 1: gpt/disk00
disk 2: gpt/disk01
disk 3: gpt/disk03
disk 4: gpt/disk04
disk 5: gpt/disk05
disk 6: gpt/disk08
disk 7: gpt/disk06
disk 8: gpt/disk07
disk 9: gpt/disk09
disk 10: gpt/disk010
disk 11: gpt/disk011
disk 12: gpt/disk012
disk 13: gpt/disk013
disk 14: gpt/disk02


  • Test Settings: TS32;

  • Tuning: none

  • Stopping background processes: sendmail, moused, syslogd and cron

  • Stopping Samba service


  • Now testing RAID0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 537 MiB/sec 542 MiB/sec 543 MiB/sec = 541 MiB/sec avg
    WRITE: 386 MiB/sec 413 MiB/sec 415 MiB/sec = 405 MiB/sec avg

    Now testing RAID0 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
    READ: 544 MiB/sec 546 MiB/sec 544 MiB/sec = 545 MiB/sec avg
    WRITE: 416 MiB/sec 418 MiB/sec 418 MiB/sec = 417 MiB/sec avg

    Now testing RAID0 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
    READ: 548 MiB/sec 550 MiB/sec 548 MiB/sec = 548 MiB/sec avg
    WRITE: 416 MiB/sec 416 MiB/sec 421 MiB/sec = 418 MiB/sec avg

    Now testing RAIDZ configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 335 MiB/sec 334 MiB/sec 338 MiB/sec = 336 MiB/sec avg
    WRITE: 260 MiB/sec 262 MiB/sec 262 MiB/sec = 261 MiB/sec avg

    Now testing RAIDZ configuration with 13 disks: cWmRd@cWmRd@cWmRd@
    READ: 321 MiB/sec 323 MiB/sec 322 MiB/sec = 322 MiB/sec avg
    WRITE: 260 MiB/sec 263 MiB/sec 262 MiB/sec = 262 MiB/sec avg

    Now testing RAIDZ configuration with 14 disks: cWmRd@cWmRd@cWmRd@
    READ: 308 MiB/sec 307 MiB/sec 308 MiB/sec = 308 MiB/sec avg
    WRITE: 263 MiB/sec 261 MiB/sec 258 MiB/sec = 260 MiB/sec avg

    Now testing RAIDZ2 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 331 MiB/sec 332 MiB/sec 332 MiB/sec = 332 MiB/sec avg
    WRITE: 237 MiB/sec 231 MiB/sec 240 MiB/sec = 236 MiB/sec avg

    Now testing RAIDZ2 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
    READ: 335 MiB/sec 334 MiB/sec 333 MiB/sec = 334 MiB/sec avg
    WRITE: 232 MiB/sec 235 MiB/sec 232 MiB/sec = 233 MiB/sec avg

    Now testing RAIDZ2 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
    READ: 323 MiB/sec 320 MiB/sec 321 MiB/sec = 321 MiB/sec avg
    WRITE: 240 MiB/sec 240 MiB/sec 244 MiB/sec = 241 MiB/sec avg

    Now testing RAID1 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 435 MiB/sec 429 MiB/sec 423 MiB/sec = 429 MiB/sec avg
    WRITE: 42 MiB/sec 43 MiB/sec 43 MiB/sec = 43 MiB/sec avg

    Now testing RAID1 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
    READ: 459 MiB/sec 460 MiB/sec 459 MiB/sec = 459 MiB/sec avg
    WRITE: 43 MiB/sec 43 MiB/sec 42 MiB/sec = 43 MiB/sec avg

    Now testing RAID1 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
    READ: 458 MiB/sec 466 MiB/sec 472 MiB/sec = 465 MiB/sec avg
    WRITE: 43 MiB/sec 43 MiB/sec 44 MiB/sec = 43 MiB/sec avg

    Now testing RAID1+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 552 MiB/sec 551 MiB/sec 547 MiB/sec = 550 MiB/sec avg
    WRITE: 295 MiB/sec 317 MiB/sec 299 MiB/sec = 304 MiB/sec avg

    Now testing RAID1+0 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
    READ: 553 MiB/sec 552 MiB/sec 553 MiB/sec = 553 MiB/sec avg
    WRITE: 337 MiB/sec 305 MiB/sec 338 MiB/sec = 327 MiB/sec avg

    Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
    READ: 411 MiB/sec 412 MiB/sec 411 MiB/sec = 411 MiB/sec avg
    WRITE: 242 MiB/sec 241 MiB/sec 237 MiB/sec = 240 MiB/sec avg

    Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 417 MiB/sec 415 MiB/sec 417 MiB/sec = 416 MiB/sec avg
    WRITE: 249 MiB/sec 246 MiB/sec 247 MiB/sec = 247 MiB/sec avg

    Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 361 MiB/sec 363 MiB/sec 356 MiB/sec = 360 MiB/sec avg
    WRITE: 227 MiB/sec 229 MiB/sec 226 MiB/sec = 227 MiB/sec avg

    Now testing RAID0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
    READ: 555 MiB/sec 552 MiB/sec 548 MiB/sec = 552 MiB/sec avg
    WRITE: 407 MiB/sec 373 MiB/sec 373 MiB/sec = 384 MiB/sec avg

    Now testing RAID0 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
    READ: 543 MiB/sec 544 MiB/sec 545 MiB/sec = 544 MiB/sec avg
    WRITE: 425 MiB/sec 423 MiB/sec 421 MiB/sec = 423 MiB/sec avg

    Now testing RAID0 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
    READ: 537 MiB/sec 537 MiB/sec 534 MiB/sec = 536 MiB/sec avg
    WRITE: 424 MiB/sec 424 MiB/sec 425 MiB/sec = 424 MiB/sec avg

    Now testing RAID0 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
    READ: 544 MiB/sec 545 MiB/sec 546 MiB/sec = 545 MiB/sec avg
    WRITE: 422 MiB/sec 426 MiB/sec 424 MiB/sec = 424 MiB/sec avg

    Now testing RAIDZ configuration with 8 disks: cWmRd@cWmRd@cWmRd@
    READ: 353 MiB/sec 354 MiB/sec 350 MiB/sec = 352 MiB/sec avg
    WRITE: 245 MiB/sec 258 MiB/sec 242 MiB/sec = 248 MiB/sec avg

    Now testing RAIDZ configuration with 9 disks: cWmRd@cWmRd@cWmRd@
    READ: 360 MiB/sec 360 MiB/sec 360 MiB/sec = 360 MiB/sec avg
    WRITE: 261 MiB/sec 247 MiB/sec 258 MiB/sec = 255 MiB/sec avg

    Now testing RAIDZ configuration with 10 disks: cWmRd@cWmRd@cWmRd@
    READ: 346 MiB/sec 344 MiB/sec 346 MiB/sec = 345 MiB/sec avg
    WRITE: 262 MiB/sec 247 MiB/sec 255 MiB/sec = 255 MiB/sec avg

    Now testing RAIDZ configuration with 11 disks: cWmRd@cWmRd@cWmRd@
    READ: 337 MiB/sec 342 MiB/sec 335 MiB/sec = 338 MiB/sec avg
    WRITE: 257 MiB/sec 267 MiB/sec 264 MiB/sec = 262 MiB/sec avg

    Now testing RAIDZ2 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
    READ: 393 MiB/sec 391 MiB/sec 388 MiB/sec = 391 MiB/sec avg
    WRITE: 190 MiB/sec 179 MiB/sec 183 MiB/sec = 184 MiB/sec avg

    Now testing RAIDZ2 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
    READ: 368 MiB/sec 372 MiB/sec 379 MiB/sec = 373 MiB/sec avg
    WRITE: 226 MiB/sec 212 MiB/sec 218 MiB/sec = 219 MiB/sec avg

    Now testing RAIDZ2 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
    READ: 350 MiB/sec 349 MiB/sec 353 MiB/sec = 351 MiB/sec avg
    WRITE: 241 MiB/sec 235 MiB/sec 234 MiB/sec = 237 MiB/sec avg

    Now testing RAIDZ2 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
    READ: 350 MiB/sec 350 MiB/sec 350 MiB/sec = 350 MiB/sec avg
    WRITE: 230 MiB/sec 240 MiB/sec 235 MiB/sec = 235 MiB/sec avg

    Now testing RAID1 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
    READ: 414 MiB/sec 411 MiB/sec 417 MiB/sec = 414 MiB/sec avg
    WRITE: 48 MiB/sec 51 MiB/sec 55 MiB/sec = 51 MiB/sec avg

    Now testing RAID1 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
    READ: 384 MiB/sec 381 MiB/sec 384 MiB/sec = 383 MiB/sec avg
    WRITE: 50 MiB/sec 51 MiB/sec 53 MiB/sec = 51 MiB/sec avg

    Now testing RAID1 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
    READ: 386 MiB/sec 386 MiB/sec 381 MiB/sec = 385 MiB/sec avg
    WRITE: 49 MiB/sec 49 MiB/sec 54 MiB/sec = 51 MiB/sec avg

    Now testing RAID1 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
    READ: 375 MiB/sec 386 MiB/sec 380 MiB/sec = 380 MiB/sec avg
    WRITE: 51 MiB/sec 52 MiB/sec 54 MiB/sec = 52 MiB/sec avg

    Now testing RAID1+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
    READ: 520 MiB/sec 523 MiB/sec 520 MiB/sec = 521 MiB/sec avg
    WRITE: 254 MiB/sec 240 MiB/sec 254 MiB/sec = 249 MiB/sec avg

    Now testing RAID1+0 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
    READ: 533 MiB/sec 533 MiB/sec 533 MiB/sec = 533 MiB/sec avg
    WRITE: 302 MiB/sec 301 MiB/sec 297 MiB/sec = 300 MiB/sec avg

    Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
    READ: 412 MiB/sec 409 MiB/sec 409 MiB/sec = 410 MiB/sec avg
    WRITE: 247 MiB/sec 242 MiB/sec 244 MiB/sec = 244 MiB/sec avg

    Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 417 MiB/sec 415 MiB/sec 416 MiB/sec = 416 MiB/sec avg
    WRITE: 246 MiB/sec 246 MiB/sec 246 MiB/sec = 246 MiB/sec avg

    Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 365 MiB/sec 367 MiB/sec 364 MiB/sec = 365 MiB/sec avg
    WRITE: 229 MiB/sec 227 MiB/sec 228 MiB/sec = 228 MiB/sec avg

    Now testing RAID0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
    READ: 499 MiB/sec 496 MiB/sec 511 MiB/sec = 502 MiB/sec avg
    WRITE: 369 MiB/sec 390 MiB/sec 406 MiB/sec = 388 MiB/sec avg

    Now testing RAID0 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
    READ: 560 MiB/sec 561 MiB/sec 567 MiB/sec = 563 MiB/sec avg
    WRITE: 331 MiB/sec 311 MiB/sec 325 MiB/sec = 322 MiB/sec avg

    Now testing RAID0 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
    READ: 561 MiB/sec 566 MiB/sec 568 MiB/sec = 565 MiB/sec avg
    WRITE: 371 MiB/sec 361 MiB/sec 335 MiB/sec = 356 MiB/sec avg

    Now testing RAID0 configuration with 7 disks: cWmRd@cWmRd@cWmRd@
    READ: 557 MiB/sec 558 MiB/sec 559 MiB/sec = 558 MiB/sec avg
    WRITE: 365 MiB/sec 363 MiB/sec 386 MiB/sec = 372 MiB/sec avg

    Now testing RAIDZ configuration with 4 disks: cWmRd@cWmRd@cWmRd@
    READ: 354 MiB/sec 353 MiB/sec 354 MiB/sec = 354 MiB/sec avg
    WRITE: 214 MiB/sec 221 MiB/sec 208 MiB/sec = 214 MiB/sec avg

    Now testing RAIDZ configuration with 5 disks: cWmRd@cWmRd@cWmRd@
    READ: 379 MiB/sec 374 MiB/sec 380 MiB/sec = 377 MiB/sec avg
    WRITE: 176 MiB/sec 191 MiB/sec 168 MiB/sec = 178 MiB/sec avg

    Now testing RAIDZ configuration with 6 disks: cWmRd@cWmRd@cWmRd@
    READ: 374 MiB/sec 376 MiB/sec 374 MiB/sec = 374 MiB/sec avg
    WRITE: 153 MiB/sec 141 MiB/sec 160 MiB/sec = 151 MiB/sec avg

    Now testing RAIDZ configuration with 7 disks: cWmRd@cWmRd@cWmRd@
    READ: 353 MiB/sec 352 MiB/sec 351 MiB/sec = 352 MiB/sec avg
    WRITE: 251 MiB/sec 241 MiB/sec 247 MiB/sec = 246 MiB/sec avg

    Now testing RAIDZ2 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
    READ: 264 MiB/sec 268 MiB/sec 269 MiB/sec = 267 MiB/sec avg
    WRITE: 113 MiB/sec 113 MiB/sec 112 MiB/sec = 113 MiB/sec avg

    Now testing RAIDZ2 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
    READ: 337 MiB/sec 333 MiB/sec 340 MiB/sec = 337 MiB/sec avg
    WRITE: 127 MiB/sec 128 MiB/sec 133 MiB/sec = 129 MiB/sec avg

    Now testing RAIDZ2 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
    READ: 367 MiB/sec 366 MiB/sec 368 MiB/sec = 367 MiB/sec avg
    WRITE: 171 MiB/sec 174 MiB/sec 163 MiB/sec = 169 MiB/sec avg

    Now testing RAIDZ2 configuration with 7 disks: cWmRd@cWmRd@cWmRd@
    READ: 353 MiB/sec 350 MiB/sec 353 MiB/sec = 352 MiB/sec avg
    WRITE: 167 MiB/sec 185 MiB/sec 217 MiB/sec = 190 MiB/sec avg

    Now testing RAID1 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
    READ: 318 MiB/sec 320 MiB/sec 323 MiB/sec = 320 MiB/sec avg
    WRITE: 94 MiB/sec 97 MiB/sec 109 MiB/sec = 100 MiB/sec avg

    Now testing RAID1 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
    READ: 346 MiB/sec 338 MiB/sec 333 MiB/sec = 339 MiB/sec avg
    WRITE: 53 MiB/sec 64 MiB/sec 54 MiB/sec = 57 MiB/sec avg

    Now testing RAID1 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
    READ: 395 MiB/sec 397 MiB/sec 399 MiB/sec = 397 MiB/sec avg
    WRITE: 67 MiB/sec 53 MiB/sec 64 MiB/sec = 61 MiB/sec avg

    Now testing RAID1 configuration with 7 disks: cWmRd@cWmRd@cWmRd@
    READ: 447 MiB/sec 445 MiB/sec 450 MiB/sec = 447 MiB/sec avg
    WRITE: 54 MiB/sec 65 MiB/sec 54 MiB/sec = 58 MiB/sec avg

    Now testing RAID1+0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
    READ: 365 MiB/sec 360 MiB/sec 360 MiB/sec = 361 MiB/sec avg
    WRITE: 242 MiB/sec 229 MiB/sec 222 MiB/sec = 231 MiB/sec avg

    Now testing RAID1+0 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
    READ: 479 MiB/sec 479 MiB/sec 479 MiB/sec = 479 MiB/sec avg
    WRITE: 194 MiB/sec 192 MiB/sec 235 MiB/sec = 207 MiB/sec avg

    Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
    READ: 408 MiB/sec 408 MiB/sec 409 MiB/sec = 408 MiB/sec avg
    WRITE: 246 MiB/sec 243 MiB/sec 243 MiB/sec = 244 MiB/sec avg

    Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 418 MiB/sec 416 MiB/sec 417 MiB/sec = 417 MiB/sec avg
    WRITE: 245 MiB/sec 246 MiB/sec 246 MiB/sec = 246 MiB/sec avg

    Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 364 MiB/sec 363 MiB/sec 365 MiB/sec = 364 MiB/sec avg
    WRITE: 229 MiB/sec 228 MiB/sec 228 MiB/sec = 228 MiB/sec avg

    Now testing RAID0 configuration with 1 disks: cWmRd@cWmRd@cWmRd@
    READ: 137 MiB/sec 137 MiB/sec 136 MiB/sec = 137 MiB/sec avg
    WRITE: 102 MiB/sec 113 MiB/sec 110 MiB/sec = 108 MiB/sec avg

    Now testing RAID0 configuration with 2 disks: cWmRd@cWmRd@cWmRd@
    READ: 272 MiB/sec 274 MiB/sec 269 MiB/sec = 272 MiB/sec avg
    WRITE: 232 MiB/sec 228 MiB/sec 230 MiB/sec = 230 MiB/sec avg

    Now testing RAID0 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
    READ: 398 MiB/sec 397 MiB/sec 393 MiB/sec = 396 MiB/sec avg
    WRITE: 349 MiB/sec 330 MiB/sec 336 MiB/sec = 338 MiB/sec avg

    Now testing RAIDZ configuration with 2 disks: cWmRd@cWmRd@cWmRd@
    READ: 139 MiB/sec 138 MiB/sec 138 MiB/sec = 139 MiB/sec avg
    WRITE: 96 MiB/sec 106 MiB/sec 101 MiB/sec = 101 MiB/sec avg

    Now testing RAIDZ configuration with 3 disks: cWmRd@cWmRd@cWmRd@
    READ: 259 MiB/sec 257 MiB/sec 260 MiB/sec = 259 MiB/sec avg
    WRITE: 157 MiB/sec 152 MiB/sec 141 MiB/sec = 150 MiB/sec avg

    Now testing RAIDZ2 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
    READ: 139 MiB/sec 136 MiB/sec 138 MiB/sec = 138 MiB/sec avg
    WRITE: 103 MiB/sec 96 MiB/sec 102 MiB/sec = 100 MiB/sec avg

    Now testing RAID1 configuration with 2 disks: cWmRd@cWmRd@cWmRd@
    READ: 197 MiB/sec 198 MiB/sec 196 MiB/sec = 197 MiB/sec avg
    WRITE: 98 MiB/sec 105 MiB/sec 104 MiB/sec = 102 MiB/sec avg

    Now testing RAID1 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
    READ: 263 MiB/sec 264 MiB/sec 259 MiB/sec = 262 MiB/sec avg
    WRITE: 118 MiB/sec 101 MiB/sec 104 MiB/sec = 108 MiB/sec avg

    Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
    READ: 407 MiB/sec 405 MiB/sec 406 MiB/sec = 406 MiB/sec avg
    WRITE: 247 MiB/sec 247 MiB/sec 246 MiB/sec = 247 MiB/sec avg

    Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 416 MiB/sec 416 MiB/sec 417 MiB/sec = 416 MiB/sec avg
    WRITE: 246 MiB/sec 245 MiB/sec 246 MiB/sec = 246 MiB/sec avg

    Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 364 MiB/sec 367 MiB/sec 364 MiB/sec = 365 MiB/sec avg
    WRITE: 230 MiB/sec 228 MiB/sec 227 MiB/sec = 228 MiB/sec avg

    Done
DVD_Chef
User

128 posts

Posted on 17 July 2014 @ 20:14
And here are the graphs:

https://www.dropbox.com/s/v82cffkprqqriqg/AdvancedBench_seqread.png

https://www.dropbox.com/s/mv0wk34r0g1tnoi/AdvancedBench_seqwrite.png
CiPHER
Developer

1199 posts

Posted on 17 July 2014 @ 21:21
Hi DVD_Chef,

Thank you for the benchmark graphs and for posting them here!

Can you please tell me what kind of hardware you are using?
DVD_Chef
User

128 posts

Posted on 17 July 2014 @ 21:40
It is an older 15-bay Supermicro server that originally was a Data Domain DD460 restorer. It has dual 64-bit quad-core 3.20GHz Intel Xeon CPUs with 16GB of memory. The storage pool is 14 2TB Hitachi 7200rpm drives (HDS723020BLA642) spread equally across two Supermicro SAT2-MV8 PCI-X HBAs. The OS drive is a 500GB notebook drive attached to the motherboard SATA connector.
CiPHER
Developer

1199 posts

Posted on 17 July 2014 @ 22:25
I see. I was somewhat disappointed by the low scores and poor scaling as the number of disks grew, but I am pretty sure it is the controller: PCI-X is not that good, and Marvell is not that good.

If you used modern PCI-Express controllers with the same CPU and memory, I think you would get much better scores, or at least better scaling as the disk count goes up.

Right now it just hits a roof where the latencies climb. You can see this very clearly with RAID-Z2, which is very sensitive to latency, but also with the RAID1 mirror, which takes a dive. With similarly performing disks on a good controller, the RAID1/mirror scaling should be a horizontal line; if it drops, it means the latencies grow as the disk count increases.
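The "roof" described here can be pictured with a toy model: striped reads scale with disk count until the shared PCI-X bus caps aggregate throughput. The 137 MiB/s single-disk figure and the ~550 MiB/s ceiling are read off the benchmark above; the flat-roof model itself is a simplification that ignores the growing latencies.

```python
# Toy bus-ceiling model: aggregate striped-read throughput grows linearly
# with disk count until it saturates the shared controller/bus bandwidth.
def stripe_read(n_disks, per_disk=137, bus_cap=550):
    return min(n_disks * per_disk, bus_cap)

for n in (1, 2, 3, 4, 8, 14):
    print(n, stripe_read(n), "MiB/s")
```

The model tracks the measured RAID0 numbers reasonably well: near-linear up to 3-4 disks, then flat around 550 MiB/s no matter how many disks are added.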
DVD_Chef
User

128 posts

Posted on 18 July 2014 @ 17:28
Unfortunately PCI-X 133 is the best that motherboard can do, as it has no PCI-E slots. Just for completeness I fired off the benchmark again to run the random read/write test section. I will add those results to the thread when finished.
CiPHER
Developer

1199 posts

Posted on 18 July 2014 @ 23:01
Great! :)
DVD_Chef
User

128 posts

Posted on 21 July 2014 @ 22:09
Here are the results of the Advanced Bench random R/W tests.

https://www.dropbox.com/s/4n48j3s3iab252s/AdvancedBench_RandRead.png

https://www.dropbox.com/s/g1w72imgtfig03m/AdvancedBench_RandWrite.png

https://www.dropbox.com/s/3gnc95uneumvaq5/AdvancedBench_RandReadWrite.png


ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 2
Cooldown period: 2 seconds
Sector size override: default (no override)
Number of disks: 14 disks
disk 1: gpt/disk00
disk 2: gpt/disk01
disk 3: gpt/disk03
disk 4: gpt/disk04
disk 5: gpt/disk05
disk 6: gpt/disk08
disk 7: gpt/disk06
disk 8: gpt/disk07
disk 9: gpt/disk09
disk 10: gpt/disk010
disk 11: gpt/disk011
disk 12: gpt/disk012
disk 13: gpt/disk013
disk 14: gpt/disk02


  • Test Settings: TS32; TR2; SC1;

  • Tuning: none

  • Stopping background processes: sendmail, moused, syslogd and cron

  • Stopping Samba service


  • Now testing RAID0 configuration with 12 disks: czmId@czmId@
    raidtest.read: 471 474 = 472 IOps ( ~30 MiB/sec )
    raidtest.write: 493 476 = 484 IOps ( ~31 MiB/sec )
    raidtest.mixed: 493 499 = 496 IOps ( ~31 MiB/sec )

    Now testing RAID0 configuration with 13 disks: czmId@czmId@
    raidtest.read: 486 500 = 493 IOps ( ~31 MiB/sec )
    raidtest.write: 496 497 = 496 IOps ( ~31 MiB/sec )
    raidtest.mixed: 501 511 = 506 IOps ( ~32 MiB/sec )

    Now testing RAID0 configuration with 14 disks: czmId@czmId@
    raidtest.read: 517 519 = 518 IOps ( ~33 MiB/sec )
    raidtest.write: 513 517 = 515 IOps ( ~33 MiB/sec )
    raidtest.mixed: 517 524 = 520 IOps ( ~33 MiB/sec )

    Now testing RAIDZ configuration with 12 disks: czmId@czmId@
    raidtest.read: 84 84 = 84 IOps ( ~5544 KiB/sec )
    raidtest.write: 110 110 = 110 IOps ( ~7260 KiB/sec )
    raidtest.mixed: 114 115 = 114 IOps ( ~7524 KiB/sec )

    Now testing RAIDZ configuration with 13 disks: czmId@czmId@
    raidtest.read: 66 85 = 75 IOps ( ~4950 KiB/sec )
    raidtest.write: 92 108 = 100 IOps ( ~6600 KiB/sec )
    raidtest.mixed: 94 113 = 103 IOps ( ~6798 KiB/sec )

    Now testing RAIDZ configuration with 14 disks: czmId@czmId@
    raidtest.read: 84 83 = 83 IOps ( ~5478 KiB/sec )
    raidtest.write: 106 106 = 106 IOps ( ~6996 KiB/sec )
    raidtest.mixed: 110 111 = 110 IOps ( ~7260 KiB/sec )

    Now testing RAIDZ2 configuration with 12 disks: czmId@czmId@
    raidtest.read: 87 89 = 88 IOps ( ~5808 KiB/sec )
    raidtest.write: 114 115 = 114 IOps ( ~7524 KiB/sec )
    raidtest.mixed: 117 118 = 117 IOps ( ~7722 KiB/sec )

    Now testing RAIDZ2 configuration with 13 disks: czmId@czmId@
    raidtest.read: 74 86 = 80 IOps ( ~5280 KiB/sec )
    raidtest.write: 101 113 = 107 IOps ( ~7062 KiB/sec )
    raidtest.mixed: 108 117 = 112 IOps ( ~7392 KiB/sec )

    Now testing RAIDZ2 configuration with 14 disks: czmId@czmId@
    raidtest.read: 86 84 = 85 IOps ( ~5610 KiB/sec )
    raidtest.write: 110 110 = 110 IOps ( ~7260 KiB/sec )
    raidtest.mixed: 117 117 = 117 IOps ( ~7722 KiB/sec )

    Now testing RAID1 configuration with 12 disks: czmId@czmId@
    raidtest.read: 440 452 = 446 IOps ( ~28 MiB/sec )
    raidtest.write: 263 270 = 266 IOps ( ~17 MiB/sec )
    raidtest.mixed: 260 260 = 260 IOps ( ~16 MiB/sec )

    Now testing RAID1 configuration with 13 disks: czmId@czmId@
    raidtest.read: 432 477 = 454 IOps ( ~29 MiB/sec )
    raidtest.write: 259 292 = 275 IOps ( ~17 MiB/sec )
    raidtest.mixed: 260 296 = 278 IOps ( ~17 MiB/sec )

    Now testing RAID1 configuration with 14 disks: czmId@czmId@
    raidtest.read: 489 500 = 494 IOps ( ~31 MiB/sec )
    raidtest.write: 294 259 = 276 IOps ( ~17 MiB/sec )
    raidtest.mixed: 295 274 = 284 IOps ( ~18 MiB/sec )

    Now testing RAID1+0 configuration with 12 disks: czmId@czmId@
    raidtest.read: 515 530 = 522 IOps ( ~33 MiB/sec )
    raidtest.write: 488 484 = 486 IOps ( ~31 MiB/sec )
    raidtest.mixed: 495 494 = 494 IOps ( ~31 MiB/sec )

    Now testing RAID1+0 configuration with 14 disks: czmId@czmId@
    raidtest.read: 585 569 = 577 IOps ( ~37 MiB/sec )
    raidtest.write: 512 515 = 513 IOps ( ~33 MiB/sec )
    raidtest.mixed: 519 516 = 517 IOps ( ~33 MiB/sec )

    Now testing RAIDZ+0 configuration with 8 disks: czmId@czmId@
    raidtest.read: 146 148 = 147 IOps ( ~9702 KiB/sec )
    raidtest.write: 182 179 = 180 IOps ( ~11 MiB/sec )
    raidtest.mixed: 179 181 = 180 IOps ( ~11 MiB/sec )

    Now testing RAIDZ+0 configuration with 12 disks: czmId@czmId@
    raidtest.read: 193 192 = 192 IOps ( ~12 MiB/sec )
    raidtest.write: 234 235 = 234 IOps ( ~15 MiB/sec )
    raidtest.mixed: 240 235 = 237 IOps ( ~15 MiB/sec )

    Now testing RAIDZ2+0 configuration with 12 disks: czmId@czmId@
    raidtest.read: 186 179 = 182 IOps ( ~11 MiB/sec )
    raidtest.write: 210 205 = 207 IOps ( ~13 MiB/sec )
    raidtest.mixed: 209 206 = 207 IOps ( ~13 MiB/sec )

    Now testing RAID0 configuration with 8 disks: czmId@czmId@
    raidtest.read: 368 360 = 364 IOps ( ~23 MiB/sec )
    raidtest.write: 388 386 = 387 IOps ( ~24 MiB/sec )
    raidtest.mixed: 387 400 = 393 IOps ( ~25 MiB/sec )

    Now testing RAID0 configuration with 9 disks: czmId@czmId@
    raidtest.read: 399 403 = 401 IOps ( ~25 MiB/sec )
    raidtest.write: 403 419 = 411 IOps ( ~26 MiB/sec )
    raidtest.mixed: 422 418 = 420 IOps ( ~27 MiB/sec )

    Now testing RAID0 configuration with 10 disks: czmId@czmId@
    raidtest.read: 434 416 = 425 IOps ( ~27 MiB/sec )
    raidtest.write: 446 446 = 446 IOps ( ~28 MiB/sec )
    raidtest.mixed: 451 435 = 443 IOps ( ~28 MiB/sec )

    Now testing RAID0 configuration with 11 disks: czmId@czmId@
    raidtest.read: 454 466 = 460 IOps ( ~29 MiB/sec )
    raidtest.write: 466 474 = 470 IOps ( ~30 MiB/sec )
    raidtest.mixed: 472 461 = 466 IOps ( ~30 MiB/sec )

    Now testing RAIDZ configuration with 8 disks: czmId@czmId@
    raidtest.read: 92 90 = 91 IOps ( ~6006 KiB/sec )
    raidtest.write: 112 112 = 112 IOps ( ~7392 KiB/sec )
    raidtest.mixed: 115 115 = 115 IOps ( ~7590 KiB/sec )

    Now testing RAIDZ configuration with 9 disks: czmId@czmId@
    raidtest.read: 92 94 = 93 IOps ( ~6138 KiB/sec )
    raidtest.write: 114 114 = 114 IOps ( ~7524 KiB/sec )
    raidtest.mixed: 119 118 = 118 IOps ( ~7788 KiB/sec )

    Now testing RAIDZ configuration with 10 disks: czmId@czmId@
    raidtest.read: 87 88 = 87 IOps ( ~5742 KiB/sec )
    raidtest.write: 108 109 = 108 IOps ( ~7128 KiB/sec )
    raidtest.mixed: 114 115 = 114 IOps ( ~7524 KiB/sec )

    Now testing RAIDZ configuration with 11 disks: czmId@czmId@
    raidtest.read: 72 87 = 79 IOps ( ~5214 KiB/sec )
    raidtest.write: 95 110 = 102 IOps ( ~6732 KiB/sec )
    raidtest.mixed: 97 116 = 106 IOps ( ~6996 KiB/sec )

    Now testing RAIDZ2 configuration with 8 disks: czmId@czmId@
    raidtest.read: 110 111 = 110 IOps ( ~7260 KiB/sec )
    raidtest.write: 133 135 = 134 IOps ( ~8844 KiB/sec )
    raidtest.mixed: 137 139 = 138 IOps ( ~9108 KiB/sec )

    Now testing RAIDZ2 configuration with 9 disks: czmId@czmId@
    raidtest.read: 92 93 = 92 IOps ( ~6072 KiB/sec )
    raidtest.write: 122 122 = 122 IOps ( ~8052 KiB/sec )
    raidtest.mixed: 125 123 = 124 IOps ( ~8184 KiB/sec )

    Now testing RAIDZ2 configuration with 10 disks: czmId@czmId@
    raidtest.read: 93 94 = 93 IOps ( ~6138 KiB/sec )
    raidtest.write: 120 116 = 118 IOps ( ~7788 KiB/sec )
    raidtest.mixed: 126 124 = 125 IOps ( ~8250 KiB/sec )

    Now testing RAIDZ2 configuration with 11 disks: czmId@czmId@
    raidtest.read: 91 92 = 91 IOps ( ~6006 KiB/sec )
    raidtest.write: 117 117 = 117 IOps ( ~7722 KiB/sec )
    raidtest.mixed: 120 120 = 120 IOps ( ~7920 KiB/sec )

    Now testing RAID1 configuration with 8 disks: czmId@czmId@
    raidtest.read: 339 347 = 343 IOps ( ~22 MiB/sec )
    raidtest.write: 217 227 = 222 IOps ( ~14 MiB/sec )
    raidtest.mixed: 233 252 = 242 IOps ( ~15 MiB/sec )

    Now testing RAID1 configuration with 9 disks: czmId@czmId@
    raidtest.read: 358 318 = 338 IOps ( ~21 MiB/sec )
    raidtest.write: 237 232 = 234 IOps ( ~15 MiB/sec )
    raidtest.mixed: 250 236 = 243 IOps ( ~15 MiB/sec )

    Now testing RAID1 configuration with 10 disks: czmId@czmId@
    raidtest.read: 331 349 = 340 IOps ( ~21 MiB/sec )
    raidtest.write: 236 234 = 235 IOps ( ~15 MiB/sec )
    raidtest.mixed: 248 230 = 239 IOps ( ~15 MiB/sec )

    Now testing RAID1 configuration with 11 disks: czmId@czmId@
    raidtest.read: 410 396 = 403 IOps ( ~25 MiB/sec )
    raidtest.write: 266 231 = 248 IOps ( ~15 MiB/sec )
    raidtest.mixed: 261 238 = 249 IOps ( ~16 MiB/sec )

    Now testing RAID1+0 configuration with 8 disks: czmId@czmId@
    raidtest.read: 380 389 = 384 IOps ( ~24 MiB/sec )
    raidtest.write: 381 388 = 384 IOps ( ~24 MiB/sec )
    raidtest.mixed: 385 385 = 385 IOps ( ~24 MiB/sec )

    Now testing RAID1+0 configuration with 10 disks: czmId@czmId@
    raidtest.read: 457 460 = 458 IOps ( ~29 MiB/sec )
    raidtest.write: 437 445 = 441 IOps ( ~28 MiB/sec )
    raidtest.mixed: 439 440 = 439 IOps ( ~28 MiB/sec )

    Now testing RAIDZ+0 configuration with 8 disks: czmId@czmId@
    raidtest.read: 140 140 = 140 IOps ( ~9240 KiB/sec )
    raidtest.write: 181 179 = 180 IOps ( ~11 MiB/sec )
    raidtest.mixed: 180 179 = 179 IOps ( ~11 MiB/sec )

    Now testing RAIDZ+0 configuration with 12 disks: czmId@czmId@
    raidtest.read: 194 192 = 193 IOps ( ~12 MiB/sec )
    raidtest.write: 231 233 = 232 IOps ( ~14 MiB/sec )
    raidtest.mixed: 230 237 = 233 IOps ( ~15 MiB/sec )

    Now testing RAIDZ2+0 configuration with 12 disks: czmId@czmId@
    raidtest.read: 177 180 = 178 IOps ( ~11 MiB/sec )
    raidtest.write: 205 205 = 205 IOps ( ~13 MiB/sec )
    raidtest.mixed: 203 206 = 204 IOps ( ~13 MiB/sec )

    Now testing RAID0 configuration with 4 disks: czmId@czmId@
    raidtest.read: 207 210 = 208 IOps ( ~13 MiB/sec )
    raidtest.write: 249 253 = 251 IOps ( ~16 MiB/sec )
    raidtest.mixed: 257 257 = 257 IOps ( ~16 MiB/sec )

    Now testing RAID0 configuration with 5 disks: czmId@czmId@
    raidtest.read: 257 255 = 256 IOps ( ~16 MiB/sec )
    raidtest.write: 277 287 = 282 IOps ( ~18 MiB/sec )
    raidtest.mixed: 288 293 = 290 IOps ( ~18 MiB/sec )

    Now testing RAID0 configuration with 6 disks: czmId@czmId@
    raidtest.read: 293 290 = 291 IOps ( ~18 MiB/sec )
    raidtest.write: 326 313 = 319 IOps ( ~20 MiB/sec )
    raidtest.mixed: 330 327 = 328 IOps ( ~21 MiB/sec )

    Now testing RAID0 configuration with 7 disks: czmId@czmId@
    raidtest.read: 316 316 = 316 IOps ( ~20 MiB/sec )
    raidtest.write: 361 352 = 356 IOps ( ~22 MiB/sec )
    raidtest.mixed: 360 360 = 360 IOps ( ~23 MiB/sec )

    Now testing RAIDZ configuration with 4 disks: czmId@czmId@
    raidtest.read: 90 92 = 91 IOps ( ~6006 KiB/sec )
    raidtest.write: 116 116 = 116 IOps ( ~7656 KiB/sec )
    raidtest.mixed: 115 114 = 114 IOps ( ~7524 KiB/sec )

    Now testing RAIDZ configuration with 5 disks: czmId@czmId@
    raidtest.read: 104 105 = 104 IOps ( ~6864 KiB/sec )
    raidtest.write: 125 127 = 126 IOps ( ~8316 KiB/sec )
    raidtest.mixed: 125 127 = 126 IOps ( ~8316 KiB/sec )

    Now testing RAIDZ configuration with 6 disks: czmId@czmId@
    raidtest.read: 87 89 = 88 IOps ( ~5808 KiB/sec )
    raidtest.write: 108 105 = 106 IOps ( ~6996 KiB/sec )
    raidtest.mixed: 109 108 = 108 IOps ( ~7128 KiB/sec )

    Now testing RAIDZ configuration with 7 disks: czmId@czmId@
    raidtest.read: 91 92 = 91 IOps ( ~6006 KiB/sec )
    raidtest.write: 118 116 = 117 IOps ( ~7722 KiB/sec )
    raidtest.mixed: 121 117 = 119 IOps ( ~7854 KiB/sec )

    Now testing RAIDZ2 configuration with 4 disks: czmId@czmId@
    raidtest.read: 94 94 = 94 IOps ( ~6204 KiB/sec )
    raidtest.write: 138 138 = 138 IOps ( ~9108 KiB/sec )
    raidtest.mixed: 138 140 = 139 IOps ( ~9174 KiB/sec )

    Now testing RAIDZ2 configuration with 5 disks: czmId@czmId@
    raidtest.read: 132 129 = 130 IOps ( ~8580 KiB/sec )
    raidtest.write: 149 147 = 148 IOps ( ~9768 KiB/sec )
    raidtest.mixed: 143 145 = 144 IOps ( ~9504 KiB/sec )

    Now testing RAIDZ2 configuration with 6 disks: czmId@czmId@
    raidtest.read: 98 131 = 114 IOps ( ~7524 KiB/sec )
    raidtest.write: 126 151 = 138 IOps ( ~9108 KiB/sec )
    raidtest.mixed: 126 152 = 139 IOps ( ~9174 KiB/sec )

    Now testing RAIDZ2 configuration with 7 disks: czmId@czmId@
    raidtest.read: 82 98 = 90 IOps ( ~5940 KiB/sec )
    raidtest.write: 115 127 = 121 IOps ( ~7986 KiB/sec )
    raidtest.mixed: 116 129 = 122 IOps ( ~8052 KiB/sec )

    Now testing RAID1 configuration with 4 disks: czmId@czmId@
    raidtest.read: 223 175 = 199 IOps ( ~12 MiB/sec )
    raidtest.write: 210 184 = 197 IOps ( ~12 MiB/sec )
    raidtest.mixed: 215 186 = 200 IOps ( ~12 MiB/sec )

    Now testing RAID1 configuration with 5 disks: czmId@czmId@
    raidtest.read: 267 213 = 240 IOps ( ~15 MiB/sec )
    raidtest.write: 208 187 = 197 IOps ( ~12 MiB/sec )
    raidtest.mixed: 207 182 = 194 IOps ( ~12 MiB/sec )

    Now testing RAID1 configuration with 6 disks: czmId@czmId@
    raidtest.read: 298 247 = 272 IOps ( ~17 MiB/sec )
    raidtest.write: 224 189 = 206 IOps ( ~13 MiB/sec )
    raidtest.mixed: 220 193 = 206 IOps ( ~13 MiB/sec )

    Now testing RAID1 configuration with 7 disks: czmId@czmId@
    raidtest.read: 290 337 = 313 IOps ( ~20 MiB/sec )
    raidtest.write: 208 245 = 226 IOps ( ~14 MiB/sec )
    raidtest.mixed: 213 233 = 223 IOps ( ~14 MiB/sec )

    Now testing RAID1+0 configuration with 4 disks: czmId@czmId@
    raidtest.read: 233 205 = 219 IOps ( ~14 MiB/sec )
    raidtest.write: 255 235 = 245 IOps ( ~15 MiB/sec )
    raidtest.mixed: 259 231 = 245 IOps ( ~15 MiB/sec )

    Now testing RAID1+0 configuration with 6 disks: czmId@czmId@
    raidtest.read: 298 312 = 305 IOps ( ~19 MiB/sec )
    raidtest.write: 313 311 = 312 IOps ( ~20 MiB/sec )
    raidtest.mixed: 319 306 = 312 IOps ( ~20 MiB/sec )

    Now testing RAIDZ+0 configuration with 8 disks: czmId@czmId@
    raidtest.read: 140 141 = 140 IOps ( ~9240 KiB/sec )
    raidtest.write: 174 179 = 176 IOps ( ~11 MiB/sec )
    raidtest.mixed: 178 180 = 179 IOps ( ~11 MiB/sec )

    Now testing RAIDZ+0 configuration with 12 disks: czmId@czmId@
    raidtest.read: 193 192 = 192 IOps ( ~12 MiB/sec )
    raidtest.write: 233 234 = 233 IOps ( ~15 MiB/sec )
    raidtest.mixed: 237 235 = 236 IOps ( ~15 MiB/sec )

    Now testing RAIDZ2+0 configuration with 12 disks: czmId@czmId@
    raidtest.read: 155 186 = 170 IOps ( ~10 MiB/sec )
    raidtest.write: 189 206 = 197 IOps ( ~12 MiB/sec )
    raidtest.mixed: 197 203 = 200 IOps ( ~12 MiB/sec )

    Now testing RAID0 configuration with 1 disks: czmId@czmId@
    raidtest.read: 66 66 = 66 IOps ( ~4356 KiB/sec )
    raidtest.write: 79 80 = 79 IOps ( ~5214 KiB/sec )
    raidtest.mixed: 79 78 = 78 IOps ( ~5148 KiB/sec )

    Now testing RAID0 configuration with 2 disks: czmId@czmId@
    raidtest.read: 119 120 = 119 IOps ( ~7854 KiB/sec )
    raidtest.write: 151 152 = 151 IOps ( ~9966 KiB/sec )
    raidtest.mixed: 151 149 = 150 IOps ( ~9900 KiB/sec )

    Now testing RAID0 configuration with 3 disks: czmId@czmId@
    raidtest.read: 171 167 = 169 IOps ( ~10 MiB/sec )
    raidtest.write: 204 207 = 205 IOps ( ~13 MiB/sec )
    raidtest.mixed: 211 207 = 209 IOps ( ~13 MiB/sec )

    Now testing RAIDZ configuration with 2 disks: czmId@czmId@
    raidtest.read: 100 103 = 101 IOps ( ~6666 KiB/sec )
    raidtest.write: 120 124 = 122 IOps ( ~8052 KiB/sec )
    raidtest.mixed: 121 120 = 120 IOps ( ~7920 KiB/sec )

    Now testing RAIDZ configuration with 3 disks: czmId@czmId@
    raidtest.read: 103 103 = 103 IOps ( ~6798 KiB/sec )
    raidtest.write: 122 121 = 121 IOps ( ~7986 KiB/sec )
    raidtest.mixed: 120 121 = 120 IOps ( ~7920 KiB/sec )

    Now testing RAIDZ2 configuration with 3 disks: czmId@czmId@
    raidtest.read: 212 84 = 148 IOps ( ~9768 KiB/sec )
    raidtest.write: 205 102 = 153 IOps ( ~10098 KiB/sec )
    raidtest.mixed: 176 110 = 143 IOps ( ~9438 KiB/sec )

    Now testing RAID1 configuration with 2 disks: czmId@czmId@
    raidtest.read: 109 127 = 118 IOps ( ~7788 KiB/sec )
    raidtest.write: 129 142 = 135 IOps ( ~8910 KiB/sec )
    raidtest.mixed: 122 128 = 125 IOps ( ~8250 KiB/sec )

    Now testing RAID1 configuration with 3 disks: czmId@czmId@
    raidtest.read: 158 195 = 176 IOps ( ~11 MiB/sec )
    raidtest.write: 175 195 = 185 IOps ( ~11 MiB/sec )
    raidtest.mixed: 168 195 = 181 IOps ( ~11 MiB/sec )

    Now testing RAIDZ+0 configuration with 8 disks: czmId@czmId@
    raidtest.read: 141 142 = 141 IOps ( ~9306 KiB/sec )
    raidtest.write: 182 179 = 180 IOps ( ~11 MiB/sec )
    raidtest.mixed: 183 181 = 182 IOps ( ~11 MiB/sec )

    Now testing RAIDZ+0 configuration with 12 disks: czmId@czmId@
    raidtest.read: 188 196 = 192 IOps ( ~12 MiB/sec )
    raidtest.write: 235 231 = 233 IOps ( ~15 MiB/sec )
    raidtest.mixed: 233 240 = 236 IOps ( ~15 MiB/sec )

    Now testing RAIDZ2+0 configuration with 12 disks: czmId@czmId@
    raidtest.read: 184 177 = 180 IOps ( ~11 MiB/sec )
    raidtest.write: 202 206 = 204 IOps ( ~13 MiB/sec )
    raidtest.mixed: 209 208 = 208 IOps ( ~13 MiB/sec )

    Done
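A note on reading the raidtest figures above: each throughput value is the IOps average multiplied by roughly 66 KiB (e.g. 87 IOps × 66 = 5742 KiB/sec), which suggests the benchmark uses a fixed ~66 KiB request size. That 66 KiB figure is inferred from the numbers themselves, not from the tool's documentation. A quick check against a few pairs taken from the log:

```shell
# Verify that throughput (KiB/sec) = IOps * 66 KiB for sample lines from the log above.
for pair in 87:5742 108:7128 114:7524 66:4356; do
    iops=${pair%%:*}
    kib=${pair##*:}
    [ $((iops * 66)) -eq "$kib" ] || { echo "mismatch at $iops IOps"; exit 1; }
done
echo "all sampled lines consistent with a 66 KiB I/O size"
```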
    DVD_Chef
    User

    128 posts

    Posted on 23 July 2014 @ 16:13
    I moved the drives back to the original system they were in when I started this thread and ran the benchmark again. This system has the same case design but has PCI-E slots, so it uses two IBM M1015 adapters flashed with HBA-mode firmware. RAM is the same at 16GB, but it has only one processor (at the same clock speed). Even though this system uses PCI-E adapters, it shows lower overall performance than the other PCI-X 133 based system: RAID0 reads max out in the 520s (MiB/sec), as opposed to the 550s on the other system, and the other tests are fairly consistent in showing the same 20-30 point difference.

    ZFSGURU-benchmark, version 1
    Test size: 32.000 gigabytes (GiB)
    Test rounds: 3
    Cooldown period: 2 seconds
    Sector size override: default (no override)
    Number of disks: 14 disks
    disk 1: gpt/disk05
    disk 2: gpt/disk04
    disk 3: gpt/disk07
    disk 4: gpt/disk06
    disk 5: gpt/disk01
    disk 6: gpt/disk00
    disk 7: gpt/disk03
    disk 8: gpt/disk02
    disk 9: gpt/disk013
    disk 10: gpt/disk012
    disk 11: gpt/disk011
    disk 12: gpt/disk010
    disk 13: gpt/disk09
    disk 14: gpt/disk08


  • Test Settings: TS32;

  • Tuning: none

  • Stopping background processes: sendmail, moused, syslogd and cron

  • Stopping Samba service


  • Now testing RAID0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 523 MiB/sec 523 MiB/sec 524 MiB/sec = 523 MiB/sec avg
    WRITE: 378 MiB/sec 376 MiB/sec 366 MiB/sec = 373 MiB/sec avg

    Now testing RAID0 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
    READ: 526 MiB/sec 524 MiB/sec 523 MiB/sec = 524 MiB/sec avg
    WRITE: 375 MiB/sec 380 MiB/sec 381 MiB/sec = 379 MiB/sec avg

    Now testing RAID0 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
    READ: 525 MiB/sec 526 MiB/sec 523 MiB/sec = 525 MiB/sec avg
    WRITE: 364 MiB/sec 379 MiB/sec 379 MiB/sec = 374 MiB/sec avg

    Now testing RAIDZ configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 300 MiB/sec 300 MiB/sec 302 MiB/sec = 301 MiB/sec avg
    WRITE: 203 MiB/sec 187 MiB/sec 180 MiB/sec = 190 MiB/sec avg

    Now testing RAIDZ configuration with 13 disks: cWmRd@cWmRd@cWmRd@
    READ: 300 MiB/sec 300 MiB/sec 301 MiB/sec = 300 MiB/sec avg
    WRITE: 185 MiB/sec 199 MiB/sec 212 MiB/sec = 199 MiB/sec avg

    Now testing RAIDZ configuration with 14 disks: cWmRd@cWmRd@cWmRd@
    READ: 301 MiB/sec 300 MiB/sec 299 MiB/sec = 300 MiB/sec avg
    WRITE: 200 MiB/sec 206 MiB/sec 206 MiB/sec = 204 MiB/sec avg

    Now testing RAIDZ2 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 300 MiB/sec 299 MiB/sec 299 MiB/sec = 299 MiB/sec avg
    WRITE: 179 MiB/sec 182 MiB/sec 186 MiB/sec = 182 MiB/sec avg

    Now testing RAIDZ2 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
    READ: 298 MiB/sec 296 MiB/sec 298 MiB/sec = 297 MiB/sec avg
    WRITE: 170 MiB/sec 179 MiB/sec 180 MiB/sec = 176 MiB/sec avg

    Now testing RAIDZ2 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
    READ: 298 MiB/sec 298 MiB/sec 298 MiB/sec = 298 MiB/sec avg
    WRITE: 189 MiB/sec 173 MiB/sec 184 MiB/sec = 182 MiB/sec avg

    Now testing RAID1 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 431 MiB/sec 436 MiB/sec 438 MiB/sec = 435 MiB/sec avg
    WRITE: 32 MiB/sec 30 MiB/sec 31 MiB/sec = 31 MiB/sec avg

    Now testing RAID1 configuration with 13 disks: cWmRd@cWmRd@cWmRd@
    READ: 459 MiB/sec 461 MiB/sec 471 MiB/sec = 464 MiB/sec avg
    WRITE: 30 MiB/sec 30 MiB/sec 29 MiB/sec = 29 MiB/sec avg

    Now testing RAID1 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
    READ: 462 MiB/sec 468 MiB/sec 465 MiB/sec = 465 MiB/sec avg
    WRITE: 28 MiB/sec 27 MiB/sec 28 MiB/sec = 28 MiB/sec avg

    Now testing RAID1+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 515 MiB/sec 518 MiB/sec 515 MiB/sec = 516 MiB/sec avg
    WRITE: 227 MiB/sec 206 MiB/sec 199 MiB/sec = 211 MiB/sec avg

    Now testing RAID1+0 configuration with 14 disks: cWmRd@cWmRd@cWmRd@
    READ: 518 MiB/sec 517 MiB/sec 516 MiB/sec = 517 MiB/sec avg
    WRITE: 238 MiB/sec 264 MiB/sec 273 MiB/sec = 258 MiB/sec avg

    Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
    READ: 348 MiB/sec 348 MiB/sec 350 MiB/sec = 349 MiB/sec avg
    WRITE: 168 MiB/sec 152 MiB/sec 166 MiB/sec = 162 MiB/sec avg

    Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 349 MiB/sec 347 MiB/sec 350 MiB/sec = 349 MiB/sec avg
    WRITE: 217 MiB/sec 194 MiB/sec 207 MiB/sec = 206 MiB/sec avg

    Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 322 MiB/sec 322 MiB/sec 323 MiB/sec = 323 MiB/sec avg
    WRITE: 208 MiB/sec 217 MiB/sec 208 MiB/sec = 211 MiB/sec avg

    Now testing RAID0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
    READ: 522 MiB/sec 522 MiB/sec 521 MiB/sec = 522 MiB/sec avg
    WRITE: 290 MiB/sec 290 MiB/sec 294 MiB/sec = 291 MiB/sec avg

    Now testing RAID0 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
    READ: 520 MiB/sec 523 MiB/sec 521 MiB/sec = 522 MiB/sec avg
    WRITE: 338 MiB/sec 317 MiB/sec 301 MiB/sec = 319 MiB/sec avg

    Now testing RAID0 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
    READ: 522 MiB/sec 519 MiB/sec 524 MiB/sec = 521 MiB/sec avg
    WRITE: 333 MiB/sec 344 MiB/sec 348 MiB/sec = 342 MiB/sec avg

    Now testing RAID0 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
    READ: 520 MiB/sec 520 MiB/sec 524 MiB/sec = 521 MiB/sec avg
    WRITE: 367 MiB/sec 360 MiB/sec 349 MiB/sec = 359 MiB/sec avg

    Now testing RAIDZ configuration with 8 disks: cWmRd@cWmRd@cWmRd@
    READ: 308 MiB/sec 310 MiB/sec 309 MiB/sec = 309 MiB/sec avg
    WRITE: 177 MiB/sec 166 MiB/sec 156 MiB/sec = 166 MiB/sec avg

    Now testing RAIDZ configuration with 9 disks: cWmRd@cWmRd@cWmRd@
    READ: 313 MiB/sec 312 MiB/sec 312 MiB/sec = 312 MiB/sec avg
    WRITE: 213 MiB/sec 221 MiB/sec 183 MiB/sec = 206 MiB/sec avg

    Now testing RAIDZ configuration with 10 disks: cWmRd@cWmRd@cWmRd@
    READ: 303 MiB/sec 304 MiB/sec 304 MiB/sec = 304 MiB/sec avg
    WRITE: 201 MiB/sec 187 MiB/sec 201 MiB/sec = 196 MiB/sec avg

    Now testing RAIDZ configuration with 11 disks: cWmRd@cWmRd@cWmRd@
    READ: 299 MiB/sec 301 MiB/sec 302 MiB/sec = 301 MiB/sec avg
    WRITE: 190 MiB/sec 179 MiB/sec 205 MiB/sec = 191 MiB/sec avg

    Now testing RAIDZ2 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
    READ: 309 MiB/sec 309 MiB/sec 310 MiB/sec = 309 MiB/sec avg
    WRITE: 180 MiB/sec 179 MiB/sec 182 MiB/sec = 180 MiB/sec avg

    Now testing RAIDZ2 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
    READ: 307 MiB/sec 305 MiB/sec 305 MiB/sec = 305 MiB/sec avg
    WRITE: 199 MiB/sec 191 MiB/sec 186 MiB/sec = 192 MiB/sec avg

    Now testing RAIDZ2 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
    READ: 311 MiB/sec 311 MiB/sec 312 MiB/sec = 311 MiB/sec avg
    WRITE: 177 MiB/sec 180 MiB/sec 178 MiB/sec = 178 MiB/sec avg

    Now testing RAIDZ2 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
    READ: 301 MiB/sec 301 MiB/sec 302 MiB/sec = 301 MiB/sec avg
    WRITE: 174 MiB/sec 170 MiB/sec 186 MiB/sec = 177 MiB/sec avg

    Now testing RAID1 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
    READ: 409 MiB/sec 400 MiB/sec 387 MiB/sec = 399 MiB/sec avg
    WRITE: 28 MiB/sec 29 MiB/sec 29 MiB/sec = 29 MiB/sec avg

    Now testing RAID1 configuration with 9 disks: cWmRd@cWmRd@cWmRd@
    READ: 384 MiB/sec 371 MiB/sec 376 MiB/sec = 377 MiB/sec avg
    WRITE: 29 MiB/sec 29 MiB/sec 29 MiB/sec = 29 MiB/sec avg

    Now testing RAID1 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
    READ: 377 MiB/sec 392 MiB/sec 379 MiB/sec = 383 MiB/sec avg
    WRITE: 29 MiB/sec 29 MiB/sec 28 MiB/sec = 29 MiB/sec avg

    Now testing RAID1 configuration with 11 disks: cWmRd@cWmRd@cWmRd@
    READ: 401 MiB/sec 389 MiB/sec 386 MiB/sec = 392 MiB/sec avg
    WRITE: 25 MiB/sec 26 MiB/sec 27 MiB/sec = 26 MiB/sec avg

    Now testing RAID1+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
    READ: 498 MiB/sec 495 MiB/sec 496 MiB/sec = 496 MiB/sec avg
    WRITE: 141 MiB/sec 129 MiB/sec 135 MiB/sec = 135 MiB/sec avg

    Now testing RAID1+0 configuration with 10 disks: cWmRd@cWmRd@cWmRd@
    READ: 507 MiB/sec 515 MiB/sec 514 MiB/sec = 512 MiB/sec avg
    WRITE: 207 MiB/sec 166 MiB/sec 167 MiB/sec = 180 MiB/sec avg

    Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
    READ: 345 MiB/sec 348 MiB/sec 349 MiB/sec = 347 MiB/sec avg
    WRITE: 159 MiB/sec 171 MiB/sec 162 MiB/sec = 164 MiB/sec avg

    Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 348 MiB/sec 347 MiB/sec 349 MiB/sec = 348 MiB/sec avg
    WRITE: 189 MiB/sec 213 MiB/sec 207 MiB/sec = 203 MiB/sec avg

    Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 324 MiB/sec 322 MiB/sec 323 MiB/sec = 323 MiB/sec avg
    WRITE: 209 MiB/sec 206 MiB/sec 213 MiB/sec = 209 MiB/sec avg

    Now testing RAID0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
    READ: 441 MiB/sec 437 MiB/sec 440 MiB/sec = 439 MiB/sec avg
    WRITE: 153 MiB/sec 130 MiB/sec 126 MiB/sec = 136 MiB/sec avg

    Now testing RAID0 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
    READ: 513 MiB/sec 510 MiB/sec 506 MiB/sec = 510 MiB/sec avg
    WRITE: 211 MiB/sec 180 MiB/sec 161 MiB/sec = 184 MiB/sec avg

    Now testing RAID0 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
    READ: 520 MiB/sec 522 MiB/sec 523 MiB/sec = 522 MiB/sec avg
    WRITE: 183 MiB/sec 225 MiB/sec 249 MiB/sec = 219 MiB/sec avg

    Now testing RAID0 configuration with 7 disks: cWmRd@cWmRd@cWmRd@
    READ: 522 MiB/sec 524 MiB/sec 520 MiB/sec = 522 MiB/sec avg
    WRITE: 241 MiB/sec 245 MiB/sec 225 MiB/sec = 237 MiB/sec avg

    Now testing RAIDZ configuration with 4 disks: cWmRd@cWmRd@cWmRd@
    READ: 302 MiB/sec 303 MiB/sec 303 MiB/sec = 302 MiB/sec avg
    WRITE: 102 MiB/sec 85 MiB/sec 88 MiB/sec = 92 MiB/sec avg

    Now testing RAIDZ configuration with 5 disks: cWmRd@cWmRd@cWmRd@
    READ: 322 MiB/sec 321 MiB/sec 322 MiB/sec = 322 MiB/sec avg
    WRITE: 130 MiB/sec 108 MiB/sec 130 MiB/sec = 123 MiB/sec avg

    Now testing RAIDZ configuration with 6 disks: cWmRd@cWmRd@cWmRd@
    READ: 314 MiB/sec 315 MiB/sec 314 MiB/sec = 315 MiB/sec avg
    WRITE: 124 MiB/sec 133 MiB/sec 134 MiB/sec = 131 MiB/sec avg

    Now testing RAIDZ configuration with 7 disks: cWmRd@cWmRd@cWmRd@
    READ: 312 MiB/sec 310 MiB/sec 310 MiB/sec = 311 MiB/sec avg
    WRITE: 143 MiB/sec 129 MiB/sec 172 MiB/sec = 148 MiB/sec avg

    Now testing RAIDZ2 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
    READ: 254 MiB/sec 256 MiB/sec 256 MiB/sec = 255 MiB/sec avg
    WRITE: 57 MiB/sec 62 MiB/sec 54 MiB/sec = 58 MiB/sec avg

    Now testing RAIDZ2 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
    READ: 322 MiB/sec 323 MiB/sec 325 MiB/sec = 323 MiB/sec avg
    WRITE: 90 MiB/sec 81 MiB/sec 93 MiB/sec = 88 MiB/sec avg

    Now testing RAIDZ2 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
    READ: 322 MiB/sec 320 MiB/sec 323 MiB/sec = 322 MiB/sec avg
    WRITE: 141 MiB/sec 117 MiB/sec 134 MiB/sec = 131 MiB/sec avg

    Now testing RAIDZ2 configuration with 7 disks: cWmRd@cWmRd@cWmRd@
    READ: 312 MiB/sec 312 MiB/sec 313 MiB/sec = 312 MiB/sec avg
    WRITE: 182 MiB/sec 172 MiB/sec 137 MiB/sec = 164 MiB/sec avg

    Now testing RAID1 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
    READ: 317 MiB/sec 315 MiB/sec 315 MiB/sec = 316 MiB/sec avg
    WRITE: 30 MiB/sec 28 MiB/sec 29 MiB/sec = 29 MiB/sec avg

    Now testing RAID1 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
    READ: 333 MiB/sec 323 MiB/sec 330 MiB/sec = 329 MiB/sec avg
    WRITE: 27 MiB/sec 29 MiB/sec 28 MiB/sec = 28 MiB/sec avg

    Now testing RAID1 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
    READ: 385 MiB/sec 394 MiB/sec 397 MiB/sec = 392 MiB/sec avg
    WRITE: 28 MiB/sec 27 MiB/sec 26 MiB/sec = 27 MiB/sec avg

    Now testing RAID1 configuration with 7 disks: cWmRd@cWmRd@cWmRd@
    READ: 434 MiB/sec 441 MiB/sec 439 MiB/sec = 438 MiB/sec avg
    WRITE: 26 MiB/sec 27 MiB/sec 27 MiB/sec = 27 MiB/sec avg

    Now testing RAID1+0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
    READ: 341 MiB/sec 342 MiB/sec 342 MiB/sec = 342 MiB/sec avg
    WRITE: 62 MiB/sec 63 MiB/sec 67 MiB/sec = 64 MiB/sec avg

    Now testing RAID1+0 configuration with 6 disks: cWmRd@cWmRd@cWmRd@
    READ: 453 MiB/sec 454 MiB/sec 456 MiB/sec = 454 MiB/sec avg
    WRITE: 98 MiB/sec 97 MiB/sec 101 MiB/sec = 99 MiB/sec avg

    Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
    READ: 345 MiB/sec 349 MiB/sec 350 MiB/sec = 348 MiB/sec avg
    WRITE: 166 MiB/sec 179 MiB/sec 183 MiB/sec = 176 MiB/sec avg

    Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 349 MiB/sec 345 MiB/sec 347 MiB/sec = 347 MiB/sec avg
    WRITE: 216 MiB/sec 209 MiB/sec 210 MiB/sec = 212 MiB/sec avg

    Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 323 MiB/sec 323 MiB/sec 322 MiB/sec = 322 MiB/sec avg
    WRITE: 207 MiB/sec 218 MiB/sec 204 MiB/sec = 210 MiB/sec avg

    Now testing RAID0 configuration with 1 disks: cWmRd@cWmRd@cWmRd@
    READ: 134 MiB/sec 136 MiB/sec 136 MiB/sec = 135 MiB/sec avg
    WRITE: 31 MiB/sec 32 MiB/sec 30 MiB/sec = 31 MiB/sec avg

    Now testing RAID0 configuration with 2 disks: cWmRd@cWmRd@cWmRd@
    READ: 266 MiB/sec 259 MiB/sec 274 MiB/sec = 266 MiB/sec avg
    WRITE: 64 MiB/sec 59 MiB/sec 68 MiB/sec = 64 MiB/sec avg

    Now testing RAID0 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
    READ: 382 MiB/sec 382 MiB/sec 370 MiB/sec = 378 MiB/sec avg
    WRITE: 115 MiB/sec 111 MiB/sec 117 MiB/sec = 114 MiB/sec avg

    Now testing RAIDZ configuration with 2 disks: cWmRd@cWmRd@cWmRd@
    READ: 144 MiB/sec 143 MiB/sec 146 MiB/sec = 144 MiB/sec avg
    WRITE: 29 MiB/sec 28 MiB/sec 28 MiB/sec = 28 MiB/sec avg

    Now testing RAIDZ configuration with 3 disks: cWmRd@cWmRd@cWmRd@
    READ: 256 MiB/sec 266 MiB/sec 261 MiB/sec = 261 MiB/sec avg
    WRITE: 62 MiB/sec 55 MiB/sec 69 MiB/sec = 62 MiB/sec avg

    Now testing RAIDZ2 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
    READ: 142 MiB/sec 145 MiB/sec 140 MiB/sec = 142 MiB/sec avg
    WRITE: 27 MiB/sec 28 MiB/sec 27 MiB/sec = 27 MiB/sec avg

    Now testing RAID1 configuration with 2 disks: cWmRd@cWmRd@cWmRd@
    READ: 205 MiB/sec 198 MiB/sec 190 MiB/sec = 198 MiB/sec avg
    WRITE: 27 MiB/sec 27 MiB/sec 28 MiB/sec = 27 MiB/sec avg

    Now testing RAID1 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
    READ: 274 MiB/sec 270 MiB/sec 268 MiB/sec = 271 MiB/sec avg
    WRITE: 27 MiB/sec 27 MiB/sec 26 MiB/sec = 27 MiB/sec avg

    Now testing RAIDZ+0 configuration with 8 disks: cWmRd@cWmRd@cWmRd@
    READ: 348 MiB/sec 348 MiB/sec 352 MiB/sec = 349 MiB/sec avg
    WRITE: 161 MiB/sec 184 MiB/sec 185 MiB/sec = 177 MiB/sec avg

    Now testing RAIDZ+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 349 MiB/sec 348 MiB/sec 345 MiB/sec = 347 MiB/sec avg
    WRITE: 207 MiB/sec 196 MiB/sec 209 MiB/sec = 204 MiB/sec avg

    Now testing RAIDZ2+0 configuration with 12 disks: cWmRd@cWmRd@cWmRd@
    READ: 324 MiB/sec 322 MiB/sec 322 MiB/sec = 322 MiB/sec avg
    WRITE: 204 MiB/sec 194 MiB/sec 194 MiB/sec = 197 MiB/sec avg

    Done
    aaront
    User

    75 posts

    Posted on 25 July 2014 @ 23:16
    Just wanted to toss in here: a small cache SSD holding just metadata could help, depending on your files. For example, I do an rsync copy off another server to my tank every hour, then take a snapshot. The rsync runs about 10x faster with an SSD caching just the metadata.
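For anyone wanting to try this, a minimal sketch of a metadata-only L2ARC setup. The pool name tank and the cache device label are assumptions for illustration, not aaront's actual names:

```shell
# Add a small SSD partition as an L2ARC cache device (label is hypothetical).
zpool add tank cache gpt/cache0

# Restrict the L2ARC to metadata only, so file data is not cached.
zfs set secondarycache=metadata tank

# Confirm the property and watch per-device traffic, including the cache.
zfs get secondarycache tank
zpool iostat -v tank 5
```

The secondarycache property accepts all, none, or metadata; setting it to metadata keeps directory and block-pointer information on the SSD, which is exactly what speeds up rsync's tree walk over a large dataset.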
    DVD_Chef
    User

    128 posts

    Posted on 28 July 2014 @ 21:00
    What size are your pool and the cache device? Have you set the secondarycache property to metadata on your pool so it stores only metadata?

    More info would be appreciated, aaront.

    Thanks
    zsozso
    User

    13 posts

    Posted on 30 July 2014 @ 15:15
    Hi,

    My config is somewhat similar: I have the same LSI/M1015/Dell H200 class of controller, but my system is a very low-end, very low-power Intel Ivy Bridge dual-core G2030 on a cheap motherboard with two PCI-E slots.
    I have 10 Toshiba 3TB disks in RAIDZ2:
    NAME            STATE   READ WRITE CKSUM
    tank            ONLINE     0     0     0
      raidz2-0      ONLINE     0     0     0
        gpt/tank01  ONLINE     0     0     0
        gpt/tank02  ONLINE     0     0     0
        gpt/tank03  ONLINE     0     0     0
        gpt/tank04  ONLINE     0     0     0
        gpt/tank05  ONLINE     0     0     0
        gpt/tank06  ONLINE     0     0     0
        gpt/tank07  ONLINE     0     0     0
        gpt/tank08  ONLINE     0     0     0
        gpt/tank09  ONLINE     0     0     0
        gpt/tank10  ONLINE     0     0     0
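For reference, the status output above corresponds to a single 10-wide RAIDZ2 vdev, which could be created with something like the following. This is a sketch using the labels shown in the status output, not necessarily the exact command zsozso ran:

```shell
# One 10-disk RAIDZ2 vdev: any two disks may fail without data loss.
zpool create tank raidz2 \
    gpt/tank01 gpt/tank02 gpt/tank03 gpt/tank04 gpt/tank05 \
    gpt/tank06 gpt/tank07 gpt/tank08 gpt/tank09 gpt/tank10
```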

    My pool is not empty; it has 5.22TB of data in it.
    My performance is a lot better, though: writes are about 700MB/s and reads are around 1250MB/s.
    I tried RAID0 before doing the data migration and got 1.5GB/s writes and 1.7GB/s reads.
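Those figures are in the right ballpark for sequential I/O on this layout: a 10-disk RAIDZ2 stripes data across 8 data disks, so at roughly 150 MB/s of sequential throughput per modern 3TB disk (an assumed per-disk figure, not measured anywhere in this thread) you would expect on the order of:

```shell
# Rough streaming estimate for a 10-disk RAIDZ2 (8 data + 2 parity).
disks=10
parity=2
per_disk_mb=150   # assumed sequential MB/s per disk
echo "estimated streaming throughput: $(( (disks - parity) * per_disk_mb )) MB/s"
# prints: estimated streaming throughput: 1200 MB/s
```

That is close to the ~1250 MB/s read zsozso reports, while the ~520 MiB/sec ceiling across all configurations in the benchmark above likely points to a bus or backplane bottleneck rather than the disks themselves.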
    Which firmware version do you use on the M1015s?

    Zsolt
    DVD_Chef
    User

    128 posts

    Posted on 30 July 2014 @ 17:48
    Zsolt

    The cards were flashed to LSI SAS2008 IT mode, as documented in the lime-technology forum. At the time I did this, P11 or P12 was the latest version available, so it is one of those.
    zsozso
    User

    13 posts

    Posted on 30 July 2014 @ 20:38
    When doing reads/writes, I presume the CPU does not max out?
    It's really strange that I get 2-3 times as much out of it.
    DVD_Chef
    User

    128 posts

    Posted on 30 July 2014 @ 20:56
    zsozso wrote: When doing reads/writes, I presume the CPU does not max out?
    It's really strange that I get 2-3 times as much out of it.

    It is a 3.2GHz quad-core Xeon, so it does not really even break a sweat. What kind of drives are you using in yours? Mine are 2TB Hitachi SATA 6Gb/s 7200rpm drives.
