minasmorgul
User

43 posts

Posted on 4 August 2011 @ 23:27, edited 5 August 2011 @ 04:04
Configuration:

MOBO: ASUS M4A78LT-M 760G RGVSM
CPU: AMD Athlon II X2 250
RAM: 8 GB Kingston ValueRAM DIMM ECC DDR3-1333
HBA: LSI SAS 9201-16i
HDD: 15 x Samsung HD204UI
HDD cages: 3 x Netstor NS170S (replace the built-in fans, they're pretty loud)
CABLE: 4 x 3WARE CBL-SFF8087OCF-10M
NIC: 2 x ASUS NX1101 + onboard NIC

Note: I'm still not able to recommend the combination of my HBA and HDDs. Maybe after a firmware update of both components. So far I still get dropouts, but thanks to ZFS, no data loss.
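
For what it's worth, a minimal sketch of how I keep an eye on those dropouts (standard ZFS commands; pool name 'data' as used further below):

    # show per-device read/write/checksum error counters and pool state
    zpool status -v data
    # verify all data against its checksums; repairs from redundancy if needed
    zpool scrub data
    # reset the error counters once the cause has been dealt with
    zpool clear data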


Results:


ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 2 seconds
Sector size override: default (no override)
Number of disks: 5 disks
disk 1: label/c1a3t1
disk 2: label/c1a3t2
disk 3: label/c1a3t3
disk 4: label/c1a3t4
disk 5: label/c1a3t5


  • Test Settings: TS32;

  • Tuning: KMEM=12g; AMIN=4g; AMAX=6g; PFD=0;

  • Stopping background processes: sendmail, moused, syslogd and cron

  • Stopping Samba service


  • Now testing RAID0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
    READ: 410 MiB/sec 413 MiB/sec 417 MiB/sec = 413 MiB/sec avg
    WRITE: 393 MiB/sec 398 MiB/sec 379 MiB/sec = 390 MiB/sec avg

    Now testing RAID0 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
    READ: 543 MiB/sec 505 MiB/sec 511 MiB/sec = 520 MiB/sec avg
    WRITE: 478 MiB/sec 474 MiB/sec 461 MiB/sec = 471 MiB/sec avg

    Now testing RAIDZ configuration with 4 disks: cWmRd@cWmRd@cWmRd@
    READ: 316 MiB/sec 328 MiB/sec 308 MiB/sec = 317 MiB/sec avg
    WRITE: 254 MiB/sec 263 MiB/sec 276 MiB/sec = 264 MiB/sec avg

    Now testing RAIDZ configuration with 5 disks: cWmRd@cWmRd@cWmRd@
    READ: 349 MiB/sec 351 MiB/sec 385 MiB/sec = 362 MiB/sec avg
    WRITE: 337 MiB/sec 332 MiB/sec 325 MiB/sec = 331 MiB/sec avg

    Now testing RAIDZ2 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
    READ: 225 MiB/sec 226 MiB/sec 227 MiB/sec = 226 MiB/sec avg
    WRITE: 171 MiB/sec 175 MiB/sec 163 MiB/sec = 170 MiB/sec avg

    Now testing RAIDZ2 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
    READ: 309 MiB/sec 315 MiB/sec 309 MiB/sec = 311 MiB/sec avg
    WRITE: 266 MiB/sec 256 MiB/sec 247 MiB/sec = 256 MiB/sec avg

    Now testing RAID1 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
    READ: 292 MiB/sec 288 MiB/sec 296 MiB/sec = 292 MiB/sec avg
    WRITE: 96 MiB/sec 100 MiB/sec 98 MiB/sec = 98 MiB/sec avg

    Now testing RAID1 configuration with 5 disks: cWmRd@cWmRd@cWmRd@
    READ: 361 MiB/sec 359 MiB/sec 358 MiB/sec = 359 MiB/sec avg
    WRITE: 93 MiB/sec 94 MiB/sec 99 MiB/sec = 95 MiB/sec avg

    Now testing RAID1+0 configuration with 4 disks: cWmRd@cWmRd@cWmRd@
    READ: 321 MiB/sec 334 MiB/sec 334 MiB/sec = 330 MiB/sec avg
    WRITE: 192 MiB/sec 196 MiB/sec 206 MiB/sec = 198 MiB/sec avg

    Now testing RAID0 configuration with 1 disks: cWmRd@cWmRd@cWmRd@
    READ: 114 MiB/sec 113 MiB/sec 114 MiB/sec = 114 MiB/sec avg
    WRITE: 106 MiB/sec 97 MiB/sec 100 MiB/sec = 101 MiB/sec avg

    Now testing RAID0 configuration with 2 disks: cWmRd@cWmRd@cWmRd@
    READ: 220 MiB/sec 219 MiB/sec 219 MiB/sec = 219 MiB/sec avg
    WRITE: 212 MiB/sec 206 MiB/sec 209 MiB/sec = 209 MiB/sec avg

    Now testing RAID0 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
    READ: 319 MiB/sec 325 MiB/sec 323 MiB/sec = 322 MiB/sec avg
    WRITE: 294 MiB/sec 303 MiB/sec 309 MiB/sec = 302 MiB/sec avg

    Now testing RAIDZ configuration with 2 disks: cWmRd@cWmRd@cWmRd@
    READ: 91 MiB/sec 94 MiB/sec 94 MiB/sec = 93 MiB/sec avg
    WRITE: 90 MiB/sec 91 MiB/sec 91 MiB/sec = 91 MiB/sec avg

    Now testing RAIDZ configuration with 3 disks: cWmRd@cWmRd@cWmRd@
    READ: 191 MiB/sec 184 MiB/sec 194 MiB/sec = 189 MiB/sec avg
    WRITE: 183 MiB/sec 195 MiB/sec 184 MiB/sec = 187 MiB/sec avg

    Now testing RAIDZ2 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
    READ: 112 MiB/sec 108 MiB/sec 113 MiB/sec = 111 MiB/sec avg
    WRITE: 79 MiB/sec 69 MiB/sec 87 MiB/sec = 79 MiB/sec avg

    Now testing RAID1 configuration with 2 disks: cWmRd@cWmRd@cWmRd@
    READ: 171 MiB/sec 164 MiB/sec 168 MiB/sec = 168 MiB/sec avg
    WRITE: 85 MiB/sec 92 MiB/sec 93 MiB/sec = 90 MiB/sec avg

    Now testing RAID1 configuration with 3 disks: cWmRd@cWmRd@cWmRd@
    READ: 216 MiB/sec 212 MiB/sec 208 MiB/sec = 212 MiB/sec avg
    WRITE: 96 MiB/sec 86 MiB/sec 85 MiB/sec = 89 MiB/sec avg

    Done
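
    For reference, the "Tuning: KMEM=12g; AMIN=4g; AMAX=6g; PFD=0" line above should correspond to the usual FreeBSD loader tunables - my assumption about what the benchmark abbreviations stand for, roughly like this in /boot/loader.conf:

        # assumed mapping of the benchmark's tuning abbreviations
        vm.kmem_size="12G"              # kernel memory ceiling (KMEM)
        vfs.zfs.arc_min="4G"            # minimum ARC size (AMIN)
        vfs.zfs.arc_max="6G"            # maximum ARC size (AMAX)
        vfs.zfs.prefetch_disable="0"    # prefetch left enabled (PFD=0)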


    Doesn't look too bad, I think. Keep in mind that my main pool was being scrubbed in parallel with the benchmark!

    =;o))

    So far I have two such 5-disk RAIDZ configurations combined into one large pool:

    data
      raidz1-0
        label/c1a1t1
        label/c1a1t2
        label/c1a1t3
        label/c1a1t4
        label/c1a1t5
      raidz1-1
        label/c1a2t1
        label/c1a2t2
        label/c1a2t3
        label/c1a2t4
        label/c1a2t5

    It is already filled with vital data, so testing is only possible with the 5 leftover disks. If you want me to try something special, tell me now - I'm about to add the leftover disks to the pool as well.
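
    If I do add them, it would be something along these lines - a sketch only, using the labels of the leftover benchmark disks; note that an added vdev cannot be removed from the pool again:

        # append a third 5-disk RAID-Z vdev to the existing pool
        zpool add data raidz label/c1a3t1 label/c1a3t2 label/c1a3t3 \
                             label/c1a3t4 label/c1a3t5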

    Regards,
    minas
    minasmorgul
    User

    43 posts

    Posted on 26 August 2011 @ 00:00

    Now I am able to recommend the combination of my HBA and HDDs!

    I didn't do any firmware update, but I haven't had any dropouts since using 0.1.8.

    @Jason: Did a new mps driver for the LSI SAS 9201-16i come with 8.3-001?

    Regards,
    minas
    Jason
    Developer

    806 posts

    Posted on 26 August 2011 @ 02:25
    Hey Minas!

    I didn't notice your thread until just a moment ago; thanks for those benchmarks! They look very consistent. Even with just a dual-core you can see that RAID-Z2 scales very consistently. Having 8 GiB of RAM is probably the biggest factor here. Testing with more disks would probably limit your scores a bit, but you still got very decent scores, and even if your system did bottleneck the number of disks you have, performance should still be very acceptable, I think!

    Regarding your controller: that is the new LSI SAS 2008 controller, sporting a 6 Gbps SAS link and PCI Express 2.0. It is supported by the 'mps' driver, which is still experimental. In previous system images this driver was not present in the standard FreeBSD distribution, so I used a patchset to add it to FreeBSD. In 8.3-001 (8-STABLE) the 'mps' driver is present by default, without any patch. I can't say for sure, but it is highly likely that this driver has been improved over my earlier patchset.

    The improved 'mps' driver in the 8.3-001 system image is probably still lacking some functionality required to be really stable. For example, the previous driver had problems during the boot phase if a disk misbehaved during initial detection, leaving the system unable to boot. Such problems may be an annoyance, but they will be fixed at some point (and may already be fixed) - they should no longer be a major reason to avoid the 6 Gbps controllers, I believe.
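
    If you want to check which mps driver and controller firmware your system actually picked up, generic FreeBSD commands such as these should do (nothing ZFSguru-specific; the exact sysctl names can vary per driver version):

        dmesg | grep -i mps      # probe messages from the mps(4) driver
        sysctl dev.mps.0         # per-controller sysctls, incl. driver/firmware info where exposed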

    It's nice to hear your system is performing well with this new controller. Note that you can also run a mini-benchmark on your existing pool: you can find it on the Pools->Benchmark page, and it is safe to use without any harm to your data.
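
    Roughly speaking - a sketch of the idea, not the actual ZFSguru code - such a non-destructive sequential benchmark boils down to writing a large file of zeroes onto the pool and reading it back, for example:

        # assumes the pool is mounted at /data; the test file should exceed RAM
        # so the read-back is not served from the ARC
        dd if=/dev/zero of=/data/bench.tmp bs=1m count=32768   # ~32 GiB sequential write
        dd if=/data/bench.tmp of=/dev/null bs=1m               # sequential read
        rm /data/bench.tmp                                      # existing data stays untouched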

    Cheers!
    minasmorgul
    User

    43 posts

    Posted on 30 August 2011 @ 01:41
    Hi Jason,

    thank you for the detailed explanation of the 'mps' driver. The non-destructive benchmark looks as follows (the data pool is still without the 'leftover disks'):

    Pool : data
    Read throughput : 313.4 MB/s
    Read throughput : 298.9 MiB/s
    Write throughput: 197.1 MB/s
    Write throughput: 188 MiB/s

    Best Regards,
    minas
    Jason
    Developer

    806 posts

    Posted on 31 August 2011 @ 09:33
    Hey Minas,

    Keep in mind that the non-destructive benchmark runs on a pool that already contains data, so somewhat lower numbers are normal in this benchmark. Still, I think your performance could be a bit higher. But if you're content with the performance over gigabit network, you may not have any reason to tune it further.

    Should you run into any problems with your controller using the new 'mps' driver in 8.3-001, please give me a heads-up! I would love to know about any potential issues.
    minasmorgul
    User

    43 posts

    Posted on 1 September 2011 @ 01:55, edited 01:56
    Hi Jason,

    Acknowledged - you'll hear about any issues with my controller.
    The_Dave
    User

    221 posts

    Posted on 1 September 2011 @ 07:12, edited 07:14
    Hmmm... I just picked up a Supermicro AOC-USAS2-L8i (LSI SAS 2008 controller) and flashed it to an "e". Using 10 WD20EARS drives configured as a 2-vdev RAID-Z array, I pulled the following with the non-destructive bench (32 GB):

    Tank - 2-vdev RAID-Z testing - 4K sector override

    32GB Sequential zero Benchmark

    Pool : Tank
    Read throughput : 353.1 MB/s
    Read throughput : 336.7 MiB/s
    Write throughput: 383.7 MB/s
    Write throughput: 365.9 MiB/s

    Similar read speeds, but your write speed is a lot slower? I will start a thread about letting ZFSGuru create a 4K pool and add the drives, versus creating nop devices for each drive myself and then creating the pool. It's interesting! To sum it up, I gained about 50-75 MB/s!
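
    For reference, the nop approach looks roughly like this (device names are placeholders, not my actual ones): a .nop provider created with gnop(8) that advertises 4K sectors makes zpool create pick ashift=12, and one .nop per vdev is enough since ZFS uses the largest sector size within the vdev.

        gnop create -S 4096 /dev/ada0
        zpool create Tank raidz /dev/ada0.nop /dev/ada1 /dev/ada2 /dev/ada3 /dev/ada4
        zpool export Tank
        gnop destroy /dev/ada0.nop
        zpool import Tank        # the pool keeps ashift=12 without the .nop device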
    nicholasa
    User

    158 posts

    Posted on 1 September 2011 @ 17:54
    On a 6-disk Samsung F4 2TB RAIDZ I get this:

    64GB test-file
    Pool : storage1
    Read throughput : 522.2 MB/s
    Read throughput : 498 MiB/s
    Write throughput: 374.6 MB/s
    Write throughput: 357.2 MiB/s

    While running all the stuff I normally do (VirtualBox, SABnzbd, SickBeard, Transmission, iSCSI, etc.)
    nicholasa
    User

    158 posts

    Posted on 8 September 2011 @ 17:36
    Same server but different disk configuration:
    4x2TB Samsung F4 in RAIDZ:
    (64GB testfile)
    Pool : data1
    Read throughput : 308.4 MB/s
    Read throughput : 294.1 MiB/s
    Write throughput: 180.8 MB/s
    Write throughput: 172.4 MiB/s

    I'm planning to add another RAIDZ vdev to this pool: will the performance improve? If not, I'm thinking about using all 8 disks in a single RAIDZ.
    Jason
    Developer

    806 posts

    Posted on 8 September 2011 @ 22:32
    A 5-disk RAID-Z with 4K-sector disks should work much better than a 4-disk RAID-Z, yes!

    Another setup you could consider is a 6-disk RAID-Z2, which adds more protection. An 8-disk RAID-Z may not be very safe due to the large number of disks; a RAID-Z2 adds a lot of protection.

    The optimal configurations:
    3 or 5 disks: RAID-Z
    6 or 10 disks: RAID-Z2
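
    The reasoning behind those counts, assuming the default 128 KiB ZFS recordsize: each record should split into a power-of-two chunk per data disk, so it also lines up nicely with 4K physical sectors.

         5-disk RAID-Z  -> 4 data disks: 128 KiB / 4 = 32 KiB per disk   (even)
         4-disk RAID-Z  -> 3 data disks: 128 KiB / 3 = ~42.7 KiB per disk (uneven)
         6-disk RAID-Z2 -> 4 data disks: 128 KiB / 4 = 32 KiB per disk   (even)
        10-disk RAID-Z2 -> 8 data disks: 128 KiB / 8 = 16 KiB per disk   (even)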
    nicholasa
    User

    158 posts

    Posted on 9 September 2011 @ 12:18
    So what would you do if you have 8 disks?
    a: 2x VDEVS of 4x disks in RAIDZ?
    b: 8x disks in a RAIDZ2?
    c: 8x disks in a DAIDZ?

    I know that c is less secure than a or b.
    Performance is preferred, but security is also important.
    danswartz
    User

    252 posts

    Posted on 9 September 2011 @ 17:03
    What is 'DAIDZ'?
    nicholasa
    User

    158 posts

    Posted on 12 September 2011 @ 15:00
    Dan: you are funny!
    Anyway, I got my last two disks today. Backed up what I needed and deleted the old pool, then created a new one.

    8x Samsung F1 in RAIDZ:

    64GB test file,

    with default sector size:
    Pool : data1
    Read throughput : 552.8 MB/s
    Read throughput : 527.2 MiB/s
    Write throughput: 423.7 MB/s
    Write throughput: 404.1 MiB/s

    with 512K sector size:
    Pool : data1
    Read throughput : 693.8 MB/s
    Read throughput : 661.6 MiB/s
    Write throughput: 536.3 MB/s
    Write throughput: 511.4 MiB/s
    Jason
    Developer

    806 posts

    Posted on 12 September 2011 @ 15:37, edited 15:38
    512K sector size? Don't you mean 4K, or did you actually use a half-megabyte sector size? :)

    You should not do that; it would waste a lot of space on small files and metadata - unless your files are all 100 GB+ in size.

    What about an 8-disk RAID-Z2, tested with both the 4K sector size and the default (512 bytes)? With 8 disks, choosing RAID-Z2 over RAID-Z is recommended.
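
    By the way, you can check which sector size a pool actually ended up with by looking at its ashift value (ashift=9 means 512-byte, ashift=12 means 4K); something like this should work, assuming zdb can read the pool's cached configuration:

        zdb -C data1 | grep ashift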
    nicholasa
    User

    158 posts

    Posted on 12 September 2011 @ 21:57, edited 22:00
    Ah, I set it to 512 bytes. My bad. Is that bad, Jason? I thought you should only use 4K if the number of drives minus parity divides evenly into 128?

    Nah, I don't need RAIDZ2. It's just for home storage, and I use online backup as well.

    Some pics:
    http://bildr.no/view/973576

    http://bildr.no/view/973588

    It doesn't normally stand in the living room; that's just for today.
    Jason
    Developer

    806 posts

    Posted on 12 September 2011 @ 23:06
    Looks very tidy! :)

    The 512-byte (default) sector size is good for the optimal configurations (RAID-Z: 3 or 5 disks, RAID-Z2: 6 or 10 disks). For non-optimal configurations the override to 4K sectors may help a bit, but at the cost of lost space - it's a trade-off. You may not need the sector size override if you're content with your performance, especially since most systems are limited by gigabit network.

    Keep in mind, though, that benchmark numbers are best-case numbers; they will drop as you fill the pool with real data, start working on the slower parts of your hard drives, and filesystem fragmentation sets in. Therefore, it's good if your local performance is higher than the level you want to achieve over (gigabit) networking.
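
    As a rough reference: gigabit Ethernet tops out at around 110-120 MB/s of payload, so local pool throughput well above that already saturates the network. If you want to verify the network side itself, a quick test with iperf (assuming it is installed on both machines, e.g. from ports) would look like:

        iperf -s                 # on the server
        iperf -c <server-ip>     # on the client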
    mamruoc
    User

    70 posts

    Posted on 14 September 2011 @ 15:07
    Jason, so if the system has an optimal configuration (in my case: 10 disks in RAID-Z2), I should not use 4K sectors?
    nicholasa
    User

    158 posts

    Posted on 20 September 2011 @ 21:44
    OK, after I "suddenly" found out I had created a RAID0 instead of a RAIDZ, here are my new benchmarks:
    (same config, 8 Samsung 2TB disks)
    RAIDZ
    512-byte sector size:
    Pool : tank
    Read throughput : 617 MB/s
    Read throughput : 588.4 MiB/s
    Write throughput: 410.5 MB/s
    Write throughput: 391.4 MiB/s

    4K sector size:
    Pool : tank
    Read throughput : 614.8 MB/s
    Read throughput : 586.3 MiB/s
    Write throughput: 419.2 MB/s
    Write throughput: 399.8 MiB/s

    I'm gonna stick with the 512-byte sector size.
    minasmorgul
    User

    43 posts

    Posted on 31 January 2012 @ 18:19, edited 1 February 2012 @ 12:37
    Hi Jason,

    congratulations, ZFSguru is really growing up now. I just updated to 0.2.0-beta4 and everything looks very decent!

    Starting the non-destructive benchmark reveals that something has happened under the hood as well:

    ZFSguru 0.2.0-beta4 pool benchmark
    Pool : data (27.2T, 93% full)
    Test size : 64 GiB
    Data source : /dev/zero
    Read throughput : 634.6 MB/s = 605.2 MiB/s
    Write throughput: 478.9 MB/s = 456.7 MiB/s

    Same hardware configuration as posted on 5 August 2011 @ 06:27

    Maybe the LSI SAS 2008 controller got a new 'mps' driver again?!

    Best Regards,
    Minas