rob
User

5 posts

Posted on 22 March 2013 @ 08:34
Hi,

I've built a test machine to evaluate ZFSguru, and NFS on ZFS in particular. I'm planning on building one or two storage boxes for VMware storage (8 nodes with about 80 VMs).

For testing purposes I've built the following:
  • ASRock B75 Pro3-M

  • Pentium G220

  • Corsair CMV16GX3M2A1333C9

  • 1 * 1TB SATA (WD1002FBYS-02A6B0) for root-on-ZFS (temporarily)

  • 5 * 1TB SATA (WD1002FBYS-02A6B0)


I would greatly appreciate hearing what you think of these results. I think the throughput is OK, but the IOps seem a bit low, especially for VMware purposes.

    ***************

    ZFSGURU-benchmark, version 1
    Test size: 32.000 gigabytes (GiB)
    Test rounds: 3
    Cooldown period: 2 seconds
    Sector size override: default (no override)
    Number of disks: 5 disks
    disk 1: gpt/hd0
    disk 2: gpt/hd1
    disk 3: gpt/hd2
    disk 4: gpt/hd3
    disk 5: gpt/hd4


    Test Settings: TS32;
    Tuning: none
    Stopping background processes: sendmail, moused, syslogd and cron
    Stopping Samba service


    Now testing RAID0 configuration with 4 disks: cWmRzmId@cWmRzmId@cWmRzmId@
    READ: 360 MiB/sec 361 MiB/sec 361 MiB/sec = 361 MiB/sec avg
    WRITE: 434 MiB/sec 381 MiB/sec 384 MiB/sec = 400 MiB/sec avg
    raidtest.read: 95 95 97 = 95 IOps ( ~6270 KiB/sec )
    raidtest.write: 97 97 102 = 98 IOps ( ~6468 KiB/sec )
    raidtest.mixed: 91 90 91 = 90 IOps ( ~5940 KiB/sec )

    Now testing RAID0 configuration with 5 disks: cWmRzmId@cWmRzmId@cWmRzmId@
    READ: 438 MiB/sec 433 MiB/sec 429 MiB/sec = 433 MiB/sec avg
    WRITE: 468 MiB/sec 462 MiB/sec 453 MiB/sec = 461 MiB/sec avg
    raidtest.read: 114 114 98 = 108 IOps ( ~7128 KiB/sec )
    raidtest.write: 122 126 102 = 116 IOps ( ~7656 KiB/sec )
    raidtest.mixed: 100 98 98 = 98 IOps ( ~6468 KiB/sec )

    Now testing RAIDZ configuration with 4 disks: cWmRzmId@cWmRzmId@cWmRzmId@
    READ: 284 MiB/sec 277 MiB/sec 273 MiB/sec = 278 MiB/sec avg
    WRITE: 258 MiB/sec 267 MiB/sec 265 MiB/sec = 263 MiB/sec avg
    raidtest.read: 74 75 74 = 74 IOps ( ~4884 KiB/sec )
    raidtest.write: 76 77 77 = 76 IOps ( ~5016 KiB/sec )
    raidtest.mixed: 67 69 68 = 68 IOps ( ~4488 KiB/sec )

    Now testing RAIDZ configuration with 5 disks: cWmRzmId@cWmRzmId@cWmRzmId@
    READ: 360 MiB/sec 358 MiB/sec 342 MiB/sec = 354 MiB/sec avg
    WRITE: 346 MiB/sec 333 MiB/sec 345 MiB/sec = 341 MiB/sec avg
    raidtest.read: 72 72 73 = 72 IOps ( ~4752 KiB/sec )
    raidtest.write: 74 75 76 = 75 IOps ( ~4950 KiB/sec )
    raidtest.mixed: 66 66 67 = 66 IOps ( ~4356 KiB/sec )

    Now testing RAIDZ2 configuration with 4 disks: cWmRzmId@cWmRzmId@cWmRzmId@
    READ: 185 MiB/sec 188 MiB/sec 185 MiB/sec = 186 MiB/sec avg
    WRITE: 169 MiB/sec 186 MiB/sec 180 MiB/sec = 178 MiB/sec avg
    raidtest.read: 72 72 72 = 72 IOps ( ~4752 KiB/sec )
    raidtest.write: 76 75 76 = 75 IOps ( ~4950 KiB/sec )
    raidtest.mixed: 68 69 69 = 68 IOps ( ~4488 KiB/sec )

    Now testing RAIDZ2 configuration with 5 disks: cWmRzmId@cWmRzmId@cWmRzmId@
    READ: 284 MiB/sec 277 MiB/sec 281 MiB/sec = 281 MiB/sec avg
    WRITE: 256 MiB/sec 259 MiB/sec 251 MiB/sec = 255 MiB/sec avg
    raidtest.read: 73 73 73 = 73 IOps ( ~4818 KiB/sec )
    raidtest.write: 75 76 75 = 75 IOps ( ~4950 KiB/sec )
    raidtest.mixed: 67 69 68 = 68 IOps ( ~4488 KiB/sec )

    Now testing RAID1 configuration with 4 disks: cWmRzmId@cWmRzmId@cWmRzmId@
    READ: 299 MiB/sec 299 MiB/sec 294 MiB/sec = 297 MiB/sec avg
    WRITE: 97 MiB/sec 91 MiB/sec 89 MiB/sec = 92 MiB/sec avg
    raidtest.read: 113 116 95 = 108 IOps ( ~7128 KiB/sec )
    raidtest.write: 110 112 91 = 104 IOps ( ~6864 KiB/sec )
    raidtest.mixed: 87 87 85 = 86 IOps ( ~5676 KiB/sec )

    Now testing RAID1 configuration with 5 disks: cWmRzmId@cWmRzmId@cWmRzmId@
    READ: 359 MiB/sec 360 MiB/sec 352 MiB/sec = 357 MiB/sec avg
    WRITE: 85 MiB/sec 87 MiB/sec 89 MiB/sec = 87 MiB/sec avg
    raidtest.read: 94 96 99 = 96 IOps ( ~6336 KiB/sec )
    raidtest.write: 91 94 98 = 94 IOps ( ~6204 KiB/sec )
    raidtest.mixed: 84 66 87 = 79 IOps ( ~5214 KiB/sec )

    Now testing RAID1+0 configuration with 4 disks: cWmRzmId@cWmRzmId@cWmRzmId@
    READ: 333 MiB/sec 329 MiB/sec 323 MiB/sec = 328 MiB/sec avg
    WRITE: 192 MiB/sec 188 MiB/sec 177 MiB/sec = 186 MiB/sec avg
    raidtest.read: 96 103 100 = 99 IOps ( ~6534 KiB/sec )
    raidtest.write: 99 108 102 = 103 IOps ( ~6798 KiB/sec )
    raidtest.mixed: 88 79 88 = 85 IOps ( ~5610 KiB/sec )

    Now testing RAID0 configuration with 1 disks: cWmRzmId@cWmRzmId@cWmRzmId@
    READ: 111 MiB/sec 106 MiB/sec 107 MiB/sec = 108 MiB/sec avg
    WRITE: 86 MiB/sec 91 MiB/sec 92 MiB/sec = 90 MiB/sec avg
    raidtest.read: 108 107 93 = 102 IOps ( ~6732 KiB/sec )
    raidtest.write: 108 107 91 = 102 IOps ( ~6732 KiB/sec )
    raidtest.mixed: 82 81 67 = 76 IOps ( ~5016 KiB/sec )

    Now testing RAID0 configuration with 2 disks: cWmRzmId@cWmRzmId@cWmRzmId@
    READ: 204 MiB/sec 204 MiB/sec 205 MiB/sec = 204 MiB/sec avg
    WRITE: 183 MiB/sec 187 MiB/sec 189 MiB/sec = 186 MiB/sec avg
    raidtest.read: 83 99 99 = 93 IOps ( ~6138 KiB/sec )
    raidtest.write: 84 100 102 = 95 IOps ( ~6270 KiB/sec )
    raidtest.mixed: 78 85 87 = 83 IOps ( ~5478 KiB/sec )

    Now testing RAID0 configuration with 3 disks: cWmRzmId@cWmRzmId@cWmRzmId@
    READ: 289 MiB/sec 283 MiB/sec 285 MiB/sec = 285 MiB/sec avg
    WRITE: 278 MiB/sec 295 MiB/sec 275 MiB/sec = 283 MiB/sec avg
    raidtest.read: 121 123 99 = 114 IOps ( ~7524 KiB/sec )
    raidtest.write: 128 127 101 = 118 IOps ( ~7788 KiB/sec )
    raidtest.mixed: 91 90 88 = 89 IOps ( ~5874 KiB/sec )

    Now testing RAIDZ configuration with 2 disks: cWmRzmId@cWmRzmId@cWmRzmId@
    READ: 106 MiB/sec 106 MiB/sec 109 MiB/sec = 107 MiB/sec avg
    WRITE: 83 MiB/sec 97 MiB/sec 80 MiB/sec = 87 MiB/sec avg
    raidtest.read: 96 85 87 = 89 IOps ( ~5874 KiB/sec )
    raidtest.write: 93 86 87 = 88 IOps ( ~5808 KiB/sec )
    raidtest.mixed: 82 78 78 = 79 IOps ( ~5214 KiB/sec )

    Now testing RAIDZ configuration with 3 disks: cWmRzmId@cWmRzmId@cWmRzmId@
    READ: 197 MiB/sec 197 MiB/sec 199 MiB/sec = 198 MiB/sec avg
    WRITE: 191 MiB/sec 190 MiB/sec 193 MiB/sec = 191 MiB/sec avg
    raidtest.read: 88 100 97 = 95 IOps ( ~6270 KiB/sec )
    raidtest.write: 91 99 98 = 96 IOps ( ~6336 KiB/sec )
    raidtest.mixed: 72 71 67 = 70 IOps ( ~4620 KiB/sec )

    Now testing RAIDZ2 configuration with 3 disks: cWmRzmId@cWmRzmId@cWmRzmId@
    READ: 102 MiB/sec 106 MiB/sec 104 MiB/sec = 104 MiB/sec avg
    WRITE: 77 MiB/sec 86 MiB/sec 97 MiB/sec = 87 MiB/sec avg
    raidtest.read: 117 97 107 = 107 IOps ( ~7062 KiB/sec )
    raidtest.write: 108 95 103 = 102 IOps ( ~6732 KiB/sec )
    raidtest.mixed: 90 84 89 = 87 IOps ( ~5742 KiB/sec )

    Now testing RAID1 configuration with 2 disks: cWmRzmId@cWmRzmId@cWmRzmId@
    READ: 178 MiB/sec 184 MiB/sec 180 MiB/sec = 181 MiB/sec avg
    WRITE: 88 MiB/sec 91 MiB/sec 87 MiB/sec = 89 MiB/sec avg
    raidtest.read: 121 85 107 = 104 IOps ( ~6864 KiB/sec )
    raidtest.write: 114 82 105 = 100 IOps ( ~6600 KiB/sec )
    raidtest.mixed: 84 64 84 = 77 IOps ( ~5082 KiB/sec )

    Now testing RAID1 configuration with 3 disks: cWmRzmId@cWmRzmId@cWmRzmId@
    READ: 247 MiB/sec 241 MiB/sec 249 MiB/sec = 246 MiB/sec avg
    WRITE: 87 MiB/sec 91 MiB/sec 89 MiB/sec = 89 MiB/sec avg
    raidtest.read: 104 110 114 = 109 IOps ( ~7194 KiB/sec )
    raidtest.write: 98 106 111 = 105 IOps ( ~6930 KiB/sec )
    raidtest.mixed: 82 65 84 = 77 IOps ( ~5082 KiB/sec )

    Done
    rob
    User

    5 posts

    Posted on 22 March 2013 @ 11:23
Also, I'm looking for some good advice on a ZIL/SLOG device to increase IOPS. I've been reading a lot, but cannot find any review comparing modern SSDs on sustained write IOps.

I know we should buy two SSDs and put them in a mirror. But since a few GB should be enough, would an Intel DC S3700 100GB (2.5") be overkill? What do you think of it as a ZIL drive?
    DigitalDaz
    User

    5 posts

    Posted on 26 March 2013 @ 17:34
    Hi Rob,

Looking at that setup, it's never going to be quick. Remember that in any sort of RAIDZ configuration your maximum random IOPS is only going to be that of a single drive, as long as all the disks are in one vdev.

If anything, squeeze another drive in and build three striped mirrors. Obviously you will lose capacity, but at least then you will get the combined IOPS of three vdevs.

If you are using NFS, a SLOG should make a world of difference. My understanding is that with pool version 28 you should not need to mirror it, and a 100GB DC S3700 will be massive overkill. Any decent-quality 32GB drive should be plenty.
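
For illustration (the GPT labels below are placeholders, not your actual ones), a six-disk pool of three striped mirrors with a dedicated log device would be created roughly like this:

    zpool create tank \
      mirror gpt/hd0 gpt/hd1 \
      mirror gpt/hd2 gpt/hd3 \
      mirror gpt/hd4 gpt/hd5 \
      log gpt/slog0
    zpool status tank

Each additional mirror vdev adds roughly another drive's worth of random IOPS, which is where the gain over a single RAIDZ vdev comes from.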

    I'm just about to build some similar storage for a couple of ESXi hypervisors.

    I'm initially going with 8 x Seagate Barracuda with a 32GB SLOG and a 256GB Samsung 840 Pro as L2ARC.

I'm using 4Gb Fibre Channel from the hypervisors to the shared storage.
    rob
    User

    5 posts

    Posted on 28 March 2013 @ 02:32
Hi Daz,


    Thanks for your explanation.
I really appreciate you sharing this.

    DigitalDaz wrote:
    If anything, squeeze youself another drive in and get three striped mirrors. Obviously you will lose capacity but at leased then you will get the combined IOPS across 3 vdevs.
I'm indeed thinking of going RAID10, but is that necessary when using a SLOG? From my perspective, a SLOG practically offloads the write IOPS...

    DigitalDaz wrote:
    My understanding is that with V28 you should not need to mirror these and a 100GB DC3700 will be massive overkill.
    No mirror? Really? I haven't seen any information about this. Can someone confirm this?

    DigitalDaz wrote:
    I'm just about to build some similar storage for a couple of ESXi hypervisors.
    I'm initially going with 8 x Seagate Barracuda with a 32GB SLOG and a 256GB Samsung 840 Pro as L2ARC.
Could you tell me which SLOG you're thinking of? And does it have a supercapacitor?

    Thanks!
    DigitalDaz
    User

    5 posts

    Posted on 28 March 2013 @ 06:38
I haven't found my SLOG yet, but I'll be looking at something with the spec of an Intel 520. My understanding of the 520s is that they have zero cache, so you do not need a capacitor. So I'm looking for something small, SATA 3 and with zero cache.
    DigitalDaz
    User

    5 posts

    Posted on 28 March 2013 @ 06:41
I've just found that there IS a 60GB Intel 520, so I will probably use one of those. I have a 480GB one already and they are very fast.
    rob
    User

    5 posts

    Posted on 28 March 2013 @ 08:51
Nice! We were looking at an Intel 320, but chose a 60GB 520 just a few minutes ago.
We already have a new 840 Pro lying around, so we will probably be testing somewhere next week.

So our test setup will be:
1 * VMware ESXi (spare server, specs don't really matter, but with 2 * 1Gbit)
1 * ZFSguru (6 * 1TB SATA in RAID10, Samsung 840 Pro 256GB for L2ARC (and boot), Intel 520 60GB for ZIL, 1Gbit (as ZFSguru does not yet support LACP))
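
For illustration only (the pool name and GPT labels below are hypothetical placeholders, not our actual ones), attaching the 520 as a dedicated log device and the 840 Pro as L2ARC to an existing pool would look roughly like this:

    zpool add tank log gpt/intel520       # dedicated ZIL (SLOG)
    zpool add tank cache gpt/samsung840   # L2ARC read cache
    zpool status tank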

I will test with the ZFSguru advanced benchmark and VMware I/O Analyzer (http://labs.vmware.com/flings/io-analyzer).

We are using this test setup to come up with a replacement for some storage in our datacenter. We currently have two NFS boxes with 12 * 1TB SATA disks, but I'm not happy with the performance and I'm looking to replace/customize them. Because of budget, we will be reusing those disks.
    aaront
    User

    75 posts

    Posted on 19 April 2013 @ 19:50
    Hey Rob,
I'm running a very similar production setup and would be happy to answer any config questions. My build is basically:
8 x 3TB SATA drives as one pool of striped mirrors (RAID 10)
2 x whatever SATA drives as a RAID 1 root-on-ZFS
Samsung 830 256GB L2ARC
Intel 320 80GB SLOG
24GB RAM
quad-core first-gen i7
LSI 9211 (the pool hangs off this; the rest go to the mainboard SATA)

This is all shared out over NFS for ESXi 5.1, plus some Samba shares that are used directly as well as by the virtual machines. I have a rule of keeping my VMs slim and using Samba shares as their storage for things like FTP servers. That lets my automatic snapshots back up the data much more usefully: if the VM were a 200GB file and I needed something out of it, snapshots wouldn't really help much.
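
As a rough example of why that helps (the dataset and file names here are made up), pulling a single file back out of an automatic snapshot of a Samba-shared dataset is trivial, which is not the case when the file is buried inside a monolithic VM disk image:

    zfs list -t snapshot -r tank/shares/ftp
    cp /tank/shares/ftp/.zfs/snapshot/auto-daily-2013-04-18/vsftpd.conf /tank/shares/ftp/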

The main gotcha is that I can't use all of my L2ARC because I don't have enough RAM; every block cached in L2ARC needs a header in RAM, so a large L2ARC eats into the ARC itself. With 24GB of RAM I'd suggest a 128GB L2ARC SSD. The Samsungs are great, and I would definitely buy an 840 Pro if I were doing this now. If you have more RAM, go for the 256GB.
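
As a back-of-the-envelope illustration (the per-record header size is a rough, version-dependent figure, not something measured on this box): every L2ARC record needs on the order of 200 bytes of header in RAM, so with small VM-style 8 KiB blocks a 256GB L2ARC works out to roughly 256 GiB / 8 KiB = ~32 million records * ~200 B = ~6-7 GB of RAM for headers alone, which is why a 128GB device is a better match for 24GB of RAM.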

I don't think the 520 series has the capacitor like the 320 does, but I could be incorrect. I went with the 80GB 320 because it has about twice the write performance of the 40GB, not for the size. My next build will use the Intel S3700. It costs more, but the performance is more than worth it. In fact, I'm planning on using them for the L2ARC as well, since I realized I don't need as much L2ARC as I thought I did.

Do not, not, not use RAIDZ as VM storage. The slowdown is definitely noticeable: one busy VM can drag down the whole pool, since a RAIDZ vdev performs like a single spindle for random I/O. And don't even get me started on replacing a drive; the whole pool slows to a crawl. With RAID 10, replacing a drive is quick and the whole thing performs much better. I will never use anything but RAID 10 from now on.
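
For reference (the pool name and labels are placeholders), swapping out a failed disk in a striped-mirror pool is a single replace, and only that one mirror vdev has to resilver rather than the whole pool:

    zpool replace tank gpt/hd2 gpt/hd6
    zpool status tank    # watch the resilver progress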

Another gotcha: the whole thing has to be either glabel or GPT labels; you CANNOT mix these or everything gets wacky. I chose glabel and am really happy with it, although I never really got partitioning to work right. I manually partitioned the disks, then labelled the slices and got it working, but the performance went down. I ended up just using glabel on whole disks and giving up two slots for my mirrored boot pool. It's a waste of two slots, but everything works well and fast.
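
For comparison (the disk device names are placeholders), the two labelling styles look roughly like this on FreeBSD; the point is to pick one and use it for every disk in the pool:

    # GPT label: partition the disk and label the partition, giving /dev/gpt/hd5
    gpart create -s gpt ada5
    gpart add -t freebsd-zfs -l hd5 ada5

    # glabel on the whole disk instead, giving /dev/label/disk5
    glabel label disk5 /dev/ada5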

    Anyway, if you have any questions my experience can help with, let me know.