No1451
User

54 posts

Posted on 15 February 2011 @ 17:36
I am once again seriously considering taking the leap. I will be trying ZFSGuru in ESXi for a few weeks to see if it meets my needs and expectations for performance, but I don't really know what to do for pooling.

I have:
4x750GB
4x1.5TB
3x2TB

Many of the drives are advanced-format 4K-sector drives, if that makes any difference. Please advise!
Bob
User

8 posts

Posted on 15 February 2011 @ 18:28
You generally want to pool matching drives together. I am assuming that each of your groups (4x750GB, 4x1.5TB, 3x2TB) consists of the exact same drive model.

Generally you do not want to mix, say, 2 WD Blacks and 2 WD Greens if those make up your four 750GB drives.

Also, when creating your pools you want to build each RAID-Z from either 3 or 5 drives, not 4. Using 4 drives in a RAID-Z will slow the pool down.

With 4K-sector drives you want to use the override on the Disks page when you're formatting your drives and when you're creating your pool; it will be faster. The problem is that you can't use Root-on-ZFS yet with the 4K override. (Jason, correct me if I am wrong.)

Basically here is what I would personally do for the pools assuming you have matching drives:

3x750GB - Raidz - PoolA
3x1.5TB - Raidz - PoolB
3x2TB - Raidz - PoolC

Then, if you wanted to, you could use one of your spare drives to create a boot pool for the system (or, if the 750s are non-4K drives, install the root on that pool).
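For reference, roughly what that layout amounts to at the command line (ZFSguru's Pools page does this for you; the da0-da8 device names are just placeholders):

    # three separate pools, each a single 3-disk RAID-Z vdev
    zpool create PoolA raidz da0 da1 da2    # 3x750GB -> ~1.5TB usable
    zpool create PoolB raidz da3 da4 da5    # 3x1.5TB -> ~3TB usable
    zpool create PoolC raidz da6 da7 da8    # 3x2TB   -> ~4TB usable

Each RAID-Z gives you two drives' worth of data plus one of parity, so the three pools add up to roughly 8.5TB of usable space.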

No1451
User

54 posts

Posted on 15 February 2011 @ 18:41
Thanks for the super quick reply! Most of the drives are identical; one of the 1.5TBs is a different breed than the rest (but still a green drive with similar specs). Same goes for the 2TB drives: 2x WD Green and 1x Seagate green. Hopefully these are similar enough that it won't cause issues (a tiny hit to performance is not a big concern for me). Am I right in assuming that with the pooling you outlined there, my final logical space is 8.5TB?

Is there any big issue with using the 4K override (currently, using 4K drives in WHS results in absolutely terrible performance, in the realm of 18MB/s reads)?

Now a tricky question that will likely have many scratching their heads: what is the performance loss if I were to share this via iSCSI to another VM on my machine, and then use that machine to manage my interface with the pool? I am really uncomfortable with BSD and I don't have the patience or the knowledge to try to find suitable replacements for some of the software I use regularly. If I'm just crazy and this setup is stupid, please let me know :)
Bob
User

8 posts

Posted on 15 February 2011 @ 19:33
Yes, the total pool size should be 8.5TB. The best way to check whether it's better to use the 4K override is honestly to test it using Jason's benchmarking tool. Test each pool individually with the benchmarking tool, once using the default setting and once using the 4K override. (IT ERASES ALL YOUR DATA WHEN YOU RUN THE BENCHMARK, so don't do it if there is data on the pool.) This should give you an idea of where your pools stand and whether there is a major slowdown from using different drives.
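(If you want a rough sanity check outside the web interface too, a simple sequential dd run on a mounted pool gives ballpark numbers without destroying anything; this assumes compression is off, which is the default, and that the test file is bigger than your RAM so caching doesn't skew the read:)

    # very rough sequential write, then read, on PoolA (path is a placeholder)
    dd if=/dev/zero of=/PoolA/testfile bs=1M count=20000
    dd if=/PoolA/testfile of=/dev/null bs=1M
    rm /PoolA/testfile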

My question is: what hardware are you planning to run this on? ZFS eats memory and is CPU-intensive, and with three pools you probably want 8GB of RAM minimum.

I am not familiar with iSCSI at all so I can't really answer those questions.

Jason
Developer

806 posts

Posted on 15 February 2011 @ 19:41 (edited 19:42)
Excellent advice offered here! Let me comment:
  • You can also create the 4x750GB, 4x1.5TB and 3x2TB groups as RAID-Z vdevs in one pool, so the disk count per vdev does not have to be the same (see the sketch after this list).
  • Performance for 'optimal' configurations (3, 5 or 9 disks in RAID-Z, 6 or 10 in RAID-Z2) should be good without the sectorsize override, and may in fact be faster without it.
  • Use the Benchmark feature to test your performance and stability. You should perform a Root-on-ZFS installation before trying benchmarking, though.
  • Root-on-ZFS is not possible with the 4K sectorsize override, and also not on expanded pools such as one made of three RAID-Zs. Consider using a different device for Root-on-ZFS; for the time being, that could be a 1GB+ USB stick.
  • I don't know ESXi, but you really have to make sure it honors the flush commands ZFS sends, or your data integrity may be in jeopardy. Having the controller passed through to FreeBSD, or having raw disks passed to FreeBSD, would be optimal.
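A minimal sketch of that first point, one pool striped across three RAID-Z vdevs (device names are placeholders; ZFSguru's Pools page can build this for you):

    # one pool, three RAID-Z vdevs of different widths; writes are striped across all three
    zpool create tank raidz da0 da1 da2 da3 raidz da4 da5 da6 da7 raidz da8 da9 da10
    # note: a vdev cannot be removed from the pool later, and (as mentioned above)
    # Root-on-ZFS is not supported on a pool like this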

Please let me know about performance! I'm eager to see what performance penalty an ESXi system would have. If you want to test properly, you could perform the same benchmarking on native ZFSguru/FreeBSD, i.e. without ESXi. All you would need to do is boot your system from one of the devices that holds your Root-on-ZFS installation.

Good luck!
No1451
User

54 posts

Posted on 15 February 2011 @ 20:15
My hardware will be:
Mobo - Supermicro X8SIL-F
CPU - Xeon 3430
RAM - 16GB
HBA - 2xIntel SASUC8i passed through to the guest

I'm not sure if ESXi honors flush commands; so far my googling has not turned up any solid answers on this. So what you're saying is that if my pool is 3/5/9 drives, I don't need to use the sector override for 4K drives?
Jason
Developer

806 posts

Posted on 15 February 2011 @ 21:05
Indeed you may not have to use the override in those cases. Benchmarking will give you a clear picture.

If you do a passthrough of the controllers themselves, then you should have no problems with the flush commands. FreeBSD will directly control those disks, which is good. Make sure to have the Intel cards running in IT mode!
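As far as I understand it, the sectorsize override is the same idea as the well-known FreeBSD gnop trick for forcing 4K alignment (ashift=12); a manual sketch with placeholder device names, purely to show what happens under the hood (that this is how ZFSguru implements it is my assumption):

    # create temporary providers that report 4K sectors, and build the pool on them
    gnop create -S 4096 da0 da1 da2
    zpool create tank raidz da0.nop da1.nop da2.nop
    # the pool keeps its 4K alignment even after the .nop devices are gone
    zpool export tank
    gnop destroy da0.nop da1.nop da2.nop
    zpool import tank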
No1451
User

54 posts

Posted on 9 March 2011 @ 22:29
So the hardware is in, my hypervisor is loaded, and I've started populating it with VMs. As it stands I gave it a small (10GB) virtual disk on ESXi to act as a boot device, using the Root-on-ZFS method. Next up is figuring out iSCSI: how exactly do I go about creating an iSCSI target, and also, since I'm very new to this, what exactly are the caveats of iSCSI? If I connect from one machine and write, then disconnect and connect from another, it is just like a standard block device, correct?

Also, I want to make sure it can expand without issue; this is a big deal for me. Any advice appreciated :)
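(For the ZFS side of an iSCSI target: it is normally backed by a zvol, a block device carved out of the pool, which is then exported by an iSCSI target daemon such as istgt on FreeBSD 8.x. A rough sketch with placeholder names, assuming a pool called tank:)

    # create a 500GB zvol to serve as the iSCSI LUN
    zfs create -V 500G tank/whs-disk
    # it appears as /dev/zvol/tank/whs-disk and can be grown later with:
    zfs set volsize=1T tank/whs-disk

To the initiator it behaves like a plain local block device, so a normal filesystem on it should only be mounted by one machine at a time; connecting, disconnecting and reconnecting from another machine works like moving a physical disk.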
The_Dave
User

221 posts

Posted on 10 March 2011 @ 01:24
Sounds like we are in the same boat, except I can't run ESXi because I bought an i3 for my server a long time ago, so no VT-d for me (for now) :( Please post how ZFS Guru runs in ESXi once you get it set up!

Does having 2 pools for 2 different vdevs vs 2 vdevs in one pool really make a difference?

The_Dave
User

221 posts

Posted on 10 March 2011 @ 08:10
Never mind about my performance questions. I found a GREAT article that discusses it and is pretty easy to read:

http://constantin.glez.de/blog/2010/06/closer-look-zfs-vdevs-and-performance
No1451
User

54 posts

Posted on 10 March 2011 @ 18:51
Interesting read; it answered a few questions I had.

Now something I'm not so sure of (Jason, I hope you read this): you mentioned that 3- or 5-drive RAID-Zs are best for 4K-sector drives. Is that the total drive count (including parity), or the number of drives that actually hold data?

So say, 3 drives:
2xdata
1xparity

Is that the ideal or:
3xdata
1xparity
The_Dave
User

221 posts

Posted on 10 March 2011 @ 21:42 (edited 21:49)
2x data and 1x parity, or 4x data and one parity (or two parity, doesn't matter).
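To make that concrete: the rule of thumb comes from dividing the default 128KiB ZFS record across the data disks and checking whether each disk's share is a whole number of 4KiB sectors:

    2 data disks: 128KiB / 2 = 64KiB per disk    -> multiple of 4KiB, aligned
    4 data disks: 128KiB / 4 = 32KiB per disk    -> multiple of 4KiB, aligned
    3 data disks: 128KiB / 3 = ~42.7KiB per disk -> not a multiple of 4KiB, so
                  512-byte-emulating drives fall back to read-modify-write

Hence RAID-Z widths of 3 or 5 (one parity disk) and RAID-Z2 widths of 6 or 10 (two parity disks), as mentioned earlier in the thread.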

So from the sounds of it, there would be better IOPS but the same bandwidth with two separate 3-disk RAID-Z vdevs in one pool compared to, say, a single 6-disk RAID-Z2 vdev?

EDIT: It appears bandwidth would be improved as well? I have 5 WD 2TB drives I was using in a single vdev. Should I create 2 vdevs and add a Hitachi 1TB drive I have here to complete the second vdev, until another WD 2TB arrives (which I can order) to replace the Hitachi?
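(For what it's worth, swapping the temporary 1TB drive out again later is a single command; pool and device names below are placeholders:)

    # replace the temporary Hitachi with the new WD 2TB once it arrives
    zpool replace tank ada5 ada6
    # the vdev resilvers onto the new disk; its capacity stays limited by the
    # smallest member until every disk in that vdev is the larger size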
No1451
User

54 posts

Posted on 11 March 2011 @ 06:34 (edited 06:36)
Up and running with one of my vdevs (4x750GB drives; these are a legacy carryover from my old machine). So far my performance is looking very promising. I suspect it could be a bit better (each drive alone benches at ~80-95MB/s); if this is way too low for the current setup, someone shout out and let me know!

[Image: benchmark results screenshot, no longer available]
Jason
Developer

806 posts

Posted on 12 March 2011 @ 03:34
Those are very good performance numbers, but you have a lot of memory too! I think 8GiB? Looks like even more, judging by the tuning numbers, unless you adjusted them yourself.

Either way it looks good: you don't need the sectorsize override, and how many disks are in a RAID-Z won't be that crucial with normal 512-byte sector disks. That issue is only relevant to newer 4K-sector disks that emulate 512-byte sectors (all current 4K disks do this, like the WD EARS and Samsung F4).

@The_Dave: you can use the Disks->Benchmark feature to find out how these configurations perform for you. You can't test nested RAID-Z though; I believe it tests in units of 5 disks, so it would only test that at 10 disks total. With 5 disks I would make it a single RAID-Z so you get more space (20% redundancy level). 6 disks in RAID-Z2 is better, though. Both are optimal configurations for 4K disks. Note that RAID-Z2 performance is low under 6 disks, even with 512-byte drives, in my tests anyway.
No1451
User

54 posts

Posted on 12 March 2011 @ 04:15
The VM is sitting with 10GB of RAM at the moment; I'm considering bumping that up to 12GB, depending on how I feel when I install WHS 2011.