Pantagruel
User

45 posts

Posted on 26 January 2013 @ 16:19
The guys at zfsbuild.com have benchmarked ZFSguru against FreeNAS and Nexenta.

http://www.zfsbuild.com/2013/01/25/zfsbuild2012-nexenta-vs-freenas-vs-zfsguru/

Sadly, ZFSguru seems to be the least optimal choice when it comes to ZFS IOPS :(

On the upside, they do mention the potential of tweaking ZFSguru to get it to where FreeNAS is performance-wise.
zxed
User

7 posts

Posted on 27 January 2013 @ 00:54
Wow, how did they come up with those numbers?

I haven't done any measured/empirical tests myself. However, on the same generic hardware I have now, Nexenta was in my experience by far the slowest implementation of ZFS I could lay my hands on (not to mention limited to a meager few TB).
Pure Solaris was better, but ZFSGuru was/is the easiest and fastest solution I tried, especially when reading or writing annoyingly large 20GB files over a network :)
DVD_Chef
User

128 posts

Posted on 29 January 2013 @ 14:56 (edited 16:24)
Their tests dealt exclusively with block access over iSCSI, which I would guess is not how most of us here use ZFSguru. I myself only use it as file storage accessed over SMB or NFS, not iSCSI.

Is anyone currently using iSCSI that can confirm these poor test results?

EDIT--

I created an 80GB volume and shared it out over iSCSI for testing. I attached it to a Windows 7 Ultimate box over gigabit ethernet and installed Iometer with their settings, modifying them to point to the proper "disk". I have no experience with Iometer, so I am not yet sure whether I am using it properly, but it seems to be running. My setup is a Sun X4500 (Thumper) with 16GB RAM and a RAIDZ2 pool of three vdevs: one with six 2TB HGST CoolSpin drives and the other two each with six 3TB HGST CoolSpin drives. There is no RAID10 or other form of mirroring to boost performance and no SSD cache/L2ARC in use, so it is not a match for their setup.
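For anyone wanting to repeat this, the volume itself is just a zvol created roughly along these lines (the pool and dataset names here are only placeholders), with the iSCSI export configured on top of it:

zfs create -V 80G tank/iometer-test              # 80GB block device at /dev/zvol/tank/iometer-test
zfs get volsize,volblocksize tank/iometer-test   # sanity-check the new volume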

The first couple of tests are "4K; 100% read; 0% write" and are averaging 10k IOPS. That is 4x better than the 2.5k their charts show at the first few queue depths. I had to stop the tests, as the machine is in production at the moment. I will start them again when the machine is not in use, as I am curious to compare things.
aaront
User

75 posts

Posted on 12 February 2013 @ 19:03
I didn't bother with iSCSI yet; I use NFS for all my VMs and adding iSCSI is a pita. So I'm testing with my modest production box (after hours, but it's always doing something, since my DC and other stuff are virtualized and run off it).
I have 4 mirrored pairs of two disks each (Seagate 3TB consumer), an Intel 320 80GB SLOG, a Samsung 830 256GB L2ARC and 24GB RAM (all on a consumer ASUS motherboard, with an LSI 9211 HBA for the pool and onboard SATA for SLOG/L2ARC/boot).

I have a Win7 x64 VM that hosts my Acronis PXE server and mostly does nothing, so I put Iometer on it, attached another disk (NFS on my normal pool, gzip-6 no less), and we shall see. However, I do have hourly zfs replication scripts that take snapshots and send them to another box, so that will definitely cause some weird dips.
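(Those replication jobs are nothing fancy, just an hourly snapshot plus an incremental send, roughly along these lines; the dataset, snapshot and host names are made up:)

zfs snapshot tank/vm@hourly-2013-02-12_22h
zfs send -i tank/vm@hourly-2013-02-12_21h tank/vm@hourly-2013-02-12_22h | ssh backupbox zfs receive -F backup/vm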

The first test (looks like 4k sequential) sits around 12k, easily 4x as fast as they showed... I'll let this run and post results in the morning.
aaront
User

75 posts

Posted on 13 February 2013 @ 10:28
Morning is here! I'll summarize the results; if anyone wants the full dump let me know, as there are no attachments here:
4K; 100% Read; 0% random 10.6k - 13.1k
4k random write 1.1k - 1.2k
4k random 67%read33%write .7k - 2.1k (drops with higher queue depth)
4k random read .6k - 1.3k (only drops at highest queue)
8k random read .6k - 6.8k (started low then went higher; something else must have been running from two tests ago that kept these tests low for a few minutes, it's the only explanation I can think of)
8k sequential read 6.3k - 6.9k
8k random write .9k - 1.4k
8k random 67%write33%read 1.9k - 2.2k
16k random 67%write33%read (1) 1.2k - 1.7k
16k random write 1.13k - 1.17k
16k sequential read 3.4k - 3.7k
16k random read 3.5k - 3.7k
32k random 67%write33%read .7k - .9k
32k random read 1.6k - 1.9k
32k sequential read 1.7k - 1.9k
32k random write .4k - .7k

So basically my write performance sucks and my read is pretty good. If you toss out a few weird numbers, probably caused by other things using this system since it's in production, everything beat the zfsbuild.com numbers for ZFSguru. And this is on a MUCH less powerful box, while it's being used for other things (NFS for VMware and Samba mostly), over NFS!

My only tuning (on a 24GB RAM system) is:
arc_min - 16g
arc_max - 22g
arc_meta_limit - 12g (I have some Samba shares with a ton of files; even this gets full)
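For reference, on a FreeBSD-based system like ZFSguru those limits correspond to the usual loader tunables; in /boot/loader.conf they would look roughly like this (byte values converted from the figures above):

vfs.zfs.arc_min="17179869184"          # 16GB
vfs.zfs.arc_max="23622320128"          # 22GB
vfs.zfs.arc_meta_limit="12884901888"   # 12GB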

I can't remember how bad the default values were, but I really can't understand how their test numbers for zfsguru could be so bad. I have full confidence in this distro and will continue using it as long as I'm doing zfs.
DVD_Chef
User

128 posts

Posted on 14 February 2013 @ 13:07
Thanks for the post, aaront! It is good to see another system showing the same trends I was seeing. I would guess this pool is a RAID10 type of setup? Do you have any cache or SLOG SSDs in use?

The Iometer tests will be a good way to break in the new server I am deploying. It is a different config than the one used above, so it will add another data point to this discussion.
aaront
User

75 posts

Posted on 20 May 2013 @ 14:28
RAID10 (4 mirrored pairs of Seagate 3TB drives), an Intel 320 80GB ZIL and a Samsung 830 256GB L2ARC, all glabel'd.
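In zpool terms the layout is roughly this; the pool name and label names below are just placeholders for my actual glabel labels:

zpool create tank \
  mirror label/d0 label/d1 \
  mirror label/d2 label/d3 \
  mirror label/d4 label/d5 \
  mirror label/d6 label/d7 \
  log label/slog \
  cache label/l2arc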
aaront
User

75 posts

Posted on 7 February 2014 @ 15:05
dvd_chef, check out my thread over here:
http://zfsguru.com/forum/buildingyourownzfsserver/652

I got the new hardware for my next build and will be posting all my notes and scripts as I put it together better than ever. My current box will become my backup target, and my current backup box will finally get decommissioned or maybe become an offsite backup.