RobinM
User

18 posts

Posted on 9 March 2015 @ 19:22
Hi there,

Finally it happened: I built my first ZFS(guru) box. It only took me three years to take the step.. ^^

I am thinking about posting about my ZFS adventures on this forum. Is that OK, and where shall I create my topic?

My goal is to use the ZFSguru box purely for storing my media; I will use a different server as my HTPC, download box and Plex media server.

Greetings from a not so ZFS virgin anymore ;)

RobinM
User

18 posts

Posted on 11 March 2015 @ 19:33
Not many responses yet ^^ I will post some benchmark results soon from some cheap notebook drives I could gather.
CiPHER
Developer

1199 posts

Posted on 11 March 2015 @ 19:37
Hey!

Guess I missed your post. But congrats on your first ZFS box!

You are very welcome to write about your experience; I'd love to hear about it. Especially things that can be improved in ZFSguru!
DVD_Chef
User

128 posts

Posted on 11 March 2015 @ 20:35
Congrats on making the first step, but watch out!
I started out with a small HP MicroServer to replace my original Drobo, and now the latest is a 15-bay Supermicro server stuffed with 2TB drives humming along in the basement. ;)

Amazing how quickly things expand to fit the available storage.
RobinM
User

18 posts

Posted on 11 March 2015 @ 22:39
Haha, thanks for the warning! Happens all the time: you start with some budget in mind, and after a while the perfect setup always ends way above budget ;)

For now I'll post some benchmark results; maybe you can share your thoughts about the performance. For production I want to start with a RAID-Z config of 3 or 5 disks, and later expand with another RAID-Z config of 3 or 5 disks.

Is it also possible to replace, for example, a 3 x 4TB disk RAID-Z pool with 3 x 6TB? So swapping the disks one by one, and once they are all replaced the pool grows?
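(For reference, the one-by-one swap described here is roughly what ZFS's autoexpand + replace workflow does. A minimal sketch; the pool name "media" and the device names are made up for the example:)

```shell
# Let the pool grow automatically once every member disk is larger:
zpool set autoexpand=on media

# Replace one disk at a time; wait for each resilver to finish
# before pulling the next old drive:
zpool replace media ada1 ada4
zpool status media        # watch until the resilver has completed
# ...repeat for the remaining disks...

# After the last replacement, the extra capacity should show up in:
zpool list media
```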

In my current media server I have a ServeRAID M1115 with IT firmware, but I cannot use it for testing because it's in use. Meaning I will probably buy another one next month ^^ Having two of them gives me more than enough expansion room.

Specs of ZFS box are:

CPU: i5 2300
RAM: 16GB
Motherboard: Asus Maximus Gene V - not the most energy efficient solution but hey, it looks great ;)

ZFSguru is rooted on a Samsung 830 SSD. After finishing the wizard I thought: I should have partitioned the SSD first, right? Because now it's a 120GB partition just for ZFSguru.

Drives used for testing (they are connected to the 6Gb/s ports of the motherboard):

2 x Hitachi Z5K500
1 x HGST Z5K500
1 x WD Blue WD5000LPVX (used for tests with 4 drives)

About the benchmark: to me the reads of the 3-disk RAID-Z tests look kinda low? I ran the benchmark a few times with similar results.

Also, what I noticed when creating a pool: even when sector size override is set to 'no sector override', the pool still got ashift 12 and not ashift 9. Am I missing something here?
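(One way to double-check what ashift a pool actually got, run as root on the box; "mediatest" is the test pool from the results below:)

```shell
# Print the cached pool configuration and pull out the ashift value.
zdb -C mediatest | grep ashift
# "ashift: 12" means the vdev is aligned for 4K sectors;
# "ashift: 9" would mean 512-byte sectors.
```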

RAIDZ 3 x 500GB notebook disks

ZFSguru 0.2.0-beta9 (10.1-001) pool benchmark
Pool : media (1.36T, 0% full)
Test size : 64 GiB
normal read : 109 MB/s
normal write : 128 MB/s
I/O bandwidth : 29 GB/s

RAIDZ 4 x 500GB notebook disks

ZFSguru 0.2.0-beta9 (10.1-001) pool benchmark
Pool : mediatest (1.81T, 0% full)
Test size : 64 GiB
normal read : 251 MB/s
normal write : 194 MB/s
I/O bandwidth : 28 GB/s

RAIDZ2 4 x 500GB notebook disks

ZFSguru 0.2.0-beta9 (10.1-001) pool benchmark
Pool : mediatest (1.81T, 0% full)
Test size : 64 GiB
normal read : 178 MB/s
normal write : 123 MB/s
I/O bandwidth : 29 GB/s

RAID0 1 x 500GB Hitachi Z5K500

ZFSguru 0.2.0-beta9 (10.1-001) pool benchmark
Pool : mediatest (464G, 0% full)
Test size : 64 GiB
normal read : 93 MB/s
normal write : 50 MB/s
I/O bandwidth : 28 GB/s


RAID0 2 x 500GB Hitachi Z5K500

ZFSguru 0.2.0-beta9 (10.1-001) pool benchmark
Pool : mediatest (928G, 0% full)
Test size : 64 GiB
normal read : 170 MB/s
normal write : 125 MB/s
I/O bandwidth : 29 GB/s

RAID0 4 x 500GB notebook disks

ZFSguru 0.2.0-beta9 (10.1-001) pool benchmark
Pool : mediatest (1.81T, 0% full)
Test size : 64 GiB
normal read : 304 MB/s
normal write : 316 MB/s
I/O bandwidth : 28 GB/s

RAID1 2 x 500GB Hitachi Z5K500

ZFSguru 0.2.0-beta9 (10.1-001) pool benchmark
Pool : mediatest (464G, 0% full)
Test size : 64 GiB
normal read : 112 MB/s
normal write : 47 MB/s
I/O bandwidth : 29 GB/s


RAID10 4 x 500GB notebook disks

ZFSguru 0.2.0-beta9 (10.1-001) pool benchmark
Pool : mediatest (928G, 0% full)
Test size : 64 GiB
normal read : 217 MB/s
normal write : 118 MB/s
I/O bandwidth : 28 GB/s
RobinM
User

18 posts

Posted on 15 March 2015 @ 22:01
The last days I have been doing some testing with Samba shares; when reading big files from the ZFS box I only get around 50MB/s. This should be more like 90MB/s, right? Running a CrystalDiskMark benchmark against the network drive gives similar read speeds. Write speed seems fine.
aaront
User

75 posts

Posted on 20 March 2015 @ 01:09
1. Samba has its own stuff to optimize, so don't worry about that until later. This site has some good stuff, but be careful: change one thing at a time, and test after each change.
https://calomel.org/freebsd_network_tuning.html

2. Those numbers look alright; those are 5400rpm drives, not gonna break any speed records.

If you are going to do striping, don't touch raidz; do raidz2 in one of the recommended drive counts. I forget what they are, but you can look it up. The chance of a second drive failing during the rebuild of one failed drive is ridiculously high. Personally I like raid10 (stripe of mirrored pairs). You get 1/2*N size and 1/2*N*(1 drive speed) write performance, but 1*N*(1 drive speed) read performance. Replacing failed drives is super fast, and you can even add a 3rd drive to a mirror if you notice one failing. It's also less of a load on the CPU compared to parity stripes.
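(The capacity and speed rules of thumb above can be written out as a tiny helper. Illustrative best-case numbers only; real throughput depends on the drives and workload:)

```python
def raid10_estimates(n_drives, drive_size_tb, drive_speed_mbs):
    """Rough figures for a stripe of 2-way mirrors (raid10):
    usable capacity is N/2, write speed scales with the number of
    pairs, reads can be served by both sides of every mirror."""
    if n_drives % 2 != 0:
        raise ValueError("raid10 needs an even number of drives")
    pairs = n_drives // 2
    usable_tb = pairs * drive_size_tb          # half the raw capacity
    write_mbs = pairs * drive_speed_mbs        # 1/2 * N * single-drive speed
    read_mbs = n_drives * drive_speed_mbs      # up to N * single-drive speed
    return usable_tb, write_mbs, read_mbs

# Example: 4 x 500GB (0.5TB) notebook drives at ~100 MB/s each
print(raid10_estimates(4, 0.5, 100))  # (1.0, 200, 400)
```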
RobinM
User

18 posts

Posted on 22 March 2015 @ 22:36
I have done testing with different computers and different network cards. All give similar read speed results.

What I also noticed is that iSCSI performance has the same problem with read speed. When reading large files the speed falls back to around 55MB/s, the same as what I am getting when testing Samba. So it feels like the problem is not with Samba but something else.

Because I was kind of disappointed, I figured I'd try another ZFS solution to see what kind of results I would get. I installed NAS4Free and imported the ZFS pool. I started with testing Samba, and now I get reads of 100+ MB/s!

How can it be that NAS4Free gives this performance right out of the box and ZFSguru, in my situation, does not?

From NAS4Free I copied the sysctl.conf settings; I will try to apply these to ZFSguru and see if this fixes the problem with read speed.

CiPHER
Developer

1199 posts

Posted on 23 March 2015 @ 02:25
Does NAS4Free use the 'istgt' package, or the new kernel-implementation (ctld) ?

ZFSguru still uses istgt, but i began work on the new ctld kernel-implementation of iSCSI some time ago. It won't make it in the coming release, though.

What you can try is create a Samba-share on a tmpfs mounted directory. Then test whether performance is good. If it is, then ZFS tuning would be the obvious place to look for.

Have you enabled any special memory tuning at step 4 of the installation, for example?
RobinM
User

18 posts

Posted on 24 March 2015 @ 20:28
I found that disabling the kernel tuning for NAS4Free had some impact on performance: reads of 95MB/s instead of 105MB/s. But not the 60 I have with ZFSguru.

Could it be that there is a problem with memory tuning? I tried from the start with aggressive tuning; with or without tuning didn't give any difference in benchmark results. Kinda strange, right?
CiPHER
Developer

1199 posts

Posted on 24 March 2015 @ 20:54
What BSD version does NAS4Free use? It could be that drivers for your hardware have changed.

When I search on your motherboard, it also appears that it has an extra ASMedia SATA controller, using the same colours as the chipset SATA controller. This is very bad, since you should not use these ports when benchmarking or wanting top performance. Using both of these ports simultaneously could impact the performance of the other disks too - especially when using RAID-Z.

Your motherboard has two black SATA-ports and four red SATA-ports, but two of the four red ports are from the ASMedia chip. See the image:

[image: motherboard SATA port layout]

So this is something you can try. But you say you tested with other systems too?

Could you tell me more about with what hardware you actually tested? And also, how did you test; how did you get your numbers?

Generally I advise these steps:
1) Make sure the network bandwidth is good, by benchmarking with iperf.
2) Make sure the disk bandwidth is good; you can use the ZFSguru benchmark for that, or just 'dd' on the command line - it is the same.
3) Now test Samba performance on a tmpfs share. This means it will read/write to RAM and not disk, so pure Samba + network performance is tested.
4) Finally, test Samba performance on the actual pool, with all factors counted in.

The idea is that if 1) is already bad, it is no wonder 4) gets bad scores too. You need to isolate the problem to three components: network, disk, and software (Samba, but also OS tuning settings, etc).
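(The first three steps could look roughly like this on the command line; the file path and mount point here are made up for the example:)

```shell
# 1) network: run an iperf server on the box, then connect from the client
iperf -s

# 2) disk: raw sequential read from the pool, bypassing Samba entirely
dd if=/media/some-big-file of=/dev/null bs=1M

# 3) Samba + network only: share a RAM-backed directory over Samba
mkdir -p /mnt/tmptest
mount -t tmpfs tmpfs /mnt/tmptest
# ...then point a temporary Samba share at /mnt/tmptest and re-run the client test
```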

It is possible you have some local issue affecting a limited number of users. Still, I'm very interested in finding out. It is possible NAS4Free uses some tuning to get better performance. Still, I'm getting 88MB/s read and 115MB/s write without any tuning in my local situation, and that is even with one Realtek chip, which many dislike for various reasons. Generally Intel is the best, but there have been plenty of Intel network bugs and performance issues on FreeBSD in the past, so it is no guarantee. But you stated you already tried several network cards.

So if you're willing to give this more time to investigate, I'm here to support you!
RobinM
User

18 posts

Posted on 24 March 2015 @ 21:39 (edited 22:22)
Base OS of NAS4Free 9.3.0.2 = FreeBSD 9.3

Testing with 4 drives in ZFS RAID0.

ZFSguru 0.2.0-beta9 (10.1-001) pool benchmark
Pool : mediatest (1.81T, 0% full)
Test size : 64 GiB
normal read : 304 MB/s
normal write : 316 MB/s
I/O bandwidth : 28 GB/s

Created samba share, mount drive on windows, using crystaldiskmark for benching the network drive.

With testing on other system i meant i ran the crystaldiskmark on different computers to see if it was maybe only with specific client.

Going to follow your steps soon and let you know the results!

RobinM
User

18 posts

Posted on 25 March 2015 @ 00:03
I downloaded JPerf, trying to start but this is the result i get:

bin/iperf.exe -c 192.168.1.13 -P 1 -i 1 -p 5001 -f k -t 10
connect failed: Connection refused
Done.

How can I get iperf to work? Does the server need to be started or something on ZFSguru? Please provide me some help :)

CiPHER
Developer

1199 posts

Posted on 25 March 2015 @ 13:28
Start iperf on the server (as root, I think):

iperf -s

Then on your (Windows) client you use jperf or similar to connect to the server. Try to use standard settings; otherwise you risk testing an optimization that the real application doesn't use. In particular, the TCP receive window size is important: Windows often chooses a value that is too low, causing the gigabit connection to lose performance potential.
RobinM
User

18 posts

Posted on 25 March 2015 @ 22:02 (edited 22:10)
Needed to use SSH to start iperf - first time using SSH ever ;)

Here are the results. From your steps, 1) is already bad.

I have tested with Realtek and Intel network cards on my clients, but didn't try a non-Intel card in the ZFS box. So this is something I can still try. But if the Intel card works fine with NAS4Free, it should also be able to work fine with ZFSguru, right? :)

bin/iperf.exe -c 192.168.1.13 -P 1 -i 1 -p 5001 -f m -t 60
------------------------------------------------------------
Client connecting to 192.168.1.13, TCP port 5001
TCP window size: 0.01 MByte (default)
------------------------------------------------------------
[156] local 192.168.1.11 port 58051 connected with 192.168.1.13 port 5001
[ ID] Interval Transfer Bandwidth
[156] 0.0- 1.0 sec 46.7 MBytes 392 Mbits/sec
[156] 1.0- 2.0 sec 55.8 MBytes 468 Mbits/sec
[156] 2.0- 3.0 sec 57.7 MBytes 484 Mbits/sec
[156] 3.0- 4.0 sec 58.8 MBytes 493 Mbits/sec
[156] 4.0- 5.0 sec 55.9 MBytes 469 Mbits/sec
[156] 5.0- 6.0 sec 54.3 MBytes 455 Mbits/sec
[156] 6.0- 7.0 sec 59.5 MBytes 499 Mbits/sec
[156] 7.0- 8.0 sec 55.6 MBytes 466 Mbits/sec
[156] 8.0- 9.0 sec 56.6 MBytes 475 Mbits/sec
[156] 9.0-10.0 sec 52.1 MBytes 437 Mbits/sec
[156] 10.0-11.0 sec 51.0 MBytes 428 Mbits/sec
[156] 11.0-12.0 sec 58.4 MBytes 490 Mbits/sec
[156] 12.0-13.0 sec 58.8 MBytes 494 Mbits/sec
[156] 13.0-14.0 sec 58.6 MBytes 492 Mbits/sec
[156] 14.0-15.0 sec 58.4 MBytes 490 Mbits/sec
[156] 15.0-16.0 sec 59.9 MBytes 502 Mbits/sec
[156] 16.0-17.0 sec 58.2 MBytes 488 Mbits/sec
[156] 17.0-18.0 sec 58.1 MBytes 487 Mbits/sec
[156] 18.0-19.0 sec 52.7 MBytes 442 Mbits/sec
[156] 19.0-20.0 sec 59.0 MBytes 495 Mbits/sec
[ ID] Interval Transfer Bandwidth
[156] 20.0-21.0 sec 60.0 MBytes 503 Mbits/sec
[156] 21.0-22.0 sec 58.9 MBytes 494 Mbits/sec
[156] 22.0-23.0 sec 42.8 MBytes 359 Mbits/sec
[156] 23.0-24.0 sec 52.9 MBytes 444 Mbits/sec
[156] 24.0-25.0 sec 56.1 MBytes 471 Mbits/sec
[156] 25.0-26.0 sec 58.6 MBytes 492 Mbits/sec
[156] 26.0-27.0 sec 58.9 MBytes 494 Mbits/sec
[156] 27.0-28.0 sec 58.6 MBytes 491 Mbits/sec
[156] 28.0-29.0 sec 59.6 MBytes 500 Mbits/sec
[156] 29.0-30.0 sec 57.2 MBytes 480 Mbits/sec
[156] 30.0-31.0 sec 59.3 MBytes 497 Mbits/sec
[156] 31.0-32.0 sec 59.2 MBytes 497 Mbits/sec
[156] 32.0-33.0 sec 59.0 MBytes 495 Mbits/sec
[156] 33.0-34.0 sec 56.3 MBytes 472 Mbits/sec
[156] 34.0-35.0 sec 40.0 MBytes 336 Mbits/sec
[156] 35.0-36.0 sec 45.4 MBytes 381 Mbits/sec
[156] 36.0-37.0 sec 40.8 MBytes 342 Mbits/sec
[156] 37.0-38.0 sec 59.3 MBytes 497 Mbits/sec
[156] 38.0-39.0 sec 57.5 MBytes 482 Mbits/sec
[156] 39.0-40.0 sec 56.6 MBytes 475 Mbits/sec

Done.
CiPHER
Developer

1199 posts

Posted on 26 March 2015 @ 12:34
OK, I guess I see the problem. As I suspected, the TCP window size is too low to utilise the full potential of gigabit. It is only 0.01MB (8 or 16KiB).

What you can do is launch iperf with a different TCP window size. And if I'm right, you will get proper scores.

For this you use the -w parameter with an argument to specify the window size. You can try 1M (one megabyte). Anyway, try larger values than the 0.01MB.

If you indeed get proper scores after increasing the TCP window size, then this confirms my theory that somehow Windows sets a really low TCP window size. It is possible NAS4Free has some tuning to force Windows to use a larger window size.
RobinM
User

18 posts

Posted on 26 March 2015 @ 14:51
Hey CiPHER,

I tried with a 1MB and a 64KB TCP window size; here are the results:


bin/iperf.exe -c 192.168.1.15 -P 1 -i 1 -p 5001 -w 1.0M -f m -t 20
------------------------------------------------------------
Client connecting to 192.168.1.15, TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[156] local 192.168.1.18 port 59652 connected with 192.168.1.15 port 5001
[ ID] Interval Transfer Bandwidth
[156] 0.0- 1.0 sec 66.6 MBytes 559 Mbits/sec
[156] 1.0- 2.0 sec 110 MBytes 926 Mbits/sec
[156] 2.0- 3.0 sec 105 MBytes 882 Mbits/sec
[156] 3.0- 4.0 sec 108 MBytes 902 Mbits/sec
[156] 4.0- 5.0 sec 107 MBytes 896 Mbits/sec
[156] 5.0- 6.0 sec 107 MBytes 897 Mbits/sec
[156] 6.0- 7.0 sec 106 MBytes 888 Mbits/sec
[156] 7.0- 8.0 sec 107 MBytes 898 Mbits/sec
[156] 8.0- 9.0 sec 105 MBytes 884 Mbits/sec
[156] 9.0-10.0 sec 106 MBytes 886 Mbits/sec
[156] 10.0-11.0 sec 106 MBytes 893 Mbits/sec
[156] 11.0-12.0 sec 104 MBytes 870 Mbits/sec
[156] 12.0-13.0 sec 107 MBytes 898 Mbits/sec
[156] 13.0-14.0 sec 111 MBytes 932 Mbits/sec
[156] 14.0-15.0 sec 108 MBytes 909 Mbits/sec
[156] 15.0-16.0 sec 110 MBytes 924 Mbits/sec
[156] 16.0-17.0 sec 103 MBytes 866 Mbits/sec
[156] 17.0-18.0 sec 111 MBytes 930 Mbits/sec
[156] 18.0-19.0 sec 110 MBytes 920 Mbits/sec
[156] 19.0-20.0 sec 110 MBytes 923 Mbits/sec
[ ID] Interval Transfer Bandwidth
[156] 0.0-20.0 sec 2108 MBytes 884 Mbits/sec
Done.

bin/iperf.exe -c 192.168.1.15 -P 1 -i 1 -p 5001 -w 64.0K -f m -t 20
------------------------------------------------------------
Client connecting to 192.168.1.15, TCP port 5001
TCP window size: 0.06 MByte
------------------------------------------------------------
[156] local 192.168.1.18 port 59551 connected with 192.168.1.15 port 5001
[ ID] Interval Transfer Bandwidth
[156] 0.0- 1.0 sec 80.9 MBytes 678 Mbits/sec
[156] 1.0- 2.0 sec 108 MBytes 909 Mbits/sec
[156] 2.0- 3.0 sec 106 MBytes 887 Mbits/sec
[156] 3.0- 4.0 sec 96.2 MBytes 807 Mbits/sec
[156] 4.0- 5.0 sec 108 MBytes 904 Mbits/sec
[156] 5.0- 6.0 sec 105 MBytes 880 Mbits/sec
[156] 6.0- 7.0 sec 108 MBytes 902 Mbits/sec
[156] 7.0- 8.0 sec 106 MBytes 888 Mbits/sec
[156] 8.0- 9.0 sec 96.5 MBytes 809 Mbits/sec
[156] 9.0-10.0 sec 107 MBytes 896 Mbits/sec
[156] 10.0-11.0 sec 106 MBytes 892 Mbits/sec
[156] 11.0-12.0 sec 106 MBytes 887 Mbits/sec
[156] 12.0-13.0 sec 89.8 MBytes 753 Mbits/sec
[156] 13.0-14.0 sec 106 MBytes 890 Mbits/sec
[156] 14.0-15.0 sec 108 MBytes 903 Mbits/sec
[156] 15.0-16.0 sec 105 MBytes 883 Mbits/sec
[156] 16.0-17.0 sec 106 MBytes 893 Mbits/sec
[156] 17.0-18.0 sec 107 MBytes 900 Mbits/sec
[156] 18.0-19.0 sec 63.0 MBytes 528 Mbits/sec
[156] 19.0-20.0 sec 104 MBytes 872 Mbits/sec
[ ID] Interval Transfer Bandwidth
[156] 0.0-20.0 sec 2022 MBytes 848 Mbits/sec
Done.
CiPHER
Developer

1199 posts

Posted on 26 March 2015 @ 15:11 (edited 15:13)
OK, as you can see it has a huge benefit. Generally you should be able to come close to 950 megabit; in your case it is slightly less. But still, your scores doubled versus the standard/default settings.
RobinM
User

18 posts

Posted on 30 March 2015 @ 15:07 (edited 16:06)
OK, I am seeing it, but how can I get this performance by default? It should be something on the ZFSguru side, right? Because NAS4Free delivers this performance out of the box (without enabling autotuning).

Going to test with following:

To enable RFC 1323 Window Scaling and increase the TCP window size to 1 MB on FreeBSD, add the following lines to /etc/sysctl.conf and reboot.

net.inet.tcp.rfc1323=1
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendspace=1048576
net.inet.tcp.recvspace=1048576

You can make these changes on the fly via the sysctl command. As always, the '$' represents the shell prompt and should not be typed.

$ sudo sysctl net.inet.tcp.rfc1323=1
$ sudo sysctl kern.ipc.maxsockbuf=16777216
$ sudo sysctl net.inet.tcp.sendspace=1048576
$ sudo sysctl net.inet.tcp.recvspace=1048576

Didn't work ^^
CiPHER
Developer

1199 posts

Posted on 31 March 2015 @ 00:19
You need to be root; do not use 'sudo'.
Once you log in with SSH, run the command 'su' (switch user), which will switch you to the root user. Then you can execute the sysctl commands.

sysctl net.inet.tcp.rfc1323=1
sysctl kern.ipc.maxsockbuf=16777216
sysctl net.inet.tcp.sendspace=1048576
sysctl net.inet.tcp.recvspace=1048576

Note however, that if it doesn't work, it is best to remove it. Bad tuning can make things much worse.
RobinM
User

18 posts

Posted on 2 April 2015 @ 22:57
I have read that when using sudo it would not keep the changes after a reboot. Anyway, I already did it, haha, before you could warn me not to. After I changed the settings and started iperf, it said the default window size was 1MB (normally 64KB).

But when testing it still gave bad performance. Only when I manually set it to 64KB or 1MB do I get the increased performance.
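(One thing worth checking on the Windows client side - this is a guess, not something established in this thread - is whether TCP receive-window autotuning has been disabled, since that would pin the window at a small fixed size. From an elevated command prompt:)

```shell
:: Show the current global TCP settings, including the autotuning level
netsh interface tcp show global

:: Re-enable receive-window autotuning if it was turned off
netsh interface tcp set global autotuninglevel=normal
```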
RobinM
User

18 posts

Posted on 16 April 2015 @ 14:17
Hey CiPHER,

Do you have any idea how NAS4Free can increase the TCP window size when reading from it?
CiPHER
Developer

1199 posts

Posted on 16 April 2015 @ 16:32
Hi RobinM,

I think questions about NAS4Free can best be asked on their forums, since they know their product best. Instructions for 10.1 and 11.x, which ZFSguru uses, can be much different than for 9.x and 10.0.
RobinM
User

18 posts

Posted on 16 April 2015 @ 20:29
OK, I will try to get some answers there; maybe you can benefit from it also ;)
DVD_Chef
User

128 posts

Posted on 17 April 2015 @ 22:18
What version of Windows are you running on your client box?