
17 posts

Posted on 17 April 2015 @ 10:56 (edited 11:01)
I had issues with NFS copy speed, and a kind person helped me get decent copy speeds.

It seems useful to write it down here as well.

Baseline with dd and mbuffer
This test generates a data stream with dd and sends it over the network with mbuffer.

Receiver side:

zfsguru1# mbuffer -4 -s 128k -m 1G -I 5001 > /dev/null
in @ 1122 MiB/s, out @ 1122 MiB/s, 683 GiB total, buffer 0% full
summary: 684 GiByte in 10min 27.7sec - average of 1115 MiB/s

Sender side:

zfsguru3# dd if=/dev/zero bs=1M count=700000 | mbuffer -4 -s 128k -m 1G -O
in @ 1122 MiB/s, out @ 1122 MiB/s, 682 GiB total, buffer 100% full700000+0 records in
700000+0 records out
734003200000 bytes transferred in 626.935080 secs (1170780234 bytes/sec)
in @ 0.0 KiB/s, out @ 1122 MiB/s, 683 GiB total, buffer 34% full
summary: 684 GiByte in 10min 27.5sec - average of 1115 MiB/s

This is about line rate; the remainder is TCP/IP overhead.
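The 1122 MiB/s shown above is essentially the ceiling for TCP on 10 Gbit Ethernet at MTU 1500. A quick sketch of the arithmetic (the per-frame overhead constants below are standard Ethernet/IP/TCP values, not taken from this post):

```shell
# Theoretical TCP payload rate on 10GbE at MTU 1500.
# Each frame costs on the wire: 1500 (MTU) + 14 (Ethernet header) + 4 (FCS)
#   + 8 (preamble) + 12 (inter-frame gap) = 1538 bytes.
# TCP payload per frame: 1500 - 20 (IP) - 20 (TCP) - 12 (TCP timestamps) = 1448 bytes.
awk 'BEGIN {
    raw_mib    = 10e9 / 8 / 1048576   # raw 10 Gbit/s expressed in MiB/s (~1192)
    efficiency = 1448 / 1538          # payload fraction of each frame
    printf "%.0f MiB/s\n", raw_mib * efficiency
}'
```

This works out to about 1122 MiB/s, matching the in/out rates mbuffer reports, so the test really is saturating the link.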

The next step is moving data using zfs send and receive via mbuffer (it transfers a snapshot).

Receiver side:

zfsguru1# mbuffer -4 -s 128k -m 1G -I 5001 | zfs recv -vFd pool1/share
receiving full stream of tank5/share@2015-04-09 into pool1/share/share@2015-04-09
in @ 0.0 KiB/s, out @ 0.0 KiB/s, 18.5 TiB total, buffer 0% full
received 18.5TB stream in 23456 seconds (829MB/sec)

Sender side:

zfsguru3# zfs send -R tank5/share@2015-04-09 | mbuffer -4 -s 128k -m 1G -O
in @ 0.0 KiB/s, out @ 617 MiB/s, 18.5 TiB total, buffer 7% full
summary: 18.5 TiByte in 6h 30min 49.7sec - average of 829 MiB/s
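The average in that summary line can be sanity-checked from the rounded figures it prints (18.5 TiB is rounded, so the result differs slightly from mbuffer's own number, which is computed from the exact byte count):

```shell
# Average throughput from the rounded summary figures:
# 18.5 TiB transferred in 6 h 30 min 49.7 s.
awk 'BEGIN {
    mib     = 18.5 * 1024 * 1024        # 18.5 TiB in MiB
    seconds = 6*3600 + 30*60 + 49.7     # 23449.7 s total
    printf "%.0f MiB/s\n", mib / seconds
}'
```

This gives about 827 MiB/s, consistent with the reported 829 MiB/s given the rounding.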

This is lower, so pool read speed is the limit here. The pool benchmarks at about 1 GByte/sec.
The pool consists of Blu-ray rips, so a mix of small and large files.

Systems used:

Zfsguru1: Supermicro X9SAE-V mainboard, Xeon E3-1265L V2, 32 GB RAM, and a pool with a RAID-Z2 vdev of 10x Toshiba DT01ACA300 3TB drives. M1015 in IT mode.

Zfsguru3: Supermicro X10SL7-F mainboard, Xeon E3-1220 v3, 32 GB RAM, and a pool with a RAID-Z2 vdev of 10x HGST 7K4000 drives. SAS2308 chip in IT mode.

Network cards: Intel X520-DA2
Switch: D-Link DGS-1510-28X

Nothing was tweaked: a basic ZFSguru 10.1 install, which includes the driver for the Intel NIC (it only had to be set to load in boot.conf).
MTU left at the standard 1500.
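For reference, loading the NIC driver at boot amounts to a single line in the loader configuration. On FreeBSD 10.x the Intel X520 is handled by the ixgbe(4) driver; the exact ZFSguru file is not shown in the post, so treat the path and variable name here as an assumption:

```
# /boot/loader.conf -- assumed location; the post calls it boot.conf
ixgbe_load="YES"
```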

So out of the box, this performs very well once you know how to use it.

Youtube link for some proof:

Perhaps it helps someone.

130 posts

Posted on 17 April 2015 @ 22:28
Thanks for documenting this, as mbuffer can really even out the uneven nature of zfs send/receive. For comparison, what was the zfs send/receive speed before implementing mbuffer?

17 posts

Posted on 18 April 2015 @ 07:01
I did not try that: in my first steps with snapshots, while deleting them I lost the filesystem (my own mistake).
So I now use the GUI to create and destroy snapshots, which feels safer :)
NFS gave me about 150 MB/sec, but that might have been limited by Midnight Commander, which I used to copy files from the network-mounted pool to the internal pool.

4 posts

Posted on 14 February 2018 @ 11:08
The basic idea is that you are now getting best-of-its-kind performance by using direct-attached storage. For the task you describe, you are in great shape, unless you actually find yourself needing to copy that data back and forth between the server and your workstation.