hgeorgescu
User

41 posts

Posted on 14 September 2015 @ 22:30
Hello Team ZFSGuru,
I tried to find my post from earlier this year and the advice I got from CiPHER in it. Oddly enough, the forum only shows the latest postings (one page), and apparently I cannot go beyond that, as there is no page control to reach older discussions.
Eventually I was able to find what I was looking for using Google (it gave me a pointer to that conversation, which I have now saved locally).
I'm using the Firefox and Chromium browsers on Linux to browse the site.
Thanks for clarifying this issue.
HG
CiPHER
Developer

1199 posts

Posted on 15 September 2015 @ 00:38
The forum software sucks balls in dark corners.. but hey, what can I say? We are working on it. :D

We plan to migrate to Mesa - our own CMS solution - and with it the plugin codenamed 'NewForum'. We hope to upgrade the website and forum this year. Until then... this is all we can offer.

Google is your (evil) friend for finding hidden forum threads.
hgeorgescu
User

41 posts

Posted on 15 September 2015 @ 18:48
Thank you, Cipher.
This is the link to that thread: https://zfsguru.com/forum/zfsgurusupport/922
After a long pause I have just come back to this project (building my ZFSguru NAS), and I did it per those initial specs.

Powered by ZFSguru version 0.3.1
Running official system image 10.2.003 featuring FreeBSD 10.2-RELEASE-p1 with ZFS v5000.
Running Root-on-ZFS distribution.
Hardware
Motherboard Asus (M5A99X EVO R2.0)
64-bit octo-core AMD FX(tm)-8320 Eight-Core Processor - running at 3500 MHz (scales from 1400 MHz to 3500 MHz) - currently doing some rsync copies.
16 GiB physical memory, 15.4 GiB usable ECC RAM.
11 normal hard drives - in fact 2 SSDs partitioned and used as advised by CiPHER, plus, for now, one 6-disk pool (6x3TB) in RAID-Z2 configuration.
The remainder of the disks are just hanging off the system for the time being, not in use.
The pool is running on a MegaRAID (SAS9240-8i) controller flashed to IT mode with the latest firmware from Avago/LSI.
Gigabit networking - using the port on the motherboard; I also have a second card which is not in use at the moment.

I have a performance question.

I'm running an rsync copy from a reiserfs disk (mounted read-only in ZFSguru) to the RAID-Z2 pool. The amount of data is roughly 2.7TB and the copy has not finished after about 30 hours of running, but it is getting closer.
The reiserfs disk is connected to the motherboard SATA, as is the SSD root mirror. The RAID-Z2 pool is on the LSI controller.

I have read reports on the internet of copies of 3TB in 7-8 hours, which is fantastic... Is there anything in particular I should look into regarding performance improvement options with this setup?
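
(A quick way to see what throughput the copy is actually getting is to watch the pool while rsync runs. A minimal sketch, assuming the pool is named 'tank' - substitute the real pool name:

# Show per-vdev read/write bandwidth, refreshed every 10 seconds
zpool iostat -v tank 10

# For comparison: 2.7 TB in about 30 hours works out to only roughly 25 MB/s
# on average, so the live write rate shows how far off that figure the copy is.)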

I installed bonnie++ from the FreeBSD repository and then ran it against the pool. I don't think the results are too spectacular. Any comments and/or thoughts?


[root@zfsguru /media-1/share]# bonnie++ -r 16384 -u ssh
Using uid:44, gid:44.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
zfsguru.bsd 32G 117 99 286076 46 163406 37 302 92 329544 26 94.9 2
Latency 94681us 3786ms 3276ms 318ms 488ms 807ms
Version 1.97 ------Sequential Create------ --------Random Create--------
zfsguru.bsd -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 8062 36 +++++ +++ 21920 93 22599 94 +++++ +++ 20707 94
Latency 518ms 217us 854us 30691us 1484us 379us
1.97,1.97,zfsguru.bsd,1,1442338681,32G,,117,99,286076,46,163406,37,302,92,329544,26,94.9,2,16,,,,,8062,36,+++++,+++,21920,93,22599,94,+++++,+++,20707,94,94681us,3786ms,3276ms,318ms,488ms,807ms,518ms,217us,854us,30691us,1484us,379us
[root@zfsguru /media-1/share]#
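
For reference, a slightly more explicit way bonnie++ could be invoked against the pool; -d (test directory), -s (file size), -r (RAM size) and -u (user) are standard bonnie++ 1.97 options, but the directory and user shown here are only an illustration, not the command actually run above:

# Illustrative only: test in a directory on the pool, with a file roughly
# twice the size of RAM so the ARC cannot cache the whole working set
bonnie++ -d /media-1/share -s 32g -r 16384 -u root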
CiPHER
Developer

1199 posts

Posted on 15 September 2015 @ 19:02
Probably the reiserfs code in BSD is not that fast.

You can use gstat to determine disk I/O utilisation. For example with this command:

gstat -f gpt

This should give you an idea of whether the disks are the bottleneck. It is likely that the reiserfs code is simply not that fast; this applies to other filesystems on BSD as well. Only UFS and ZFS are really usable on BSD/ZFSguru.
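
To narrow it down further, a minimal sketch of what to watch for; -p (physical providers only) and -I (refresh interval) are standard gstat flags, and top -SH is the usual way on FreeBSD to spot a single thread pegging one core:

# Physical disks only, refreshed every 5 seconds
gstat -p -I 5s

# If %busy on the source (reiserfs) disk sits near 100% while the pool disks
# stay mostly idle, the single source disk or the reiserfs code is the limit.

# Check whether rsync or a filesystem thread is instead CPU-bound on one core
top -SH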