jonnyboy
User

74 posts

Posted on 20 February 2012 @ 17:32
Would this be considered good performance? Of course, it fluctuates.

ZFSguru version 0.2.0-beta5, output from the Disks / I/O monitor page:

dT: 1.001s w: 1.000s filter: ^(gpt|label)/
L(q)  ops/s  r/s  kBps  ms/r  w/s   kBps   ms/w  %busy  Name
   0    317    0     0   0.0  316  37915   24.9   93.4  gpt/"sata1"
   0    319    1     4  26.6  317  38042   25.1   96.9  gpt/"sata2"
   4    317    0     0   0.0  316  37113   24.7   93.0  gpt/"sata3"
   4    320    0     0   0.0  319  37343   24.7   95.8  gpt/"sata4"
   0    317    0     0   0.0  316  38038   24.0   92.8  gpt/"sata5"
   0    318    0     0   0.0  317  38166   24.0   93.0  gpt/"sata6"
   0    318    1     4  56.5  316  38038   24.1   93.0  gpt/"sata7"
   0    317    0     0   0.0  316  37915   24.9   93.2  gpt/"sata8"
   4    316    0     0   0.0  315  37116   24.6   95.8  gpt/"sata9"
   4    319    3     1   6.5  315  37239   24.8   94.6  gpt/"sata10"
   3    315    0     0   0.0  314  37112   23.5   83.9  gpt/"sata11"
   4    317    0     0   0.0  316  37113   24.3   93.1  gpt/"sata12"

CiPHER
Developer

1199 posts

Posted on 20 February 2012 @ 17:42
Without knowing what kind of pool you run, it is hard to say much about your scores. Your disks all write at about 37MB/s, which implies they are all part of the same RAID-Z1/2/3 vdev.

Since disk utilization appears to be high, it could be that your disks are not getting their full bandwidth. Are you using an expander or port multiplier?

In any case, if you add up all those 37MB/s streams you arrive at a nice number. You do have to subtract the parity disks; they don't count toward how fast you can write to your pool. Probably more than enough for gigabit ethernet. :)
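As a rough back-of-the-envelope sketch (the layout below is only an example; plug in your own vdev configuration):

# Estimate streaming write throughput for a RAID-Z pool: only the data
# disks count, so subtract the parity disks of every vdev. All numbers
# here are assumptions taken from the figures discussed in this thread.
per_disk_mb_s   = 37   # per-disk write rate seen in the I/O monitor
vdevs           = 2    # example: two RAID-Z2 vdevs of 6 disks each
disks_per_vdev  = 6
parity_per_vdev = 2    # RAID-Z2 = 2 parity disks per vdev

data_disks = vdevs * (disks_per_vdev - parity_per_vdev)
print(f"~{data_disks * per_disk_mb_s} MB/s pool write "
      f"({data_disks} data disks x {per_disk_mb_s} MB/s)")
print(f"gigabit ethernet peaks around {1000 / 8:.0f} MB/s")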
jonnyboy
User

74 posts

Posted on 20 February 2012 @ 18:03 (edited 18:04)
The pool is raidz2 with 2 vdevs of 6 drives each. They are connected via a 3ware 9550SX controller as single disks.

So that would be 37 x 8 = 296 MB/s, correct? That seems a bit high across a single 1G nic.

I am copying files across the network in 3 processes.

I may try connecting the drives via the motherboard and a couple of cheap SATA controllers and test the speed then.
The_Dave
User

221 posts

Posted on 21 February 2012 @ 07:06
Sounds like a PCI Express x1 link?
CiPHER
Developer

1199 posts

Posted on 21 February 2012 @ 07:26
I was not aware you were benchmarking over the network. That changes things!

If transferring over the network, you will notice that your disks are not always doing something. ZFS saves up a full 'transaction group' and commits it to disk in one go. Your disks are faster than the gigabit ethernet interface, so you will see periods where your disks do nothing, then periods where they are loaded at 100% until they have written their transaction group, then they idle again.

Can you confirm this behavior? If you write a file locally you should not have this problem.
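For example, a quick local write test could look something like this (just a sketch: /tank/testfile is a placeholder path, and if compression is enabled on the dataset the zero-filled data will inflate the result):

# Time a ~4 GiB sequential write on the pool itself, bypassing the network.
import os, time

path = "/tank/testfile"      # placeholder: any path on the pool under test
block = b"\0" * (1 << 20)    # 1 MiB per write
blocks = 4096                # ~4 GiB total

start = time.time()
with open(path, "wb") as f:
    for _ in range(blocks):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())     # make sure the data really reaches the disks
elapsed = time.time() - start

print(f"wrote {blocks} MiB in {elapsed:.1f}s -> ~{blocks / elapsed:.0f} MiB/s")
os.remove(path)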
jonnyboy
User

74 posts

Posted on 21 February 2012 @ 07:44
That is correct.
But I am not really benchmarking. I was moving files from one server to another and glanced at the I/O Monitor page. It looked to me like some of the numbers were a bit low for %busy to be so high. I took the 3ware card out of the server last night and replaced it with a couple of generic SATA controllers, but they don't work with my motherboard. Anyway, the server is down until I get a couple of AOC-SAT2-MV8 controllers to test out, so I can't go into any real detail on the performance.
Using the benchmarking tests, I was getting ~265 MB/s read/write with raidz2, 2 vdevs of 6 drives each. I don't really know if that number is good or whether I should expect better.
I have seen lots of posts that recommend using HBAs instead of RAID controllers. I'll see in a week or so.

Thank you for responses and the info.
CiPHER
Developer

1199 posts

Posted on 21 February 2012 @ 10:00
The AOC-SAT2-MV8 uses a Marvell chip which doesn't work well at all on non-Windows systems.

It is also limited in bandwidth, as it uses the ancient PCI-X interface. Connected to a plain PCI slot, that would leave very little bandwidth per disk. Expect low performance with this controller.
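As a rough illustration, assuming all 8 drives share one bus (theoretical peak numbers; real-world throughput is lower):

# Theoretical bus bandwidth split across 8 drives on one controller
# (peak numbers only; real-world throughput is noticeably lower).
buses_mb_s = {
    "PCI 32-bit/33MHz":    133,    # plain PCI slot
    "PCI-X 64-bit/133MHz": 1066,   # only if the slot actually runs PCI-X
}
drives = 8
for bus, total in buses_mb_s.items():
    print(f"{bus}: ~{total // drives} MB/s per drive across {drives} drives")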

A good HBA is the IBM M1015 flashed with IT firmware. That is a SAS controller with two mini-SAS connectors; each mini-SAS cable splits into 4 SAS/SATA connectors, so one such controller can connect 8 normal SATA drives. You often need to buy the cables separately; make sure you buy the correct ones!
danswartz
User

252 posts

Posted on 21 February 2012 @ 10:40
You want SFF-8087 forward breakout cables, NOT reverse breakout cables.
jonnyboy
User

74 posts

Posted on 21 February 2012 @ 15:57
What would you recommend as a port multiplier or SAS expander?
CiPHER
Developer

1199 posts

Posted on 21 February 2012 @ 16:31
None: don't use expanders or port multipliers unless you really have to. Buy multiple HBAs instead. Think of each PCI-express slot as 8 to 16 SATA ports. LSI has an equivalent to the IBM M1015 that has 16 ports (4 mini-SAS connectors).

If you have enough PCI-express slots, I would buy multiple HBAs. That gives you more bandwidth, as well as the ability to split them across different systems in the future.
danswartz
User

252 posts

Posted on 21 February 2012 @ 18:40
Port expanders can be problematic, it seems.
jonnyboy
User

74 posts

Posted on 21 February 2012 @ 20:13 (edited 22 February 2012 @ 15:06)
Thanks for the info, guys. I think I will sit tight with what I have for the time being. I'm not in the mood for new motherboards, controllers and possibly RAM if I upgrade to DDR3.

***EDIT***
I moved 6 drives to the motherboard SATA controller, leaving 6 on the PCI-X 3ware 9550SX, and then reran the benchmarks. They increased by at least 100MB/s r/w. With that, I bought 2 controllers.