dreh23
User

3 posts

Posted on 14 January 2015 @ 13:36 (edited 15:04)
dreh23
User

3 posts

Posted on 14 January 2015 @ 15:05
Qui
User

55 posts

Posted on 15 January 2015 @ 21:38
??
CiPHER
Developer

1199 posts

Posted on 17 January 2015 @ 23:57
Quoting the start post; take note that special characters are broken on this forum.
dreh23 wrote: Hello,
I'm working for a visual effects company and I'm designing our new file server, and we are planning to use ZFS on Linux. I would kindly ask for your advice or suggestions on our setup.

Task:
-----
Replacing our Fedora 24-bay SATA system with something that has more punch. Currently we use a 4 x 10GbE bond into an HP switch and a hardware RAID + XFS.

Requirements:
-------------
- 50 TB usable space
- saturating gigabit Ethernet for file transfers to at least 10-15 workstations simultaneously via SMB (~115 MB/s each); image sequences with files normally around 5-20 MB each
- keep it simple: no cluster file system, one machine and one storage domain, no active failover (we can live with some downtime); backups are rsynced to a backup system


Problems in the past:
---------------------
- Fedora
- The machine's I/O would max out at 4 streams, although we never quite nailed down whose fault it was. I suspect the hardware RAID and an old Samba version that wouldn't scale properly.
- When 16 render nodes were constantly doing small read/write operations, we could easily push the machine to its I/O maximum.

New Setup
---------
Hardware:
Dual Xeon XXX
Super Micro 24 bay enclosure
> 256 GB RAM
4 x 10GbE
MegaRAID SAS 9361-8i
24 x 4TB SATA 7200 RPM 24/7-rated disks
1 x System disk

RAID Setup:
6 vdevs of 4 disks each in RAID 1+0 = 48 TB

ZIL:
2x OCZ RevoDrive 350 PCIe SSD 240 GB mirrored

L2ARC:
2x OCZ RevoDrive 350 PCIe SSD 960 GB striped


Software:
Ubuntu 14.04
Zfsonlinux
Sernet Samba 4

Questions:
----------
I would like your opinion on the following questions:

1. Is this RAID setup "OK"?
2. Do I need hot spares if this is not critical? The server room is two doors away, so we can swap failing disks quickly; we would of course need to keep some good drives lying around.
3. Is the RevoDrive a good idea as ZIL/L2ARC?
4. Do I really need to throw so much hardware at caching if I'm planning to have plenty of RAM?
5. Are there good, proven alternatives for the hardware setup, especially the ZIL/L2ARC?
6. Do you think I can fulfill my requirements with this setup?
7. Which Xeons are suitable? Other than ZFS compression and file services, nothing else will be running on this machine.

I would love to hear your opinion.
Thank you
Joe
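
For reference, the pool layout quoted above would map onto ZFS roughly as the sketch below. This is only a minimal sketch under one assumption: "6 vdevs of 4 disks in RAID 1+0" is read here as striped two-way mirrors (ZFS stripes across vdevs automatically, so this becomes 12 mirror vdevs), which is consistent with the 48 TB figure. Device names are placeholders; a real build should reference stable /dev/disk/by-id paths.

    # 24 x 4 TB data disks as 12 two-way mirror vdevs (~48 TB of mirrored capacity),
    # plus a mirrored SLOG and two striped L2ARC cache devices (placeholder names).
    zpool create tank \
        mirror sda sdb  mirror sdc sdd  mirror sde sdf \
        mirror sdg sdh  mirror sdi sdj  mirror sdk sdl \
        mirror sdm sdn  mirror sdo sdp  mirror sdq sdr \
        mirror sds sdt  mirror sdu sdv  mirror sdw sdx \
        log mirror ssd0 ssd1 \
        cache ssd2 ssd3

    # compression as mentioned in question 7
    zfs set compression=lz4 tank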



downloadski
User

17 posts

Posted on 16 July 2015 @ 13:54
I only have experience with a 10-disk RAID-Z2, but that delivers on average 828 MiB/s on a whole-pool copy on my systems.
That is 8 data disks delivering the throughput.

If I understand RAID 1+0 with 4 disks correctly, you have 2 mirrors of 2 disks each.
If you put that in a vdev and add 5 more of these, you get at most the performance of 12 disks.
That would be about 1.5 times the read speed I see, I assume, so roughly 1240 MiB/s.
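
As a back-of-the-envelope check on those numbers: 828 MiB/s across 8 data disks is roughly 103 MiB/s per disk, and 12 disks' worth of striped mirror throughput at that rate gives about 12 x 103 ≈ 1240 MiB/s sequential. Spread evenly over 10-15 clients, that is roughly 80-125 MiB/s per client, i.e. right at the ~115 MB/s gigabit target with little headroom before any random-access penalty.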

I doubt you would be able to deliver large video files to 10-15 clients at 115 MB/s sustained from the spinning disks.
Cache is nice if these clients access the same video clips, but I am not sure the ARC or L2ARC will hold the 5-20 MB files completely in cache.

But I have no experience with so many clients doing video at gigabit line rate; I only have 2 clients doing up to 54 Mbps.
bitmap
User

26 posts

Posted on 25 August 2015 @ 02:59
You didn't mention the SuperMicro backplane in the chassis or how it connects to the HBA, but from a quick look at their site, getting 24 drive bays onto the 8-port HBA would probably mean a SAS-846EL or SAS2-846EL expander backplane. While I'm not familiar with this specific gear, the SAS expander would likely only offer four ports' worth of bandwidth over a single SFF-8087/Mini-SAS cable for single-ported SATA disks. Some SAS expanders may also offer reduced performance when handling SATA disks.
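
To put a rough number on that (assuming the SAS2 variant with 6 Gb/s lanes; the SAS1 version would be half this): a single SFF-8087 cable carries 4 lanes x 6 Gb/s = 24 Gb/s, roughly 2.2-2.4 GB/s after encoding overhead, shared by all 24 drives behind the expander. That is only about 90-100 MB/s per drive with every disk busy at once, less than a modern 7200 RPM disk can stream sequentially.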

If you haven't already purchased this, you might look at a pass-through backplane like the SAS-846TQ with 24 individual ports, split across two separate 12-port or three 8-port HBAs. If there are not enough PCIe slots (lots of SSDs and Ethernet), you might omit two of the 10GbE ports; even with link bonding to the switch, I can't see you saturating two links, much less four, with only 10-15 clients.
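
A quick sanity check on the bonding point (assuming the stated ~115 MB/s per client): 10-15 clients x ~115 MB/s is roughly 1.2-1.7 GB/s aggregate, which is about one to one-and-a-half 10GbE links' worth of traffic. Two bonded 10GbE ports already cover the stated workload, and four are well beyond it.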
