
Posted on 7 August 2011 @ 22:58 [edited 22:59]
1. Easily expandable: add more drives to fill the storage chassis & grow the ZFS pool/vdev, replace drives with larger capacity ones, cascade more chassis, or do all three.
2. Data availability through redundancy & a robust filesystem that can handle all that low level management.
3. Complete storage pool encryption [unfortunately L2ARC can't be encrypted].
4. Make the data easily available to users.

Planning on building a ZFS storage solution while working around some limitations, mainly keeping in mind the budget & future expansion needs, so that further down the road it doesn't become another complete rebuild. This is the first of what will [hopefully] be a two part post. This post will be mostly about the hardware that I will be using as the basis for this build.

I have been putting off building a ZFS solution for over a year now, but a recent drive crash and the loss of quite a lot of irreplaceable personal data has put an end to that. [Embarrassingly enough, only a few months ago I moved all of this data off to a new drive, as I thought the old drive that contained it might croak any day. Surprisingly enough [actually not], it was the new drive that bit the dust & the old one is still chugging along; SIGH!]

My excuses for stalling the ZFS project, although valid, were mainly waiting for 1TB platter drives to hit the market, so that I could use 2 platter 2TB drives instead of the current 3 platter ones [which are more prone to failure], and waiting for SandyBridge-E/LGA 2011. Alas, they're both [probably] only a quarter away. I didn't want to buy up components that are about to be made obsolete in just a few months & have no viable or easy future upgrade path other than completely replacing the entire system.

I have a basic idea of what I need for this project; I will be listing it below, broken down by component, so that readers can go directly to a specific section for the relevant information. Post any suggestions, comments or experience you might have had with these components, in a ZFS build or just in general. The only main components I haven't decided on yet are the motherboard/CPU [the latter depends entirely on the motherboard I'll be getting] & a UPS.

Thank you all for your time & help. I will be updating this post with the components I end up getting, so that anyone reading won't have to go through the whole thread just to get to an answer, in case it does indeed become that long.


NOTE: The main sticking point is whether to go for a UP or DP system. Not sure if I'm going to be doing mirrors or RAIDZ3. Initially I wanted to do mirrors as they're simple enough not to stress the CPU too much [not exactly, as I later learned]. I have seen a single quad core Xeon hit the dirt when accessing a 20-24 drive mirror [so forget RAIDZ anything], as ZFS still has to do checksumming on both reads & writes, plus a bunch of other fancy stuff that even a hardware RAID card doesn't. I even plan to encrypt the entire ZFS pool.

RAID 10 [striped mirror]
+Able to expand a mirrored pool/existing vdev by adding new drives & without resilvering [correct me if I'm wrong on either point]
+100% redundancy
+Best performance of all the ZFS RAID levels, if not being hit by a CPU/system IOPS ceiling [hmm, needs more verification]
+Much faster resilvering as no parity calculation is needed
+Drive capacity expansion is much simpler [I can disconnect 1 set of drives, connect the larger ones & rebuild the mirror, then do it again for the other set of drives]
?Cheap/low power quad core CPU able to handle FS tasks [not sure about this anymore]
-100% redundancy at the cost of 1/2 the storage capacity

RAIDZ3
+Only 15% or so storage space used for redundancy in a 22 drive zpool
-Slower than RAID 10 [hmm, also needs more verification]
-Unable to expand capacity to existing vdev by adding new drives [wait what happened to block pointer rewrite functionality?]
-Very slow resilvering
-Cheap/low power quad core CPU is unable to handle FS tasks
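A quick sanity check of the capacity math behind the two lists above [pure arithmetic, no ZFS needed; the assumed layout is 11 two-way mirrors vs a single 22-wide RAIDZ3 vdev, and real usable space will come out a bit lower once metadata & slop space are accounted for]:

```python
# Back-of-envelope usable capacity for a 22-drive pool of 2TB disks.
DRIVES = 22
DRIVE_TB = 2.0

# Striped mirror (RAID 10): 11 two-way mirror vdevs.
mirror_usable = (DRIVES // 2) * DRIVE_TB                    # 22.0 TB
mirror_overhead = 1 - mirror_usable / (DRIVES * DRIVE_TB)   # 0.5

# RAIDZ3: one 22-wide vdev, 3 drives' worth of parity.
raidz3_usable = (DRIVES - 3) * DRIVE_TB                     # 38.0 TB
raidz3_overhead = 3 / DRIVES                                # ~0.136

print(f"mirror : {mirror_usable:.0f} TB usable, {mirror_overhead:.0%} overhead")
print(f"raidz3 : {raidz3_usable:.0f} TB usable, {raidz3_overhead:.0%} overhead")
```

Which agrees with the "1/2 the storage capacity" and "15% or so" figures above: 50% overhead for mirrors vs ~14% for RAIDZ3 on a 22 drive zpool.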

So, whether to use ZFS mirrors or RAIDZ3, I'll leave the details for the 2nd thread.

?Q. Has anyone done tests comparing the performance/scaling of the underlying hardware & these two types of ZFS RAID [mirror & Z3]?

Although I am leaning towards an Intel M/B + CPU, I am open to suggestions for AMD components if there is a significant price/performance advantage. As I see it currently, both DP & UP have some benefits and negatives:

DP
+I can start with 1 CPU now & later add another when I start cascading to more storage chassis
+Won't have to replace motherboard/CPU to accommodate future growth
+Might be cheaper in the long run, when compared to replacing the whole system [or maybe not]
+3 channel/6 RAM slots
+Intel SDDC/AMD Chipkill
+RAS feature [at least some]
-Socket LGA 1366 only, as there are no LGA 1155 DP M/Bs
-Socket LGA 1366 about to be made irrelevant with Socket LGA 2011/SandyBridge-E next quarter
?Initially more expensive than UP M/B + CPU [not sure about this either]

UP
+Cheaper than DP, initially
+Socket LGA 1155/SandyBridge based stuff is newer than LGA 1366/Nehalem based DP systems
-No RAS features whatsoever
-2 channel/4 RAM slots only
-No expandability to accommodate future growth, will have to replace M/B & CPU completely
?Probably will hit an I/O limit even with simple mirror type ZFS when using 20-24 drives [I highly doubt it can handle RAIDZ3]

Motherboard
+Motherboard must be compatible with Solaris Express or one of the derivatives based off its dead cousin [OpenSol; RIP].
+Motherboard must be compatible with SuperMicro SC846E16-R1200B chassis
+SAS2 6.0 Gbps controller [preferably LSI SAS2008 or better]
+SAS controller must support Initiator/Target mode [as I will only be doing software/ZFS RAID]
+Intel SDDC [or AMD Chipkill if AMD motherboard]
+IPMI 2.0 + IP-KVM with remote ISO mounting capability
+2 PCIe 2.0 x8/x16 slots [well, the more the better]
+2 Gbit Ethernet, capable of teaming [until I can get 10G Ethernet cards]
?Q. How many disks are supported in I/T mode? [LSI 1068e & LSI 9211-8i based on SAS2008 support up to 122, so I've heard, but not sure]
?Q. How much bandwidth does the controller have to the M/B, not the ports? [LSI 1068e has 2000 MB/s based on x8 PCIe 1.0/1.1]
?Q. How much real throughput can the controller handle, between the ports & the motherboard? [most RAID cards can only do 1 GB/s or less]
?Q. How many IOPS is the SAS controller rated for? [LSI 1068e was rated for up to 144,000 IOPS, if I remember correctly]
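While waiting on answers, here's my own arithmetic on the bandwidth question. PCIe 1.x moves ~250 MB/s per lane per direction [2.5 GT/s with 8b/10b encoding] and PCIe 2.0 doubles that; SAS lanes also use 8b/10b, so a 6.0 Gbps SAS2 lane carries ~600 MB/s of payload. Treat these as theoretical ceilings, not measured throughput:

```python
# Per-lane payload rates (MB/s, per direction).
pcie1_lane = 250   # PCIe 1.0/1.1: 2.5 GT/s, 8b/10b
pcie2_lane = 500   # PCIe 2.0:     5.0 GT/s, 8b/10b
sas2_lane = 600    # SAS2:         6.0 Gbps, 8b/10b

print("LSI 1068e host link (x8 PCIe 1.1):", 8 * pcie1_lane, "MB/s")  # 2000 MB/s
print("SAS2008 host link   (x8 PCIe 2.0):", 8 * pcie2_lane, "MB/s")  # 4000 MB/s
print("x4 SAS2 wide port to backplane   :", 4 * sas2_lane, "MB/s")   # 2400 MB/s
```

So the 2000 MB/s figure for the 1068e checks out, and on a SAS2008 the single x4 wide port to the backplane [~2400 MB/s] becomes the bottleneck before the x8 PCIe 2.0 link does.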

Would prefer a SuperMicro M/B, as getting multiple parts from the same vendor would simplify warranty & support, but I'd rather get something better price/performance-wise, no matter the manufacturer.


UPS
NOTE: Absolutely no idea whatsoever. I wanted to get 3-4 decent consumer grade UPSes & cascade them together, but everyone said not to. Basically I don't want to shell out several K for an enterprise level UPS; heck, I don't even need something like that.

+A few small network appliances & the ZFS storage chassis should last long enough [~10 minutes] at full load, for a proper shutdown
+UPS will not turn itself or the system back on until at least 15-20% charged after depletion [so as not to crash the system from a subsequent brownout]
+UPS power outage alarm can be turned off [don't want to wake up the whole neighborhood when on battery]
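To size the UPS for that ~10 minute requirement, here's the rough formula I'm working with. All the numbers below are placeholders I made up for illustration; the real inputs are the load measured at the wall and the runtime chart from the UPS datasheet:

```python
# Rough UPS runtime estimate: battery energy * losses / load.
load_watts = 500          # assumed: storage chassis + a few small appliances
battery_wh = 864          # assumed: e.g. two 12V 36Ah batteries = 2 * 12 * 36
inverter_eff = 0.85       # typical inverter efficiency
depth_of_discharge = 0.8  # don't plan on draining the battery to 0%

runtime_min = battery_wh * inverter_eff * depth_of_discharge / load_watts * 60
print(f"estimated runtime: {runtime_min:.0f} minutes")  # well past the ~10 min target
```

Vendor runtime charts already bake in inverter losses, so when reading those, only the load number matters; the formula is just for comparing battery capacities across models.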


Hard Drives & SSDs
NOTE: 8-10 + 2 spare SAMSUNG HD204UI F4EG 2.0TB drives for now [already bought 9]

I am thinking of getting 3-4 [2 SLOG + 1-2 L2ARC] cheap & small consumer grade SSDs. These will be MLC based & in the 40-80GB range, all depending on the price. OCZ Solid 3, OCZ Agility 3 & OCZ Vertex 3 seem to have nice read/write & IOPS [both sustained & random] numbers! Haven't decided which one yet, as I don't know much about the current gen [esp. OCZ model] drives. These cheap SSDs don't need to have any SuperCaps, as I will have a UPS to go with this build.

?Q. What is the difference between OCZ Solid 3, Agility 3 & Vertex 3 drives?
?Q. Suggestions for any other SSD brand/model for my build that has a 3-5 year warranty? [I'll definitely be needing the longer warranty]
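One thing I need to keep in mind with the L2ARC devices: every record cached on the SSD needs a small header kept in RAM [ARC]. The per-record figure I've seen bandied around for current ZFS builds is on the order of ~200 bytes; treat that as an assumption & check it for your ZFS version. With the RAM budget below:

```python
# RAM consumed by L2ARC headers, worst-ish case of small records.
l2arc_bytes = 160 * 10**9   # assumed: two 80 GB SSDs as cache devices
recordsize = 8 * 1024       # assumed: small 8K records (databases, etc.)
header_bytes = 200          # assumed per-record ARC header, varies by version

ram_gb = l2arc_bytes / recordsize * header_bytes / 2**30
print(f"RAM eaten by L2ARC headers: ~{ram_gb:.1f} GB")
```

A few GB out of a planned 24GB isn't fatal, but it's RAM that can't hold actual cached data, and with 128K records the overhead shrinks to almost nothing, so it depends heavily on the workload.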


Chassis
NOTE: I decided on the SuperMicro SC846E16-R1200B. The SC846E26-R1200B has dual SAS 6.0 Gbps ports, but also costs an extra $100-$150. Though I could always use the extra bandwidth when I start cascading to other chassis, I don't think the 2 SAS ports can be teamed [like Ethernet ports] by connecting them both to the same controller; I think they're only for failover.

Even with a DP motherboard I don't think I need a 1200W PSU! Any chassis that has a SAS 6.0 Gbps backplane & a 900W PSU would do, not to mention I could have saved $100-$150 on the price. Unfortunately SuperMicro doesn't have any E16/E26 chassis with a 900W PSU.
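For what it's worth, here's my rough check on whether 900W would really be enough. 7200rpm 3.5" drives pull roughly 25-30W each while spinning up and 5-10W at idle [the Samsung F4EG is a low-power 5400rpm-class drive, so these assumed numbers are conservative], and the board/CPU figure below is a guess for a loaded DP system:

```python
# Worst case: all drives spinning up simultaneously (no staggered spin-up).
drives = 24
spinup_w = 30        # assumed worst case per drive at spin-up
board_cpu_w = 250    # assumed: DP board, 2 CPUs, RAM, HBA, fans

peak_w = drives * spinup_w + board_cpu_w
print("worst-case peak draw:", peak_w, "W")  # 970 W
```

So 900W is only marginal if every drive spins up at once; since the SuperMicro backplane staggers spin-up, steady-state draw is far lower & 900W would still have been plenty in practice.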

4U Habey chassis that can hold 20-24 drives are almost nonexistent. Both the Habey & NORCO ones are so barebones that I would have to spend a lot of money on a good quality redundant PSU + HBA cards, which can easily add another 1K. Not to mention they usually have 4-6 separate backplanes with a separate SAS connector for each; no on-board SAS controller will have enough ports for those.

$1234.79 - SuperMicro SC846E16-R1200B / part# CSE-846E16-R1200BP [free shipping, which usually runs $150-200; yay!!!]
+3 year warranty!!!
+Redundant PSU [don't have to spend extra money on buying 2 quality PSU's]
+Single port SAS2 6.0 Gbps backplane connection [cabling just got simpler]
+2 extra SAS ports built into the backplane for cascading.
-Expensive! Backplane by itself is 700-800 dollars, half the price of this chassis!!!
?Q. How many chassis can be cascaded together? [manual doesn't mention that]
?Q. I've heard that drives pop out easily from this chassis; should I get the front bezel with lock to prevent that?
?Q. Any way to put 4 total SSDs inside the chassis, as I want to use all 24 hot swap bays for drives? [2 can be mounted with optional drive trays]
?Q. If anyone has other suggestions for a chassis, feel free to post them.


RAM
NOTE: I'd prefer to get x8 modules, as they're 33%-50% cheaper than x4 modules, as long as I can get SDDC working [well, most new Intel chipsets now work with both types of module, except for a few things]. Mainly I want the demand & patrol scrubbing capability of SDDC. From what I've heard, RAM errors are quite common; the numbers bandied around range from 1 bit error/hour/GB of RAM [~10^−10] to 1 bit error/century/GB of RAM [~10^−17].
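Converting those two quoted error rates into per-bit figures to see where the exponents come from [by my arithmetic the low end lands nearer 10^−16 than 10^−17, but it's the right ballpark]:

```python
import math

bits_per_gb = 8 * 2**30                 # ~8.59e9 bits in a GB
hours_per_century = 100 * 365.25 * 24   # ~8.77e5 hours

high = 1 / bits_per_gb                          # 1 bit error/hour/GB
low = 1 / (bits_per_gb * hours_per_century)     # 1 bit error/century/GB

print(f"high estimate: {high:.1e} errors per bit-hour")  # ~1.2e-10
print(f"low estimate : {low:.1e} errors per bit-hour")   # ~1.3e-16
```

Either way, even the optimistic end multiplied across 24GB running 24/7 means errors are a question of when, not if; hence the insistence on ECC with scrubbing.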

Initially I'd like to start off with 24GB, seeing as how cheap RAM is nowadays and of course the M/B permitting.

389.99 - x4 Kingston 24GB 240-Pin ECC Registered DDR3 1333 (PC3 10600) KVR1333D3D4R9SK3/24G (3 x 8GB)

259.99 - x4 Kingston 16GB 240-Pin ECC Registered DDR3 1333 (PC3 10600) KVR1333D3D4R9SK2/16G (2 x 8GB)

124.99 - x4 Kingston 8GB 240-Pin ECC Registered DDR3 1333 (PC3 10600) KVR1333D3D4R9S/8G

+$270 - 3 x 89.99 - x8 Kingston 8GB 240-Pin ECC Registered DDR3 1333 (PC3 10600) KVR1333D3Q8R9S/8G

+$240 - 3 x 79.74 - x8 Kingston 8GB 240-Pin ECC Registered DDR3 1333 (PC3 10600) KVR1333D3LQ8R9S/8G 1.35v

+RAM needs to be compatible with motherboard.
?Q. Should I look to get LV/1.35v DDR3 or stick to the regular stuff? [I've heard LV DIMMs can't be used in dense configurations, as in fewer modules per channel]

Parts & Cables
$10 - $05 x 2 MCP-220-84601-0N Supermicro 3.5" System hard drive tray
- 1 or 2 heatsink brackets, in case the M/B doesn't come with one
- 1 x 12V 4-pin power cable for the SuperMicro SC846E16-R1200B PSU
- 1 x SAS-8087 or 4SATA-mSAS reverse breakout cable, depending on what kind of SAS2 controller ports the M/B has

P.S. Please forgive any typos or omissions; it's very late here.
