madilyn
User

1 posts

Posted on 19 February 2014 @ 23:58 (edited 23:59)
So I understand the issues with consumer SATA drives are:

1. Driver problems when put behind a SAS interposer and expander.

2. Typically lack a supercapacitor for flushing the write cache on power loss.

3. Very little overprovisioning.

4. Usually short life spans, unsuitable for a 24/7 duty cycle.


Will I be fine using consumer SATA drives on my ZFS server if I take these steps to mitigate the problems?

1. Remove the SAS expander(s) from my server chassis and attach the SSDs directly to the SAS HBAs internally (no separate head node and JBOD chassis) using SFF-8087 to 4x SATA fanout cables.

2. Use redundant power supplies and UPSes.

3. Partition the drives to use only 80% of the actual storage space (see the sketch after this list).

4. Get high-MTBF drives, e.g. the Samsung 840 Pro at 1.5 million hours MTBF.
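
For step 3, roughly what I have in mind on FreeBSD (device names, labels and sizes are just examples; a 512GB drive partitioned to about 400GB):

    # wipe and partition one SSD to roughly 80% of its capacity
    gpart destroy -F ada1
    gpart create -s gpt ada1
    gpart add -t freebsd-zfs -a 1m -s 400g -l ssd1 ada1
    # repeat for the other SSDs and build the pool from the gpt/ssd* labels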

The only problem I foresee with my setup is that it's a hassle to hot-swap the drives if they are attached using fanout cables.
aaront
User

75 posts

Posted on 20 February 2014 @ 16:03
I just installed 4x Samsung 1TB EVOs in a RAID10 using an LSI HBA, no expanders.
I also do hourly snapshots and send them to spinning disks in the same box as well as to a secondary ZFSguru box. I also use VDP to back up my virtual machines daily, as this pool is only used to run VMs.
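Roughly what the hourly job does (pool, dataset and host names here are just examples, not my exact script):

    # take the hourly snapshot
    zfs snapshot ssd/vms@2014-02-20_16h
    # incremental send to the spinning-disk pool in the same box
    zfs send -i ssd/vms@2014-02-20_15h ssd/vms@2014-02-20_16h | zfs recv backup/vms
    # same increment again to the second ZFSguru box
    zfs send -i ssd/vms@2014-02-20_15h ssd/vms@2014-02-20_16h | ssh zfsguru2 zfs recv backup/vms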
The scary thing about RAID10 with SSDs is that, in theory, all the drives should fail at about the same time, since they are all getting about the same writes. So replace failed drives ASAP and, if possible, try to mix older and newer drives in each mirrored pair.

I'll let you know how it goes.
aaront
User

75 posts

Posted on 14 March 2014 @ 20:32
So I now have about 20 VMs running over NFS to my SSD pool. I'm using Veeam ONE Monitor (free) to watch latency and usage of the pool. My read latency is basically 0, with small spikes to 5 or 10ms. Write latency had me a little concerned, as it averages 25ms and spikes between 15 and 35ms. Also, if I ran VM backups or other write-intensive operations, I could actually get my NFS connection to lock up for about 2 minutes. That is very, very bad.

Nothing in my settings popped up to bother me, so I just added a spare Intel 320 80GB I had lying around as an SLOG to the SSD pool, and my write latency immediately dropped by half to an average of 12ms. I'm going to do some big copies and see what happens. I'll also look for any NFS or network settings, but I'm leaning toward blaming my Netgear smart switches. They have been having issues lately, and I know that ESXi/NFS/ZFS can be touchy.
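
For reference, adding the log device was just this (device name is an example, mine is GPT-labelled):

    # attach the spare Intel 320 as a separate log device (SLOG)
    zpool add ssdpool log gpt/intel320
    zpool status ssdpool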
aaront
User

75 posts

Posted on 14 August 2014 @ 00:10
Update: once I turned off TSO on my stupid X540 10G NICs, write latency also dropped to a much happier 2-3ms.
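
For anyone else hitting this, on the ZFSguru/FreeBSD side it's just this (ix0 is the first X540 port on my box, adjust for yours):

    # disable TCP segmentation offload on the interface
    ifconfig ix0 -tso
    # or switch it off globally
    sysctl net.inet.tcp.tso=0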
CiPHER
Developer

1199 posts

Posted on 14 August 2014 @ 11:20
Weird issue, aaront. You can try installing the latest 11.0-002 system image to see if FreeBSD 11-CURRENT has already fixed your issue. Regardless of the outcome, you can return to your current installation on the System->Booting page. Make sure you are running beta10 before doing this.
DigiDaz
User

2 posts

Posted on 17 September 2014 @ 11:29
aaront: How are you getting on with your 1TB EVOs now? I'd normally be wary of anything other than, say, the Intel DC S3500 series or better, having been stung a couple of years ago by Crucial V4s that lasted less than a month.

I have had a couple of 840 Pros in use for the last year or so in non-ZFS systems and have had no problems.

I'd like, if possible, to use 1TB EVOs in striped mirrors and then replicate to a second box via HAST over 10GbE.
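
Roughly what I'm picturing per drive, based on the handbook (hostnames, addresses and device paths are placeholders):

    # /etc/hast.conf on both boxes
    resource evo1 {
            on boxa {
                    local /dev/gpt/evo1
                    remote 10.10.10.2
            }
            on boxb {
                    local /dev/gpt/evo1
                    remote 10.10.10.1
            }
    }

Then hastctl create evo1 on both boxes, start hastd, make one side primary, and build the pool on the /dev/hast/evo1 device on the primary.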

TIA
aaront
User

75 posts

Posted on 8 October 2014 @ 03:11
My 4x 1TB EVOs in a RAID10 (mirrored pairs) are still running great. I do have them fronted with a 200GB Intel S3700 SLOG to help with wear.
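
For keeping an eye on the wear itself I just check SMART now and then, something like this (ada1 is one of the EVOs, needs smartmontools installed):

    # Samsung drives report wear in these two attributes
    smartctl -A /dev/ada1 | egrep 'Wear_Leveling_Count|Total_LBAs_Written'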

I am going to look into firmware upgrades at some point to address the EVO read-performance issue linked below, but I don't see any speed problems in my setup:
http://www.anandtech.com/show/8570/firmware-update-to-fix-the-samsung-ssd-840-evo-read-performance-bug-coming-on-october-15th

According to Veeam ONE, my read latency on the datastore is under 1ms and the write latency is about 2-3ms.