hgeorgescu
User

41 posts

Posted on 19 January 2015 @ 14:48
Hello;
This is my second post in this forum. I'm hoping to become a new ZFSguru user, but I'm having significant issues, some of which are perhaps due to my lack of understanding of how things work, or are supposed to work.
My hardware is PC grade: an AMD-based motherboard, an 8-core CPU, 16 GB of ECC RAM, an LSI MegaRAID card in IT mode, 8 x 3TB hard drives, and 2 x Crucial M500 240GB SSDs.
The hard drives are of different models, purchased at different times.
Two are NAS drives (WD and Seagate); the other six are regular desktop drives (5 Toshiba, 1 Seagate).
The two NAS drives are formatted with reiserfs (mirrored) and temporarily offline, until I succeed in building the rest of the system around them and can transfer the data over onto ZFS.

The plan: create 3 pools:
one for OS and apps (root), jails etc., running on the SSDs;
second, a large RAID-Z2 pool for media, using those 6 desktop drives;
third, a smaller ZFS mirror pool with those 2 NAS drives, for more frequently accessed data.
I'm also planning to run deduplication and compression on those pools.

Initially I wanted to partition the SSDs and use them for root, swap, ZIL and L2ARC.
I found two references to this in postings on this forum: one stating that it is possible, the other advising against it due to uncertain results/problems down the road. Since I didn't know how to accomplish and test that scenario from the ZFS menu, I left that idea for now and tried allocating the whole SSD to the install, letting ZFSguru decide.

After a couple of tries, I reformatted the drives, destroyed the pools and started fresh. I created the SSD pool, installed the OS (LiveCD (recommended) 10.1-002) and rebooted.
After the reboot the system came up fine, but the web interface gave an error when accessing the Pools menu. I searched the forum and found out that I should have upgraded the web interface. I did that and everything was fine.

After the second reboot, and fixing the web interface, I proceeded to create my second pool (RAID-Z2) and install some of the services. I wanted to install MediaTomb, but that indicated that I needed a number of other things installed, among which X, the GNOME desktop etc. I didn't expect to have to install those, but did so anyway.
X and GNOME didn't want to start from the management interface, and I decided to reboot the system in the state it was in, hoping that a reboot might fix the problem.

I never got that far, however, because the system doesn't boot anymore and is now looking for the boot image on the second pool I created. The error message:
Can't find /boot/zfsloader
FreeBSD/x86 boot
Default: tank1:/boot/kernel/kernel
boot:
\
Can't find /boot/kernel/kernel

I suspect the second pool creation hijacked the boot disk (tank0) somehow, but I don't know why, or what to do to fix it.

Based on the experience so far (aside from fixing the boot problem), I would really benefit from an installation roadmap for the various pieces. Is anyone among the advanced users willing to share what you did to get your system up?

Thanks much for your help.

CiPHER
Developer

1199 posts

Posted on 19 January 2015 @ 20:01
Hi,

I recommend against using deduplication on large datasets. Use LZ4 compression instead.

As for your boot problems: did you create a v5000 pool? Create your boot pool as v28. Create your data pool as v28 first as well, then upgrade it to v5000 by enabling LZ4 compression.
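Roughly, what that boils down to at the command line - the pool name and disk devices below are just examples; the web interface does this for you:

zpool create -o version=28 tank1 raidz2 da0 da1 da2 da3 da4 da5   # data pool, created as legacy v28
zpool upgrade tank1               # bumps the pool to v5000 (feature flags)
zfs set compression=lz4 tank1     # needs the lz4_compress feature, hence the upgrade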

Anyway, with ZFSguru you can certainly do what you initially suggested. Try the following:

- boot from the LiveCD and log in to the web interface
- choose Skip Wizard at step 1 (step 0 is the welcome screen)
- upgrade to beta10 on the System->Upgrade page
- go to the Pools page and import all pools
- destroy all pools, or if you already have data on your 'hdd' pools then leave them alive, but kill the pools on the SSDs
- go to the Disks page and select your first SSD
- delete all partitions from it by selecting a partition and choosing to destroy it. Repeat.
- do not delete the boot partition; leave it alone. If you have already deleted it, delete the partition scheme as well (an option which becomes visible once no partitions are left); then you can initialise the disk again, which creates a new boot partition.
- create a new partition on the SSD, the first partition not counting the boot partition. You can enter its size. You have 2x 240GB, so a good choice could be the layout below (a rough command-line sketch follows it):

2x 80GB ZFSguru system partition (boot, addons, downloads)
2x 10GB dedicated swap
2x 4GB sLOG (dedicated ZIL) for HDD pool 1, running in mirror across the two SSDs
2x 4GB sLOG (dedicated ZIL) for HDD pool 2, running in mirror across the two SSDs
2x 40GB L2ARC for HDD pool 1
2x 30GB L2ARC for HDD pool 2
2x ~60GiB left unpartitioned (overprovisioning)
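If you prefer the shell, the partitioning for one SSD would look roughly like this - device names, labels and exact sizes are just examples; the Disks page does the same thing:

# assumes the GPT scheme and boot partition are already in place on ada0
gpart add -s 80G -t freebsd-zfs  -l ssd0-sys    ada0
gpart add -s 10G -t freebsd-swap -l ssd0-swap   ada0
gpart add -s 4G  -t freebsd-zfs  -l ssd0-slog1  ada0
gpart add -s 4G  -t freebsd-zfs  -l ssd0-slog2  ada0
gpart add -s 40G -t freebsd-zfs  -l ssd0-l2arc1 ada0
gpart add -s 30G -t freebsd-zfs  -l ssd0-l2arc2 ada0
# repeat on ada1 with ssd1-* labels; the remainder stays unpartitioned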

It is important not to partition your new SSDs beyond 85% of their capacity (i.e. keep at least 15% reserved for overprovisioning). You can even go up to 50% to improve the performance and lifespan of your SSD. The effect is strongest at the low end, so going from 5% to 15% is a huge step forward; from 15% to 25% the effect is slightly smaller, and so on. For L2ARC you need more overprovisioning than usual: think 25%-40%. (On a 240GB SSD, 25% works out to about 60GB left unpartitioned, which matches the layout above.)

- after having partitioned both your SSDs, create the system pool on them. Create the pool as v28.
- perform an installation on the pool. Note that you can have multiple installations of ZFSguru across multiple pools and manage them on the System->Booting page. Only one may be active at boot time. But you can use the LiveCD to import the system pool and change the active installation if you accidentally crashed an installation and want to revert to the old one.
- create a snapshot on the system pool if you like; then you can always revert to the freshly installed state without having to reinstall. You may skip this, but it's on the Files->Snapshot page.
- boot into your new installation

Later on you can add the sLOG and L2ARC devices to your pools. But first I suggest you install all the services you like and test whether booting works. If you install too much and your pool fills up, you may crash your installation, so the 80GB in the example partition layout above should be well enough to cover your basic needs of multiple installs and addons. Separate swap helps too, but it still needs to be activated before it is used.
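For reference, the command-line equivalent of those later steps would look roughly like this, using the example labels from the sketch above (pool names are placeholders as well):

zpool create -o version=28 ssdpool mirror gpt/ssd0-sys gpt/ssd1-sys   # mirrored system pool on the SSDs
zpool add tank1 log mirror gpt/ssd0-slog1 gpt/ssd1-slog1              # mirrored sLOG for HDD pool 1
zpool add tank1 cache gpt/ssd0-l2arc1 gpt/ssd1-l2arc1                 # L2ARC (striped) for HDD pool 1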

Hope this guide helps you start off a reasonably serious home build. Let me know if you need more help. :)
hgeorgescu
User

41 posts

Posted on 19 January 2015 @ 23:01 (edited 23:09)
Hello Cipher;
Thanks very much for the extensive explanations.
I need to get back to it asap and see how good of an apprentice I can be. :)

As for the ZFS version - I used v28, as that was what the wizard recommended. I was indeed somewhat baffled when I saw how many versions came after that, but didn't venture into upgrading to the latest.
Regarding the SSD usage, the M500 already has a reserve (roughly 16GB, I think - the difference between 256GB and 240GB). But to be on the safe side I can certainly give it more slack.
Citation: "In the M500, the RAIN stripe is set at 15:1, which monopolizes 17GB of flash on the 240GB version", from here: http://techreport.com/review/26170/crucial-m550-solid-state-drive-reviewed.

One final question though, as I'm not yet sure how I ended up with the boot problem.
I didn't change the boot disk (at least not intentionally). I had everything running on tank0 (the mirrored SSDs), and the system booted fine. I then created the second pool (tank1), but have not used it for anything yet. I just tried the APM thing, and left it alone after several tries. It is now clear from your reply in the other thread that it won't work from the web interface. Thank you! I'll use camcontrol for that.
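(Presumably something along these lines - the device name and APM level below are guesses on my part:)

camcontrol apm ada2 -l 254   # example only: set APM level 254 (minimal power management) on drive ada2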

Somewhere in the background, however, the system decided to make that change on my behalf, and it didn't boot anymore afterwards (there was obviously no boot image on tank1).
Any thoughts on what might have happened?

Thank you,
CiPHER
Developer

1199 posts

Posted on 20 January 2015 @ 14:14
The Crucial M500 has a 15:1 ratio (1/16) and the M550/MX100 have a 127:1 ratio (1/128). This concerns the RAIN bit-correction technology used to correct 'bad sectors' (uBER). Think of it as RAID5 protection for your SSD. Most older SSDs use their NAND in plain RAID0 fashion (interleaving) without parity correction; the parity variant is frequently given a marketing name like 'RAIN' or 'RAISE', an adaptation of 'RAID'. (With a 15:1 stripe, roughly one sixteenth of the raw flash goes to parity - about the 16-17GB gap between 256GB and 240GB.)

So an M500 SSD does not have extra overprovisioning just because it is sold as 240GB; that capacity difference has solely to do with the RAID5-like protection mechanism. This means you still need a good amount of space for overprovisioning: 25-40% is recommended if you are going to use your SSDs as L2ARC cache devices.

I do not know exactly what caused your installation to fail. Simply filling the installation up so that no free space is available can be enough to wreck it.

Just start with your installation and install all the addons you like, then leave the system disk alone and go ahead with configuring your other pools and adding the SSD partitions as L2ARC/sLOG devices to the HDD pools.
hgeorgescu
User

41 posts

Posted on 21 January 2015 @ 02:58 (edited 15:23)
Thank you, Cipher. Fantastic help.

So far I've been able to follow the first part of your instructions. I may need to do the second part the next evening, as I need to attend to my other household duties. Here is what I've accomplished so far:

I have a mirrored 80GB pool (those two partitions) with a fresh new installation.

I chose to disable the zfsswap option at creation.
I'm not sure if I should have, but from reading around the internet it seemed the best thing to do.

I created two independent swap partitions, but I didn't know what to do with them from the web interface. I went in circles several times and gave up.

I searched more on the internet, found how to do this from the command line (https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/Mirror), deleted the partitions, and then recreated and added them from a terminal session (SSH):
zfsguru# gpart add -s 10G -t freebsd-swap -l swap0 ada0
zfsguru# gpart add -s 10G -t freebsd-swap -l swap1 ada1
zfsguru# kldload /boot/kernel/geom_mirror.ko
zfsguru# gmirror label -b prefer swap gpt/swap0 gpt/swap1

added the mirrored swap device to /etc/fstab:
/dev/mirror/swap none swap sw 0 0
added the geom_mirror module to the boot sequence:
geom_mirror_load="YES"
in /boot/loader.conf
rebooted, and everything is running fine so far.
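(For anyone following along, the swap mirror can be checked after a reboot with something like:)

gmirror status   # should list mirror/swap as COMPLETE with gpt/swap0 and gpt/swap1
swapinfo -h      # should show /dev/mirror/swap as the active swap device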
Finally I also took a snapshot of the root filesystem, and I feel safe :).

I'll continue with the installation tomorrow, and what I want is to install either Plex or MediaTomb. Should I try to install these in a jail - is that possible? What do you recommend, to keep things simple and safe?

One question regarding disaster recovery - I should actually try this before I go too far with the installation... What happens to my system if one of the SSD drives fails?
In the final configuration I will have all those partitions mirrored, I suppose.
How will the system handle one disk going kaput?

I'm also including the messages file from this new installation, here: https://www.dropbox.com/s/85pn4bv77krzqdg/messages?dl=0
When you have a chance, please have a look - there are a few odd messages at boot time which I'm not sure what to do about, if anything:
Jan 21 00:56:26 zfsguru kernel: acpi_throttle1: <ACPI CPU Throttling> on cpu1
Jan 21 00:56:26 zfsguru kernel: acpi_throttle1: failed to attach P_CNT



Thank you for your help.
hgeorgescu
User

41 posts

Posted on 21 January 2015 @ 12:16 (edited 15:25)
Hello, the content of this thread has gone beyond the initial issue.
I'm not sure what the best way is to document things further, for everyone's benefit. Perhaps under a different title in the "Build your own ZFS server" section?

Continuing briefly on the issue signalled above.
Per https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=180562 there was a kernel bug - "amdtemp and ACPI not working with motherboard ASUS M5A97 PRO" - which later code (FreeBSD Release 10) should have addressed. My motherboard is an ASUS M5A99X EVO R2.0, but perhaps a similar issue persists with it.

That bug was dated some time in 2013, and the kernel used by ZFSguru is based on Release 10, so I'm not sure how well it was addressed.

Digging further into the internet, I found this discussion: http://permalink.gmane.org/gmane.os.freebsd.devel.acpi/8554
That prompted another couple of changes in /boot/loader.conf, disabling ACPI throttling and P4TCC:
hint.acpi_throttle.0.disabled=1
hint.p4tcc.0.disabled=1

With that, the next boot didn't show those ACPI errors anymore.

Thank you.