blimpyboy
User

24 posts

Posted on 23 November 2016 @ 14:53
I have a working ZFSGuru 0.1.8 (FreeBSD 8.2-STABLE) system installed on a ZFS root pool 'root-pool' which is a 2-disk mirror using two 160GB drives:
drive1: 160GB gpt/mirror1
drive2: 160GB gpt/mirror2

# zpool status root-pool
pool: root-pool
state: ONLINE
config:

NAME              STATE   READ WRITE CKSUM
root-pool         ONLINE     0     0     0
  mirror-0        ONLINE     0     0     0
    gpt/mirror1   ONLINE     0     0     0
    gpt/mirror2   ONLINE     0     0     0

errors: No known data errors

I know the ZFSGuru version is pretty old, but it's a complicated build and it's running just perfectly. The system was pretty full so I thought I'd replace both drives with two new 500GB drives:
drive3: 500GB gpt/mirror3
drive4: 500GB gpt/mirror4

I initialised both drives in gpt format and then did the following:

# zpool attach -f root-pool gpt/mirror2 gpt/mirror3
# zpool attach -f root-pool gpt/mirror2 gpt/mirror4
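
(For anyone following along, the GPT initialisation boils down to something like the following on the command line - adaX is just a placeholder for the real device, and the freebsd-boot partition plus bootcode lines only matter because the new disks also need to be bootable:)

# gpart create -s gpt adaX
# gpart add -t freebsd-boot -s 512k adaX
# gpart add -t freebsd-zfs -l mirror3 adaX
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 adaX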

Once the new drives had resilvered, the pool looked like this:

# zpool status root-pool
pool: root-pool
state: ONLINE
config:

NAME              STATE   READ WRITE CKSUM
root-pool         ONLINE     0     0     0
  mirror-0        ONLINE     0     0     0
    gpt/mirror1   ONLINE     0     0     0
    gpt/mirror2   ONLINE     0     0     0
    gpt/mirror3   ONLINE     0     0     0
    gpt/mirror4   ONLINE     0     0     0

errors: No known data errors

I then detached the original disks:

# zpool detach root-pool gpt/mirror1
# zpool detach root-pool gpt/mirror2

and rebooted - all is fine... except the pool still shows its size as 160GB! I then set 'autoexpand' on:

# zpool set autoexpand=on root-pool

and rebooted again, but pool is still showing as 160GB, even though new drives are 500GB:

# zpool list root-pool
NAME        SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
root-pool   149G   135G   14G  90%  1.00x  ONLINE  -
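
For what it's worth, checking the property with:

# zpool get autoexpand root-pool

does report it as 'on', so the setting itself has taken - the pool just hasn't grown.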

I'm probably just missing something very simple, but how do I force ZFS to 'see' the extra space on the new drives/pool and allow me to use it for data?


I'd appreciate any light anyone can shed on this.
CiPHER
Developer

1199 posts

Posted on 23 November 2016 @ 17:03
I remember this being done with a 'zpool online -e' command that you issue for each disk.
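
From memory the syntax is something like:

# zpool online -e root-pool gpt/mirror3
# zpool online -e root-pool gpt/mirror4

where -e tells ZFS to expand the device to use all of its available space.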

I will look into it in more detail this evening.
blimpyboy
User

24 posts

Posted on 23 November 2016 @ 20:30
Yep, this did the trick for me:

# zpool online -e root-pool gpt/mirror3 gpt/mirror4
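
A quick check with:

# zpool list root-pool

confirms the extra space is now available.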

Btw, very reassuring to see you are still with us!
blimpyboy
User

24 posts

Posted on 24 November 2016 @ 10:19
One last question:

To tidy things up, I carried out a 'Zero-write partition areas' on the two old drives that I had detached from the root pool and then rebooted, only to find that the server screen showed 'No boot device'. Initially, I thought perhaps I should have copied the boot area from the old drives to the new ones before zapping them - too late now, I thought! Anyway, I powered down, unplugged the power & SATA cables from both old drives and tried again. This time everything booted up fine :-)
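
(As I understand it, that zero-write option is roughly the command-line equivalent of wiping the partition table, e.g. something like # gpart destroy -F adaX, so the old drives were left with no partitions and no boot code.)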

The ZFSGuru system is running perfectly again with the new mirrored pair of drives, so this isn't essential, but if anyone can explain the finer details of the above scenario, I'd be delighted to hear from you.
CiPHER
Developer

1199 posts

Posted on 25 November 2016 @ 01:17
"I carried out a 'Zero-write partition areas' on the two old drives that I had detached from the root pool and then rebooted only to find that the server screen showed 'No boot device'."

Was this with the old disks still connected? You should remove them from the system, or configure the boot sequence in the BIOS/UEFI so that it boots from the new hard drive. For the first boot, I suggest using the boot menu, which is usually behind the F9/F10/F11 key. Once you know which disk boots properly, configure that disk as the first boot disk in the BIOS.

So it sounds to me like the BIOS was trying to boot from the zero-written disks, because you say the system boots fine once you unplugged them. With the old disks out of the way, the BIOS falls back to one of the two new mirror disks, which should both be bootable.
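
If you want to double-check that both new disks really carry boot code, something like this should do it (replace adaX with the actual device names; the -i 1 assumes the freebsd-boot partition is the first partition on the disk):

# gpart show adaX
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 adaX

The first command shows whether a freebsd-boot partition exists; the second rewrites the protective MBR and the GPT ZFS boot loader.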
blimpyboy
User

24 posts

Posted on 25 November 2016 @ 19:10
Once I'd detached the old drives (but not wiped them) things started up fine after rebooting. I then zero'd out the partition areas of both detached drives and after a further reboot got the 'no bootable disk' error so something was trying to boot from one of the (still attached) blank drives. I didn't change anything in the BIOS settings at that point, I simply physically removed them and restarted again, after which the system booted fine.

I guess what is happening is that even though the new mirror drives were bootable, the BIOS saw the old drives were still there and tried to boot - that failed as I'd already blown the boot area. Once I'd removed the old drives, the BIOS must have then looked for another bootable drive and found one of the two new mirrors. Interesting that I had to physically remove the old blank drives in order for the BIOS to look past the drive it would have booted from in the past.

No major deal as the system boots fine when only the two new drives remain - just interesting to understand why the BIOS doesn't simply look at one of the two new drives when the old ones have been zero'd but still remain physically attached.