garfin
User

21 posts

Posted on 21 September 2016 @ 08:28 (edited 12:25)
Hi all,

Hoping (praying) I'm not dead in the water here... I've had a rather ugly power failure, and when ZFSGuru tried to come back up, I found the console stuck at the message

Solaris: WARNING: can't open objset for pool0/iSCSI

and not progressing any further.
ZFSGuru does not respond to ping or remote SSH at this early stage of the boot sequence.

I'm not too concerned about losing the filesystem pool0/iSCSI, but I have other filesystems shared out of pool0 that I really can't afford to lose.
For starters, how can I get in (at the console) and prevent it from trying to auto-mount these??


*edit* - I am able to boot the server from a ZFSGuru 'live' USB stick and can see all the physical disks, including the mirrored pair that houses the 'production' ZFSGuru syspool installation. Attempting an import of pool0, however, causes major disk I/O followed eventually by the machine hanging due to lack of swap.


[root@zfsguru ~]# zpool import
   pool: pool0
     id: 10513507314793354016
  state: ONLINE
 status: One or more devices were configured to use a non-native block size.
         Expect reduced performance.
 action: The pool can be imported using its name or numeric identifier.
 config:

        pool0                 ONLINE
          raidz2-0            ONLINE
            gpt/raidz2-disk1  ONLINE
            gpt/raidz2-disk2  ONLINE
            gpt/raidz2-disk3  ONLINE
            gpt/raidz2-disk4  ONLINE
            gpt/raidz2-disk5  ONLINE
            gpt/raidz2-disk6  ONLINE

   pool: syspool
     id: 13857589231523862335
  state: ONLINE
 status: The pool is formatted using a legacy on-disk version.
 action: The pool can be imported using its name or numeric identifier, though
         some features will not be available without an explicit 'zpool upgrade'.
 config:

        syspool                ONLINE
          mirror-0             ONLINE
            gpt/syspool-disk1  ONLINE
            gpt/syspool-disk2  ONLINE
[root@zfsguru ~]#


- Regards
karmantyu
User

131 posts

Posted on 21 September 2016 @ 13:44 (edited 13:45)
You could try to get a USB drive with good read/write access times and speeds, and install ZFSGuru on it. That way you don't have to write anything to the troubled pool. I've run ZFSGuru (not live) for years from a single USB drive and had no issues with it. Mauritio has a how-to if you're interested:
http://zfsguru.com/forum/zfsgurudevelopment/147
garfin
User

21 posts

Posted on 21 September 2016 @ 14:07 (edited 14:16)
Hi, I've actually got some more spare 80 GB drives lying around, which would allow me to swap out these two

syspool                ONLINE
  mirror-0             ONLINE
    gpt/syspool-disk1  ONLINE
    gpt/syspool-disk2  ONLINE

and then install a fresh ZFSGuru syspool onto a 'new' mirror-0 from the live USB.

The question is, once I've done that, what's next?
How would I import/repair pool0, or more importantly, the other filesystems in pool0 other than pool0/iSCSI? (I'm assuming, of course, that the other filesystems within pool0 are still OK.)

OR, is it safe to import the current (above) syspool within a 'live CD' booted environment? And if I do that, is there any config related to the auto-mounting of filesystems (particularly pool0/iSCSI) that I can then access and comment out?
karmantyu
User

131 posts

Posted on 21 September 2016 @ 15:13 (edited 15:16)
Sorry garfin, maybe this is beyond my expertise. I would simply install a new ZFSGuru OS on a new medium (USB, HDD or SSD) and boot from it. Then I would import the pools from the ZFSGuru web interface. That will not write to your pools, only to your new system (OS) medium. Resilvering will write to the pools if necessary.
garfin
User

21 posts

Posted on 22 September 2016 @ 01:32 (edited 05:52)
*Update*

Hi, so here's what I'm now in the middle of...
1. Pulled the old syspool mirror drives from the system and replaced them with another two.
2. Installed Root-on-ZFS.
3. From the console, ran the following command:
zpool import -f -N pool0

which should now, hopefully, be importing pool0 without mounting any of its filesystems.
manpage: https://www.freebsd.org/cgi/man.cgi?query=zpool&sektion=8
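
Assuming the import completes, my understanding is that nothing will be mounted, and I can then bring the datasets up one at a time and keep the suspect one from mounting at all. A rough sketch (every dataset name here other than pool0/iSCSI is made up):

# after 'zpool import -N', nothing is mounted; mount the good datasets one by one
zfs mount pool0/photos      # example dataset name
zfs mount pool0/movies      # example dataset name
# and stop the suspect filesystem from auto-mounting on future imports
zfs set canmount=noauto pool0/iSCSI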

Disk activity is high; unsure exactly how long it will take (6 x 3TB in RAIDZ2), but I'm guessing a number of hours.

Hmm, disk I/O seemed to die after around 40 minutes. The console stopped responding at that time too, as did the web UI and SSH access.

The messages file points to running out of swap space.
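
If I have to kick this off again, I'll keep a second session open to watch memory before everything locks up; something like:

# from a second SSH session, watch swap usage and catch the OOM kills as they happen
tail -f /var/log/messages &
while true; do date; swapinfo; sleep 60; done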
garfin
User

21 posts

Posted on 22 September 2016 @ 05:19 (edited 05:46)
*Update*
Hi, so I'm now running official system image 10.4.007 featuring FreeBSD 10.3-STABLE with ZFS v5000,
installed to the new mirrored 80 GB HDDs, with 32 GB of swap.
*fingers crossed*
garfin
User

21 posts

Posted on 22 September 2016 @ 06:41
*Update* Now I am getting worried... years' worth of family pics and home movies, not to mention the 'other' media collections, gone/unrecoverable because I can't get pool0 to import without it dying some 38 minutes later? (Curious that changing from 2 GB of swap to 32 GB made no apparent difference to the import timing.)


Sep 22 03:46:03 zfsguru su: ssh to root on /dev/pts/0
Sep 22 03:49:05 zfsguru login: ROOT LOGIN (root) ON ttyv1
Sep 22 03:49:52 zfsguru devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=10513507314793354016 vdev_guid=6608290585685258577''
Sep 22 03:49:52 zfsguru ZFS: vdev state changed, pool_guid=10513507314793354016 vdev_guid=6608290585685258577
Sep 22 03:49:52 zfsguru devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=10513507314793354016 vdev_guid=16298454112162294791''
Sep 22 03:49:52 zfsguru ZFS: vdev state changed, pool_guid=10513507314793354016 vdev_guid=16298454112162294791
Sep 22 03:49:52 zfsguru devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=10513507314793354016 vdev_guid=15906249964597420487''
Sep 22 03:49:52 zfsguru ZFS: vdev state changed, pool_guid=10513507314793354016 vdev_guid=15906249964597420487
Sep 22 03:49:52 zfsguru devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=10513507314793354016 vdev_guid=275378216303051017''
Sep 22 03:49:52 zfsguru ZFS: vdev state changed, pool_guid=10513507314793354016 vdev_guid=275378216303051017
Sep 22 03:49:52 zfsguru devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=10513507314793354016 vdev_guid=3806704286862671958''
Sep 22 03:49:52 zfsguru ZFS: vdev state changed, pool_guid=10513507314793354016 vdev_guid=3806704286862671958
Sep 22 03:49:52 zfsguru devd: Executing 'logger -p kern.notice -t ZFS 'vdev state changed, pool_guid=10513507314793354016 vdev_guid=3543897220293309887''
Sep 22 03:49:52 zfsguru ZFS: vdev state changed, pool_guid=10513507314793354016 vdev_guid=3543897220293309887
Sep 22 04:27:50 zfsguru kernel: pid 1025 (zpool), uid 0, was killed: out of swap space
Sep 22 04:27:51 zfsguru kernel: pid 889 (php-cgi), uid 888, was killed: out of swap space

garfin
User

21 posts

Posted on 22 September 2016 @ 09:34
Discovered the zdb command...

Trying:

zdb -AAA -e -p /dev -F pool0
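
# my reading of those flags, from zdb(8):
#   -AAA  don't abort on failed assertions, and enable panic recovery
#   -e    operate on an exported pool (no zpool.cache entry)
#   -p    device search path to use with -e
#   -F    try progressively older transactions to reach a readable state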

This could take days to run...
karmantyu
User

131 posts

Posted on 22 September 2016 @ 10:42 (edited 10:45)
This swap problem seems to be something kernel- or swap-configuration related. If you have a lot of RAM, then you can try adding RAM-backed swap:
# mdconfig -a -t malloc -s 8g -u 0
# swapon /dev/md0
That's 8 GB of swap.
If you're running low on RAM, you can also pull one of the disks the system mirror is installed on and use it to create a huge swap area on a new ZFS pool, and see what happens then.
You can also experiment with a read-only import or a rewind dry run:
# zpool import -o readonly=on pool0
# zpool import -Fn pool0
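
If you go the new-pool route, a swap zvol may be simpler than a swap file; a sketch of what I mean ('newpool' and the 32G size are examples only):

# create a dedicated swap zvol on the new pool and enable it
zfs create -V 32G -o compression=off -o checksum=off newpool/swap
swapon /dev/zvol/newpool/swap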

garfin
User

21 posts

Posted on 25 September 2016 @ 00:43
*Update*

zdb -AAA -e -p /dev -F pool0

Still progressing @ 46 MB/s, 44 hours to go...
garfin
User

21 posts

Posted on 26 September 2016 @ 00:02
*Update*
Died with a handful of hours left to go... out of swap. Grr. Now going to "pull one of the mirror disks the system is installed on and use it to create a huge SWAP file on a new ZFS pool and see what happens then" :)
garfin
User

21 posts

Posted on 27 September 2016 @ 00:06
*Update*
Nothing I tried (running later versions of FreeBSD) worked, so I've had to resort to going back to FreeBSD 9.1 so I can forcibly mount the volume without running out of system RAM (on FreeBSD 9.2 onwards, all my RAM would eventually be consumed by 'Wired' memory). Data copies to elsewhere are now underway. Cheers.
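
(For anyone who hits the same 'Wired' memory blowout: capping the ARC before attempting the import might help. These are standard FreeBSD loader tunables; the values below are examples only, size them to your RAM.)

# /boot/loader.conf -- cap ZFS ARC memory use before attempting the import
vfs.zfs.arc_max="4G"
vfs.zfs.arc_meta_limit="2G"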