Marcelluz
User

14 posts

Posted on 19 June 2016 @ 22:30 (edited 22:44)
Today I had to reboot ZFSguru, but after the reboot the system did not come back up. When I looked into the cause, I found that ZFSguru was constantly rebooting. So I reinstalled ZFSguru. That all went fine, but when I try to import my main pool, the server just halts and reboots.

My system:

ZFSguru system image 10.2.003 featuring FreeBSD 10.2-RELEASE-p1 with ZFS v5000
Running Root-on-ZFS distribution.

The pool looks OK to me:

pool: dominion
id: 2080091093116710523
state: ONLINE
status: One or more devices were configured to use a non-native block size.
Expect reduced performance.
action: The pool can be imported using its name or numeric identifier.
config:

dominion        ONLINE
  raidz2-0      ONLINE
    gpt/disk01  ONLINE
    gpt/disk02  ONLINE
    gpt/disk03  ONLINE
    gpt/disk04  ONLINE

What can I do to resolve this?

Note: I have successfully imported another pool (1 disk) on this server, but the dominion pool won't import.


Regards,

Marcel.
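
A commonly suggested first step for a pool that takes the machine down on import (a sketch, not verified against this particular pool) is to import it read-only and without mounting any dataset, so that a damaged dataset cannot crash the box at mount time:

# Import read-only, forcing (-f) and without mounting datasets (-N),
# so nothing is written and nothing is mounted yet.
zpool import -o readonly=on -N -f dominion

# If the import itself succeeds, inspect before mounting anything.
zpool status -v dominion
zfs list -r dominion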
Marcelluz
User

14 posts

Posted on 24 June 2016 @ 10:07
I have tried this:

[ssh@zfsguru ~]$ su
[root@zfsguru /home/ssh]# set zfs:zfs_recover=1
[root@zfsguru /home/ssh]# set aok=1
[root@zfsguru /home/ssh]# zpool import -f dominion

But this crashes the server all the same.. :-(
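
For what it's worth: "set zfs:zfs_recover=1" and "set aok=1" are Solaris /etc/system settings, so typing them in a FreeBSD shell has no effect. On FreeBSD the rough equivalent is a loader tunable (a sketch, assuming the stock vfs.zfs.recover name; an equivalent of aok may not be exposed at all):

# /boot/loader.conf (takes effect at the next boot):
#   vfs.zfs.recover=1

# After rebooting, confirm the tunable was picked up:
sysctl vfs.zfs.recover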

Also, I tried this:

[root@zfsguru /home/ssh]# zdb -e -bcsvL dominion
Assertion failed: (DMU_OT_IS_VALID(dn->dn_phys->dn_type)), file /src/cddl/lib/libzpool/../../../sys/cddl/contrib/opensolaris/uts/common/fs/zfs/dnode.c, line 443.
Abort trap

Nothing seems to work.
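
One thing sometimes suggested for zdb assertion failures like the one above is zdb's -A family of options, which demotes assertions and certain otherwise-fatal errors to warnings. It is read-only and repairs nothing, but it can let the block walk continue far enough to be informative (a sketch; behaviour varies a little between ZFS versions):

# -A ignore assertions, -AA enable panic recovery, -AAA do both.
# -e works on an exported/unimportable pool, -L skips leak checking.
zdb -e -AAA -bcsvL dominion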

The only thing I can think of that may somehow be related is the installation of ClamAV on the system.

Regards, Marcel.
Marcelluz
User

14 posts

Posted on 2 July 2016 @ 14:20 (edited 14:41)
Replying to my own topic: I still have not succeeded in importing my pool dominion. :-( Is there anyone who can help me? Are there related topics? I cannot find any here on this site...

I found this message in the log. I do not know whether it could be related, since da0 is the system disk (USB drive):

ZFS WARNING: Unable to attach to da0.
Marcelluz
User

14 posts

Posted on 7 July 2016 @ 07:49 (edited 08:28)
Well, Ubuntu (16.04 LTS) does not crash on pool import; the zpool import command just hangs forever, no matter whether I import it read-only, with -f, with -f -F, with -d, or whatever else I try. Bye bye to 8 TB of music and sci-fi movie collection. :-( And to documents from the early years of computers. Yes, I managed to keep those for 20 years, and then ZFS came around. Many, many hours of work went into that data, since I copied every CD and DVD from my collection myself.

Well, my trust in this miraculous, secure ZFS was misplaced... It's not THAT secure, I guess. In the real world it has given me more headaches than any other filesystem I have ever used, and the recovery options are seriously lacking; that, at least, is my conclusion. I am sure the data is there, but it is out of my reach. :-(( It's really sad that one apparently needs a degree in computer science to keep one's data safe. And reading websites day in, day out in search of some useful information, only to find out that ZFS is not going to give in? Ridiculous. In my opinion this should NEVER happen: one should always be able to access the disks, even when they are 90% corrupt. But no, ZFS just blocks access and offers no known way to get at the data. How stupid is that? I will keep trying to get access to this annoying, time-consuming filesystem. Grrrrrrrrrrrrrr. Oh yeah, my business data is on there too. Dammit.

Software RAID should be safer: no hardware failures like broken RAID controllers, so one can always access one's data, right? Nope. Just try to import your data on another system when the old one will not boot anymore. This has happened to me several times. OK, the USB media I used for booting ZFSguru were not the right choice, since they corrupted quickly over time. But a secure system should warn the user BEFORE disaster, right? That never happened. It would go like this: the system becomes inaccessible (of course, it hit a corrupt sector), so I reboot it, and then it will not boot anymore. Then one has to figure out what to do without losing access to one's data.

Roaming the internet, I found that I am not the only one with this kind of trouble; a lot of other people have lost their data and their time to ZFS. How cool is that. And you know what is even cooler? Many people who have had no trouble react like this: "ZFS is serving millions without any trouble, you must have bad hardware or be doing something wrong yourself, so don't blame ZFS." Really? How is that going to help? This is a BUG: the system just hangs in a loop and, for some reason, never gets on with importing the data.

Now I am here reading this: http://prefetch.net/blog/index.php/2008/03/01/configuring-zfs-to-gracefully-deal-with-failures/

To be continued..
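
The failmode idea from that article can also be applied at import time: zpool import accepts -o property=value, so the property is already in place during the import instead of being set afterwards (a sketch, untested on this pool; combining it with readonly keeps the attempt non-destructive):

# Set failmode and readonly as part of the import itself; a later
# "zpool set" only runs once the import has already succeeded.
sudo zpool import -o failmode=continue -o readonly=on -f dominion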
Marcelluz
User

14 posts

Posted on 7 July 2016 @ 08:51 (edited 09:37)
sudo zpool import dominion
cannot import 'dominion': pool may be in use from other system, it was last accessed by zfsguru.bsd (hostid: 0xd4ae71c5) on Sun Jun 19 15:06:18 2016
use '-f' to import anyway

Using -f just hangs the command, so what other options are there?

sudo zpool status dominion
cannot open 'dominion': no such pool

Confused...

But when I try to import it, it's there; I know it's there.

sudo zpool import -f dominion && zpool set failmode=continue dominion

Well, it just hangs, with no message about what is happening...

Read some more... zpool import is still hanging... no drive activity...

https://lists.freebsd.org/pipermail/freebsd-stable/2014-October/080505.html

Does anyone know (if anyone is reading this) whether it is a good idea to start deleting "uberblocks", and how to do that safely? Is it even possible? As far as I understand it, one loses a little piece of history every time an uberblock is deleted? Are there walkthroughs available (somewhere) for that?
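
On the uberblock question: rather than deleting uberblocks by hand, the usual approach is zpool import's rewind options, which ask ZFS itself to fall back to an older transaction group. A sketch of the commonly suggested sequence (-X is an undocumented "extreme rewind" and best kept as a last resort, combined with a read-only import):

# Dry run: with -F, -n only reports whether discarding the last few
# transactions would make the pool importable; nothing is changed.
sudo zpool import -fFn dominion

# If that looks promising, rewind during a read-only import without
# mounting any dataset (-N).
sudo zpool import -o readonly=on -fFN dominion

# Last resort: -X searches much further back through old uberblocks.
sudo zpool import -o readonly=on -fFXN dominion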
Marcelluz
User

14 posts

Posted on 7 July 2016 @ 10:41
Here some system specs:

Hardware:
HP Proliant Microserver N40L
8GB of Kingston ECC memory, kit of 2x4GB, double sided. (KVR1333D3E9SK2)
1x Hitachi Deskstar 5K4000 (4TB) (has 1 pool on it, functions)
pool dominion:
2x Samsung Spinpoint F3 (HD203WI) (2TB each)
2x Samsung Spinpoint F3 (HD204UI) (2TB each)
System Disk:
USB 2.0 enclosure with a Seagate Momentus 7200 (ST910021A, 100GB) in it
Marcelluz
User

14 posts

Posted on 9 July 2016 @ 18:21
root@Origin:~# zdb -e -bcsvL dominion

Traversing all blocks to verify checksums ...

error: blkptr at 0x7f1cfc000e40 has invalid COMPRESS 123
Aborted (core dumped)
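
Before considering any rewind, zdb can report the pool configuration and the currently active uberblock (with its txg) without importing anything. A sketch; the device path below is only an example, and flag behaviour differs slightly between ZFS versions:

# Pool configuration as reconstructed from the labels (-e: exported pool).
sudo zdb -e -C dominion

# Currently active uberblock, including its txg and timestamp.
sudo zdb -e -u dominion

# Raw vdev label from one member disk.
sudo zdb -l /dev/sdb1    # example path; use an actual pool member partition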
bitmap
User

26 posts

Posted on 10 July 2016 @ 20:10
I'd keep trying anything non-destructive, but rolling back uberblocks is not something you can undo. The invalid compress error is interesting and may be worth following up on, and so is the "unable to attach to da0" warning.

Have you tried swapping any of the disks to different SATA ports, even if just one of the Samsungs with the Hitachi, to see whether the problems follow one port or one disk? Maybe also try with one of the Samsungs disconnected, as the "hang" could be the system waiting on a read that never returns. Any errors on the SMART status page for any of the disks?
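
On the SMART question: if smartmontools is installed, the same data is available from the command line (a sketch; device names are just examples):

# Health summary plus error/reallocated-sector counters for one disk.
smartctl -a /dev/sdb

# Optionally start a long self-test and check back later with -a.
smartctl -t long /dev/sdb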
Marcelluz
User

14 posts

Posted on 7 September 2016 @ 02:03
Thanks for your reply, bitmap. I have no idea what to do about the invalid compress error. I remember that I changed the compression settings on the pool to gain disk space, so I guess that was not a good idea and could be the reason the pool crashes the system on import? I have not swapped any disks (yet); I am a little afraid that would make things worse. Also, the system reboots immediately, so it is not waiting forever? On Linux it does not reboot, but the terminal just hangs: no matter what I type after that, the command never returns.

I managed to get some information out of the pool, but I think it is not very useful, other than showing that the pool itself is pretty much intact.

Here it is..

marcel@Origin:~$ sudo zdb -eh dominion
[sudo] password for marcel:

History:
2013-10-22.13:55:54 zpool create -f -o version=28 -O version=5 -O atime=off dominion raidz2 gpt/disk01 gpt/disk02 gpt/disk03 gpt/disk04
2013-10-22.13:55:54 [internal pool create txg:5] pool spa 28; zfs spa 28; zpl 5; uts zfsguru.bsd 9.0-RELEASE 900044 amd64
2013-10-22.13:55:54 [internal filesystem version upgrade txg:5] oldver=5 newver=5 dataset = 21
2013-10-22.13:55:54 [internal property set txg:5] atime=0 dataset = 21
2013-10-22.13:55:54 [internal create txg:6] dataset = 37
2013-10-22.13:55:55 zfs create dominion/share
2013-10-22.14:08:18 [internal create txg:166] dataset = 44
2013-10-22.14:08:23 zfs create dominion/audio
2013-10-22.14:08:39 [internal create txg:172] dataset = 51
2013-10-22.14:08:44 zfs create dominion/video
2013-10-22.14:09:24 [internal create txg:181] dataset = 58
2013-10-22.14:09:29 zfs create dominion/film
2013-10-22.14:09:51 [internal create txg:187] dataset = 65
2013-10-22.14:09:56 zfs create dominion/musicdvd
2013-10-22.14:10:08 [internal create txg:191] dataset = 72
2013-10-22.14:10:13 zfs create dominion/videoclips
2013-10-22.14:10:30 [internal create txg:196] dataset = 79
2013-10-22.14:10:35 zfs create dominion/graphics
2013-10-22.14:11:33 [internal destroy_begin_sync txg:209] dataset = 72
2013-10-22.14:11:34 [internal destroy txg:213] dataset = 72
2013-10-22.14:11:34 [internal property set txg:213] reservation=0 dataset = 72
2013-10-22.14:11:34 [internal reservation set txg:213] 0 dataset = 0
2013-10-22.14:11:39 zfs destroy dominion/videoclips
2013-10-22.14:12:09 [internal create txg:221] dataset = 86
2013-10-22.14:12:14 zfs create dominion/musicvideos
2013-10-22.14:12:39 [internal create txg:227] dataset = 93
2013-10-22.14:12:44 zfs create dominion/storage
2013-10-29.22:31:56 zpool scrub dominion
2014-04-25.07:24:27 zpool scrub dominion
2014-08-20.23:32:51 zpool import -d /dev/gpt -f 2080091093116710523
2014-08-20.23:33:15 zpool upgrade -V 5000 dominion
2014-08-21.20:00:35 zpool import -d /dev/gpt -f 2080091093116710523
2015-03-22.14:14:11 zpool import -d /dev/gpt -f 2080091093116710523
2015-03-22.17:35:40 zpool import -d /dev/gpt -f 2080091093116710523
2015-06-02.00:30:40 zpool scrub dominion
2015-06-02.09:45:13 zpool scrub -s dominion
2015-06-02.12:41:14 zpool scrub dominion
2015-06-02.13:58:52 zpool scrub -s dominion
2015-06-02.16:04:41 zpool scrub dominion
2015-06-29.18:06:52 zpool online dominion /dev/gpt/disk04
2015-09-05.13:47:29 zpool import -d /dev/gpt -f 2080091093116710523
2015-09-05.13:59:20 zfs create dominion/zfsguru
2015-09-05.13:59:21 zfs create dominion/zfsguru/download
2015-09-05.13:59:22 zfs set compression=off dominion/zfsguru/download
2015-09-05.13:59:23 zfs create dominion/zfsguru/10.2.003
2015-09-05.13:59:23 zfs set atime=off dominion/zfsguru/10.2.003
2015-09-05.13:59:24 zfs set sync=standard dominion/zfsguru/10.2.003
2015-09-05.13:59:25 zfs set dedup=off dominion/zfsguru/10.2.003
2015-09-05.13:59:25 zfs set copies=2 dominion/zfsguru/10.2.003
2015-09-05.13:59:26 zfs set compression=lz4 dominion/zfsguru/10.2.003
2015-09-05.13:59:27 zfs create -V 2g -s dominion/zfsguru/SWAP
2015-09-05.13:59:30 zfs set org.freebsd:swap=on dominion/zfsguru/SWAP
2015-09-05.14:00:36 zfs set mountpoint=legacy dominion/zfsguru/10.2.003
2015-09-05.14:00:41 zpool set bootfs=dominion/zfsguru/10.2.003 dominion
2015-09-05.14:25:26 zpool import -d /dev/gpt -f 2080091093116710523
2015-09-05.14:46:55 zpool set bootfs= dominion
2015-09-05.15:15:08 zpool import -d /dev/gpt -f 2080091093116710523
2015-11-11.08:18:21 zpool import -d /dev/gpt -f 2080091093116710523
2015-11-11.08:19:28 zpool set feature@large_blocks=enabled dominion
2015-11-11.08:20:46 zpool scrub dominion
2015-11-15.17:11:09 zfs set compression=lz4 dominion/film
2016-01-07.10:56:23 zfs destroy dominion/share
2016-01-09.02:05:19 zfs set sharenfs=-alldirs -mapall=1000:1000 -network 10.0.0.0/8 -network 172.16.0.0/12 -network 192.168.0.0/16 dominion/storage
2016-01-09.02:16:36 zfs set sharenfs=-network=0.0.0.0/0 -mask=0.0.0.0 -alldirs -mapall=1000:1000 dominion/storage
2016-01-09.02:42:03 zfs set sharenfs=-network=0.0.0.0/0 -mask=0.0.0.0 -alldirs -mapall=1000:1000 dominion/storage
2016-01-09.05:42:07 zfs inherit sharenfs dominion/storage
2016-01-09.05:42:35 zfs set sharenfs=-alldirs -mapall=1000:1000 dominion/storage
2016-01-10.17:40:19 zfs set sharenfs=-network=10.0.0.0/8 -network=172.16.0.0/12 -network=192.168.0.0/16 -alldirs -mapall=1000:1000 dominion/storage
2016-01-10.17:42:31 zfs set sharenfs=off dominion/storage
2016-01-10.17:42:42 zfs set sharenfs=off dominion/storage
2016-01-10.17:43:20 zfs set sharenfs=-alldirs -mapall=1000:1000 -network 10.0.0.0/8 -network 172.16.0.0/12 -network 192.168.0.0/16 dominion/storage
2016-01-10.18:07:07 zfs set sharenfs=-network=10.0.0.0/8 -network=172.16.0.0/12 -network=192.168.0.0/16 -network=192.168.1.0/24 -alldirs -mapall=1000:1000 dominion/storage
2016-01-10.18:08:42 zfs set sharenfs=off dominion/storage
2016-01-10.18:09:24 zfs set sharenfs=-alldirs -mapall=1000:1000 -network 10.0.0.0/8 -network 172.16.0.0/12 -network 192.168.0.0/16 dominion/storage
2016-01-10.18:14:12 zfs set sharenfs=-alldirs -mapall=1000:1000 -maproot=marcel -network=10.0.0.0/8 -network=172.16.0.0/12 -network=192.168.0.0/16 dominion/storage
2016-01-10.18:18:02 zfs set sharenfs=off dominion/storage
2016-01-10.18:19:17 zfs set sharenfs=-alldirs -mapall=1000:1000 dominion/storage
2016-01-11.11:58:51 zfs set sharenfs=-alldirs -mapall=1000:1000 dominion/video
2016-01-11.12:13:56 zfs set sharenfs=on dominion/musicvideos
2016-01-11.17:02:40 zfs set sharenfs=-ro -alldirs -mapall=1000:1000 dominion/storage
2016-01-11.17:03:05 zfs set sharenfs=-alldirs -mapall=1000:1000 dominion/storage
2016-01-11.17:03:46 zfs inherit sharenfs dominion/storage
2016-01-11.17:04:14 zfs set sharenfs=on dominion/storage
2016-01-11.17:07:12 zfs inherit sharenfs dominion/storage
2016-01-11.17:07:43 zfs set sharenfs=on dominion/storage
2016-01-11.17:09:44 zfs inherit sharenfs dominion/video
2016-01-11.17:09:51 zfs inherit sharenfs dominion/storage
2016-01-11.17:11:27 zfs set sharenfs=on dominion/storage
2016-01-11.22:26:41 zfs inherit sharenfs dominion/storage
2016-01-11.22:26:55 zfs set sharenfs=-alldirs -mapall=1000:1000 -network 10.0.0.0/8 -network 172.16.0.0/12 -network 192.168.0.0/16 dominion/storage
2016-01-11.23:17:34 zfs inherit sharenfs dominion/storage
2016-01-11.23:17:51 zfs inherit sharenfs dominion/musicvideos
2016-01-11.23:18:14 zfs set sharenfs=-alldirs -mapall=1000:1000 dominion/storage
2016-01-12.08:47:08 zfs set sharenfs=-alldirs -mapall=1000:1000 dominion/audio
2016-01-12.08:47:19 zfs set sharenfs=-alldirs -mapall=1000:1000 dominion/video
2016-01-12.08:47:29 zfs set sharenfs=-alldirs -mapall=1000:1000 dominion/musicvideos
2016-01-12.08:47:45 zfs set sharenfs=-alldirs -mapall=1000:1000 dominion/musicdvd
2016-01-12.08:47:55 zfs set sharenfs=-alldirs -mapall=1000:1000 dominion/graphics
2016-01-12.08:48:05 zfs set sharenfs=-alldirs -mapall=1000:1000 dominion/film
Marcelluz
User

14 posts

Posted on 7 September 2016 @ 02:09
marcel@Origin:~$ sudo zdb -ei dominion
[sudo] password for marcel:
Dataset mos [META], ID 0, cr_txg 4, 12.7M, 229 objects
Dataset dominion/audio [ZPL], ID 44, cr_txg 166, 141G, 18479 objects

ZIL header: claim_txg 0, claim_blk_seq 0, claim_lr_seq 0 replay_seq 0, flags 0x0
Dataset dominion/zfsguru/10.2.003 [ZPL], ID 118, cr_txg 10416023, 1022M, 40907 objects

ZIL header: claim_txg 10416186, claim_blk_seq 112, claim_lr_seq 0 replay_seq 0, flags 0x2
Dataset dominion/zfsguru/SWAP [ZVOL], ID 124, cr_txg 10416041, 100M, 2 objects
Dataset dominion/zfsguru/download [ZPL], ID 112, cr_txg 10416017, 25.4K, 7 objects
Dataset dominion/zfsguru [ZPL], ID 105, cr_txg 10416014, 25.4K, 8 objects
Dataset dominion/musicdvd [ZPL], ID 65, cr_txg 187, 226G, 1029 objects

ZIL header: claim_txg 0, claim_blk_seq 0, claim_lr_seq 0 replay_seq 0, flags 0x0
Dataset dominion/musicvideos [ZPL], ID 86, cr_txg 221, 104G, 2211 objects

ZIL header: claim_txg 0, claim_blk_seq 0, claim_lr_seq 0 replay_seq 0, flags 0x0
Dataset dominion/film [ZPL], ID 58, cr_txg 181, 1.12T, 4619 objects

ZIL header: claim_txg 0, claim_blk_seq 0, claim_lr_seq 0 replay_seq 0, flags 0x0
Dataset dominion/video [ZPL], ID 51, cr_txg 172, 1.75T, 8114 objects

ZIL header: claim_txg 0, claim_blk_seq 0, claim_lr_seq 0 replay_seq 0, flags 0x0
Dataset dominion/graphics [ZPL], ID 79, cr_txg 196, 15.2G, 8605 objects

ZIL header: claim_txg 0, claim_blk_seq 0, claim_lr_seq 0 replay_seq 0, flags 0x0
Dataset dominion/storage [ZPL], ID 93, cr_txg 227, 152G, 83967 objects

ZIL header: claim_txg 0, claim_blk_seq 0, claim_lr_seq 0 replay_seq 0, flags 0x0
Dataset dominion [ZPL], ID 21, cr_txg 1, 41.8K, 15 objects

ZIL header: claim_txg 0, claim_blk_seq 0, claim_lr_seq 0 replay_seq 0, flags 0x0
Verified large_blocks feature refcount is correct (0)
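
Since the dataset list above looks intact, the usual goal once any read-only import succeeds is to mount the datasets one at a time and copy the data off before trying anything more invasive (a sketch, assuming a read-only import with -N has already worked and that the default mountpoints apply; the destination path is just an example):

# After: zpool import -o readonly=on -N -f dominion
# Mount a single dataset read-only and copy it elsewhere.
zfs mount -o ro dominion/audio
rsync -a /dominion/audio/ /backup/audio/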