Convert a GFS2 Clustered FS into a Non-Clustered FS

There was a clustered disk residing on a NAS. The servers that formed the cluster no longer use the NAS, so this clustered disk is no longer in use.

Now the NAS is connected to a single Linux server. How do you recover the data?

First, find the iSCSI target names exposed by this NAS:

[root@ppms ~]# iscsiadm -m discovery -t sendtargets -p 172.16.0.4
Starting iscsid:                                           [  OK  ]
172.16.0.4:3260,1 iqn.2004-04.com.qnap:ts-419pii:iscsi.sqldata.d09d73
172.16.0.4:3260,1 iqn.2004-04.com.qnap:ts-419pii:iscsi.shared.d09d73
172.16.0.4:3260,1 iqn.2004-04.com.qnap:ts-419pii:iscsi.otrs.d09d73


The target we want is the one containing sqldata. Now log the server in to that target:

[root@ppms ~]# iscsiadm -m node -p 172.16.0.4 -T iqn.2004-04.com.qnap:ts-419pii:iscsi.sqldata.d09d73 -l
Logging in to [iface: default, target: iqn.2004-04.com.qnap:ts-419pii:iscsi.sqldata.d09d73, portal: 172.16.0.4,3260] (multiple)
Login to [iface: default, target: iqn.2004-04.com.qnap:ts-419pii:iscsi.sqldata.d09d73, portal: 172.16.0.4,3260] successful.
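
To make this login survive a reboot (not done in this session, but standard open-iscsi practice), the node record can be set to log in automatically:

iscsiadm -m node -p 172.16.0.4 \
    -T iqn.2004-04.com.qnap:ts-419pii:iscsi.sqldata.d09d73 \
    --op update -n node.startup -v automatic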

Check whether the new iSCSI disk is now recognized:

[root@ppms ~]# fdisk -l

Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00096c56

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        1959    15215616   8e  Linux LVM

Disk /dev/mapper/vg_ppms-lv_root: 11.4 GB, 11416895488 bytes
255 heads, 63 sectors/track, 1388 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_ppms-lv_swap: 4160 MB, 4160749568 bytes
255 heads, 63 sectors/track, 505 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdb: 322.1 GB, 322122547200 bytes
255 heads, 63 sectors/track, 39162 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 262144 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       39162   314568733+  83  Linux
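
The new 300 GB disk shows up as /dev/sdb. If it were not obvious which block device maps to the freshly logged-in target, the session details would show it (an extra check, not from the original session):

iscsiadm -m session -P 3 | grep -E 'Target:|Attached scsi disk'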


Now check the LVM status of this new disk:

[root@ppms ~]# pvs
  Skipping clustered volume group nasVG2
  Skipping volume group nasVG2
  PV         VG      Fmt  Attr PSize  PFree
  /dev/sda2  vg_ppms lvm2 a--  14.51g    0

The Volume Group on the new disk is nasVG2, but since it is a clustered VG, it was skipped.

The clustered attribute needs to be removed from this VG ( vgchange -cn ). Because clvmd is not running on this standalone server, cluster locking cannot be acquired, so locking is disabled just for this one command with --config 'global {locking_type = 0}':
 
[root@ppms ~]# vgchange -cn nasVG2 --config 'global {locking_type = 0}'
  WARNING: Locking disabled. Be careful! This could corrupt your metadata.
  Unable to determine exclusivity of nasdb
  Volume group "nasVG2" successfully changed
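
As a quick sanity check (not part of the original session), the clustered flag is the sixth character of the VG attribute string; after the change it should read something like wz--n- rather than wz--nc:

vgs nasVG2 -o vg_name,vg_attr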

After removing the clustered attribute, the status of the Physical Volumes and the Logical Volumes can now be seen:

[root@ppms ~]# pvs
  PV         VG      Fmt  Attr PSize   PFree 
  /dev/sda2  vg_ppms lvm2 a--   14.51g      0
  /dev/sdb1  nasVG2  lvm2 a--  299.99g 239.99g
 

[root@ppms ~]# lvs
  LV      VG      Attr      LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  nasdb   nasVG2  -wi------ 60.00g                                            
  lv_root vg_ppms -wi-ao--- 10.63g                                            
  lv_swap vg_ppms -wi-ao---  3.88g                                            


The LV is still inactive, though:

[root@ppms ~]# lvscan
  inactive          '/dev/nasVG2/nasdb' [60.00 GiB] inherit
  ACTIVE            '/dev/vg_ppms/lv_root' [10.63 GiB] inherit
  ACTIVE            '/dev/vg_ppms/lv_swap' [3.88 GiB] inherit

Now activate this LV:

[root@ppms ~]# lvchange -ay /dev/nasVG2/nasdb
 

[root@ppms ~]# lvs
  LV      VG      Attr      LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  nasdb   nasVG2  -wi-a---- 60.00g                                            
  lv_root vg_ppms -wi-ao--- 10.63g                                            
  lv_swap vg_ppms -wi-ao---  3.88g


[root@ppms ~]# lvscan
  ACTIVE            '/dev/nasVG2/nasdb' [60.00 GiB] inherit
  ACTIVE            '/dev/vg_ppms/lv_root' [10.63 GiB] inherit
  ACTIVE            '/dev/vg_ppms/lv_swap' [3.88 GiB] inherit

The LV is now active and ready to be used...

[root@ppms ~]# mount /dev/mapper/nasVG2-nasdb /sqldata/
mount: Transport endpoint is not connected
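
The error is cryptic, so it is worth confirming what is actually on the LV (an extra check, not in the original session); blkid should report TYPE="gfs2":

blkid /dev/mapper/nasVG2-nasdb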

Ooopsss.. something was forgotten. The LV holds a GFS2 filesystem, and this server has no GFS2 tools installed, so the GFS2 utilities need to be installed first:

[root@ppms ~]# yum install gfs2-utils
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.smartmedia.net.id
 * extras: mirror.vodien.com
 * updates: mirror.vodien.com
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package gfs2-utils.x86_64 0:3.0.12.1-68.el6 will be installed
--> Finished Dependency Resolution


... (snipped) ...

Installed:
  gfs2-utils.x86_64 0:3.0.12.1-68.el6
Complete!


Try again. This time, run fsck first to make sure the filesystem is in good condition:

[root@ppms ~]# fsck /dev/mapper/nasVG2-nasdb
fsck from util-linux-ng 2.17.2
Initializing fsck
Validating Resource Group index.
Level 1 rgrp check: Checking if all rgrp and rindex values are good.
(level 1 passed)
Starting pass1
Pass1 complete     
Starting pass1b
Pass1b complete
Starting pass1c
Pass1c complete
Starting pass2
Pass2 complete     
Starting pass3
Pass3 complete     
Starting pass4
Pass4 complete     
Starting pass5
Pass5 complete     
gfs2_fsck complete
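
(For reference: the generic fsck wrapper dispatched to fsck.gfs2, which is part of the gfs2-utils package that was just installed. It could also be invoked directly, e.g. fsck.gfs2 -y /dev/mapper/nasVG2-nasdb for a non-interactive run.)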

Seems all good. Mount it:

[root@ppms ~]# mount /dev/mapper/nasVG2-nasdb /sqldata/
gfs_controld join connect error: Connection refused
error mounting lockproto lock_dlm



Ouch.. something is still missing. GFS2 uses a cluster-wide locking mechanism (lock_dlm) to prevent nodes from colliding, and mounting with lock_dlm requires the cluster daemons (gfs_controld) that are not running here. Since this server now uses the disk exclusively, the cluster locking can be dropped in favour of local locking.

[root@ppms ~]# gfs2_tool sb /dev/mapper/nasVG2-nasdb proto lock_none
You shouldn't change any of these values if the filesystem is mounted.

Are you sure? [y/n] y

current lock protocol name = "lock_dlm"
new lock protocol name = "lock_none"
Done

The LV still cannot be mounted, but this time with a different error:

[root@ppms ~]# mount /dev/mapper/nasVG2-nasdb /nasdb/
error mounting /dev/dm-2 on /nasdb: No such file or directory





After checking several possibilities, the clue was found in /var/log/messages.
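A simple search surfaces the relevant lines, for example:

grep GFS2 /var/log/messages | tail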

Nov 25 10:35:53 ppms kernel: GFS2: can't find protocol lock_none
Nov 25 10:37:59 ppms kernel: GFS2: can't find protocol lock_none

The locking protocol should be lock_nolock, not lock_none.

[root@ppms ~]# gfs2_tool sb /dev/mapper/nasVG2-nasdb proto lock_nolock
You shouldn't change any of these values if the filesystem is mounted.

Are you sure? [y/n] y

current lock protocol name = "lock_none"
new lock protocol name = "lock_nolock"
Done
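
As an aside (not something done in this session), GFS2 also accepts a one-off locking override at mount time, which avoids editing the superblock at all:

mount -o lockproto=lock_nolock /dev/mapper/nasVG2-nasdb /nasdb/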


Try again, and finally it mounts and the data can be recovered.

[root@ppms ~]# mount /dev/mapper/nasVG2-nasdb /nasdb/
[root@ppms ~]# mount
/dev/mapper/vg_ppms-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/sr0 on /mnt/cdrom type iso9660 (ro)
/dev/mapper/nasVG2-sqldata on /sqldata type ext3 (rw)
/dev/mapper/nasVG2-nasdb on /nasdb type gfs2 (rw,seclabel,relatime,localflocks,localcaching)


[root@ppms ~]# ls -l /nasdb/
total 4
drwxr-xr-x. 10 27 27 3864 Mar 20  2014 mysql
 

[root@ppms ~]# ls -l /nasdb/mysql/
total 28824
drwx------. 2 27 27     3864 Mar 18  2014 bak_spms
-rw-r--r--. 1 27 27        0 Mar 18  2014 debian-5.5.flag
drwx------. 2 27 27     3864 Mar 18  2014 formWeaver
-rw-r-----. 1 27 27 18874368 Mar 21  2014 ibdata1
-rw-r-----. 1 27 27  5242880 Mar 21  2014 ib_logfile0
-rw-r-----. 1 27 27  5242880 Mar 18  2014 ib_logfile1
drwx------. 2 27 27     2048 Mar 18  2014 mysql
-rw-r-----. 1 27 27        6 Mar 18  2014 mysql_upgrade_info
drwx------. 2 27 27     2048 Mar 18  2014 otrs
drwx------. 2 27 27     3864 Mar 18  2014 performance_schema
drwx------. 2 27 27     3864 Mar 18  2014 phpmyadmin
drwx------. 2 27 27     2048 Mar 18  2014 spms
drwx------. 2 27 27     3864 Mar 18  2014 test
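
To make the recovered filesystem come back after a reboot (an extra step, not part of the original session), the mount can go into /etc/fstab. Because the disk sits behind iSCSI, _netdev matters so the mount waits for the network, and the iSCSI login itself should be set to automatic as noted earlier. A minimal entry would look like:

/dev/mapper/nasVG2-nasdb   /nasdb   gfs2   _netdev,noatime   0 0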
