[Solved] QNAP - SSH - Manually rebuild "inactive volume" (some useful commands)

Hans (@hans) - Famed Member, Admin - Topic starter

Here is some info I got my hands on while trying to recover my RAID volume, which had become inactive after a backplane issue that would randomly eject my disks. Naturally, the backplane issue itself needs to be resolved as well, since (in my situation) this error will of course happen again otherwise.

Anyhoo - I've seen some interesting commands that may be helpful for others.


CAUTION:
Only use these commands if you know what you're doing.
In general I'd recommend contacting QNAP support and having them look at it.
QNAP has very good support, even for old QNAP (EOL) models!


First of all, all these commands are executed through an SSH (Shell) connection. This old QNAP article or this official QNAP article may be helpful, but then again: if you do not know what an SSH connection is, you are probably better off contacting QNAP support.
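
To open such a connection from a terminal (macOS, Linux, or Windows PowerShell), something like the line below should work - "admin" and "192.168.1.100" are just placeholders for your own admin account and the IP address of your NAS:

ssh admin@192.168.1.100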

 

Check RAID Status

The easiest way to check the RAID status is with md_checker, which should be on your QNAP by default (both my old QTS4 QNAP and my newer QTS5 QNAP had it installed):

md_checker

In my case this generated a list like this, where the "AAAA.AA." indicates trouble: each "A" is an active RAID member and each dot a missing one (see the legend in the mdadm output further down), so two positions are missing.

Welcome to MD superblock checker (v1.4) - have a nice day~

Scanning system...

HAL firmware detected!
Scanning Enclosure 0...

RAID metadata found!
UUID:		aac26467:7ef59f4f:51673f80:8c31584b
Level:		raid6
Devices:	8
Name:		md1
Chunk Size:	64K
md Version:	1.0
Creation Time:	Sep 7 19:53:43 2020
Status:		OFFLINE
===============================================================================
 Disk | Device | # | Status |   Last Update Time   | Events | Array State
===============================================================================
   1  /dev/sda3  0   Active   Feb  8 14:41:00 2024    58609   AAAA.AA.                 
   2  /dev/sdb3  1   Active   Feb  8 14:41:00 2024    58609   AAAA.AA.                 
   3  /dev/sdc3  2   Active   Feb  8 14:41:00 2024    58609   AAAA.AA.                 
   4  /dev/sdd3  3   Active   Feb  8 14:41:00 2024    58609   AAAA.AA.                 
   5  /dev/sde3  4   Active   Feb  8 14:36:42 2024    58422   AAAAAAAA                 
   6  /dev/sdf3  5   Active   Feb  8 14:36:42 2024    58422   AAAAAAAA                 
   7  /dev/sdg3  6   Active   Feb  8 14:36:42 2024    58422   AAAAAAAA                 
   8  /dev/sdh3  7   Active   Feb  8 14:36:42 2024    58422   AAAAAAAA                 
===============================================================================
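
md_checker is QNAP-specific; as a cross-check you can also look at the kernel's own md status with a standard Linux command (note that an array that is not assembled at all may simply not show up here):

cat /proc/mdstat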

 

Check if Disks are Detected properly

As shown in my previous post, we can use qcli_storage for that:

qcli_storage -d

On my NAS all disks were detected properly:

Enclosure  Port  Sys_Name     Type      Size      Alias          Signature   Partitions  Model  
NAS_HOST   1     /dev/sda     HDD:data  3.64 TB   --             QNAP FLEX   5           WDC WD40EFRX-68N32N0
NAS_HOST   2     /dev/sdb     HDD:data  3.64 TB   --             QNAP FLEX   5           WDC WD40EFRX-68N32N0
NAS_HOST   3     /dev/sdc     HDD:data  3.64 TB   --             QNAP FLEX   5           WDC WD40EFRX-68N32N0
NAS_HOST   4     /dev/sdd     HDD:data  3.64 TB   --             QNAP FLEX   5           WDC WD40EFZX-68AWUN0
NAS_HOST   5     /dev/sde     HDD:free  3.64 TB   --             QNAP FLEX   5           WDC WD40EFRX-68N32N0
NAS_HOST   6     /dev/sdf     HDD:free  3.64 TB   --             QNAP FLEX   5           WDC WD40EFZX-68AWUN0
NAS_HOST   7     /dev/sdg     HDD:free  3.64 TB   --             QNAP FLEX   5           WDC WD40EFRX-68N32N0
NAS_HOST   8     /dev/sdh     HDD:free  3.64 TB   --             QNAP FLEX   5           WDC WD40EFRX-68N32N0
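
Not strictly required, but since the next step examines the third partition on each disk (/dev/sda3 and so on), you can also quickly confirm that the kernel sees all those partitions with a standard Linux command:

cat /proc/partitions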

 

Check if all disks have RAID (md) information

Next we need to check all disks to see if they have md superblock information:

mdadm -E /dev/sda3
mdadm -E /dev/sdb3
mdadm -E /dev/sdc3
mdadm -E /dev/sdd3
mdadm -E /dev/sde3
mdadm -E /dev/sdf3
mdadm -E /dev/sdg3
mdadm -E /dev/sdh3

Example output of one of my disks:

/dev/sda3:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : aac26467:7ef59f4f:51673f80:8c31584b
           Name : 1
  Creation Time : Mon Sep  7 19:53:43 2020
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 7794127240 (3716.53 GiB 3990.59 GB)
     Array Size : 23382381696 (22299.18 GiB 23943.56 GB)
  Used Dev Size : 7794127232 (3716.53 GiB 3990.59 GB)
   Super Offset : 7794127504 sectors
   Unused Space : before=0 sectors, after=264 sectors
          State : clean
    Device UUID : b8d7db50:63e4ca78:53c5877e:2e946fd0

    Update Time : Thu Feb  8 14:41:00 2024
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : 7b01bedb - correct
         Events : 58609

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AAAA.AA. ('A' == active, '.' == missing, 'R' == replacing)
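
Instead of reading through eight full outputs, a small shell loop can summarize the fields that matter most here (Events, Array State and Device Role). This is just a convenience sketch, assuming your data partitions are /dev/sda3 through /dev/sdh3 like mine:

for d in /dev/sd[a-h]3; do
  echo "== $d"
  mdadm -E "$d" | grep -E 'Events|Array State|Device Role'
done

Members that were ejected will typically show a lower Events counter - just like disks 5 through 8 in my md_checker output above.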

 

Manually reassemble the RAID

CAUTION: This step may damage your RAID volume! If unsure: contact QNAP support!

Now, if all disks have md information (Device Role shows Active device #), you can include all these disks (8 in my case) to manually reassemble the RAID (md1) like so:

mdadm -AfR /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdg3 /dev/sdh3

If a disk is missing (at most 1 for RAID 5, at most 2 for RAID 6), then you can simply leave it out of this command - see the example below.
If too many disks are missing, or too many disks have missing md information, then you will not be able to use the "mdadm -AfR" command to reassemble the RAID.
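
For example, if disk 8 (/dev/sdh3) had no usable md information - purely hypothetical, in my case all 8 disks were fine - a degraded RAID 6 could still be assembled from the remaining seven members:

mdadm -AfR /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdg3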

Verify and bring the volume back online

When all of that has completed successfully, verify that the RAID is online, again with md_checker:

md_checker

If md_checker shows that the RAID is ONLINE again, then you can run the following to recover the storage config:

/etc/init.d/init_lvm.sh

After that, you can verify that the RAID and volume are good.
If you are not familiar with these commands, then please read up on them before using them.
Some references: pvs, vgs, lvs, df, mount.

md_checker
pvs
vgs
lvs -a
ls -l /dev/mapper/
df
mount

After this, the volume is usually named "/dev/mapper/cachedevX", which mounts to "/share/CACHEDEVX_DATA".
For example, the volume /dev/mapper/cachedev1 is mounted at the path /share/CACHEDEV1_DATA.
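
A quick way to confirm that mapping is to filter the mount table for it (cachedev1 is my volume; yours may be numbered differently):

mount | grep cachedev1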

You can then verify the content in "File Station" or in the shell with, for example:

ls -l /share/CACHEDEV1_DATA/

Hope this is useful to fellow QNAP users.


   