[Solved] QNAP - SSH - Manually rebuild "inactive volume" (some useful commands)

7 Posts
2 Users
0 Reactions
2,254 Views
 Hans
(@hans)
Famed Member Admin
Joined: 11 years ago
Posts: 2741
Topic starter  

Here is some info I got my hands on while trying to recover my RAID volume, which had become inactive after a backplane issue that would randomly eject a disk. Naturally, this backplane issue needs to be resolved as well, since (in my situation) the error will of course happen again otherwise.

Anyhoo - I've seen some interesting commands that may be helpful for others.


CAUTION:
Only use these commands if you know what you're doing.
In general I'd recommend contacting QNAP support and having them look at it.
QNAP has very good support, even for old QNAP (EOL) models!


First of all, all these commands are executed through an SSH (shell) connection. This old QNAP article or this official QNAP article may be helpful, but then again: if you do not know what an SSH connection is, you are probably better off contacting QNAP support.
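For completeness, a typical SSH login looks like the sketch below. The IP address and account name are just placeholders; on QTS the SSH service must first be enabled in the web interface (Control Panel -> Telnet/SSH, where the port can also be changed):

```shell
# Placeholder values: replace 192.168.1.50 with your NAS IP and 'admin'
# with an administrator account on your QNAP.
ssh admin@192.168.1.50

# If you changed the SSH port in Control Panel -> Telnet/SSH (2222 is just an example):
ssh -p 2222 admin@192.168.1.50
```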

 

Check RAID Status

The easiest way to check the RAID status is with md_checker, which should be on your QNAP by default (my old QTS4 QNAP and my newer QTS5 QNAP both had it installed):

md_checker

In my case this generated a list like this, where the "AAAA.AA." Array State probably indicates trouble, as two member slots appear to be missing.

Welcome to MD superblock checker (v1.4) - have a nice day~

Scanning system...

HAL firmware detected!
Scanning Enclosure 0...

RAID metadata found!
UUID:		aac26467:7ef59f4f:51673f80:8c31584b
Level:		raid6
Devices:	8
Name:		md1
Chunk Size:	64K
md Version:	1.0
Creation Time:	Sep 7 19:53:43 2020
Status:		OFFLINE
===============================================================================
 Disk | Device | # | Status |   Last Update Time   | Events | Array State
===============================================================================
   1  /dev/sda3  0   Active   Feb  8 14:41:00 2024    58609   AAAA.AA.                 
   2  /dev/sdb3  1   Active   Feb  8 14:41:00 2024    58609   AAAA.AA.                 
   3  /dev/sdc3  2   Active   Feb  8 14:41:00 2024    58609   AAAA.AA.                 
   4  /dev/sdd3  3   Active   Feb  8 14:41:00 2024    58609   AAAA.AA.                 
   5  /dev/sde3  4   Active   Feb  8 14:36:42 2024    58422   AAAAAAAA                 
   6  /dev/sdf3  5   Active   Feb  8 14:36:42 2024    58422   AAAAAAAA                 
   7  /dev/sdg3  6   Active   Feb  8 14:36:42 2024    58422   AAAAAAAA                 
   8  /dev/sdh3  7   Active   Feb  8 14:36:42 2024    58422   AAAAAAAA                 
===============================================================================
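If you want to count the missing members yourself: the '.' characters in the Array State string are the members that dropped out. A small sketch in plain bash, using the state reported above (string handling only, it does not touch the disks):

```shell
# Count the '.' (missing) members in an Array State string as reported
# by md_checker / mdadm. "AAAA.AA." is taken from the listing above.
state="AAAA.AA."
dots=${state//[!.]/}              # keep only the '.' characters (bash)
echo "missing members: ${#dots}"  # prints: missing members: 2
```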

 

Check if Disks are Detected properly

As shown in my previous post, we can use qcli_storage for that:

qcli_storage -d

On my NAS all disks were detected properly:

Enclosure  Port  Sys_Name     Type      Size      Alias          Signature   Partitions  Model  
NAS_HOST   1     /dev/sda     HDD:data  3.64 TB   --             QNAP FLEX   5           WDC WD40EFRX-68N32N0
NAS_HOST   2     /dev/sdb     HDD:data  3.64 TB   --             QNAP FLEX   5           WDC WD40EFRX-68N32N0
NAS_HOST   3     /dev/sdc     HDD:data  3.64 TB   --             QNAP FLEX   5           WDC WD40EFRX-68N32N0
NAS_HOST   4     /dev/sdd     HDD:data  3.64 TB   --             QNAP FLEX   5           WDC WD40EFZX-68AWUN0
NAS_HOST   5     /dev/sde     HDD:free  3.64 TB   --             QNAP FLEX   5           WDC WD40EFRX-68N32N0
NAS_HOST   6     /dev/sdf     HDD:free  3.64 TB   --             QNAP FLEX   5           WDC WD40EFZX-68AWUN0
NAS_HOST   7     /dev/sdg     HDD:free  3.64 TB   --             QNAP FLEX   5           WDC WD40EFRX-68N32N0
NAS_HOST   8     /dev/sdh     HDD:free  3.64 TB   --             QNAP FLEX   5           WDC WD40EFRX-68N32N0

 

Check if all disks have RAID (md) information

Next we need to check all disks to see if they have md superblock information:

mdadm -E /dev/sda3
mdadm -E /dev/sdb3
mdadm -E /dev/sdc3
mdadm -E /dev/sdd3
mdadm -E /dev/sde3
mdadm -E /dev/sdf3
mdadm -E /dev/sdg3
mdadm -E /dev/sdh3

Example output of one of my disks:

/dev/sda3:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x0
     Array UUID : aac26467:7ef59f4f:51673f80:8c31584b
           Name : 1
  Creation Time : Mon Sep  7 19:53:43 2020
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 7794127240 (3716.53 GiB 3990.59 GB)
     Array Size : 23382381696 (22299.18 GiB 23943.56 GB)
  Used Dev Size : 7794127232 (3716.53 GiB 3990.59 GB)
   Super Offset : 7794127504 sectors
   Unused Space : before=0 sectors, after=264 sectors
          State : clean
    Device UUID : b8d7db50:63e4ca78:53c5877e:2e946fd0

    Update Time : Thu Feb  8 14:41:00 2024
  Bad Block Log : 512 entries available at offset -8 sectors
       Checksum : 7b01bedb - correct
         Events : 58609

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AAAA.AA. ('A' == active, '.' == missing, 'R' == replacing)
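Another thing worth comparing across these outputs is the Events counter: members that fell out of the array earlier stop updating it. In my listing, four disks sit at 58609 and four at 58422. A sketch of how you might spot such a split (the values below are simply copied from my output; with real disks you would collect them per member with mdadm -E):

```shell
# Events counters copied from the mdadm -E / md_checker output above.
events="58609 58609 58609 58609 58422 58422 58422 58422"

# Word splitting of $events is intentional here; more than one unique
# value printed means some members diverged from the rest:
printf '%s\n' $events | sort -u
```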

 

Manually reassemble the RAID

CAUTION: This step may damage your RAID volume! If unsure: contact QNAP support!

Now, if all disks have md information (Device Role shows Active device #), you can include all these disks (8 in my case) to manually reassemble the RAID (md1) like so:

mdadm -AfR /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3 /dev/sdg3 /dev/sdh3

Now, if a disk is missing (at most 1 for RAID 5, at most 2 for RAID 6), you can simply skip it in this command.
If too many disks are missing, or too many disks lack md information, then you will not be able to use "mdadm -AfR" to reassemble the RAID.
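That rule of thumb can be sketched as a quick pre-check. The level and missing count below are example values; read yours from the md_checker output:

```shell
level="raid6"   # from the md_checker "Level:" line
missing=2       # number of '.' entries in the Array State

# Redundancy per RAID level: RAID 5 tolerates 1 missing member, RAID 6 tolerates 2.
case "$level" in
  raid5) max=1 ;;
  raid6) max=2 ;;
  *)     max=0 ;;
esac

if [ "$missing" -le "$max" ]; then
  echo "reassembly should be possible"
else
  echo "too many missing members for mdadm -AfR"
fi
```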

Verify and bring the volume back online

Once that has completed successfully, verify that the RAID is online, again with md_checker:

md_checker

If md_checker shows that the RAID is ONLINE again, then you can run the following to recover the storage config:

/etc/init.d/init_lvm.sh

After that you can verify whether the RAID and volume are good.
If you are not familiar with these commands, then please read up on them before using them.
Some references: pvs, vgs, lvs, df, mount.

md_checker
pvs
vgs
lvs -a
ls -l /dev/mapper/
df
mount

After this, the volume is usually named "/dev/mapper/cachedevX", which is mounted at "/share/CACHEDEVX_DATA".
For example, the volume /dev/mapper/cachedev1 is mounted at /share/CACHEDEV1_DATA.
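That naming convention can be sketched with plain string handling (no disk access involved; the device name is the usual example):

```shell
# Derive the expected mount point from the device-mapper name.
dev=/dev/mapper/cachedev1
n=${dev##*cachedev}               # strip everything up to "cachedev" -> "1"
echo "/share/CACHEDEV${n}_DATA"   # prints /share/CACHEDEV1_DATA
```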

You can then verify the content in "File Station", or in the shell with for example:

ls -l /share/CACHEDEV1_DATA/

Hope this is useful to fellow QNAP users.


   
(@gor966007)
New Member
Joined: 1 week ago
Posts: 3
 

Hello friend.
You are a real QNAP specialist, and I really need your help. My volumes went into a disabled state, and I don't know what to do about it. Please help!

https://ltdfoto.ru/image/huDWgu

https://ltdfoto.ru/image/huLRtu

https://ltdfoto.ru/image/huLTHH


   
 Hans
(@hans)
Famed Member Admin
Joined: 11 years ago
Posts: 2741
Topic starter  

Hello there!

Well I'm not a QNAP specialist, just sharing things I've learned 😉 

I would highly recommend contacting QNAP Support (link).
I'm not a fan of having someone tinker with my QNAP either, but they have given me great support twice now, so I can really recommend them for tricky situations like this. In both cases I almost lost all my data, and they fixed the issue.

It's free, and they will help you remotely.
You only have to follow the link, set up a free account (no credit card needed), and post a support ticket. You will need the serial number of your QNAP, which you can find on the back of your QNAP, or easier: in the QNAP web interface -> Control Panel.

I'm sorry I do not have a cookie-cutter fix 😞 


   
(@gor966007)
New Member
Joined: 1 week ago
Posts: 3
 

I wrote to technical support. I have been waiting for an answer for two days. Still no answer. 😪 


   
 Hans
(@hans)
Famed Member Admin
Joined: 11 years ago
Posts: 2741
Topic starter  

Yeah, there can be a delay ... I recall their support being based in Taiwan, so the time difference is definitely not helping.
The last time they helped me, I had to enable the support app in the QNAP web interface, something you may want to look into.
They use this to get remote access to your QNAP (with your permission). Without it, they may not even be able to see what is going on.
Did you go through those steps?


   
(@gor966007)
New Member
Joined: 1 week ago
Posts: 3
 

Yes, I did it.


   
 Hans
(@hans)
Famed Member Admin
Joined: 11 years ago
Posts: 2741
Topic starter  

Sorry for the delayed response - did you hear anything from support yet?
I'm very hesitant to make any suggestions, since I'm really not an expert and I'd be nervous about data loss. Even on my own system I'd be nervous.

You could try and see what the output of md_checker says (over SSH), but like I said: I'd be nervous to tinker with that. Better to be patient (I wouldn't be patient either 😉 ) and have support look at it before tinkering with it yourself.


   