[Solved] QNAP - Increase RAID rebuild speed

Hans (@hans), Famed Member Admin, Topic starter (Joined: 12 years ago, Posts: 2859)

I had to replace one of the disks in my RAID setup, and the rebuild started after a hot swap as expected ... but boy, was it going slow. After a day it was still only 50% done.

After a little searching I found a trick to increase the QNAP RAID rebuild speed ...

The first thing I did was look at the current speed, using:

cat /proc/mdstat

This resulted in the following output.
Expected to finish in 3243 minutes ... that's 54 hours!

[~] # cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] 
md0 : active raid6 sdc3[10] sda3[8] sdh3[7] sdg3[6] sdf3[5] sde3[4] sdd3[3] sdb3[9]
   17572185216 blocks super 1.0 level 6, 64k chunk, algorithm 2 [8/7] [UU_UUUUU]
   [==========>..........] recovery = 50.1% (1470051456/2928697536) finish=3243.0min speed=7495K/sec
   
md256 : active raid1 sdc2[2](S) sdh2[7](S) sdg2[6](S) sdf2[5](S) sde2[4](S) sdd2[3](S) sdb2[1] sda2[0]
   530112 blocks super 1.0 [2/2] [UU]
   bitmap: 0/1 pages [0KB], 65536KB chunk
md13 : active raid1 sdc4[2] sda4[0] sdh4[7] sdg4[6] sdf4[5] sde4[4] sdd4[3] sdb4[1]
   458880 blocks [8/8] [UUUUUUUU]
   bitmap: 0/57 pages [0KB], 4KB chunk
md9 : active raid1 sdc1[2] sda1[0] sdf1[7] sdg1[6] sdh1[5] sdd1[4] sde1[3] sdb1[1]
   530048 blocks [8/8] [UUUUUUUU]
   bitmap: 0/65 pages [0KB], 4KB chunk
unused devices: <none>
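
To keep an eye on the rebuild without retyping the command every time, something like the line below refreshes the status every 30 seconds. This assumes the watch utility is available in your QNAP's shell; if the stock firmware doesn't include it, the small while loop underneath does the same thing.

watch -n 30 cat /proc/mdstat

while true; do clear; cat /proc/mdstat; sleep 30; done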

By setting a higher minimum speed limit with the following line, the time to completion improved drastically:

echo 50000 > /proc/sys/dev/raid/speed_limit_min
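
For reference: that value is in kilobytes per second, per device, and there is a matching maximum as well. If you want to check what is set before changing anything, or raise the ceiling too, it would look roughly like this (on a stock Linux kernel the defaults are usually 1000 for the minimum and 200000 for the maximum; the 200000 below is just an example value):

cat /proc/sys/dev/raid/speed_limit_min    # current minimum rebuild speed (KB/s)
cat /proc/sys/dev/raid/speed_limit_max    # current maximum rebuild speed (KB/s)
echo 200000 > /proc/sys/dev/raid/speed_limit_max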

Displaying mdstat again shows an expected completion time of 633 minutes, which is about 10.5 hours.

[~] # cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] 
md0 : active raid6 sdc3[10] sda3[8] sdh3[7] sdg3[6] sdf3[5] sde3[4] sdd3[3] sdb3[9]
   17572185216 blocks super 1.0 level 6, 64k chunk, algorithm 2 [8/7] [UU_UUUUU]
   [==========>..........] recovery = 50.5% (1480883716/2928697536) finish=633.2min speed=38104K/sec
   
md256 : active raid1 sdc2[2](S) sdh2[7](S) sdg2[6](S) sdf2[5](S) sde2[4](S) sdd2[3](S) sdb2[1] sda2[0]
   530112 blocks super 1.0 [2/2] [UU]
   bitmap: 0/1 pages [0KB], 65536KB chunk
md13 : active raid1 sdc4[2] sda4[0] sdh4[7] sdg4[6] sdf4[5] sde4[4] sdd4[3] sdb4[1]
   458880 blocks [8/8] [UUUUUUUU]
   bitmap: 0/57 pages [0KB], 4KB chunk
md9 : active raid1 sdc1[2] sda1[0] sdf1[7] sdg1[6] sdh1[5] sdd1[4] sde1[3] sdb1[1]
   530048 blocks [8/8] [UUUUUUUU]
   bitmap: 0/65 pages [0KB], 4KB chunk
unused devices: <none>
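
Once the rebuild is finished, it doesn't hurt to put the minimum back, so normal background checks and resyncs don't hog all the disk bandwidth during regular use. These /proc values are not persistent anyway, so a reboot should reset them, but assuming the usual Linux default of 1000, reverting by hand would be:

echo 1000 > /proc/sys/dev/raid/speed_limit_min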

   