We recently started monitoring our Synology NAS with CloudRadar.io (monitoring software) and began receiving the following alert…
Module software raid health according to /proc/mdstat has alert(s)
Raid md0 degraded. Missing 5 devices. Raid status not optimal (Needs Attention)
Raid md1 degraded. Missing 5 devices. Raid status not optimal (Needs Attention)
After SSHing into the NAS and running cat /proc/mdstat, I noticed md0 and md1 had missing disks. Those arrays were not created by me; they appear to come from the factory and contain recovery data for the NAS.
synology_nas:/$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sda3[0] sdi3[6] sdg3[5] sdf3[4] sde3[3] sdc3[2] sdb3[1]
46855227264 blocks super 1.2 level 5, 64k chunk, algorithm 2 [7/7] [UUUUUUU]
md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sde2[3] sdf2[4] sdg2[5] sdi2[6]
2097088 blocks [12/7] [UUUUUUU_____]
md0 : active raid1 sda1[0] sdb1[1] sdc1[2] sde1[3] sdf1[4] sdg1[5] sdi1[6]
2490176 blocks [12/7] [UUUUUUU_____]
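For anyone unfamiliar with the /proc/mdstat notation: the bracketed counters show configured slots versus active members, so [12/7] means the array is defined for 12 devices but only 7 are present, and each underscore in [UUUUUUU_____] marks an empty slot. A quick way to list only the arrays that are missing members (a generic one-liner I find handy, not part of the original troubleshooting session):

# Print each md device line plus its status line, keeping only arrays whose
# status brackets contain an underscore (i.e. at least one missing member)
grep -B1 -E '\[[U_]*_[U_]*\]' /proc/mdstat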
Looking further into the array, I ran mdadm --detail /dev/md0
synology_nas:/$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Wed Feb 7 16:33:01 2018
Raid Level : raid1
Array Size : 2490176 (2.37 GiB 2.55 GB)
Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
Raid Devices : 12
Total Devices : 7
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Tue Jun 9 09:22:20 2020
State : clean, degraded
Active Devices : 7
Working Devices : 7
Failed Devices : 0
Spare Devices : 0
UUID : 0386309b:f65fbf8a:c69de3af:22ddbed7
Events : 0.36734185
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
3 8 65 3 active sync /dev/sde1
4 8 81 4 active sync /dev/sdf1
5 8 97 5 active sync /dev/sdg1
6 8 129 6 active sync /dev/sdi1
- 0 0 7 removed
- 0 0 8 removed
- 0 0 9 removed
- 0 0 10 removed
- 0 0 11 removed
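Since only md0's detail output is shown above, here is a quick way to pull just the relevant counters for both system arrays at once (hypothetical convenience commands, not from the original session); md1 reports the same 12-versus-7 mismatch:

# Show only the configured slot count, working-device count, and overall state
sudo mdadm --detail /dev/md0 | grep -E 'Raid Devices|Working Devices|State :'
sudo mdadm --detail /dev/md1 | grep -E 'Raid Devices|Working Devices|State :'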
The reason for the alert is that both arrays report a state of clean, degraded: they are configured for 12 devices, but only 7 disks are installed. I went ahead and resized md0 and md1 to 7 devices using mdadm --grow -n 7 /dev/mdX
synology_nas:/$ sudo mdadm --grow -n 7 /dev/md0
synology_nas:/$ sudo mdadm --grow -n 7 /dev/md1
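For readers less familiar with mdadm's short options, -n is the abbreviated form of --raid-devices, so the same change can be written out explicitly (equivalent commands, shown only as an alternative):

sudo mdadm --grow --raid-devices=7 /dev/md0
sudo mdadm --grow --raid-devices=7 /dev/md1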
synology_nas:/$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Wed Feb 7 16:33:01 2018
Raid Level : raid1
Array Size : 2490176 (2.37 GiB 2.55 GB)
Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
Raid Devices : 7
Total Devices : 7
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Tue Jun 9 09:33:38 2020
State : clean
Active Devices : 7
Working Devices : 7
Failed Devices : 0
Spare Devices : 0
UUID : 0386309b:f65fbf8a:c69de3af:22ddbed7
Events : 0.36734590
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
3 8 65 3 active sync /dev/sde1
4 8 81 4 active sync /dev/sdf1
5 8 97 5 active sync /dev/sdg1
6 8 129 6 active sync /dev/sdi1
Both arrays now show a state of clean, and we no longer receive any alerts from our device monitoring system!
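As a final sanity check from the shell (a generic sketch, not part of the original fix), the same /proc/mdstat pattern from earlier confirms that no array is still missing members:

# Prints any array that still has an empty slot in its status brackets;
# otherwise reports that every array has its full member count
grep -B1 -E '\[[U_]*_[U_]*\]' /proc/mdstat || echo "No degraded arrays in /proc/mdstat"

This is essentially the same condition the monitoring module evaluates against /proc/mdstat.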
I was able to figure this out thanks to a number of sites and articles, which I'll list below…
- https://blog.sdbarker.com/post/synology-nas-ds1813-degraded-array-for-md0-and-md1-after-rebuild/
- https://support.unitrends.com/UnitrendsBackup/s/article/000002500
- https://www.digitalocean.com/community/tutorials/how-to-manage-raid-arrays-with-mdadm-on-ubuntu-16-04#:~:text=If%20you%20look%20at%20the,md0%20%2D%2Dremove%20%2Fdev%2Fsdc