Hoping someone can help! I've never used software RAID before, and I've noticed that my two-disk RAID 1 is showing as degraded [U_]:
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda3[0]
3905836032 blocks super 1.2 [2/1] [U_]
bitmap: 29/30 pages [116KB], 65536KB chunk
md0 : active raid1 sda2[0]
1046528 blocks super 1.2 [2/1] [U_]
bitmap: 1/1 pages [4KB], 65536KB chunk
unused devices: <none>
I'm having trouble working out whether a disk has failed or what has happened. Both disks are visible and test fine with smartctl.
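(For anyone in the same spot: a quick way to see what's missing is to compare the partition layout against the array members. This is a sketch assuming the second disk is /dev/sdb with matching sdb2/sdb3 partitions, which isn't confirmed above.)

```shell
# Show all block devices so you can spot the partition that should be
# in each array but isn't listed by mdadm --detail
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

# Check whether the absent partitions still carry md superblocks
# (hypothetical device names; substitute your actual second disk)
mdadm --examine /dev/sdb2 /dev/sdb3
```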
Here is the mdadm --detail output:
# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Mon Sep 9 17:04:24 2019
Raid Level : raid1
Array Size : 1046528 (1022.00 MiB 1071.64 MB)
Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Sep 10 16:01:36 2019
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : xxx:0 (local to host xxx)
UUID : 40713c03:b0a45738:aae59e3a:541556fe
Events : 117
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
- 0 0 1 removed
# mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Mon Sep 9 17:03:48 2019
Raid Level : raid1
Array Size : 3905836032 (3724.90 GiB 3999.58 GB)
Used Dev Size : 3905836032 (3724.90 GiB 3999.58 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Sep 10 16:51:52 2019
State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : xxx:1 (local to host xxx)
UUID : c76c221f:33fff959:e95e90d6:f350c30b
Events : 71886
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
- 0 0 1 removed
I understand I can resync the array, but I'm not even clear on what's missing.
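(For later readers: with RaidDevice 1 shown as "removed" in both arrays and the second disk passing SMART, the usual fix is to re-add its partitions to the arrays. This is a sketch assuming /dev/sdb2 and /dev/sdb3 are the intended mirror halves; the actual commands advised in this thread aren't quoted.)

```shell
# Add the second disk's partitions back into the degraded arrays;
# md will rebuild them from the surviving halves on sda
mdadm /dev/md0 --add /dev/sdb2
mdadm /dev/md1 --add /dev/sdb3

# If the members were only briefly detached, --re-add can instead use the
# write-intent bitmap to resync just the changed blocks
mdadm /dev/md1 --re-add /dev/sdb3
```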
Thanks for any advice!
--
Edit: After taking Stephen's advice, the arrays now look like this (I guess it would have helped if I could see what they looked like before):
# mdadm --detail /dev/md0
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
# mdadm --detail /dev/md1
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 spare rebuilding /dev/sdb3
Looks to have sorted the issue :)
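(Since md1 still shows /dev/sdb3 as "spare rebuilding", the resync is in progress; it can be watched until [U_] becomes [UU]:)

```shell
# Refresh the array status every few seconds; the recovery line shows
# percentage complete and an estimated finish time
watch -n 5 cat /proc/mdstat
```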