Hello,
We have a RAID10 array of four SSDs on Intel RSTe (IMSM metadata), under CentOS 6.5.
We had to re-seat the disks. Now the array is always degraded and resyncs at every boot; the status in the Ctrl+I BIOS utility is always "Rebuild", showing sdd as a non-member, but it's not possible to add it back. (sdd is completely zeroed.)
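For what it's worth, sdd really does look blank; a quick check along these lines (device name from our setup) finds no leftover metadata:

    # expect "mdadm: No md superblock detected on /dev/sdd" on a wiped disk
    mdadm --examine /dev/sdd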
The main issue: the outputs of mdadm --examine and cat /proc/mdstat disagree.
    mdadm:  [Volume0:1]: Slots : [_UU_], This Slot : 0 (out-of-sync), Map State : degraded, Dirty State : dirty
    mdstat: [4/3] [UUU_]
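(The two snippets were taken roughly like this; /dev/sda is one of the surviving members, and the device names are from our setup:)

    mdadm --examine /dev/sda   # per-disk IMSM metadata: Slots, Map State, Dirty State
    cat /proc/mdstat           # the kernel's view of the running arrays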
At reboot it starts a resync onto sda, so /proc/mdstat shows [_UU_] too while that runs, but when the resync finishes the state turns to [UUU_] and a rebuild onto sdd never starts.
We can add or remove sdd, but nothing happens. When sdd is added to md0 (the container), mdadm --examine often just hangs.
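The add/remove attempts were along these lines (md0 is the IMSM container; device names are from our setup):

    mdadm --remove /dev/md0 /dev/sdd   # drop sdd from the container
    mdadm --add /dev/md0 /dev/sdd      # add it back as a (hoped-for) spare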
When sdd is not added to md0, mdadm --examine shows it like this:

    Disk03 Serial : 8400X5800RGN:0
    State : active failed
    Id : ffffffff
We can't stop the array, as it holds the root filesystem.
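(From a rescue environment we could presumably stop and re-assemble everything, roughly:

    mdadm --stop /dev/md126   # the RAID10 volume; device name assumed
    mdadm --stop /dev/md0     # the IMSM container
    mdadm --assemble --scan

but with root mounted on the volume that's not an option for us.)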
How can we fix this?