Failed drive in RAID5 - entire array disappeared

This started as a help post, but while I was typing it up, I got things working. My array is rebuilding. But maybe my process will help someone else.

I've got OMV running on a Thecus N5550. It's been running really well. Definitely better than the abandoned Thecus OS.

Last weekend one of the drives in my RAID5 array died. Won't spin up, makes a clicking noise, isn't picked up in the BIOS: dead. I primarily use the NAS to back up my photos, so I don't need it every day. I shut the system down and ordered a replacement drive. I have the replacement drive now, but I can't get the array online at all.

Here's what I know: the 4 remaining disks are present and the system detects them; I can see them on the Disks page. The RAID Management page shows no arrays, but there should be 3. (The 3 carry over from the Thecus OS: one is the storage, one is swap, and I can't remember the third.) The only option highlighted on the page is "create".
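If you hit this state, it's worth confirming the md superblocks survived on the remaining members before trying anything in the GUI. A read-only sketch; the device names match my system and may differ on yours, and the script is guarded by a `RUN_ON_NAS` variable so it only runs the mdadm queries when you deliberately enable it:

```shell
# Hedged sketch: inspect the md superblocks on the surviving members.
# Read-only. Set RUN_ON_NAS=yes on the NAS itself to execute the
# mdadm calls; pasted anywhere else it just prints a reminder.
if [ "${RUN_ON_NAS:-no}" = yes ]; then
    # Every partition carrying an md superblock, and the array each claims.
    mdadm --examine --scan

    # Per-member detail: array UUID, device role, and the Events
    # counter, which must agree across members for a clean assembly.
    for p in /dev/sd[bcde]2; do
        echo "== $p =="
        mdadm --examine "$p" | grep -E 'Array UUID|Device Role|Events'
    done
else
    echo "set RUN_ON_NAS=yes to run the mdadm queries"
fi
```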

From the command line, I can see the following:

root@omv-1:~# cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : inactive sdc2[1](S) sdd2[2](S) sde2[3](S) sdb2[0](S)
      3896504320 blocks super 1.2

md50 : inactive sdc3[1](S) sde3[3](S) sdd3[2](S) sdb3[0](S)
      2097104 blocks super 1.2

md10 : inactive sde1[3](S) sdc1[1](S) sdd1[2](S) sdb1[0](S)
      8384512 blocks super 1.2

unused devices: <none>
root@omv-1:~# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 4
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 4

              Name : N5550:0
              UUID : 796e302e:3e9738a1:af284c2e:c3460593
            Events : 134

    Number   Major   Minor   RaidDevice

       -       8       66        -        /dev/sde2
       -       8       50        -        /dev/sdd2
       -       8       34        -        /dev/sdc2
       -       8       18        -        /dev/sdb2

Similar output for the other 2 arrays.
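The reason plain assembly fails in this situation is usually a mismatch in the per-member Events counters (the forced assembly output further down shows sde2 stuck at 134 while the rest of the array had moved on to 136806). A small sketch of the comparison you'd do before reaching for --force, using the counts from my case; on a live system the numbers come from `mdadm --examine /dev/sdX2 | grep Events`:

```shell
# Hedged sketch: compare member event counts before forcing assembly.
# The four values are from this post (sde2 lagging at 134); substitute
# the output of `mdadm --examine` on your own members.
set -- 136806 136806 136806 134

# Find the highest (most up-to-date) event count.
max=0
for n in "$@"; do
    if [ "$n" -gt "$max" ]; then max=$n; fi
done

# Any member below the maximum holds stale data; mdadm will balk at
# a normal assembly and --force will bump that member's counter.
mismatch=0
for n in "$@"; do
    if [ "$n" -ne "$max" ]; then mismatch=1; fi
done

if [ "$mismatch" -eq 1 ]; then
    echo "event counts differ: normal assembly will refuse; --force is needed"
else
    echo "event counts agree: a plain --assemble should work"
fi
```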

I tried a number of fixes I'd seen in different places, and every one kicked back errors. I finally got the array back online with:

root@omv-1:~# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
root@omv-1:~# mdadm -A --force /dev/md0 /dev/sd[bcde]2
mdadm: forcing event count in /dev/sde2(3) from 134 upto 136806
mdadm: clearing FAULTY flag for device 3 in /dev/md0 for /dev/sde2
mdadm: Marking array /dev/md0 as 'clean'
mdadm: /dev/md0 has been started with 4 drives (out of 5).

After this, the array shows up in the GUI again. I put the new drive in and the array is currently rebuilding. But it looks like another drive is about to bite the dust, so I'd better order another replacement.
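For anyone following along, the steps for bringing the replacement disk into the degraded arrays look roughly like this. Treating /dev/sda as the blank replacement is an assumption (check `lsblk` first), and the md0/md10/md50 partition mapping follows the mdstat output above; the script is guarded so it only acts when you set `RUN_ON_NAS=yes` on the NAS:

```shell
# Hedged sketch: partition the replacement disk, add it to each array,
# and check on the rebuild. /dev/sda as the new disk is an assumption.
if [ "${RUN_ON_NAS:-no}" = yes ]; then
    # Copy the partition layout from a surviving member (sdb here)
    # onto the blank replacement.
    sfdisk -d /dev/sdb | sfdisk /dev/sda

    # Add each new partition to its array; the md numbers and
    # partition numbers follow the /proc/mdstat output above.
    mdadm --add /dev/md10 /dev/sda1
    mdadm --add /dev/md0  /dev/sda2
    mdadm --add /dev/md50 /dev/sda3

    # Watch the resync progress.
    cat /proc/mdstat

    # And check the disk that sounds like it is failing next
    # (smartctl is from the smartmontools package).
    smartctl -H -A /dev/sde
else
    echo "set RUN_ON_NAS=yes to run the rebuild commands"
fi
```

The sfdisk round-trip saves you from recreating the three-partition Thecus layout by hand, but double-check the source and destination devices before running it: reversed, it would overwrite a healthy member.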
