I previously created a RAID 1 array using mdadm. One of the drives is failing and I have removed it. I would like to continue using the remaining disk while waiting for a replacement, but I no longer have the command I originally used to create the array.

The remaining working disk looks like this:

# fdisk -l /dev/sda
Disk /dev/sda: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: ST8000VN004-3CP1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: ABC-123-etc

As far as I can tell there are no partitions, but I'm not certain: fdisk reports a GPT disklabel even though it lists no partition entries.
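
For what it's worth, I assume standard tools like these would confirm whether any partitions actually exist (I haven't run them yet):

# parted /dev/sda print
# lsblk /dev/sda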


I tried recreating the array with the following command:

# mdadm --create --assume-clean --level=1 --raid-devices=1 /dev/md0 /dev/sda
mdadm: '1' is an unusual number of drives for an array, so it is probably
     a mistake.  If you really mean it you will need to specify --force before
     setting the number of drives.

Makes sense! So I tried again with --force:

# mdadm --create --assume-clean --level=1 --raid-devices=1 /dev/md0 /dev/sda --force
mdadm: partition table exists on /dev/sda
mdadm: partition table exists on /dev/sda but will be lost or
       meaningless after creating array
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? n
mdadm: create aborted.

The system boots from a different physical device; this array is purely for storage.

Output of tail /etc/mdadm/mdadm.conf:

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This configuration was auto-generated on Fri, 17 Feb 2023 17:23:17 +0000 by mkconf
ARRAY /dev/md0 metadata=1.2 name=nas:0 UUID=d9953244:010850f9:e87c43c8:abc3e1f7
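
Since mdadm.conf still records the array's UUID, I assume I could try assembling it explicitly with something along these lines, though given the --examine result below I expect mdadm to find no superblock:

# mdadm --assemble /dev/md0 --uuid=d9953244:010850f9:e87c43c8:abc3e1f7 /dev/sda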

Output of cat /proc/mdstat with only one drive attached:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>

Running mdadm --examine --scan as root produces no output.
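
I'm guessing I could also probe the bare device directly to see whether any md superblock or filesystem signature survives, along the lines of:

# mdadm --examine /dev/sda
# blkid -p /dev/sda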


1. Is there a way to create an array from this single physical disk?
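
For question 1, I'm guessing at a degraded creation with the literal word missing in place of the absent drive, but I gather that recreating over existing data can destroy it if the new layout doesn't match the original, so I haven't dared run this:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda missing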

2. When I receive the replacement drive, how would I go about restoring this to a 2 disk RAID 1 array?
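
For question 2, I assume the replacement would be added back with something like this (assuming the new disk appears as /dev/sdb), after which mdadm should resync the mirror onto it:

# mdadm --manage /dev/md0 --add /dev/sdb
# cat /proc/mdstat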

  • Do you have any logs that show how this array got assembled in the past? Was mdadm using the whole disk or a partition? If mdadm metadata is lost (overwritten by partition table), you want to find the filesystem offset first, testdisk might help... if you actually recreate it (preferably using partitions not whole disk), you can use raid-devices=2 and put the 2nd drive as missing, see also Should I use mdadm --create. If you want to use it as single drive long term, just create a partition at the filesystem offset. Commented Feb 9, 2024 at 21:53
  • Where would I look for the logs? If you specifically mean the shell command I used to create the array, then no, I don't have that Commented Feb 9, 2024 at 22:07
  • Stop! If you get the commands wrong you'll wreck the remaining data drive Commented Feb 9, 2024 at 22:19
  • Is there any way to query the metadata to figure out what commands were used to create the array? Commented Feb 9, 2024 at 22:21
  • Please tell us in your question, 1. Did your system previously boot successfully and start the RAID1 when there were two drives? 2. Does your system boot with only one of the two drives (it should)? 3. What is the tail of /etc/mdadm/mdadm.conf? 4. What does cat /proc/mdstat show? 5. What about mdadm --examine --scan? Commented Feb 9, 2024 at 22:27
