---
title: mdadm — Rebuilding a RAID Array After Reinstall
domain: linux
category: storage
tags:
status: published
created: 2026-04-02
updated: 2026-04-02
---
# mdadm — Rebuilding a RAID Array After Reinstall
If you reinstall the OS on a machine that has an existing mdadm RAID array, the array metadata is still on the disks — you just need to reassemble it. The data isn't gone unless you've overwritten the member disks.
## The Short Answer
```bash
# Scan for existing arrays
sudo mdadm --assemble --scan

# Check what was found
cat /proc/mdstat
```
If that works, your array is back. If not, you'll need to manually identify the member disks and reassemble.
## Step-by-Step Recovery

### 1. Identify the RAID member disks
```bash
# Show mdadm superblock info on each disk/partition
sudo mdadm --examine /dev/sda1
sudo mdadm --examine /dev/sdb1

# Or scan all devices at once
sudo mdadm --examine --scan
```
Look for matching `Array UUID` fields — devices with the same array UUID belong to the same array.
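The UUID comparison can be done by eye, but a short loop makes it obvious which devices group together. A minimal sketch, assuming the device names are examples you would swap for your own:

```shell
# Print "device uuid" pairs, sorted by UUID, so members of the same
# array end up on adjacent lines. Device names here are placeholders.
for dev in /dev/sda1 /dev/sdb1 /dev/sdc1; do
  uuid=$(sudo mdadm --examine "$dev" 2>/dev/null | awk '/Array UUID/ {print $4}')
  echo "$dev $uuid"
done | sort -k2
```

The awk pattern relies on `mdadm --examine` printing a line of the form `Array UUID : <uuid>`, where the UUID is the fourth whitespace-separated field.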
### 2. Reassemble the array
```bash
# Assemble from specific devices
sudo mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1

# Or let mdadm figure it out from superblocks
sudo mdadm --assemble --scan
```
### 3. Verify the array state
```bash
cat /proc/mdstat
sudo mdadm --detail /dev/md0
```
You want to see `State : active` (or `active, degraded` if a disk is missing). If degraded, the array is still usable but should be rebuilt.
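For a scriptable check, `/proc/mdstat` encodes member health in a bracketed status string: `[UU]` means all members are up, while an underscore (for example `[U_]`) marks a missing member. A minimal sketch:

```shell
# Flag any array with a missing member. The status string in
# /proc/mdstat shows one U per healthy member and _ per missing one.
if grep -q '\[U*_U*\]' /proc/mdstat; then
  echo "degraded: at least one array is missing a member"
else
  echo "all arrays healthy"
fi
```

This is handy in a cron job or monitoring hook, where parsing `mdadm --detail` output would be more work.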
### 4. Update `mdadm.conf` so it persists across reboots
```bash
# Append the detected arrays to the config
# (on Debian/Ubuntu the file is /etc/mdadm/mdadm.conf instead)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf

# Fedora/RHEL — rebuild the initramfs so the array is found at boot
sudo dracut --force

# Debian/Ubuntu — update the initramfs
sudo update-initramfs -u
```
### 5. Mount the filesystem
```bash
# Check the filesystem
sudo fsck /dev/md0

# Mount
sudo mount /dev/md0 /mnt/raid

# Add to fstab for auto-mount
echo '/dev/md0 /mnt/raid ext4 defaults 0 2' | sudo tee -a /etc/fstab
```
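One caveat with the fstab line above: md device numbers are not guaranteed stable across reboots (an array assembled without a matching `mdadm.conf` entry often comes up as `/dev/md127`). Mounting by filesystem UUID sidesteps that. A sketch, assuming `/mnt/raid` and ext4 as in the example above:

```shell
# Look up the filesystem UUID on the array and mount by UUID instead
# of by md device name, which can change across reboots.
fsuuid=$(sudo blkid -s UUID -o value /dev/md0)
echo "UUID=$fsuuid /mnt/raid ext4 defaults 0 2" | sudo tee -a /etc/fstab
```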
## Rebuilding a Degraded Array
If a disk failed or was replaced:
```bash
# Add the new disk to the existing array
sudo mdadm --manage /dev/md0 --add /dev/sdc1

# Watch the rebuild progress
watch cat /proc/mdstat
```
Rebuild time depends on array size and disk speed. The array is usable during rebuild but with degraded performance.
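If you want the progress as a number rather than the `watch` display, the kernel exposes it under sysfs: `/sys/block/md0/md/sync_completed` reports `done / total` in sectors while a rebuild or resync is running, and `none` when idle. A sketch:

```shell
# Print rebuild progress as a percentage from sysfs.
# sync_completed contains "done / total" sectors, or "none" when idle.
read -r done _ total < /sys/block/md0/md/sync_completed
if [ "$done" != "none" ]; then
  awk -v d="$done" -v t="$total" 'BEGIN { printf "%.1f%% complete\n", d * 100 / t }'
else
  echo "no rebuild in progress"
fi
```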
## Gotchas & Notes
- Don't `--create` when you mean `--assemble`. `--create` initializes a new array and will overwrite existing superblocks; `--assemble` brings an existing array back online.
- Superblock versions matter. Modern mdadm uses 1.2 superblocks by default. If the array was created with an older version, specify `--metadata=0.90` during assembly.
- RAID is not a backup. mdadm protects against disk failure, not against accidental deletion, ransomware, or filesystem corruption. Pair it with rsync or Restic for actual backups.
- Check SMART status on all member disks after a reinstall. If you're reassembling because a disk failed, make sure the remaining disks are healthy.
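That SMART check can be a quick loop with `smartctl` from smartmontools, which prints an overall `PASSED`/`FAILED` self-assessment. A sketch, with placeholder device names:

```shell
# One-line health verdict per member disk (device names are examples).
# smartctl -H prints the drive's overall-health self-assessment.
for disk in /dev/sda /dev/sdb; do
  printf '%s: ' "$disk"
  sudo smartctl -H "$disk" | grep -E 'overall-health|PASSED|FAILED'
done
```

A `PASSED` verdict is necessary but not sufficient; if a rebuild is imminent, it is worth also eyeballing reallocated-sector and pending-sector counts in `smartctl -A` output.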
Reference: mdadm — How to rebuild RAID array after fresh install (Unix & Linux Stack Exchange)