mdadm has a nice feature that lets us replace drives with larger-capacity ones: we can swap 500 GB drives for 2 TB drives with no downtime; it just takes time. At the start we have the following setup:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465,8G 0 disk
├─sda1 8:1 0 953M 0 part
│ └─md0 9:0 0 952,4M 0 raid1 /boot
└─sda2 8:2 0 464,8G 0 part
└─md1 9:1 0 1,4T 0 raid5
├─osmhr-swap 253:0 0 4G 0 lvm [SWAP]
├─osmhr-root 253:1 0 50G 0 lvm /
└─osmhr-osm 253:2 0 1,2T 0 lvm /osm
sdb 8:16 0 465,8G 0 disk
├─sdb1 8:17 0 953M 0 part
│ └─md0 9:0 0 952,4M 0 raid1 /boot
└─sdb2 8:18 0 464,8G 0 part
└─md1 9:1 0 1,4T 0 raid5
├─osmhr-swap 253:0 0 4G 0 lvm [SWAP]
├─osmhr-root 253:1 0 50G 0 lvm /
└─osmhr-osm 253:2 0 1,2T 0 lvm /osm
sdc 8:32 0 465,8G 0 disk
├─sdc1 8:33 0 953M 0 part
│ └─md0 9:0 0 952,4M 0 raid1 /boot
└─sdc2 8:34 0 464,8G 0 part
└─md1 9:1 0 1,4T 0 raid5
├─osmhr-swap 253:0 0 4G 0 lvm [SWAP]
├─osmhr-root 253:1 0 50G 0 lvm /
└─osmhr-osm 253:2 0 1,2T 0 lvm /osm
sdd 8:48 0 465,8G 0 disk
├─sdd1 8:49 0 953M 0 part
│ └─md0 9:0 0 952,4M 0 raid1 /boot
└─sdd2 8:50 0 464,8G 0 part
└─md1 9:1 0 1,4T 0 raid5
├─osmhr-swap 253:0 0 4G 0 lvm [SWAP]
├─osmhr-root 253:1 0 50G 0 lvm /
└─osmhr-osm 253:2 0 1,2T 0 lvm /osm
We proceed and replace one drive with a larger one.
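If the old drive is still alive, the cleanest way is to mark its partitions as failed and remove them from both arrays before pulling the disk. A minimal sketch, assuming the drive being swapped is sdd:
# mdadm --manage /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1
# mdadm --manage /dev/md1 --fail /dev/sdd2 --remove /dev/sdd2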
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465,8G 0 disk
├─sda1 8:1 0 953M 0 part
│ └─md0 9:0 0 952,4M 0 raid1 /boot
└─sda2 8:2 0 464,8G 0 part
└─md1 9:1 0 1,4T 0 raid5
├─osmhr-swap 253:0 0 4G 0 lvm [SWAP]
├─osmhr-root 253:1 0 50G 0 lvm /
└─osmhr-osm 253:2 0 1,2T 0 lvm /osm
sdb 8:16 0 465,8G 0 disk
├─sdb1 8:17 0 953M 0 part
│ └─md0 9:0 0 952,4M 0 raid1 /boot
└─sdb2 8:18 0 464,8G 0 part
└─md1 9:1 0 1,4T 0 raid5
├─osmhr-swap 253:0 0 4G 0 lvm [SWAP]
├─osmhr-root 253:1 0 50G 0 lvm /
└─osmhr-osm 253:2 0 1,2T 0 lvm /osm
sdc 8:32 0 465,8G 0 disk
├─sdc1 8:33 0 953M 0 part
│ └─md0 9:0 0 952,4M 0 raid1 /boot
└─sdc2 8:34 0 464,8G 0 part
└─md1 9:1 0 1,4T 0 raid5
├─osmhr-swap 253:0 0 4G 0 lvm [SWAP]
├─osmhr-root 253:1 0 50G 0 lvm /
└─osmhr-osm 253:2 0 1,2T 0 lvm /osm
sdd 8:48 0 1,8T 0 disk
The new empty drive is now in the server, but we need to set up its partitions before adding it to the RAID arrays. First, examine the old partition table.
# fdisk -l /dev/sda
Disk /dev/sda: 465,8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x590b9494
Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 1953791 1951744 953M fd Linux raid autodetect
/dev/sda2 1953792 976771071 974817280 464,8G fd Linux raid autodetect
Create a new partition table with more space for data, but with the same size for the boot partition.
# fdisk /dev/sdd
Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x896edf9c.
Command (m for help): p
Disk /dev/sdd: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x896edf9c
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p):
Using default response p.
Partition number (1-4, default 1):
First sector (2048-3907029167, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-3907029167, default 3907029167): 1953791
Created a new partition 1 of type 'Linux' and of size 953 MiB.
Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'.
Command (m for help): n
Partition type
p primary (1 primary, 0 extended, 3 free)
e extended (container for logical partitions)
Select (default p):
Using default response p.
Partition number (2-4, default 2):
First sector (1953792-3907029167, default 1953792):
Last sector, +sectors or +size{K,M,G,T,P} (1953792-3907029167, default 3907029167):
Created a new partition 2 of type 'Linux' and of size 1,8 TiB.
Command (m for help): t
Partition number (1,2, default 2): 2
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'.
Command (m for help): p
Disk /dev/sdd: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x896edf9c
Device Boot Start End Sectors Size Id Type
/dev/sdd1 2048 1953791 1951744 953M fd Linux raid autodetect
/dev/sdd2 1953792 3907029167 3905075376 1,8T fd Linux raid autodetect
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
We can examine the partition table to be sure we did it correctly.
# fdisk -l /dev/sdd
Disk /dev/sdd: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x896edf9c
Device Boot Start End Sectors Size Id Type
/dev/sdd1 2048 1953791 1951744 953M fd Linux raid autodetect
/dev/sdd2 1953792 3907029167 3905075376 1,8T fd Linux raid autodetect
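As a side note, the same layout can be scripted with sfdisk instead of answering fdisk's prompts. A rough sketch, assuming the start sectors shown above (verify them against your own disks before writing anything):
# sfdisk /dev/sdd << 'EOF'
label: dos
start=2048, size=1951744, type=fd
start=1953792, type=fd
EOF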
Take a look at lsblk: we see the new partitions, but they are not part of the RAID arrays yet.
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465,8G 0 disk
├─sda1 8:1 0 953M 0 part
│ └─md0 9:0 0 952,4M 0 raid1 /boot
└─sda2 8:2 0 464,8G 0 part
└─md1 9:1 0 1,4T 0 raid5
├─osmhr-swap 253:0 0 4G 0 lvm [SWAP]
├─osmhr-root 253:1 0 50G 0 lvm /
└─osmhr-osm 253:2 0 1,2T 0 lvm /osm
sdb 8:16 0 465,8G 0 disk
├─sdb1 8:17 0 953M 0 part
│ └─md0 9:0 0 952,4M 0 raid1 /boot
└─sdb2 8:18 0 464,8G 0 part
└─md1 9:1 0 1,4T 0 raid5
├─osmhr-swap 253:0 0 4G 0 lvm [SWAP]
├─osmhr-root 253:1 0 50G 0 lvm /
└─osmhr-osm 253:2 0 1,2T 0 lvm /osm
sdc 8:32 0 465,8G 0 disk
├─sdc1 8:33 0 953M 0 part
│ └─md0 9:0 0 952,4M 0 raid1 /boot
└─sdc2 8:34 0 464,8G 0 part
└─md1 9:1 0 1,4T 0 raid5
├─osmhr-swap 253:0 0 4G 0 lvm [SWAP]
├─osmhr-root 253:1 0 50G 0 lvm /
└─osmhr-osm 253:2 0 1,2T 0 lvm /osm
sdd 8:48 0 1,8T 0 disk
├─sdd1 8:49 0 953M 0 part
└─sdd2 8:50 0 1,8T 0 part
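Optionally, confirm that the new partitions carry no stale RAID metadata from a previous life; on a fresh drive this should report that no md superblock was detected:
# mdadm --examine /dev/sdd1 /dev/sdd2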
We can now add them to the RAID arrays.
# mdadm --manage /dev/md0 --add /dev/sdd1
mdadm: added /dev/sdd1
# mdadm --manage /dev/md1 --add /dev/sdd2
mdadm: added /dev/sdd2
The smaller array finishes quickly, while the larger one takes some more time.
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid10]
md1 : active raid5 sdd2[4] sdc2[2] sdb2[1] sda2[0]
1461832704 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
[>....................] recovery = 0.4% (2084472/487277568) finish=73.7min speed=109709K/sec
bitmap: 2/4 pages [8KB], 65536KB chunk
md0 : active raid1 sdd1[4] sdc1[2] sdb1[1] sda1[0]
975296 blocks super 1.2 [4/4] [UUUU]
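If the rebuild is too slow for your taste, the kernel's resync speed floor can be raised while we wait. The value is in KiB/s and purely illustrative:
# echo 100000 > /proc/sys/dev/raid/speed_limit_min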
After that, the drive shows up in both RAID arrays.
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465,8G 0 disk
├─sda1 8:1 0 953M 0 part
│ └─md0 9:0 0 952,4M 0 raid1 /boot
└─sda2 8:2 0 464,8G 0 part
└─md1 9:1 0 1,4T 0 raid5
├─osmhr-swap 253:0 0 4G 0 lvm [SWAP]
├─osmhr-root 253:1 0 50G 0 lvm /
└─osmhr-osm 253:2 0 1,2T 0 lvm /osm
sdb 8:16 0 465,8G 0 disk
├─sdb1 8:17 0 953M 0 part
│ └─md0 9:0 0 952,4M 0 raid1 /boot
└─sdb2 8:18 0 464,8G 0 part
└─md1 9:1 0 1,4T 0 raid5
├─osmhr-swap 253:0 0 4G 0 lvm [SWAP]
├─osmhr-root 253:1 0 50G 0 lvm /
└─osmhr-osm 253:2 0 1,2T 0 lvm /osm
sdc 8:32 0 465,8G 0 disk
├─sdc1 8:33 0 953M 0 part
│ └─md0 9:0 0 952,4M 0 raid1 /boot
└─sdc2 8:34 0 464,8G 0 part
└─md1 9:1 0 1,4T 0 raid5
├─osmhr-swap 253:0 0 4G 0 lvm [SWAP]
├─osmhr-root 253:1 0 50G 0 lvm /
└─osmhr-osm 253:2 0 1,2T 0 lvm /osm
sdd 8:48 0 1,8T 0 disk
├─sdd1 8:49 0 953M 0 part
│ └─md0 9:0 0 952,4M 0 raid1 /boot
└─sdd2 8:50 0 1,8T 0 part
└─md1 9:1 0 1,4T 0 raid5
├─osmhr-swap 253:0 0 4G 0 lvm [SWAP]
├─osmhr-root 253:1 0 50G 0 lvm /
└─osmhr-osm 253:2 0 1,2T 0 lvm /osm
And the array is back from its degraded state.
# mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Sun May 6 16:47:17 2018
Raid Level : raid5
Array Size : 1461832704 (1394.11 GiB 1496.92 GB)
Used Dev Size : 487277568 (464.70 GiB 498.97 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Oct 19 14:40:25 2021
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : bitmap
Name : data:1
UUID : 9b735c42:5691ab77:0e4f2403:f7d79bdd
Events : 41507
Number Major Minor RaidDevice State
7 8 2 0 active sync /dev/sda2
6 8 18 1 active sync /dev/sdb2
5 8 34 2 active sync /dev/sdc2
4 8 50 3 active sync /dev/sdd2
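One caveat before moving on: the new drive's MBR contains no boot code yet, so if the machine boots from these disks, reinstall the bootloader on every replaced drive. Assuming GRUB is used:
# grub-install /dev/sdd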
Now repeat the same procedure for the other three drives, one by one, until all four are the larger ones.
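Each rebuild has to finish before the next disk comes out; between swaps, mdadm can simply block until recovery completes:
# mdadm --wait /dev/md1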
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1,8T 0 disk
├─sda1 8:1 0 953M 0 part
│ └─md0 9:0 0 952,4M 0 raid1 /boot
└─sda2 8:2 0 1,8T 0 part
└─md1 9:1 0 1,4T 0 raid5
├─osmhr-swap 253:0 0 4G 0 lvm [SWAP]
├─osmhr-root 253:1 0 50G 0 lvm /
└─osmhr-osm 253:2 0 1,2T 0 lvm /osm
sdb 8:16 0 1,8T 0 disk
├─sdb1 8:17 0 953M 0 part
│ └─md0 9:0 0 952,4M 0 raid1 /boot
└─sdb2 8:18 0 1,8T 0 part
└─md1 9:1 0 1,4T 0 raid5
├─osmhr-swap 253:0 0 4G 0 lvm [SWAP]
├─osmhr-root 253:1 0 50G 0 lvm /
└─osmhr-osm 253:2 0 1,2T 0 lvm /osm
sdc 8:32 0 1,8T 0 disk
├─sdc1 8:33 0 953M 0 part
│ └─md0 9:0 0 952,4M 0 raid1 /boot
└─sdc2 8:34 0 1,8T 0 part
└─md1 9:1 0 1,4T 0 raid5
├─osmhr-swap 253:0 0 4G 0 lvm [SWAP]
├─osmhr-root 253:1 0 50G 0 lvm /
└─osmhr-osm 253:2 0 1,2T 0 lvm /osm
sdd 8:48 0 1,8T 0 disk
├─sdd1 8:49 0 953M 0 part
│ └─md0 9:0 0 952,4M 0 raid1 /boot
└─sdd2 8:50 0 1,8T 0 part
└─md1 9:1 0 1,4T 0 raid5
├─osmhr-swap 253:0 0 4G 0 lvm [SWAP]
├─osmhr-root 253:1 0 50G 0 lvm /
└─osmhr-osm 253:2 0 1,2T 0 lvm /osm
Now we have larger partitions, but the RAID array is still the same size.
# mdadm -D /dev/md1 | grep -e "Array Size" -e "Dev Size"
Array Size : 1461832704 (1394.11 GiB 1496.92 GB)
Used Dev Size : 487277568 (464.70 GiB 498.97 GB)
Let it grow, let it grow, let it grow
# mdadm --grow /dev/md1 --size max
mdadm: component size of /dev/md1 has been set to 1952406528K
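Growing the component size makes md synchronise the newly exposed space, so expect one more recovery pass in /proc/mdstat before the extra capacity is fully redundant:
# cat /proc/mdstat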
Now we have the right sizes in mdadm.
# mdadm -D /dev/md1 | grep -e "Array Size" -e "Dev Size"
Array Size : 5857219584 (5585.88 GiB 5997.79 GB)
Used Dev Size : 1952406528 (1861.96 GiB 1999.26 GB)
The only thing left to fix is LVM.
# pvscan
PV /dev/md1 VG osmhr lvm2 [1,36 TiB / <140,11 GiB free]
Total: 1 [1,36 TiB] / in use: 1 [1,36 TiB] / in no VG: 0 [0 ]
Resize PV
# pvresize /dev/md1
Physical volume "/dev/md1" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
Results
# pvscan
PV /dev/md1 VG osmhr lvm2 [5,45 TiB / 4,23 TiB free]
Total: 1 [5,45 TiB] / in use: 1 [5,45 TiB] / in no VG: 0 [0 ]
List LV
# lvscan
ACTIVE '/dev/osmhr/swap' [4,00 GiB] inherit
ACTIVE '/dev/osmhr/root' [50,00 GiB] inherit
ACTIVE '/dev/osmhr/osm' [1,17 TiB] inherit
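As an aside, if you wanted to give all of the free space to the logical volume in one go, lvextend also accepts extents; below we grow to a fixed 2 TiB instead, keeping some headroom:
# lvextend -l +100%FREE /dev/osmhr/osm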
Extend LVM
# lvextend -L2T /dev/osmhr/osm
Size of logical volume osmhr/osm changed from 1,17 TiB (307200 extents) to 2,00 TiB (524288 extents).
Logical volume osmhr/osm successfully resized.
Result
# lvscan
ACTIVE '/dev/osmhr/swap' [4,00 GiB] inherit
ACTIVE '/dev/osmhr/root' [50,00 GiB] inherit
ACTIVE '/dev/osmhr/osm' [2,00 TiB] inherit
Resize file system
# resize2fs /dev/osmhr/osm
resize2fs 1.44.1 (24-Mar-2018)
Filesystem at /dev/osmhr/osm is mounted on /osm; on-line resizing required
old_desc_blocks = 75, new_desc_blocks = 128
The filesystem on /dev/osmhr/osm is now 536870912 (4k) blocks long.
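For next time: lvextend can combine the last two steps and grow the filesystem right after the logical volume with its --resizefs flag:
# lvextend -r -L2T /dev/osmhr/osm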
Now we have more free space to use.