17th Jul2012
Author: Gyro

Resize/Shrink Software RAID1 FileSystem/Volume/Partition and set up LVM on the free disk space created

This is a guide on how to shrink a Software RAID1 filesystem, volume, and partition on a dedicated/remote server. I added the sources that helped me complete this task at the bottom of this post. It took about 2 days to get this sorted out, mainly because I am a jack of all trades, and a master of none. :crazy:

This guide on how to resize a software RAID1 should work on most Linux distributions. I used a dedicated server located in Strassburg running Ubuntu Server 12.04, and I connected to it via SSH from Asia. :wooty:

Here is my setup prior to doing this:

Two 2 TB hard disks (sda/sdb) with 3 partitions each
sda1/sdb1 -- md0 -- /boot
sda2/sdb2 -- md1 -- swap
sda3/sdb3 -- md2 -- /
md2 uses pretty much all the space.

Here is the setup after completing the steps below:
Two 2 TB hard disks (sda/sdb) with 4 partitions each
sda1/sdb1 -- md0 -- /boot
sda2/sdb2 -- md1 -- swap
sda3/sdb3 -- md2 -- / (~320GB)
sda4/sdb4 -- md3 -- LVM "nova-volumes" (~1650GB)

Make sure you have the needed packages installed:
# sudo apt-get install lvm2 dmsetup mdadm

Boot the server in rescue mode, then do:
# sudo -s -H
# mdadm -A --scan

mdadm: /dev/md/0 has been started with 2 drives.
mdadm: /dev/md/1 has been started with 2 drives.
mdadm: /dev/md/2 has been started with 2 drives.

We have 3 RAID arrays; md2 is the one we need to resize.
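If you want to double-check which physical partitions belong to md2 before touching anything, mdadm can list the members (just a sanity check, the exact output will differ on your system):
# mdadm --detail /dev/md2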

Make sure it is the correct partition:
# fdisk -l /dev/md2

Disk /dev/md2: 1989.4 GB, 1989370830848 bytes
2 heads, 4 sectors/track, 485686238 cylinders, total 3885489904 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md2 doesn't contain a valid partition table

Almost 2 TB, looks good.

You may have to unmount the RAID partition:
# umount /dev/md2
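If you are not sure whether md2 is mounted at all, a quick check does no harm (no output simply means it is not mounted):
# mount | grep md2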

Mark sda3 as faulty in md2. The 3rd partition is sda3 while the 3rd RAID volume is md2, thanks to some geeks who can't agree whether counting starts at zero or at one… anyway:
# mdadm --manage /dev/md2 --fail /dev/sda3

mdadm: set /dev/sda3 faulty in /dev/md2

Remove the partition from the array:
# mdadm --manage /dev/md2 --remove /dev/sda3
mdadm: hot removed /dev/sda3 from /dev/md2

This will prevent any possible sync from starting while we work on the filesystem etc.
We will also need to repartition sda3, so it needs to be out of the RAID array.
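Before continuing you can confirm that md2 is now running degraded on sdb3 alone (purely optional):
# mdadm --detail /dev/md2
The state line should report the array as degraded, with only /dev/sdb3 listed as an active device.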

Reduce the size of the filesystem on md2, for now to 280 GB, so it fits inside the
resized RAID volume later:
# e2fsck -f /dev/md2
# resize2fs -f /dev/md2 280G

Resizing the filesystem on /dev/md/2 to 73400320 (4k) blocks.
The filesystem on /dev/md/2 is now 73400320 blocks long.

Reduce the size of the RAID volume to 305 GB, so it will fit in the new 320 GB partition later.
To get the right size, calculate 305 x 1024 x 1024 = 319815680.
# mdadm --grow /dev/md2 --size=319815680

mdadm: component size of /dev/md2 has been set to 319815680K
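If you want to double-check the arithmetic, the shell can do it for you; mdadm's --size argument is given in kibibytes, so 305 GiB works out to:
# echo $((305 * 1024 * 1024))
319815680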

Delete the sda3 partition and recreate it with a size of 320 GB. I used cfdisk for this, which is
pretty much self-explanatory… there is also plenty of help available on deleting/creating partitions.
# cfdisk /dev/sda
I set the partition type to FD (Linux Raid autodetect) and the size to 343599 MB, which is
slightly bigger than the RAID filesystem would be when set to 320 GB.
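Before adding the partition back, it doesn't hurt to make the kernel re-read the new partition table and to verify the new size of sda3 (the same blockdev call is used again further down):
# blockdev --rereadpt /dev/sda
# fdisk -l /dev/sda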

Add the new 320GB partition to the array, and let it synchronize.
# mdadm --manage /dev/md2 --add /dev/sda3

mdadm: added /dev/sda3

ONLY if you get this instead:
mdadm: /dev/sda3 reports being an active member for /dev/md2, but a --re-add fails.
mdadm: not performing --add as that would convert /dev/sda3 in to a spare.
mdadm: To make this a spare, use "mdadm --zero-superblock /dev/sda3" first.

Do this:
# mdadm --zero-superblock /dev/sda3
and then again:
# mdadm --manage /dev/md2 --add /dev/sda3
You may have to do the same for sdb3 later.

Watch the sync progress, as you have to waaaaaaaiiiiiit for it to finish:
# watch cat /proc/mdstat

md2 : active raid1 sda3[2] sdb3[1]
319815680 blocks super 1.2 [2/1] [_U]
[==>..................] recovery = 14.7% (47026560/319815680) finish=43.0min speed=111643K/sec

Press Ctrl+c to stop watching it…
Once the sync is done, it should look like this:

# cat /proc/mdstat

Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
md2 : active raid1 sda3[2] sdb3[1]
319815680 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
9999288 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
766964 blocks super 1.2 [2/2] [UU]
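If you would rather block until the recovery has finished instead of re-running watch, mdadm can also wait for you; the command simply returns once the resync is done (the same trick works for the second sync below):
# mdadm --wait /dev/md2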

The next step is to mark sdb3 as faulty in md2:
# mdadm --manage /dev/md2 --fail /dev/sdb3

mdadm: set /dev/sdb3 faulty in /dev/md2

Remove the partition from the array:
# mdadm --manage /dev/md2 --remove /dev/sdb3
mdadm: hot removed /dev/sdb3 from /dev/md2

Delete the sdb3 partition and recreate it with the size of 343599 MB to match sda3.
# cfdisk /dev/sdb

Add the new partition to the array, and let it synchronize.
# mdadm --manage /dev/md2 --add /dev/sdb3

mdadm: added /dev/sdb3

Watch the sync progress, as you have to wait for it to finish, agaaaaaaaiiiin:
# watch cat /proc/mdstat
Press Ctrl+c to stop watching it…
Once the sync is done, it should look like this:
# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
md2 : active raid1 sda3[0] sdb3[1]
327155712 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
9999288 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
766964 blocks super 1.2 [2/2] [UU]

Make sure the new partition size is recognized:
# blockdev --rereadpt /dev/sda
# blockdev --rereadpt /dev/sdb
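If blockdev refuses because the device is busy, partprobe (from the parted package, assuming it is available in your rescue system) is an alternative way to make the kernel pick up the new table:
# partprobe /dev/sda
# partprobe /dev/sdb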

Make the RAID volume use up the maximum space on the partition:
# mdadm --grow /dev/md2 --size=max

mdadm: component size of /dev/md2 has been set to 335548631K

Run a file system check:
# e2fsck -f /dev/md2

/dev/md2: 54841/20316160 files (0.3% non-contiguous), 1558294/81264640 blocks

Increase the size of the filesystem to take up all the space of the volume:
# resize2fs -f /dev/md2

Resizing the filesystem on /dev/md/2 to 83887157 (4k) blocks.
The filesystem on /dev/md/2 is now 83887157 blocks long.

Check the filesystem again:
# e2fsck -f /dev/md2
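If you want to confirm the final filesystem size without mounting anything, the superblock can be inspected; the block count should match the resize2fs output above:
# dumpe2fs -h /dev/md2 | grep -E 'Block count|Block size'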

This is where you would be done with the resizing and could boot in normal mode. If you have trouble booting in normal mode, have a look at the orange box below about reassembling and array UUIDs.
Do not reboot yet if you are going to set up the LVM as well; just continue below.

Use the newly available space to set up another RAID array for LVM.

Start setting up the LVM for nova by creating the partitions for md3 on both drives:
# cfdisk /dev/sda
# cfdisk /dev/sdb
I set the partitions up to use the maximum space available.

Check that all partitions are OK:
# fdisk -l /dev/sda /dev/sdb

Device Boot Start End Blocks Id System

/dev/sda1 2048 1535999 766976 fd Linux raid autodetect
/dev/sda2 1536000 21536767 10000384 fd Linux raid autodetect
/dev/sda3 21536768 692636077 335549655 fd Linux raid autodetect
/dev/sda4 692636078 3907029167 1607196545 fd Linux raid autodetect
Partition 4 does not start on physical sector boundary.

Device Boot Start End Blocks Id System
/dev/sdb1 2048 1535999 766976 fd Linux raid autodetect
/dev/sdb2 1536000 21536767 10000384 fd Linux raid autodetect
/dev/sdb3 21536768 692636077 335549655 fd Linux raid autodetect
/dev/sdb4 692636078 3907029167 1607196545 fd Linux raid autodetect
Partition 4 does not start on physical sector boundary.

Create the new array with one partition for now (the second slot stays "missing"):
# mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda4 missing

Add sdb4 to the array
# mdadm --manage --add /dev/md3 /dev/sdb4

mdadm: added /dev/sdb4
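The new array will start syncing in the background. You don't have to wait for it before continuing with the LVM setup, but you can keep an eye on it the same way as before:
# cat /proc/mdstat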

Make md3 an LVM physical volume:
# pvcreate /dev/md3

Physical volume "/dev/md3" successfully created

Create the "nova-volumes" volume group on the new physical volume:
# vgcreate nova-volumes /dev/md3

Volume group "nova-volumes" successfully created
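If you want to verify the volume group, or try carving out a logical volume by hand, something like this should work ("testlv" and the 10G size are just examples; remove the test volume again so nova keeps the full space):
# vgdisplay nova-volumes
# lvcreate -L 10G -n testlv nova-volumes
# lvs
# lvremove /dev/nova-volumes/testlv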

I had to reassemble the array while creating this tutorial, which resulted in a broken GRUB due to a change of the UUID of the md2 array. On top of that I couldn't chroot, so manual patching had to be done.

Only do these steps if you can't boot normally, or if you know for a fact that you reassembled your RAID array(s).

Update mdadm.conf and grub.cfg, but first get the UUIDs:
# blkid

/dev/sda1: UUID="5910cbc1-0b5c-044c-b286-f447ab78e5d9" UUID_SUB="823dde44-5e73-6a6d-7e8b-de25d1423fb0" LABEL="puck538:0" TYPE="linux_raid_member"
/dev/sda2: UUID="42679326-1f8a-b1e6-fa6e-6c3777a5caf8" UUID_SUB="bdc9307d-0d92-5062-a04d-41a48a8a2c60" LABEL="puck538:1" TYPE="linux_raid_member"
/dev/sda3: UUID="443dbe36-1d80-5344-3989-4c73f56e7dd2" UUID_SUB="42a10b71-9bd6-11a4-517d-b198858e8b06" LABEL="puck538:2" TYPE="linux_raid_member"
/dev/sda4: UUID="c0c5c51e-5956-3116-6fee-ccd193564574" UUID_SUB="08698000-adab-10a1-da9d-7209707a5776" LABEL="puck538:3" TYPE="linux_raid_member"
/dev/sdb1: UUID="5910cbc1-0b5c-044c-b286-f447ab78e5d9" UUID_SUB="626284bb-c30c-8474-1f8f-f756b62a2814" LABEL="puck538:0" TYPE="linux_raid_member"
/dev/sdb2: UUID="42679326-1f8a-b1e6-fa6e-6c3777a5caf8" UUID_SUB="9ddfd438-7e6c-9181-1499-13a29c42adc3" LABEL="puck538:1" TYPE="linux_raid_member"
/dev/sdb3: UUID="443dbe36-1d80-5344-3989-4c73f56e7dd2" UUID_SUB="099d4e22-03c0-1d4b-3d15-64f396a84d36" LABEL="puck538:2" TYPE="linux_raid_member"
/dev/sdb4: UUID="c0c5c51e-5956-3116-6fee-ccd193564574" UUID_SUB="fa918a45-47c4-6444-e912-16951f2191a0" LABEL="puck538:3" TYPE="linux_raid_member"
/dev/md0: UUID="5c6eb879-6d96-487f-8523-90c40ee21a3e" TYPE="ext2"
/dev/md1: UUID="f92ff19b-aee4-4a94-a225-df1d6a2454b0" TYPE="swap"
/dev/md2: UUID="e06f4042-f285-4cf8-846b-59539d6b00ac" TYPE="ext4"
/dev/md3: UUID="LRX6l7-mOSx-TOm6-nq9M-CUKt-Vrem-YivLWw" TYPE="LVM2_member"
Mount /boot (md0) and check the UUID for md2 in grub.cfg
# mkdir /md0
# mount /dev/md0 /md0
# nano /md0/grub/grub.cfg
Search for the first "set root" and update the UUID to the new one you see behind /dev/sda3: and /dev/sdb3:.
You have to delete the dashes in the UUID, so it becomes one long string.

Example:
/dev/sda3: UUID="443dbe36-1d80-5344-3989-4c73f56e7dd2" becomes "443dbe361d80534439894c73f56e7dd2"

OLD: set root='(mduuid/297096e0a18b36c10f1c40c0bdd4225b)'
NEW: set root='(mduuid/443dbe361d80534439894c73f56e7dd2)'
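If you don't feel like stripping the dashes by hand, tr can do it for you (paste your own UUID, of course):
# echo 443dbe36-1d80-5344-3989-4c73f56e7dd2 | tr -d '-'
443dbe361d80534439894c73f56e7dd2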

Mount / (md2) and update the UUID for md2 in mdadm.conf; also add md3 to the list:
# mkdir /md2
# mount /dev/md2 /md2
# nano /md2/etc/mdadm/mdadm.conf


For mdadm.conf, strip the dashes and write the UUID as four groups of eight hex digits separated by colons:
/dev/sda3: UUID="443dbe36-1d80-5344-3989-4c73f56e7dd2" becomes "443dbe36:1d805344:39894c73:f56e7dd2"
/dev/sdb4: UUID="c0c5c51e-5956-3116-6fee-ccd193564574" becomes "c0c5c51e:59563116:6feeccd1:93564574"

ARRAY /dev/md/0 metadata=1.2 UUID=5910cbc1:0b5c044c:b286f447:ab78e5d9 name=yourhost:0
ARRAY /dev/md/1 metadata=1.2 UUID=42679326:1f8ab1e6:fa6e6c37:77a5caf8 name=yourhost:1
ARRAY /dev/md/2 metadata=1.2 UUID=443dbe36:1d805344:39894c73:f56e7dd2 name=yourhost:2
ARRAY /dev/md/3 metadata=1.2 UUID=c0c5c51e:59563116:6feeccd1:93564574 name=yourhost:3


Note: "yourhost" needs to be replaced by your hostname…
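A small shortcut, if you prefer: mdadm can print these ARRAY lines for the currently assembled arrays itself. Review the output, remove the old ARRAY entries from the file, and append the new ones (the name= fields will show whatever hostname the rescue system uses):
# mdadm --detail --scan
# mdadm --detail --scan >> /md2/etc/mdadm/mdadm.conf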

Done, time to reboot!

Note: You may have to assemble the new array again after the reboot. Just run this once to make sure it is started:
# mdadm -A --scan

Enjoy :)

References & Resources:
http://www.howtoforge.com/how-to-resize-raid-partitions-shrink-and-grow-software-raid
http://www.evilbox.ro/linux/ubuntu-raid1-resize/
http://mkfblog.blogspot.com/2007/11/resizing-raid1-system-partition.html
http://lavezarez.blogspot.com/2011/01/raid-1-not-starting-and-mounting-on.html
https://help.ubuntu.com/community/Installation/RAID1+LVM
https://raid.wiki.kernel.org/index.php/Partitioning_RAID_/_LVM_on_RAID 




13 Responses to “Resize/Shrink Software RAID1 FileSystem/Volume/Partition and set up LVM on the free disk space created”

  • me

    Do NOT run mdadm --grow /dev/md2 --size=max. Instead, resize to your destination size.

  • Great article, thank you.

    Have you tried
    blockdev --rereadpt /dev/sdb
    blockdev --rereadpt /dev/sda

    after the cfdisk? Might stop that jumping up to the previous max size?

  • Sky

    Great tutorial!
    Would you mind if I translate it to French and post it on my blog? Of course, you’ll be quoted as the original author.

    • Gyro

      Of course you can!

      A link back to this post would be great!

      You can also post the link to your translation once you have done it, so French-speaking people can go to your blog :)

  • Hans

    Hi,

    I am really a newbie using linux. So I want to learn by trying out some stuff. I have a similar server setup as in the example.

    I am trying your guide but I can’t manage to finish it. I am running into some problems.
    I am also having the problem that the RAID array grows back to the original size when I use the grow-to-maximum-size command.
    I tried removing the array and recreating one, but then I got errors about a different physical size and superblock size or something like that.
    I will try again without using the grow to max, but by giving the exact size of the /dev/sda3 partition. Is that OK?

    Another strange thing I ran into was that after resizing the md2 array, the server wouldn’t boot anymore (although the UUID was not changed). When deleting the sda3 partition with cfdisk, it said that there was no partition set to bootable. Could that be the problem? The strange thing is that when I ran cfdisk before changing anything, there was also no partition with a bootable flag.

    I also ran into problems creating the new /dev/sda4 partition. Using cfdisk I could make a new partition, but the partition was not recognized. Looking back, this could be due to the md2 array growing back to its original size…

    What’s the reason for using different sizes for the partition and the array, and afterwards letting the array grow to the size of the partition? Does this have something to do with the sync between the two partitions? Or can I use the same size immediately?

    Kind Regards and thanks for the guide! I hope I can succeed one day.

    • Gyro

      Hello,

      the comment section is not really a good place to discuss this in detail.

      If the file system grows to the original size, then you encountered the same thing that happened to me one time. Look at the orange bordered box near that step, I also provided my solution at the bottom of the post.

      Growing the file system to fit the new partition size avoids having a file system that does not fit inside the resized partition, while guaranteeing that the file system uses the maximum space available. :)

      • Hans

        Is there another way how I can contact you? I suppose you have my mail address from my post.

        I am still having some problems and I wanted to ask you a thing or two.

        Can you send me an e-mail?

        Thanks

  • sterbhai

    helped a lot!

  • eileon

    Personally I ran into trouble with this article and lost my data.

    You do not explain precisely “Delete the sda3 partition and recreate it with the size of 320G”. Personally I used fdisk and used a begin sector different from the original one (error, mistake…?).

    Then you don’t mention the md superblock metadata version. For me the old one was Metadata Version 0.90. Using Knoppix 7.6 made me use Metadata Version 1.2 by default.

    I don’t know why, but the result is malfunctioning…

  • eileon

    For later viewers I will add some pieces which could help.

    I did this resize on /dev/md1 (sda2 and sdb2) and ended up with no /dev/md1 (/), so the server was not booting.

    With Knoppix I recreated md1, which had disappeared:

    mdadm --create --verbose --level=1 --metadata=0.9 /dev/md1 --raid-devices=2 /dev/sda2 /dev/sdb2

    Made /dev/sda2 faulty and removed it.

    With fdisk, deleted and recreated sda2 with the original first sector (type fd).

    Added /dev/sda2.

    Made /dev/sdb2 faulty and removed it.

    With fdisk, deleted and recreated sdb2 with the original first sector (type fd).

    Added /dev/sdb2.

    After a reboot the server boots well and I can find my data back.

  • Cris

    beware of typos!

    • Gyro

      Thanks for the comment and warning.

      They are not typos, WP just auto-formatted the lines. I need to add a code wrapper, so it doesn’t happen anymore.
