1

Topic: Raid 1 shrink home size and grow root

Hello,

When I set up this server I made the mistake of only having 35 GB for root. Now I want to increase it to 100 GB and reduce home by 65 GB. Since I have never modified partitions on a server with data, I would like some advice before I proceed.

I used fsarchiver on a live disk to make a copy of root and a copy of home and those are on a different drive SDc.

When you launch gparted from the live CD, it shows the MDs and the SDs, although the MD numbers are different from the ones gparted shows when you just let the system boot.

(1) The first thing I wanted to try is to reduce the home MD by 65 GB but leave the partition alone.

(2) Then reboot the computer and see what the MD home looks like in gparted. I believe it will show that it is smaller than the physical SD partitions (wasted space at the end of the MD partition).

(3) If step 2 works as I expected then I plan on reducing both SDa2 and SDb2 to the size of the home MD.

(4) Reboot the computer and gparted should show some unused space between home and root.

(5) Then increase the size of both SDa3 and SDb3 by moving the start to the beginning of the unallocated space.

(6) Then try to reboot and hopefully it will start up and gparted will show that the root MD does not fill all of the SDs' size (wasted space at the end of SDa3 and SDb3).

(7) Then use gparted to increase the size of MD root to fill the SD3 partitions.

(8) Reboot and see if everything is working

Is it that simple, or do I need to do something else to shrink and grow the MDs and change the physical partition sizes?

I added the File system names

# parted --list

Model: ATA WDC WD30EZRZ-22Z (scsi)
Disk /dev/sda: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: pmbr_boot

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2097kB  1049kB                     bios_grub
 2      2097kB  2960GB  2960GB  home               raid
 3      2960GB  2998GB  37.6GB  root               raid
 4      2998GB  2999GB  1074MB  boot               raid
 5      2999GB  3001GB  2003MB  swap               raid


Model: ATA WDC WD30EZRZ-22Z (scsi)
Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: pmbr_boot

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2097kB  1049kB               grub  bios_grub
 2      2097kB  2960GB  2960GB  home               raid
 3      2960GB  2998GB  37.6GB  root               raid
 4      2998GB  2999GB  1074MB  boot               raid
 5      2999GB  3001GB  2003MB  swap               raid


Model: ATA ST3750640AS (scsi)
Disk /dev/sdc: 750GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End    Size   Type     File system  Flags
 1      1049kB  750GB  750GB  primary  ext4


Model: Linux Software RAID Array (md)
Disk /dev/md127: 37.5GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  37.5GB  37.5GB  ext4


Model: Linux Software RAID Array (md)
Disk /dev/md125: 2001MB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system     Flags
 1      0.00B  2001MB  2001MB  linux-swap(v1)


Model: Linux Software RAID Array (md)
Disk /dev/md126: 1072MB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  1072MB  1072MB  ext4


Model: Linux Software RAID Array (md)
Disk /dev/md124: 2960GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  2960GB  2960GB  ext4 

Thanks for taking a look at my question,
Craig

2

Re: Raid 1 shrink home size and grow root

Hi Craig,

Glad you have taken a backup copy of your data.

As you appear to know, GParted recognises Linux Software (MD - Multiple Devices) RAID arrays, but can't manipulate (stop/start, resize or move) them.

As an overall plan that sounds okay.  There are a lot of details glossed over, like shrinking the MD RAID array /dev/md124 (home) and its members, or recreating the MD RAID array smaller.  And then, for /dev/md127 (root), moving the array to the start of partitions sd[ab]3 and growing it, or recreating it larger.

Mike

3

Re: Raid 1 shrink home size and grow root

Mike, thanks for the info and reply.

I had read in older posts that GParted could not move or resize MD RAID arrays, but the way they are currently displayed in this newer GParted version made it look possible, which is why I glossed over those steps, hoping that GParted could now do that.

Many years ago I had one drive fail in a Raid 1 array. I found a similar drive and replaced the bad drive. I don't remember if I partitioned it to match the working drive before it synced or if the system just did that.

Here are a couple of options I am thinking of:

(1) ----

What would happen if I took one drive out of my current Raid array and used GParted to change the partition sizes on the removed drive? Then put it back into the array and let it sync up. Would it sync up using the newly sized partitions or would it end up reverting to the original RAID partition sizes?

If it kept the new sizes, I could then take out the remaining original drive and then change the partition sizes and then put it back in and let it sync with the previously modified drive.

(2) ----

Figure out how to use:

resize2fs
mdadm

to resize the file systems and the arrays, and then use GParted to resize the partitions.

Then move the starting position of the root partition.

(3) ---

Just start over and create the partitions the way I want, and then restore my backups from fsarchiver into the new partitions.

My question with that is: would that method work, or would some RAID configuration files that get overwritten in the restore, like mdadm.conf, need to be replaced with versions that describe the newly sized partition scheme?

Thanks again,
Craig

PS I couldn't think of a good way to reference both sda3 and sdb3 but the way you used sd[ab]3 like a regexp set is very clear.

4

Re: Raid 1 shrink home size and grow root

I suggest shrinking the home array in place, and then growing the root array by dropping one mirror, recreating its partition and re-mirroring, repeating for the second mirror.


Make sure you understand what this does before you start.  Ask questions if you aren't sure.  Double check my calculations.


1. Boot from GParted Live CD or other rescue CD.

2. Use GParted to shrink file system in /dev/md124 (home) by 66560 MiB (65*1024).
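
A command-line equivalent of this step is sketched below in case you want to do it outside GParted; this assumes the file system is ext4 and unmounted, and $new_fs_mib is a placeholder (not part of the recipe above) for the current file system size in MiB minus 66560, which you must work out yourself (e.g. from 'dumpe2fs -h /dev/md124').

# Check first (resize2fs insists on a clean file system before a shrink),
# then shrink to the new size in MiB.
e2fsck -f /dev/md124
resize2fs /dev/md124 ${new_fs_mib}M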

3. Shrink the /dev/md124 (home) array by 68157440 KiB (65*1024*1024).

# Report array details, especially Array Size and Used Dev Size.
mdadm -D /dev/md124
# Calculate new Array Size by subtracting 68157440 KiB.  Replace dummy
# figure of 123456789.  Keep the 'K' for KiB.
mdadm -G --array-size=123456789K /dev/md124
mdadm -G --size=123456789K /dev/md124
mdadm -D /dev/md124
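
A worked example of the size calculation, using a made-up Array Size (take the real figure from the mdadm -D output):

# Hypothetical figure only: if mdadm -D reported "Array Size : 2890633216"
# (KiB), the new size would be 2890633216 - 68157440 = 2822475776, so:
mdadm -G --array-size=2822475776K /dev/md124
mdadm -G --size=2822475776K /dev/md124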

4. Shrink sda2 partition by 136314880 512-byte logical sectors (65*1024*1024*2).

# Report partition size.
sgdisk -p /dev/sda
# Delete partition 2, and re-create with the same start and a new end
# sector, 136314880 less.  Replace $start and $new_end with relevant
# figures.
sgdisk --delete 2 /dev/sda
sgdisk --new 2:$start:$new_end /dev/sda
partprobe /dev/sda
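
A worked example with made-up sector numbers (read the real Start and End of partition 2 from the sgdisk -p output first):

# Hypothetical figures only: if partition 2 ran from sector 4096 to
# 5781249023, the new end would be 5781249023 - 136314880 = 5644934143.
sgdisk --delete 2 /dev/sda
sgdisk --new 2:4096:5644934143 /dev/sda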

5. Shrink sdb2 partition by 136314880 512-byte logical sectors
   (65*1024*1024*2).

# Report partition size.
sgdisk -p /dev/sdb
# Delete partition 2, and re-create with the same start and a new end
# sector, 136314880 less.  Replace $start and $new_end with relevant
# figures.
sgdisk --delete 2 /dev/sdb
sgdisk --new 2:$start:$new_end /dev/sdb
partprobe /dev/sdb

6. Remove sda3 from /dev/md127 (root) array.

mdadm --manage /dev/md127 --fail /dev/sda3
mdadm --manage /dev/md127 --remove /dev/sda3
mdadm -D /dev/md127
# Clear superblock of the removed member.
mdadm --zero-superblock /dev/sda3
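
Before editing the partition table it may be worth confirming the root array is now running degraded on sdb3 alone; this is just a read-only check, not part of the recipe.

cat /proc/mdstat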

7. Grow sda3 partition by 136314880 512-byte logical sectors
   (65*1024*1024*2) to the left.

# Report partition size.
sgdisk -p /dev/sda
# Delete partition 3 and re-create with a new start, 136314880 less, and
# the same end.  Replace $new_start and $end with relevant figures.
sgdisk --delete 3 /dev/sda
sgdisk --new 3:$new_start:$end /dev/sda
partprobe /dev/sda
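
The same kind of worked example for this step, again with made-up sector numbers; here the start moves down by 136314880 and the end stays put.

# Hypothetical figures only: if partition 3 ran from sector 5781249024 to
# 5854599167, the new start would be 5781249024 - 136314880 = 5644934144.
sgdisk --delete 3 /dev/sda
sgdisk --new 3:5644934144:5854599167 /dev/sda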

8. Add new larger sda3 member to /dev/md127 (root) array and re-mirror.

mdadm --manage /dev/md127 --add /dev/sda3
# Watch re-mirroring progress.
watch cat /proc/mdstat

9. Remove sdb3 from /dev/md127 (root) array.

mdadm --manage /dev/md127 --fail /dev/sdb3
mdadm --manage /dev/md127 --remove /dev/sdb3
mdadm -D /dev/md127
# Clear superblock of the removed member.
mdadm --zero-superblock /dev/sdb3

10. Grow sdb3 partition by 136314880 512-byte logical sectors
    (65*1024*1024*2) to the left.

# Report partition size.
sgdisk -p /dev/sdb
# Delete partition 3 and re-create with a new start, 136314880 less, and
# the same end.  Replace $new_start and $end with relevant figures.
sgdisk --delete 3 /dev/sdb
sgdisk --new 3:$new_start:$end /dev/sdb
partprobe /dev/sdb

11. Add new larger sdb3 member to /dev/md127 (root) array and re-mirror.

mdadm --manage /dev/md127 --add /dev/sdb3
# Watch re-mirroring progress.
watch cat /proc/mdstat

12. Grow the /dev/md127 (root) array to maximum.

mdadm -G --size=max /dev/md127
mdadm -D /dev/md127

13. Use GParted to check the file system in /dev/md127 (root).
    This will also grow the file system to fill the array.
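
If you would rather finish this last step from the command line as well, a small sketch, assuming ext4 (resize2fs with no size argument grows the file system to fill the device):

e2fsck -f /dev/md127
resize2fs /dev/md127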

5 (edited by SgiarC 2020-04-29 17:43:45)

Re: Raid 1 shrink home size and grow root

mfleetwo,

Thanks for taking the time to write such a clearly explained step by step detailed explanation of the process. It appears you have modified many RAID arrays and their partitions.

Craig

6

Re: Raid 1 shrink home size and grow root

@ mfleetwo,

(1) It seems that shrinking "home" keeps the data intact. Is that correct?

(2) GParted can change the file system because the starting point is not being moved?

(3) You shrink the md(home) by (65*1024*1024)

(4) and (5) You shrink the sd[ab]2 partitions by (65*1024*1024*2). Why is this multiplied by 2?

Just to understand up to step 5. Here is what I think is happening.

Normally I think of partitions as being empty, and then you fill them with data. In this case, since the data is already on the disk in a set location, you use GParted to make the md 65 GB smaller. Then you delete the sd[ab]2 partitions and redefine their start and end positions around the existing data. Does partprobe make this permanent, or is that just so you can check how it's set up?

After step 5 you end up with 65GB of unallocated space between sd[ab]2 and sd[ab]3?

Thanks,
Craig

7

Re: Raid 1 shrink home size and grow root

Your storage stack, in order, is:
1. File systems (stored in MD RAID arrays)
2. MD RAID arrays (stored in partitions)
3. Partitions (located on the drives)

Yes steps 1 to 5 are shrinking the home file system, MD RAID array and partitions by 65 GiB in that order, keeping the starting location the same.
* GParted works in MiB, hence 65*1024
* mdadm works in KiB, hence 65*1024*1024
* Your drives work in 512-byte logical sectors, hence 65*1024*1024*2
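
Just to make those conversions concrete, the same numbers as shell arithmetic:

# 65 GiB expressed in each tool's units:
echo $((65*1024))         # 66560 MiB for GParted
echo $((65*1024*1024))    # 68157440 KiB for mdadm
echo $((65*1024*1024*2))  # 136314880 512-byte sectors for sgdisk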

For step 2, run GParted like this: gparted /dev/md124.

All file systems store their superblock at a fixed offset from the start of the block device (partition, MD RAID array, whole drive), followed by file system metadata such as free space maps and then directory and file data.  Everything gets allocated from the start working upwards.  So file systems which can be resized are resized at the end, keeping the start fixed.  GParted is just using the file system specific command to resize the file system.  E.g. for EXT2/3/4 it uses resize2fs.

When GParted "resizes" the start of a file system and partition it has to copy the whole of the file system to the new starting sector of the partition on the drive.  This is much slower than resizing the end of the file system.

sgdisk is the tool which is changing the GPT partitions on the drives.  partprobe is informing the kernel of the change.

Yes after step 5 there will be 65 GiB of unallocated space between partitions 2 and 3 on drives sda and sdb.

8

Re: Raid 1 shrink home size and grow root

mfleetwo,

Thanks for the explanation about the size calculation differences between the different tools, that now makes sense.

Steps 1-5 would be fairly quick, but steps 6-9 would be somewhat slow because it has to move all of the data to a new starting point?

# df -h

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        1.8G     0  1.8G   0% /dev
tmpfs           1.8G   69M  1.8G   4% /dev/shm
tmpfs           1.8G  1.7M  1.8G   1% /run
/dev/md127       35G   17G   16G  53% /
tmpfs           1.8G  1.5M  1.8G   1% /tmp
tmpfs           256M     0  256M   0% /var/mysqltemp
/dev/md126      990M  263M  660M  29% /boot
/dev/md124      2.7T  178G  2.4T   7% /home
tmpfs           368M   25M  344M   7% /run/user/1000
tmpfs           368M     0  368M   0% /run/user/1002 

How long do you think it would take to move the root that currently takes up 17G?

Most likely I will take another server that I was about to retire and experiment on that one before I modify my existing server. I also plan on doing this in two steps.

(1) Reduce the size of home.
Then reboot and make sure it worked correctly.

(2) Move root and grow the space and see if that works correctly.

If I can get those two steps to work correctly on my test server, then I'll repeat that process on my current server.

I still need to review steps 6-9 some more, after that I might have a few questions about those steps.

Thanks again,
Craig

9

Re: Raid 1 shrink home size and grow root

Of steps 1-5, shrinking home, step 2 (shrink the file system in /dev/md124 using GParted) will take the longest.  GParted will perform a file system check and then shrink the file system.  Not really sure how long the FSCK will take.  With only 178G of files in home I guess 5 to 15 minutes.  The home file system shrink will be almost immediate.  (Shrinking a file system has to relocate all data above the new smaller target size to below it, but as your file system only contains 178G in 2.7T it will never have written any data in the last 65G it is being shrunk by.)

All the other commands in steps 1-5 are just updating a bit of metadata and will be almost immediate.


Growing root is being done by re-mirroring the /dev/md127 RAID array twice, in steps 8 and 11, once to relocate each of the mirrored members sd[ab]3.  As the re-mirroring is being done at the RAID array level, it will be re-mirroring 35G (the size of the root file system, not the 17G of files it contains).  I expect each re-mirroring will take 5 to 10 minutes.

Step 13 then uses GParted to check and grow the /dev/md127 file system.  Again this will do a FSCK before growing the file system.  I guess a FSCK of 17G of files will take less than 5 minutes.  Growing the file system will be almost immediate.

All the other commands in steps 6-12 are just updating a bit of metadata and will be almost immediate.

10

Re: Raid 1 shrink home size and grow root

mfleetwo,

Thanks for your time estimates, overall it seems like it won't take that long. I also liked your explanation about how, for root, it will mirror the full 35G, not just the 17G of files.

Since this is a learning experience for me, I want to experiment on a server that does not matter if things go haywire.

Previously I used fsarchiver to backup "home" and "root". Here is what I want to try first on my "testing" server.

(1) install a new operating system with partitions and a raid 1 that's somewhat similar to my real server's configuration, but since the drives on that computer are smaller, the partitions will be different.

(2) replace "home" using the fsarchiver "home" files.

(3) replace "root" using the fsarchiver "root" files (see the restore sketch just below).
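
A minimal sketch of what steps (2) and (3) might look like with fsarchiver, assuming archive files named home.fsa and root.fsa on the backup drive; the file names, mount point and md device numbers are placeholders, not details from this thread.

# Restore the first (id=0) file system in each archive onto the testing
# server's MD arrays; adjust the device names to match that machine.
mkdir -p /mnt/backup
mount /dev/sdc1 /mnt/backup
fsarchiver restfs /mnt/backup/home.fsa id=0,dest=/dev/md124
fsarchiver restfs /mnt/backup/root.fsa id=0,dest=/dev/md127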

The thing that somewhat stumps me is that the archived "root" files will contain the information for the raid 1 array from my "real" server. If I simply replace all the root files, I don't think it will work correctly because the partitions on my "testing" server will be slightly different. Are there files I should not overwrite in root that define the raid 1 array?

The only file I can think of is:

/etc/mdadm.conf

After I restore the archived files from my real server onto "testing", it seems like at the very least I should copy the "testing" server's original mdadm.conf file back into place. Is that the only file that controls the configuration of the raid 1 array? Once I copy that file back, will it simply reboot without a problem?

If it does boot up then I'll start following your directions (steps 1 - 13) and make sure I can do it on my "testing" server without any issues.

Thank you again,
Craig

11

Re: Raid 1 shrink home size and grow root

After installing the OS on your testing server with RAID it will be a working OS like your production server.  There is no benefit in restoring the fsarchive of root from your production server over the top.  That will just replace correctly configured files with the wrong ones and stop it booting.  Incorrect settings will include:
* (UUIDs of) RAID arrays in:
  /etc/mdadm.conf
  /boot/grub2/grub.cfg
  /boot/initramfs*.img
* (UUIDs of) file systems in:
  /etc/fstab
  /boot/grub2/grub.cfg
* Anything else that is different between your production server and testing server, such as the network card MAC address.


You can restore the fsarchive of home from your production server to your testing server.  I am not familiar with fsarchiver so if it restores the whole file system including its UUID then you will need to update /etc/fstab afterwards to match.
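
A small sketch of that check, with hypothetical device names (the md number for home may differ on the testing server):

# Compare the restored file system's UUID with the one /etc/fstab expects.
blkid /dev/md124
grep home /etc/fstab
# If they differ, either edit the /home line in /etc/fstab to use the UUID
# reported by blkid, or set the file system's UUID back (ext4 only):
#   tune2fs -U <uuid-expected-by-fstab> /dev/md124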

12

Re: Raid 1 shrink home size and grow root

mfleetwo,

Just in case something went amiss, I wanted to set up another server for production and then get back to this server and change the size of the partitions.

I followed your steps and I was able to resize the partitions, although the home directory lost some data and the md for that partition was no longer there. Perhaps I missed a step.

To fix that, I created another md for home and added one of the main users to /home. Then I was able to reboot and everything worked correctly, with the exception of missing data from home, but that's no big deal because I could just put that data back in.

Since everything now worked on that Fedora 31 version with the new sizes, I decided to see how the upgrade from F31 to F32 went. I did that, and after it finished it would not boot. So I'll just start from scratch.

Thanks for your help, it was a good learning experience.
Craig