1 (edited by nbathum 2018-10-11 18:04:41)

Topic: GParted sees partition space differently than df. low inodes, no LVM


My root partition has run out of inodes. Since it is a 500 GB disk and the Disk free tool was showing ~30 GB in use, my plan was to shrink the root partition, create a second ext4 partition (tuned to have more inodes), and move some paths onto that.
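The inode density of an ext4 file system is fixed when it is created, so that tuning would have to happen at mkfs time, via mkfs.ext4's -i (bytes per inode) or -N (total inode count) options. A sketch of what I had in mind, run here against a throwaway loopback image rather than a real partition (the image size and the -i value are only illustrative):

```shell
# Create a 64 MiB scratch image so no real device is touched
truncate -s 64M /tmp/inode-test.img

# One inode per 4096 bytes of space -- much denser than the defaults.
# -F is needed because the target is a plain file, not a block device.
mkfs.ext4 -F -q -i 4096 /tmp/inode-test.img

# Confirm the resulting inode count
dumpe2fs -h /tmp/inode-test.img 2>/dev/null | grep -i 'inode count'
```

With -i 4096 on a 64 MiB image, the inode count comes out near 64 MiB / 4096 = 16384; the same flag (or an explicit -N) on a real second partition would give it a far higher inode density.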

My system contains a 500GB NVMe device with two plain partitions: a FAT32 boot partition and a single ext4 system partition.

When I booted into gparted live, it showed the root partition as mostly full, which confused me greatly, and is why I'm posting here.

This is strangely similar to a recent post (id=17776, but I can only include 1 link apparently) but altogether different, because this partition on my computer doesn't use LVM. It is a plain ext4 partition.

Since that post has a lot of good guidance on what info to post, I have included that below.

1. Screen shot of the disk.  From gparted-live-0.32.0. The partition in question is the system root '/' partition: /dev/nvme0n1p2


2. Partition layout.

sudo parted /dev/sda print
sudo fdisk -l /dev/sda
sudo gdisk -l /dev/sda
sudo lsblk -o name,maj:min,rm,size,ro,type,fstype,label,mountpoint

$ sudo parted /dev/nvme0n1p2 print
Model: Unknown (unknown)
Disk /dev/nvme0n1p2: 510GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags: 

Number  Start  End    Size   File system  Flags
 1      0.00B  510GB  510GB  ext4

$ sudo fdisk -l /dev/nvme0n1p2
Disk /dev/nvme0n1p2: 475 GiB, 509961641472 bytes, 996018831 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

$ sudo gdisk -l /dev/nvme0n1
GPT fdisk (gdisk) version 1.0.3

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/nvme0n1: 1000215216 sectors, 476.9 GiB
Model: SAMSUNG MZVLW512HMJP-000H1              
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 50042058-E9CE-4BB2-8425-82007E798650
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 1000215182
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         4196351   2.0 GiB     EF00  EFI System
   2         4196352      1000215182   474.9 GiB   8300  Linux filesystem

$ sudo lsblk -o name,maj:min,rm,size,ro,type,fstype,label,mountpoint
sda           8:0    1 29.9G  0 disk                    
└─sda1        8:1    1 29.9G  0 part vfat   GPARTED-LIV 
sr0          11:0    1 1024M  0 rom                     
nvme0n1     259:0    0  477G  0 disk                    
├─nvme0n1p1 259:1    0    2G  0 part vfat   BOOT        /boot
└─nvme0n1p2 259:2    0  475G  0 part ext4   root        /

3. Mounted file system usage.

$ df -k
Filesystem     1K-blocks     Used Available Use% Mounted on
devtmpfs          815468        0    815468   0% /dev
tmpfs            8154680        0   8154680   0% /dev/shm
tmpfs            4077340     6124   4071216   1% /run
tmpfs            8154676      300   8154376   1% /run/wrappers
/dev/nvme0n1p2 496804444 29503132 442384460   7% /
tmpfs            8154676        0   8154676   0% /sys/fs/cgroup
/dev/nvme0n1p1   2093048    26000   2067048   2% /boot
tmpfs            1630932        4   1630928   1% /run/user/1000

4. File system minimum size and super block dump.  (This assumes the file system is ext4, ext3 or ext2.  Replace /dev/sda2 with the actual device name).

$ sudo resize2fs -P /dev/nvme0n1p2
resize2fs 1.44.1 (24-Mar-2018)
Estimated minimum size of the filesystem: 123929873


$ sudo dumpe2fs -h /dev/nvme0n1p2
dumpe2fs 1.44.1 (24-Mar-2018)
Filesystem volume name:   root
Last mounted on:          /
Filesystem UUID:          2e526915-15e1-48d1-ac01-33e57aa2d886
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              486400
Block count:              124502353
Reserved block count:     6225117
Free blocks:              116829330
Free inodes:              2195
First block:              0
Block size:               4096
Fragment size:            4096
Group descriptor size:    64
Reserved GDT blocks:      1024
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         128
Inode blocks per group:   8
Flex block group size:    16
Filesystem created:       Wed May  9 16:55:18 2018
Last mount time:          Wed Oct 10 17:44:48 2018
Last write time:          Wed Oct 10 17:44:48 2018
Mount count:              1
Maximum mount count:      -1
Last checked:             Wed Oct 10 17:42:32 2018
Check interval:           0 (<none>)
Lifetime writes:          366 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:              256
Required extra isize:     32
Desired extra isize:      32
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      b84be108-517b-44ad-9876-3ef569ae3f6f
Journal backup:           inode blocks
Checksum type:            crc32c
Checksum:                 0x40b2f620
Journal features:         journal_incompat_revoke journal_64bit journal_checksum_v3
Journal size:             1024M
Journal length:           262144
Journal sequence:         0x0018f859
Journal start:            1
Journal checksum type:    crc32c
Journal checksum:         0x19202b52

Hrmm, that resize2fs output is interesting. Hopefully someone with more knowledge than I have knows what to make of it.

I'll include an extra invocation of df to show the inode count in another way, and to serve as a solemn reminder to myself about what I've done.

$ df -i /dev/nvme0n1p2
Filesystem      Inodes  IUsed IFree IUse% Mounted on
/dev/...        486400 484221  2179  100% /

Is there some safety percentage of reserved free inodes, or something similar, causing an issue?

Thanks for any help.



Re: GParted sees partition space differently than df. low inodes, no LVM

Excellent report.  You have provided all the information I need.

resize2fs reports that the minimum size the file system can be shrunk to is 123929873, measured in file system blocks. dumpe2fs reports the file system "Block size" is 4096 bytes. Multiplying these out, the minimum size the file system can be shrunk to is 123929873 * 4096 bytes ~= 472.75 GiB. As GParted is a file system resizing tool, it has to use this figure as the "Used" figure to prevent trying to shrink a file system smaller than its minimum.
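The multiplication can be checked with shell arithmetic, using the figures from the resize2fs and dumpe2fs output above:

```shell
min_blocks=123929873   # resize2fs estimated minimum, in blocks
block_size=4096        # dumpe2fs "Block size"

min_bytes=$((min_blocks * block_size))
echo "$min_bytes bytes"                             # 507616759808
echo "$((min_bytes / 1073741824)) GiB, rounded down"  # 472 GiB
```

507616759808 / 2^30 is about 472.75 GiB, which is why GParted shows the partition as nearly full.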

EXT file systems store inodes at fixed locations across the file system. So because your inodes are almost 100% in use, your file system can't be shrunk: the inodes at the end of the partition are in use, even though there is over 400 GiB of free space for storing file data.

The fact that your file system has so few inodes is unusual.  I created a test ext4 file system on a 474 GiB partition and got 31,064,064 available inodes, whereas your file system only has 486,400 available inodes.
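As a rough cross-check on why the inode count is so low, the implied bytes-per-inode ratio can be computed from the dumpe2fs figures. It comes out near 1 MiB per inode, which happens to match the "largefile" usage type in a stock /etc/mke2fs.conf (inode_ratio 1048576) -- so one guess, and it is only a guess, is that the file system was created with something like mkfs.ext4 -T largefile:

```shell
block_count=124502353   # dumpe2fs "Block count"
block_size=4096         # dumpe2fs "Block size"
inode_count=486400      # dumpe2fs "Inode count"

ratio=$((block_count * block_size / inode_count))
echo "$ratio bytes per inode"    # about 1 MiB per inode
```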

I recommend doing a file level backup, recreating the file system, and restoring the backup.  You will then probably also have to fix booting.
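In outline, the steps could look like the following, run from live media. This is only a sketch: the device names come from the lsblk output in this thread, the backup path is a placeholder, and the commands are wrapped in a function so that nothing runs until it is invoked deliberately, after each step has been reviewed.

```shell
# Sketch only: review every step before running. Wrapped in a
# function so that sourcing this file executes nothing.
rebuild_root() {
    # 1. File-level backup of the old root file system
    mkdir -p /mnt/oldroot
    mount -o ro /dev/nvme0n1p2 /mnt/oldroot
    tar --xattrs --numeric-owner -cpf /backup/root.tar -C /mnt/oldroot .
    umount /mnt/oldroot

    # 2. Recreate the file system; the mke2fs defaults would already
    #    give tens of millions of inodes on a partition this size
    mkfs.ext4 -L root /dev/nvme0n1p2

    # 3. Restore the backup
    mount /dev/nvme0n1p2 /mnt/oldroot
    tar --xattrs --numeric-owner -xpf /backup/root.tar -C /mnt/oldroot
    umount /mnt/oldroot

    # 4. The new file system has a new UUID, so /etc/fstab and the
    #    boot loader configuration will need updating (or the boot
    #    loader reinstalling from a chroot) before rebooting
    blkid /dev/nvme0n1p2
}
```

Restoring extended attributes and numeric ownership matters for a root file system, which is what the GNU tar --xattrs and --numeric-owner flags above are for.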