Available Space Count With Numerous Little Files

Apr 08, 2007 | 3 minutes read

Tags: UFS, Df, Du

Using a big bundled software suite such as the IBM WebSphere Java application server can sometimes lead to confusion when determining the space currently available for use.

In fact, we hit a particular case where du(1) and df(1m) reported that some space was not in use but, when trying to allocate it, we simply couldn't. Here is a description of this curious behavior:

  • Solaris 9 (Generic_112233-11) using SVM with soft partition.
  • The information from df(1m) is misleading, as we can't use the reported free space: about 1GB of 8GB appears free... but is not usable.
  • du(1) and df(1m) agree with each other, and their results are very similar (note: minfree is set to 1%).

Some notes now:

  • File system is consistent (passes fsck(1m) happily).
  • There is no file descriptor currently open on this file system.
  • No data were stored under the directory on which the problematic file system is mounted.
  • Tested with and without disk quota, and with and without logging options.
  • The file system seems very fragmented; see below.

Here we will provide a test case showing the differences between du(1), df(1m), and reality. First, create and configure the test file system and populate it with appropriate (problematic) data:

# metainit d107 -p d7 100m
# newfs /dev/md/rdsk/d107
# grep d107 /etc/vfstab
/dev/md/dsk/d107 /dev/md/rdsk/d107 /t/data/WebSphere ufs 1 yes logging
# mount /t/data/WebSphere
# tunefs -m 1 /t/data/WebSphere
# cd /t/data/WebSphere
# gzip -dc /tmp/testcasedata.tar.gz | tar xf -
# rm testcasedata.tar.gz && lockfs -af

Now, we can observe what the common UNIX utilities report, and calculate the exact available space with the help of the fstyp(1m) command:

# df -k /t/data/WebSphere
Filesystem        kbytes    used   avail capacity  Mounted on
/dev/md/dsk/d107   95207   28416   65839    31%    /t/data/WebSphere
# du -sk /t/data/WebSphere
27384   /t/data/WebSphere
# fstyp -v /dev/md/dsk/d107 | head -15
magic   11954   format  dynamic time    Fri Jan 14 14:47:00 2005
sblkno  16      cblkno  24      iblkno  32      dblkno  2408
sbsize  2048    cgsize  8192    cgoffset 216    cgmask  0xffffffe0
ncg     3       size    102400  blocks  95207
bsize   8192    shift   13      mask    0xffffe000
fsize   1024    shift   10      mask    0xfffffc00
frag    8       shift   3       fsbtodb 1
minfree 1%      maxbpg  2048    optim   time
maxcontig 16    rotdelay 0ms    rps     167
csaddr  2408    cssize  1024    shift   9       mask    0xfffffe00
ntrak   24      nsect   424     spc     10176   ncyl    21
cpg     8       bpg     5088    fpg     40704   ipg     19008
nindir  2048    inopb   64      nspf    2
nbfree  5024    ndir    151     nifree  50438   nffree  26599
# echo "(26599*100)/95207" | bc
27
# echo "8*5024" | bc
40192

Well. Now, we can say:

  1. The fragmentation ratio for this file system is pretty high: 27% (26599 free 1KB fragments out of 95207 total).
  2. Although it seems that only ~40MB (5024 free blocks x 8KB) are really available for block allocation, df(1m) reports ~65MB. The overestimation is about 25MB in this (not very high volume) test case! Wow...
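The numbers above can be cross-checked with plain POSIX shell arithmetic. One consistent reading of the fstyp(1m) output is that df's "avail" counts every free fragment (loose or inside a full block) minus the minfree reserve; the values below are copied from the fstyp -v output shown earlier:

```shell
# Values taken from the fstyp -v output above.
size=95207      # file system size, in 1KB fragments (df's "kbytes" column)
minfree=1       # reserved percentage
frag=8          # fragments per 8KB block
nbfree=5024     # free full blocks
nffree=26599    # free fragments not part of a full block

# df's "avail" = all free fragments minus the minfree reserve.
avail=$(( nffree + frag * nbfree - size * minfree / 100 ))
echo "df avail        : ${avail} KB"            # matches the 65839 reported by df -k
echo "full-block free : $(( frag * nbfree )) KB" # ~40MB usable for block allocation
echo "loose fragments : ${nffree} KB"            # ~26MB locked in sub-block pieces
```

The exact match with df's 65839 KB suggests that the "missing" space is precisely the free space trapped in loose fragments.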

The bad news is that the problem is due to a high number of very small files shipped with the third-party software from IBM, corresponding to locale files. These files are <1KB each. And because this is a third-party component, we can't do anything about that. In fact, the size of a single file system block is 8192 bytes, at least on the sun4u processor architecture (see the mkfs_ufs(1m) documentation for more details).
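To see why tiny files hurt, note that UFS allocates a file's data in whole fragments (fsize, here 1024 bytes), so every sub-1KB file still consumes a full 1KB fragment. A back-of-the-envelope sketch (the file count and average size below are hypothetical, not measured on the real system):

```shell
fsize=1024   # UFS fragment size, from the fstyp output above
n=20000      # hypothetical number of tiny locale files
avg=600      # hypothetical average file size, in bytes

# Each file's data rounds up to a whole number of fragments.
frags=$(( (avg + fsize - 1) / fsize ))
echo "payload : $(( n * avg / 1024 )) KB"            # actual data
echo "on disk : $(( n * frags * fsize / 1024 )) KB"  # fragments consumed
```

Worse, once such files are deleted and re-created all over the disk, the free space ends up scattered in single fragments that can no longer be coalesced into the full 8KB blocks that larger allocations need.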

The good news is that the problem may be worked around by changing the file system's optimization tunable from time to space (please refer to the tunefs(1m) manual page for more information). The small downside is that the data must be rewritten in order to benefit from this modification, for example using a ufsdump(1m) and ufsrestore(1m) cycle.
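As a sketch, such a cycle could look like the following on the test file system above. This is untested and Solaris-specific; the dump file location is an arbitrary choice, so adapt the device names and dump target to your own setup before running anything like it:

```shell
# Untested sketch -- device names follow the test case above.
umount /t/data/WebSphere
ufsdump 0f /var/tmp/d107.dump /dev/md/rdsk/d107   # save the data
newfs /dev/md/rdsk/d107                           # recreate the file system
tunefs -o space /dev/md/rdsk/d107                 # optimize for space, not time
mount /t/data/WebSphere
cd /t/data/WebSphere && ufsrestore rf /var/tmp/d107.dump   # rewrite the data
```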