SUMMARY: large phantom files

From: Beck, Joseph <>
Date: Wed Apr 04 2007 - 09:47:49 EDT
Thanks to all for their quick replies...

lsof and "find /proc -links 0" will show you files that were deleted but
still have a process using them.


Run "lsof +L1" and look for large entries in the SIZE column.

Restart/kill the corresponding process.

The +L1 option lists open files with a link count of less than one (i.e. zero): files that have been unlinked but are still held open by some process.
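A minimal sketch of that hunt (assuming lsof is installed; the SIZE/OFF column position varies between lsof builds, so the sort field here is an assumption you may need to adjust):

```shell
# List open-but-unlinked files, roughly largest first.
# Column 7 is SIZE/OFF in common lsof output formats.
lsof +L1 | sort -k7 -n -r | head -20
```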


lsof is your best friend in these cases.  lsof is available as a package for many releases of Solaris.

Something like

   lsof | grep /tmp | grep -v /tmp/ | grep VREG

will list unnamed regular files open in /tmp and the processes that have
them open.  Any or all of those are possible candidates.


Ric Anderson


It's been a while since I've used this strategy, but I've had success in
the past by running:

fsck -n

on the file system in question.  Any "phantom" files should show up as
disconnected inodes.  If you see any *large* disconnected files, you can
use "pfiles" on all suspect processes to find which process has the file open.
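One way to run that pfiles sweep over every process (a Solaris-only sketch; the "9 or more digits" size pattern is just an arbitrary threshold for "large", not anything from the original post):

```shell
# Dump open files for each process and flag any with a very large size field.
for dir in /proc/[0-9]*; do
  pid=${dir#/proc/}
  pfiles "$pid" 2>/dev/null | grep -q 'size: [0-9]\{9,\}' \
    && echo "suspect pid: $pid"
done
```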

Steve Ehrhardt


I may have spoken too quickly.  I checked on this approach just after I
sent you my last email, and found that it would not work.

As I said, I haven't used the "fsck" approach recently.  It appears that
"fsck -n" chokes on the question:



You could still use this basic approach, but you would either have to run it
manually, answering "no" to all the questions, or else try using something
like "expect" to selectively answer "yes" to just this one question.

Sorry for not checking before answering.

Steve Ehrhardt


Did you use fuser(1M)?
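fuser(1M) lists the PIDs that have files open on a given file system; a sketch (the /var mount point is just an example):

```shell
# -c: report processes using any file on the file system containing /var.
# PIDs go to stdout, per-PID usage codes (o, c, r, ...) go to stderr.
fuser -c /var
```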


I've never fully investigated disk I/O by process myself, but the following
whitepaper may be of interest:




I have had some success in the past with running "fsck -N" against the
suspect filesystem. If fsck reports an unallocated inode I would then
use lsof to find a pid associated with that inode.




You have to kill the process that created the files that are filling up
your disk; what's happening is that a process still has an inode open,
so the space never becomes available.  E.g., if /var/adm/messages fills up
your disk, you have to HUP syslogd to get the space back.

run lsof to track it down...
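The effect described above is easy to reproduce with plain shell (the file name below is just an example): the name disappears at rm, but the blocks stay allocated until the last open descriptor closes.

```shell
exec 3> /tmp/demo.$$      # open fd 3 on a new file
echo "some data" >&3      # the write lands on disk
rm /tmp/demo.$$           # unlink: du no longer sees it, df still does
ls /tmp/demo.$$ 2>/dev/null || echo "name is gone, space still held"
exec 3>&-                 # close the fd; only now is the space freed
```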



-----Original Message-----

From: Beck, Joseph []

Sent: Thursday, March 29, 2007 5:30 PM


Subject: large phantom files

We've had 2 incidents this week where file systems were 100% or quite full.

In both scenarios du was reporting little to no utilization. So, I
figured it was the old issue where I just needed to identify the
application that still had a file descriptor open on some really large
file and shut the app down to clear the space. Unfortunately, that
didn't prove to be the case.

The first incident was /tmp on an apache server. After much hunting
around, the conclusion was that the application (a weblogic agent) was
writing something to /tmp and someone deleted the file. I suppose the
possibilities are limitless. In the end, after stopping & starting
several of the 30+ apache instances, we ended up rebooting.

Fortunately it was a dev environment, but I still took enough heat.

The 2nd incident was a less critical orca server: /var was showing 98% full
but only had 25MB worth of files out there.

The orca server was sol10, but it was the end of the day, so we went with
the easy fix & rebooted.
I'm looking for other strategies, tools, whatever, to be able to locate
the offending process in this scenario. On the web server I exhausted a
lot of time looking at /proc, the pfiles command, lsof, etc.

This was Solaris 9.

Also curious if dtrace could've helped in any way.
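On Solaris 10, a DTrace one-liner could at least have shown who was doing the writing (a sketch, not something from the replies; it aggregates write(2) byte counts by process name and path):

```shell
# Sum bytes passed to write(2), keyed by process name and file path;
# the aggregation is printed when you hit Ctrl-C.
dtrace -n 'syscall::write:entry { @[execname, fds[arg0].fi_pathname] = sum(arg2); }'
```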

Joe Beck Ciber Inc. - a consultant to SEI  One Freedom Valley Drive/ 100
Cider Mill Road| Oaks, PA 19456 | p: 610.676.2258 |
sunmanagers mailing list
Received on Wed Apr 4 09:48:33 2007

This archive was generated by hypermail 2.1.8 : Thu Mar 03 2016 - 06:44:05 EST