SUMMARY: Limiting the number of Superblock duplicates on newfs of huge filesystem

From: Chris Ruhnke <>
Date: Thu Sep 15 2005 - 08:09:13 EDT
Thanks to Jeff Woolsey, Dave Mitchell, David Markowitz, and Dan Stromburg 
for responding.

Comments centered on reducing the number of cylinder groups with the -c 
option and the number of inodes with the -i option.  Unfortunately, newfs 
had already pushed both options to their limits, so I had nowhere to go.
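For a rough sense of why -c matters: UFS keeps one backup superblock per 
cylinder group, so raising cylinders-per-group shrinks the count of 
duplicates. A back-of-the-envelope sketch (the cylinder counts and the 
device name in the comment are illustrative, not from my actual RAID 
geometry):

```shell
# One backup superblock per cylinder group, so duplicates scale with
# total_cylinders / cylinders_per_group.  Numbers here are made up.
total_cylinders=60000
cyl_per_group=16          # a common newfs default
echo "groups at default:  $(( (total_cylinders + cyl_per_group - 1) / cyl_per_group ))"
cyl_per_group=256         # pushed up with -c 256
echo "groups with -c 256: $(( (total_cylinders + cyl_per_group - 1) / cyl_per_group ))"
# The corresponding newfs invocation would look something like
# (placeholder device, not a real slice):
#   newfs -c 256 -i 1048576 /dev/rdsk/cXtYdZsN
```

In my case newfs rejected anything beyond the limits it derived from the 
reported disk geometry, which is why this didn't help.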

Other suggestions were to examine the "new" filesystems from Sun - QFS and 

I will live with it the way it is...

> I don't have to newfs filesystems very often so I don't run into this 
> problem that much.
> I have just built a 500+ GByte RAID-5 user data filesystem.  I am 
> running "newfs" against it and it has been running for almost an hour now. 
> Because of the amount of data space available, newfs is creating a gazillion 
> superblock duplicates.  Now, I am quite happy to have superblock 
> duplicates in the event of loss of the primary superblock.  But this is a 
> failure that rarely happens any more.  Usually the entire filesystem is 
> corrupted beyond redemption.  I don't mind giving up some space to a few 
> -- even a few hundred -- superblock clones.  But there comes a point where 
> duplicate superblocks are just a waste of space and fragmentation of the 
> data area.
> Does anyone have any suggestions about how to limit the number of 
> superblock duplicates that get created on a ufs filesystem?  I am 
> restricted by the customer to using UFS and do not have the luxury of 
> looking into Veritas or any other "smart" volume/filesystem manager.  I 
> haven't found anything promising in the man pages.


Chris H. Ruhnke
Technical Services Professional
IBM Global Services
Dallas, TX

Office:  (972) 980-0474 ext. 234
Cell:     (214) 704-3749
Text messaging: mail to (128 character max)

O'Toole's Law:  Murphy is an optimist.
sunmanagers mailing list
Received on Thu Sep 15 08:09:00 2005

This archive was generated by hypermail 2.1.8 : Thu Mar 03 2016 - 06:43:51 EST