SUMMARY: UFS Block Size and JFS options

From: Jeff D. Allen <allen_at_cs.dal.ca>
Date: Mon Jun 17 2002 - 10:36:34 EDT
Thanks to everyone who responded. I think the consensus in our group
is to try throwing more resources at the server for now and see how
it goes, but the installation will be done in a way that makes it easier
to implement other ideas in the future, such as storing mail on multiple
disks or swapping in faster 15K disks. We'll also try the priority
paging technique.

Responses and original post are below:

On Wed, Jun 12, 2002 at 11:47:19AM -0300, Jeff D. Allen wrote:
> Background:
> System: Netra T1 AC/200		OS: Solaris 8 7/01
> No special disk or logical volume management software.
> Probably woefully behind on patches.
> 
> I wanted to create a UFS filesystem with a block size larger than 8192, but
> according to newfs(1M):
> 
>  -b bsize
>        The logical block size of  the  file  system  in
>        bytes  (either  4096  or  8192).  The default is
>        8192.  The sun4u architecture does  not  support
>        the 4096 block size.
> 
> And when I tried: newfs -Nv -b 16384 /dev/rdsk/c0t1d0s0
> I received this:  newfs: 16384: bad block size
> 
> Does anyone know of a relatively simple way of creating a UFS filesystem with
> a block size larger than 8192, or if it's even possible?
> 
> This system is replacing an old mail server (zmailer) that has become bogged
> down with I/O wait. Does anyone have any tips on how to avoid this, i.e.
> increase disk read/write performance? One of our theories is that the current
> mail server is low on RAM (128MB) and stuff is being swapped in and out of
> memory so throwing more resources (1GB of RAM) at it might help.
> 
> Thanks! Will summarize.
> 
> -- 
> --------------------------------------------------------------------
> Jeff D. Allen, BCSc  ---------------------------------------------
> Systems Administrator		Faculty of Computer Science
> Dalhousie University		6050 University Ave. Halifax NS
> Email: allen_at_cs.dal.ca		Web: http://www.cs.dal.ca/
> ------------------------------------------------------------------

-----


From: Mike Salehi
Upgrade to Solaris 9.


-----


From: Kevin Buterbaugh
AFAIK, 8192 is the only block size you can use with UFS on a sun4u
architecture system.  I doubt, however, that tuning that would help in any
case.

Before you do anything else, you need to prove or disprove your theory
of a memory shortage.  If either vmstat or sar -g shows a non-zero scan
rate, then you need more memory.  If so, then until you upgrade the RAM
you're wasting your time trying to tune the I/O subsystem.
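
A quick way to check either counter; the five-second interval below is
only illustrative:

    # "sr" (scan rate) column: consistently non-zero means the page
    # scanner is running and the box is short of memory
    vmstat 5

    # the same figure from sar is the "pgscan/s" column
    sar -g 5 5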

I think you're on the right track; 128 MB of RAM is extremely small these
days.  Also, one hair-splitting aside: I doubt your system is swapping.
It is more likely paging, and possibly paging quite heavily.  If your
system ever swaps, it needs more memory NOW!  HTH...


-----


From: Christophe Dupre
You can't change the block size of the filesystem.

To help, you can do a couple of things:
- Get a faster disk (15K RPM if possible)
- Add memory. Memory will allow the OS to cache more data
- Make sure you use the UFS logging mount option. This makes file creation
a bit faster (~5%)
- Add more drives, and spread your data across them (software RAID-0
with DiskSuite)
- Make sure your filesystem is nowhere near full. When a filesystem gets
full, it is slower as it is more difficult to find free blocks and
fragmentation occurs.
- How is the DNLC cache doing? Run vmstat -s and check the name-lookup
cache hit percentage. Anything under 90% means more memory should be
allocated to the DNLC (a sketch follows below). See the Solaris tuning
guide.


-----

From: Jay Lessert
> Background:
> System: Netra T1 AC/200               OS: Solaris 8 7/01
> No special disk or logical volume management software.
> Probably woefully behind on patches.
>
> I wanted to create a UFS filesystem with a block size larger than 8192, but
> according to newfs(1M):

You can't.

> This system is replacing an old mail server (zmailer) that has become bogged
> down with I/O wait. Does anyone have any tips on how to avoid this, i.e.
> increase disk read/write performance? One of our theories is that the current
> mail server is low on RAM (128MB) and stuff is being swapped in and out of
> memory so throwing more resources (1GB of RAM) at it might help.

Before you go write your own file system :-), it would be a good idea
to run 'vmstat 60' on the box for a day (or maybe 'vmstat -p 60').
That'll tell you if you're out of RAM or not.
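
For example, a hands-off way to capture that day of data (the log file
name and interval are arbitrary):

    # api/apo (anonymous page-ins/outs) staying non-zero points at a RAM
    # shortage; fpi/fpo is ordinary filesystem I/O
    nohup vmstat -p 60 > /var/tmp/vmstat-p.log &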

I'm not familiar with zmailer, but postfix and qmail are two MTAs that are
capable of handling prodigious e-mail traffic on modest hardware.  They
both concentrate on reducing the number of file create/delete
operations necessary to enqueue and deliver a message.


-----


From: Tristan Ball
You can't increase the block size, because the block size must be the
same as the memory page size. This is because Solaris does 90% or more
of its I/O through the VM paging system.
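
You can see both numbers on the box itself, for example (using the slice
name from the original post):

    # MMU page size on sun4u is 8192 bytes
    pagesize

    # the superblock of an existing UFS shows the matching "bsize"
    fstyp -v /dev/rdsk/c0t1d0s0 | grep bsize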

I'd recommend having a look at the output from "vmstat -p", and "iostat
-dxn".

Be careful about what you mean by "bogged down with iowait", too. The
CPU counter for iowait time on Suns is essentially meaningless; it's just
a flag on the kernel CPU structure that is set while there is I/O active
on any disk/NFS device. The CPU is free to work on other things while
that happens, and it will get an interrupt when the I/O completes.

vmstat will give you a good breakdown of what kind of disk I/O you are
getting (it should be mostly filesystem page-ins/outs/faults, i.e. normal
FS I/O). If you see consistent non-zero numbers in the sr or anonymous
paging columns, you have a serious memory shortage.

From iostat, if the wait, actv, wsvc_t and asvc_t columns for the
partition in question add up to more than 20, and %b is consistently
around 60%, then you have a disk bottleneck.
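
For example (the 30-second interval is arbitrary, and the first sample
is an average since boot, so ignore it):

    # per-device numbers; watch wait, actv, wsvc_t, asvc_t and %b
    iostat -dxn 30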

Adding memory will, of course, likely help, regardless of what the
problem really is. 128 MB is quite a small amount, and even if the system
is not actively swapping processes, discarding executable pages, or
writing anonymous memory to the swap device, there is not going to be
much left for the FS page cache, and you'll see far more I/Os than you
should.

As for increasing performance:
Mount the volume "noatime".
Add RAM.
Add disks: 6 drives, configured as 3 mirrors with a stripe volume across
the top, will give you quite impressive I/Os-per-second rates (a sketch
follows below). But it really depends on what you have now!
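
A rough sketch of the mount option and a DiskSuite layout along those
lines (device names are hypothetical, state database replicas must
already exist via metadb, and DiskSuite expresses the layout as a mirror
of stripes rather than a stripe of mirrors):

    # build two 3-disk stripes and mirror them as d10
    metainit d11 1 3 c1t0d0s0 c1t1d0s0 c1t2d0s0
    metainit d12 1 3 c2t0d0s0 c2t1d0s0 c2t2d0s0
    metainit d10 -m d11
    metattach d10 d12

    # make a filesystem on the mirror and mount it without
    # access-time updates
    newfs /dev/md/rdsk/d10
    mount -F ufs -o noatime /dev/md/dsk/d10 /var/mail
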
Use VxFS. The Veritas filesystem generally requires fewer metadata
updates than UFS, and is better at batching I/Os together. Benchmarks
for heavy ClearCase activity saw a 15% improvement in performance on
VxFS.


-----


From: Bruce McAllister
The maximum block size currently on UFS (as far as I am aware) is 8K;
you cannot create a block size bigger than that. If you are creating
this on a mail server it may be advisable to use a smaller block size,
as the typical mail message is not that big and you will otherwise be
wasting space on the filesystem. It may be beneficial to check your mail
message sizes over a period and base your block size on the average size
of a message.
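
Within UFS's fixed 8K block size, the fragment size (and inode density)
is the practical knob for lots of small files. A sketch, using the slice
name from the original post and illustrative values:

    # 8K blocks, the minimum 1K fragments, and one inode per 2K of data
    newfs -b 8192 -f 1024 -i 2048 /dev/rdsk/c0t1d0s0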


-----


From: Wolfgang Kandek
Try enabling priority paging on the old system. Under heavy file I/O
Solaris can page out executables to make space for more file caching;
priority paging prevents this.

take a look at this link for some information:
http://www.sun.com/sun-on-net/performance/priority_paging.html
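
Where it applies (Solaris 7, or 2.6 with the appropriate kernel patch;
Solaris 8's newer page cache design makes it unnecessary), it is a
one-line /etc/system setting followed by a reboot:

    set priority_paging=1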

-- 
--------------------------------------------------------------------
Jeff D. Allen, BCSc  ---------------------------------------------
Systems Administrator		Faculty of Computer Science
Dalhousie University		6050 University Ave. Halifax NS
Email: allen_at_cs.dal.ca		Web: http://www.cs.dal.ca/
------------------------------------------------------------------
_______________________________________________
sunmanagers mailing list
sunmanagers@sunmanagers.org
http://www.sunmanagers.org/mailman/listinfo/sunmanagers