SUMMARY: Parameter to reduce I/O buffer

From: Pleszko, Alex <>
Date: Mon Jul 29 2002 - 10:06:13 EDT
  Hi gurus,

  I am still working on this issue, but I've got two options for
manipulating the I/O buffer behavior from the answers on this
list. To limit the total amount of memory that can be used for
the I/O buffer (also referred to as the file cache), change the
kernel parameter "bufhwm". The other option is to mount the file
systems used by the database with the "directio" option, so that
the file cache is bypassed for those file systems and used only
for the local disks.
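As a sketch of option 1, assuming the value below is purely illustrative: bufhwm is set in /etc/system, is expressed in kilobytes, and only takes effect after a reboot.

```shell
# Sketch only: cap the buffer cache at roughly 8 MB via /etc/system.
# bufhwm is in KB; pick a value appropriate for your workload.
# A reboot is required for /etc/system changes to take effect.
echo 'set bufhwm=8000' >> /etc/system
```
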

Thanks to:

Jay Lessert
Kevin Buterbaugh
Erik Williams
Robert Reynolds
Baranyai Pal
Amit Mahajan
Tristan Ball
Zeev Fisher

Alex Pleszko

Here is the original question and the replies I've got so far:

Pleszko, Alex wrote:

>  Hi gurus,
>  I have an E10k with 12 Gb mem that runs exclusively one Oracle
>instance. Oracle is using about 8 Gb of shared memory plus some
>user processes. Almost 3 Gb of memory is being used for the file
>cache, as I could verify with memtool (the prtmem command). Usually
>this wouldn't worry me, but this machine is using SAN disks (which
>have their own cache), so there is no need for two caches. The
>memory scan rate is also presenting high values.
>  Does anyone know which parameter I can set to reduce or eliminate
>the file cache on Solaris 7?
>  I will summarize. Thanks in advance!
>  Alex



     You need to mount the filesystems with the "directio" option.  See the
man page for "mount_ufs" for more information on this.  Also, Allan
Packer's excellent book "Configuring and Tuning Databases on the Solaris
Platform" gives more information on Direct I/O itself.  HTH...
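As a minimal sketch of the advice above (the device and mount point are hypothetical), enabling Direct I/O from the command line looks like:

```shell
# Hypothetical device and mount point; forcedirectio is the
# mount_ufs(1M) option that enables Direct I/O for the filesystem.
mount -F ufs -o forcedirectio /dev/dsk/c1t0d0s6 /u01
```
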

Kevin Buterbaugh

"Anyone can build a fast CPU.  The trick is to build a fast system." -
Seymour Cray


use the forcedirectio parameter in your ufs mount options in /etc/vfstab
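For example, a vfstab entry carrying the option might look like this (device names are hypothetical; the last field holds the mount options):

```shell
# /etc/vfstab columns: device to mount, device to fsck, mount point,
# FS type, fsck pass, mount at boot, mount options (hypothetical devices)
/dev/dsk/c1t0d0s6  /dev/rdsk/c1t0d0s6  /u01  ufs  2  yes  forcedirectio
```
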


I am not a Sun expert, but I know a little Oracle. If your Oracle DB files
are not on raw partitions (or Veritas Quick I/O), you are going to be double
buffering: data is being cached in both the OS and the Oracle SGA.
If you are caching it again in your SAN solution, then you are triple
buffering.

If your scan rate is consistently high, it is an indication that your system
is running short on memory. If you look at the page-outs, I would
expect to see a considerable rate, given that you have 3GB of file cache.
The SAN solution may be providing you with a great deal of cached writes.


And what is the real speed of the SAN?
Not the cache itself, but the connection?
100 MBps? 200 MBps?
And what is the speed of memory? 1-2 GBps?
So your "fast" (hahaha) SAN cache sits behind
a relatively slow connection.
Anyway, the file cache will use the free (unused)
memory on Solaris. When the system needs it, it will flush pages.


This site would help you work out your disk I/O cache formula.


And more to the point, Oracle is buffering anyway.

If you enable Direct I/O for the filesystems your DBs are on, then the OS
no longer caches data from those filesystems, while still caching
everything else (which you want! :-) )


The kernel parameter bufhwm defines the maximum amount of memory that the
buffer cache can use.

You can see how big it is with:

/usr/sbin/sysdef | grep bufhwm

Also, you can get statistics on it with:

kstat unix:0:biostats

(calculate the hit rate)
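As a sketch of that calculation: the counter names below (buffer_cache_hits, buffer_cache_lookups) are what biostats typically reports, but verify them on your release; the sample numbers are made up to show the arithmetic.

```shell
# Sketch: compute the buffer cache hit rate from biostats counters.
# The stat names are assumptions based on typical Solaris kstat output;
# check "kstat unix:0:biostats" on your system first.
hit_rate() {
  awk '
    $1 == "buffer_cache_hits"    { hits = $2 }
    $1 == "buffer_cache_lookups" { lookups = $2 }
    END { if (lookups > 0) printf "%.1f%%\n", 100 * hits / lookups }
  '
}

# On a live box:  kstat unix:0:biostats | hit_rate
# Sample numbers to show the arithmetic:
printf 'buffer_cache_lookups 100000\nbuffer_cache_hits 97000\n' | hit_rate
# prints 97.0%
```
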

I think that if you are working with Oracle, which has its own cache,
the best option is to use the directio option in the mount command.
This way, you bypass the memory cache completely and you will get
performance which is almost as fast as a raw device.

sunmanagers mailing list
Received on Mon Jul 29 10:16:23 2002

This archive was generated by hypermail 2.1.8 : Thu Mar 03 2016 - 06:42:50 EST