SUMMARY: High iowait on 280R / Solaris 9

From: NKS <>
Date: Fri Jan 19 2007 - 09:18:14 EST
Original post:

> I have a SunFire 280R Solaris 9, 2xCPU, 4GB RAM, fiber HBA
> connected to a Nexsan Atabeast with 14x300GB disks. Nexsan 
> disks organized in 8 identical volumes, RAID5. Veritas VM
> v.4 used on the Sun server.
> Even though the I/O activity is low, the server reports 
> 20-80% of the CPU-time is iowait...
> The Atabeast controller is idle; nothing special going on. 

Forgot to mention that the server is an NFS-server but only a
few clients were accessing their NFS-mounts at the time...

Comments / suggestions:

1) In Solaris 10, iowait is no longer measured and will be reported as zero
   by existing tools. Since iowait was always a variant of idle time, this
   makes no difference to usr or sys time. iowait was always a confusing and
   useless metric, which is why it was removed.
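To see why dropping iowait changes nothing for usr or sys time, here is a minimal sketch (with made-up tick counts) of the accounting: "wait" ticks are just idle ticks during which some I/O happens to be outstanding, so folding them into idle, as Solaris 10 does, leaves %usr and %sys exactly as they were.

```python
# Hypothetical CPU tick accounting. "wio" ticks are idle ticks with
# I/O outstanding; Solaris 10 simply counts them as plain idle.

def cpu_percentages(usr, sys, wio, idle, report_wio=True):
    """Return (%usr, %sys, %wio, %idle) from raw tick counters.

    With report_wio=False (the Solaris 10 behaviour), wait ticks
    are folded into idle and %wio is reported as zero.
    """
    total = usr + sys + wio + idle
    if not report_wio:
        idle, wio = idle + wio, 0
    return tuple(round(100.0 * t / total, 1) for t in (usr, sys, wio, idle))

# Hypothetical counters sampled over an interval:
usr, sys, wio, idle = 100, 50, 600, 250

print(cpu_percentages(usr, sys, wio, idle, report_wio=True))   # Solaris 9 view
print(cpu_percentages(usr, sys, wio, idle, report_wio=False))  # Solaris 10 view
```

Either way, %usr and %sys come out the same; only the split between %wio and %idle moves.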

2) Volume layout

   As some of you pointed out, there is no point in using a striped layout
   on a volume that is already configured as RAID5 on the disk controller;
   this only slows down the I/O. Unfortunately, I haven't had the opportunity
   to recreate these volumes with a Veritas concat layout, but all new volumes
   on other disk boxes use this layout, and we're seeing higher I/O
   throughput on them.
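One likely reason striping over a controller RAID5 set hurts: RAID5 only avoids the parity read-modify-write penalty when a write covers a full stripe, and a host-side stripe that chops large sequential writes into small stripe units makes partial-stripe writes far more likely. The sketch below illustrates the effect with hypothetical disk counts and chunk sizes (none of these numbers come from the Atabeast).

```python
# Hypothetical illustration: physical disk I/Os a RAID5 controller needs
# per write, assuming read-modify-write for partial-stripe writes and a
# plain data+parity write for aligned full-stripe writes.

def disk_ios_for_write(offset, length, data_disks, chunk):
    """Rough count of disk I/Os for one write to a RAID5 set."""
    full_stripe = data_disks * chunk
    ios = 0
    pos, end = offset, offset + length
    while pos < end:
        stripe_start = (pos // full_stripe) * full_stripe
        span = min(end, stripe_start + full_stripe) - pos
        if pos == stripe_start and span == full_stripe:
            ios += data_disks + 1          # write all data chunks + parity
        else:
            chunks = -(-span // chunk)     # ceil: data chunks touched
            ios += 2 * chunks + 2          # read+write data, read+write parity
        pos += span
    return ios

DATA_DISKS, CHUNK = 4, 64 * 1024           # made-up geometry
FULL = DATA_DISKS * CHUNK                  # 256 KiB full stripe

# One aligned 1 MiB write -> four full-stripe writes:
print(disk_ios_for_write(0, 4 * FULL, DATA_DISKS, CHUNK))

# The same 1 MiB chopped into 64 KiB host-side stripe units scattered
# across the RAID5 volume -> all partial-stripe read-modify-writes:
print(sum(disk_ios_for_write(i * FULL + CHUNK, CHUNK, DATA_DISKS, CHUNK)
          for i in range(16)))
```

Under these assumptions the chopped-up version costs roughly three times as many physical I/Os for the same amount of data, which matches the intuition that concat on top of hardware RAID5 is the safer layout.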
3) Tools to find more info about disk activity

   vxstat  (e.g. vxstat -g smoddg -i 5 -c 10)
   sar     (e.g. sar 3 10)
   prstat  (e.g. prstat -s size)
   top     (have a look at picld's size. May require a patch if size is large)
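When watching iowait over time, it can help to pull just the %wio column out of `sar -u` output rather than eyeballing the whole table. A small sketch, using a made-up sample that follows the usual Solaris 9 column layout (time, %usr, %sys, %wio, %idle):

```python
# Extract the %wio column from `sar -u` style output.
# SAMPLE_SAR is invented, but mimics the Solaris 9 column layout.

SAMPLE_SAR = """\
09:00:01    %usr    %sys    %wio   %idle
09:00:04       5       3      72      20
09:00:07       4       2      80      14
09:00:10       6       4      65      25
"""

def wio_samples(sar_text):
    """Return the %wio value from each data line of `sar -u` output."""
    values = []
    for line in sar_text.splitlines():
        fields = line.split()
        # Skip the header row and anything that isn't a data line.
        if len(fields) == 5 and fields[1].isdigit():
            values.append(int(fields[3]))
    return values

samples = wio_samples(SAMPLE_SAR)
print(samples)        # [72, 80, 65]
print(max(samples))   # peak %wio in the interval
```

In practice you would feed it the real output, e.g. `wio_samples(subprocess.run(["sar", "-u", "3", "10"], capture_output=True, text=True).stdout)`.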


Solution:

None yet. My server is still spending much time in iowait. However, after
rebooting all the clients, the iowait numbers dropped significantly, so it
could be that one of the clients was misbehaving badly... I plan to change
the volume layout when we upgrade to a new disk box.
Thanks to all that replied!

Best regards,

sunmanagers mailing list
Received on Fri Jan 19 09:20:06 2007

This archive was generated by hypermail 2.1.8 : Thu Mar 03 2016 - 06:44:04 EST