Got a few answers regarding part of my query, which was admittedly rather broad. The question is repeated at the end of this email.
Late news: We just found out that we are running a version of parityck that can cause problems with our configuration. We just killed multiple copies of it on two servers, and that reduced I/O times considerably. It did not help on one of our E3500s, but that machine doesn't have the same kind of disk configuration as the servers in question. Since this email was almost ready to send when we found the parityck problem, it is being sent as written, FWIW.
Continuing with the originally intended summary:
There was some question whether the A3500 might just be too slow, not the disks. Good question. We've been having some controller problems, including what look like failures that rm6 can't fix...until after an SSE shows up on site, at which point rm6 suddenly can fix them. It's happened more than once.
We also got some tuning parameters to make vxfs/vxvm more efficient. We will be testing these out in the near future. They are (exactly as received):
A few things you can try --
- Striping the volume over multiple controllers would help a great deal, but if you can't do that - (NOTE: We can't, right now. MW)
- Try tuning up VM and VxFS a bit.
* File System support for VXFS 3.2.5+
* Volume Manager support.
- Plus tuning the kernel to give more cache to VFS would help - Especially with Mail files.
(Parameters based on 4GB mail server with HW Raid controllers - adjust to your needs) (NOTE: we have about 5% of a 702GB mail data area used - MW)
* Set up priority paging to make better use of limited RAM.
* Must remain in this order for proper loading.
* Speed up the scan rate to run through plenty of memory pages.
* Slow down pager and fsflush scan rates.
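The actual /etc/system lines behind those bullets did not make it into this summary. Purely as an illustration of what such a fragment tends to look like (the values below are placeholders, not the ones we received, and priority paging on Solaris 2.6 needs a recent kernel jumbo patch):

```
* /etc/system fragment -- illustrative values only; size to your own RAM
* Enable priority paging so the page scanner steals file-system
* cache pages before application pages.
set priority_paging=1
* Speed up the maximum scan rate (pages/second) so the scanner can
* run through plenty of memory pages.
set fastscan=131072
* Slow down fsflush so it is not constantly rescanning the whole cache.
set autoup=240
set tune_t_fsflushr=15
```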
And the most important thing is a small detail we had forgotten about: the vxstat command. It made our problems with iostat irrelevant, since its per-subdisk statistics showed that only two of the seven subdisks were getting any traffic.
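For anyone chasing the same thing, the per-subdisk numbers come from something like the following (the disk group name "maildg" is invented for this example):

```shell
# Cumulative I/O statistics broken out by subdisk
vxstat -g maildg -s
# Or sample every 5 seconds to watch the per-subdisk skew live
vxstat -g maildg -s -i 5
```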
What we didn't get was any pointer to information on how to calculate stripe sizes if we went to a striped plex/volume setup. Our seven subdisks are currently 68GB each, and we wonder whether the standard 128KB stripe unit is really what we want.
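Back-of-the-envelope only, and the numbers below are just the defaults rather than a recommendation: with a 128KB stripe unit and all seven subdisks as columns, a full stripe works out to

```shell
# Hypothetical arithmetic: full-stripe width = stripe unit x columns
stripe_unit_kb=128   # assumed stripe unit (the "standard" value above)
ncols=7              # one column per existing 68GB subdisk
full_stripe_kb=$((stripe_unit_kb * ncols))
echo "full stripe: ${full_stripe_kb} KB"
```

The usual rule of thumb is that lots of small random I/O (mail files) favors a stripe unit at or above the typical request size, so each request hits a single column, while large sequential I/O favors transfers sized to the full stripe.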
Similarly, no one had any solid input on whether the SSE's suggested plan for switching from concatenated to striped plex/volume was feasible.
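For what it's worth, the variant we had expected to hear about works at the plex level rather than mirroring one volume with another: add a striped plex as a mirror of the existing volume, let it sync, then drop the concatenated plex. A sketch only — the disk group and object names ("maildg", "mailvol", "mailvol-01") are invented, and it needs free space equal to the volume size:

```shell
# Attach a new striped plex as a mirror of the existing volume
# (-b runs the resync in the background)
vxassist -g maildg -b mirror mailvol layout=stripe ncol=7
# Once vxprint shows the new plex ENABLED/ACTIVE, dissociate and
# remove the old concatenated plex, freeing its space
vxplex -g maildg -o rm dis mailvol-01
```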
Only one person noticed the controller/disk array name switch error (still tired from the previous night's crash?) in the original question:
> Date: Tue, 10 Oct 2000 14:55:03 -0400
> From: Michael Watson <email@example.com>
> To: firstname.lastname@example.org
> Subject: Disk config poor for I/O? E10K/A3500/rm6/SEVM 2.6
> We have a mail server which is showing high I/O wait cpu stats on iostat -c, and users have been known to complain that the system
> seems slow. The system is an E10K domain with D1000 controller over five full trays on an A3500, H/W raid by rm6 into 13 LUNs. The
> mail user space is RAID5, 10 LUNs showing as 10 physical disks to the system, which are vxfs and concatenated each as a single subdisk
> into a single plex and single volume under SEVM/vxvm 2.6.
> I/O wait times are generally 60-80% all the time. We have Virtual Adrian running, and it complains about the native cxtxdxsx
> volumes, but never about the sd devices that show up under iostat -x (we don't know if it would). In fact, one of our problems is
> that we cannot seem to map all the sd devices to virtual devices recognized by SEVM. Our busiest sd device has these cumulative
> stats:
> device r/s w/s kr/s kw/s wait actv svc_t %w %b
> sd686 5.1 25.9 60.9 277.3 0.2 1.3 49.2 2 36
> and maps to path_to_inst as:
> /sbus@69,0/QLGC,isp@0,10000/sd@5,4 686 sd
> but no cxt5d4sx appears in vxva or any command line disk print output (e.g., vxprint, vxdisk list). This is true of about half the
> sd devices iostat -x can see.
> Our SSE has suggested defrag of the volume, shrinking it, taking the remaining space, creating a new striped volume, then mirroring
> the old volume and after it syncs up, breaking the mirror and moving the new volume to the old mount point, and recovering the space
> from the old volume into the new.
> The part about being able to mirror a volume with another existing volume sounds unfamiliar to us, but does it sound feasible? In
> fact, does it sound like we would gain much from doing this in the first place?
> More details are available for anyone interested.
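On the sd-to-cxtxdxsx mapping question above, the usual trick is to pull the physical path for an instance number out of /etc/path_to_inst and then match it against the /dev/dsk symlinks (ls -l /dev/dsk | grep <path>). The parsing step, demonstrated here on the sample line quoted in the post rather than a live file:

```shell
# On a live system: grep ' 686 ' /etc/path_to_inst, then match the
# path against the /dev/dsk symlinks to find the cXtXdXsX name.
line='"/sbus@69,0/QLGC,isp@0,10000/sd@5,4" 686 "sd"'
path=$(echo "$line" | tr -d '"' | awk '{print $1}')
inst=$(echo "$line" | awk '{print $2}')
echo "sd${inst} -> ${path}"
# prints: sd686 -> /sbus@69,0/QLGC,isp@0,10000/sd@5,4
```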
This archive was generated by hypermail 2.1.2 : Fri Sep 28 2001 - 23:14:19 CDT