SUMMARY: SUN and Oracle performance design question

From: Timothy Lorenc <tim_at_load.com>
Date: Wed May 29 2002 - 15:50:15 EDT
[Sorry if this is a duplicate, I did not see this message go through the
first time I sent it.]

I would like to thank the following individuals for responding to my query:

Luis Aguilar [laguilar@transformpharma.com]
topher [topher@findtopher.com]
Mats Öberg [mats.oberg@tietoenator.com]
Jeff Kennedy [jlkennedy@amcc.com]
Tim Chipman [chipman@ecopiabio.com]
Tristan Ball [tristanb@vsl.com.au]

From their combined responses and experiences, I have settled on a small SAN
instead of directly attached Sun StorEdge T3s. Currently we are looking at an
EMC FC4700 connected via 2Gb Fibre Channel PCI HBAs from either QLogic or
LSI Logic.

Factors that contributed to this decision:
1. The ability to share available storage among two or three separate
projects/systems that have aggressive storage requirements. By consolidating
the funds available for each project into one lump sum, I was able to
convince upper management that acquiring a SAN is feasible.
2. The Sun StorEdge T3 only allows the creation of 2 LUNs per tray (see the
sketch after this list).
3. Oracle should probably revise its recommended database file layout;
advances in storage technology have surpassed the older SCSI controller
technology those recommendations appear to be based on.
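
For illustration, here is roughly what carving up a tray looks like from the
T3's own command line; the drive ranges below are hypothetical, but the point
stands that the firmware accepts at most two volumes (LUNs) per tray:

    t3:/:<1> vol add v0 data u1d1-4 raid 5                # first LUN: RAID 5
    t3:/:<2> vol add v1 data u1d5-8 raid 5 standby u1d9   # second LUN + hot spare
    t3:/:<3> vol init v0 data
    t3:/:<4> vol mount v0
    (a third "vol add" is refused; two volumes per tray is the ceiling)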

As a side note, I found very interesting white papers on RAID and Oracle
performance tuning at www.orapub.com (paper index:
http://www.orapub.com/cgi/genesis.cgi?p1=sub&p2=papers_main):

Implementing RAID on Oracle Systems
http://www.orapub.com/cgi/genesis.cgi?p1=sub&p2=abs132

Myths & Folklore About Oracle8i Performance Tuning
http://www.orapub.com/cgi/genesis.cgi?p1=sub&p2=abs139

Oracle Applications Capacity Planning for Middle Tier Servers
http://www.orapub.com/cgi/genesis.cgi?p1=sub&p2=abs137

RAID: High-Performance, Reliable Secondary Storage
http://www.orapub.com/cgi/genesis.cgi?p1=sub&p2=abs124

High-Availability and Scalability for Your Oracle Database
http://www.orapub.com/cgi/genesis.cgi?p1=sub&p2=abs123


RESPONSES:

Luis Aguilar [laguilar@transformpharma.com]
The T3s had a big limitation of only two LUNs per box; maybe this is no
longer an issue.

topher [topher@findtopher.com]
Actually, we just did almost EXACTLY this same research and found several
interesting solutions.

First: based on the current Sun architecture, loading a host up with T3s
didn't work very well. Essentially, you can overload the backplane with
parallel data reads and writes and wind up bound at the backplane (no fun at
all); part of the problem is that there aren't enough PCI buses in a system
to support full throughput.

What we DID like was the Sun SAN solutions.  We tested a 6900 and a 9910,
configured with dual-port 2Gb fibre cards (from QLogic, of course) connected
to a pair of QLogic 2Gb switches (which are cheaper than the 1Gb switches),
and with Veritas DMP (which comes with Veritas Volume Manager) we wound up
with an 8Gb connection to the SAN, which has outperformed every other
configuration we could find.  The nice part is that since the initial
purchase, we've managed to grow the solution to serve our entire development
environment (so I've got an 8TB solution now, servicing AIX, Sun, HP, and
Microsoft systems).
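
As a rough sketch of what that looks like from the host side, VxVM's DMP can
show the multiple active paths behind each device; the controller name below
is an assumption, not from the original post:

    # list the controllers DMP has discovered, then the paths behind one
    vxdmpadm listctlr all
    vxdmpadm getsubpaths ctlr=c2
    # with four 2Gb ports active, DMP load-balances I/O across all of them,
    # which is where the aggregate 8Gb pipe comes from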

The 6900 is basically a stack of T3s with a 'SAN' head.  That, plus the fact
that it's Sun's fourth quarter and Sun is really pushing storage solutions
right now, means you should be able to get a pretty good deal on the whole
kit and caboodle, no matter which way you go.

Oh yeah, and with Veritas Quick I/O, the testing we performed showed that
'cooked' file systems performed just as fast as raw disk, which makes
backups, restores, and general disk management easier.
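
For reference, Quick I/O files on a VxFS file system are created with
qiomkfile; the path and size here are placeholders:

    # create a 2GB Oracle datafile with raw-like access semantics
    qiomkfile -s 2g /oradata/PROD/data01.dbf
    # qiomkfile leaves a symlink pointing at the hidden file through the
    # ::cdev:vxfs: interface, so Oracle bypasses the file system page cache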

and that, my friend, is my Sun sales speech... (You can look to STK and
Hitachi for storage, the 9100 from Hitachi is pretty nice -- but my opinion
would be not to let EMC get their hooks in you - the upfront price may be
nice, but the long term service is a high price to pay....)

And yes, I would say you don't need to worry about arranging your disks
nearly so precisely as in days of yore.  If you just drop your database onto
one of these solutions, you can monitor for hotspots and swap with cool
areas on the fly, which means that instead of a 'theoretical' distribution
of disk access you get a 'real-time' view of what's actually happening,
because we all know that what is 'supposed' to happen on a system never does.
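
A minimal sketch of that kind of monitoring on Solaris (the disk group name
is illustrative):

    # watch per-device service times and utilization at 5-second intervals
    iostat -xn 5
    # or report per-volume I/O statistics from VxVM
    vxstat -g oradg -i 5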

Mats Öberg [mats.oberg@tietoenator.com]
Hi, I would recommend looking into LSI Logic's arrays.

Jeff Kennedy [jlkennedy@amcc.com]
Here's my take on this.  Why spend time and energy trying to figure out
the best disk/controller layout around storage vendor constraints?  We
have a XioTech Magnitude SAN array with 32 disks in the cabinet.  XioTech
works at the block level, which means no disk assignment is needed.  We
tell the SAN how much space we want and it presents that amount as a
single LUN, but the space is spread across all 32 drives (true
virtualization).  It's blazing fast; the only issue I have seen is that
the FC-AL controller doesn't have enough cache to keep up with the SCSI
requests under heavy load.  But even that has a fix through a Sun system
parameter.
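
The response doesn't name the parameter, but a common candidate on Solaris
for exactly this symptom is sd_max_throttle, which caps outstanding SCSI
commands per LUN.  A hedged /etc/system sketch (the value 20 is illustrative;
array vendors publish their own recommendations):

    * /etc/system: limit outstanding commands per LUN so the array
    * controller's cache is not overrun under heavy load
    set sd:sd_max_throttle=20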

Tim Chipman [chipman@ecopiabio.com]
AFAIK, in general, "advanced hardware RAID 5 engines" with fancy adaptive
cache routines, tons of cache memory, and read-ahead / write-behind caching
deliver most of their "benefit" when the underlying array is RAID 5.  If you
use a T3 as JBOD, RAID 0, RAID 1, or RAID 0+1 storage, you have a lot of
hardware in the T3 that you have spent money on but are not actually using
or getting significant benefit from.

I.e., the whole point of purchasing T3 bricks over the D2000 (a fairly new
JBOD Ultra160 SCSI array from Sun) is the benefit of the hardware RAID
controller, for RAID 5 arrays in particular.

AFAIK Oracle still does not "recommend" EVER using RAID 5 storage.  As far
as I can tell, this dates back to the bad old days of yore when RAID 5 was
inseparably associated with utterly terrible performance (especially for
disk writes).  The advent in the past 2-3 years of decent-to-excellent
RAID 5 performance, thanks to snazzy hardware RAID implementations, seems
to have been mostly ignored, AFAIK, by the good folks at Oracle.  I.e.,
they still suggest that the only good hardware to run Oracle on is a setup
with mirrored drives and tons of controllers, with no discussion of hardware
RAID 5 implementations or (gasp!) NFS mounts from a *fast* "filer" box.
[Although the folks at NetApp, for instance, HAVE succeeded at getting
their NFS "Filer" units approved by Oracle as storage solutions for Oracle
databases, I'm not aware that Oracle has clearly discussed the general
issues involved in approving a given NFS-mount environment as "appropriate
and approved" for Oracle.  (Sigh.)  However, the fact that I don't know of
such discussion or docs from Oracle doesn't imply that they don't exist. :-)]

Also note, in my opinion anyhow, that Sun T3 disk arrays are:

- NOT the best disk array on the market, especially at list price, compared
to other units available.
- Somewhat dated in their design (i.e., no big change since their
introduction a few years back, other than adding "full SAN fabric support"
{which should have been present since day zero} and typically flogging
models with absurdly large amounts of cache RAM {i.e., 1 GB} as an attempt
to convince potential customers that T3s are indeed "cutting edge" units).
- However, all price issues aside, the T3 certainly is a very "reasonable
performer", no question about that.  Just not necessarily the best on the
market.

For example, *AFAIK*, the units sold by Winchester Systems can significantly
outperform Sun's T3 arrays.  They are NOT cheap by any means, but the cost
per gigabyte is similar to the T3's, and I suspect the performance is
significantly better.  These things are available either Ultra160-based
(with multiple independent buses for multi-host connectivity, or to increase
the number of controllers on the host talking to a single array if you wish)
or FC-AL-based at added cost.

Likewise, AC&NC Sell "JetStor III" disk arrays (ultra 160 based,
top-to-bottom,
with FCAL "external host connectivity" if desired) which offer (I believe)
very
similar performance to the T3 units, and they cost far less (which could be
seen
as a good, or bad thing, depending on how people thing at your site? :-)

Hope this info is of some use ... and certainly your final summary on the
topic
will be of "significant interest" :-)

Tristan Ball [tristanb@vsl.com.au]
Firstly, I'd be careful with the T3; it's a fairly inflexible unit.
They work fine as RAID-5 blobs, but they are limited to 2 LUNs per tray,
LUNs can't span trays, and combination configs like RAID 0+1 are limited.
The new version, with the 1GB cache, may have improved, but I doubt it. :-)

I'd recommend a Hitachi 9200, or an EMC unit.

You'll generally find these days that you don't need to worry about
controllers in quite the same way as you used to; the dual-controller
units in the low-to-midrange HDS or EMC lines are fast enough to saturate
2-4 FC100 links, depending on your I/O pattern.  And frankly, with four
100 MB/sec connections from a 9200, you're fairly unlikely to saturate the
links on a transactional database. :-)  The disks themselves are more
likely to be the bottleneck.

I'd strongly recommend you read the Sun whitepaper about what they call
"wide-thin stripes" too.

I would be very interested in what you decide to do, though; these kinds of
questions can generally only be decided by site-specific, in-house
benchmarks.  This is the other reason for going with a more serious array:
the T3 is too inflexible.  And, by the math I recently did, by the time you
start looking at 40+ disks, the cost/GB/LUN of the larger arrays beats the
T3 by a long way.



ORIGINAL MESSAGE:

Hello SunManagers;

Past and present, Oracle has suggested a large number of drives/LUNs (21 as
the ideal) for spreading out data, indexes, logs, rollback segments, temp
space, and so on.  Working with my developers, I have helped size this down
to 7 drives/LUNs for this information.  We are looking at Fibre Channel
solutions, in particular Sun StorEdge T3s.
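
For context, the 7-way split maps each class of database file to its own
LUN/mount point; the layout below is a hypothetical illustration, not our
actual configuration:

    /u01  Oracle binaries + control file copy 1
    /u02  data tablespaces
    /u03  index tablespaces
    /u04  online redo logs, group A + control file copy 2
    /u05  online redo logs, group B (multiplexed)
    /u06  rollback segments
    /u07  temp tablespace + archived redo logs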

My question concerns how many controllers would be ideal for this solution.
It would appear that seven LUNs, with the requirement for this data to be
striped and mirrored, would best be served by 14 physical controllers for 7
mirrored drives/LUNs.  In the past I have always lived by the philosophy of
striping along a controller and mirroring across controllers (sketched
below).  I am curious whether this thinking is "old school" when using
directly attached Sun StorEdge T3 arrays (or any modern fibre channel
solution)?
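
For the record, the old-school rule looks like this with Solstice DiskSuite;
the device names are hypothetical, with c1 and c2 being separate controllers:

    # stripe each submirror along ONE controller (3-wide, 64k interlace)...
    metainit d11 1 3 c1t0d0s0 c1t1d0s0 c1t2d0s0 -i 64k
    metainit d12 1 3 c2t0d0s0 c2t1d0s0 c2t2d0s0 -i 64k
    # ...then mirror ACROSS controllers
    metainit d10 -m d11
    metattach d10 d12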

I am currently working with systems that use Sun StorEdge A1000 arrays, and
I would like to move away from that technology [40 MB/s differential
UltraSCSI vs. 100 MB/s Fibre Channel, for one reason].  Does the performance
and resilience of fibre channel technology dismiss past hardware
conventions, given its inherent multipathing and full-duplex speed
capabilities?

BTW: are there any alternatives [both array and HBA] that anyone would
suggest as comparable to or better than the Sun StorEdge T3 solution?

Thanks for your time. Will summarize.

-- LOAD your email!

Timothy Lorenc        USmail:  Lorenc Advantage, Inc.
Consultant                     6732 E. State Blvd.
                               PMB 304
Email: tim@load.com            Fort Wayne, IN 46815
http://www.load.com
