SUMMARY: Best way to connect A3500FC to E5500?

From: Mike Robertson <mikerobertson_at_hantover.com>
Date: Mon Oct 22 2001 - 09:40:48 EDT
Hi all,

Original question:

> I'm new to Sparc systems and getting ready to buy a couple of E5500's
> and A3500FC's fully loaded in a 2 x 7 configuration.  The VAR I'm
> dealing with is suggesting that the best way to connect the E5500 to the
> A3500FC is by adding two GBICs to the SBus I/O board.  I thought PCI
> was faster than SBus, so I am wondering if using an FC-100 attached to a
> PCI I/O board would be faster.  I'm also concerned that there may be a
> possible bottleneck with all the I/O from the A3500FC coming through one
> I/O board.

Christopher Ciborowski had some interesting points about possible SBus
I/O board bottlenecks:

If you are using the on-board GBIC slots on the SBus I/O board, the
bottleneck depends on what else you have in the slots on that I/O board.
There are two SYSIO chips: one (SYSIO-0) serves SBus slots 1 and 2 plus
the SOC+ connections, and the other (SYSIO-1) serves SBus slot 0 and the
FEPS.  Keeping in mind that you can get close to 70% of the bandwidth
with SOC+, and that each SYSIO controller can handle 200 MB/sec maximum
(peak bandwidth), using only the two onboard SOC+ ports won't hurt you
(140 MB/sec).

However, a GigE card in SBus slot 1 (~70 MB/sec) plus the two onboard
SOC+ ports (~140 MB/sec) totals ~210 MB/sec, which is over what the
controller can handle.  Remember that this is peak bandwidth and that you
may not get near the 200 MB/sec maximum; you may only see 70 MB/sec on
each SOC+, and if there is no other I/O on the board there is nothing to
worry about.  Capacity plan according to how the application will access
the disk and any other I/O requirements on the board.
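
To put Christopher's numbers together, here is a minimal sketch of that
SYSIO bandwidth budget in Python; the figures are the rough peak and
realistic values quoted above, not measurements:

    # Rough SYSIO bandwidth budget (figures from the discussion above).
    SYSIO_PEAK_MB_S = 200  # peak bandwidth per SYSIO bridge

    def sysio_budget(devices):
        """Sum the expected throughput of the devices on one SYSIO and
        report the remaining headroom against its 200 MB/s peak."""
        total = sum(devices.values())
        return total, SYSIO_PEAK_MB_S - total

    # SYSIO-0 with only the two onboard SOC+ FC-AL ports (~70 MB/s each).
    print(sysio_budget({"soc+_a": 70, "soc+_b": 70}))
    # -> (140, 60): comfortably under the peak

    # SYSIO-0 with a GigE card added in SBus slot 1 (~70 MB/s).
    print(sysio_budget({"soc+_a": 70, "soc+_b": 70, "gige_slot1": 70}))
    # -> (210, -10): oversubscribed at peak, though rarely in practice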

A common theme was that I should use two SBus I/O boards regardless of
performance for redundancy reasons.

David Bader wrote:

Two GBIC's to the SBUS I/O board is one option, only there are a few
issues
to keep in mind.  Redundancy... you lose the board, and you lost the
host
connection for all machines.  Normally I would dual loop two SBUS I/O
boards
to the FC controller for redundancy.  But remember without a
switch/multiple
host attachment you are at the mercy of the throughput on the 5500 or
whichever
machine you front end the array with..
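
The redundancy point is easy to see with a small sketch.  This models
hypothetical host-to-controller loops (the board and controller names are
made up for illustration) and checks whether any single I/O board failure
would cut off the array:

    # Hypothetical loop map: which SBus I/O board carries a loop to which
    # A3500FC controller.  Names are illustrative only.
    def survives_single_board_failure(paths):
        """True if, for every board, at least one loop remains on some
        other board after that board fails."""
        boards = {board for board, _ in paths}
        return all(any(b != board for b, _ in paths) for board in boards)

    single_board = [("io_board_0", "ctrl_A"), ("io_board_0", "ctrl_B")]
    dual_board   = [("io_board_0", "ctrl_A"), ("io_board_1", "ctrl_B")]

    print(survives_single_board_failure(single_board))  # False: one board is a SPOF
    print(survives_single_board_failure(dual_board))    # True: loops split across boards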

Todd Nugent had a nice comment about  FC-AL performance differences:

When I did something similar with a pair of E3500s and A5100s, I put two
I/O boards in the 3500s, but my main motivation was redundancy in case
one I/O board failed.  Of course, it turned out that it did not initially
support multipath, but now it does and the I/O shares nicely.

The two FC-AL interfaces on the SBus I/O board are built in and
outperform SBus FC-AL cards, so I assume they are basically wired into
the Gigaplane.  While I don't have any benchmarks to prove this, I would
expect the built-in FC-AL on the SBus I/O boards to outperform a PCI card
plugged into one of the PCI card-holder boards.  If you are not used to
designing FC-AL networks, be sure to have someone check over your
configurations.  Many configurations which are not a good idea will still
work, but they will make your life difficult.

David Bader also had this comment about FC-AL and why you might not
realize the full speed advantage over SCSI:

Speed/throughput: this was your biggest issue.  Keep in mind that back
then FC-AL was used for distance, not throughput.  By the way, you have a
SCSI-based disk array, so the drive itself is actually a bottleneck if
you think about it.  I always look at it like this: Ethernet to backplane
to Fibre/FC/SCSI to RAID controller to disk and back again.  You can fill
in the transport speeds and see that your bottlenecks can occur in many
places.
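
To make that chain-of-transports view concrete, here is a minimal sketch;
the speeds are illustrative placeholders, not measurements of this
particular configuration:

    # Illustrative end-to-end path: the slowest hop sets the ceiling.
    # All figures are rough placeholder values in MB/s.
    path_mb_s = {
        "ethernet (GigE)":  70,   # realistic gigabit Ethernet throughput
        "host backplane":   200,  # SYSIO/SBus peak from the discussion above
        "FC-100 loop":      100,  # 1 Gbit/s Fibre Channel
        "RAID controller":  80,   # assumed controller throughput
        "single SCSI disk": 40,   # a lone drive, roughly
    }

    bottleneck = min(path_mb_s, key=path_mb_s.get)
    print("Bottleneck: %s at ~%d MB/s" % (bottleneck, path_mb_s[bottleneck]))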

Thanks to all who responded.  I'm learning a great deal on this list
already, and I just subscribed two weeks ago.  For now I have decided to
use two SBus I/O cards with one GBIC in each one.

Mike Robertson
Senior Analyst
Hantover, Inc.
Kansas City, MO