SUMMARY: Best way to connect A3500FC to E5500?

From: Mike Robertson <>
Date: Mon Oct 22 2001 - 09:40:48 EDT
Hi all,

Original question:

> I'm new to Sparc systems and getting ready to buy a couple of E5500's
> and A3500FC's fully loaded in a 2 X 7 configuration.  The var I'm
> dealing with is suggesting the best way to connect the E5500 to the
> A3500FC is by adding two gbic's to the sbus i/o board.  I thought pci
> was faster than sbus, so I am wondering if using a FC-100 attached to
> PCI I/O board would be faster.  I'm also concerned that there may be a
> possible bottleneck with all the i/o from the A3500FC coming through the
> I/O board.

 Christopher Ciborowski had some interesting points about possible SBus
board bottlenecks:

If you are using the on-board GBIC slots on the SBus I/O board, the
bottleneck depends on what you also have in the slots in the I/O board.

There are two SYSIO chips: one (SYSIO-0) serves SBus slots 1 and 2 plus
the SOC+ connections, and the other (SYSIO-1) serves SBus slot 0 and the
FEPS.  Keeping in mind that you can get close to 70% of the peak
bandwidth in practice, and knowing that each of the SYSIO controllers
can handle 200MB/sec (peak bandwidth), using only the 2 onboard SOC+
ports won't hurt you.

However, a GigE card in SBus slot 1 (~70MB/sec) plus the 2 onboard SOC+
ports (~140MB/sec) totals ~210MB/sec, which is over what the controller
can handle.  Remember that this is peak bandwidth: you may not get the
200MB/sec maximum, you may only see 70MB/sec on each SOC+, and with no
other I/O on the board there is no need to worry about the bandwidth.
Plan capacity according to how the application will access the disk and
any other I/O requirements on the board.
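Christopher's arithmetic can be sketched as a quick per-controller check.
The figures below (200MB/sec peak per SYSIO, ~70MB/sec sustained per
interface) are the ballpark estimates from his note, not measured values:

```python
# Rough per-SYSIO bandwidth check, using the ballpark figures from the
# note above (not measurements): each SYSIO controller peaks at
# ~200MB/sec, and each interface sustains roughly 70% of its rated
# speed, i.e. ~70MB/sec for a 100MB/sec SOC+ or GigE link.

SYSIO_PEAK = 200  # MB/sec, peak bandwidth per SYSIO controller

def sysio_load(interfaces):
    """Sum the expected sustained load (MB/sec) on one SYSIO controller."""
    return sum(interfaces.values())

# SYSIO-0 serves SBus slots 1 and 2 plus the two onboard SOC+ ports.
only_soc = {"SOC+ A": 70, "SOC+ B": 70}
with_gige = {"SOC+ A": 70, "SOC+ B": 70, "GigE in slot 1": 70}

for name, cfg in [("onboard SOC+ only", only_soc), ("plus GigE", with_gige)]:
    load = sysio_load(cfg)
    status = "ok" if load <= SYSIO_PEAK else "over peak"
    print(f"{name}: {load}MB/sec vs {SYSIO_PEAK}MB/sec peak -> {status}")
```

Two SOC+ ports alone come in at ~140MB/sec, under the 200MB/sec peak;
adding the GigE card pushes the same controller to ~210MB/sec, over it.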

A common theme was that I should use two SBus I/O boards regardless of
performance for redundancy reasons.

David Bader wrote:

Two GBIC's to the SBUS I/O board is one option, only there are a few
things to keep in mind.  Redundancy: lose the board, and you lose the
connection for all machines.  Normally I would dual loop two SBUS I/O
boards to the FC controller for redundancy.  But remember that without a
second host attachment you are at the mercy of the throughput on the
5500 or whatever machine you front end the array with.

Todd Nugent had a nice comment about FC-AL performance differences:

When I did something similar with a pair of E3500s and A5100s, I put two
SBus I/O boards in the 3500s, but my main motivation was redundancy in
case one board failed.  As it turned out, it did not initially
multipath, but now it does and the I/O shares nicely.

The 2 FC-AL interfaces on the SBUS I/O board are built-in and outperform
SBUS FC-AL cards, so I assume they are basically wired into the
gigaplane.  While I don't have any benchmarks to prove this, I would
expect the built-in FC-AL on the SBUS I/O boards to outperform a PCI
card plugged into one of the PCI card holder boards.  If you are not
used to FC-AL networks, be sure to have someone check over your
configurations.

Many configurations which are not a good idea will still work, but make
life difficult.

David Bader also had this comment about FC-AL and why you might not see
the full speed advantage over SCSI:

Speed/throughput: this was your biggest issue.  Keep in mind that back
when it was introduced, FC-AL was used for distance, not throughput.  By
the way, you have a SCSI-based disk array, so the drive is actually a
bottleneck if you think about it.  I always look at it like this:
Ethernet to backplane to FC-AL to RAID controller to disk and back
again.  You can fill in the speeds and see that your bottlenecks can
occur in many places.
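David's chain argument is that end-to-end throughput is capped by the
slowest stage in the path.  A minimal sketch, where the stage names
follow his chain but the MB/sec figures are illustrative placeholders
rather than measurements of an E5500/A3500FC:

```python
# End-to-end throughput is capped by the slowest stage in the I/O path.
# Stage names follow the chain above; the MB/sec figures are made-up
# placeholders for illustration, not measured numbers.

io_path = {
    "Ethernet": 100,
    "Backplane": 200,
    "FC-AL loop": 100,
    "RAID controller": 90,
    "SCSI disk": 40,
}

bottleneck = min(io_path, key=io_path.get)
print(f"Bottleneck: {bottleneck} at {io_path[bottleneck]}MB/sec")
```

With these placeholder numbers the SCSI drive is the limiting stage,
which echoes his point that a fast FC-AL front end still sits in front
of SCSI disks.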

Thanks to all who responded.  I'm learning a great deal from this list,
and I only subscribed two weeks ago.  For now I have decided to use two
SBus I/O boards with one GBIC in each one.

Mike Robertson
Senior Analyst
Hantover, Inc.
Kansas City, MO
Received on Mon Oct 22 14:40:48 2001
