SUMMARY: Hardware recommendations for cluster (multi-OS)

From: <Frank.Olsen@stonesoft.com>
Date: Mon Mar 25 2002 - 12:18:36 EST
Hi,

Finally, here's the summary of my request. The full text is quoted at the
end of this mail. Basically, I wanted advice on shared storage for
clusters, with cost as the main factor.

The following people responded:

- Al Hopper (Logical Approach Inc.)
- Ben Tierney (Security Computer Sales)
- Bertrand Hutin (Amdahl)
- David Evans (Oracle)
- Ned Schumann (Olympus)
- Steve Camp (Camp Technologies, LLC)
- Tim Chipman (Ecopiabio?)
- Timothy Lorenc (Lorenc Advantage Inc.)

Thanks to all of you, some of whose patience got severely tested...!

(Apparently, hardware vendors don't read these lists... Also, I don't know
who replied to sunmanagers and who replied to linuxmanagers.)


S C S I

The most basic conclusion was that if cost is the main concern, I should
stay with SCSI.

That was the advice from Timothy Lorenc who told me to get quotes from
companies doing refurbished Sun hardware: www.solarsys.com and
www.redapt.com. I had a brief look at the first of these companies and they
do ship to Europe. (Still, shipping costs/support will probably prevent me
from ordering from them and other US companies.)

Steve Camp also told me to stay with SCSI, but to get Sun HW from eBay:
A1000 $800 - $2,500 and D1000 $400 - $2,500. He recommended QLogic SCSI
HBAs for compatibility with the OSs I mentioned in my request. (He also
gave some eBay prices for FC equipment that I'll discuss below.)

Tim Chipman suggested RAID boxes from Winsys because they have 2-6 way SCSI
buses, which is nice for a cluster setup. When I asked about the prices for
such boxes -- assuming that HW RAID would be outside my budget -- he replied
that prices probably started at $10k for a refurbished array with new disks
(thus confirming my assumption). He then mentioned Promise as one company
doing IDE arrays with RAID controllers that make the IDE disks appear as
SCSI LUNs to the clients. Prices could get as low as $2k for a bare 8-bay
array.
This sounds like something to consider further.
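
As an aside, "appear as SCSI LUNs" means such a box shouldn't need any
special drivers at all -- each node just sees plain disks. A quick sanity
check on a cluster node, as a sketch (nothing vendor-specific assumed):

    # Linux: the exported LUNs should show up as ordinary SCSI disks
    cat /proc/scsi/scsi

    # Solaris (7 or later): rebuild device links, then list the disks
    devfsadm
    format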

David Evans also told me to stick with SCSI. He mentioned Sun as doing
expensive but reliable hardware, and NTPA and EMC for more extensible
solutions. For Linux RAID he pointed to http://www.tdl.com/~netex, which I
haven't checked in detail due to the issue of shipping to Europe. From a
quick look at the price list I note that their solutions are based on Mylex
PCI-based internal RAID controllers. This may not work in a cluster setup.
Also, they suggested different Mylex controllers for Linux as opposed to
Solaris, thus violating one of my requirements -- to use the same
HBA/controller for the different OSs. For price, he told me to go for IDE
RAID rather than SCSI. For Sun boxes he suggested getting a cheap enclosure
with software RAID. When prompted he told me to use a PC case as an enclosure
or go to Fry's. Again, I'm in Europe, so I'd have to find a local
alternative to Fry's. I mentioned that the StorEdge D1000 has two separate
SCSI chains and David told me that it should be possible to do this with
cheap enclosures as well. We also started a discussion on the multipath
support in various OSs/kernels that would support such setups, but that is
probably getting out of scope for what the issue was here.
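
Incidentally, the two-chain setup is easy to exploit with Linux software
RAID: mirror one disk on each chain so that losing a chain doesn't lose
the data. A minimal sketch with mdadm (raidtools would do the same job);
the device names /dev/sdb1 and /dev/sdc1 are assumptions on my part:

    # RAID-1 mirror across the two SCSI chains (one disk per chain)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

    # watch the initial resync, then put a filesystem on the mirror
    cat /proc/mdstat
    mke2fs /dev/md0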

Ben Tierney told me this (sunmanagers?) was the right list for my question.
Again, he only suggested SCSI, but told me to use the D1000 instead of the
A1000 since I could use SW RAID with the D1000 for a cheaper solution. (He
could also give me better prices on StorEdge than SDC... :-)
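
For completeness, SW RAID on a D1000 under Solaris would be Solstice
DiskSuite. A minimal mirror spanning the two chains might look like the
sketch below; the slice names are made up, and the state databases must
exist before anything else:

    # DiskSuite needs state database replicas first
    metadb -a -f -c 3 c0t0d0s7

    # one submirror per SCSI chain, then attach the second to the mirror
    metainit d10 1 1 c1t0d0s0
    metainit d20 1 1 c2t0d0s0
    metainit d0 -m d10
    metattach d0 d20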


F I B R E  C H A N N E L

Some people did suggest going for FC and I exchanged several mails with
some of them to clarify whether it was possible to get a FC setup at a
"reasonable" cost. (Mind you, one FC vendor I contacted told me that
"cheap" and "SAN" shouldn;t be used in the same sentence.

The main proponent for FC was Al Hopper and I exchanged a series of mails
with him on this. Thanks Al! His main argument was better performance --
35k IOPS versus 6k IOPS -- and simpler cabling than SCSI -- serial
rather than parallel and no termination problems. For specific products he
suggested QLogic 2200 HBAs for Linux and 2300 for Solaris. The 2200 runs at
1 Gb/sec while the 2300 runs at 2 Gb/sec. The 2300 doesn't have Linux
drivers and thus does not meet my requirements. The 2200 is approx. $800-900.
The next step would be HW RAID controllers -- IBM DF4000R "ProFibre Storage
Array" based on Mylex controllers. I didn't get a price for these but HW
RAID is surely outside my budget.
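
On the driver side for the 2200 under Linux: the stock 2.4 kernels ship a
qlogicfc driver for the ISP2100/2200 chips, and QLogic distributes its own
qla2x00 driver, so the module name depends on which one you build. Roughly:

    # load the FC HBA driver (module name varies with the driver chosen)
    modprobe qlogicfc

    # disks on the loop should then turn up as ordinary SCSI devices
    dmesg | tail
    cat /proc/scsi/scsi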

For my specific requirements, performance isn't the issue -- rather I'm
interested in FC for extensibility to more than two cluster nodes and for
ease of cabling as Al said. My idea was to get a cheap FC JBOD or just
loose disks and to hook the JBOD/disks onto a FC-AL loop with multiple
cluster nodes. He replied that I'd probably need a FC hub for that to work
since they include by-pass circuitry to deal with any broken devices on the
loop, although he wasn't sure whether a cluster node going down would break
an FC-AL loop without a hub in all cases.  For FC hubs he didn't recommend
any particular brand, but only urged me to get second-hand ones -- and
upgrade to a switch once budget permits -- and to keep spare GBICs as
replacement. Concerning dual-loop setups for reliability in clusters he
said that for cost reasons it would be better to avoid it, in particular
since FC was already reliable. For FC JBODs he suggested Trimm enclosures
as the best deal. I'm not sure how JMR will continue the Trimm line now
that they've bought them up. Currently I've got quotes (in Europe... ;-( )
for Trimm and JMR Fortra at fairly reasonable prices, around $2-4k
depending on the number and kinds of disks. I'd be keen to just get an
empty enclosure and then fill it up as needed with slow, low-capacity
(refurbished) disks, but I haven't found any company in Europe doing that.
eBay seems to have good prices on FC disks -- I saw some 9 GB Seagate disks
at around $10!

Coming back to my question about using "loose" FC disks, he initially
dismissed this due to noise and shielding issues, at least for anything
more than a single disk setup. However, when I showed him the solutions for
building your own FC JBOD from www.cinonic.com and others he did revise
that conclusion somewhat. He saw Cinonic as having a good solution for
shielding the cabling itself, but that the disk adaptor might still
introduce noise. (The Trimm JBOD he suggested avoids this.) When I
contacted Cinonic, they told me that I'd need FC hubs that do retiming of
signals. Al said that, generally speaking, the cheapest hubs don't do
retiming, at least not those from Emulex. The more expensive Emulex LH5000 does
retiming/reclocking on all ports. THESE "DIY" FC JBODs SOUND INTERESTING --
I'D APPRECIATE ANYONE'S EXPERIENCE WITH THEM!

(I have found a few other entry-level FC JBODs, but I won't mention them
here since I never got a quote from the manufacturer... `land-5' used to do
FC JBODs but have now moved to NAS only since that is far cheaper and
more in demand.)

Al also told me to avoid FC-to-SCSI bridges since they wouldn't help me get
a cheaper solution anyway, just more hassle.


Bertrand Hutin led me to Atto FC HBAs. My current best quote for a FC setup
here in France is for Atto HBAs and an Atto FC hub. Bertrand has only used
Atto HBAs on Windows and couldn't tell me how they work under other OSs.

Steve Camp again told me to check out eBay for FC equipment, mentioning
QLogic FC HBAs at $150-400 and FC switches at $2k-6k.


i S C S I

Ned Schumann told me that iSCSI might be the way to go, although I'd have
to wait for a while and performance wouldn't be very good at the start. The
most expensive part of an iSCSI setup is a GigE switch, but it's an
interesting
solution for upgrading an existing DAS array to a SAN configuration. He did
give me this link --
http://www.intel.com/network/connectivity/products/iscsi/index.htm -- to
Intel's iSCSI adapters being released these days. (So far, I haven't
investigated iSCSI any further.)
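
I haven't tried any of this, but the model itself is simple: SCSI commands
over TCP (port 3260), where an initiator logs in to a target and the
target's LUNs then appear as local SCSI disks. As a rough sketch of an
initiator-side session (the tool syntax and the target address here are
assumptions on my part, not something I've tested):

    # discover the targets exported by the storage box
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10

    # log in; the target's LUNs then show up as local SCSI disks
    iscsiadm -m node --login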

Tim Chipman also mentioned iSCSI, but told me that few real products
are available at the moment. 3ware are shipping a product, but apparently
it's not compliant with the specification.


OK, thanks to all of you!

Regards,
Frank

>From: CN=Frank Olsen/O=Stone
>Date: 03/04/2002 03:44:18 PM
>Subject: Hardware recommendations for cluster (multi-OS)
>
>Hi,
>
>(Am I the first person to post to both sun- and linux-managers? Hope you
>don't mind...)
>
>I want to set up a cluster where as much as possible of the hardware can
>be shared between multiple OSs. (But not the servers ;-) Basically, it is
>a cluster for HA with a shared disk array, although I may be interested
>in doing clustered file systems and such. The main factor of choice is
>cost, which is why I'd like to use the disk array from several platforms.
>I don't need to access the disk array from multiple platforms
>simultaneously. If possible, I'd also like to cut costs further by using
>the same HBAs on multiple platforms. Of course, this being a cluster I
>need a disk array that has dual attachment so that I can connect two
>hosts. Since this will be used mainly for testing/development, I don't
>need max. speed. Given the cost factor I might do software RAID rather
>than hardware. The OSs I will use are Solaris, Linux, and (possibly,
>grudgingly) Windows.
>
>At the moment I've got some quotes from Sun where I can get a StorEdge
>A1000/12x36G at ~$9,500 or a D1000/12x18G at ~$6,000. What I'd like to
>know is whether it is possible to find anything cheaper than Sun
>hardware? At least for the HBAs I'll get something other than Sun -- I've
>looked at QLogic and LSI.
>
>One issue is whether to get a SCSI or a FC setup. Given the tight budget
>I'm likely to go for SCSI, but I saw recently that FC is getting there in
>terms of cost; e.g., there was a "SAN connectivity" package from QLogic
>at $10k. Now, even $10k may be outside my budget, but maybe it is still
>possible to get a basic FC setup without an FC switch? Is there a
>reasonably priced disk array (probably JBOD) out there with FC
>connections, a FC hub, and FC HBAs?
>
>In advance, thanks for your help!
>
>Regards,
>Frank Olsen
>
>PS If hardware recommendations are outside the scope of this mailing
>list, could you at least refer me to some lists where such issues are
>discussed?