Summary of Auspex Experiences

From: tim robinson (tr@crosfield.co.uk)
Date: Tue Apr 14 1992 - 09:51:44 CDT


I would like to thank all those who took the time to respond to my
request for good or bad experiences with Auspex servers. It was very
apparent that people were pleased with their decisions to go for
Auspex, and as a result it looks like we will be joining them shortly.

Below is a copy of responses received.

-------------------------------------------------------------------------

We have had our Auspex for nearly 3 years now. In fact, our machine was
roughly the 6th one they built for production.

Performance has been very good. Response times consistently average twice as
fast as those of standard Suns serving files (SS-1, SS-2, etc.). I have it
configured with ~10GB of disk (a mix of 663MB, 1.0GB, and 1.3GB drives) and
2 Ethernet processors. I have roughly 60 clients (Sun-3, Sun-4, SS-1, SS-2,
and PCs). Most of the Sun-3s are diskless (~35%); the rest mostly have
local root/swap.

At the time, justification for a new file server wasn't a problem; we knew we
needed something. We looked at the various accelerators, Prestoserve, etc., and
the Sun 4/490. At the time, I felt the 4/490's performance just wasn't
there. On top of that, Sun didn't seem to understand just what it takes to
make a good file server. On the other hand, Auspex had good answers for all
my questions and concerns. Sun even loaned us a 4/490 to evaluate, but it
didn't perform as well. They kept saying it could be tuned better, but
nobody knew how. I didn't have time to wait.

I have not benchmarked against the new MP series, so I can't say.

I routinely see 200 to 300 NFS ops/sec, with peaks to 400 at times. Even
then there is no noticeable degradation in response, so I am sure it can
handle more. I would probably need more Ethernet ports, though (a 3rd
processor is on order now, so I will have 6 ports soon).
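
For anyone wanting to watch this sort of load on their own server, a rough
ops/sec figure can be had by sampling the cumulative NFS call counter at
intervals and taking the difference. Below is a minimal sketch in Python; it
shells out to nfsstat -s, and the way it picks the total out of that output
is an assumption you may need to adapt to your own system.

"""Rough NFS ops/sec monitor (illustrative sketch only, not an Auspex tool).

Samples a cumulative server-side NFS/RPC call counter at fixed intervals
and prints the rate.  The parsing of 'nfsstat -s' output below (a header
line starting with 'calls', followed by a line of numbers whose first
field is the total) is an assumption; adapt it to your system's format.
"""
import subprocess
import time

def read_nfs_calls():
    """Return the cumulative server-side call count reported by nfsstat."""
    out = subprocess.run(["nfsstat", "-s"], capture_output=True,
                         text=True).stdout
    lines = out.splitlines()
    for i, line in enumerate(lines):
        if line.split()[:1] == ["calls"] and i + 1 < len(lines):
            return int(lines[i + 1].split()[0])
    raise RuntimeError("could not find a call counter in nfsstat output")

def monitor(interval=10):
    prev = read_nfs_calls()
    while True:
        time.sleep(interval)
        cur = read_nfs_calls()
        print("%.1f NFS ops/sec" % ((cur - prev) / float(interval)))
        prev = cur

if __name__ == "__main__":
    monitor()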

Service has been very good. The machine has been very reliable, and when there
is a problem, they are very eager to fix it. Sun NEVER gave us that kind of
service.

I am not bad-mouthing Sun; I think they have some good products. I just don't
think that a file server is among them.

On the downside, if you are looking for a compute server, the Auspex isn't
it. Also, the price of extra peripherals like disk and such is kinda high.

Anyway, gotta go. If you want more info, just e-mail me. I can go on for hours
on this subject. I feel from my experiences that I am quite qualified.

Russ Poffenberger            DOMAIN: poffen@sj.ate.slb.com
Schlumberger Technologies    UUCP: {uunet,decwrl,amdahl}!sjsca4!poffen
1601 Technology Drive        CIS: 72401,276
San Jose, Ca. 95110          Voice: (408)437-5254   FAX: (408)437-5246

--------------------------------------------------------------------------

We have only had ours for two months, but I am very happy with it. Currently
it supports 49 workstations (mostly SPARCs) across three subnets, with an
additional connection to the backbone. In the next 30 days about 12 more
clients will be added.

The clients get all of root and /usr from the Auspex, but have local swap and
tmp. Most workstations have local disk for large (several hundred MB) datasets,
but a few do not, and these get data from the server as well. The server has
20GB of SCSI disk, 20MB of its own memory (which is plenty), and a 16MB cache.
We could probably use more of the latter: when someone is accessing data across
NFS, data ages out of the cache within a couple of minutes. The rest
of the time, though, it is fine.

The client roots are spread across 5 separate root partitions on different
disks, to divide the load. The Auspex benefits enormously if you can spread
the load around like this, and it is easy to do with a little advance planning.
We are only using striping on the largest partitions.
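
For what it's worth, the advance planning is mostly a matter of dealing
clients out across the available root partitions so that no single spindle
carries more than its share. A throwaway sketch of the idea in Python (the
partition paths and client names below are invented, not the actual layout):

"""Deal diskless-client root directories out across several root
partitions so the load is spread over different disks.  Purely
illustrative: the mount points and client names are invented."""

root_partitions = ["/export/root1", "/export/root2", "/export/root3",
                   "/export/root4", "/export/root5"]
clients = ["ws%02d" % n for n in range(1, 50)]   # stand-in client names

# Round-robin assignment: client i goes to partition i mod 5.
layout = {}
for i, client in enumerate(clients):
    partition = root_partitions[i % len(root_partitions)]
    layout.setdefault(partition, []).append(client)

for partition in root_partitions:
    print("%s: %s" % (partition, " ".join(layout[partition])))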

We have benchmarked average NFS response at around 28ms, and transfer rates
of 300KBytes/sec (read) and 80KBytes/sec (write). These are extremely good.
It can be compared to our Solbourne 5/801, which with 25 clients (and 40
interactive users) had an average NFS response time of anywhere from 250ms to
700ms. The clients boot very quickly, and we rarely see those "NFS server not
responding" messages. In the 60 days we have had it, it has crashed exactly
once, rebooted immediately (full boot, including processor dumps and fsck of
20GB of disk, took 25 minutes), and they are sending us a patch for the bug
that caused the crash.
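
For a rough check against transfer-rate figures like the ones above, a crude
timed sequential write and read on an NFS-mounted directory gives a ballpark
number. A minimal sketch, assuming /mnt/nfs is an NFS mount on the client
(the mount point and file size are placeholders, and this is nothing like a
proper benchmark):

"""Crude NFS throughput check: time a sequential write and read of a
test file on an NFS-mounted directory.  The mount point and size are
placeholders; treat the result as a ballpark, not a benchmark."""
import os
import time

MOUNT = "/mnt/nfs"            # assumed NFS mount point on the client
PATH = os.path.join(MOUNT, "throughput.tmp")
SIZE_MB = 16
CHUNK = b"\0" * 8192          # 8KB writes, roughly one NFS block

def timed_write():
    start = time.time()
    with open(PATH, "wb") as f:
        for _ in range(SIZE_MB * 1024 * 1024 // len(CHUNK)):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())  # make sure the data really left the client
    return SIZE_MB * 1024 / (time.time() - start)   # KBytes/sec

def timed_read():
    # Note: this figure will be inflated by the client's cache unless the
    # file is bigger than client memory or the cache is flushed first.
    start = time.time()
    with open(PATH, "rb") as f:
        while f.read(len(CHUNK)):
            pass
    return SIZE_MB * 1024 / (time.time() - start)   # KBytes/sec

if __name__ == "__main__":
    print("write: %.0f KBytes/sec" % timed_write())
    print("read:  %.0f KBytes/sec" % timed_read())
    os.unlink(PATH)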

I have not had much contact with sales, since this was obtained through a
formal procurement, but the little I have had has been helpful. Technical
support is also outstanding. It is still such a small company, with only about
200 customers, that you don't even have to give them a serial number or
contact number; they just know who you are. They are also Internet-literate,
which is very nice.

If you have ever seen Guy Harris' postings, you will have some idea of the
quality of their technical people.

Hope this helps.

Ruth.

----
Ruth Milner                          NRAO/VLA                  Socorro, NM
Computing Division Head              rmilner@zia.aoc.nrao.edu

---------------------------------------------------------------------------

Well, we've had an NS5000 with 4 nets and 20GB of disk for about 3 weeks now. Obviously we don't have a lot of experience with it, but so far it has been excellent. The local support people have been good, installation was quick and easy, and all has been as Auspex promised.

Their "perfmon" tool is useful for monitoring the system and seeing how busy it is. Perhaps more useful than vmstat, iostat, etc. on a regular unix box for seeing what NFS is doing on the system.

The thing I am most impressed by so far is write performance with the write accelerator (Sun can probably do something similar with their Prestoserve, but I've never used one of those). The system is very easy to work with - very much a wheel-it-in, turn-it-on-and-use-it sort of thing. There have been no integration problems with our other systems.
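
For what it's worth, the reason a write accelerator pays off so visibly is that NFS (version 2) writes are synchronous: the server has to get each write onto stable storage before it replies, so per-write latency rather than raw bandwidth sets the pace. The little Python experiment below makes the gap visible on any local filesystem by comparing fsync'd writes with plain buffered ones; it illustrates the principle only and says nothing about the Auspex hardware itself.

"""Compare many small writes with an fsync after each (analogous to the
stable-storage guarantee an NFS server must give) against plain
buffered writes.  Illustrative only; run on any local filesystem."""
import os
import time

COUNT = 200
CHUNK = b"x" * 8192

def timed(path, sync_each):
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(COUNT):
            f.write(CHUNK)
            if sync_each:
                os.fsync(f.fileno())   # force each write to stable storage
    os.unlink(path)
    return time.time() - start

if __name__ == "__main__":
    print("buffered writes: %.2f sec" % timed("buffered.tmp", False))
    print("fsync'd writes:  %.2f sec" % timed("synced.tmp", True))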

We have not done any specific benchmarks at this point. We will be doing some application benchmarks within the next week. The ease of use of the Auspex system makes me think that there are other reasons to use it than just raw performance - a Sun server on the same scale would have Prestoserve, NC400s, etc., and would probably be more work to set up and maintain. Note, however, that this is conjecture on my part, as we don't currently have any of the Sun add-ons in use at our site.

If you'd like to discuss this with me, you are welcome to call via voice during my work hours - I don't know if those bear any relationship to your work hours.

-Scott Muir

Western Digital Corporation 8105 Irvine Center Drive Irvine, CA 92718 (714) 932-6764

sunkist!dasun!muir@sun.com

----------------------------------------------------------------------------

From: ereed@auspex.com (Ed Reed)

Today is my 6th day at Auspex. I've been a system/network administrator for 10 years, mostly with DEC and Sun equipment.

I came from an object-oriented database company where I bent over backwards trying to get management to buy into an Auspex. Unfortunately, in their view networks are unreliable and slow, so they purchased a horde of SS2s with 64MB and 1GB of disk, thinking this would cure their networking problems. Sigh.

As far as servers go, I have experience with Sun 4/490s and DEC 5500s, as well as workstation-class machines (SS2, DEC 5000/200, IBM 320, HP 720). I don't know if it makes sense, but my "feel" for the NS5000 is that it is definitely "industrial strength" compared to the Sun and DEC machines. This is reflected in the special man pages for the several functions available on line (SP - storage processor, FP - file processor, HP - host processor). A listing follows.

The network here is all diskless Suns and X terminals. This may be a long-term economic advantage (and benchmark) you may wish to point out. I appreciate this point heavily, as at my last job local disk failures were so frequent that I spent a lot of my time just keeping things up (read: no fun).

All workstations at Auspex have centralized administration. To me this is a big system administrative win I haven't seen since my days at Lockheed. I have yet to notice network performance sag even with concurrent dumps run across the network.

I have yet to do disk striping or disk mirroring, but I am tasked with revising our backup scheme and expect to use these features, which are essential for high-uptime systems.

I could forward more, but I worry it might be construed as hype just because I work at Auspex.

Ed

---------------------------------------------------------------------------

We are also considering the purchase of an NS5000. Please post a summary of your responses. We do not have any benchmarks planned yet. We would like to evaluate a system in-house prior to purchase, but Auspex insists on a P.O. with a 30-day return option. The main advantage we see in such a system is that it would free our existing Sun file/compute servers to be dedicated compute servers. We should also see much improved NFS response without compute jobs competing for resources.

The main objection raised so far is that it tends to be a single point of failure (unless you buy more than one). Right now we have our users distributed across 3 Sun servers, and we are a little nervous about putting them all on one Auspex. Of course, Auspex insists that their machines are extremely reliable, with quick response when there is downtime.

Thanks,

Rod Rebello
titan!rrebello@asuvax.eas.asu.edu
Microchip Technology Inc., Chandler, AZ

---------------------------------------------------------------------------

We brought an NS5000 in here about 6 months ago and are currently serving all the workstations (about 80) in this division of the organization from it. All our workstations are either diskless or have only swap and /tmp on their local disk.

Initially our experiences were "bad", but principally because our existing network had Retix bridges in it which dropped packet trains from the 5000 on the floor. Auspex doesn't violate the spec - it just blasts out the packets, like 6 in a row for NFS, with the minimum-permitted interpacket gap. This can wreak havoc with bridges, some repeaters, and banks of multi-port transceivers that are "deeply" nested. If any of your workstations aren't directly on the Ethernet segment they're served from, the only device I've heard Auspex mention that they know a client can sit on the other side of is a Cisco AGS+ router.

The only other problem we've had with the machine is that we wanted to run an Ikon hyperchannel interface board in it, to interface to an existing hyperchannel net. This board will not work in the Auspex backplane as it will in a standard Sun - we had timeout problems that caused the Auspex to panic, so we had to run this application on a Sun client of the Auspex. In general, Auspex does not recommend running *any* 3rd-party board in their VME backplane, though this was somehow buried in the fine print and we missed that statement before signing on the dotted line.

The bottom-line overall, though, is that we're satisfied with the 5000. If you don't want to slide weird boards into its backplane, it's a good machine.

I don't think one can say that benchmarks are the "reason" to have an Auspex. The convenience of administration when all workstations are served from one place is considerable (we previously served our stations from four different Suns), and the Auspex architecture scales better as you add stations. If your management thinks administration is somehow a "free" thing, then they need some education.

Also, the SCSI/multi-port Ethernet architecture of the 5000 is such that its peak NFS performance still exceeds Sun's state-of-the-art, such as the 690 with NS4000 protocol processors. And Auspex supports virtual partitions (including concatenation, mirroring, and striping) on its disk drives, a very useful feature (it used to be that Sun would support this if you bought a separate product ... don't know if they're still unbundling it or not).
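
As a rough illustration of what striping buys you: a striped virtual partition interleaves fixed-size chunks across its member disks, so consecutive blocks land on different spindles and a long sequential transfer keeps all of them busy. The mapping is simple enough to show in a few lines of Python; this is the generic idea only, not Auspex's actual virtual-partition format.

"""Generic block-address mapping for a striped virtual partition:
logical blocks are interleaved across member disks in fixed-size
stripe units.  Shows the idea only; not Auspex's on-disk format."""

def striped_location(logical_block, num_disks, stripe_unit_blocks):
    """Map a logical block number to (disk index, block within that disk)."""
    stripe_number, offset = divmod(logical_block, stripe_unit_blocks)
    disk = stripe_number % num_disks
    block_on_disk = (stripe_number // num_disks) * stripe_unit_blocks + offset
    return disk, block_on_disk

# Example: 4 disks, 16-block stripe units -- consecutive 16-block runs
# rotate around the spindles, so a long sequential read hits all four.
for lb in range(0, 128, 16):
    disk, block = striped_location(lb, 4, 16)
    print("logical %3d -> disk %d, block %d" % (lb, disk, block))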

There is a possibility that we will eventually replace our NS5000 with an IBM Raven. IBM has licensed most of the Auspex technology and will be bringing out their own version of the 5000. Their machine will have an RS6000 host processor (plus the usual Auspex off-board processors) with two 80MB/sec microchannels. We are considering this because we need the high data rate into the main (host) processor, and a SPARC processor can't provide it.

----------
Ed Arnold * NCAR * POB 3000, Boulder, CO 80307-3000
303-497-1253 (voice) * 303-497-1137 (fax)
era@ncar.ucar.edu [128.117.64.4] * era@ncario.BITNET

---------------------------------------------------------------------------

We're contemplating buying an Auspex to serve up to fifty diskless ELCs. We're currently running a dozen ELCs off a SPARCstation 2, with a 4/470 used to provide additional processing and file service. As we are jumping suddenly to a larger installation, I am planning to bypass the usual mess of small-to-medium servers and go straight to a single, large server. Fifty marks the maximum size for the next few years --- there simply isn't the floor space!

What has been people's experience of Auspex and their gear? We are impressed by the paper and impressed by the people, but there are as yet no installed machines in the UK.

What have people found to be the performance of a 4/692 in the NFS role? Is it much better than a 4/470 (which isn't amazing)?

What about other choices?

In summary: we want to serve fifty diskless workstations plus a few random boxes. We fancy an Auspex but there are other possibilities. What do you think?

ian

---------------------------------------------------------------------------

From: Dave Capshaw <capshaw@asc.slb.com>

We have an Auspex NS5000 and overall I am quite pleased with it. Its most notable feature is that it works: it is up all of the time and offers full performance on each of its Ethernets. I would hate not to have one (i.e. to have to attempt to design an alternative server for NFS clients).

The Auspex folks have a great attitude and solid hardware. Their biggest problem is that they are not perfect yet. There are two outstanding problems that we are chasing: unmountable filesystems and slow mounts from Ultrix systems.

We deal with large files and a significant benchmark for us involves file write rate: with async writes the Auspex server and a Sun SS2 client are limited by the Ethernet (when writing to striped disks).
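
To put "limited by the Ethernet" in rough numbers: 10 Mbit/s Ethernet is about 1.25 MB/sec before any protocol overhead, and somewhat less once IP/UDP headers and Ethernet framing are paid for, so a write path that saturates the wire is about as good as a single Ethernet client can get. A back-of-the-envelope sketch in Python (the overhead figures are approximations and ignore RPC/NFS headers and protocol turnaround):

"""Back-of-the-envelope ceiling for NFS data throughput on 10 Mbit/s
Ethernet.  Header sizes are standard; RPC/NFS headers and turnaround
time are ignored, so treat the result as an optimistic upper bound."""

ETHERNET_BITS_PER_SEC = 10 * 1000 * 1000
NFS_BLOCK = 8192                      # typical NFS transfer size (bytes)

FRAME_PAYLOAD = 1500                  # Ethernet MTU
IP_UDP_HEADERS = 28                   # 20-byte IP + 8-byte UDP (approx.)
PER_FRAME_OVERHEAD = 18 + 8 + 12      # header/CRC + preamble + interframe gap

# An 8KB UDP datagram fragments into roughly 6 Ethernet frames.
frames = -(-NFS_BLOCK // (FRAME_PAYLOAD - IP_UDP_HEADERS))  # ceiling division
wire_bytes = NFS_BLOCK + frames * (PER_FRAME_OVERHEAD + IP_UDP_HEADERS)

ceiling = ETHERNET_BITS_PER_SEC / 8.0 * (float(NFS_BLOCK) / wire_bytes)
print("rough ceiling: %.0f KBytes/sec" % (ceiling / 1024))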

Dave

--
Tim Robinson                         | Tel: +44 442 230000 Ext 3850
Crosfield Electronics Ltd            | Fax: +44 442 232301
Hemel Hempstead, Herts, HP2 7RH, UK  | Email: trobinson@crosfield.co.uk
================================================================================


