Finally! A summary of the responses I received
regarding relative performance of standalone and
swapful SPARCstation configurations.
Here was my request:
> Subject: SS-1+ performance: swapful vs. standalone
> Hi, everyone --
> My group of three Sun administrators is faced with the
> task of installing 30 new SS-1+'s at our existing
> site of ~200 Suns (servers/clients, mostly). We are
> trading in older models for these SS-1+'s. Our existing
> site has 14 servers, 4 3/280s and 10 4/280s, all of
> the relevant ones running 4.1. We have one or two
> servers per lan, and, in all but one case, at least
> one 4/280 per lan. The backbone is now ethernet but
> we are testing fddi.
> Now, for my question:
> We can install these new SS-1+s either swapful or
> standalone. We see a potential administrative headache
> if we go standalone, but we also understand that there
> may be a performance hit if we go swapful.
> Has anyone out there benchmarked these scenarios? Gut
> feelings won't help us here -- management can't put
> intuition on a viewgraph :^). We're trying to find out if
> the increased administrative overhead might be balanced
> by a dramatic increase in performance if we go
> standalone. Dramatic means 20% or more.
> I realize that any numbers I get will be relative
> to the type of work done on the machines, the amount
> of swap on each client, configuration of the network,
> etc. I'm willing to take whatever I can get.
> We just don't have the man- (woman-?) power or the
> time to perform these benchmarks. Any numbers at
> all would be helpful -- we're really divided on this.
> I know that standalones perform better -- the question
> is, how much better? Is it worth the hassle?
The definitions of "swapful", "dataless", and "standalone"
varied slightly, but, on the whole, responses were helpful.
Here's the score:
standalone: 3 votes
swapful: 3 votes
dataless: 8 votes
swapful or dataless: 2 votes
where standalone = local /, swap and /usr.
swapful = local swap.
dataless = local / and swap.
(Yes, Liz, you did reply twice, but I only counted your "vote"
once. :-) Thanks for both replies!)
Although I asked for numbers, there simply aren't many out
there. Most people gave me intuitive reactions, based on
experience and some limited testing.
It seems that dataless has yielded the best results for
most people. Liz Coolbaugh noted that nfswatch showed that
the majority of network traffic was talking to /export/root
and /export/swap on her diskless clients, so putting those
on local disk dramatically reduced traffic. Ed Morin ran
tests on a 3/80, comparing standalone against completely
diskless, and measured less than a 20% performance improvement.
His network contains 100+ workstations with heavy subnetting.
It seemed that the people who felt that dataless was
best thought that /tmp (or /var) was the major source of traffic
to /. Some people suggested using tmpfs to alleviate this.
Since we don't tend to load up on memory, that's not feasible for us.
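(For those who do have the memory headroom, the tmpfs suggestion
is a one-line change under SunOS 4.1. A sketch of the /etc/fstab
entry -- options per local policy:

```
# /etc/fstab entry mounting /tmp as tmpfs under SunOS 4.1
# (tmpfs pages to swap, so its size is bounded by memory plus swap)
swap    /tmp    tmp     rw      0 0
```

The mount takes effect at the next boot, or immediately with
"mount /tmp".)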
The people who suggested standalone said that writing
scripts to blow / and /usr onto local disk was a trivial
task. Rdist was a common suggestion.
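(For reference, a minimal rdist Distfile for keeping standalones'
/usr in sync looks something like this -- the host names here are
illustrative, not from any of the responses:

```
# Distfile sketch: push the master copy of /usr to standalone clients
HOSTS = ( client1 client2 client3 )
FILES = ( /usr )

${FILES} -> ${HOSTS}
        install ;
```

Run as root on the master with "rdist -f Distfile".)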
We have decided, at least for now, to go with swap and /tmp
on local disk and everything else served. This works well
for us because we have users who run applications that require
mounds and mounds of swap and others that require huge
amounts of /tmp. We have quite large /export/root file systems
on our servers (for historical reasons), so that's not a problem
either. With this configuration, we're able to give
our users 125MB of swap and 75MB of /tmp. They seem to be
happy so far.
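(For the curious, the local-disk half of this scheme looks roughly
like the following client /etc/fstab fragment. Device names are
illustrative; / and /usr still come off the server:

```
# local-disk portion of a client /etc/fstab under our scheme
/dev/sd0b    swap    swap    rw    0 0    # 125MB local swap partition
/dev/sd0d    /tmp    4.2     rw    1 2    # 75MB local /tmp (4.2 = UFS)
```

"swapon -a" picks up the swap entry at boot.)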
Many, many thanks to all who responded:
David LeVine firstname.lastname@example.org
Eduardo Krell email@example.com
Bryan McDonald firstname.lastname@example.org
Steve Hanson email@example.com
Frank Greco firstname.lastname@example.org
Liz Coolbaugh email@example.com
Keith McNeill firstname.lastname@example.org
Ed Morin email@example.com
Frank Kuiper firstname.lastname@example.org
Joe Garvey sumax!ole!johnny5!garvey
Rick Summerhill email@example.com
If you're interested, I can e-mail the actual responses I got to anyone
who asks for them. They amount to 500+ lines, so I won't post
them to the list.
AT&T Bell Labs, Allentown, PA
This archive was generated by hypermail 2.1.2 : Fri Sep 28 2001 - 23:06:11 CDT