Here is a summary of the responses that I received regarding
optical jukebox drivers.
Before I list all of the responses, I would like to mention that
there is an interesting article in this month's issue of SunExpert
on optical jukeboxes, plus a nine-page listing of optical disks
containing all of the info you need regarding the subject.
There are also a few articles in the June 1991 issue of SunWorld
that talk about optical jukeboxes and the drivers that accompany
them. The articles are "Rewritable Optical Technologies" which
contains technical information on optical disks, and "Out for a Spin"
which contains information on device drivers.
I strongly suggest reading these articles, since they will
point you in the right direction for choosing the proper configuration
based on your needs, especially the articles in SunExpert.
From reading the different responses and talking to some software
engineers and sales representatives, I basically realized that there
are many different components that have to be considered before
deciding on the hardware and software that you may need. It is
strongly suggested that you try to get access to a demo of the
products that you are considering. Since I sent my original posting,
a few vendors called me and let me telnet into their system and
experiment a little with their software, which was very helpful!
Beginning list of some responses that I received:
================================================================================
I have some information on optical jukeboxes for you that you may find useful.
The products are:
1) Zetaco's TOFS
Not the highest quality group in the world. We used to work with them
and ended the relationship quickly. From what I hear, they went
bankrupt and regrouped, and have since come out with TOFS. I have
evaluated their product, and it is inferior to many of the other
optical products on the market.
2) R-Squared's Infinity
Good quality company. I like their stuff. I think they are a
little behind the leaders, and I didn't see any way to do backups
with their stuff - but the file system on optical is a good start.
3) AAP's AMASS.
I don't have any real good info on them. I would love to hear what they
have if you could send me any info.
I'd also like to add a 4th company to your list. I work for AT&T
CommVault which provides optical products to do data management, backup
and recovery, and version management (among other things). We support
the 10 and 20 GB HP jukeboxes on SunOS, and many other UNIX platforms.
We've been around for about 4 years. We're based in NJ, and have
offices all over the US. I'd be happy to share more info with you
if you like, in addition to talking more about the above vendors.
---
Scott Barnett, Director of Development
AT&T CommVault Systems
Rm 2E-116, 185 Monmouth Parkway, West Long Branch, NJ 07764
Phone: (908) 870-7008   FAX: (908) 870-7579
email: firstname.lastname@example.org
I have no information about these products.
We have a 20GB Epoch Infinite File Server which we have been very happy with, serving Suns and HP Snake workstations.
I have used several Pinnacle Micro optical drives (not jukes) and have been happy with them. They sell 10-disk Sony jukeboxes as well.
Just thought you might be interested.
--Jim Dempsey-- email@example.com
Another package to consider is the Epoch file server package. Currently I have it on some Epoch hardware, but it is a very reasonable package with a good backup system and user interface. It has also been around longer than any of the other 3 that you mention so lots of the bugs have been worked out as to how to deal with opticals and staging of files.
This is about the third time I've replied to this sort of request. I should make up a canned document and just send it off as necessary. That hasn't been done yet, so I'll just make this up extemporaneously.
I have R-Squared's Infinity File System (IFS) running on a smaller (6.5 GB) IDE jukebox. The type of hardware doesn't really seem to make much difference.
One thing that surprised me greatly is that IFS is a repackaged version of AMASS. They have re-typeset the manual to remove references to AAP, but the scripts used for installation and maintenance all have AAP copyright notices.
I've spent a lot of time on the phone with R-Squared technical support in Colorado. They have one guy who knows IFS, and he's very good. A couple of times I've been able to stump him, and he had to call AAP to get the word.
First, the good stuff.
* IFS is a real product, and it works almost all the time. No vapor here.
* You can have filesystems that span multiple physical volumes, and the software takes care of mounting the proper disk when needed. There are time delays, but only a few seconds.
* The mounted filesystems are NFS-exportable. It's all transparent.
Being a natural pessimist, I remember the negative aspects better than the positive. Here's a selection.
* Installation is not easy. It took me about a day to go through the whole procedure. I like to think I'm pretty sharp, but I messed up in a fundamental way. It took a lot of phone consulting to get it straight. Maybe the manual has been improved. You need two SCSI addresses: one for the drive and one for the jukebox. IFS requires an unformatted hard disk partition (50 MB in my case) for use as a cache. Not having one lying around, I had to reformat one of my disks.
* NFS access is much slower than direct access on the server. It also causes a pretty heavy computational load on the server.
* One platter in the jukebox is dedicated to backing up the filesystem information. In a 10-disk jukebox that's a 10% capacity reduction that I wasn't expecting.
* There are software bugs, which are constantly being fixed. I was bitten by a couple of nasty ones in 3.0.1 which seem to be cured in 3.0.2. I know of two remaining:
  1. Sometimes executing a binary program from the jukebox causes a kernel panic.
  2. You can't create hard links. (Soft links are OK.)
* When something particularly strange happens to the hardware or SunOS, IFS can lose track of which platter is where. It doesn't have a good way of recovering from this. A couple of times I have had to remove the outer box from the jukebox just to find out which disk is inserted. They should sell them with a plexiglas window.
* The filesystem on the disks is not standard UFS. This means that you can't exchange disks with anyone else. Supposedly they are working on a utility that allows you to read and write UFS disks, but I haven't seen it yet.
From: pln@egret1.Stanford.EDU (Patrick L. Nolan)
> My main problem is that I haven't been able, and will not
> be able, to see any of the products in action until I
> actually do the purchase.
NO,NO,NO! Test them first on your actual setup!
We have had incredible amounts of difficulty with opticals, with two separate vendors. It's hard to believe but they just can't get their software to work well.
Ralph Finch 916-653-8268 firstname.lastname@example.org ...ucbvax!ucdavis!caldwr!rfinch Any opinions expressed are my own; they do not represent the DWR
We are currently using the Zetaco (TOFS) on our Sun 4/380. The Zetaco works great. Currently, I have all of my X11R5 sources out on the optical, as well as the binaries. When I start up X11R5, any binaries or libraries that I need are automatically migrated in. We see no performance hit at all. We are quite pleased. The support group at Zetaco is excellent. They really know the product inside and out.
Hope that helps. Mark Kowitz Rockefeller University email@example.com
I would also try Pinnacle Micro @ 714-727-3300. Sorry I do not have the 800 number available.
Their product is not shipping yet but it is far superior to others I have SEEN and EXAMINED!
It is simple in its implementation, installation and administration.
Maybe they would make you a beta customer???
#include "disclaimer.h" /* You know my ideas are my own etc */ -- -ed-
Ed Milstein | Edward.Milstein@West.Sun.COM | firstname.lastname@example.org | sun!edward Sun Microsystems | AREA SERVER Specialist, Sun, Orange County, CA
My company can offer you a magneto-optical jukebox with a 6-cassette capacity, containing a Ricoh 3051E drive and OSS FileDriver software, for $10,695.00 in U.S. dollars. The capacity is approximately 6 GB. We have bundled the OSS software, which allows data interchange between different operating systems (assuming the software is installed on the other system as well). The software allows you to find any file without loading the disk containing it, using the Volume Directory Manager. Files cannot be accidentally erased, and a complete audit trail of every transaction is available until the disk is actually reclaimed. (To reclaim, or reuse, a disk, the FileDriver overwrites each sector with zeros or empty spaces, completely removing all traces of previously recorded information; this can be crucial in security situations.) Apunix has been in business for almost 11 years, and our technical staff offer toll-free technical support for as long as you own the product. We have shipped quite a number of these units and haven't had any returned to date.
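The zero-overwrite reclaim described above can be sketched in a few lines. This is an illustrative model only, not the actual FileDriver implementation; the function name and the fixed sector size are assumptions.

```python
import os

def wipe(path, sector=512):
    """Overwrite every sector of a file with zeros so no trace of the
    previously recorded data remains; returns the number of bytes wiped.
    Deleting or reusing the file afterwards is then safe.  (Hypothetical
    sketch; 512-byte sectors are assumed.)"""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for offset in range(0, size, sector):
            f.seek(offset)
            f.write(b"\0" * min(sector, size - offset))
        f.flush()
        os.fsync(f.fileno())  # force the zeros out to the media
    return size
```

On a real optical platter the driver would do this at the device level rather than through the filesystem, but the idea is the same: every sector once occupied by the data is physically overwritten before the space is reused.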
Thanks much for your time, Susan Tornroth 800-827-8649, ext 104 619-495-9229, ext 104
Well, we have (as clients) an Epoch-1:
  -> dedicated hardware running UNIX
  -> erasable-optical jukebox
At a recent meeting they announced the Epoch-2:
  -> SPARCstation 2
  -> EO jukebox
  -> Exabyte drive
  -> extensions to SunOS
It will provide you with transparent (for regular users) access to the jukebox, as if it were one gigantic filesystem.
They are willing (they told me they were) to sell a software-only version, but will surely want to sell the hardware too. P.S. Only specific hardware is supported!
Their products range from 20 Gb to 1 Terabyte of on-line storage.
Epoch Systems
8 Technology Drive
Westborough, MA 01581-1751
(800) U.S.-EPOCH
(800) 766-7500 (might be the same as above - the letter scheme isn't used in Europe)
(508) 836-4300
(508) 836-4711
Fax (508) 836-3802
Fax (508) 366-6853
-> some of these numbers could be old, some new, I don't know
I know of one salesperson: Mark B. Ward (Manager of European Sales) -> he ought to be able to direct you to your salesperson -> email@example.com (I might be wrong about this one). I know of one of their engineers: Bruce Lutz (International Systems Engineer) -> firstname.lastname@example.org
-- Swa Frantzen Katholieke Universiteit Leuven Tel: ++32 (0)16/20.10.15 ext 3541 Departement Computerwetenschappen FAX: ++32 (0)16/20.53.08 Celestijnenlaan 200 A B-3001 Heverlee (Leuven) E-mail: Swa.Frantzen@cs.kuleuven.ac.be BELGIUM
==============================================================================
Talk to email@example.com for information on R-squared's Infinity.
lkn is: Lee Neely
From: firstname.lastname@example.org (groth curtis a)
==============================================================================
Finally, here is a summary from a person who basically asked the same question as mine.
From: aloft!bill (B. Shorter) <email@example.com>
Date: Fri, 10 Apr 92 20:27:20 EDT
Subject: Optical Disk Systems
even my company, AT&T, has a product offering.
CommVault. It offers backup capability, archiving, and (shortly) the ability to migrate files to optical media, transparently to the (Sun) user.
The front end computer is a Sun SPARCstation-2. Save sets on disk are cpio files (rather than some obtuse construct used by many of the contenders).
If you want information about this product, contact Randy Fodero. I think his email path is email@example.com.
There is one CommVault product at my site now, and others will arrive shortly. (We also have an Epoch at our sister site.)
Bill Shorter firstname.lastname@example.org
From: pln@egret1.Stanford.EDU (Patrick L. Nolan)
Date: Fri, 10 Apr 92 19:15:42 PDT
Subject: Re: Experiences with read/write network based Optical Juke boxes needed.
> We are currently in the market for a network wide read/write Optical disk
> server (something like an Epoch). I have come across the following companies
> that seem to have similar products:
>
> * Epoch
> * Zetaco with HP juke box + their own "TOFS" software.
> * Q-Star
> * R-squared with Kodak juke box + their own Infinity software
> * Cal-Abco with the Genesis software
> * Cranel with HP juke box + AMASS software
> * Unitree??
> * Unbound
>
> I know a little bit about a couple of these companies and what they do, but
> overall I do not have any idea about their performance, reliability etc.
>
> Given below are some considerations that are important to us:
>
> * Should start out with at least 10 GB but should be easily expandable
>   up to 60 to 70 GB.
>
> * How easily can we do backups on these systems?
>
> * Reliability?
>
> * Should work with SunOS or Sun compatible machines with NFS capabilities.
>
> * Would be nice to have automatic file migration capabilities but not
>   necessary.
>
> * Can these systems handle WORM disks? Would be nice but not necessary.
>
> Questions:
> ----------
>
> Is anybody using any of these systems right now? Can you describe your
> experiences both good or bad?

We have the R-Squared Infinity software running on a small jukebox: an IDE 10-platter model. I think most of my experiences will apply to the larger models too.
This machine attaches to a Sun. A new device driver needs to be installed, which means adding stuff to the kernel. The kernel modifications took about a day to do, partly because the manual was not clear on some points. There is a new edition of the manual, which might help some.
It's not clear how long it will take them to come up with a new version for SunOS 5.0.
When dealing with R-Squared, they gave me the impression that they wrote the software themselves. That's not true. It's actually a repackaged version of AMASS, produced by Advanced Archival Products.
There is one guy at R-Squared who does technical support for this product. He's really good. I've talked to him on the phone a lot and he's always been right on top of the situation. When I found a bug that stumped him he went right to AAP and got it cleared up.
I have had 5 major problems with the jukebox.
1. Installing software. They talked me through it on the phone.
2. A hardware glitch a few weeks after installation. R-Squared had a contract with a local company. Apparently there's a generic problem with the IDE boxes that requires some adjustments at installation.
3. I ran a SunOS patch that changed the protection on a lot of files. The Infinity software went crazy. Again they talked me through it on the phone.
4. I discovered a software bug that made the data read from the disk sometimes unreliable, and which sometimes caused a kernel panic. This was fixed in release 3.0.2 of the software.
5. There was a bug which caused one of the disk header blocks to be erased. There was a quick-fix program that repaired it (available from Colorado by e-mail), and the bug was fixed in 3.0.2.
One delicate feature of this thing is that it needs to keep track of which disk is positioned where. A couple of times it got confused and I had to remove the case so I would know what low-level commands to issue to de-confuse it.
The filesystem on the disks is not compatible with the normal unix file structure. A logical filesystem can span multiple disks, which need not be physically inside the machine.
In order to implement this, a fairly large chunk of ordinary disk needs to be set aside as a cache. It has to be a disk partition with no filesystem on it. I use 50 MB.
They are just about to release a new software feature which will allow disks to be read or written with a unix file structure. This will allow disks to be imported or exported to other machines.
The system wasn't designed with backups in mind. There's no software for that, or for migration. You could use it for that, if you want. That isn't my application. I wanted mass storage of data files that remain on line most of the time, but with the ability to migrate them out by hand.
I think Infinity will work with WORMs. In fact, it treats the rewritable disk as if it is a WORM. When a file is deleted, the space it occupied is not recovered. The dead space on a disk just grows. When there is too much dead space on a disk, there is a utility which allows you to move the remaining data to another platter. Then you reformat the platter, thus removing the dead space.
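The WORM-style allocation described above can be modeled roughly like this. This is a hypothetical sketch for illustration only; the class and function names are made up, not part of Infinity/AMASS.

```python
class Platter:
    """Append-only platter: deleting a file never reclaims its space,
    so 'dead' space grows until the platter is compacted."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.files = {}   # name -> size, live files only
        self.used = 0     # total space ever written (live + dead)

    def write(self, name, size):
        if self.used + size > self.capacity:
            raise IOError("platter full")
        self.files[name] = size
        self.used += size

    def delete(self, name):
        del self.files[name]   # note: self.used is NOT decreased

    def dead_space(self):
        return self.used - sum(self.files.values())

def compact(old, new):
    """Move the remaining live files to a fresh platter; the old one
    can then be reformatted, recovering all of its dead space."""
    for name, size in old.files.items():
        new.write(name, size)
    old.files.clear()
    old.used = 0   # stands in for reformatting the old platter
```

The point of the model is the asymmetry: writes and deletes are cheap, but reclaiming space requires copying everything still alive and reformatting, exactly as the compaction utility described above does.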
If you choose to have filesystems that span multiple platters, you can get in a situation in which files in the same directory are spread all over various platters. This could get annoying if you want to access a group of files in quick sequence.
The software uses the vfs interface, so the filesystems are NFS-exportable. Files can be accessed the same way on the server and clients, although access is noticeably slower on clients.
From: csb@gdwb.OZ.AU
Date: Sat, 11 Apr 1992 14:32:59 +1000
Subject: Re: Experiences with read/write network based Optical Juke boxes needed.
From: bala@pebbles.Synopsys.COM (Bala Vasireddi)
Date: Fri, 10 Apr 1992 16:23:18 PDT

> We are currently in the market for a network wide read/write Optical disk
> server (something like an Epoch). I have come across the following companies
> that seem to have similar products:
> * Epoch

Got one and like it.

> I know a little bit about a couple of these companies and what they do, but
> overall I do not have any idea about their performance, reliability etc.
>
> Given below are some considerations that are important to us:
> * Should start out with at least 10 GB but should be easily expandable
>   up to 60 to 70 GB.

Epoch can go to 1,000 GB and starts at 20 GB.

> * How easily can we do backups on these systems?

This is probably the make-or-break item for all these types of systems. Some you list above provide no backup software at all. The EPOCH handles backups using its own special software. It keeps a database of every file backed up and of which backup each file can be found on. A must as well is the Hypersave option. This saves an enormous amount of effort. With this option we do full (level 0) backups every night.

> * Reliability?

One day of down time in 6 months of operation.

> * Should work with SunOS or Sun compatible machines with NFS capabilities.

Epoch NFS is very quick and reliable. Later this year Epoch will be releasing the Epoch 2, which will be a Sun running a modified SunOS. Otherwise the Epoch 1 is a 4.3 BSD machine and therefore very compatible. We have a site with 80 Sun servers and clients and it fits in beautifully.

> * Would be nice to have automatic file migration capabilities but not
>   necessary.

Epoch does this. It even provides software so you can do this on your SunOS filesystems as well.

> * Can these systems handle WORM disks? Would be nice but not necessary.
The only way to get 1,000 GB on an EPOCH is with WORM, either 5.25 or 12 inch.
--
Bala Vasireddi                Phone: (415) 694-4180
Synopsys, Inc.                FAX:   (415) 965-8637
700 E. Middlefield Rd.        DDN:   bala@Synopsys.COM
Mountain View, CA 94043       UUCP:  ..!fernwood.mpk.ca.us!synopsys!bala
From: firstname.lastname@example.org (Todd Gamble)
Date: Sat, 11 Apr 92 1:05:50 CDT
Subject: Re: Experiences with read/write network based Optical Juke boxes needed.
HP also offers its own software. I believe a complete system goes for around $80,000 (don't quote me on that); this includes an HP 9000/720 workstation, some disks, a DAT drive, and a 20 GB jukebox (with media). You should call your local HP rep for details. It does support WORM drives and has "semi-automatic" file migration.
------------------------------------------------------------------------- Todd Gamble, Systems Administrator Phone: (314) 362-2011 Washington University School of Medicine FAX: (314) 362-6110 Campus Box 8225 510 South Kingshighway Blvd. St. Louis, Missouri 63110 Email: firstname.lastname@example.org -------------------------------------------------------------------------
From: "B.McCrone" <B.McCrone@daresbury.ac.uk>
Date: Sat, 11 Apr 92 20:34:09 BST
Subject: optical jukeboxes
We have a Zetaco system. The messages which follow are copied from a similar enquiry about a month ago. I don't have anything to add really, apart from updating the time line. There have been no failures so far.

a) How long have you had the ZETACO product and how reliable is it?
b) What type of system configuration do you have? A workstation fileserver environment? Is it used in a service environment where availability and reliability are major issues?
c) Any comments on performance, transfer rates to/from optical media.
d) It incorporates Budtool for backup purposes - do you backup both optical and magnetic media or just the data on magnetic media? Does Budtool offer any support for automatic Exabyte devices? We have a SUMMUS jukebox with 2 8500 Exabyte drives -- I seem to remember that you also use SUMMUS at Daresbury.
I'll try to cover your questions as you put them.
a) we installed the Zetaco system just over 4 weeks ago - so it is too early to be certain of reliability. We have had no failures so far, beyond self induced software problems (you need to think carefully about the parameters such as inode allocation when setting up very large filesystems).
b) Our configuration:
   90 Gbyte jukebox
   SS2 + 64 Mbyte + SCSI card
   400 Mbyte system disc (just enough)  }
   4 * 1 Gbyte cache discs              } on built-in SCSI
   Exabyte 8500                         }
The intended use (I only just invited the punters to play) is to hold experimental/part analysed data archives. We expect that most users will take working copies during intensive analysis, but no special restrictions are applied. I have also moved our common PD source archives onto a filesystem which is available site wide (we have 100+ workstations in use plus many PCNFS users).
We now regard the system as a service, but this is recent so no statistics for uptime etc yet.
c) Optical drives are SLOW (especially writing - I think they take a second rev to read/check). You need a fairly generous allocation of cache space. I'm aiming at 5% of each filesystem => I need another drive yet.
Also I encourage the use of the space for low frequency files, and intend to discourage (not ban) users accessing more than say 5% of their space in any 24 hours.
d) As to backups, I am still considering Budtool but haven't installed it. We have a home grown equivalent which is already handling ~50 Gbytes of magnetic space. This allows us to backup the optical plates to an 8500, but ties up the drive for >24 hours per 5 Gbytes!
I believe Budtool knows about Exabyte tape handlers but not SUMMUS (we never bought the SUMMUS after developing our software for it - no money then, no need now).
A few tips:
Allow plenty of swap space - the "ager" and backup processes build large tables in vm.
Try to second guess the eventual demand for capacity in each filesystem when allocating your cache discs - they are a pain to change.
Look very carefully at the inodes/cylinder group actually selected during mkfs_vbfs - SUN/BSD filing have some weird ideas about "optimal" choices based on discs from about 1972. I just lie about the physical layout until it looks OK - no problem with SCSI controllers! You can find some dirt on the assumptions used in back issues of Sun Managers.
Remember that the inodes have to cover the staged out files as well as those on magnetic disc.
Beware of backups to 8200 - the data rate off optical is low, and their buffer management will result in excessive padding/waste of tape.
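As a rough illustration of the inode arithmetic behind the tips above: every file in the filesystem needs an inode, including files whose data has been staged out to optical, so the default density chosen by mkfs may be far from what you need. Both function names and the slack/density figures below are hypothetical, for illustration only.

```python
def inodes_needed(resident_files, staged_out_files, slack=0.25):
    """Inodes must cover staged-out files as well as those still on
    magnetic disc; add some slack for growth (25% is just a guess)."""
    return int((resident_files + staged_out_files) * (1 + slack))

def inodes_allocated(fs_bytes, bytes_per_inode=2048):
    """Roughly what mkfs would allocate for a given bytes-per-inode
    density (2048 is an assumed default, not a quoted one)."""
    return fs_bytes // bytes_per_inode
```

Comparing the two numbers before running mkfs tells you whether you need to override the default density, rather than discovering an inode shortage after the filesystem is full of staged-out files.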
Reservations - few and minor but here goes:
There is no easy way to summarise filesystem use - df only shows the cache space and listp is too detailed.
The backup utility is based on cpio rather than BSD dump - personal preference.
There is no equivalent of icheck/ncheck to locate damaged files - haven't needed to yet of course.
Documentation leaves much unsaid - you need to think about how you would have designed the system before jumping in.
If I can help further, please feel free to call.
Brian McCrone UK (0925) 603281
From: email@example.com (Aadne Hestenes Spt)
Date: Sun, 12 Apr 92 13:45:08 +0200
Subject: Optical Discs
From: firstname.lastname@example.org (Bill Roome)
Date: Mon, 13 Apr 92 10:57:53 EDT
Subject: optical disk jukeboxes
[Warning: the following is a mild advertisement for a system that I helped build. But I'm a research wienie, not a marketing wienie, so I'll try to stay reasonably honest. - Bill Roome]
I'd like to point out another optical disk system, CommVault/3dfs, by AT&T. 3dfs is basically a network backup server with an on-line NFS interface for accessing old versions. As with most backup systems, periodically (eg, daily) something snarfles through your mag disk file system(s), finds everything that's changed since the last snarfle, and ships those files over the net to 3dfs, which burns them onto optical disk.
The novel thing about 3dfs is that users can access the old versions via an NFS file system interface. By default, 3dfs gives the most recently dumped version of a file (eg, last night's version). If I want an old version, I can attach @date to any file name (eg, passwd@4apr).
Example: suppose novax is one of my machines, and I mount the 3dfs server on /usr/3dfs. Then /usr/3dfs/novax/etc/passwd gives me yesterday's passwd, and /usr/3dfs/novax/etc/passwd@23jan gives me passwd on 1/22/92, and /usr/3dfs/novax/etc@4jul91/passwd gives passwd as of last July 4th.
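The @date convention in those paths can be resolved per path component along these lines. This is a hedged sketch of the naming scheme only; `split_version` is not part of 3dfs, just an illustration.

```python
def split_version(component):
    """Split a 3dfs-style path component like 'passwd@4apr' into
    (name, date).  A component with no '@' refers to the most
    recently dumped version, signalled here by date=None."""
    name, sep, date = component.rpartition("@")
    return (name, date) if sep else (component, None)
```

A server would apply this to each component of the requested path, so that `etc@4jul91/passwd` selects the July 4th dump of the whole `etc` directory before looking up `passwd` inside it.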
The result is that the old versions become part of the file system; you can access them with ordinary commands. No recompilation, no kernel changes. You can even exec old binaries.
For more info, see the paper I wrote in the Jan '92 Usenix.
3dfs uses a variety of jukeboxes, ranging from small 5.25" disks (R/W or WORM) to large 300+ gig 12" WORM jukeboxes. 3dfs uses a Sun as a controller. 3dfs can be mounted by anything that mounts NFS. 3dfs accepts dumps from anything that smells like a unix file system (eg, you can dump local file systems as well as NFS-exported systems).
As for archiving from mag disk to optical, the simple approach is to replace an archived file or directory with a symlink to the appropriate old version in 3dfs. That is read-transparent, but does NOT allow writes (3dfs is inherently read-only). We're currently working on "DMS", an add-on to an existing file server, to allow write-transparent archiving of files. DMS is similar to Epoch/renaisance (sp?), but uses existing (Sun) file systems, and does NOT require kernel changes to the mag disk file server. We expect to ship DMS by the end of this year (I've got a prototype running and I use it for *my* files; the delay is for commercial packaging & testing & (*gasp*) documentation).
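The symlink-replacement approach described above might look like this in practice. An illustrative sketch only: `archive_out` is a hypothetical helper, not part of CommVault/3dfs, and it assumes the dumped copy already exists under the 3dfs mount.

```python
import os

def archive_out(live_path, dumped_copy):
    """Replace an archived file with a symlink to its dumped copy under
    the 3dfs mount.  Reads through the link stay transparent; writes
    will fail, since 3dfs is inherently read-only."""
    os.remove(live_path)
    os.symlink(dumped_copy, live_path)
```

After this, ordinary commands keep working on the archived file without recompilation or kernel changes, which is exactly the property the paragraph above claims for the symlink approach.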
For tech info, contact me:
    Bill Roome   firstname.lastname@example.org   allegra!wdr   902-582-7974
The full-fledged marketing wienie is Dave Ireland, 908-870-7234.
===============================================================================
Hello,
I read your mail about optical discs etc., and I only want to inform you that we have a jukebox and drives from ATG/France. The jukebox is capable of handling 51 discs of about 10GB or 9GB each, and it contains 2 drives. The discs are 12" WORM, double sided, but must be turned over in order to read the second side. The transfer rate to the discs during write is about 500KB/s, but because of verification (you can have bad spots on optical media too) the effective bandwidth will normally be dramatically reduced; this also depends on which SCSI controllers and which device drivers/file system software you are using.
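For a rough feel of those numbers (my arithmetic, not the vendor's): even at the full quoted raw rate, filling one disc takes hours, before verification overhead slows it further.

```shell
# Back-of-envelope only: one 10GB disc at the quoted 500KB/s raw write
# rate, ignoring the verification overhead noted above. Decimal units
# assumed (10e9 bytes, 500e3 bytes/s).
awk 'BEGIN { secs = 10e9 / 500e3; printf "%.0f s (%.1f h)\n", secs, secs/3600 }'
# prints: 20000 s (5.6 h)
```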
The jukebox is connected to a SS2 with an RS232 cable for controlling the robotics in the jukebox, and SCSI, on which both the drives are connected.
The jukebox is called Cygnet 1802, and the drives GD9000.
There is a lot of experience in using the Cygnet jukebox and drives with 2GB capacity and STARFILE within ESA, ESRIN or the French PAF.
Please contact me again if you want more information; I think I did a lot of searching in the market prior to selecting the jukebox system.
Best Regards,
Adne Hestenes, Spacetec a.s, Box 585, 9001 Tromsoe, NORWAY
===============================================================================
From: Ian Ashton <I.Ashton@bradford.ac.uk>
Subject: Dorospace experience (fwd)
Date: Mon, 13 Apr 92 10:01:33 BST
Forwarded message:
> From ian Thu Jan 9 15:41:42 1992
> Subject: Dorospace experience
> To: P.C.Sutton
> Date: Thu, 9 Jan 92 15:41:42 GMT
>
> As part of its replacement of central facilities Bradford purchased two
> 30Gb jukeboxes and associated software from Dorotech Ltd.
>
> The jukeboxes are the Hitachi 112 series, each holding 50 rewriteable Maxell
> optical disks. Each jukebox has two drives and can hold a maximum of four.
> The Dorotech applications in use at Bradford are Dorosave and Dorospace.
>
> Dorosave provides filestore backup. Dorospace provides an extension to
> the user filestore by migrating files between optical and magnetic
> filestore based on frequency of access. Each application has control of
> one jukebox. The boxes are on their own SCSI interface on the Sun 4/490
> which holds the user filestore. Cable lengths mean that the two
> jukeboxes and an Exabyte drive are chained together with the
> jukeboxes smack up against the Sun. This just takes 6m of cable.
>
> The system has been in full user service since October 1991 with very
> few problems. Dorospace was used initially to provide access to the
> filestore from the Control Data Cyber systems the centre used to
> operate. The filestore was migrated from dump tapes onto optical disk
> during the first few weeks of the Sun service. Only 'text' files were
> transferred. This accounts for about 3Gb of the jukebox. The remainder
> is released as a 'space' partition for staff and research users needing
> more than the 20Mb standard allocation. User files are accessed by
> creating links in the user's home directory to the user's optical
> filestore in the jukebox. Access to a file not held in the magnetic
> cache can result in a delay of 10-30 seconds, which removes some of the
> transparency.
>
>     Dorospace jukebox             User view
>
>     +---------------+                  home
>     | Cyber 4 files |                   |
>     |_______________|     ___________________________
>     | Cyber 2 files |     |       |       |        |
>     |_______________|   cyber2  cyber4  space   rest of
>     |               |                           user's
>     |  space files  |                           files
>     |               |
>     +---------------+
>
> ALL user filestore at Bradford is served by the 4/490 over NFS.
> Dorosave is running on the same system. Installation involves kernel
> mods to add the jukebox driver and the Dorospace file system, and then
> loading the applications themselves. This was done by Dorotech UK and
> took 4 days. This was the first 4/490 they had done and the next one
> will take much less time!
>
> Various points which may be of interest:
>
> When the jukebox is partitioned the number of disks for the partition
> is required. Disks can be added but not taken away.
>
> Dorotech recommend a magnetic/optical ratio of 1:10 for good performance.
>
> The maximum file size is one side of a disk, about 300Mb, or the size
> of the magnetic partition if this is less.
>
> ALL disks must be in the jukebox for Dorospace. The backup application,
> Dorosave, can export disks for removal from the machine room.
>
> Dorospace backup for fire security is onto an Exabyte drive attached to
> the jukebox.
>
> We are dependent on Dorotech porting the applications before we can move to
> SunOS 4.1.2 and beyond.
>
> Any queries welcome.
> ian@bradford

Dorotech are a French company based in Nanterre, Paris. We deal with their office in England. I'll find out the address of their USA office if you're interested.
===============================================================================
From: firstname.lastname@example.org (bob bookbinder)
Subject: Re: Experiences with read/write network based Optical Juke boxes needed.
Date: Mon, 13 Apr 92 09:27:56 EDT
We are in the process of looking at the same list of vendors. Please make sure you summarize to the net.