SUMMARY: unable to read a tape

From: Gérard Henry <ghenry_at_cmi.univ-mrs.fr>
Date: Thu Dec 02 2004 - 09:44:24 EST
Thanks again to everyone who responded (8 responses):
Baranyai Pal
Bill Wiliams
Darren Dunham
Patrick Nolan
E (?)
Siva Singam Santhakumar
Gregory Shaw
Bertrand Hutin

The solution is:
tar cfb - 128 /export/home/ | rsh mary2-new dd of=/dev/rmt/6un obs=128b
ssh mary2-new "dd if=/dev/rmt/6un bs=128k" | tar xvfp -
bs=64k also works (tar's blocking factor of 128 and dd's obs=128b both come
to 128 x 512 bytes = 64 KB blocks, so any read-side bs of 64 KB or more is
large enough).

Here are the responses:
Baranyai Pal
Does this work (after rewind) on the LTO server:

dd if=/dev/rmt/6un of=/dev/null
If not:
Can this produce a valid result (after rewind):

dd if=/dev/rmt/6un of=/tmp/dd.block.size bs=256k count=1
If yes, the size of /tmp/dd.block.size should be used as the bs=
parameter for dd.
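
A minimal sketch of this probe from start to finish, run on the tape server,
assuming the drive from the original post; the size of the captured file
tells you the tape block size, which then becomes the bs= value for the read:

	% mt -f /dev/rmt/6un rewind
	% dd if=/dev/rmt/6un of=/tmp/dd.block.size bs=256k count=1
	% ls -l /tmp/dd.block.size    # e.g. 65536 bytes -> use bs=64k
	% mt -f /dev/rmt/6un rewind
	% dd if=/dev/rmt/6un bs=64k | tar xvfp -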
-------------------------------------------------------------------------------
Bill Wiliams
The problem is that the actual tape blocksize is larger than expected
by 'dd' -- you need to specify a blocksize >= the actual tape
blocksize.  If you do something like this:
	% ssh server "dd if=/dev/rmt/6un bs=128k" | tar xvfp -
it should work.  The "bs=128k" will allocate a 128KB buffer to hold
the tape blocks; however, only the amount of data actually read will be
passed along through the pipe.  IOW: for 'tar' use, the "dd bs=" doesn't
have to be exact, it just has to be big enough to hold the largest
block on the tape.

Actually, your problem (in this case) wasn't 'tar'; I think it was 'dd'
issuing the message.  However, you might have gotten something similar
from tar if it had been reading the tape directly.

Hint for the future:
I use the SSH pipe to read/write files to tape, and 'tar' (actually,
I use 'gtar') doesn't really care what size blocks it gets.
BUT CPIO DOES CARE.  If you ever use 'cpio' through a pipeline (ssh or
not) you do need to keep track of the blocksize you are writing to
tape!  Or at least the blocksize 'cpio' *thinks* it is writing to
tape, because when you read that tape back into 'cpio' it (apparently)
looks at the header and expects incoming data in whatever size was
used to write the tape.  The trick here is that you have to know
the "bs=" or "obs=" size used with 'dd' to create the tape, and when
you pipe from 'dd' to 'cpio' you should use "obs=" with the correct
blocksize.  Unlike "bs=" which is a guideline for buffer allocation,
"obs=" is a rule that says "I will write blocks of that size".
Of course, if 'cpio' wrote the tape directly, and 'cpio' reads the
tape directly, and you used "-C BLOCKSIZE" when you wrote it, you gotta
use the same flag when you read the tape.  At least that has been my
experience.
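
A minimal sketch of the cpio bookkeeping described above, assuming the
hypothetical host "server" and the drive from this thread; the obs= used on
the write is noted and fed back in as bs= on the read:

	# write: repack cpio's output into 64 KB tape blocks (remember this number!)
	% find /export/home -print | cpio -oc | ssh server "dd of=/dev/rmt/6un obs=64k"
	# read: hand dd the same 64 KB block size before piping into cpio
	% ssh server "dd if=/dev/rmt/6un bs=64k" | cpio -icvd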

-------------------------------------------------------------------------------
Darren Dunham
You should supply a blocksize (bs=) to the dd so that you're not writing
really small blocks on the tape.
Very strange.  I don't know why it's not reading with the same
blocksize.

Anyway, the error is coming from the tape driver.  It's complaining that
the size of the block on tape is bigger than the size of block it
expected to read, and there's not enough space to store it.

You can supply a big enough blocksize to 'dd' here to do the read (I'd
start with 64k and see if that does it).  However, I'd prefer to supply a
blocksize on both ends.  64k is usually big enough to get you most of
your performance.
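
A minimal sketch of supplying a blocksize on both ends, reusing the host and
drive names from the original post (assumed, not prescribed):

	% tar cvpf - grand-1999.jpg | ssh server "dd of=/dev/rmt/6un obs=64k"
	% ssh server "dd if=/dev/rmt/6un bs=64k" | tar xvfp -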
-------------------------------------------------------------------------------
Patrick Nolan
This looks like the same problem that drove me nuts a
couple of weeks ago.  It relates to the block size.
By default tar creates records of 10 KB (a blocking factor of 20).
By default dd deals with records of 512 bytes.  It appears that
the second dd is trying to read larger blocks.

The cure is to specify block sizes explicitly.
To the first tar add something like -b 64.
To the second dd add bs=32k.
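
A minimal sketch of those explicit sizes applied to the original pipeline
(host and file names reused from the post, hypothetically); -b 64 gives tar
64 x 512-byte = 32 KB records, which bs=32k covers on the read side:

	% tar cvpfb - 64 grand-1999.jpg | ssh server "dd of=/dev/rmt/6un obs=32k"
	% ssh server "dd if=/dev/rmt/6un bs=32k" | tar xvpf -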
-------------------------------------------------------------------------------
E (?)
I think your blocksize is wrong.

Tcopy will show you the block size:
tcopy /dev/rmt/6un

Once you have this, try:
ssh server "dd if=/dev/rmt/6un bs=XX" | tar xvfp -
-------------------------------------------------------------------------------
Siva Singam Santhakumar
You are missing the block size for dd. Try with bs=64k (a standard size).
-------------------------------------------------------------------------------
Gregory Shaw
1. I wouldn't use ssh for backups if I had a choice. ssh will throttle 
the throughput to the drive, as it has to encrypt everything. Your 
backups will be CPU-bound and will hit both boxes pretty hard.
2. GNU tar (http://www.gnu.org) has a remote tape capability. It's a lot 
cleaner for doing this sort of stuff.
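
A minimal sketch of GNU tar's remote-tape syntax, as I understand it; by
default it reaches the remote drive via rsh and rmt, and --rsh-command can
point it at ssh instead (host and drive reused from this thread,
hypothetically):

	% gtar cvpf mary2-new:/dev/rmt/6un /export/home
	% gtar xvpf mary2-new:/dev/rmt/6un --rsh-command=/usr/bin/ssh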

Of course, if you're in a high-security environment, you might need to 
get the below working.


Gerard Henry wrote:
> hello all,
> last week, I had a question concerning backup and received many 
> responses, but didn't do a summary yet, because I couldn't work with the tape.
> To back up a filesystem, I tried:
> 
> client-henry% tar cvpf - grand-1999.jpg | ssh server /bin/dd 
> of=/dev/rmt/6un
> grand-1999.jpg
> 1040+0 records in
> 1040+0 records out
> 
> Before I restored, I went to the backup server with the LTO2 attached and 
> did a rewind:
> server-henry% mt -f /dev/rmt/6un status
> HP Ultrium LTO 2 tape drive:
>   sense key(0x0)= No Additional Sense   residual= 0   retries= 0
>   file no= 2   block no= 0
> server-henry% mt -f /dev/rmt/6un status
> HP Ultrium LTO 2 tape drive:
>   sense key(0x13)= EOT   residual= 0   retries= 0
>   file no= 2   block no= 0
> server-henry% mt -f /dev/rmt/6un rewind
> server-henry% mt -f /dev/rmt/6un status
> HP Ultrium LTO 2 tape drive:
>   sense key(0x0)= No Additional Sense   residual= 0   retries= 0
>   file no= 0   block no= 0
> 
> And now, I can't retrieve my file!
> client-henry% ssh server "dd if=/dev/rmt/6un" | tar xvfp -
> dd: reading `/dev/rmt/6un': Not enough space
> 0+0 records in
> 0+0 records out
> 
> What's wrong with these commands?
> The server is a Solaris 8 machine, and I found on SunSolve a patch related 
> to this, 116634-02, but my LTO2 came from a third party, not Sun. This 
> patch updates firmware; do you think it's a good idea to use it?
_______________________________________________
sunmanagers mailing list
sunmanagers@sunmanagers.org
http://www.sunmanagers.org/mailman/listinfo/sunmanagers