SUMMARY: tar/ssh/dd large file

From: Ben Kim <>
Date: Wed Sep 07 2005 - 14:41:47 EDT
Thanks to Koef, Rich Kulawiec, David Foster, Dan Stromberg, Chris Sellers,
Aleksander Pavic, Chris Ruhnke and THORNTON Simon. Especially to David
Foster and Dan Stromberg.

I thought the problem was complex, but it turned out to be quite simple:
one file was possibly corrupt. The others were OK.

In summary, GNU tar has no file size limit of its own, apart from
implementation limits.

dd is not the limiting factor either. Chris Ruhnke shared his experience of
using it to zero out a 1 TB disk without problems.
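A cheap way to convince yourself that dd has no such limit is a sparse file
(the path and size below are purely illustrative): seeking past 8 GB and
writing a single byte creates a file whose reported size is beyond the
suspect boundary while using almost no disk space.

```shell
# Seek 8 GiB into the output and write one byte; the result is a sparse
# file whose reported size is just past the suspected 8 GB limit.
dd if=/dev/zero of=/tmp/bigtest bs=1 count=1 seek=8G 2>/dev/null
stat -c %s /tmp/bigtest   # GNU stat; on Solaris, ls -l shows the size
```

If dd (or the filesystem) capped files at 8 GB, the seek or the size check
would fail here.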

About the timeout: Koef and Aleksander pointed out that LoginGraceTime=120
actually means 2 minutes, not 2 hours, and that it only means "The server
disconnects after this time if the user has not successfully logged in" --
so it does not apply once the session is established.

Checking again, ClientAliveInterval and ClientAliveCountMax would have
been more relevant, but we were not using them. Bash/sh's TMOUT/TIMEOUT
variables seemed relevant, but I didn't find them set. The non-free ssh has
an idle-timeout option, but mine is the free version.
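For the record, a minimal sshd_config fragment for the server-side
keepalives mentioned above might look like this (the values are
illustrative, not what we run):

```
# /etc/ssh/sshd_config (server side)
ClientAliveInterval 60    # probe the client every 60 seconds
ClientAliveCountMax 3     # drop the connection after 3 unanswered probes
```

With these settings a dead client is dropped after about 180 seconds, but a
healthy long-running transfer is kept alive by the probes.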

All the comments about rsync, ufsdump and syscall tracers were wonderful.
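For reference, the rsync suggestion boils down to something like the
following. The flags are illustrative and a local copy is shown so it can
be tried safely; over the network the destination would be something like
user@remote_server:/backup/.

```shell
# -a preserves permissions and timestamps, -z compresses in transit.
mkdir -p /tmp/srcdir && echo data > /tmp/srcdir/file.txt
rsync -az /tmp/srcdir/ /tmp/destdir/
```

rsync also retries and resumes much more gracefully than a one-shot
tar-over-ssh pipe when a transfer is interrupted.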

Thanks all.

My original post is below.
> I have a network backup script running over ssh like this:
> /usr/local/bin/tar cvfz - mydir | /usr/local/bin/ssh
> user@remote_server dd of=/backup/backup.tgz
> Somehow, the mydir became large (used to be 6gb now 36gb).
> After that, I found that the backup stopped at around 8gb.
> I wonder where this limit comes from. I'm using gnu tar and
> I'm sure it handles files over 2 gb. I'm not sure about dd. I
> know ssh can also be a cause since there's login grace time
> of 2 hours and I guess 36gb could have taken more than 2 hours.
> I have to determine whether I can continue to use this script
> or not. I'd appreciate any advice.
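Since the root cause turned out to be a possibly corrupt file, one
defensive tweak to the pipeline above is to checksum the byte stream on
both ends. This is only a sketch under assumed paths; the "remote" side is
simulated locally here so the idea is easy to try.

```shell
# Hypothetical stand-in for the remote end of the backup pipeline.
mkdir -p /tmp/mydir /tmp/backup
echo "some data" > /tmp/mydir/f

# tee keeps a copy of the exact byte stream that dd receives, so the two
# checksums below must agree if nothing was truncated or corrupted.
tar czf - -C /tmp mydir | tee /tmp/sent.tgz | dd of=/tmp/backup/backup.tgz 2>/dev/null

cksum < /tmp/sent.tgz
cksum < /tmp/backup/backup.tgz
```

Over ssh, the second cksum would run on the remote host, e.g.
ssh user@remote_server 'cksum < /backup/backup.tgz', and a mismatch with
the local checksum flags a bad transfer immediately instead of at restore
time.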


Ben Kim
sunmanagers mailing list
Received on Wed Sep 7 14:44:14 2005

This archive was generated by hypermail 2.1.8 : Thu Mar 03 2016 - 06:43:51 EST