SUMMARY: Wiping disks (update)....

From: <Rick.Brashear_at_ercgroup.com>
Date: Thu Nov 13 2003 - 12:37:54 EST
Some additional responses came in after my summary that I felt would be
beneficial to the archives:

Niall O Broin [niall@makalumedia.com] wrote:
>If you want to be sure that the disks are wiped beyond retrieval, you want
>to do this a number of times. There is a Mil. standard for this. AFAIK it
>specifies writing X number of times with value Y, followed by A number of
>times with value B, etc.
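The multi-pass approach Niall describes can be sketched with dd in a loop. This is an illustration only, run against a scratch file for safety; the target path, pass count, and the choice of patterns are assumptions for the sketch, not any particular Mil. standard:

```shell
#!/bin/sh
# Illustrative multi-pass wipe: fixed-pattern passes followed by a
# random pass. TARGET would normally be a raw device such as
# /dev/rdsk/c0t0d0s2; a scratch file stands in for the disk here.
TARGET=/tmp/wipe-demo.img
SIZE_KB=64

# Create the scratch "disk".
dd if=/dev/zero of="$TARGET" bs=1k count=$SIZE_KB 2>/dev/null

for pass in 1 2 3; do
    # Overwrite the whole target each pass; a real wiper would vary
    # the pattern (0x00, 0xff, random, ...) from pass to pass.
    dd if=/dev/zero of="$TARGET" bs=1k count=$SIZE_KB conv=notrunc 2>/dev/null
done

# Final pass with pseudorandom data.
dd if=/dev/urandom of="$TARGET" bs=1k count=$SIZE_KB conv=notrunc 2>/dev/null
echo "wiped $TARGET in 4 passes"
```

On a real device you would drop count= so each dd pass runs to the end of the disk.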

Hearn, Stan (CEI-Atlanta) [Stan.Hearn@cox.com] wrote:

shred, from the GNU File Utilities
<http://www.gnu.org/software/fileutils/fileutils.html>

From the shred documentation is the following enlightening information:
shred: Remove files more securely
shred overwrites devices or files, to help prevent even very expensive
hardware from recovering the data. 
Ordinarily when you remove a file (see rm invocation), the data is not
actually destroyed. Only the index listing where the file is stored is
destroyed, and the storage is made available for reuse. There are undelete
utilities that will attempt to reconstruct the index and can bring the file
back if the parts were not reused. 
On a busy system with a nearly-full drive, space can get reused in a few
seconds. But there is no way to know for sure. If you have sensitive data,
you may want to be sure that recovery is not possible by actually
overwriting the file with non-sensitive data. 
However, even after doing that, it is possible to take the disk back to a
laboratory and use a lot of sensitive (and expensive) equipment to look for
the faint "echoes" of the original data underneath the overwritten data. If
the data has only been overwritten once, it's not even that hard. 
The best way to remove something irretrievably is to destroy the media it's
on with acid, melt it down, or the like. For cheap removable media like
floppy disks, this is the preferred method. However, hard drives are
expensive and hard to melt, so the shred utility tries to achieve a similar
effect non-destructively. 
This uses many overwrite passes, with the data patterns chosen to maximize
the damage they do to the old data. While this will work on floppies, the
patterns are designed for best effect on hard drives. For more details, see
the source code and Peter Gutmann's paper Secure Deletion of Data from
Magnetic and Solid-State Memory, from the proceedings of the Sixth USENIX
Security Symposium (San Jose, California, 22-25 July, 1996). The paper is
also available online
<http://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html>. 
Please note that shred relies on a very important assumption: that the
filesystem overwrites data in place. This is the traditional way to do
things, but many modern filesystem designs do not satisfy this assumption.
Exceptions include: 
*	Log-structured or journaled filesystems, such as those supplied with
AIX and Solaris. 
*	Filesystems that write redundant data and carry on even if some
writes fail, such as RAID-based filesystems. 
*	Filesystems that make snapshots, such as Network Appliance's NFS
server. 
*	Filesystems that cache in temporary locations, such as NFS version 3
clients. 
*	Compressed filesystems. 
If you are not sure how your filesystem operates, then you should assume
that it does not overwrite data in place, which means that shred cannot
reliably operate on regular files in your filesystem. 
Generally speaking, it is more reliable to shred a device than a file, since
this bypasses the problem of filesystem design mentioned above. However,
even shredding devices is not always completely reliable. For example, most
disks map out bad sectors invisibly to the application; if the bad sectors
contain sensitive data, shred won't be able to destroy it. 
shred makes no attempt to detect or report these problems, just as it makes
no attempt to do anything about backups. However, since it is more reliable
to shred devices than files, shred by default does not truncate or remove
the output file. This default is more suitable for devices, which typically
cannot be truncated and should not be removed.
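For reference, a typical GNU shred invocation looks like the following. The device name is only an example; -n sets the number of overwrite passes, -z adds a final pass of zeros to hide the shredding, and -u truncates and removes the target afterward, which makes sense only for regular files, never for devices:

```shell
# Shred a raw disk device with 5 overwrite passes plus a final zero
# pass (device path is an example -- substitute your own):
#   shred -v -n 5 -z /dev/rdsk/c0t0d0s2
#
# Demonstration on a regular file (subject to the filesystem caveats
# described above): overwrite 3 times, zero, then unlink it.
f=/tmp/shred-demo.txt
echo "sensitive data" > "$f"
shred -n 3 -z -u "$f"
[ ! -e "$f" ] && echo "file shredded and removed"
```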

Mike Demarco [mdemarco@tritonpcs.com] wrote:
>This really depends on how secure you need this to be. The problem with
>using either of the mentioned methods is that data can still be retrieved
>by changing the thermal properties of the disk. If data was written to a
>track at, let's say, 110 degrees, the head position over the track is at a
>given location. If you cool the disk down to 60 degrees it will move the
>head ever so slightly off track and you will see old information ghosting.
>One of the problems with doing a format-analyze is that it lays down a
>pattern on the disk at the current temperature, and when you have a given
>pattern it is much easier to read the ghost. The only way to guarantee the
>data cannot be read is to destroy the disk.

Jason.Santos@aps.com wrote:
>Just some corrections/additions --

>/dev/random only exists on Solaris 9, or on Solaris 8 with patch 112438.

>It will be much faster to dd from /dev/urandom if you wish to overwrite
>the disk with pseudorandom data, because /dev/random is a source of
>"higher quality" random data, which means that it takes longer to
>produce, thus slowing down your dd.

>Also, you cannot use dd if=/dev/null, because you cannot read anything
>from /dev/null.  You can use /dev/zero instead to get a stream of zero
>bytes.  This will be much faster than using /dev/urandom.
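Putting Jason's points together, the two working dd forms look like this. They are shown against a scratch file; for a real wipe, of= would be the raw device (e.g. /dev/rdsk/c0t0d0s2, an example path) and count= would be omitted so dd runs to the end of the device:

```shell
TARGET=/tmp/dd-demo.img

# Fast: overwrite with a stream of zero bytes.
dd if=/dev/zero of="$TARGET" bs=1024k count=1 2>/dev/null

# Slower but pseudorandom: read from /dev/urandom. Do NOT use
# /dev/random here -- it blocks waiting for entropy -- and /dev/null
# cannot be read from at all.
dd if=/dev/urandom of="$TARGET" bs=1024k count=1 conv=notrunc 2>/dev/null

ls -l "$TARGET"
```

A large block size (bs=1024k here) keeps the raw-device writes efficient.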



Original Query:

I am tasked with ensuring no data is left on a large number of disks on 
servers we are returning on lease expiration. I have done some preliminary 
searching for techniques or tools for this task without success. 
What say my brothers/sisters at arms on this subject? 

Responses:

dd if=/dev/random of=/dev/rdsk/<spanning partition of whatever disk>
or if=/dev/null

ALSO:
format - analyze - write/compare/purge/verify

Thanks again to one and all!

Summary:
Some suggested newfs, but as documented in this sunmanagers archive article,
newfs does very little to remove data (I should have checked here first - Tim
Evans):

http://www.sunmanagers.org/pipermail/sunmanagers/2002-October/017432.html

Thanks to these respondents:

ippy@optonline.net
Bruntel, Mitchell L, ALABS [mbruntel@att.com]
Steven Hill [sjh@waroffice.net]
Stephen Moccio [svm@lucent.com]
neil quiogue [neil@quiogue.com]
Eric Paul [epaul@profitlogic.com]
Gwyn Price [gwyn@glyndwr.com]
Steve Elliott [se@comp.lancs.ac.uk]
Pablo Jejcic [pablo.jejcic@smartweb.rgu.ac.uk]
Tim Evans [tkevans@tkevans.com]
Ungaro, Matt [mjungaro@capitolindemnity.com]
joe.fletcher@btconnect.com
Dave Mitchell [davem@fdgroup.com]
Smith, Kevin [Kevin.Smith@sbs.siemens.co.uk]


_______________________________
Rick Brashear
Server Systems
Information Technology Department
Employers Reinsurance Corporation
5200 Metcalf
Overland Park, Kansas 66201
*	913 676-6418
*	rick.brashear@ercgroup.com
_______________________________________________
sunmanagers mailing list
sunmanagers@sunmanagers.org
http://www.sunmanagers.org/mailman/listinfo/sunmanagers
Received on Thu Nov 13 12:37:50 2003

This archive was generated by hypermail 2.1.8 : Thu Mar 03 2016 - 06:43:24 EST