How to securely erase files and free space (tested with FAT32)

* = please see "Follow up from the inbox" section at the end of this page

Note: this method describes how to securely erase free space on a drive. To permanently erase ALL data from a drive, try Darik's Boot and Nuke* or see the section below on scrub, which can erase free space, files or the whole drive.

I wanted to remove any old files from an external FAT32 drive. This drive has been in use for a number of years so needs a thorough clean. First of all, I checked the drive using TestDisk.

You can simply download the Linux version of TestDisk, extract the tar.bz2 file and run the binary from a terminal session. When I did so, I predictably found a bunch of old files, which TestDisk marks in red to show they have been deleted. Note that the file recovery function of TestDisk only works on FAT32 / NTFS drives. To try to recover data from an ext3 filesystem, check out ext3grep.
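
For reference, the whole process looks something like this (the version number in the filename is illustrative - grab whatever the current Linux build is from the CGSecurity downloads page):

tar xjf testdisk-6.11.linux26.tar.bz2
cd testdisk-6.11
sudo ./testdisk_static    # exact name/path of the binary inside the archive varies by version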

In order to securely erase the free space, I looked around for a utility with this capability. The secure-delete package contains a command called sfill that seemed to do what I wanted. While searching, I also discovered that if you have a journalling file system, you may have problems deleting files securely - see the discussion here.
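
As an aside, if you are dealing with an ext2/3/4 volume and aren't sure whether it is journalled, tune2fs can tell you (/dev/sdb1 here is just an illustrative device name):

sudo tune2fs -l /dev/sdb1 | grep has_journal

If has_journal appears in the feature list, overwritten file data may not land on the original blocks. FAT32, as used here, has no journal, so this particular problem doesn't apply.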


Installing and using the secure-delete tools

The two particular tools of interest in the secure-delete package were srm (for deleting files) and sfill (for deleting free space). On Debian, the package can be installed using apt-get:

Debian:

sudo apt-get install secure-delete

Fedora:

sudo yum install srm

Note that I couldn't find a precompiled version of sfill for Fedora 11...

Then you can run the following command (where the path is the file system you want to erase the free space from):

sudo sfill -v /media/RESOURCES/

Output:

Using /dev/urandom for random input.
Wipe mode is secure (38 special passes)
Wiping now ...
Creating /media/RESOURCES/oooooooo.ooo ... **************************************
Wiping inodes ... Done ...  Finished

For me, this took about 4 hours.
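
As an aside, if 38 passes is overkill for your threat model, the sfill man page documents flags that trade thoroughness for speed - worth knowing given the timings later on this page:

# two passes only (0xff, then random):
sudo sfill -l -v /media/RESOURCES/

# a single random pass:
sudo sfill -l -l -v /media/RESOURCES/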


The problem with FAT32...

I checked the file system as the erase was in progress. The ONLY file created was "oooooooo.ooo" - the 4GB file size limit on FAT32 had prevented the program from creating a file large enough to securely erase all the free space... I believe the phrase 'doh!' is appropriate here.

As described by the sfill man page, the behaviour of sfill is as follows:

The secure data deletion process of sfill goes like this:
1 pass with 0xff
5 random passes. /dev/urandom is used for a secure RNG if available.
27 passes with special values defined by Peter Gutmann.
5 random passes. /dev/urandom is used for a secure RNG if available.

Afterwards, as many temporary files as possible are generated to wipe the free inode space. After no more temporary files can be created, they are removed and sfill is finished.

So, FAT32 file systems have a 4GB file size limit, but there was roughly 20GB of free space to erase on the drive.

The "oooooooo.ooo" file would get to 4GB and then naturally be unable to write further data...

So what to do?


Dummy files and srm

I decided to create empty files to fill up most of the space, then securely erase the dummy files using the srm tool.

I found this page that described the creation of dummy files. The example code was:

dd if=/dev/zero of=file_1GB bs=1024 count=1000000

From the dd man page:

of=FILE        write to FILE instead of stdout
bs=BYTES       force ibs=BYTES and obs=BYTES
count=BLOCKS   copy only BLOCKS input blocks

There was 20GB of free space, so I needed at least five files at the 4GB maximum size - I decided to go for six files of around 3.5GB each.
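
Rather than eyeballing the arithmetic, you can also ask df how much space is free and let the shell work out the file count (a quick sketch - the mount point is this drive's, adjust as needed):

# free space on the volume, in 1K blocks
free_kb=$(df -k /media/RESOURCES/ | awk 'NR==2 {print $4}')
# number of 3.5GB (3,500,000 x 1K blocks) dummy files needed, rounding up
echo $(( (free_kb + 3499999) / 3500000 ))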

To loop the command above, I used a for loop based on an example I'd found on the ss64.com site:

# Loop 100 times:
for i in $(seq 1 100); do echo -n "Hello World${i} "; done

The following command created six dummy files, named emptyfile1, emptyfile2 and so on. Each showed as 3.3GB in Nautilus (which reports binary gigabytes), except the last, which filled up as much space as remained:

for i in $(seq 1 6); do dd if=/dev/zero of=emptyfile${i} bs=1024 count=3500000; done

Output:

3500000+0 records in
3500000+0 records out
3584000000 bytes (3.6 GB) copied, 204.292 s, 17.5 MB/s
3500000+0 records in
3500000+0 records out
3584000000 bytes (3.6 GB) copied, 137.098 s, 26.1 MB/s
3500000+0 records in
3500000+0 records out
3584000000 bytes (3.6 GB) copied, 136.737 s, 26.2 MB/s
3500000+0 records in
3500000+0 records out
3584000000 bytes (3.6 GB) copied, 132.27 s, 27.1 MB/s
3500000+0 records in
3500000+0 records out
3584000000 bytes (3.6 GB) copied, 138.037 s, 26.0 MB/s
dd: writing `emptyfile6': No space left on device
3261697+0 records in
3261696+0 records out
3339976704 bytes (3.3 GB) copied, 141.753 s, 23.6 MB/s

Then I ran the following command to securely erase the dummy files:

srm -v /media/RESOURCES/emptyfile*

Output:

Using /dev/urandom for random input.
Wipe mode is secure (38 special passes)
Wiping /media/RESOURCES/emptyfile1 **************************************
Removed file /media/RESOURCES/emptyfile1 ... Done
Wiping /media/RESOURCES/emptyfile2 **************************************
Removed file /media/RESOURCES/emptyfile2 ... Done
Wiping /media/RESOURCES/emptyfile3 **************************************
Removed file /media/RESOURCES/emptyfile3 ... Done
Wiping /media/RESOURCES/emptyfile4 **************************************
Removed file /media/RESOURCES/emptyfile4 ... Done
Wiping /media/RESOURCES/emptyfile5 **************************************
Removed file /media/RESOURCES/emptyfile5 ... Done
Wiping /media/RESOURCES/emptyfile6 **************************************
Removed file /media/RESOURCES/emptyfile6 ... Done

This took around 20 hours - roughly 1GB per hour.

This process seemed somewhat complex, and the amount of time taken seemed excessive for what is (these days) a relatively small amount of data.

The two programs from the secure-delete suite I'd been using (sfill and srm) use a method similar to the Gutmann method, which involves overwriting the data a large number of times in slightly different ways.
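
To put a rough number on the slowness: 38 passes over ~21GB of dummy files means something like 800GB of writes, and at the ~25MB/s dd managed earlier that alone is around 32,000 seconds - roughly 9 hours - before you add the cost of generating gigabytes of random data from /dev/urandom. Twenty hours, while painful, is therefore no great mystery.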

I decided to do some research to see if the Gutmann method is significantly more secure than other methods, and whether there were any better tools to accomplish my goal.


Compiling and installing scrub

After a brief search, I found a program called scrub.

Fedora:

sudo yum install scrub

Unfortunately there didn't seem to be an existing package for scrub to install using apt-get, but it was extremely easy to compile:

./configure
make
sudo make install

...or to create a DEB package to reuse later, replace the last step with:

sudo checkinstall -D make install

Scrub gives us a choice of nnsa, DoD and Gutmann as patterns to use for overwriting the free space on the drive.

From the scrub man page:

The effectiveness of scrubbing regular files through a file system will be limited by the OS and file system. File systems that are known to be problematic are journaled, log structured, copy-on-write, versioned, and network file systems. If in doubt, scrub the raw disk device.

...

The dod scrub sequence is compliant with the DoD 5220.22-M procedure for sanitizing removable and non-removable rigid disks, which requires overwriting all addressable locations with a character, its complement, then a random character, and verify. Please refer to the DoD document for additional constraints.

The nnsa (default) scrub sequence is compliant with a Dec. 2005 draft of NNSA Policy Letter NAP-14.x (see
reference below) for sanitizing removable and non-removable hard disks, which requires overwriting all
locations with a pseudorandom pattern twice and then with a known pattern. Please refer to the NNSA
document for additional constraints.

So now to find out if Gutmann was indeed required.


Researching Gutmann, DoD and nnsa

Some information on the Gutmann method from http://en.wikipedia.org/wiki/Gutmann_method

"The Gutmann method is an algorithm for securely erasing the contents of computer hard drives,
such as files. Devised by Peter Gutmann and Colin Plumb, it does so by writing a series of 35 patterns over the
region to be erased.
The selection of patterns assumes that the user doesn't know the encoding mechanism used by the drive, and so
includes patterns designed specifically for three different types of drives. A user who knows which type of
encoding the drive uses can choose only those patterns intended for their drive. A drive with a different encoding
mechanism would need different patterns. Most of the patterns in the Gutmann method were designed for older MFM/RLL
encoded disks. Relatively modern drives no longer use the older encoding techniques, making many of the patterns
specified by Gutmann superfluous."

Some information on the DoD method from http://en.wikipedia.org/wiki/National_Industrial_Security_Program

"DoD 5220.22-M is sometimes cited as a standard for sanitization to counter data remanence. The NISPOM actually
covers the entire field of government-industrial security, of which data sanitization is a very small part
(about two paragraphs in a 141 page document). Furthermore, the NISPOM does not actually specify any particular
method. Standards for sanitization are left up to the Cognizant Security Authority. The Defense Security Service
provides a Clearing and Sanitization Matrix (C&SM) which does specify methods. As of the June 2007 edition of the
DSS C&SM, overwriting is no longer acceptable for sanitization of magnetic media; only degaussing or physical
destruction is acceptable.
Unrelated to NISP or NISPOM, NIST also publishes a Data Sanitization standard, including methods to do so."

You can find a nicely formatted PDF copy of the DoD Clearing and Sanitization Matrix here.

For some information on the default nnsa pattern (PDF format), see this document entitled Clearing, Sanitizing, and Destroying Disks CIAC-2325.

Check out section 7.2, as it goes into much greater detail about the various overwrite methods. It even has a small section on scrub, though I suspect this is now slightly out of date (for example, the scrub man page no longer shows an "-r" option, and it also suggests that some methods from previous versions have since been deprecated).

Next I set about testing scrub.


Erasing free space with scrub

The command I used was as follows:

scrub -X -s 1G /media/RESOURCES/Erase

The '-X' argument specifies that we want to create a bunch of dummy files to fill up the available space. The '-s' argument tells scrub what size we want each file to be - in this case, 1GB. You'll notice this is exactly what I was trying to do with srm, only much easier, as it requires just one command. Note that you cannot specify a folder that already exists - scrub will fail if you try.

Also the output is much more verbose, as you can see below.

Output:

scrub: using NNSA NAP-14.x patterns
scrub: scrubbing /media/RESOURCES/Erase/scrub.000 1073741824 bytes (~1GB)
scrub: random  |................................................|
scrub: random  |................................................|
scrub: 0x00    |................................................|
scrub: verify  |................................................|
scrub: scrubbing /media/RESOURCES/Erase/scrub.001 1073741824 bytes (~1GB)
scrub: random  |................................................|
scrub: random  |................................................|
scrub: 0x00    |................................................|
scrub: verify  |................................................|

...

scrub: scrubbing /media/RESOURCES/Erase/scrub.019 1073741824 bytes (~1GB)
scrub: random  |.......................................xxxxxxxxx|
scrub: random  |................................................|
scrub: 0x00    |................................................|
scrub: verify  |................................................|

This took about 52 minutes. A big improvement!
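
It's also worth noting that scrub covers the other two jobs mentioned at the top of this page: individual files and whole drives. The -p flag selects the overwrite pattern; the file and device names below are placeholders, and be very sure of the device name before scrubbing an entire disk:

# scrub a single file with the dod pattern (scrub leaves the file in place, so remove it afterwards)
scrub -p dod /media/RESOURCES/secrets.doc
rm /media/RESOURCES/secrets.doc

# scrub an entire (unmounted) drive with the default nnsa pattern
sudo scrub -p nnsa /dev/sdb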


So does it work?

I fired up TestDisk again, and although it was still possible to see the deleted files, the limited tests I did suggested that trying to undelete them only returned random data, as expected.


In conclusion...

If you want to securely erase files or free hard disk space, scrub is well worth a look, and certainly seems the superior of the two tools I tested.

It seems somewhat pointless to use the Gutmann method: it is slow, and the sheer number of writes it makes will wear out your hard disk faster.

The Gutmann method is more of a theoretical defence - the idea is that someone with top-of-the-line equipment (which may or may not exist) could otherwise retrieve the data. On top of that, many of its passes are unnecessary with modern hard disks.

Also worth noting (on FAT32 drives at least): the names and dates of old files are still visible.* Even though they cannot be restored, this may still be more information than you want accessible on your hard disk.

If this is a concern, my suggestion would be to use a virtual encrypted file system, such as the open source program TrueCrypt.

This has two advantages. Firstly, your data will be encrypted and hence more secure. Secondly, the virtual file system takes the form of a single file, which gives no clues about what is stored on the disk.
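
On Linux, TrueCrypt also offers a text-mode wizard for creating a container. From memory it is along these lines, though the CLI has changed between releases, so check truecrypt --help on your version:

# interactively prompts for the container path, size, encryption algorithm and password
truecrypt -t -c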

To accomplish similar secure erasing tasks on Windows machines, try the free Eraser tool.


Follow up from the inbox

From: 	acky_max@XXXXXX.XX [acky_max@XXXXXX.XX]

Concerning your article http://www.perceptualmotion.co.uk/secure_erasing.html
I want to tell you that after a scrub on a disk, it is not possible to see the file
list. I used TestDisk and PhotoRec, and when they don't find anything on a
drive, they show you the list of files of the first known hard disk (I don't
know why, but I checked that the list really was my ACTUAL home by adding a
new folder to it and trying again)

So I think you were in error writing that you can still see the file list of the
scrubbed disk :)

From: "Rob Whalley" [mail@XXXXXXXXXX.XX.XX]

Thank you very much for your email - when I next update the site I will be sure
to amend the information (sadly this happens infrequently due to real life
getting in the way!).

Just out of idle curiosity... was it a drive formatted with FAT32 you were
testing? If not, was it formatted as NTFS, ext2/3/4 or something else?

From: Germano [acky_max@XXXXXX.XX]

It was an ext3. The verb "was" is very appropriate, because before using Scrub I
tried the DBAN live CD. BTW DBAN is very, very buggy, and after one hour it will
crash or something else. So I tried Scrub, with success.

The computer used with Scrub is a laptop with a hard disk with a double
ext3/NTFS partition, and the disk to wipe was connected through USB (a WD Caviar Green 1TB)