In many US states, the Daubert rulings serve as a litmus test for the acceptance of a forensic expert's testimony in court. One of the challenges for acceptance of a method is its known error rate, or potential for error. I hinted at this in a previous entry when I mentioned the margin of error for the procedures used by security teams. For the purposes of this entry, I'd like to expand on that point as it applies to those who acquire images using a procedure based on typed commands.
I'd like to use dcfldd as my example. Let me state up front that I love dcfldd - I use it all the time, and it's a fantastic tool. The flaw here isn't in the tool (although its vast capabilities can make an acquisition command very complex); the major source of error is the human operating it. Let's take a few sample acquisition commands, going from simple to complex:
dcfldd if=/dev/fd0 of=/home/foo/evidence/dcfldd_test.dd bs=512 hashwindow=0 hashlog=/home/foo/md5.txt hash=md5
hogfly@blackwootsy:~$ wc /tmp/acquisition.txt
1 7 113 /tmp/acquisition.txt
dcfldd if=/dev/hda conv=noerror,sync hashwindow=0 hash=sha256 hashconv=after hashlog=/media/sda1/Case123/Images/SN1234563.dd.sha256 of=/media/sda1/Case123/Images/SN1234563.dd
hogfly@blackwootsy:~$ wc /tmp/acquisition.txt
1 8 176 /tmp/acquisition.txt
dcfldd if=/dev/hde conv=noerror,sync hashwindow=1M hash=md5 hashconv=after hashlog=/mnt/hdg1/dd-a00032/a00032.hash.log split=640M splitformat=aa of=/mnt/hdg1/dd-a00032/a00032.image of=/mnt/hdh1/dd-a00032/a00032.image
hogfly@blackwootsy:~$ wc /tmp/acquisition.txt
1 11 218 /tmp/acquisition.txt
Please be aware that I was replacing the text in /tmp/acquisition.txt each time - I just didn't include those commands here, for brevity.
So, what we see here is up to 218 characters for a single command - one that creates two images on two separate disks. And note that this is simply the culmination of the acquisition process; there are several other steps I haven't included here, such as directory creation, mount commands, date, hwclock, dmesg, hdparm, disk_stat, fdisk, mmls, and the verification commands.
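To give a sense of how much additional typing surrounds that one command, here's a rough sketch of those surrounding steps built around the second example above. The device names, paths, and flags are hypothetical - treat it as illustrative, not as a checklist:

# mount the collection drive and create the case directory
mount /dev/sda1 /media/sda1
mkdir -p /media/sda1/Case123/Images
# record the system and hardware clocks, and document the subject disk
date >> /media/sda1/Case123/notes.txt
hwclock >> /media/sda1/Case123/notes.txt
dmesg >> /media/sda1/Case123/notes.txt
hdparm -I /dev/hda >> /media/sda1/Case123/notes.txt
disk_stat /dev/hda >> /media/sda1/Case123/notes.txt
fdisk -l /dev/hda >> /media/sda1/Case123/notes.txt
mmls /dev/hda >> /media/sda1/Case123/notes.txt
# ...the 176-character dcfldd command goes here...
# verify the completed image against the hash log
sha256sum /media/sda1/Case123/Images/SN1234563.dd

Every one of those lines is another chance for a typo.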
I decided to take a stroll down Google lane in search of human error rates for multistep typing processes, and came across a site collecting studies of human error. All credit for the resource goes to Ray Panko; I leave validation of the sources to those who read the site.
The studies he lists are quite shocking when it comes to typed error rates - they range from 5.6% all the way up to 82% for SQL professionals!
I wonder what the error rate would be for conducting an acquisition from the command line in Linux. Is anyone aware of a study of something like this? Recall, if you will, that the Daubert challenge is about known or potential error rates in the procedure. Of course, the damage caused by typos can be mitigated by using a write blocker, but many folks like to believe that you don't *need* a write blocker if you acquire an image using Linux. I tend to think that a study of "error rates of command line acquisitions" would reveal a large potential error rate in the process, and that those who use command line acquisition tools should wrap the commands in scripts, or use a GUI tool capable of meeting their needs.
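For what it's worth, here's a minimal sketch of the kind of wrapper I have in mind, built around the second example above. The script name, arguments, and paths are all made up, and a real version would need mount checks, the documentation steps, and proper logging:

#!/bin/sh
# acquire.sh - hypothetical wrapper for a dcfldd acquisition
# usage: ./acquire.sh <source device> <case number> <evidence number>
[ $# -eq 3 ] || { echo "usage: $0 <device> <case> <evidence>"; exit 1; }
SRC=$1
CASE=$2
EVID=$3
DEST=/media/sda1/$CASE/Images
mkdir -p "$DEST"
# the long, error-prone command is written once, reviewed, and then reused
dcfldd if="$SRC" conv=noerror,sync hashwindow=0 hash=sha256 hashconv=after \
    hashlog="$DEST/$EVID.dd.sha256" of="$DEST/$EVID.dd"

The point isn't this particular script; it's that the 176-character command gets typed and proofread once, while the per-case typing shrinks to something like ./acquire.sh /dev/hda Case123 SN1234563.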
Thoughts?
Thursday, April 19, 2007
1 comment:
Very interesting. I know that when I type, my fingers don't always cooperate with my brain. I would agree with putting as much as possible into scripts and GUIs. That way you can show what the scripts did, the process can be repeated against the same data, and you don't leave steps out.
One thing to note: when using Helix without a write blocker, there is nothing stopping you from repartitioning the very drive you want to acquire (accidentally, of course, but explain that to the client). I did this as a test to prove to someone why you need a write blocker.