Thursday, May 10, 2007

Digital Forensic Science

For a few years now I've been contemplating the discipline called "digital forensic science," and I constantly ask myself, "Is this a science?" In fact, many of my entries here contain something to the effect of "if we are indeed a science."

This needs to start with some definitions. (If someone wants to give me a better definition than what I have below, please do.)

Wikipedia defines science as: In the broadest sense, science (from the Latin "to know") refers to any systematic methodology which attempts to collect accurate information about reality and to model this in a way which can be used to make reliable, concrete and quantitative predictions about future events and observations.

Wikipedia defines Computer Science as: Computer science, or computing science, is the study of the theoretical foundations of information and computation and their implementation and application in computer systems.

Digital Forensic Science as defined by the DFRWS is:

"The use of scientifically derived and proven methods toward the preservation, collection, validation, identification, analysis, interpretation, documentation and presentation of digital evidence derived from digital sources for the purpose of facilitating or furthering the reconstruction of events found to be criminal, or helping to anticipate unauthorized actions shown to be disruptive to planned operations."

Now that the definitions are out of the way, let's take a deeper look into this thing called "digital forensic science," or DFS.

DFS falls under the forensic science umbrella, which is the application of science to answer questions of the legal system. However, there is a component missing from DFS, and that's the foundation of science. Let me explain. Every other forensic science has a founding science that backs its use in a legal setting: entomology is the foundation for forensic entomology, odontology is the foundation for forensic odontology, and so on. Digital Forensic Science in practice has no such scientific foundation. It is perhaps a discipline of Computer Science, but there doesn't appear to be a clear scientific foundation for what is done in the name of DFS.

Based on the DFRWS definition, DFS is directly related to the legal system and the law. Therefore, we can look at what qualifies as forensic testimony of a digital forensic scientist. Daubert is the predominant qualifier, so let's take a look at it.

Daubert asks us four main questions related to scientific forensic testimony.

  1. Whether the theory used by the expert can be and has been tested.
  2. Whether the theory or technique has been subjected to peer review.
  3. The known or potential rate of error of the method used.
  4. The degree of the method's or conclusion's acceptance within the relevant scientific community.

While the fourth item will always be debated, as it is in any science, the first three usually must be satisfied for acceptance.

In many cases DFS fails these questions, because they all assume one thing: a scientific foundation for the technique or method being applied by an examiner.

To address them in order...
1) Testing.

This is not just testing the theory, but testing it in a repeatable manner. The experiment must be documented well enough that it can be repeated, and each repetition should produce the same results.

This is probably the easiest part of the Daubert challenge. However, there are many variables that could affect the outcome. When testing a theory or method, one must attempt to take every variable into account, lest it be disproved by another scientist.

We have the CFTT, but that's tool testing, and there is no standard for theory or method testing. That's not to say I think there should be a hard-coded standard, but a framework should exist. The SWGDE and CFTT both have frameworks, but as I said, they are for tool testing, not method or process testing. As such they cannot be used without modification, and modification of the testing method invalidates the procedure, as it is no longer "scientifically derived."
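To make the idea concrete, here's a minimal sketch in Python of what repeatable method testing could look like: run the documented procedure several times and confirm the output is bit-for-bit identical each run. The tool invocation and paths are hypothetical placeholders, not a real procedure.

    import hashlib
    import subprocess

    def sha256_of(path):
        """Hash a file in chunks so large images don't exhaust memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                h.update(chunk)
        return h.hexdigest()

    def repeatability_test(command, output_path, runs=5):
        """Run the documented procedure several times; a repeatable
        method must produce bit-identical output on every run."""
        digests = set()
        for _ in range(runs):
            subprocess.run(command, check=True)  # hypothetical tool invocation
            digests.add(sha256_of(output_path))
        return len(digests) == 1  # True only if every run hashed identically

    # Hypothetical usage: image a device and verify the runs agree.
    # ok = repeatability_test(
    #     ["dd", "if=/dev/sdb", "of=/tmp/evidence.dd", "bs=4096"],
    #     "/tmp/evidence.dd")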


2) Peer review.

As I've said before, we have no formal peer review in the field. We have a few journals, but practitioners of DFS have no body to submit their procedures to for peer review, and in many cases they cannot because of policy. Labs rarely receive certification from a governing body, and until the day comes when all computer forensic labs require certification, the processes and procedures that take place within a given lab could be suspect. How many can claim that their procedures and processes have been peer reviewed?

3) Error rates.

There are no published error rates for procedures, tools, methods, etc., and we must often answer "no" to this question, yet errors occur frequently. Vendors place the responsibility of determining the rate of error on the users of their products, while I tend to believe that the software should be submitted to a review group in order to receive a "forensic" stamp of approval. Beta testing is nice, but it is not and should not be a method of software certification, especially when the lives and well-being of individuals are at risk.


FTK crashes on a semi-regular basis if you overload it; I've had cases where it would not process an image regardless of what I did, and by many accounts EnCase 6 has too many bugs to count. This is relatively simple to pick at:
Q: "Mr examiner, how can you trust the results of your tools when they crash in the middle of processing the evidence?"

A: "I re-ran the examination procedures and they concluded without fault".

Q: "So you are in the habit of re-testing until you are satisfied with the result?"

Q: "Did you validate your findings with other software?"

A: "Yes I validated my findings using another tool"

Q: "And what is the error rate with that software?"

A: "I don't know of an error rate with that software"

Let me also pick on anti-malware software. Once upon a time, NIST approved Symantec and McAfee for virus scanning. If you used these and nothing was found, then you could effectively claim the system was malware free. As we all know, the error rates of these tools are tremendous. How can anyone claim that a system is malware free unless you scan it with every tool available?

As a test of what's out there, I submitted the pwdump binary from the recent compromise of my honeynet to Norman, VirusTotal, and Jotti's virus scan. Out of the 31 scanners at VirusTotal, only 5 picked it up as malware; only two at Jotti did (both included in the 5 from VirusTotal); and Norman claimed it wasn't malware. To boot, none of them had a specific definition for it, and neither Symantec nor McAfee recognized the binary. We all know this is a regular occurrence with AV software, but unless we scan the file system with every available tool, the best we can ever claim is: "given the tools in use and the current definition files, I can say, with a small degree of certainty, that this system is malware free." This will permanently leave the door to the Trojan defense wide open.
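If nothing else, the aggregation itself is easy to automate. A rough sketch of running every locally available scanner and reporting the aggregate verdict rather than any single tool's; the scanner command lines and exit-code convention below are entirely hypothetical, since every vendor's CLI differs:

    import subprocess

    # Hypothetical command lines: placeholders, not real invocations.
    SCANNERS = {
        "scanner_a": ["scan_a", "--file"],
        "scanner_b": ["scan_b", "-f"],
    }

    def scan_with_all(path):
        """Run every available scanner and record each verdict. A single
        "clean" verdict proves little; only the aggregate is meaningful."""
        verdicts = {}
        for name, cmd in SCANNERS.items():
            result = subprocess.run(cmd + [path], capture_output=True)
            # Assumption: exit code 0 means no detection. Real tools vary.
            verdicts[name] = "clean" if result.returncode == 0 else "detected"
        return verdicts

    # Hypothetical usage:
    # verdicts = scan_with_all("pwdump.exe")
    # hits = sum(v == "detected" for v in verdicts.values())
    # print(f"{hits} of {len(verdicts)} scanners flagged the file")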

My point is that the tools available today will always have a rate of error, and we must know what those rates are. Yet how many of us can answer something other than "no" when asked, "Is there a known rate of error for the procedure/tool/process you used?"
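One way to start answering "yes" is to measure error rates empirically: seed a test image with known content, run the tool, and compare its findings against the ground truth. A minimal sketch with invented numbers:

    def measure_error_rates(tool_results, ground_truth):
        """Compare a tool's findings against a corpus whose true contents
        are known. Both arguments are sets of stable identifiers (hashes)."""
        missed = ground_truth - tool_results    # false negatives
        spurious = tool_results - ground_truth  # false positives
        return {
            "false_negative_rate": len(missed) / len(ground_truth),
            "false_positive_count": len(spurious),
        }

    # Invented numbers: 1000 files seeded into a test image, 12 missed.
    seeded = {f"hash{i:04d}" for i in range(1000)}
    found = seeded - {f"hash{i:04d}" for i in range(12)}
    print(measure_error_rates(found, seeded))
    # {'false_negative_rate': 0.012, 'false_positive_count': 0}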


In many ways it would seem that DFS fails, or should fail, Daubert because of the discipline's lack of scientific foundation. However, as we know, examiners regularly survive Daubert challenges and are allowed to testify.


Heading back to the definition, DFS is not strictly about reconstructing events found to be criminal or helping to anticipate unauthorized actions. While it's largely used in a legal or criminal setting, or to determine unauthorized actions, these are merely applications of DFS. Given this, is it reasonable for the definition to describe the applications of the science rather than the science itself?

As Mark Pollitt suggested in his 2004 keynote at the DFRWS, perhaps Roles need to be added to the Framework. Given this, I think the definition needs to be reworked.


The science in the case of DFS is not about proof, as we can't actually prove anything in a concrete manner. If you are ever asked for proof of something, can you claim that your conclusion is 100% accurate? For instance, can we prove who was actually sitting at the computer unless we saw them? The field is largely conjecture, and any argument we make is based on circumstantial evidence: yes, someone with that IP, at that time, logged in using that user name, visited that website, downloaded or viewed that image, and created that file.

You can see how circumstantial evidence can mount to build a strong case, but is it strong enough? In many cases yes, but we're seeing instances of "the Trojan defense," or "the defendant didn't actually knowingly store those CP files in his browser cache." While that's a bit of clever lawyering, it shows that there are holes in our processes and procedures. I will of course submit that there will always be holes, as nothing is bulletproof, but the lack of standardized scientific methods (which the DFRWS claims are used in DFS) creates situations like this.


As such, I wonder if we shouldn't rely on something more concrete to help us effectively conclude that what we say happened actually happened in the way we claim. Under current circumstances, the best we can do is present a degree of certainty, or a level of confidence, that events occurred as we say they did. Eoghan Casey was on to something in the second edition of Digital Evidence when he defined a certainty scale for the trustworthiness of digital evidence, and I think we could benefit from formal use of this type of scale for our findings. I'm also beginning to think that by using math, either statistical confidence intervals or Bayesian credible intervals, we can build a stronger model for Digital Forensic Science, and thereby strengthen the presentation of digital evidence and its use in the legal system.
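To illustrate the Bayesian idea, here's a minimal sketch (using SciPy, with invented trial counts) that turns a tool's validation results into a credible interval for its true error rate, which is exactly the kind of quantitative statement Daubert's error-rate question calls for:

    from scipy.stats import beta

    def credible_interval(errors, trials, level=0.95):
        """Beta(1, 1) prior updated with the observed validation results;
        returns an equal-tailed credible interval for the true error rate."""
        posterior = beta(1 + errors, 1 + trials - errors)
        return posterior.interval(level)

    # Invented numbers: 3 errors observed across 500 controlled test runs.
    lo, hi = credible_interval(errors=3, trials=500)
    print(f"95% credible interval for the error rate: {lo:.4f} to {hi:.4f}")

Instead of testifying "there is no known error rate," an examiner could then say "across 500 controlled trials, the error rate falls between these bounds with 95% credibility." That's a far stronger footing for digital evidence.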

Thoughts?

1 comment:

Anonymous said...

These are really interesting and important points which you raise.

The recent increase in interest in "Live Forensics" within the Digital Forensic community has caused me to question whether we can actually call it "forensic" at all. I think Incident Response is more appropriate until further work is done to reinforce findings gathered from the likes of RAM, network connections, etc. The sheer volatile nature of the data collected makes it impossible to recreate and to lay open to re-examination.

In the UK we are guided by the ACPO principles, the third of which states: "An audit trail or other record of all processes applied to computer based evidence should be created and preserved. An independent third party should be able to repeat those processes and achieve the same result." For me the important part of this principle is the second sentence.

Although we can establish procedures for the collection of volatile data, being able to do so with a degree of certainty or a level of confidence is far more difficult. I believe we are not seeing challenges at present to evidence obtained from volatile sources because many examiners and lawyers don't understand the nature of the evidence.

Your point about Eoghan's scale of trustworthiness is valid, but more has yet to be done to establish what the scale is and how different techniques and tools can be compared against it.

Interesting points made, though.