Saturday, July 14, 2007

The rise of network forensics

I am starting to think that "network forensics" is going to quickly become the next "big thing"(TM) in the digital forensics discipline.

Well, what is network forensics?

By definition: "Network forensics is the capture, recording, and analysis of network events in order to discover the source of security attacks or other problem incidents."

Unfortunately, that's the only definition I could actually find. Why is that, I wonder? Perhaps because there is no such thing as network forensics? Is it really just another name for network security monitoring or intrusion analysis?

NSM is the collection, analysis, and escalation of indications and warnings to detect and respond to intrusions. Intrusion analysis is much the same.

Let's look at a few of the tools used for each, shall we?

tcpdump
Wireshark
ngrep
tcpreplay
p0f
NetFlow tools (Cisco, Argus, etc.)
Snort or other IDS
QRadar
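Flow tools like those in the list reduce raw traffic to per-session summaries. A minimal sketch of that aggregation idea, using awk over hypothetical flow records (illustrative only — not a real NetFlow or Argus export format):

```shell
# Hypothetical flow records: src-IP dst-IP dst-port bytes
# (made-up sample data, not a real NetFlow/Argus export)
cat <<'EOF' > flows.txt
10.0.0.5 192.0.2.10 80 1200
10.0.0.5 192.0.2.10 80 800
10.0.0.7 192.0.2.22 25 300
EOF

# Aggregate bytes per (src, dst, port) tuple, as a flow collector would
awk '{ bytes[$1" "$2" "$3] += $4 }
     END { for (k in bytes) print k, bytes[k] }' flows.txt
```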


These are obviously only a few tools, many of them open source. Either way, they are multi-purpose tools used for things like protocol analysis, traffic analysis, intrusion analysis, and security monitoring, and now... network forensics? Let's be honest with ourselves: network "forensics" appears to be just a buzzword for what has been done in the analyst field for years. There's nothing "forensic" about it... or is there?

So, where does the "forensic" component come into play? Simple: collection, preservation, and presentation as evidence.

Network forensics has several criteria that must be met in order for it to be useful as evidence.

My initial thoughts...

1) It must be complete

Whether it's session data, full content, alert data, or something else, it must be complete. A capture that's missing packets could be seen as damaging to a case, so a capture system must be capable of keeping up with the network it's connected to. Anomaly systems like QRadar, which decide what content to capture (or not), should not be used as evidence because the capture will be incomplete. A partial capture could perhaps be used to obtain a wiretap or a warrant, but presenting it as fact or as having evidentiary value would, in my opinion, jeopardize a case.
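One practical completeness check: tcpdump prints a summary of packets dropped by the kernel when a capture ends, and that count should be recorded. A small sketch of parsing such a summary (the summary text below is a made-up sample, not output from a real capture):

```shell
# Made-up tcpdump exit summary, for illustration only
summary="12 packets captured
15 packets received by filter
3 packets dropped by kernel"

# Extract the kernel-drop count and flag an incomplete capture
dropped=$(printf '%s\n' "$summary" | awk '/dropped by kernel/ { print $1 }')
if [ "$dropped" -gt 0 ]; then
    echo "WARNING: capture incomplete ($dropped packets dropped)"
fi
```

A nonzero drop count means the sensor could not keep up, and the gap should be disclosed rather than discovered later.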

2) It must be authentic

Captures must be proven authentic. Hashing routines should be run as each capture log rolls over, or each packet should be hashed as it's stored, before analysis begins. In addition, the capture should be performed by two independent systems if possible.
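As a sketch of the hashing idea, assuming SHA-256 via the common `sha256sum` utility (the capture file here is a stand-in I created for illustration, not a real pcap):

```shell
# Stand-in for a rotated capture file (in practice, a pcap from tcpdump -w/-G)
printf 'stand-in pcap bytes' > capture-001.pcap

# Record the hash at rotation time, before any analysis touches the file
sha256sum capture-001.pcap >> capture-hashes.txt

# Later, verify the file is unchanged before presenting it
sha256sum -c capture-hashes.txt
```

Any later modification of the capture file would make the `-c` verification fail, which is exactly the property you want to demonstrate on the stand.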

3) Time is critical

The collection system's clock must be synchronized, and any time disparity must be accounted for.
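When a sensor's clock is known to be skewed, recorded timestamps can be corrected before correlating with other sources. A minimal sketch, using a hypothetical 2.5-second skew and a made-up epoch timestamp:

```shell
# Hypothetical: the sensor clock was measured 2.5 s fast against a trusted source
skew=2.5
recorded=1184400000.75   # epoch timestamp taken from the sensor's capture

# Subtract the skew so the timestamp lines up with the trusted clock
corrected=$(awk -v r="$recorded" -v s="$skew" 'BEGIN { printf "%.2f\n", r - s }')
echo "$corrected"
```

Documenting the measured skew and the correction applied is what lets the adjusted timeline survive scrutiny.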

4) Analysis systems need logging capabilities

Any system used to analyze a network capture destined for evidence must allow the analyst's actions to be logged.
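One low-tech way to get that logging is to wrap each analysis command so the command line and a UTC timestamp land in an audit log. A minimal sketch (the wrapper name and log path are my own invention, not from any tool):

```shell
# Append each command, with a UTC timestamp, to an audit log before running it
log_run() {
    printf '%s | %s\n' "$(date -u +%FT%TZ)" "$*" >> analyst-audit.log
    "$@"
}

# Stand-in for a real analysis step (e.g., opening a capture in a tool)
log_run echo "opening capture.pcap for analysis"
cat analyst-audit.log
```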

5) Anyone presenting network based evidence must be trained in packet and protocol analysis, not simply the tools used.

In my opinion, being able to read a packet's contents in depth, and to interpret and analyze them accurately, is of the utmost importance. Being able to explain the whys and hows is critical. It's easy to jump to incorrect conclusions with network-based evidence. Richard rebuffs here. And here is an incorrect conclusion. It's very easy to do, since many scenarios will fit a similar signature.

6) It must be used in conjunction with host-based forensics if possible.

Of course, not every scenario is ideal, but remember that the power of digital evidence is in the ability to corroborate. Corporal and environmental evidence should be used to corroborate each other if an accurate reconstruction is to take place. The value of the evidence is bolstered when the two sources support each other.

7) Sensor deployment must be appropriate.

It does almost no good to deploy a sensor at the perimeter if the attack is internal, and it might make sense to deploy multiple sensors in such a case. Care must be taken to deploy at the right time and in the right location.
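For an internal case on a Cisco switch, one option is mirroring the relevant VLAN to the port where the sensor is attached. A minimal SPAN configuration sketch (the session number, VLAN, and interface names are examples, and direction support for VLAN sources varies by platform):

```
! Mirror traffic on VLAN 10 to the sensor attached to Gi0/48
monitor session 1 source vlan 10 both
monitor session 1 destination interface GigabitEthernet0/48
```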


What are others' thoughts on the subject of network forensics? Is it snake oil, or a bona fide digital forensics specialty?

1 comment:

alec said...

I can't believe anyone could call it snake oil!!

I had a case recently where a rootkitted spambot was doing its dirty work from within one of our networks. By capturing its traffic (via 'ip traffic-export' on a suitably placed router), the bot's C&C channel could be observed, along with the demographics of the spam itself.

Also, historical netflow analysis spoke of exactly when the spambot was active, which could give clues as to the infection date.

In this event, Network Forensics bore fruit quickly where the complementary (and equally necessary) host-based analysis was taking time. The C&C channel obtained from the network betrayed a Rustock variant:

http://www.usenix.org/events/hotbots07/tech/full_papers/chiang/chiang_html/

It's not quite the variant above, but its theory of operation was similar enough to allow for removal of the infection.

I can't live without NBF. I have an 80+ site network, and I capture session information from each of those sites via netflow (from router platforms) and debug-level logging (from PIX firewalls). All of this ends up on a CS-MARS where you can query and correlate to your heart's content :)