Just yesterday I was sitting on the bus when we pulled up to a stop where at least 20 people boarded. I happened to be sitting near the rear exit door when I saw a youngish kid jump through the open door and land in the seat in front of me. I looked around to see if anyone else noticed... apparently they didn't, or didn't care to say anything. Then I looked up at the front of the bus, where all of the passengers were waiting to go through the "security check": scan a card, wave an RFID card, use a traditional boarding pass, or pay with cash. The bus driver, though he has a few mirrors to look into, was fully preoccupied with the boarding passengers and had no way of detecting the fare thief.
This, friends, is a compromise of the system, where the system is the bus. All of the good little applications on the system checked in with the bus driver and were deemed acceptable. That one sneaky application jumped in through an exposed hole in the system, and it looks just like all of the other applications on the system. I, playing the role of antivirus or other security technology, inspected it and let it happen. The bus driver, being the administrator, was too busy with other duties to notice.
There's a saying that it's rarely the things we fear most that kill us. That seems to be the case when it comes to security. That which we armor our systems against is rarely what leads to compromise. In this case, the bus has multiple methods of authentication, guarded and monitored by a human. However, the back door of the bus gets opened every time the front door does, so anyone can go out the back door, or come in through it. My point? Every time you open the door to your system to put new things in it, you increase the potential for compromise by opening a back door.
We build great firewalls, and have operating system security products, and plenty of gee whiz tools, but there's also another saying...
"You as a system administrator can screw up only once." That's all it takes to lead to compromise. All it takes is one misconfiguration, one step in the wrong direction, one rear door of the bus staying open three seconds longer than it should, and bam, you've got a compromise on your hands.
Recall Routine Activity Theory, if you will. I firmly believe that RAT explains security incident occurrence. Why do security incidents occur?
1) A target of opportunity
2) A lack of proper guardianship
3) A motivated offender
Thursday, August 28, 2008
Monday, August 25, 2008
Knowing is half the battle
Any G.I. Joe fans out there? That was the catchphrase used at the end of the cartoon, during the public service announcement where they'd teach some badly behaving kid a lesson. In a recent investigation, my network logs showed a MySQL intrusion using a root account and the ubiquitous User Defined Function attack. When I arrived on site to take a look at the system, the manager asked me what happened. I said, "Based on network logs, it looks like a database server got compromised."
If there ever was a deer in the headlights look this was it.
"Database server? What database server, I didn't know there was one".
We grabbed the user of the system... same look, and same response.
"Database server? What database server, I didn't know there was one".
Many software packages come bundled with supporting software that isn't readily apparent during installation. It may say something like "installing database". Microsoft is actually good about this and asks something like, "This product requires a database; do you want MSDE or a full SQL installation?" After doing software installs all day long, the tech working on the computer is probably surfing the web or getting coffee while the install happens, comes back when it's done, and that's that. Sure, I understand that; who the heck watches software installs? No one I know, unless they're in a hurry or compiling something on a Linux box.
However, the software needs to be checked before installation, because you need to know what to expect. You need to know that you just opened a hole in the security of your organization, and someone now needs to deal with it appropriately. When risk is introduced into an organization, it needs to be known and addressed, and remember... knowing is half the battle.
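The UDF attack mentioned above leaves a recognizable tool mark: a `CREATE FUNCTION ... SONAME` statement pointing at an uploaded library. A minimal sketch of flagging that mark in a MySQL general query log follows; the log format and function names here are hypothetical, so adjust the pattern to your own logging.

```python
import re

# A MySQL UDF attack typically registers a function backed by an uploaded
# shared library, e.g.: CREATE FUNCTION sys_exec SONAME 'udf.dll'.
# This pattern flags that statement wherever it appears in a log line.
UDF_PATTERN = re.compile(r"CREATE\s+FUNCTION\s+\S+\s+SONAME", re.IGNORECASE)

def flag_udf_queries(log_lines):
    """Return the log lines that look like UDF installation attempts."""
    return [line for line in log_lines if UDF_PATTERN.search(line)]

# Hypothetical general-query-log excerpt for illustration.
sample_log = [
    "080825 10:01:02  42 Query  SELECT * FROM orders",
    "080825 10:01:09  42 Query  CREATE FUNCTION sys_exec SONAME 'udf.dll'",
    "080825 10:01:11  42 Query  SELECT sys_exec('net user evil evil /add')",
]
hits = flag_udf_queries(sample_log)
```

On a live server, `SELECT * FROM mysql.func;` lists the registered UDFs, which makes a quick cross-check against what the logs show.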
Sunday, August 24, 2008
When users attack
This is one of those things that make my head hurt. Just last week an IDS alert fired and ended up in my inbox. This was one of those alerts that require validation, so after consulting network logs, a conclusion was reached that this was an incident and not just an event. Phone calls were made, emails were sent, and the local system admin called to let me know they were on their way to the machine. Great news. I instructed the sysadmin not to touch the system, not to let the user touch the system, and to just unplug the network cable. "Sure thing," said the sysadmin. I was already offsite on another engagement, so it took a short while for me to get to the site. Upon arrival I checked in with the receptionist and spoke to the manager. The sysadmin was no longer around. I was directed to the computer user, and together we headed to the office where the computer was located. As instructed, the user was not operating on the computer that had been compromised. This seemed promising...
We get to the office and the user says...
"I just got done running an antivirus scan, and it didn't find anything".
I'm literally at a loss for words at this point. "Err, uhm, what?!" I think to myself.
Friendly user offers up lots of other information about their actions and the nature of the system, including that they had no idea the vulnerable piece of software that got exploited was even installed on the system. This is bad(TM). It could be worse, I suppose, but to think that the sysadmin echoed back my request - agreed to pull the network cable, remove the user from the system, and not touch the system themselves - and then the user scanned the system... yikes. But wait! It gets better. The user has the risk history window open in Symantec AntiVirus. Well, well, looky what we have here: a scan that precedes the one just run by the user, and it's an administrative scan that identified lots of badness. Five pieces of badness, to be exact. I suppose it's a good thing that antivirus found the malware, but did it find it all? How can we be sure?
When next I speak to my sysadmin friend, I think we'll need to talk. Ever feel like Chris Tucker and Jackie Chan in Rush Hour? "Do you understand the words that are coming out of my mouth?" The ever-elusive Juntao snuck in, did damage, and disappeared... all before I could get there. Common, isn't it?
My sysadmin stick is being sharpened....
V is for validation
Whether it's a complaint of weird system behavior, an alert from a detection system, a phone call, or some other mechanism, a very important step must occur: validation. Validation is absolutely important, if only so we don't waste effort and charge clients unnecessarily. Not long ago I received an email alert from an organization overseas, alerting our group to a system that may have been compromised. The alert went on to say that the system was likely compromised and a rootkit was probably installed. Like any well-intentioned IR team, we took the alert seriously and started making some phone calls. A time was arranged to preview the system in question. Two of us visited the datacenter housing the system and, wouldn't you know it, the system that had been identified was not a single system.
It was the head node of a high-performance cluster with 64 nodes. With models and simulations actively running on the system, we naturally couldn't just power it down. So, we validate before escalating to investigation. The head node and subsequent nodes were running Linux, and we just happened to have our handy CD containing trusted, statically compiled binaries. Some of you might be saying, "Now just wait right there, you can't touch the system, you'll impact forensic integrity." Remember, please, that this is validation; we aren't in an investigation yet, so our goal is to minimize the impact we have, because we cannot avoid having an impact. If we get to a full-blown investigation, we put on our "forensic purity" hats. OK, so back to the validation...
The alert we received was nice enough to include a set of characteristics. *cough* Tool marks, anyone? *cough* The tool marks listed were of the individual nature, and even then, they varied. First things first: we need to capture memory. I liken memory captures to photographs of a crime scene, so we take our pictures before disturbing the system. OK, so we grab a copy of /proc/kcore, the kernel, and the symbol table, and shoot them to the response laptop over a netcat connection. Then we attempted to locate the locations and files that were listed in the alert. Nothing, nothing, and more nothing. Great news! But wait, just what happened here, we asked ourselves. After all, we're responders and forensic analysts; we want to be able to understand and explain.
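The transfer itself was plain netcat, but the shape of the step is worth spelling out. Here's a hypothetical Python equivalent of `nc laptop 9999 < kcore-copy` - stream the capture off the box in chunks, touching the local disk as little as possible (the host and port are illustrative):

```python
import socket

def send_file(path, host, port, chunk_size=65536):
    """Stream a file (such as a saved copy of /proc/kcore) to a listener
    on the response laptop, chunk by chunk, like netcat would."""
    with socket.create_connection((host, port)) as sock:
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:  # EOF: the whole capture has been streamed
                    break
                sock.sendall(chunk)
```

On the laptop side, `nc -l 9999 > kcore.img` (or any equivalent listener) receives the stream; hashing the file on both ends afterward verifies the copy arrived intact.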
Tracing back through the alert, a username was identified, and that's why we received the alert. The alerters thought, "Aha, we have a username and an IP address that the user logged in to; hence the computer he logged in to is likely compromised." While we appreciated the alert, it was awfully presumptive. Yes, the user in question logged in to the system we received the alert about, from a computer in the foreign country where the alert originated - hence the username and IP address of the system he logged in to. The system he logged in from was identified as compromised by those who sent us the alert, so they alerted us that one of our systems may have been compromised. This makes good sense, but it still obviously required us to validate the compromise. Validation cost us a little effort, but we certainly saved a bundle of time by not jumping to conclusions and going right into investigation mode. The owners of the cluster would not have been happy if we had.
Sunday, August 17, 2008
Situation Normal....
When analyzing a disk image or live system, we're often confronted with the need to scan the system for malware. We need to know what was on the system, if anything, and what capabilities it has. Many people scan with well-known vendor utilities like Symantec Antivirus or McAfee. Others scan with less popular tools, but all have the same end in mind: find malware on the system by signature. I think it's past time we as examiners were honest with ourselves. Antivirus is not sufficient when attempting to detect the presence of malware on a system. Sure, it functions and will catch what it's aware of, but malware changes too rapidly for antivirus to be effective. You can scan a system or disk image all you want, but if the signature does not exist, you have no hope. Case in point: Asprox botnet-related files. Yes, I'm still watching it. Today I grabbed four of the newest binaries available.
The results from VirusTotal?
1,2,3,4
This is of course brand new malware on the block. But this is obviously a frequent thing.
Guess what? You don't stand a chance. If you're scanning a system for malware in the next few days because you're processing an image for a case or responding to an incident... FAIL. You cannot, based on an antivirus scan, even pretend to claim that the system is malware-free. Your certainty level suffers greatly, and that, friends, is what we call doubt.
Now, just because a binary evades signature detection doesn't mean you can't detect it. We just need to adapt our methodology when we search. As examiners, we must accept the fact that antivirus is a failing technology: it consistently falls short, and it is no longer reliable to base your conclusions on the results of a scan.
As such, it's time to look at alternative methods for determining the presence of malware. Malware detection in forensics needs to move to a more behavior-based approach. Booting a disk image in VMware and looking at system behavior is a must. Capturing memory and analyzing it is a must. Running a sniffer while the VM is booted is a must. Using multiple antivirus products is no longer optional. I'd suggest that at least three products be used to scan all disk images and systems during response and/or forensics. What am I using? Symantec, Kaspersky, BitDefender. With the samples I listed above, of course, these wouldn't work... but the point is simply this: just as more sources of evidence lead to a more solid case, consulting more sources during malware analysis leads to a higher degree of reliability in the results. Is it perfect? No, not at all. Is it more reliable? Yes, it's more reliable if you:
1) Look at the filesystem for things that don't fit: new files in system32, new drivers, new services, new batch files, VBS scripts, etc.
2) Scan with multiple AV products.
3) Boot the disk image in VMware, watch the behavior of the system, capture memory, run a network sniffer.
4) Analyze the memory, the behavior, and the sniffer output (put it through Snort and reconstruct streams).
This is far better and more reliable than simply stating "I scanned the disk image with Antivirus Product X, and could not identify any malware on the system. The system is clean."
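Step 1 of that checklist - looking for files that don't fit - is easy to sketch. A minimal example in Python follows; the extension list and cutoff are illustrative assumptions, not a complete set, and in real casework the timestamps should come from the forensic timeline rather than a live filesystem:

```python
import os

# Extensions that frequently show up in malware drops; illustrative only.
SUSPECT_EXTENSIONS = {".exe", ".dll", ".sys", ".bat", ".vbs"}

def files_modified_since(directory, since_epoch):
    """Walk `directory` (e.g. an exported system32) and return paths with
    a suspect extension modified after `since_epoch` (a Unix timestamp)."""
    hits = []
    for root, _dirs, names in os.walk(directory):
        for name in names:
            path = os.path.join(root, name)
            ext = os.path.splitext(name)[1].lower()
            if ext in SUSPECT_EXTENSIONS and os.path.getmtime(path) > since_epoch:
                hits.append(path)
    return sorted(hits)
```

Anything this flags still needs eyes on it - a fresh driver might be a legitimate update - but it narrows hundreds of files down to a reviewable handful.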
Saturday, August 16, 2008
Windows Forensic Environment
Not much coverage on this yet...and I don't really know why.
The Windows Forensic Environment is based on the Windows OPK or AIK, depending on your affiliation. I'm not an OEM, so I got to use the AIK. I can't share many details on building this environment right now, as I don't have my documentation on hand; however, consider the possibilities. We may have something on our hands that can give Windows users a fair chance at reasonable forensics using a bootable CD. Sure, we've had Helix for quite a while now, and it's been great, but if you've ever trained people to use Linux when they are completely unfamiliar with it, the odds are high that you'll get blank stares. DOS prompts are more familiar to many people, as are programs like EnCase (which works really well in the environment). X-Ways Forensics works as well, as does F-Response - which provides an interesting opportunity for using this as a known-clean environment in a VM and in a live-capture scenario. Unfortunately, FTK does not function, as a result of the CodeMeter USB key. At least Imager Lite works, though. It's been noted that the environment has a strong affinity for modifying the disks in the system, so if you're using this, do some heavy testing. I'll have more information on this later.
Bots, no longer child's play
A few years ago, botnets were pretty much child's play. The bot herders would run an IRC server and sloppily infect computers, and detection was pretty simple. You'd find a rogue FTP server and some form of bot capable of DoS'ing - maybe some good movies and weird music too - but that's about it.
Over the past few weeks I've been following Asprox and some other botnets. I'll start with Asprox. Sure, it's been documented by some of the biggest names around: Joe Stewart (who does amazing work, if you haven't checked), SANS, and Dancho Danchev (who does amazing work as well). Asprox right now is launching massive SQL injection attacks, and it is succeeding in large numbers. The injected script amounts to a simple XSS attack, but wow, is it effective. So, once your favorite website has been compromised (Yahoo, anyone?) and your users visit the site, what happens? If you visit the page with IE, you get sent down one path; if you visit with Firefox, you go down a different one. What I found interesting is that the code used in the attack exploits MS08-041 in addition to performing simple XMLHTTP GETs and serving malicious Flash (selected by browser detection, then Flash version detection), getting the browser to trust the binary. The binary then modifies the anti-phishing bar for IE, and the botnet comes complete with statistics tracking and updating.
The victim computer becomes pwned in every sense of the word. You end up with a keylogger, a game password stealer, and a general information stealer; you're connected to the botnet C&C, which is proxied; and throw in a bit of fast flux just for fun. In the two weeks I've spent on this, I've seen the malware change five times - that's new malware, not just revisions - and the SQL injection attacks are now coming in with variable padding, attempting to bypass any filtering of the attacks. This botnet has been used for spamming, phishing, and now SQL injection attacks to grow the pharm, as it were. And Asprox is small compared to other, more nefarious botnets.
Yet another botnet I'm looking into (not sure if it has a name) is being used for spam. I had a drive brought to me recently and took a look at it after examining the network traffic. Well, it uses methods similar to those of Coreflood, in that C&C communication is done over HTTP connections and consists of simple POST and GET requests, though it currently connects over port 18923. Yeah... that's HTTP over port 18923. This particular botnet comes with a rootkit that is not detected by modern signatures in software like Symantec (big surprise there, I know), although antivirus evasion is apparently pretty darn easy.
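That "HTTP on port 18923" pattern is exactly the kind of thing a crude traffic heuristic can flag without any malware signature at all. A sketch of the idea - the expected-port list is an assumption you'd tune to your own environment:

```python
# Ports where HTTP requests are expected in this (hypothetical) environment;
# a request-shaped payload headed anywhere else deserves a closer look.
EXPECTED_HTTP_PORTS = {80, 8080, 8000, 3128}
HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ")

def looks_like_covert_http(dst_port, payload):
    """Return True when a TCP payload starts like an HTTP request but is
    destined for a port where HTTP has no business being (e.g. 18923)."""
    return payload.startswith(HTTP_METHODS) and dst_port not in EXPECTED_HTTP_PORTS
```

The same check can of course be expressed as a Snort rule matching "GET "/"POST " on non-HTTP ports; the point is that C&C hiding in plain HTTP still stands out by where it goes.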
It's been known for quite some time in small circles that botnets are big business, but many people out there still don't get it. They see a system spamming and nuke it from orbit without doing even a simple Root Cause Analysis. An RCA in these cases provides a wealth of information. It can be said that everything has a signature, and malware leaves tool marks - from installation to activity and so on. An RCA allows us to create that signature to improve detection and our knowledge of the methods and mechanisms used by these botnets. Next time someone in tech support, or someone at your client's site, wants to just nuke a system from orbit, ask them if you can image the system first. This is no longer just child's play.