Wednesday, March 25, 2009
Quickpost - new malware
New malware uploaded this evening. It's been causing problems everywhere.
Symantec calls this TidServ.G. It poisons DHCP and DNS and redirects DNS lookups to servers in Ukraine. This is the latest in the line of DNS/DHCP poisoning malware.
I call it exemplar18 ;)
A quick word about the graphic (being a graphical person)...
The screenshot above is from HBGary's Responder Pro looking at the memory dump. Note the loop on the left-hand side? That's an awesome representation of a loop with an 'if' check that tests whether the host is running security software (anti-malware). If it is, the malware kills that software.
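For the non-graphical folks, the logic in that loop is dead simple. Here's a minimal Python sketch of the same check-and-kill pattern; the process names and the use of psutil are my own illustration, not anything recovered from the sample:

# Illustrative sketch of the check-and-kill loop described above.
# The process names are hypothetical examples, not from the actual sample.
import time
import psutil

SECURITY_TOOLS = {"mcshield.exe", "ccsvchst.exe", "avp.exe"}  # hypothetical AV blocklist

while True:
    for proc in psutil.process_iter(["name"]):
        name = (proc.info["name"] or "").lower()
        if name in SECURITY_TOOLS:
            try:
                proc.kill()  # kill the security product, as the malware does
            except psutil.Error:
                pass  # process already gone, or access denied
    time.sleep(5)  # loop back and check again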
Sunday, March 22, 2009
Gateway Malware Theory
Over time I've started developing a simple theory I'm calling the Gateway Malware Theory. Stated simply, "Simple malware leads to more complex malware, and there is no such thing as simple malware".
In more detail...
In the early days of malware we had single-purpose, single-focus malware that spread through a single mechanism. These days, even "simple" malware is multi-vectored and multi-staged, and it downloads other, more nefarious malware. Take Vundo for instance.
Vundo is, in essence, a downloader. Once it makes its way onto a system it tends to download rogue programs or 'scareware'. On occasion I've seen it download Hupigon or some other nasty program. It also infects DLLs, exhausts system resources, downloads other malware, and so on. According to FireEye, it's now downloading copies of Randsom and encrypting user documents.
Vundo is "simple malware", yet it can escalate a mere nuisance infection into a fully compromised system that poses a real risk. It's what I'm calling Gateway Malware.
This leads to the Gateway Malware Theory, which goes something like this....
Simple malware infections, if not dealt with quickly, will inevitably lead to the download and installation of poorly detected malware that poses a real risk to organizations. Any investigation of malware should focus less on the malware itself and more on the data that is contained on, or accessible from, the infected system. Therefore the first step in the investigation of malware should be data centric. If the contents of a system are unknown, then the risk, regardless of the presence of malware, cannot be known or determined. As such, the presence of malware is irrelevant unless the contents of the system are known, and one must know what level of access the infected system, or the user of the system, has to sensitive data.
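To make that data-centric first step concrete, here's a rough Python sketch of the kind of sweep I mean: before fixating on the malware, inventory what sensitive data the box actually holds. The patterns and starting path are illustrative assumptions only:

# Rough triage sketch: what sensitive data lives on this system?
# The regexes are deliberately naive and the root path is a placeholder.
import os
import re

SSN = re.compile(rb"\b\d{3}-\d{2}-\d{4}\b")     # naive SSN pattern
CARD = re.compile(rb"\b(?:\d[ -]?){13,16}\b")   # naive payment card pattern

def sweep(root):
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for fn in filenames:
            path = os.path.join(dirpath, fn)
            try:
                with open(path, "rb") as f:
                    data = f.read(1024 * 1024)  # first 1 MB is enough for triage
            except OSError:
                continue
            if SSN.search(data) or CARD.search(data):
                hits.append(path)
    return hits

for path in sweep(r"C:\Users"):  # hypothetical starting point
    print(path)

If the sweep comes back empty, the infected box is a nuisance. If it lights up, you have a data loss investigation on your hands regardless of how "simple" the malware looked.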
As I said, I'm still developing this theory and it's incomplete, but if you disagree, take a look at some of the memory dumps I'm making public through my memory snapshot project. Thoughts?
Saturday, March 21, 2009
Malware project updates
As I mentioned in the addendum to the last post, I had a flaw in the method I was using.
The flaw was twofold: memory page trimming in VMware, and not allowing the malware to execute fully. I've fixed both, and as a result you'll see some fairly dramatic changes in the contents of the memory snapshots.
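For anyone building their own snapshots, the page trimming half of the fix can be handled with a couple of .vmx settings. These are commonly cited VMware options rather than anything exotic; verify them against your VMware version before relying on them:

# Add to the guest's .vmx file while the VM is powered off:
MemTrimRate = "0"                    # disable memory page trimming
sched.mem.pshare.enable = "FALSE"    # disable transparent page sharing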
I've uploaded a few snapshots today including:
Ackantta
Koobface
Infostealer
and I also reloaded the exemplar4 snapshot, which is an IRC bot with a few twists ;)
I'll be adding Mebroot and Randsom variants soon. I've added a link to the blog for accessing my SkyDrive. Expect regular updates. If you've got specific malware you want to see in memory, email me.
Updates:
I've now uploaded 10 samples, including Waledac, Mebroot, and more.
Thursday, March 19, 2009
Memory snapshot Project Part II
It appears that the memory snapshot idea has been well received, so I'm in the process of uploading more snapshots to my SkyDrive. I think I've got a decent format now.
Under my public folder you'll see a series of exemplarX files where X is a number.
Within each directory you can expect to find the following:
about.txt - This identifies the malware and provides an MD5 hash. The binary is uploaded at offensivecomputing.net.
virustotal_
Exemplar segments - I decided on a more universal method of compression (tar.gz) and I've split the archives using the Linux split command. The segments will need to be concatenated: on Linux with the cat command, on Windows with the copy command.
on Linux:
cat exemplar5.tar.gz.* > exemplar5.tar.gz
on Windows:
copy /b exemplar5.tar.gz.* exemplar5.tar.gz
Simply extract the .vmem from the .tar.gz file and off you go.
hashes.txt - This is a list of MD5 hashes of all segmented files, the .vmem file, the .pdf, and the about.txt file.
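If you'd rather script the reassembly and verify your download against hashes.txt in one pass, something like the following Python sketch works. I'm assuming a "md5  filename" format for the hashes.txt lines; adjust if yours differs:

# Reassemble the split segments and verify MD5s against hashes.txt.
# Assumes hashes.txt lines look like: <md5>  <filename>
import glob
import hashlib

def md5sum(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

# Concatenate the segments in order (same effect as the cat/copy commands above).
with open("exemplar5.tar.gz", "wb") as out:
    for seg in sorted(glob.glob("exemplar5.tar.gz.*")):
        with open(seg, "rb") as f:
            out.write(f.read())

# Check everything listed in hashes.txt.
with open("hashes.txt") as f:
    for line in f:
        expected, name = line.split()
        print(name, "OK" if md5sum(name) == expected else "MISMATCH")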
This seems like a fairly decent model to follow though I'm open to suggestions.
I've posted a few more images and I'm in the process of creating several more.
One thing to keep in mind is that while I try to validate the execution of the malware in a virtual setting, I am fallible. If you think there's no trace of the malware in the memory dump, let me know.
Happy malware hunting.
3/21/09 addendum
A quick update.
I realized a flaw in my methodology. I didn't give the malware enough time to fully execute so I'm re-doing the exemplars.
If you downloaded exemplar4 already, I invite you to download it again.
Tuesday, March 17, 2009
A memory snapshot project
Some time ago, I got really tired of seeing lame attempts by vendors to prove the value of memory dumps by showing that you could find "hxdef" strings in them. Today, I'd like to announce a fledgling personal project of mine. I don't yet have a name for it and it's in the very early stages, but it goes something like this...
I see a lot of malware and I know there are a lot of people who don't. I also know that people want to do memory analysis, but the only real source of samples is the DFRWS challenge from four years ago. Here's what I'm doing...
I take 'in the wild' malware, load it up in a virtual machine, suspend the virtual machine and extract the .vmem file. I then upload the .vmem file and make it available to you, my faceless readers and the world at large. This isn't one of those "contests" where I challenge you to analyze a memory dump. Rather I am providing memory dumps of 'in the wild' malware being run in a controlled environment. Maybe this will help developers build better tools, maybe this will educate examiners, maybe this will build incident response IQ, maybe this will give students something to work with, or maybe I'll just waste some cycles providing this stuff. Time will tell.
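If you want to reproduce the capture workflow in your own lab, VMware's vmrun utility can drive it from a script. A minimal sketch, assuming VMware Workstation with the vmrun CLI on the path; all paths and the sleep interval are placeholders:

# Sketch of the capture workflow using VMware's vmrun CLI.
import shutil
import subprocess
import time

VMX = r"C:\VMs\malware-lab\malware-lab.vmx"  # hypothetical VM path

subprocess.check_call(["vmrun", "start", VMX])  # boot the clean snapshot
# ... detonate the sample inside the guest, then let it run to completion ...
time.sleep(300)
subprocess.check_call(["vmrun", "suspend", VMX, "hard"])  # suspend writes the .vmem

# Copy the .vmem out for analysis and sharing.
shutil.copy(r"C:\VMs\malware-lab\malware-lab.vmem", r"C:\dumps\exemplar.vmem")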
This post is more or less a test to see if the public can access my SkyDrive to download the memory snapshots. Up until now, I've had issues sharing files with others. Hopefully SkyDrive helps with this issue.
My first snapshot is here. The file is a split .AD1 file created with FTK Imager 2.5.5. You'll need to combine the segments and extract the contents, which is incredibly easy with FTK Imager. The file contained within is a 7zip-compressed memory image. Simply uncompress and have fun. All I ask at this point is that you let me know if you have issues, and maybe let me know if you find it valuable.
Sunday, March 15, 2009
Disaster averted
It's a rare day when I have truly exciting things happen. Tonight, of course, was the exception. A few months ago I had a hot water heater installed by so-called "professionals"... you know, the factory-trained kind. I use a night-rate unit that controls when the heater is active. This evening when the unit turned on, all was well, or so I thought. When I went to the basement to look at something, I noticed an acrid chemical smell of something metallic and plastic burning. Having had "some" experience in this arena, I could tell that it was an electrical fire. If you've never smelled an electrical fire, there's nothing else like it. The smell of the metal wire and the plastic shielding produces a smell and taste that doesn't leave your mouth or nostrils any time soon. Anyways, I had to locate the smell. The problem with electrical fires, when you're in a room full of electrical wiring, is locating exactly where the smell is coming from. For this, unless you have a "hot spot" detector, you usually have to rely on the tried and true sniff test.
So, there I was sniffing around my basement like a bloodhound trying to locate the source. Finally I reached the hot water heater. When you find the source, boy... you find the source. Getting that close to the source of an electrical fire creates a bit of a gag factor, but it's temporary. Needless to say, I turned off the breaker and called the fire department. The problem was contained, but I wanted to make sure there were no hot spots growing in the conduit.
Not wanting to lose the opportunity to learn, I tried to pay attention to every detail - you know, that whole "study the methods used by others" idea that I mention quite a bit. The Captain was the first on the scene. I showed him where the fire was and got out of his way. He surveyed the area, asked me a few questions and previewed the hot water heater - meaning he did a sniff test too. When the rig arrived, I went out to let them in, and I showed them where to go. They checked the area with the hot spot detector and validated my findings, then proceeded to tear apart the wiring to determine the scope of the damage. You've already seen the wiring from inside the water heater. Here's the wiring from inside the conduit.
Suffice it to say the wiring is just destroyed. The root cause was a short within the wire nut, caused by poor installation. That smoldering lump of plastic in the first picture is what used to be the wire nut. Anyways, all is well and the aftermath begins tomorrow.
Naturally, this post isn't about the fire in my hot water heater tonight. It's about incident response and a few of the things that contribute to, and separate a good outcome from a bad outcome.
1) Knowing the environment you're dealing with. In this case, this was my house. I knew what I had done today that could have created the situation, I knew where each electrical item was in my basement, and I knew my wiring panel and had it labeled.
In the digital world, this is the same as knowing your organization. You need to know where your assets are, what the assets are, how they are connected and you should have an updated topological diagram.
2) Experience and awareness. I've dealt with electrical fires before and knew what the smell was. I knew that a fire was nothing I was qualified to deal with, so I called the professionals without poking around more than was necessary. I also knew that once I described the problem, answered questions and showed them the location of the fire, I should get out of their way and let them work.
In the digital world, if you're the first responder or you discover the incident, and you can't solve the problem yourself and you have someone on the way, don't meddle with the system. When the IRT arrives, show them where to go, answer their questions, and get out of the way. Hovering when an IRT is working does not help the situation. If your assistance is required, you'll be asked to help.
3) Factory-trained professionals don't always do the right thing, and they cut corners. As the firefighters worked they were talking to one another, discussing their findings and theorizing about the root cause. The root cause was the people who installed my hot water heater.
In the digital world, consultants are well paid but don't always do the right thing. I've dealt with many cases where the root cause was the consultant's poor choices during installation: dropped firewalls, poor password security, etc. When entering an engagement with a consultant, be sure you know what you're getting.
These are just a few of the things you should be aware of in the world of incident response. The biggest lessons of the night, for anyone who has an incident response team at their disposal, are:
- If you are unsure, call the trained people that do know, before you do anything.
- There's no shame in admitting you don't know everything and can't solve the problem.
- If you know something is out of the ordinary, call quickly.
A safe evening to all.
Reasonable belief
Just about every state now has a law that addresses data breaches and notification thereof. One thing they pretty much fail to do, though, is provide criteria for establishing reasonable belief. Well, what is it, you might ask?
Troy Larson provided the following to me for a definition: "As a legal standard, reasonable belief is defined as what an average person in similar circumstances might believe."
Ok, so that's easy enough. What would a layperson believe if presented with the circumstances? As it pertains to data loss investigations, we are never able to present our findings to an "objective" jury. Instead, we present our findings to a subjective group of individuals who have a stake in the data loss process. Sometimes you will be lucky enough to find yourself presenting your findings to a group or person with high ethical and moral standards who wants to do the 'right thing'(TM). If you are lucky enough to find yourself in front of a decision-making group, what do you present? Of course, you present your findings in a factual manner, without attempting to inject bias or opinion (unless asked to render one). The role of decision maker is not ours, after all. However, we must take great care not to poison or influence the decision-making process. Our analysis must be thorough and complete. It should not be based on assumption or speculation of "what if" or "they could have". That is not our role. Our role is to present what we found, and if something we expected to find is not found, then we may have reason to suspect something is wrong.
So we must ask ourselves this question - Given normal circumstances, what would a layperson base their decision on? How is reasonable belief actually established?
I'm attempting to answer this very question. To do so, I pored over numerous reports and analyses and their resulting decisions. I did some other research and came up with the following areas that I think influence how a person develops a reasonable belief when weighing the decision to notify as a result of a data loss investigation.
*note these are high level and not intended to be 100% complete. The idea is to highlight areas of influence*
MAC times - Access times after the compromise date, not explained by business processes or applications, not attested to by a user, and not explained by registry analysis. No sign of MAC time tampering.
Depth/Breadth of penetration - System/root/administrative level access obtained on a system or obtained on multiple systems having access to sensitive data. Attacker had access to files or databases containing sensitive data. Stolen credentials used to log in to business systems and user account has access to sensitive data.
System - Log files suggest data was acquired. Registry analysis shows signs of searching for, or looking through files, opening files containing sensitive data, USB history shows signs of unrecognized devices being used. Internet history shows attacker activity indicating data exfiltration. In other words, this is the typical forensic analysis of a system.
Attack Profile - Targeted attacks, spear phish against specific group or individuals having access to sensitive data. Attack directed at a singular and specific target containing sensitive data.
Detection - I've discussed time previously so I won't cover it in depth, but I'll summarize: when the window from time of compromise to time of containment is longer than three months, the decision maker tends to be influenced by that fact. The same applies if the window is very small, say 24 hours. The speed with which an incident is detected is a large factor for the decision maker.
Network - Flows/packet captures suggest that data traveling to external entities involved in the incident contained sensitive information. Encrypted traffic flows to/from attack-related IP addresses that cannot be explained by a configuration file.
Malware - Sophistication of malware suggests the ability to log keystrokes, sniff network traffic, modify timestamps, search for and/or steal data. Malware related artifacts show sensitive data being accessed. Malware is designed for theft of sensitive data.
Of course there will be corner cases where companies *SHOULD* automatically notify, as in the case of a stolen or lost laptop/tape/hard drive containing unencrypted data. This is a huge topic so I'll be discussing it again...
Outbreak!
I have briefly mentioned Mass Casualty Incidents in the past. It's time to delve into this a little and see where we end up. I'll likely spread this out over a few posts.
One of the most widespread diseases in existence is malaria. There are an estimated 200 to 300 million cases worldwide each year, and 2 to 3 million of them result in death. There is currently no vaccine.
Let's focus on malaria for the time being. Malaria is primarily spread by female mosquitoes that pass a parasite to the victim. That is to say, a mosquito attaches itself to a victim and injects saliva into the wound to keep the blood from clotting and to keep it flowing. There are areas of the world where mosquitoes are highly prevalent, and these are also the places with high infection rates.
Wait a second. Let's summarize. An infectious disease, spread worldwide, causes death, and there is no vaccine, only treatment?
Sounds a bit like a malware infection, or rather a malware outbreak doesn't it? What if I were to tell you this is like the USB malware infections that spread all over, and caused the military to take a draconian approach of banning USB keys?
I say this quite a bit, but the best way to master your field is to study the methods used in other fields. For an outbreak of this nature, I look to the treatment and prevention of malaria.
Think about it. Infected USB media are exactly like mosquitoes: they carry a parasite and infect computers by way of the executable referenced in their autorun files.
Let me spell this out for you. When you're faced with a transient population in the tens of thousands and a computer population of twice that number, and you have malware that spreads from one population to another, what do you do? That is to say you've got mobile people with infected USB keys and systems that are either infected or about to be infected.
Think malaria. Kill the mosquitoes, inoculate and protect the uninfected, treat the infected. Unfortunately, this is where the problem starts. Ever tried to track down thousands of USB keys? How do you get hold of them? How do you kill the infection on them?
The answer is obvious: you can't track them down. So let's focus on the second and third problems. The solution, as is often the case, presented itself.
In a highly distributed and decentralized environment (as many large organizations are), what needs to occur? Coordination, Communication, Information. This is step 1. Without this, everything else fails.
Consider calling an emergency gathering of key staff to establish the process and procedure for dealing with the threat. Once the scope of the threat is conveyed, the action plan is established and off you go. Instructions and ideas get shared, and the uninfected population is already in the process of being further protected by local IT staff.
What about the infected and the unknown?
In the digital world, you can't kill USB keys by spraying them with repellent, and you can't compel tens of thousands of people to turn over their keys. But you can establish a triage center for the people in possession of them and ask them to bring them in. There are two problems with this approach.
1) Scope of population. There is of course a realization that not all USB keys will be accounted for, but through the coordinated effort to inoculate and protect the uninfected while treating the infected, an intersection occurs whereby both populations get protected and treated.
2) Laziness. People will not go out of their way to get a flu shot, and they will not go out of their way to get their USB key checked. So, do what the medical field does: establish triage centers in multiple, high-traffic areas.
So just what is a triage center for USB keys? It consists of uninfectable systems (Mac and/or Linux systems) and scripts to detect infected USB keys. Simply have an individual insert their USB stick and within seconds you know if you've got an infection. Then you inoculate the USB stick and make changes to attempt to prevent a recurring infection. In addition, you provide the person with the equivalent of a flyer with detailed instructions for inoculating and preventing infection of their computer.
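As a sketch of what those detection scripts might look like, here's a minimal Python check you could run from the (uninfectable) triage box with the stick mounted read-only. The known-bad hash list is a placeholder; in practice it would come from the signature development effort discussed next:

# Triage-station sketch: check a mounted USB stick for an autorun infection.
import hashlib
import os
import re
import sys

KNOWN_BAD = {"d41d8cd98f00b204e9800998ecf8427e"}  # placeholder MD5 list

def check(mountpoint):
    autorun = os.path.join(mountpoint, "autorun.inf")
    if not os.path.exists(autorun):
        print("no autorun.inf - looks clean")
        return
    with open(autorun, "rb") as f:
        text = f.read().decode("latin-1")
    # Pull out any executable referenced by open= or shellexecute= lines.
    for target in re.findall(r"(?im)^\s*(?:open|shellexecute)\s*=\s*(\S+)", text):
        path = os.path.join(mountpoint, target)
        if not os.path.exists(path):
            continue
        with open(path, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()
        verdict = "KNOWN BAD" if digest in KNOWN_BAD else "suspicious - submit for analysis"
        print(target, digest, verdict)

check(sys.argv[1])  # e.g. /media/usbstick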
But wait... what's missing here? Knowledge of the threat. In the midst of all of this, signature development needs to occur and threat assessments must continue. This is all about continuous information gathering. Samples need to be gathered and analyzed to determine the types and functionality of the malware. A line must be drawn that differentiates high-value assets from assets of little to no value. This is where further triage takes place.
More on this later.
Wednesday, March 11, 2009
The F-bomb
I'm in one of those moods this evening and I recently saw something that just makes me laugh and cry all at once. This is to be considered not safe for work as I'll probably let loose the F-bomb a few times.
I'm pointing the finger directly at Guidance Software and their classy representatives who see fit to trash their competition. Quid pro quo, Guidance.
To begin... Guidance is feeling the pressure of a small company breathing down their neck. This product is being mentioned left and right in Guidance's own forums. At one point, Guidance saw fit to blacklist all use of the product's name. Is that fear? Afraid someone will catch on to your fleecing of the industry?
First they insult the rest of the industry by saying that this "inferior tool" appeals to novice investigators. As someone with many years of investigative experience, I heartily disagree. Maybe your internal investigators should take a few more classes, because the last time I checked, you use your own "superior" tools internally, and when asked to produce documents you are magically unable to. I'm fairly certain that even a "novice" could find those files. I'm also fairly certain that a novice knows they're not supposed to store customer credit card information.
How about this "inferior product" appeals to the rest of the world because it costs a fraction of what FIM and EE cost. More on that later. How about it appeals to the rest of the world because it works? How about it appeals to the rest of the world because it's simple? How about it appeals to the rest of the world because it meets our needs? A wise man once said "buy the cheapest product that meets your needs". Guess what Guidance, your products are too expensive. In these trying economic times, people don't have millions to invest in to EE or tens of thousands to invest in FIM when this "inferior product" does just fine.
There are claims that this "inferior product" is not court validated. Well Guidance, what does that mean? Court validation is not something whereby someone waves a magic wand and stamps a product as "court validated". Validation comes through the process of presenting a case in front of a judge and withstanding scrutiny from the opposition. "Court validation" is merely a forensic buzzword, just like "forensically sound". DNA is court validated; is it questioned? You betcha! As are blood evidence and fingerprints. Guidance says "our products have been vetted through court and industry peer review". Is that why I see your customers bitching and moaning about how EnCase (all flavors) keeps crashing on them? Let's discuss error rates, hmmmm? I don't recall seeing anything in Digital Investigation or other scientific journals showing industry peer review.
Acquiring data using a new transfer method. Guidance claims this "inferior product" uses an untested acquisition and transfer method. Guidance, are you saying that EnCase acquisition is untested? I thought you said it was court validated? After all, your tool is what's being used to do the acquisition. Are you also saying that an industry standard protocol is untested for data acquisition and transfer? My god, stop the presses and contact all of your SAN manufacturers that use iSCSI: your data is not to be trusted when crossing the wire using that protocol! I guess I better comment on those RFCs. Why, in their message, they even mention AccessData Enterprise as being unproven. Let's not leave anyone out here. Did Guidance forget all the issues they had with their own agent? Apparently so.
They go on to mention that there are no granular permissions used by this "inferior tool". Tell me something: if I have the dongle, and the dongle needs to be plugged into my machine, and I set a username and password of my choosing, what more do I need?
No auditing. My god, stop the presses again: Windows stopped auditing events! This "inferior tool" provides no auditing. 1) That's easily fixed. 2) It's not required; the process is documented by the investigator. Don't you teach that in your own classes? Let's see... I have a read-only connection to a target. Better audit that. Oh wait, that's already done either by the operating system or the tool itself. And besides, do you mean to tell us that EnCase doesn't provide an audit log of actions taken? Tsk tsk.
No end node processing. Uhm... do I care about this when all I need to do is acquire an image? Do I care about this when I need to examine an intrusion? And since your product does this processing on the client side... how about an impact analysis?
Limited Volatile Data capabilities. Uh-oh... here it comes... what are you talking about on this point? Do you even know? Volatility can't identify hidden processes or injected DLLs, or better yet NIC information (what do you mean here anyways, that I can't determine what NIC is in the machine)? I better let AAron Walters know! Better yet, I better let Mandiant know that their product can't do these things. Finally they get to the point. Ahh... Snapshot can do all this and, better yet, it makes it easy! Not to mention that EE can dump the memory space for a single process! I can't do that with other tools? Guess I better stop doing it with Volatility. That capability can be yours through Guidance for $$$$$$. Guess we're all screwed in the memory analysis field. Let's not mention that they're attacking a beta product. Is that fear I smell again?
No Solaris, Mac, Linux, AIX, or Novell support. Hey, I have an idea: why not throw in Plan 9 while you're at it? Newsflash! It supports Mac and Linux. I should know... I did an awful lot of testing on both. Guess that takes care of about 95% of the market. Time to check those sources before you start a smear campaign.
No encryption during transfer. This is true, but let me say right off that IPsec is built into Windows and works just fine.
No compression. I've acquired terabytes and never had an issue caused by lack of compression. Try again.
64-bit examiners. This entire section is based on supposition. Terms such as "(un)likely" and "not yet developed" have no place in a factual comparison. Are you on the development team? Are you in the private meetings? If you have no facts to back up your claim, keep your mouth shut.
Limited Stealth capabilities. Guidance can install a better trojan. There's a point in your favor. Hold on to that for dear life. Why not use that in your marketing?
Invasive compared to the servlet. The "inferior tool" is not passive. That's right, it doesn't sit there disabled until I want to enable it. They say it requires copying it to the end node. Guess I better shred the CDs I run it from, and burn the USB keys I run it from too. They say it disturbs the endpoint more than the servlet, which uses about 1MB of space. Oh, I get it, it overwrites disk space. Now we're talking bits and bytes consumed by agents. Here's a hint: check your facts. This "inferior tool" uses less space than your agent. In addition, if an agent is part of a standard build process then it doesn't alter anything. Deploying an agent in a triage situation is what's called "acceptable", just like inserting an IV is acceptable if the patient needs it.
Agent deployment is manual and doesn't scale. Newsflash! Check out the videos. Management of the agent is manual, they say... but it's installed as a service on a remote system. Stop the presses! Microsoft has no way of managing services remotely. Better get Redmond on the phone!
A user cannot ask the service to perform a task and receive feedback. Hmmm, let's see. I tell a service to start and open a connection. Did it connect? I'd call that feedback.
No throttling of the service. No service management in Windows? EnCase can set low, medium and high priorities for processes? I can't say I understand the point they're trying to make with this argument.
Ah yes... the enterprise sweep EnScript. Psst... let me clue you in: who says I need your script to search my own mapped drives? Guess a for loop stopped being useful. And another powerful utility is the database snapshot utility! Psst... guess what, I can have a look at the database using native tools.
And now we get to my favorite part: money. EnCase FIM costs approximately what? $15k to start?
What can I get with $15k?
AccessData FTK or X-Ways
Two Cisco ASAs
The "inferior tool"
and I've still got $5k. I can ship an ASA to a client, preconfigured to create a tunnel back to my shop, and voila, encryption solved. With that extra $5k in my pocket, I can even deploy a dedicated system at the remote location.
Now let's discuss EnCase Enterprise. Average cost of an EnCase Enterprise deployment? Well over the six-figure mark just to start! A real deployment is in the millions. There are a few corporations that will spend this kind of money. If that's what they need, then so be it; they've got the budget for it. For the rest of the world, there's no way anyone is going to buy it. I refer to the wise man for this: "Buy the cheapest product that meets your needs." So I think to myself, what can I buy for $250,000? I can buy an awful lot of hardware that provides all the infrastructure needed. I can even purchase dedicated lines to those "important clients". I can buy an entire development team to build me a product. The point is, I can build a bigger, better, more robust forensic capability by NOT using your product for the same amount of money, or less. And that's a low-end EnCase Enterprise deployment.
A few litigious words come to mind after reading the message from Guidance, but that's not for me to worry about. What concerns me most is that this message is from the "world leader in digital investigations". Time to change that slogan to "The biggest douche bags in the forensics industry"TM. Honestly, is this who we want representing the industry? Is this the kind of stuff that should be tolerated? I don't mind, and in fact I fully support, honest competition, but when you start this game, it's bad for everyone. This is an outright smear campaign by Guidance, and there are too many false statements to count. Unfortunately, given the history of Guidance, I'm somehow not surprised. I am, as I said in the beginning, amused by this as well. Guidance is actually showing fear. Only those who are afraid lash out, and Guidance has lashed out at a number of vendors in the industry with this message. It's truly sad that they have to resort to this.
Harlan has picked up on this story as well.
Monday, March 9, 2009
Flypaper
Years ago I played football and I can recall the day when my coach grabbed me before the game and gave me a pair of receiver gloves. He said "Here, now your hands are like flypaper." If you've never worn receiver gloves before I can tell you they have a sticky substance on the palms and fingers when the gloves are new. Not a ton, but enough to make them tacky...like flypaper.
While testing HBGary's Responder Pro product, Rich Cummings turned me on to a secondary product in their lineup called Flypaper. It's currently a free download and I've got to tell you, it's been a great experience using it. The process is simple:
Load a virtual machine from a snapshot.
Run Flypaper.
Execute the malware or binary of your choice.
Suspend the virtual machine.
Examine the .vmem file.
Unpause the virtual machine.
Stop Flypaper.
Extract the Flypaper log file - which happens to log changes to the system. (You could extract the file from the .vmdk if you were so inclined, of course.)
A quick look at how simple the Flypaper interface is:
You're probably saying... uh, I do that anyway. Ahh, but Flypaper gives you great control over what can happen. For instance, you can block all network traffic to and from the virtual machine. You can also prevent processes from exiting. Why is this important? Well friends, have you ever tried to reverse engineer something that's packed with Themida or Armadillo? These are two of the most advanced packers out there, and they are pretty useless when Flypaper is involved. How about a multistage packed binary? When a program executes and loads into memory, it's unpacked. Flypaper keeps it that way and gives you, the examiner, an opportunity to look at a completely naked version of the malware. How's that for a time saver? How about: that's flippin' sweet? Is it 100% effective? No, it's not, but it gives us a chance to examine malware without a lot of the pain involved in reverse engineering packed malware. And if you were to do the memory dumping with FastDump or FD Pro, you could get a copy of the page file for complete analysis of memory. With Responder and Responder Pro in the mix and the ability to analyze the page file and memory dump, HBGary is building an impressive suite.
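Since the whole point is that the unpacked strings end up sitting in plain view in memory, even a trivial pass over the .vmem can be revealing. Here's a crude Python sketch that pulls URL-like strings out of a memory image; it's a stand-in for proper tooling like Responder, not a replacement for it:

# Crude .vmem triage: pull URL-like strings out of a memory image.
import re
import sys

URL = re.compile(rb"https?://[\x21-\x7e]{4,200}")

def scan(vmem_path):
    seen = set()
    with open(vmem_path, "rb") as f:
        # Note: reading in chunks can split a string across a boundary;
        # good enough for a first look.
        for chunk in iter(lambda: f.read(16 * 1024 * 1024), b""):
            seen.update(URL.findall(chunk))
    for url in sorted(seen):
        print(url.decode("ascii", "replace"))

scan(sys.argv[1])  # e.g. exemplar5.vmem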
Why your antivirus can't tell you anything useful.
I wrote about your antivirus being unable to tell you anything here and here. I want to take a quick minute to explain why your antivirus product can't tell you the things you want to know. Or better yet, how about hearing the AV industry itself tell you why? The following two quotes are from an AV vendor.
"The most effective detection nowadays is either generic (detection of whole families and sub-families), proactive (heuristics, sandboxing, emulation etc), or hybrid.”
“In the 90s, a good heuristic scanner could claim to detect something like 70-80% of new malware: clearly, that's no longer the case.“
That explains a few things, I think. The primary detection method is generic, followed by proactive and hybrid detection models. In short, antivirus products can't do what they claim to, which is protect your system from malware infections. And when they do detect malware, they can't tell you much about it, since the method of detection is nonspecific. Harlan has been on a rampage lately discussing how antivirus vendors are unable to provide adequate information to incident responders, and I tend to think this explains the source of the problem.
Given that AV vendors are admitting they can't do what they used to, I think it's past time organizations moved beyond antivirus products and into new markets. Antivirus products are now marginalized; they can't keep up with the malware onslaught. I don't say this to be pessimistic, but I do think it's true. The battle is not being lost; it has already been lost. Relying on antivirus products alone is plain negligence.
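A toy illustration of why a generic hit tells a responder so little. Everything below is made up (hypothetical patterns and names), but the asymmetry is the point: an exact-hash signature can name the precise sample, while a generic byte pattern catches the whole family yet can only ever report a family-level name:

```python
import hashlib

# Hypothetical sample and signatures -- this is not a real engine.
KNOWN_SAMPLE = b"original-dropper" + b"\xde\xad\xbe\xef"
EXACT_SIGNATURES = {hashlib.md5(KNOWN_SAMPLE).hexdigest(): "Trojan.Vundo!sample-1"}
FAMILY_PATTERNS = {b"\xde\xad\xbe\xef": "Trojan.Vundo (generic)"}

def scan(sample: bytes) -> str:
    digest = hashlib.md5(sample).hexdigest()
    if digest in EXACT_SIGNATURES:
        # Specific hit: we know exactly which sample this is.
        return EXACT_SIGNATURES[digest]
    for pattern, family in FAMILY_PATTERNS.items():
        if pattern in sample:
            # Generic hit: family known, variant and behavior unknown --
            # which is why the AV alert tells the responder so little.
            return family
    return "no detection"

print(scan(KNOWN_SAMPLE))                               # exact name
print(scan(b"repacked-variant" + b"\xde\xad\xbe\xef"))  # family only
print(scan(b"brand-new packer, brand-new pattern"))     # missed entirely
```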
"The most effective detection nowadays is either generic (detection of whole families and sub-families), proactive (heuristics, sandboxing, emulation etc), or hybrid.”
“In the 90s, a good heuristic scanner could claim to detect something like 70-80% of new malware: clearly, that's no longer the case.“
That sort of explains some things I think. The primary detection method is generic, followed by a proactive and hybrid detection model. In short, Antivirus products can't do what they claim to - which is protect your system from malware infections. And when they do detect malware, they are unable to tell you a whole lot about it since the method of detection is nonspecific. Harlan's been on a rampage lately discussing how antivirus vendors are unable to provide adequate information to Incident Responders and I tend to think this explains the source of the problem.
That AV vendors are admitting they can't do the same thing they used to, I tend to think it's past time organizations move beyond antivirus products and in to new markets. Antivirus products are now marginalized and they can't keep up with the malware onslaught. I don't say this to be pessimistic, but I do tend to think it's true. The battle is not being lost, it already is lost. Relying on Antivirus products alone is simple negligence.
Who dropped their pants?
I am developing a new reality game show for intrusion analysts and investigators. I'm calling it
"WHO DROPPED THEIR PANTS?"
There are several ways systems get compromised, but more often than not it's due to misconfiguration or sloppy management of controls. I constantly refer back to something Charl van der Walt of SensePost said a few years ago about sysadmins only being able to screw up once. That has stayed true. I've analyzed countless incidents where the root cause was determined to be gross misconfiguration leading to compromise. A vendor could come in and instruct the sysadmin to disable the host firewall so a specific piece of software can function. A sysadmin could tire of testing and rush a system into production before it was ready. A list of passwords could be stored in the root of the drive in cleartext. You get the idea. Someone is dropping the pants on a system in your organization right now in order to get something to work. After all, for many people, what's the first troubleshooting step when a firewall is involved? Disable the firewall, of course. Well, did they turn it back on? My favorite line has been "That system is a Mac running OS X, are you sure it's compromised?"...a question I receive from clients when I alert them that something is amiss.
The first thing I want to know is why, in this day and age, people are still being given absolute control over a system when their job is to manage only one functional role held by that system. Take a database server, for instance. Does the DBA need full admin rights over the operating system, or do they need limited or no rights to the operating system and full rights over the database they are responsible for? When the untrained have more access than they require and know just enough to inadvertently do damage, the organization is begging for trouble. Unfortunately, this is an all too common occurrence in the IT field. Hence the name of my new game: "WHO DROPPED THEIR PANTS?" So what do we do as responders?
Well, you want to find out who has access to the system and what rights they have. Then you want to find out when they were logged in, and whether they were logged in at or around the time of compromise. In addition, is the proper logging in place to determine their actions at the time of compromise? These are just a few things to keep in mind when determining "WHO DROPPED THEIR PANTS?" A rough triage sketch follows.
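As a starting point, here's an assumption-laden sketch for a Windows host using era-appropriate built-in commands. Run it locally on the suspect system, or adapt it for remote collection; it's a checklist in script form, not a forensic tool:

```python
import subprocess

# Quick triage: who has access, what rights, and is the host firewall
# actually on? Commands are the stock XP/2003-era built-ins.

def run(cmd):
    """Run a shell command and return its stdout as text."""
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

# Who holds the keys? Members of the local Administrators group.
print("=== Local Administrators ===")
print(run("net localgroup Administrators"))

# Did someone drop the pants? Host firewall operational mode.
print("=== Firewall state ===")
print(run("netsh firewall show opmode"))

# When were they logged in? Successful logons are events 528/540 in the
# Security log on XP/2003 -- pull those with your log collector of choice
# and compare the timestamps against the time of compromise.
```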
"WHO DROPPED THEIR PANTS?"
There are several ways systems get compromised, but more often than not it's due to a misconfiguration or sloppy management of controls. I constantly refer back to something Charl van der walt of sensepost said a few years ago about sysadmins being able to screw up only once. That has stayed true. I've analyzed countless incidents where the root cause was determined to be gross misconfiguration leading to compromise. The vendor could come in and instruct the sysadmin to disable the host firewall to make a specific piece of software able to function. A sysadmin could tire of testing and rush a system in to production before it was ready. A list of passwords could be stored on root of the drive in cleartext. You get the idea. Someone is dropping the pants on a system right now in your organization in order to get something to work. Why, for many people what's the first troubleshooting step when there's a firewall involved? Disable the firewall of course. Well, did they turn it back on? My most favorite line in the past has been "That system is a Mac running OSX, are you sure it's compromised?"..this being a question I receive from clients when I alert them that something is amiss.
The first thing I want to know is why in this day and age are people still being given absolute control over a system when their job is to manage only one functional role held by the system? Take a database server for instance. Does the DBA need full admin rights over the operating system, or do they need limited or no rights to the operating system, and full rights over the database they are responsible for? When the untrained has more access than required and they are knowledgeable enough to inadvertently do damage the organization is begging for trouble. Unfortunately this is an all too common occurrence in the IT field. Hence the name of my new game "WHO DROPPED THEIR PANTS?" So for us as responders what do we do?
Well, you want to find out who has access to a system and what rights do they have. Then you want to find out when they were logged in to the system and were they logged in at or around the time of compromise. In addition, is the proper logging in place to determine their actions at the time of compromise? These are just a few things to keep in mind when determining "WHO DROPPED THEIR PANTS?"
Saturday, March 7, 2009
A long month
February was, in a word, 'brutal'. Time has been very short, leaving me with just a few hours of sleep each night. It was such a long month that I can't yet recall all that happened; it just hasn't processed. The engagements were long and arduous, arguably some of the most interesting to date, and full of new challenges. I'm hoping to find some time to finish a few posts and add new ones.