Thursday, July 12, 2007

Determining memory consumption

As I've been developing the methodology I talked about previously, one of the problem areas that's arisen is determining system impact. One component of impact is determining memory consumption. There's a lot of work to be done here because of the complexity of memory management, and a lot of work is being done currently. If we're after precision, we need to know how many pages have been allocated to a process, how many are used, whether they are in swap or resident in memory, what was previously in those pages, and what's in them now.


I started working on a primitive method to determine memory consumption. Note I said consumption rather than displacement, since determining displacement requires a lot of wrench time.

One such method for starting to determine consumption is to use the debugging tools from Microsoft. The two tools I'll use here are windbg and cdb (the same tool, actually; one is just the command-line version).

The tool I'm checking out, tlist.exe, is another tool included in the debugging toolkit, and one that's also used in incident response and live response scenarios.

So, here goes. Fire up a command prompt and run this:
C:\Program Files\Debugging Tools for Windows>cdb.exe -y http://msdl.microsoft.com/download/symbols -o tlist.exe -t

Microsoft (R) Windows Debugger Version 6.7.0005.1
Copyright (c) Microsoft Corporation. All rights reserved.

CommandLine: tlist -t
Symbol search path is: http://msdl.microsoft.com/download/symbols
Executable search path is:
ModLoad: 01000000 012dd000 tlist.exe
ModLoad: 7c900000 7c9b0000 ntdll.dll
ModLoad: 7c800000 7c8f4000 C:\WINDOWS\system32\kernel32.dll
ModLoad: 77c10000 77c68000 C:\WINDOWS\system32\msvcrt.dll
ModLoad: 03000000 03116000 C:\Program Files\Debugging Tools for Windows\dbghelp.dll
ModLoad: 77dd0000 77e6b000 C:\WINDOWS\system32\ADVAPI32.dll
ModLoad: 77e70000 77f01000 C:\WINDOWS\system32\RPCRT4.dll
ModLoad: 77c00000 77c08000 C:\WINDOWS\system32\VERSION.dll
ModLoad: 7e410000 7e4a0000 C:\WINDOWS\system32\USER32.dll
ModLoad: 77f10000 77f57000 C:\WINDOWS\system32\GDI32.dll
ModLoad: 774e0000 7761d000 C:\WINDOWS\system32\ole32.dll
ModLoad: 77120000 771ac000 C:\WINDOWS\system32\OLEAUT32.dll
(7c.f90): Break instruction exception - code 80000003 (first chance)
eax=00181eb4 ebx=7ffd5000 ecx=00000001 edx=00000002 esi=00181f48 edi=00181eb4
eip=7c901230 esp=0006fb20 ebp=0006fc94 iopl=0 nv up ei pl nz na po nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000202
ntdll!DbgBreakPoint:
7c901230 cc int 3
0:000>

OK, so that's good. We've stepped into the tlist process and reached the first breakpoint. Now, fire up windbg, attach to the local kernel (Ctrl+K, select Local), and start with !process 0 0 to get a list of active processes.

lkd> !process 0 0

PROCESS 82b7c020 SessionId: 0 Cid: 007c Peb: 7ffd5000 ParentCid: 0da8
DirBase: 03660360 ObjectTable: e12f6a48 HandleCount: 7.
Image: tlist.exe

Aha, here's tlist. Now, we can key in on the process (output truncated):
lkd> !process 82b7c020
PROCESS 82b7c020 SessionId: 0 Cid: 007c Peb: 7ffd5000 ParentCid: 0da8
DirBase: 03660360 ObjectTable: e12f6a48 HandleCount: 7.
Image: tlist.exe
VadRoot 829ff7f8 Vads 27 Clone 0 Private 46. Modified 0. Locked 0.
DeviceMap e1b2efd0
Token e1165d48
ElapsedTime 00:05:54.546
UserTime 00:00:00.015
KernelTime 00:00:00.062
QuotaPoolUsage[PagedPool] 27892
QuotaPoolUsage[NonPagedPool] 1080
Working Set Sizes (now,min,max) (236, 50, 345) (944KB, 200KB, 1380KB)
PeakWorkingSetSize 236
VirtualSize 12 Mb
PeakVirtualSize 12 Mb
PageFaultCount 228
MemoryPriority BACKGROUND
BasePriority 8
CommitCharge 829
DebugPort 82e50a60

Look at the Working Set Sizes line. At present, the tlist process has been allocated 236 pages of memory at 4 KB each. Multiply 236 * 4 and you get 944 KB. So, at the initial breakpoint we see that tlist is "using" 944 KB of memory.
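
As a cross-check from user mode, the same working set figure can be pulled with psapi's GetProcessMemoryInfo and converted to pages the same way. Here's a minimal C sketch (not part of the debugger session above, and the query itself will nudge the numbers slightly); the default PID of 124 is just a placeholder, it happens to be tlist.exe in this session (Cid 007c = 124):

#include <windows.h>
#include <psapi.h>
#include <stdio.h>
#include <stdlib.h>

/* Minimal sketch: query a process's working set by PID and report it in
   KB and 4 KB pages. Link with psapi.lib. The default PID of 124 is a
   placeholder (it happens to be tlist.exe in this session). */
int main(int argc, char **argv)
{
    DWORD pid;
    HANDLE hProcess;
    PROCESS_MEMORY_COUNTERS pmc;

    pid = (argc > 1) ? (DWORD)atoi(argv[1]) : 124;
    hProcess = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, pid);
    if (hProcess == NULL) {
        printf("OpenProcess(%lu) failed: %lu\n", pid, GetLastError());
        return 1;
    }

    pmc.cb = sizeof(pmc);
    if (GetProcessMemoryInfo(hProcess, &pmc, sizeof(pmc))) {
        printf("WorkingSetSize:     %lu KB (%lu pages at 4 KB)\n",
               (unsigned long)(pmc.WorkingSetSize / 1024),
               (unsigned long)(pmc.WorkingSetSize / 4096));
        printf("PeakWorkingSetSize: %lu KB\n",
               (unsigned long)(pmc.PeakWorkingSetSize / 1024));
        printf("PageFaultCount:     %lu\n", (unsigned long)pmc.PageFaultCount);
    }
    CloseHandle(hProcess);
    return 0;
}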

In the cdb window, tell it to go by typing 'g' and we'll see what happens to memory usage.

0:000> g
System Process (0)
System (4)
smss.exe (604)
csrss.exe (652)
winlogon.exe (676)
services.exe (720)
svchost.exe (904)
svchost.exe (1004)
svchost.exe (1116)
wuauclt.exe (3272)
svchost.exe (1232)
svchost.exe (1412)
ccSetMgr.exe (1608)
ccEvtMgr.exe (1656)
SPBBCSvc.exe (1772)
spoolsv.exe (428)
DefWatch.exe (332)
Rtvscan.exe (1100) Scan
VMwareService.exe (1384)
alg.exe (1976)
svchost.exe (1916)
wmiapsrv.exe (1444)
lsass.exe (740)
explorer.exe (1528) Program Manager
VMwareTray.exe (632)
VMwareUser.exe (940)
ccApp.exe (944)
VPTray.exe (816) Missing Virus Definitions
PicasaMediaDetector.exe (936) Picasa Media Detector
taskmgr.exe (3252) Windows Task Manager
cmd.exe (3904) Command Prompt - cdb.exe -y http://msdl.microsoft.com/download/symbols -o tlist -t
windbg.exe (3500) Windows Driver Kit: Debugging Tools
cdb.exe (3496)
tlist.exe (124)
mmc.exe (3216) Performance
hh.exe (4092) Windows Driver Kit: Debugging Tools
eax=0300b7f8 ebx=00000000 ecx=002643e8 edx=00260608 esi=7c90e88e edi=00000000
eip=7c90eb94 esp=0006fe44 ebp=0006ff40 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246
ntdll!KiFastSystemCallRet:
7c90eb94 c3 ret


Groovy, now we get our tlist output and the tool is done running. However, since we haven't freed the process from the debugger, we have a thread in a wait state, which means we can figure out how much memory our process actually consumed.

Key in on the same process as before in windbg and you'll see the following:
lkd> !process 82b7c020
PROCESS 82b7c020 SessionId: 0 Cid: 007c Peb: 7ffd5000 ParentCid: 0da8
DirBase: 03660360 ObjectTable: e12f6a48 HandleCount: 22.
Image: tlist.exe
VadRoot 82bc3840 Vads 36 Clone 0 Private 380. Modified 0. Locked 0.
DeviceMap e1b2efd0
Token e1165d48
ElapsedTime 00:12:59.265
UserTime 00:00:00.015
KernelTime 00:00:00.140
QuotaPoolUsage[PagedPool] 38236
QuotaPoolUsage[NonPagedPool] 1440
Working Set Sizes (now,min,max) (771, 50, 345) (3084KB, 200KB, 1380KB)
PeakWorkingSetSize 771
VirtualSize 18 Mb
PeakVirtualSize 19 Mb
PageFaultCount 765
MemoryPriority BACKGROUND
BasePriority 8
CommitCharge 1120
DebugPort 82e50a60


So, we see 3084 KB has been "used", but can that possibly be accurate? The answer, of course, is no. There are several factors at play.

1) This was run from a debugger, which adds memory allocated to the process to support debugging.

2) There is shared memory in play: pages backing DLLs that are also used by other processes.

3) The working set is not representative of memory actually used by the process. It's representative of the memory (shared, virtual, physical) allocated to the process, but not necessarily that which is consumed (or resident). In addition, Microsoft's documentation on MSDN is inconsistent (surprise, surprise) in describing what's actually included in the working set.

4) The working set is what the Task Manager records, which is inaccurate.

So, where does this leave us? Well, in terms of precision and accuracy for determining memory consumption for a process, the working set is not the answer, because the allocation hasn't necessarily been consumed. Maybe the private bytes associated with a process are what we need to focus on. But the real question, and this has yet to be answered, is: what's good enough? I'm thinking that maybe the working set size is good enough, but it's not necessarily precise.
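
If private bytes turn out to be the more useful figure, they can be read from user mode with the same psapi call. Below is a rough sketch, not part of the methodology above, that prints the working set alongside private bytes for a process; it assumes PROCESS_MEMORY_COUNTERS_EX is available in the SDK headers (XP SP2 / Server 2003 SP1 and later), and the PID is again just a placeholder:

#include <windows.h>
#include <psapi.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch: report the working set alongside private bytes for one process,
   since private bytes exclude pages shared with other processes.
   Assumes PROCESS_MEMORY_COUNTERS_EX is present in the SDK headers;
   the default PID is a placeholder. */
int main(int argc, char **argv)
{
    DWORD pid;
    HANDLE hProcess;
    PROCESS_MEMORY_COUNTERS_EX pmcx;

    pid = (argc > 1) ? (DWORD)atoi(argv[1]) : 124;
    hProcess = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, pid);
    if (hProcess == NULL) {
        printf("OpenProcess(%lu) failed: %lu\n", pid, GetLastError());
        return 1;
    }

    pmcx.cb = sizeof(pmcx);
    if (GetProcessMemoryInfo(hProcess, (PROCESS_MEMORY_COUNTERS *)&pmcx, sizeof(pmcx))) {
        /* Working set: pages currently resident in RAM, shared pages included. */
        printf("Working set:   %lu KB\n", (unsigned long)(pmcx.WorkingSetSize / 1024));
        /* Private bytes: committed memory not shareable with other processes. */
        printf("Private bytes: %lu KB\n", (unsigned long)(pmcx.PrivateUsage / 1024));
        printf("Pagefile use:  %lu KB\n", (unsigned long)(pmcx.PagefileUsage / 1024));
    }
    CloseHandle(hProcess);
    return 0;
}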


Next steps? Boot a system in debug mode and attach a kernel debugger to it while it's live.

Thoughts?


EDITS: the URL is download, not downloads.

8 comments:

H. Carvey said...

Why is "consumption" important?

H. Carvey said...

Hogfly,

Why is "consumption" important? What about this particular metric is of importance?

hogfly said...

Sorry Harlan, must have missed the first post.

My understanding, as it exists today, is that the process working set is the number of pages that are allocated for the process in physical memory.

Why is it important? Well, we can begin to narrow down just how much memory is actually being used by our process. We know the total amount (3084 KB in this case), and can now determine what state each page is in, and can look deeper into the working set list to figure out which pages are being used by DLLs, and in this case by the debugger, to determine just how much this process is actually using.

Now, that's a little academic, I know, but what if I said it's what I think may be a narrowing of focus from the Walters paper and Forensic Discovery (Venema & Farmer)?

They both look at holistic memory changes in their experiments. This actually looks at the process level to see what it's changing through allocation. Even if the page is set to demand-zero, that's a change.

It may also point towards the most efficient tool (at least in testing). If I can say I ran tlist in manner X, Y times, and had a memory utilization range of Z pages, we can normalize it and create a range of memory usage.

This would allow someone to compare their usage of a different tool for the same purpose (listing active processes), to determine which tool is the most efficient.
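
To make the normalization idea concrete, here's a rough sketch in C with made-up page counts (the numbers are purely illustrative, not measured), just the min/max/mean arithmetic that would turn Y controlled runs into a comparable range for a tool:

#include <stdio.h>

/* Sketch of the normalization idea: given peak working set page counts
   from Y controlled runs of a tool, report the min/max/mean as its
   "tool mark". The sample numbers below are made up for illustration. */
int main(void)
{
    unsigned long runs[] = { 771, 765, 780, 768, 775 };  /* hypothetical page counts */
    int y = sizeof(runs) / sizeof(runs[0]);
    unsigned long min = runs[0], max = runs[0], sum = 0;
    int i;

    for (i = 0; i < y; i++) {
        if (runs[i] < min) min = runs[i];
        if (runs[i] > max) max = runs[i];
        sum += runs[i];
    }
    printf("tlist: %d runs, %lu-%lu pages (mean %lu), i.e. %lu-%lu KB\n",
           y, min, max, sum / y, min * 4, max * 4);
    return 0;
}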

Of potential interest is when a system is tight on memory and the working set manager removes pages rather than moving them to swap. This may also be a next step from the Buffalo paper.

Do you think this is too simplistic an approach?

H. Carvey said...

Well, we can begin to narrow down just how much memory is actually being used by our process.

Okay...I guess the next question, then, is why is THIS important?

This actually looks at the process level...

Maybe I'm being dense (that's always possible) but I'm having a little trouble wrapping my brain around the fact that the amount of memory that a particular process consumes is important.

Given that the memory used by active processes is not allocated to newly created processes, I'm still somewhat at a loss to really understand why these metrics are significant or important.

hogfly said...

OK, why is this important?

It's part of the impact equation.

I'm not concerned with any particular process. I'm concerned with the process created upon execution of a tool used for live response that when executed will have an impact on a system.

You're right about new processes not being allocated memory used by currently active processes, but I don't think that's the point here. The point is to validate a tool that is used in live response, but was not designed for live response, by determining what it does to a system upon execution. As memory consumption is measurable, I think it's pretty significant to include it; not doing so would lead to incomplete and inaccurate results.

If you have a better suggestion, I'm all ears.

H. Carvey said...

The point is to validate a tool that is used in live response, but was not designed for live response by determining what it does to a system upon execution.

Right, I hear you...but I'm still struggling with what, exactly, about the tool needs to be validated.

As memory consumption is measurable, I think it's pretty significant to include it, and to not do so, would lead to incomplete and inaccurate results.

I'm not sure that including a metric simply because (and for no other reason than) it is measurable, is the right way to go. Where's the science in that?

If you have a better suggestion, I'm all ears.

I'm working on it...posting ideas on FF, DD, and other places, and not getting any feedback whatsoever.

Determining the impact of a specific tool on a system is one thing...we can run tests to determine the impact on the file system, on the Registry, etc. However, it is the "impact" specifically on memory that I'm concerned about here, and I'll admit that I'm struggling somewhat. Memory "consumption" does not sound like a correct measure or metric, and I'm having difficulty determining any that are.

Let's say you run a tool and determine how many memory pages it "consumes". We can say (we seem to be in agreement about this) that no memory pages that are currently in active use by the OS or another process will be overwritten. The process in question completes its processing and exits...and its memory pages are freed for use. We run our next tool...it, too, "consumes" memory pages, but again, not any that are currently in use. How many of those pages were used by the first process and then freed for use? How many were not?

Does it matter?

Let's take a look at what we're trying to get from volatile memory...

When getting volatile memory, we're looking for information of evidentiary value, correct? RAM consists of processes and other objects in active memory...all of which have context. We can dump the contents of memory, find a string, and we can determine its context, *if* it is in a location that is currently in active use by the system...a process, a thread, or some other object.

Now, pages not currently in active use may contain strings of value, correct? We may find a string in an inactive memory page that includes "password:" or "login:"...but as that page is inactive, it cannot be associated with a particular process. If there is no other information within the 4K page itself (i.e., a timestamp, etc.), then we may have very little context for what we find. At that point, the info will have value as intel but perhaps not as evidence.

Now, it's entirely possible that we may find what is obviously HTML code in memory...but without a process that "owns" or is actively using that page, how do you know if the HTML code is in memory due to IE, Firefox, wget, etc? Let's say that you locate what is obviously an AIM conversation in memory, but again, there is no AIM process running, and no process actively using that page. How do you know how that information got there? Was it the result of an actual AIM conversation, or did someone open another AIM conversation that had been created on a different system, archived, and sent to the system being examined via email (or some other process)?

hogfly said...

I'm not sure that including a metric simply because (and for no other reason than) it is measurable, is the right way to go. Where's the science in that?

Ok, let me try to adapt. I'll use tool markings as an example to try to explain why I think this is of potential value. Please note I'm saying potential rather than absolute because, like you, I am exploring what makes sense in terms of validation and impact. And like you, I get little in the way of response but speculation and naysaying, yet I see no other attempts (or very, very few) at doing this.


Let's take a breaking and entering case at a tech shop as an example. I find tool markings on a door casing - the strike plate specifically. Upon searching the scene, someone finds a series of screwdrivers, and someone else searching the scene finds a crowbar. After collection, fingerprinting, and other processing, the tools are used to create toolmark standards, or exemplars, on various surfaces and in various directions. These standards are then compared under a microscope to determine whether any of them match the tool markings found on the strike plate. In addition, those markings found on the frame and strike plate can be compared to a toolmark library, and this is where I draw the connection.

By accounting for measurable effects of the execution of our tools, we are creating a library of tool marks for those that we use in our duties. I'm not saying that we should collect it simply because we can. It is a measure of impact because it is part of the destructive process of live response, and by having a known (tool marking) to compare against, others that use the tool can then run their own comparisons against a baseline of execution under controlled circumstances. It also provides us with some notion of loss due to execution of said tool.

Another similar tool marking category is ballistics. There are ballistics libraries (several, in fact) that contain test fire data. If someone fires a copper-jacketed 7.62 round from a Russian SKS, it will have properties - individual and class characteristics - that can be used for comparison.

Now, consumption may not be all-encompassing, and I certainly don't think it is (I think I even point out a few flaws), but it *could* be a starting point for finding the right metric. I do believe it has a use in and of itself, and I think it can also lead to other things.

We run our next tool...it, too, "consumes" memory pages, but again, not any that are currently in use

Does 'in use' matter? We know that data persists in memory for quite some time depending on a number of factors; I think you may be narrowing the scope of the purpose of a memory dump a little too far by limiting its use to 'in use' pages and processes. If something is destroyed by our action, regardless of its state, we need to account for it. Regarding your questions, that gets into the accuracy and precision of the measurements we can take of a memory dump. Does it matter how many pages were used and then freed? I think you would agree that it would matter if I executed a process, reached the working set size limit of 2GB, and then during deallocation overwrote those pages with 0s. However, that has no bearing on what I was attempting to gather by determining pages consumed.

As for the rest of the message..

Again, I think you may be narrowing the scope a little too much. If I find a string that contains "password:" I don't have evidence until I put it in context (which you sort of say). Now, let's say I find a saved password in protected storage or a password vault that matches the password found in the contents of memory. I also find a cookie on the disk that leads to a user@website.com with a timestamp, and I also find the link in the history file of a browser with a matching timestamp - it doesn't matter which. I can reconstruct the event and reach a conclusion based on what was found.

Using the AIM conversation, again it's up to us to reconstruct a plausible scenario of how that came to exist in memory if we're presenting it as evidence.


As I said once in the windowsforensics group during a comparison of memory to blood...

"Memory is collected, not to exculpate or inculpate directly, but to provide links to other evidence and strengthen their presentation. Uncorroborated evidence is nothing more than circumstantial evidence, therefore memory by itself is highly circumstantial, and provides little in the way of conclusiveness, but it provides a road map for examination."

I'm happy to continue this discussion, but I wonder: is this the right venue?

H. Carvey said...

Reading over your comment, it seems to me that this discussion is going too far afield.

I still do not see how the number of pages 'consumed' by a process is particularly important, and I am concerned that the stated reason for measuring this is simply that it can be measured.

I do agree that this is perhaps not the right venue for this discussion. I would suggest moving it to our other forum...that way, it can be developed before releasing it to the general public.