NTFS Object IDs in EnCase – Part 3

In a previous post, I showed you how to make a condition that finds all files in an NTFS volume that have Object IDs associated with them. In this post, I will show you how to create a condition that searches the values of those Object IDs so you can filter on specific strings.

The condition I build below is designed to search for a provided value and then remove matching files from the filtered list. The goal is to let the examiner identify files that were created on computers with different MAC addresses.

In the image below, I found 229 files that have Object IDs but don’t contain my VM’s MAC. The previous condition found 442 total files on this disk with Object IDs. Many Windows files were in the list, but the EnCase.exe file jumped out. Could that be the MAC address of the engineer who executed the build process?

encase-objid-c2-result
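
Why does a MAC address show up in an Object ID at all? The values are typically version-1 UUIDs, which embed a 60-bit timestamp and the node (MAC) of the machine that generated them. Here is a quick sketch, using only the Python standard library, for pulling those pieces out of a value as EnCase displays it. The GUID below is a made-up example, not taken from my case data:

import uuid
from datetime import datetime, timedelta

def objid_details(guid_string):
    # Parse the Object ID string as displayed in the Attributes tab
    u = uuid.UUID(guid_string)
    if u.version != 1:
        return None  # not a version-1 UUID; no MAC or timestamp to recover
    mac = '%012x' % u.node  # the last 6 bytes of a v1 UUID are the node (MAC)
    # v1 UUID time is counted in 100-nanosecond intervals since 1582-10-15
    created = datetime(1582, 10, 15) + timedelta(microseconds=u.time // 10)
    return mac, created

# hypothetical example value
print(objid_details('40dff02f-85f5-11e8-9777-000c29c00000'))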

Using EnCase Conditions to Search Inside Object IDs

Similar to the last condition, this one requires the mini-filter feature inside the condition to dig into the attributes of each file. Here is how to build that condition (a sketch of the equivalent logic follows the steps):

  1. Find the conditions pane in the bottom right corner and click on the ‘user’ folder. Use the ‘new’ option in the toolbar or a right click to open the condition dialog. I created a folder to keep things organized; EnCase throws an error if you try to create a new condition in the ‘default’ folder.
  2. Click on the ‘filters’ tab and then double click on the ‘AttributeValueRoot’ item in the list.
    encase-objid-c1-filters
  3. Five things to do in this window, plus clicking OK at the end. We need to give this mini-filter a unique name since it will show up in the property list later. I chose the FullPath property rather than a name-only check to reduce false positives. The path I used is based on what EnCase displays in the attributes tab of the details pane.
    1. Name mini-filter ‘zFindObjectId’
    2. Use ‘new’, choose ‘fullpath’, choose ‘find’, type ‘object identifiers\own id’ in the value
    3. Use ‘new’, choose ‘value’, choose ‘Find’, type ‘NOT value’, and check ‘prompt for value’
    4. Right click on ‘Value find [NOT value]’ and choose the ‘Not option’
    5. Right click on the ‘Main’ item at the top of the tree and use ‘change logic’ to flip the ‘or’ to ‘and’
    6. Click ‘ok’
      encase-objid-c2-filter-terms
  4. Back on the main condition window, click on the conditions tab.
    1. Use the ‘new’ option
    2. Scroll to the bottom of the list and click on ‘zFindObjectId’ (the mini-filter created above)
    3. Click on ‘has a value’
    4. Click ‘ok’
      encase-objid-c2-term
  5. Name your condition and click ‘ok’
    encase-objid-c2-final
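
For reference, here is the logic the finished condition applies to each file, written out as an illustrative Python sketch rather than EnScript. The attribute names follow what EnCase displays in the Attributes tab, and the GUID and MAC values are made-up examples:

def keep_file(attributes, prompted_value):
    # attributes: list of (full_path, value) pairs as shown in the Attributes tab
    for full_path, value in attributes:
        if 'object identifiers\\own id' in full_path.lower():
            # keep only Object IDs that do NOT contain the prompted value
            if prompted_value.lower() not in value.lower():
                return True
    return False

# prints False: this file's Object ID contains the VM's MAC, so it gets filtered out
print(keep_file([('Object Identifiers\\Own Id', '40dff02f-85f5-11e8-9777-000c29c00000')],
                '000c29c00000'))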

 

Let me know if you find other uses for this condition. I would love to read about them.

James Habben
@JamesHabben

NTFS Object IDs in EnCase – Part 2

I posted previously about how to view the Object ID values stored by NTFS using EnCase as a forensic tool. In this post, I will show you a method to identify the files in your case that have an Object ID assigned to them. You can follow along using EnCase v7 or v8.

Using EnCase Conditions to Find Object IDs

This method requires a user-built custom condition because EnCase doesn’t have one in the default set to search for these values. Because the Object IDs are shown in the attributes tab of EnCase, it makes for a slightly more advanced condition than the typical one. Here is how to build that condition:

  1. Find the conditions pane in the bottom right corner and click on the ‘user’ folder. Use the ‘new’ option in the toolbar or a right click to open the condition dialog. I created a folder to keep things organized; EnCase throws an error if you try to create a new condition in the ‘default’ folder.
  2. Click on the ‘filters’ tab and then double click on the ‘AttributeValueRoot’ item in the list.
    encase-objid-c1-filters
  3. Four things to do in this window, plus clicking OK at the end. We need to give this mini-filter a unique name since it will show up in the property list later. I chose the FullPath property rather than a name-only check to reduce false positives. The path I used is based on what EnCase displays in the attributes tab of the details pane.
    1. Name mini-filter ‘zHasObjId’
    2. Use ‘new’, choose ‘fullpath’, choose ‘find’, type ‘object identifiers\own id’ in the value
    3. Use ‘new’, choose ‘value’, choose ‘has a value’
    4. Right click on the ‘Main’ item at the top of the tree and use ‘change logic’ to flip the ‘or’ to ‘and’
    5. Click ‘ok’ to save it
      encase-objid-c1-filters-terms
  4. Back on the main condition window, click on the conditions tab.
    1. Use the ‘new’ option
    2. Scroll to the bottom of the list and click on ‘zHasObjId’
    3. Click on ‘has a value’
    4. Click ‘ok’
      encase-objid-c1-term
  5. Name your condition and click ‘ok’
    encase-objid-c1-final

It is ready to use now. This condition does an extra lookup for every file in your case, so the operation takes a bit longer. Be patient and it will finish. If you haven’t changed any view settings after running a condition, the results will come back without the tree pane. I used the ‘ctrl+space’ shortcut to have EnCase blue-check everything in the view. As you can see, I have 442 out of 363,168 files on this disk with Object IDs associated in NTFS.

encase-objid-c1-result-table

You can change from the table-only view with an easy fix: just use the drop-down and select ‘tree-table’.

encase-objid-c1-change-view

Click on the attributes tab in the bottom pane, and you get the same view as before.

encase-objid-c1-result-detail

 

The next post will cover another condition that allows you to search for a partial or full Object ID value across the evidence in your case. Let me know if you have any questions or other thoughts on something to filter on.

 

James Habben
@JamesHabben

NTFS Object IDs in EnCase

Over on the Hacking Exposed Computer Forensics blog, David Cowen has been posting up weekly challenges. I love that he is investing in the DFIR community (literally with $100 prizes).

He posted a challenge on September 9, 2018 for readers to develop a python script to parse the NTFS $ObjId:$O alternate data stream. He apparently didn’t get any takers since on September 15, 2018 he put up a short post stating exactly that.

Commercial Solution

I am all for Open Source and Free Software options in the DFIR community, and I frequently contribute to that collection through my various GitHub repositories. I have also spent an insane amount of time working with EnCase over the years, so I wanted to show a way to view the data related to Dave’s challenge in a tool that some of you might have available.

Don’t blink!

Here are the steps to see the Object IDs that are assigned to files in EnCase v7+:

  1. Load your local preview or evidence file into the evidence tab
  2. Click on the evidence name to have EnCase start parsing the file system
  3. Find a file you know to have an Object ID
  4. Click the Attributes tab in the view pane

Here is what that looks like:

encase-attr-objid

You can also see that EnCase parses the GUID and displays the various components. Just expand the field, or hover the mouse over it like this:

encase-attr-objid-long

This was just a short post for now. In the next one, I will show how to build a condition to narrow down the view to only those files having Object IDs assigned.

 

James Habben
@JamesHabben

Show and Search for NTFS Owner in EnCase

Windows can be such a weird and wonderful thing, both at the same time. In a digital forensics sense, the artifacts left behind from user activity often give me delight. Those same artifacts can also leave me scratching my head about why they exist in the first place. One of those features is the owner property in the NTFS file permissions.

User Activity

When a user creates a file, Windows typically records that user account as the named owner of the file in the NTFS permissions. Sometimes it assigns a local user group (say, Administrators) instead of a specific user, though I do not know the details of the conditions surrounding that difference. That's not the point of this post anyway.

The steps to see the owner of a file vary a bit depending on the version of Windows you are using. The artifact itself is not affected by the version.

In Windows 8, right click on the file and choose properties. At the top, switch from the general tab to the security tab. Then, click the advanced button at the bottom. A new window will show, and the owner is listed near the top.

encase-owner-win-prop

Showing in EnCase

Seeing the very same data in EnCase is fairly straightforward. Choose a file in the table pane. Then in the view (lower) pane, you will see a tab called permissions. The view will switch and list one of the records as the owner.

encase-owner-view

Forensic Usefulness

As you might have noticed, the file system in the above image looks a lot like a CMS package on a web server. If you did, great eye! Web servers use a specific account to access and store content for the anonymous users that make requests. This user account is assigned permissions on the file system to prevent that anonymous user from going where they aren’t allowed.

Some web applications allow those anonymous users to upload files to be used by the web application or even submitted to the company for some purpose. Because the web server user account is used for these interactions, you will find that user account as the owner for any files that were uploaded through the web application.

In the event of a web server compromise, this web server user account is often involved in the early stages of the attacker's interaction with the computer. Attackers want to get their own files onto that file system to gain more control. These are called web shells, and they offer nearly identical functionality to the typical remote access tool, only through a website interface.

What if we could get EnCase to display all files that are owned by this web server user account? I am glad you asked!

Filtering in EnCase

EnCase offers conditions and filters to limit the files shown on screen. Simply put, conditions are easier to create (point and click) while filters are harder (you type EnScript code). I will show you the steps to create a condition that shows only the files with the prompted value in the owner field. This can be done in EnCase v5 through v8, and the windows will look nearly identical.

First step, find the conditions tab and create a new one. I name mine “find sid as owner”, but you can call it whatever makes sense to you.

Next, we have to create a mini-filter before the condition can function. Go to the filters tab, then double click on the PermissionRoot option on the right. Name it “prm_sid2owner”.

Add a new term. Choose ID in the properties list, choose find in the operators list, leave the value box empty, and check the ‘prompt for value’ checkbox. Click ok.

Add another new term. Choose property in the properties list, choose matches in the operators list, type ‘owner’ in the value box. Click ok.

Now right click on ‘main’ at the top of the tree and choose change logic. Click ok. You should see ‘prm_sid2owner’ listed on the left.

encase-owner-filter-list

Now, go back to the conditions tab and add a new term. At the bottom of the property list, you will find the mini-filter we just created.

encase-owner-condition-list

Now you can apply this to your case. You can supply a full SID value or a partial one. You can also give a list of SID values to search for if you are looking for multiple users.
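
As a side note, if you ever need to spot-check owners on a live system outside of EnCase, a few lines of Python with the pywin32 package (assumed installed) will do it. The webroot path and SID prefix below are just example values:

import os
import pywintypes
import win32security  # from the pywin32 package

def files_owned_by(root, sid_prefix):
    # Walk the tree and yield files whose owner SID starts with the given value
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sd = win32security.GetFileSecurity(
                    path, win32security.OWNER_SECURITY_INFORMATION)
                sid = sd.GetSecurityDescriptorOwner()
            except pywintypes.error:
                continue  # access denied, file in use, etc.
            if win32security.ConvertSidToStringSid(sid).startswith(sid_prefix):
                yield path

# example: substitute the web server account's SID (full or partial)
for hit in files_owned_by(r'C:\inetpub\wwwroot', 'S-1-5-82'):
    print(hit)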

 

Hope this helps! Reach out with any questions or comments.

James Habben
@JamesHabben

Show Your Timezone in EnCase

A question came up on my team about how to adjust time zones on evidence in EnCase. I figured I would put together a short post in case it might help others.

Setting Time Zones

When you start a case with EnCase, it grabs the timezone that is currently being used by the workstation you are running it on. All of the evidence that you bring into that case is assigned that same timezone.

You can apply a timezone change at a couple of levels: first, directly to a single evidence file; second, to multiple evidence files at once. In EnCase v6, you open a case directly into what's called the ‘entries’ view. Entries is a generic name used to refer to any object inside an evidence file, such as files, folders, alternate data streams, NTFS meta files, partitions, etc., even including the evidence file itself. Starting in EnCase v7 (and carried into v8), you are dropped into the ‘evidence’ view and must interact with that list in order to enter the ‘entries’ view. Whatever version you are using, go into the entries view.

To set the timezone, decide if you want an evidence-specific or global change. Then right click on the evidence name or the ‘entries’ item at the top of the tree. Towards the bottom, find the ‘Device’ sub-menu, then choose the ‘Modify time zone settings…’ option.

encase-tz-rc

A small window will pop up to show the list of time zones that EnCase has available. If you are examining a computer that isn’t properly patched with the current Daylight Saving Time rules, you can force the correct offset here.

encase-tz-window

Click OK, and the times showing in EnCase will all be adjusted without having to do anything further.

Showing Your Timezone

I encourage everyone reading this to enable the setting described below; digital forensics requires us to be very accurate and specific. The setting tells EnCase to attach the timezone to every date that is displayed, and it has saved me from reporting an incorrect time more than once. After changing it, your dates will look like these. I typically keep the columns smaller and only expanded the ‘Last Accessed’ field to show the full value.

encase-tz-dates

To make this change, find the ‘Tools’ menu in the bar at the top, and choose the ‘options’ option. Then click on the ‘Date’ tab. Check the box at the top of that tab page.

encase-tz-show

Thanks for reading!

James Habben
@JamesHabben

CCM_RecentlyUsedApps Update on Unicode Strings

The research and development that I did previously for the CCM_RecentlyUsedApps record structure and EnScript carving tool was done against case data I was using during investigations. Unfortunately, I had no data available in which any of the string data had been written in Unicode characters. With the thought that Windows has been designed with international languages in mind, I used the UTF8 codepage when reading, hoping to catch any switch to Unicode-type characters. UTF8 is a very safe alternative to ASCII because it is identical to plain ASCII in the lower range and only expands to multiple bytes for higher code points. I have an update, however, because I got a volunteer from Twitter who graciously did some testing. Thanks @MattNels for the help!

The Tests

The first test that he ran used characters that are not in the standard ASCII range. Characters like ä or ö are Latin-based characters with umlaut dots above, and they fall within the scope of ASCII only if you count the extended high range along with the standard low range.

He created a testing directory on his system, which is under the management of his company’s SCCM deployment services. If you recall from my prior posts on this subject, this artifact is triggered simply by the system being under SCCM management. In this directory, he renamed an executable to include the above-mentioned characters from the high ASCII range. The results show that the record stored those high characters exactly the same as the low range characters. You can see what that looks like in the following image.

The next test he ran was to rename that executable again to something high enough in the Unicode range to get clear of the ASCII characters. He went with “秘密”, which consists of the two glyphs 0x79d8 and 0x5bc6. Keeping in mind our little-endian CPU architecture, we know that the bytes of each character have to be swapped when written to disk as Unicode (UTF-16LE) characters. The text would translate to four bytes on disk as: d8 79 c6 5b.

Another option, going with my earlier assumption/guess, is for the string to be written using UTF8. In my experience, the use of UTF8 is pretty common on OS X and less common on Windows. Nevertheless, it is worth being prepared and seeing what the bytes would look like if it were UTF8. The above glyphs translate into six bytes on disk, three for each character, and we don’t swap the bytes around like we did with Unicode. Confusing, right? Anyways, those bytes would look like this on disk: E7 A7 98 E5 AF 86.
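
You can double-check both byte sequences with a couple of lines of Python (the separator argument to hex() needs Python 3.8 or newer):

text = '秘密'
print(text.encode('utf-16-le').hex(' '))  # d8 79 c6 5b (UTF-16LE, what Windows calls Unicode)
print(text.encode('utf-8').hex(' '))      # e7 a7 98 e5 af 86 (UTF-8)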

Drumroll please…

The result was evidence of switching to Unicode. You can immediately recognize it as Unicode because of the 0x00 bytes between the characters of the “.exe” extension of that file. If you use a hex-to-ASCII converter on the Unicode bytes from above (d8 79 c6 5b) you get back “ØyÆ[”, which lines up nicely with the following image.

Now you ask: How do we programmatically determine if the string was written using Unicode or ASCII? Excellent question, and I am glad that you are tracking with me!

Let’s expand the view of this record a bit, and recall the structure of the format from the last post. Strings in Windows are typically followed by a 0x00 (null) byte to indicate where the string data stops. These are referred to as C-style strings because this is how the C programming language stores strings in memory. In this record, however, the strings are separated by two 0x00 bytes. Take a close look at the following image of the expanded record with the Unicode string.

Did you spot the indicator? Look again at the byte immediately preceding the highlighted string data, and you will see that it is a 0x01 value. This byte was a 0x00 value in all of my testing because I didn’t have any strings with Unicode text in them, or at least not to my knowledge. Since executables need to have these Latin-based extensions, the property will actually appear to end with three 0x00 bytes. The first of those is the high byte of the preceding ‘e’, and because this string has been written entirely in Unicode, the null terminating character mentioned just above gets expanded to two bytes as well. The next byte is then either a 0x00 or a 0x01, indicating the codepage of the next string property.
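
Based on that indicator byte, the string section can be decoded with logic roughly like the sketch below. This is only an illustration of the flag handling described above, not a drop-in parser, and it assumes the buffer starts at the flag byte of the first string property:

def read_strings(buf):
    # buf: bytes of the string section, starting at the codepage flag of the
    # first property. Each property is one flag byte (0x00 ASCII, 0x01 Unicode)
    # followed by a null-terminated string.
    strings = []
    pos = 0
    while pos < len(buf):
        flag = buf[pos]
        pos += 1
        if flag == 0x01:  # UTF-16LE string, terminated by 00 00
            end = buf.find(b'\x00\x00', pos)
            if (end - pos) % 2:
                end += 1  # keep the terminator aligned on a 2-byte boundary
            strings.append(buf[pos:end].decode('utf-16-le'))
            pos = end + 2
        else:             # single-byte string, terminated by 00
            end = buf.find(b'\x00', pos)
            strings.append(buf[pos:end].decode('latin-1'))
            pos = end + 1
    return strings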

An interesting side note on a situation that Matt ran into: using the path “c:\test2\秘密\秘密.exe” for the executable resulted in no records indicating execution. He ran a number of tests surrounding that scenario, and there is something about that path that prevents the recording.

He continued with changing the path to “c:\秘密\秘密.exe”, and the artifact was back. We wanted to get confirmation of that 0x01 indicator byte using another string value. Sure enough, we got it in the following image.

Tool Update

The EnScript that I wrote to carve and parse these records has been updated to properly look for the 0x00 and 0x01 bytes indicating ASCII or Unicode usage. Please reach out to me if you find any problems or have any questions.

Additionally, Matt is adding this artifact to his irFARTpull PowerShell collection. These artifacts can be collected by having PowerShell perform a WMI query against the namespace and class where these records are stored. It should look something like this:
Get-WmiObject -namespace root\ccm\SoftwareMeteringAgent -class CCM_RecentlyUsedApps

Lessons Learned

This is a perfect example of being aware of what your tools are doing behind the scenes and always validating and testing them. Many of the artifacts that we search for and use to show patterns of behavior are detailed through reverse engineering. This process can be helpful, but it is also blind to the data we don't have available to analyze.

If you aren’t a programmer, you can still contribute with testing, or even just thoughts on possible scenarios of failure. Hopefully the authors of the tools out there will be accepting of the feedback, as it will only provide more benefit for the community.

James Habben
@JamesHabben

Secret Archives of Execution Evidence: CCM_RecentlyUsedApps

UPDATE 2017-04-03: Unicode strings are used when needed. See the update post.

I seem to be running into more and more systems that have Windows Prefetch disabled for one reason or another. It is especially frustrating for me as a consultant since I cannot make the changes necessary to enforce the creation of the trace files nor can I implement any kind of central logging. Without this digital forensic artifact, it becomes increasingly difficult to build out a timeline of events across all the systems involved in an incident response.

One of the evidence sources that has shown itself over and over comes from a connection with a Microsoft SCCM server. SCCM has the ability to collect inventory data from many sources, and tracking executable launches is one of them. Collection of this data by the SCCM server isn’t turned on by default; however, the logging occurs on the endpoints regardless of the settings that are configured on the server.

If you search for CCM_RecentlyUsedApps, you will find tons of articles about configuring SCCM to collect this data or how to perform queries to extract the collected data. If you have the ability to push this in your organization, I say do it! If you can’t, then read on so I can show you how to take advantage of this data anyways.

Data Source

The records holding the information behind CCM_RecentlyUsedApps are stored in the collection of files that make up the database behind WMI. The locations are consistent from Windows XP through Windows 10, and you will find them here:
c:\windows\system32\wbem\repository\
c:\windows\system32\wbem\repository\fs\

I have even seen some systems that have what appears to be an old version of the WMI database. It seems to roll like the Windows registry ControlSet keys. When the rebuild process kicks off, a new version of the database is built, and it does not carry the previous information with it. I have seen up to 003, but it would likely go further. The previous versions look like this:
c:\windows\system32\wbem\repository.001\
c:\windows\system32\wbem\repository.001\fs\

This specific artifact was a very critical piece in a previous case. It allowed us to narrow the time window of the compromise to be much more specific. Even a single day of exposure can make a big difference in the fines against the victim company during a PCI Forensic Investigation (PFI).

You will see a handful of files in these locations. They are all needed to link the various records together in order to parse them properly. The guys at FireEye did some work on reverse engineering this database and released a python script to extract all of the available classes and namespaces. You can find their tool here:
https://github.com/fireeye/flare-wmi/tree/master/python-cim

Using this script, you can extract this data using these parameters:
Namespace: root\ccm\SoftwareMeteringAgent
Class: CCM_RecentlyUsedApps

This script was very helpful to me in a number of previous cases, although I have to mention that it is a bit of a pain to get installed properly. The other trouble that I ran into with this script, through no fault of the FireEye team, is that it can only parse the namespaces from the database if the data is not ‘corrupted’. I have found that imaging a live system causes this ‘corruption’ almost half of the time. It is frustrating to know that there are Indicator of Compromise (IOC) hits inside that data blob, but the data won’t allow the parsing.

Different Approach

As I manually looked over those seemingly lost IOC hits, I started to recognize patterns surrounding the hits. The fields holding all the property data seemed to be in the same order for all of the records of a certain system that I was reviewing at the time. I then pulled up a few systems with different OS’s from previous cases and found the same structure. YES!! The perfect setup for carving. Time to reverse engineer the record format.

The index uses a hash value in tracking and sorting structures that I won’t bore you with here. I mention it, though, because this hash is the piece that we will use to find these records. WinXP uses MD5 and newer versions use SHA256. The hash in these records is generated from the class name CCM_RecentlyUsedApps, except the text needs to be upper-cased as CCM_RECENTLYUSEDAPPS and then converted to Unicode: C\x00C\x00M\x00_\x00R\x00… (you get the point).
WinXP MD5:
6FA62F462BEF740F820D72D9250D743C
WinVista+ SHA256:
7C261551B264D35E30A7FA29C75283DAE04BBA71DBE8F5E553F7AD381B406DD8

These hashes are what start the records. They are stored in Unicode themselves, for some reason, so they take 128 bytes for the SHA256 and 64 bytes for the MD5.
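
A quick Python sketch of that hashing, which should reproduce the two values listed above and gives you the byte pattern to search for when carving:

import hashlib

class_name = 'CCM_RecentlyUsedApps'.upper().encode('utf-16-le')
md5_marker = hashlib.md5(class_name).hexdigest().upper()        # WinXP record marker
sha256_marker = hashlib.sha256(class_name).hexdigest().upper()  # Vista+ record marker

# The records store the marker itself as a UTF-16LE hex string,
# so this is the 128-byte pattern to search for on Vista and newer.
search_bytes = sha256_marker.encode('utf-16-le')
print(md5_marker, sha256_marker, len(search_bytes))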

The next 16 bytes following the hash are two 8-byte FILETIME values.

After that will be 2 bytes that tell you the size of the data portion of this record. I have not seen any records using more than 2 bytes, and the max size of 2 bytes is either 65,535 unsigned or 32,767 signed. Either of those provides plenty of space for this data, so I wouldn’t expect it to expand for size purposes. The data portion of the record includes these 2 bytes.

You can see on the right in the screenshot above that the size of the data is 432. You can then see at the bottom that I have highlighted 432 bytes (Sel 432 [1B0h]). You can also see another ‘7C261…’ starting immediately after my selection, although don’t let this fool you into thinking that these records will always be contiguous.

From here, the data is broken into 2 sections. The first section consists of various 4 byte fields with some being offsets and others being property values. The second section contains all the string based property values separated by double 0x00 bytes.

There are 3 values we can extract from the number section that are helpful.
Filesize
Offsets: Vista 178d (128+16+34), XP 114d (64+16+34)

ProductLanguage
Offsets: Vista 194d (128+16+50), XP 130d (64+16+50)

LaunchCount
Offsets: Vista 202d (128+16+58), XP 138d (64+16+58)

The string section always starts with ‘CCM_RecentlyUsedApps’ and is followed by the double 0x00 separator. If there are 4 bytes of 0x00 following, then the next string field is null. If there are 6 bytes of 0x00, then the next 2 string fields are null. Follow the pattern?

The string properties are listed in the following order:
ClassName (always “CCM_RecentlyUsedApps”)
AdditionalProductCodes
CompanyName
ExplorerFilename
FileDescription
FilePropertiesHash
FileVersion
FolderPath
LastUsedTime
LastUsername
MsiDisplayName
MsiPublisher
MsiVersion
OriginalFilename
ProductCode
ProductName
ProductVersion
SoftwarePropertiesHash

There will only be a single 0x00 at the very end of the record. Wasn’t that easy?
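
To tie the layout together, here is a minimal carving sketch based on the structure described above, using the Vista+ offsets and little-endian fields. It is illustrative only: there is no error handling, it treats the strings as single-byte text, and it ignores the Unicode flag covered in the update post above, so treat the tools linked below as the real implementations.

import struct

# SHA256 of 'CCM_RECENTLYUSEDAPPS' in UTF-16LE, stored in the repository as a
# UTF-16LE hex string (swap in the MD5 marker and XP offsets for WinXP)
SHA256_MARKER = '7C261551B264D35E30A7FA29C75283DAE04BBA71DBE8F5E553F7AD381B406DD8'
MARKER_BYTES = SHA256_MARKER.encode('utf-16-le')  # 128 bytes

STRING_FIELDS = ['ClassName', 'AdditionalProductCodes', 'CompanyName', 'ExplorerFilename',
                 'FileDescription', 'FilePropertiesHash', 'FileVersion', 'FolderPath',
                 'LastUsedTime', 'LastUsername', 'MsiDisplayName', 'MsiPublisher',
                 'MsiVersion', 'OriginalFilename', 'ProductCode', 'ProductName',
                 'ProductVersion', 'SoftwarePropertiesHash']

def carve_records(data):
    pos = data.find(MARKER_BYTES)
    while pos != -1:
        num_start = pos + len(MARKER_BYTES) + 16  # skip marker and two 8-byte FILETIMEs
        size = struct.unpack_from('<H', data, num_start)[0]
        record = {
            'FileSize': struct.unpack_from('<I', data, pos + 178)[0],
            'ProductLanguage': struct.unpack_from('<I', data, pos + 194)[0],
            'LaunchCount': struct.unpack_from('<I', data, pos + 202)[0],
        }
        # The string section starts with the class name and uses double 0x00 separators
        str_start = data.find(b'CCM_RecentlyUsedApps', num_start, num_start + size)
        if str_start != -1:
            strings = data[str_start:num_start + size].split(b'\x00\x00')
            record.update(zip(STRING_FIELDS, (s.decode('utf-8', 'replace') for s in strings)))
            yield record
        pos = data.find(MARKER_BYTES, pos + 1)

# usage sketch:
# with open('OBJECTS.DATA', 'rb') as f:
#     for rec in carve_records(f.read()):
#         print(rec.get('ExplorerFilename'), rec.get('LaunchCount'))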

New Python Tool

After I determined these structures, I was chatting with Willi Ballenthin since he was involved in the research of the database structure. He said something like “that tool sounds pretty neat” and then followed up saying “possibly similar to this” and pointed me to a blog post by David Pany at FireEye.
https://www.fireeye.com/blog/threat-research/2016/12/do_you_see_what_icc.html

Sure enough, David beat me to it with a python script to search for the classname hashes and parse the record structure. The good news is that we arrived at the same basic approach and record structures. Validation is always nice. His python script is on GitHub here:
https://github.com/davidpany/WMI_Forensics/blob/master/CCM_RUA_Finder.py

I have had some trouble running this python script against my systems, but I haven’t spent the time to determine the cause. The output is a CSV file, but I don’t have any screenshots to show because of the errors I ran into.

New EnScript Tool

I decided to implement this approach in EnScript. My cases have involved upwards of 500 systems for analysis. Using a python-based approach would force me to either extract all those files or use a mounting or parsing solution to expose them. By using EnScript in EnCase v7 or v8, I can run the EnScript over all system images in one pass. I was able to do this successfully in testing on a recent case with 73 systems in the same case. EnCase proved to be a powerful tool in this specific scenario.

The EnScript starts off with a GUI to give you the option of running against all files in the case or a smaller subset designated by a blue check or tag selection.

I found records existing in OBJECTS.DATA and INDEX.BTR files. Some seem to be in areas of the file that have been deallocated from the active records of the database. Additionally, I have found quite a large number of records in the PAGEFILE.SYS file as well. You will see a selection option in the GUI for these common filenames.

The output of this EnScript is a CSV file. It includes a few columns in addition to the properties that were parsed from the records: evidence filename to indicate the system source, item path to show which file it was found in, and file offset to manually validate the data later if needed.

I encourage you to use Excel’s data deduplication function, since I ran into a number of bugs in EnCase trying to make this EnScript work and there are some hacky workarounds in the code currently. Dedupe on all columns except item path and file offset. This will remove dupes that are found in both the pagefile.sys and objects.data files.
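
If you would rather script the dedupe than do it in Excel, something like this works on the CSV output (pandas assumed; the filenames and column names here are illustrative, so adjust them to match the actual CSV header):

import pandas as pd

df = pd.read_csv('ccm_rua_output.csv')
# drop duplicates on every column except the two location columns
keep_cols = [c for c in df.columns if c not in ('Item Path', 'File Offset')]
df.drop_duplicates(subset=keep_cols).to_csv('ccm_rua_deduped.csv', index=False)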

I suspect we might be able to pull some of these records from unallocated clusters, but I haven’t found any there yet. Please let me know if you do!

You can grab the latest version of the EnScript on GitHub:
https://github.com/JamesHabben/ccm-rua-enscript

See the followup post about the forensic meanings.

James Habben
@JamesHabben