Block Building Checklist

It is important to understand how the artifacts you rely on during an investigation are created. To that end, I wanted to share my block building checklist so others can recreate the process. I will walk through the commands used to prepare the blocks for distribution and how to build the block libraries, including the removal of a whitelist.
Block Preparation
I have used Windows, Linux and Mac OS X over the course of this project. I recommend using the operating system that you're most comfortable with for downloading and unpacking the VirusShare.com torrents. The best performance will come from using solid state drives during the block building steps. The more memory available during whitelisting, the better. Far fewer system resources are necessary when only doing hash searches and comparisons during block hunting.
We saw this command previously in the Block Hunting post, now with a new option. The -x option disables parsers so that bulk_extractor only generates the sector block hashes, reducing generation time.
bulk_extractor -x accts -x aes -x base64 -x elf -x email -x exif -x find -x gps -x gzip -x hiberfile -x httplogs -x json -x kml -x msxml -x net -x pdf -x rar -x sqlite -x vcard -x windirs -x winlnk -x winpe -x winprefetch -x zip -e hashdb -o VxShare199_Out -S hashdb_mode=import -S hashdb_import_repository_name=VxShare199 -S hashdb_block_size=512 -S hashdb_import_sector_size=512 -R VirusShare_00199
The following steps help reduce disk storage requirements and improve reporting cleanliness for the sector block hash database. A similar process can be used to migrate from hashdb version one to version two. One improvement that I still need to make is to use JSON instead of DFXML, an option released at OSDFCon 2015 by Bruce Allen.
We need to export the sector block hashes out of the database so that the suggested modifications can be made to the flat file output.
hashdb export VxShare199_Out/hashdb.hdb VxShare199.out
  • hashdb – executed application
  • export – export sector block hashes as a DFXML file
  • VxShare199_Out/ – relative folder path to the hashdb
  • hashdb.hdb – default hashdb name created by bulk_extractor
  • VxShare199.out – flat file output in DFXML format
Copy the first two lines of the VxShare199.out file into a new VxShare199.tmp flat file.
head -n 2 VxShare199.out > VxShare199.tmp
Append the contents of the VxShare199.out file, starting at line twenty-two, to the existing VxShare199.tmp file. The image below indicates which lines this command removes. The line count may vary depending on the operating system or the versions of bulk_extractor and hashdb installed.
tail -n +22 VxShare199.out >> VxShare199.tmp
The sed command reads the VxShare199.tmp file, then removes the path and the beginning of the file name before writing the result into the new VxShare199.dfxml file. The highlighted text in the image below indicates what will be removed.
sed 's/VirusShare_00199\/VirusShare\_//g' VxShare199.tmp > VxShare199.dfxml
Create an empty hashdb with the sector size of 512 using the -p option. The default size is 4096 if no option is provided.
hashdb create -p 512 VxShare199
Import the processed VxShare199.dfxml file into the newly created VxShare199 hashdb database.
hashdb import VxShare199 VxShare199.dfxml
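If you process many archives, these five steps are easy to script. Here is a minimal Python sketch of the same pipeline, assuming hashdb is on your PATH and that the DFXML header layout matches the head/tail line counts shown above; adjust the skipped line range if your versions differ.

```python
# Minimal sketch of the preparation steps above; assumes hashdb is on the PATH
# and that the DFXML header layout matches the head/tail line counts shown earlier.
import re
import subprocess

archive = 'VirusShare_00199'   # torrent folder processed by bulk_extractor
tag = 'VxShare199'             # repository name used during import

# Export the sector block hashes from the bulk_extractor-generated database.
subprocess.check_call(['hashdb', 'export', tag + '_Out/hashdb.hdb', tag + '.out'])

# Keep lines 1-2 and 22 onward, dropping the metadata in between,
# and strip the folder/file-name prefix just like the sed command above.
with open(tag + '.out') as src, open(tag + '.dfxml', 'w') as dst:
    for number, line in enumerate(src, start=1):
        if 3 <= number <= 21:
            continue
        dst.write(re.sub(archive + r'/VirusShare_', '', line))

# Create the distribution database and import the cleaned DFXML.
subprocess.check_call(['hashdb', 'create', '-p', '512', tag])
subprocess.check_call(['hashdb', 'import', tag, tag + '.dfxml'])
```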
I compress and upload the hashdb database for distribution, saving these steps for everyone else.
Building Block Libraries
The previously generated hashdb databases can be found at the following link.
Create an empty hashdb called FileBlock.VxShare for the VirusShare.com collection.
hashdb create -p 512 FileBlock.VxShare
Add the VxShare199 database to the FileBlock.VxShare database.  This step will need to be repeated for each database. Upkeep is easier when you keep the completely built FileBlock.VxShare database for ongoing additions of new sector hashes.
hashdb add VxShare199 FileBlock.VxShare
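When folding in the whole collection, a short loop saves a lot of typing. A sketch, with the VxShare database naming and range as assumptions to adjust for whatever you actually built:

```python
# Sketch: repeat the add step for every prepared VxShare database.
# The naming and numeric range are assumptions; adjust to your collection.
import subprocess

for n in range(0, 200):
    subprocess.check_call(['hashdb', 'add', 'VxShare%d' % n, 'FileBlock.VxShare'])
```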
Download the sector hashes of the NSRL from the following link.
Create an empty hashdb called FileBlock.NSRL for the NSRL collection.
hashdb create -p 512 FileBlock.NSRL
The NSRL block hashes are stored in a tab-delimited flat file format. The import_tab option is used to import each file; the files are split by the first character of the hash value, 0-9 and A-F, so they can be imported in a loop as sketched below. I also keep a copy of the built FileBlock.NSRL for future updates.
hashdb import_tab FileBlock.NSRL MD5B512_0.tab
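A quick loop covers all sixteen tab files. The MD5B512_A.tab through MD5B512_F.tab names are assumed to follow the same pattern as the file shown above:

```python
# Sketch: import all sixteen NSRL tab files. The _A through _F names are
# assumed to follow the MD5B512_0.tab pattern shown above.
import subprocess

for c in '0123456789ABCDEF':
    subprocess.check_call(['hashdb', 'import_tab', 'FileBlock.NSRL', 'MD5B512_%s.tab' % c])
```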
Remove NSRL Blocks
Create an empty hashdb called FileBlock.Info for the removal of the whitelist.
hashdb create -p 512 FileBlock.Info
This command will remove the NSRL sector hashes from the VirusShare.com collection creating the final FileBlock.Info database for block hunting.
hashdb subtract FileBlock.VxShare FileBlock.NSRL FileBlock.Info
The initial build is machine time intensive but once done the maintenance is a walk in the park.
Happy Block Hunting!!
John Lukach

Critical Stack Intel Feed Consumption

Critical Stack provides a free threat intelligence aggregation feed through their Intel Market for consumption by the Bro network security monitoring platform. This is a fantastic service that is provided for free!! Special thanks to those who have contributed their feeds for all to take advantage of the benefits!! Installation is beyond the scope of this post, as it is super easy with decent documentation available on their website. The feed updates run roughly hourly by default, writing to a tab-delimited file on disk.

My goal was to make the IP address, domain and hash values accessible through a web interface for consumption by other tools in your security stack. Additionally, I didn't want to create another database structure; instead, the values are read into memory for comparison each time the script restarts. I decided to use Twisted Python by Twisted Matrix Labs to create the web server. Twisted is an event-driven networking engine written in Python. The script provides a basic foundation without entering into the format debate between STIX and JSON. I kept it simple…

Twisted Python Installation

The following installation steps work on Ubuntu 14.04 as that is my preference.

apt-get install build-essential python-setuptools python-dev python-pip

pip install service_identity

wget https://pypi.python.org/packages/source/T/Twisted/Twisted-15.5.0.tar.bz2

bzip2 -d Twisted-15.5.0.tar.bz2

tar -xvf Twisted-15.5.0.tar

cd Twisted-15.5.0/

python setup.py install

The pip package installation allows for future use of the SSL and SSH capabilities in Twisted.

TwistedIntel.py Script

The script has a few configurable pieces:

  • The default installation file and path containing the Critical Stack Intel Feed artifacts.
  • The field separator on each line that gets loaded into the Python list in memory.
  • The output that gets displayed on the dynamically generated web page based on user input.
  • The port that the web server runs on for the end-user to access the web page.
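To give a feel for how little code this takes, here is a minimal sketch of the idea rather than the actual TwistedIntel.py from the gist below. The feed path and port shown are assumptions, so adjust them for your install.

```python
# Minimal sketch of the idea (not the gist below): load the tab-delimited
# Critical Stack feed into memory and answer lookups over HTTP with Twisted.
from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.server import Site

# Assumed default feed location; confirm the path on your own sensor.
FEED = '/opt/critical-stack/frameworks/intel/master-public.bro.dat'

def load_indicators(path):
    indicators = set()
    with open(path) as fh:
        for line in fh:
            if line.startswith('#'):                 # skip the Bro intel header
                continue
            fields = line.rstrip('\n').split('\t')   # tab-delimited feed file
            if fields and fields[0]:
                indicators.add(fields[0])            # IP, domain, or hash value
    return indicators

class IntelLookup(Resource):
    isLeaf = True

    def __init__(self, indicators):
        Resource.__init__(self)
        self.indicators = indicators

    def render_GET(self, request):
        value = request.uri.lstrip('/')              # e.g. GET /8.8.8.8
        return 'FOUND' if value in self.indicators else 'NOT FOUND'

if __name__ == '__main__':
    reactor.listenTCP(8080, Site(IntelLookup(load_indicators(FEED))))  # port 8080 assumed
    reactor.run()
```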
TwistedIntel.py Usage

Once TwistedIntel.py is running, browse to the web server with an IP address, domain or hash value provided in the URL path. If the result returns FOUND, that value is part of the Critical Stack Intel Feed.

Download TwistedIntel.py

https://gist.github.com/jblukach/00f68e560dac78e6bd29

Feel free to change the code to meet your needs; I really appreciate any contributions back to the DFIR community.

 Happy Coding!!
John Lukach

Updated 12/15/2015

  •       TwistedIntel2.py displays the feed that an IP address, domain, or hash originated from.
  •       Upstart configuration file for running the Twisted Python script at startup.
  •       Crontab configuration that restarts the script hourly after Critical Stack Intel updates.

 

Blocks in Practical Use

Last week, John Lukach put up a post about how to use some tools to find pieces of files left behind after being deleted, and even partially overwritten. I wanted to put together a short post to give a practical use of this technique for a recent case of mine.

The Case

The request came in as a result of an anomaly detected in traffic patterns. It looked like a user had uploaded a significant amount of data to a specific cloud storage website. My client identified the suspected machine, and got an image to preserve the data. Unfortunately, this anomaly wasn’t caught right away, so a couple months passed before action was taken. They took a pass on the image with some forensic tools, but they weren’t able to identify anything relating to this detection, so they asked me for a second look.

My Approach

I started off with a standard pass of the forensic tools. Not because I don’t trust their team, but because tool versions can have an effect on the artifacts that get extracted and I wanted to be sure of my findings. Besides, if you don’t charge for machine time, then this really doesn’t add much in the way of cost in the end. This process did two things for me. The first thing I got from it was that I could not find any artifacts related to the designated website. The second thing was that the history and cache looked very normal. In other words, it didn’t look like the user had cleared any history or cache in an effort to clean up after themselves.

Of course, I am now thinking that they could have done a targeted cleanup and deleted very specific items from their browser. I get some reassurance that this was not the case from some of the deep scan options available in the forensic tools that I used. These tools will scrounge through unallocated clusters, on command, in search of data patterns that are potential matches for deleted and lost internet artifacts. I found none, again, in this more extensive search.

Let's put on our tin foil hats now, and really go for it. The user could have wiped and deleted individual files, then gone into the history and cache lists to remove the pointers to those files. This would take care of the files that are still currently allocated. What about all those files that get cataloged in the cache, but quickly discarded because the server instructed this by sending cache control headers? These files would have been lost to unallocated clusters at some point before the user thought about their cleanup actions. The only way to cover those tracks would be to use a tool that can wipe unallocated clusters. If the user went to this extent, dare I say they deserve to get away with it?

What If?

My client had a secondary thought in the event that we were not able to find traces of user action involved with this detection. They wanted to see if the system was infected with some malware that would have generated this traffic. I did the standard AV scans with multiple vendors and didn’t find anything. I followed that up with a review of the various registry locations that allow for malware to persist and autostart.

Verizon has a great intel team, and I asked them for information on possible campaigns involving the designated website. They gave me some great leads and IOCs to search for, but they did not pan out in this case. What a great resource to have!

After reviewing all of this, I was not able to find any indications of a malware infection, much less one that was capable of performing the suspected actions.

Points to Disprove

Here is the point where block hashing can make a real difference. I will show you how I used block hashing to disprove (as much as the available data will allow) three different points.

  1. Lost Files – User deleted individual history and cache entries to clean up their tracks and the file system lost track of the related files
  2. Unallocated – User went psycho-nutjob-crazy and wiped unallocated clusters after deleting records
  3. Malware – User wasn't involved and malware did the dirty deed

Disprove Lost Files

In order to prove the user did not delete selected entries from the browser history and cache, I need to look for the related files on disk, most likely in unallocated clusters. The file system no longer has metadata about them, so the forensic tools will not discover them as deleted or orphaned files. I could carve files from unallocated clusters using the known headers of various file types, but this is a very broad approach. I want to identify specific files related to this website, not discover any and all pictures from unallocated.

I start by using HTTrack to crawl the designated website. I want to pull as many files down as I can. These are all the files that a browser would see, and download, during normal interactions with the website. I got a collection of a few hundred files of various types: JPEG, GIF, SWF, HTML, etc.

The next step is to split and block hash these files. For this step, I used EnCase and the File Block Hash Map Analysis (FBHMA) EnScript written by Simon Key of EnCase Training. FBHMA can do both sides of the block hashing technique and presents an awesome graphic for partially located files. I applied it against my collected files, and then applied those hashes against the rest of the disk. The result was zero matches.
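FBHMA does this inside EnCase, but the underlying technique is simple enough to sketch in Python under a couple of assumptions: a 512-byte block size and a raw dd-style image.

```python
# Sketch of the block hashing technique (not the FBHMA EnScript): hash every
# 512-byte block of the collected website files, then scan a raw image sector
# by sector for matches. Assumes a dd-style image and 512-byte sectors.
import hashlib
import os

BLOCK = 512

def block_hashes(folder):
    hashes = set()
    for root, _, names in os.walk(folder):
        for name in names:
            with open(os.path.join(root, name), 'rb') as fh:
                while True:
                    chunk = fh.read(BLOCK)
                    if len(chunk) < BLOCK:       # skip the short tail block
                        break
                    hashes.add(hashlib.md5(chunk).hexdigest())
    return hashes

def scan_image(image_path, hashes):
    matches = []
    with open(image_path, 'rb') as fh:
        offset = 0
        while True:
            sector = fh.read(BLOCK)
            if len(sector) < BLOCK:
                break
            if hashlib.md5(sector).hexdigest() in hashes:
                matches.append(offset)
            offset += BLOCK
    return matches

# Usage: print(scan_image('suspect.dd', block_hashes('httrack_output')))
```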

This technique is not affected by the amount of fragmentation or the amount of overwriting, as long as there are some pieces left behind. The only way to beat it is to completely overwrite the entire set of blocks for all of the files in question, a very unlikely scenario in such a short time without deliberate action.

Disprove Unallocated

This point takes a little more work to disprove, and it might be subject to your own interpretation. The idea here is that files have been lost to unallocated clusters without control. The user was not able to do a targeted wipe of the file contents before deletion, so the entire area of unallocated clusters would have to be wiped to assure cleanliness.

The user could make a pass with a tool that overwrites every cluster with 0x00 data. This is enough to clear the data from forensic tools, but it can leave behind a pretty suspicious trail. If you used block hashing against this, it would quickly become apparent that the area was zeroed out. Bulk Extractor provides some entropy calculations that make quick work of this scenario.

The other option, and the one more likely to be used, is to have the wipe tool write random data to the clusters. Most casual and even 'advanced' computer users picture common files like ZIP and JPEG as a messy clump of random data thrown together in some magic way that draws pictures or spells words. In DFIR, we learn early on that seemingly random data is not actually random. There are recognizable structures in most files, and unless the file involves some type of compression, it is not truly random when measured in terms of entropy. My point in all of this is that a wiping pass that writes random data would cause every cluster of unallocated space to show high entropy values. This would raise eyebrows because it is not normal behavior. Some blocks contain compression or encryption and would show high entropy, but others are plain text, which registers rather low on the entropy scale.

So with a single pass of Bulk Extractor, having it calculate hashes and entropy, I was able to determine that 1) unallocated clusters were not zeroed out and 2) unallocated clusters were not completely and truly random. It, again, looked very normal.
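If you want to see the math behind that reasoning, here is a rough sketch that computes Shannon entropy per 512-byte block of a raw image. Bulk Extractor already does this for you; the sketch just shows what the numbers mean: near 0 bits per byte suggests zero-wiping, near 8 suggests random-data wiping, and a normal drive shows a mix.

```python
# Sketch: Shannon entropy per 512-byte block of a raw image. Zeroed areas
# score near 0 bits/byte, random or encrypted data near 8, normal data a mix.
import math
from collections import Counter

def entropy(block):
    counts = Counter(block)
    total = float(len(block))
    return -sum((n / total) * math.log(n / total, 2) for n in counts.values())

def entropy_profile(image_path, block_size=512):
    with open(image_path, 'rb') as fh:
        offset = 0
        while True:
            block = fh.read(block_size)
            if len(block) < block_size:
                break
            yield offset, entropy(block)
            offset += block_size

# Usage: for off, e in entropy_profile('suspect.dd'): print(off, round(e, 2))
```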

Disprove Malware

The last point to disprove was the existence of malware on the system. I had already established that there were no indications of allocated malware, but I can't ignore the possibility of there having been some malware installed which later got deleted or uninstalled for some reason. Again, without a full and controlled wiping of this data off the drive, it would leave artifacts in unallocated clusters for me to find with block hashing.

This step is a much larger undertaking, because I am employing the full 20+ million samples that are generously shared by VirusShare.com and its users. I didn't have to do all of the work, though, because John has done that for us all. He has a GitHub repo with all the block hashes from VirusShare. Just be sure to remove the NSRL block hashes, since those malware authors can be quite lazy in copying code from other executable files. It's a beast to get up and running, but well worth the effort. Maintaining it is much easier after it's built.

Now I have a huge data set of known malware, adware, and spyware sample files, and various payload transports like DOC and PDF files. If there was anything bad on this system, my data set is very likely to find it. The result was no samples identifying more than a single block on the disk. You have to allow for a threshold of matches with a sample set this size. A single matching block out of a possible 1,000 blocks for a 500 KB sample file is not interesting.

Value of Blocks

This turned out to be a rather long post, but oh well. I hope this helps you to see the value in the various techniques of applying block level analysis in your cases. The incoming data in our cases is only getting larger, and we need to be smart about how we analyze it. Block hashing is just one way of letting our machines do the heavy work. Don’t be afraid to let your machine burn for a while.

You may be thinking to yourself that this is not a perfect process when it came to the malware point, and I may agree with you. We are advancing our tools, but we need to advance our understanding and interpretation of the results as well. For now, we handle that interpretation as investigators but the tools need to catch up. Harlan recently posted about this as a reflection on OSDFcon and it’s worth a read and consideration.

Happy Hunting!
James Habben
@JamesHabben

Block Hunting

In DFIR practices, we use hash algorithms to identify and validate data of all types. The typical use is applying them against an entire file, and we get a value back that represents that file as a whole. We can then use those hash values to search for Indicators of Compromise (IOC) or even eliminate files that are known to be safe as indicated by collections such as National Software Reference Library (NSRL). In this post, however, I am going to apply these hashes in a different manner.

The complete file is broken down into smaller chunks that are hashed for identification. You will primarily work with two types of blocks: clusters and sectors. A cluster block is tied to the operating system, while a sector block corresponds to the physical disk. For example, Microsoft Windows by default has a cluster size of 4,096 bytes made up of eight 512-byte sectors, a layout that is common across many operating systems. Sectors are the smallest addressable area on the disk, providing the most accuracy for block hunting.

Here are the block hunting techniques I will demonstrate:

  1. locate sectors holding identifiable data
  2. determine if a file has previously existed

I will walk you through the command line process, and then provide links to a super nice GUI. As an extra bonus, I will tell you about some pre-built sector block databases.

Empty Image or Not

If you haven’t already, at some point you will receive an image that appears to be nothing but zeroes. Who wants to scroll through terabytes of unallocated space looking for data? A quick way to triage the image is to use bulk_extractor to identify known artifacts such as internet history, network packets, carved files, keywords and much more. What happens if the artifacts are fragmented or unrecognizable?

This is where sector hashing with bulk_extractor in conjunction with hashdb comes in handy to quickly find identifiable data. A lot of great features are being added on a regular basis, so make sure you are always using the most current versions found at: http://digitalcorpora.org/downloads/hashdb/experimental/

Starting Command

The following command will be used for both block hunting techniques.

bulk_extractor -e hashdb -o Out -S hashdb_mode=import -S hashdb_import_repository_name=Unknown -S hashdb_block_size=512 -S hashdb_import_sector_size=512 USB.dd

  • bulk_extractor – executed application
  • -e hashdb – enables usage of the hashdb application
  • -o Out – user defined output folder created by bulk_extractor
  • -S hashdb_mode=import – generates the hashdb database
  • -S hashdb_import_repository_name=Unknown – user defined hashdb repository name
  • -S hashdb_block_size=512 – size of block data to read
  • -S hashdb_import_sector_size=512 – size of block hash to import
  • USB.dd – disk image to process

Inside the Out folder that was declared by the -o option, you will find the generated hashdb.hdb database folder. Running the next command will export the collected hashes into DFXML format for review.

hashdb export hashdb.hdb out.dfxml

Identifying Non-Zero Sectors

The dfxml output will provide the offset in the image where each non-low-entropy sector block was identified. This is important to help limit false positives, where a low-value block could appear across multiple good and evil files. Entropy is a measurement of randomness. An example of a low-entropy block would be one containing all 0x00 or 0xFF data for the entire sector.

Here is what the dfxml file will contain for an identified block.

Use your favorite hex editor or forensic software to review the contents of the identified sectors for recognizable characteristics. Now we have determined that the drive image isn't empty, and it didn't require a large amount of manual effort. Just don't tell my boss, and I won't tell yours!

Deleted & Fragmented File Recovery

Occasionally, I receive a request to determine if a file has ever existed on a drive. This file could be intellectual property, a customer list or a malicious executable. If the file is allocated, this can be done in short order. If the file doesn't exist in the file system, it will be nearly impossible to find without a specialized technique. In order for this process to work, you must have a copy of the file that can be used to generate the sector hashdb database.

This command will generate a hashdb.hdb database of the BadFile.zip designated for recovery.

bulk_extractor -e hashdb -o BadFileOut -S hashdb_mode=import -S hashdb_import_repository_name=BadFile -S hashdb_block_size=512 -S hashdb_import_sector_size=512 BadFile.zip

This database will be used to scan the drive of interest and run the comparisons. I am targeting a single file, but the command above can be applied to multiple files inside subfolders by using the -R option against a specific folder.

The technique will be able to identify blocks of a deleted file, as long as they haven’t been overwritten. It doesn’t even matter how fragmented the file was when it was allocated. In order to use the previously generated hashdb database to identify the file (or files) that we put into it, we need to switch the hashdb_mode from import to scan.

bulk_extractor -e hashdb -S hashdb_mode=scan -S hashdb_scan_path_or_socket=hashdb.hdb -S hashdb_block_size=512 -o USBOut USB.dd

Inside the USBOut output folder, there is a text file called identified_blocks.txt that records the matching hashes and image offset location. If the generated hashdb database contained multiple files, the count variable will tell you how many files contained a matching hash for each sector block hash.
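When the database holds many files, a quick tally of identified_blocks.txt helps prioritize review. A rough sketch, assuming each line is tab-delimited with the image offset first and the block hash second; the exact layout varies between hashdb versions, so check yours first.

```python
# Rough sketch: tally identified_blocks.txt matches per block hash.
# Assumes tab-delimited lines of offset, hash, ...; the layout varies by version.
from collections import Counter

def tally(path):
    per_hash = Counter()
    with open(path) as fh:
        for line in fh:
            if line.startswith('#') or not line.strip():
                continue
            fields = line.split('\t')
            if len(fields) >= 2:
                per_hash[fields[1]] += 1      # count matching sectors per hash
    return per_hash

# Usage: for h, n in tally('USBOut/identified_blocks.txt').most_common(10): print(n, h)
```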

Additional information can be obtained by using the expand_identified_blocks command option.

hashdb expand_identified_blocks hashdb.hdb identified_blocks.txt

Super Nice GUI

SectorScope is a Python 3 GUI interface for this same command line process that was presented at OSDFCon 2015 by Michael McCarrin and Bruce Allen. You definitely want to check it out: https://github.com/NPS-DEEP/NPS-SectorScope

Pre-Built Sector Block Databases

The last bit of this post are some goodies that would take you a long time to build on your own. I know, because I built one of these sets for you. The other set is provided by the same great folks at NIST that give us the NSRL hash databases. They went the extra step to provide us with a block hash list of every file contained in the NSRL that we have been using for years.

Subtracting the NSRL sector hashes from your hashdb will remove known blocks.

http://www.nsrl.nist.gov/ftp/MD5B512/

The VirusShare.com collection is also available for evil sector block hunting.

https://github.com/jblukach/FileBlock.Info#aquire-copy

Happy Block Hunting!!
John Lukach

Unified We Stand

Big news happened at #OSDFcon this week. Volatility version 2.5 was dropped. There are quite a number of features that you can read about, but I wanted to take a few minutes to talk about one feature in particular. There have been a number of output options in the past versions of Volatility, but this release makes the different outputs so much easier to work with. The feature is called Unified Output.

This post is not intended to be a ‘How To’ of creating a Volatility plugin. Maybe another day. I just wanted to show the ease of using the unified output in these plugins. If you are feeling like taking on a challenge, take a look through the existing plugins and find out which of them do not yet use the unified output. Give yourself a task to jump into open source development and contribute to a project that has likely helped you to solve some of your cases!

Let me give you a quick rundown of a basic plugin for Volatility.

Skeleton in the Plugin

The framework does all the hard work of mapping out the address space, so the code in the plugin has an easier job of discovering and breaking down the targeted artifacts. It is similar to writing a script to parse data from a PDF file versus having to write code into your script that reads the MBR, VBR, $MFT, etc.

To make a plugin, you have to follow a few structural rules. You can find more details in this document, but here is a quick rundown.

  1. You need to create a class that inherits from the base plugin class. This gives your plugin structure that Volatility knows about. It molds it into a shape that fits into the framework.
  2. You need a function called calculate. This is the main function that the framework is going to call. You can certainly create many more functions and name them however you wish, but Volatility is not going to call them since it won’t even know about them.
  3. You need to generate output.

Number 3 above is where the big change is for version 2.5. In the past versions, you would have to build a function for each format of output.

For example, you would have a render_text to have the results of your plugin output basic text to stdout (console). Here is the render_text function from the iehistory.py plugin file. The formatting of the data has to be handled by the code in the plugin.

If you want to allow CSV output from that same plugin, then you have to create another function that formats the output accordingly. Again, the formatting has to be handled in the plugin code.
For any other format, such as JSON or SQLite, you would have to create a function with code to handle each one.

Output without the Work

With the unified output, you define the column headers and then fill the columns with values. Similar to creating a database table, and then filling the rows with data. The framework then knows how to translate this data into each of the output formats that it supports. You can find the official list on the wiki, but I will reprint the table for a quick glance while you are reading here.
There are requirements for using this output format, similar in fashion to building a plugin in the first place:
  1. You need to have a function called unified_output which defines the columns
  2. You need to have a function called generator which fills the rows with data

Work the Frame

The first step in using the unified output is setting up your columns by naming the headers. Here is the unified_output function from the same iehistory.py plugin.
Then you define a function to fill each of those columns with the data for each record that you have discovered. There is no requirement on how you fill these columns; they just need the data.
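Here is a bare-bones sketch of how those two functions fit together in a plugin. The columns and values are made up for illustration rather than taken from iehistory.py, but the TreeGrid structure is what version 2.5 expects.

```python
# Bare-bones sketch of the unified output pattern (illustrative columns, not
# the actual iehistory.py code). TreeGrid ships with Volatility 2.5.
import volatility.plugins.common as common
import volatility.utils as utils
import volatility.win32.tasks as tasks
from volatility.renderers import TreeGrid

class ExamplePlugin(common.AbstractWindowsCommand):
    """Lists process names and PIDs using unified output."""

    def calculate(self):
        addr_space = utils.load_as(self._config)
        for task in tasks.pslist(addr_space):
            yield task

    def unified_output(self, data):
        # Define the columns once; the framework renders text/json/sqlite/html.
        return TreeGrid([("Process", str), ("PID", int)],
                        self.generator(data))

    def generator(self, data):
        # Fill one row per record; values must match the column types above.
        for task in data:
            yield (0, [str(task.ImageFileName), int(task.UniqueProcessId)])
```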
The other benefit from this unified output is that a new output format can be easily added. You can see the existing modules, and add to it by writing code of your own. How about a MySql dump file format? Again, dig in and do some open source dev work!

Experience the Difference

Allow me to pick on the guys that won 1st place in the recent 2015 Volatility plugin contest for a minute, especially since I got 2nd place behind them. Nope, I am not bitter… All in fun! They did some great research and made a great plugin.

When you run their plugin against a memory image, you will get the default output to stdout.

If you try to change that output format to something like JSON, you will get an error message.

The reason for this is that they used the previous style of rendering. The nice part is that if they change the code to add JSON output, the unified output would also support SQLite, XLS, and any other rendering format provided by the framework. Thanks to the FireEye guys for being good sports!

Now, I will use one of the standard plugins to display a couple different formats. PSList gives us a basic list of all the processes running on the computer at the time the memory image was acquired.

Here is the standard text output.

Here is JSON output. I added the --output=json to change it. It doesn't look that great in the console, but it would be great in a file to import into some other tool.
Here is HTML output. Again, the change is made with --output=html.

Hear My Plea

Allow me to get a shameless plug for my own project now. Evolve is a web based front end GUI that I created to interface with Volatility. In order for it to work with the plugins, they have to support SQLite output. The easiest way of supporting SQLite is to use the new unified output feature. The best part is that it works for a ton of other functions as well, with all the different formats that are supported.

If you have written, or are writing, a plugin for Volatility, make it loads better by using the unified format. We have to rely on automation with the amount of data that we get in our cases today, so let’s all do our part!

Thanks to the Volatility team for all of their hard work in the past, now, and in the future to come. It is hard to support a framework like this without being a for-profit organization. We all appreciate it!

James
@JamesHabben

Say Uncle

You have all run into this dreaded case, I’m sure. Maybe a few times. I am talking about that case where you are asked to prove or disprove a given action of a user, but the evidence just doesn’t give in. You poke, you prod, you turn it inside-out. Nothing you do seems to be getting you any closer to a conclusion. These are the cases that haunt me for a long time. I still have some rumbling around in my head from over 10 years ago.

What am I missing?
What haven’t I thought of?
Are my tools parsing this data correctly?
Am I using my tools correctly?
Does this user know about forensics and cleaning up?
Did this user really clean up that well?
How can I call myself a forensic examiner?
Should I have eaten that entire tub of ice cream with gummy bear and potato chip toppings while pulling my hair out and questioning my mere existence?

Building Myself

I tend to form an attachment to my cases. Maybe you do too? I take it on as a personal challenge. A challenge to prove that I am capable. A challenge to learn something new. A challenge to use a new tool, or an old tool in a new way. A challenge to take a step towards being a better investigator.
When I run into these cases that present no artifacts of activity, I end up spending a ton more time on them. I try to find something in the obscure areas. Sometimes it works and my perseverance pays off. I finally find that one little hint, and it unravels the rest of my case. I put that hint into my bank of things to look for in the future, and I have just improved my skills as an investigator.
Other times, I have to give in and say “uncle”. Because of the attachment I form with the case, it can sometimes drag me down. It feels like a failure to me. I should be able to find something, even the tiniest thing, which can give some kind of hint. But when have I spent enough time on it?

Evidence of Logic

There is a phrase that has been passed around the DFIR circles for many years. As much as it may seem, it didn’t start in our industry. It is a logic based thought and is discussed in probability as a way of forming a hypothesis. Fortunately, it fits for us too, so here it is:
The absence of evidence is not evidence of absence
In the original context, the usage of the word evidence is referring to an event. We base our investigations on the existence of an artifact on disk as a result of a user action (event). If artifact X exists in this data, then event Y must have taken place on this computer. It is a very sound approach, and one that many investigators use, digital or not. The finding of that artifact allows us to build a case with a reasonable certainty about the actions taken on this computer. We can’t always extend that onto the person on the opposite side of the table from us, but that is a whole ‘nother topic.
If we scour the drive and find that it is lacking evidence of artifact X, does that mean that event Y did not take place on this computer? Of course not! Because of the intimate knowledge we, as forensic experts, have about the programs and systems of computers, we could perform any number of actions and wipe the disk of any artifacts, if given enough time. In that scenario, event Y did happen. We know because we carried out the actions ourselves. Looking back at our phrase, we find it to ring pretty true.
This is what makes our investigations so difficult. Science is able to say things so simply: if x then y; if !y then !x. We cannot use this logic structure because there are too many variables at play. The courts accept our tools and findings based on them being a scientific process, but some argue that digital forensics is not a science while others say it is a combination of art and science. No matter what your opinion is, we can agree on the phrase I highlighted above. Just because I don't find a deleted file, it doesn't mean that file didn't exist.

Going the Distance

How far, then, do you go? How much time do you spend?
Some of you may have the luxury of spending as much time on a case as you need in order to be satisfied with the conclusion. Most of you, however, have someone asking when you will have this case wrapped up.
I used to have more freedom in the time I spent on my cases. It’s not my decision anymore. I have someone paying for every hour I put into my cases, and that can get expensive. I absolutely am doing my best to find even the tiniest little artifact and I am determined to break the case open. My investigation is a direct cost, though. They will get an invoice. The invoice will get passed around to various departments as it gets processed. My forensic report will get passed around various departments as well. Some of those departments will be analyzing the cost of getting this report. If my report cost them $50,000 and simply says ‘no findings’, you can bet there will be some unhappy people.
So, how much time do you spend?

Releasing the Burden

My approach to this, in both past and present, is to take the burden of cost off of my shoulders. When a case is starting to head towards the dreaded ‘no findings’, I start preparing myself for it. In preparing, I start documenting my actions instead of just my findings. I prepare to deliver the bad news of ‘no findings’ to the person requesting the case.
When I get to a point of being comfortable with having exhausted all reasonable artifacts, I present my work to the customer. I take a different approach because I don’t have a list of artifacts proving the case. I explain that I don’t have any findings yet. Then I spend some time in education with the customer. I explain the standard processes that we use in forensics. I explain the approach that I have taken and the reasons behind it. I want to make sure the customer understands that they are paying for the work I am doing and not just the result. I want them to feel good about spending the money, and that they are also getting a very thorough review of the evidence.
Then, I give the options. I explain the techniques that I feel might give results, and I am honest about my expectations. I don’t ever want a customer to come back to me, unhappy, because I talked them into paying for a technique that didn’t pan out. This usually results in the customer having an internal chat about moving forward. Sometimes they really want that forensic answer, but other times they are satisfied with knowing the proposed scenario of the case was highly improbable based on the lack of evidence to support that the actions were performed. It’s a balance of cost vs benefit.

Letting Go

This is the hardest part for me. If the customer decides that the ‘no findings’ are enough, then I have to move on. There are more cases lined up and waiting for my time. I want resolution in every case, but it just isn’t reasonable. I can only do my best with the time that I am allowed.
Finding no evidence of a proposed action does not make you a substandard examiner. If you can stand up proud with your report in your hand, then you have done all you can. If you can defend your findings to other examiners who constructively ask if you tried technique A or technique B, then you have proven your skills.
Be proud of your work!
James

Crashing into a Hint

I haven’t spent much time looking at crash dumps, but I ran into one recently. I had a case that presented me with little in the way of sources for artifacts. It forced me to get creative, and I ended up finding a user-mode crash dump that gave me the extra information I needed. It wasn’t anything earth-shattering, but it could be helpful for some of you in the future.

The case was a crypto ransomware infection. Anti-virus scans had already wiped out most of the infection, but the question was how it got onto the box in the first place. I determined to a reasonable certainty that a malicious advertisement exploited an out of date flash plugin. The crash dump gave me the URL that loaded the flash exploit. Let’s get the basics out of the way first.

Crash Dump Files

There are several different types of user-mode crash dumps that can be created. The default, starting in Windows Vista, is to create a mini dump, but true to Windows, you can change those defaults in the registry.
The default location drops them in the user profile folder for all applications. The path looks something like this:
C:\Users\[user]\AppData\Local\CrashDumps
The filename convention is pretty simple. It starts with the name of the process. Then includes the process identifier (PID). Last is the extension of dmp. All together, it looks like this:

process.exe.3456.dmp

Forcing a Crash Dump

You can go look in that directory on your computer, and very likely find a few of those crash dumps hanging out. Any process crash will cause a crash dump file to be created, so these files can be a great sign of user activity on the box in the absence of other indicators.
If you see no files in that folder, or an absence of that folder altogether, RUN! Go get yourself a lottery ticket! It just means you haven’t used your computer for more than reading this blog post. Don’t worry though because Sysinternals has another useful tool that you can employ for giving you sample crashes.
ProcDump is a command line tool that allows you to customize the dumps and set up all kinds of triggers for when to create them. That is great functionality if you are a developer tracking down a pesky bug in code, but it doesn't do much for us in forensics. The best part about this tool is that it creates the crash dumps without having to actually crash the program.
Here are the commands I used while researching this further and dropping some dummy data to use in screen shots. The number is the PID of the process that I wanted. You can use the name, but you have to make sure there is only one process with that name or it bails out. I used the PID because I was going against Internet Explorer and even a single tab creates at least two processes.
Procdump 1234
Procdump -ma 1234
The -ma switch creates a crash dump with the full process memory. Otherwise, the only difference in the dump file when using ProcDump is that the naming convention uses the current date and time instead of PID.

Analyzing the Dump File

The dump file in my case originally stuck out to me because of the timeline I built. I had a pretty specific time based on a couple other artifacts, so I crashed into it by working backwards in time. Exploit code can often crash a process when encountering unexpected versions or configurations. The flash player was probably older than expected by the code.
Fact #1: IE crashed during relevant time window.
I started by looking around for tools to analyze these dump files. I tried IDA Pro and WinDbg, but decided they weren't giving me any useful information. I ended up going old school forensics on it.
I have some screen shots to show you the goods, but of course the names have been changed to protect the innocent. Here we go with sample data. I search for ‘SWF’ and locate several hits. One type of hit is on some properties of some sort.
Another type of hit is a structure that contains a URL of a SWF file that was loaded. Alongside that URL is another for the page that loaded the SWF URL.
At this point in my case, I took all of the URLs that I found associated with these SWF hits, and I loaded them up in VirusTotal for evaluation. Sure enough, I had one that was caught by Google. The others showed it as being clear, but this was only a few days after the infection. I checked it again a couple days later and more were detecting it as malicious.
Fact #2: Internet Explorer loaded a SWF file from a URL that has been detected as malicious.

Dump Containers

The default dump type is mini dump, which contains heap and stack data. These files contain parts of the process memory, obviously, so maybe they contain actual flash files? I wanted to find that out to confirm the infection.
The CPU does not have direct access to hard disk data, so it relies on the memory manager to map those files into memory. This means that, most of the time, files sit in memory just like they do on disk. Here is the spec for SWF files (PDF). With this information I can prepare a search for the header. I will print a snippet of the spec here for reference.
Field        Type   Comment
Signature    UI8    Signature byte: "F" indicates uncompressed, "C" indicates a zlib compressed SWF (SWF 6 and later only), "Z" indicates an LZMA compressed SWF (SWF 13 and later only)
Signature    UI8    Signature byte; always "W"
Signature    UI8    Signature byte; always "S"
Version      UI8    Single byte file version (for example, 0x06 for SWF 6)
FileLength   UI32   Length of entire file in bytes
I compare that with several SWF files that I have available, and find them to all use CWS as the header.
This means they are all compressed, and rightfully so since they are being sent over the internet. In the dump file, however, I come up with zero hits for CWS. I do locate 3 hits for FWS in the dump file.
Aha! Nailed this sucker! Now I just need to determine which of these contains the exploit code.

Carving SWF Files

Now I want to get these SWF files out of here. There is a small potential for files to be fragmented in memory, but the chances are generally pretty slim for small files like this. I start working on the headers that I have located. Fortunately, the F in FWS means that it is uncompressed. This means that the value for the file size field will be accurate since it always shows the size of the uncompressed version of the file.
The size field tells the size of the file from the start of the header. So I start my selection at the FWS and go for a length of 6919 in this file pictured.
As a confirmation of my file likely being in one piece, I see the FWS header for another SWF file following immediately after my selection. After selecting this file, I export it. I can now hash it and take the easy way out.
Now the easy way.
Dang! I do the same process for the other 2 hits, and I get the same result for all 3 of these. I have 3 SWF files loaded in my crash dump, but none of them are malicious. My curiosity is now getting the better of me, and I need to know more!
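Carving each FWS hit by hand is fine for three hits, but the same carve is easy to script when a dump has more. A minimal sketch, assuming the uncompressed FWS case where the FileLength field right after the signature and version is a little-endian UI32 covering the whole file; the function and output names are made up for illustration.

```python
# Sketch: carve uncompressed (FWS) SWF files from a crash dump or other blob.
# For FWS files, the little-endian UI32 at offset 4 gives the full file length.
import struct

def carve_fws(path, out_prefix='carved'):
    with open(path, 'rb') as fh:
        data = fh.read()
    start = 0
    count = 0
    while True:
        hit = data.find(b'FWS', start)
        if hit == -1 or hit + 8 > len(data):
            break
        length = struct.unpack_from('<I', data, hit + 4)[0]
        if 8 <= length <= len(data) - hit:        # sanity-check the length field
            with open('%s_%d.swf' % (out_prefix, count), 'wb') as out:
                out.write(data[hit:hit + length])
            count += 1
        start = hit + 3
    return count

# Usage: carve_fws('iexplore.exe.3456.dmp')
```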
I looked on a different computer and found a crash dump from Chrome. This computer even has a newer flash plugin installed than my sacrificial VM. Now I perform the same search on this crash dump, and I find an interesting result.
The Chrome dump:
The Internet Explorer Dump:
The hits have the same version numbers and file sizes. I did some poking around inside the SWF file, and I found these to be components built into the flash player. You can see one of them in this image.
If I search one of the dumps that I made using the -ma switch of ProcDump, then I do find the exploit code. It is hidden among many other hits. There are some funny looking strings at the top of this SWF file, but I will get to that in just a bit.

Collecting the Sample

My favorite tool for collecting web based samples is HTTrack. It has a ton of configuration options including the ability to use a custom User Agent string and to route through a proxy server, such as TOR. It is a little easier to use than wget or curl, and it easily gives me the full context of the pages surrounding the sample.
Since I found the malicious URL in the IE crash dump, I can use that to pull down the sample for further analysis.

Analyzing the Sample

Once I have the sample collected, I can now pull it apart to see what’s going on. I won’t claim to be anything other than a novice when it comes to SWF files, but there are similar behaviors used across all sorts of malicious code that can stick out.
I use a couple of tools for breaking down SWF files: SWFTools and FFDec. I am not going into a ton of detail because there are many others that already cover this topic much better.
With this exploit, it is pretty obvious from just a glance that it is not a normal SWF. Take a look at this image and tell me this is normal.
A class named §1I1l11I1I1I1II1IlllIl1§, another named IIl11IIIlllIl1? Can you tell the difference between |, 1, I, l? That’s the idea!
Then there are the strings from the header we saw earlier. They look like this in the code.

Easy Analysis

Now I use the easy button to confirm what I already suspect. I send the hash to VirusTotal and find that it has a pretty strong indication.
I already know the end goal of this sample, so I don’t want to waste time on this. You can read about my new perspective in this earlier post.
Fact #3: SWF sample contains obfuscated code and has a VT score of 17/57

The Last Piece

I need one more piece to complete the findings. Knowing how these files typically get onto disk is our job. If it was seen on the screen, then it had to have been in memory at some point in some form. If it was in memory for some time, then there is a fair chance that it has been paged out into the swap. If it came from the internet, there is a strong chance that it touched the temporary cache, even if just for a second. If it touched the cache and was cleared out, then there is a reasonable chance of pieces still sitting in unallocated clusters. Follow the logic?
I prepare a search by grabbing the first 8 bytes of the sample.
We saw earlier that SWF files can be either compressed or uncompressed, with the difference indicated by the first character. I don't know if this copy of the SWF file will be compressed, so my keyword accounts for that by not forcing the C that the sample has. Otherwise, this has the SWF signature, the file version, and the file size. It is no SHA256, but it's unique enough for some confirmation to be made.
? 57 53 0D 54 D4 00 00
Sure enough! The exploit SWF is located in pagefile.sys with this keyword. You can see those same broken strings and I|l1I names in there. Unfortunately, the nature of pagefile operation doesn’t always place file pieces together, nor does it even hold onto the entire copy of the file.
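If you want to run that keyword outside of your forensic suite, here is a sketch using a regular expression over a memory-mapped pagefile. The version and length bytes are specific to my sample, so treat them as placeholders for your own.

```python
# Sketch: search pagefile.sys for the SWF keyword above. [FCZ] covers the
# compressed/uncompressed first byte; the version and length bytes
# (0D 54 D4 00 00) are specific to this sample and must change for your case.
import mmap
import re

signature = re.compile(br'[FCZ]WS\x0D\x54\xD4\x00\x00')

with open('pagefile.sys', 'rb') as fh:
    mm = mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ)
    try:
        for match in signature.finditer(mm):
            print('hit at offset 0x%X' % match.start())
    finally:
        mm.close()
```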
Fact #4: Exploit SWF code found in pagefile.sys

Putting It All Together

I walked you through some of the basic steps I used to discover the infection point. I did this without unallocated clusters, and with no browser history or cache. I put this together to challenge you.
Use your knowledge and be creative! You have a unique understanding of how many components of operating systems and applications work. Don’t get discouraged if your forensic tools don’t automagically dime out the culprit.
Here is a summary of what we found here:
Fact #1: IE crashed during relevant time window.
Fact #2: Internet Explorer loaded a SWF file from a URL that has been detected as malicious.
Fact #3: SWF sample contains obfuscated code and has a VT score of 17/57
Fact #4: Exploit SWF code found in pagefile.sys

What do you think? Enough to satisfy?

Here is the sample SWF I used for this walk through.

Be creative in your cases!
James