Report Rapport

Let me just state this right at the top. You need to be writing reports. I don’t care what type of investigation you are doing or what the findings are. You need to be writing reports.

There are plenty of reasons that your management will tell you about why you have to write a report. There are even more reasons for you to write these reports, for your own benefit. Here is a quick list of a few that I thought of, and I will discuss a bit about each in sections below.

  • Documenting your findings
  • Justification of your time
  • CYA
  • Detail the thoroughness of your work
  • Show history of specific user or group
  • Justification for shiny tools
  • Measure personal growth

Documenting Your Findings

Your boss will share the recommendation for this because it’s a pretty solid one. You need to document what you have found. As DFIR investigators, security specialists, infosec analysts, etc., we are more technical in nature than the average computer user. We know the innermost workings of these computers, and oftentimes how to exploit them in ways they weren’t designed for. We dig through systems on an intimate level, and with this knowledge we can make the incorrect assumption that others understand even the most basic of these things.

Take a Word document as an example. A current generation Word document has an extension of ‘docx’ when saved to disk. So many things fly through my mind when I see those letters. I know, because of the ‘x’, that it is a current generation document. The current generation uses the PK ZIP file format. It contains metadata, in the form of XML. It has document data, also in the form of XML. It can have attachments, and those are always placed in a specific directory. I know you can keep going too. How many of your executives know this?
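
You can see this for yourself with nothing more than the standard library. Here is a minimal sketch; ‘report.docx’ is just a placeholder name:

# Peek inside a .docx to show it is really a PK ZIP container full of XML.
# 'report.docx' is a placeholder path for illustration.
import zipfile

with zipfile.ZipFile('report.docx') as doc:
    for name in doc.namelist():
        print(name)  # e.g. [Content_Types].xml, word/document.xml, docProps/core.xml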

The people making decisions to investigate incidents and pay your salary do not need to know these things, but they do need to understand them in the context of your investigation. Document the details like your job depends on it. Use pictures and screenshots if you have to, since that helps display data in a friendlier way to less technical people. Go to town with it and be proud of what you discovered. The next time you have a similar case, you will have this as a reference to help spur thoughts and ensure completeness.

Justification of your time

We are a bunch of professionals that get paid very well, and we work hard for it. How many times in the last month have you thought or said to yourself that you do not have enough time in the day to complete all the work that is being placed in your queue?

When you report on your work, you are providing documentation of that work. The pile of hard drives on your desk makes it seem to others that you can’t keep up. That could mean that they are asking too much of you, or it could mean that you aren’t capable enough. You don’t want to leave that kind of question in the minds of your management. Write the reports to show the time you are spending. Show them how much work is required for a ‘quick check into this ransomware email’ and that it isn’t actually just a quick check. If you do this right, you might just find yourself with a new partner to help ease that workload.

CYA

People like to place blame on others to make sure they themselves are in the clear. Your reports should document the facts as they are laid out, and let them speak for themselves. You should include information about when data was requested and when it was collected. Document the state of the data and what was needed to make it usable, if that was required. Track information about your security devices and how they detected or didn’t detect pieces of the threat. You should be serving as a neutral party in the investigation to find the answers, not place the blame.

Detail the thoroughness of your work

So many investigations are opened with a broad objective: find the malware. Depending on the system and other security devices, it could be as easy as running an AV scan on the disk. Most times, in my experience at least, this is going to come up clean since the malware didn’t get detected in the first place anyway.

You are an expert. Show it in your reports. Give those gritty details that you love to dig into, and not just those about what you found. The findings are important, but you should also document the things you did that resulted in no findings. You spend a lot of time on this work, and some people don’t understand what’s required beyond an AV scan.

Show history of specific user or group

If you are an investigator working for a company, you are guaranteed to find those users that always get infected. They are frustrating because they cause more work for you, and the infections are usually some little Potentially Unwanted Program (PUP) or ransomware. They are the type of person that falls for everything, and you have probably thought or said some things about them that don’t need to be repeated.

Document your investigations, and you will be able to show that Thurston Howell III has a pattern of clicking on things he shouldn’t. Don’t target these people though. Document everything. As a proactive measure, you could start building a report summarizing your reports. Similar to the industry reports about attack trends, you can show internal trends and patterns that indicate things like a training program is needed to keep users from clicking on those dang links. This can also support justification to restrict permissions for higher risk people and groups, and now you have data to back up that risk rating. There are loads of data at your disposal, and how effectively you use it is limited only by your imagination.

Justification for shiny tools

Have you asked for a new security tool and been turned down because it costs too much? What if you could provide facts showing that it is actually costing more to NOT have this tool?

Your reports provide documentation of facts and time. You can use these to easily put together a cost analysis. Do the math on the number of investigations related to this tool and the hours involved in those investigations by everyone, not just you. You will have to put together a little extra to show how much time the new fanciness will save, but you have already done the hard part by writing the reports.
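
As a hedged illustration of that math, here is a tiny sketch; every number in it is made up and should be replaced with figures pulled from your own reports:

# Hypothetical cost comparison; all numbers are illustrative placeholders.
investigations_per_year = 40         # count them from your reports
hours_per_investigation = 6.0        # average the time documented in your reports
hourly_cost = 85.0                   # loaded cost per analyst hour
hours_saved_per_investigation = 2.0  # your estimate of what the new tool saves
tool_cost_per_year = 5000.0          # vendor quote

time_cost = investigations_per_year * hours_per_investigation * hourly_cost
savings = investigations_per_year * hours_saved_per_investigation * hourly_cost
print('Analyst time spent on these cases per year: $%.2f' % time_cost)
print('Estimated savings with the tool per year:   $%.2f' % savings)
print('Net benefit after the tool cost:            $%.2f' % (savings - tool_cost_per_year))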

Measure personal growth

This one is completely about you. We all grow as people, and we change the way we write and think. We do this because of our experiences, and our understanding that we can evolve to be better. Do you write like you did in 1st grade? Hope not! How about 12th grade? Unless you are a freshman in college, you have probably improved from there also.

When you write reports, you give yourself the ability to measure your growth. This can be very motivating, but it takes personal drive. If you have any reports from even just 6 months ago, go back and read them. You might even ask yourself who actually wrote that report, and I don’t think that’s a bad thing!

Final report

Reports can be a rather tedious part of our job, but if you embrace the personal benefits it can really become a fun part. Take pride in your investigation and display that in your reports. It will show. It works much like smiling when you talk on the phone. People can tell the difference.

If you are writing reports today, good for you! Push yourself further and make it fun.

If you are not writing reports today, DO IT!

I am starting a mini series of posts on reporting. Future posts will be on structure and various sections of an investigative report. These are all my experiences and opinions, and I welcome your comments as well. Let’s all improve our reports together!

James Habben

MatchMeta.Info

Filenames are trivial to change, but it is still important to know which ones are common during your investigation. You can’t remember every filename; there are already twenty-four million plus in the NSRL data set alone. MatchMeta.Info is my way of automating these comparisons into the analysis process. Not all investigators have Internet access on their lab machines, so I wanted to share the steps to build your own internal site.
Server Specifications
Twisted Python Installation
I prefer using Ubuntu, but feel free to use whichever operating system you are most comfortable using. The installation process has become very simple!!
apt-get install python-dev python-pip
pip install service_identity twisted
Twisted Python Validation
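A quick way to validate the installation is to import Twisted and print its version:
# Confirm Twisted is importable and show the installed version.
import twisted
print(twisted.version)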

NSRL Filenames
I download the NSRL data set directly from NIST, then parse out the filenames with a Python script that I have hosted on the GitHub project site.
Or feel free to download the already precompiled list of filenames that I have posted here. 
meow://storage.bhs1.cloud.ovh.net/v1/AUTH_bfbb205b09774544bb79dd7bf8c3a1d8/MatchMetaInfo/nsrl251.txt.zip
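If you are curious what that parsing boils down to, here is a hedged sketch (not the GitHub script itself); it assumes the published NSRLFile.txt column layout with the filename in the fourth field:
# Rough sketch: pull unique filenames out of NSRLFile.txt into a flat file.
# Assumes the standard NSRL CSV columns: SHA-1, MD5, CRC32, FileName, ...
import csv
filenames = set()
with open('NSRLFile.txt', 'rb') as nsrl:
    reader = csv.reader(nsrl)
    next(reader)  # skip the header row
    for row in reader:
        if len(row) > 3:
            filenames.add(row[3])
with open('nsrl251.txt', 'w') as out:
    out.write('\n'.join(sorted(filenames)))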
MatchMeta.Info Setup
First create a folder that will contain the mmi.py file from the GitHub site and the uncompressed nsrl251.txt file from the previous section. For example, a www folder can be created in the /opt directory for these files.
/opt/www/mmi.py
/opt/www/nsrl251.txt
Second make the two files read only to limit permissions.
chmod 400 mmi.py nsrl251.txt
Third make the two files owned by the webserver user and group.
chown www-data:www-data mmi.py nsrl251.txt
Fourth make only the www folder capable of executing the Twisted Python script.
chmod 500 www
Fifth make the www folder owned by the webserver user and group.
chown www-data:www-data www
MatchMeta.Info Service
Upstart on Ubuntu will allow the Twisted Python script to be run as a service by creating the /etc/init/mmi.conf file. Paste these commands into the newly created file. It is critical to use exact absolute paths in the mmi.py and mmi.conf files or the service will not start.
start on runlevel [2345]
stop on runlevel [016]

setuid www-data
setgid www-data

exec /usr/bin/python /opt/www/mmi.py
respawn
MatchMeta.Info Port Forwarding
Port 80 is privileged and we don’t want to run the service as root, so port forwarding can be used. This allows us to run the Python service as the www-data user by appending the following to the bottom of the /etc/ufw/before.rules file.
*nat
-F
:PREROUTING ACCEPT [0:0]
-A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
COMMIT
Thanks to @awhitehatter  for the tip on their GitHub site.
Configure Firewall
Please set up the firewall rules to meet your environment’s requirements. Ports 80 and 8080 are currently set up to be used for the MatchMeta.Info service. Don’t forget SSH for system access.
ufw allow 80/tcp
ufw allow 8080/tcp
ufw allow ssh
ufw enable
MatchMeta.Info Validation
Finally, all set to start the MatchMeta.Info Service!!
start mmi
Browsing to these sites should return the word OK on the website.
Browsing to these sites should return the phrase NA on the website.       
I plan to keep moving MatchMeta.Info features from the command line version into the web interface in the future. A morph for James Habben’s Evolve project, a web interface for Volatility, has already been submitted to incorporate this analysis process.
John Lukach
@jblukach

Building Python Packages, By a Novice

I am excited to see that Evolve has been getting some use by more and more people. It has gained enough traction to even get the attention of SANS. They want to include Evolve in their SIFT workstation build. This is by no means an endorsement by SANS, but it means a lot to an open source developer to know that their tools are being used and helpful.

The requirement for Evolve making it into SIFT is that it needs to be installed from the Python Package Index (PyPI). This is a very reasonable requirement since it makes the maintenance of SIFT a much more manageable project. A project, may I add, that is also available for free and maintained in the free time of volunteers. Thanks!

I started reading about Python packages and distribution, and found that the system is very capable and very flexible, almost too much. It has been a challenge for me to squeeze in the reading and testing between the demands of my full-time job, but I finally stumbled my way through it to a final version. There are many tutorials available that explain the basics of Python packages, but I had a slightly different problem than most Python projects face, I guess. I wanted to write this all down to share my experience for any of you that might benefit.

El Problemo

First of all, I am not a full time developer, nor am I a master of Python. I think this may be the root of my problem! I started Python, like many others, by assembling pieces of others’ scripts into a solution to the problem I was facing at the time. Because of my background with so many other languages, it was a fairly short phase of learning the environment and intricacies of Python before I was able to start from scratch and StackOverflow my way through. I took on Evolve as a way to expand further, and to solve another problem. That is the mother of invention, after all. It has been a fun and rewarding experience, with many thanks to all of you supporting it!

Put me aside now, and let’s talk about the technical problem with packaging Evolve. A typical package available from PyPI is Python code. You get a little more exotic when you find the author including some code written in C or C++ to make for a more efficient function. This code requires compiling, but PyPI can handle it, and does it well (except on Windows). Where Evolve presented a problem is in the HTML, CSS, JS, and images used in the web interface. These aren’t considered code by the Python packager, so it was a challenge for me to get them included.

Basic Building

I don’t want to rehash the basic build process since there are already many very well written tutorials out there to explain that. Instead, I will include a short list of some links that were helpful during my journey here.

Creating the setup.py file to start it all off

https://docs.python.org/2/distutils/setupscript.html
This tutorial was very helpful in building the basics of the setup.py file. It explains most of the properties well. The part that I found lacking was in the sections for including extra non-python code in the package.

The trouble with including non-python files

http://blog.codekills.net/2011/07/15/lies,-more-lies-and-python-packaging-documentation-on--package_data-/

C/O
http://stackoverflow.com/questions/7522250/how-to-include-package-data-with-setuptools-distribute

This was a great help in truly understanding what I thought I understood after reading the Python docs. It also helped keep me sane and moving forward!

Including some other files

http://stackoverflow.com/questions/9654694/where-are-package-data-files
This helped me somewhat. I was able to get the folder of HTML files included, but it was a rather manual process. I knew I could fall back on this, but I figured that there must be a better way. We are doing programmer things, after all.

More on including other files

https://wiki.python.org/moin/Distutils/Tutorial
Another good tutorial on the packaging process, but it didn’t fully register with what I needed.

Yet another on the process of building

https://www.digitalocean.com/community/tutorials/how-to-package-and-distribute-python-applications

This is the article that finally made things fit together. In fairness to the others, I think it just took some time to sink in.

Highlighted Points

Here are some points I thought I would share. Some of these may be obvious to you already, but I had trouble getting a full grasp of the exact requirements. These are not listed in any particular order of importance.

sdist and bdist use different properties

As stated in one of the articles, there are different ways to package your project. You can distribute the source code with sdist, or you can build it into a binary with bdist. Each of these methods uses different properties from inside the setup.py file. Be aware of which method you are using to distribute, and which properties are associated with each.

__init__.py is a critical file, even if it’s blank

In building the Morph feature of Evolve, I found that I had to have an __init__.py file in the morphs directory for Python to function properly. You can look at both of these files that are in the project (project root and the morphs folder), and you will see they are both essentially blank. There is a purpose behind having them, and they can provide more functionality when you place code inside. I don’t need that functionality, though, so they remain empty. In the process of making this package, I found that the __init__.py file is also needed for the build process to recognize that the folder contains Python code that should be included in the package.

I am using classes in Evolve, but only for the Morphs. It’s something I want to address in the future, but moving the main code into classes will require time in refactoring and testing that I just don’t have quite yet.

MANIFEST is not MANIFEST.in

Bonehead move on my part, but this was the final blocker to getting this venture to work. I read through many articles talking about modifying the MANIFEST.in file to search for other files to include. I made the bad assumption that MANIFEST was the file being specified. WRONG. The MANIFEST.in file is a template that is used during the build process. The MANIFEST file is written by the build process, as a log of the files included in the distribution package. To make me look even dumber, the first line in the file says ‘# file GENERATED by distutils, do NOT edit’…

Use distutils instead of setuptools

There are a few shortcomings in setuptools that I read about at the start of this project that were addressed in distutils. In typical Python and open source fashion, a problematic library was fixed, but moved to be named differently. Rants aside, just use the newer distutils and things will be much smoother.
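
To make those points concrete, here is a hedged sketch of the kind of setup.py this all adds up to. The package names and asset paths are illustrative only, not Evolve’s actual layout:

# Hypothetical setup.py sketch; package names and asset paths are placeholders.
from distutils.core import setup

setup(
    name='evolve',
    version='1.0',
    packages=['evolve', 'evolve.morphs'],  # each folder needs an __init__.py
    # Non-Python files (HTML, CSS, JS, images) must be listed explicitly;
    # bdist reads package_data, while sdist wants the same paths in MANIFEST.in.
    package_data={
        'evolve': ['web/*.html', 'web/css/*.css', 'web/js/*.js', 'web/img/*'],
    },
)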

That’s It

Again, enough tutorials out there already to explain this process. They do a pretty good job, but I ran into troubles including the extra files in Evolve. I hope this helps one of you to not pull your hair out when trying to build your Python package. Share any other tips you have below in the comments!

James Habben
@JamesHabben

Block Building Checklist

It is important to understand how the artifacts you use during an investigation are created. Thus I wanted to provide my block building checklist to help others recreate the process. I will walk through the commands used to prepare the blocks for distribution and how to build the block libraries with the removal of a whitelist.
Block Preparation
I have used Windows, Linux and Mac OS X over the course of this project. I recommend using the operating system that you are most comfortable with for downloading and unpacking the VirusShare.com torrents. The best performance will come from using solid state drives during the block building steps. The more available memory during whitelisting the better. Far fewer system resources are necessary when just doing hash searches and comparisons during block hunting.
We saw this command previously in the Block Hunting post, now with a new option. The -x option disables parsers so that bulk_extractor only generates the block sector hashes, reducing the generation time.
bulk_extractor -x accts -x aes -x base64 -x elf -x email -x exif -x find -x gps -x gzip -x hiberfile -x httplogs -x json -x kml -x msxml -x net -x pdf -x rar -x sqlite -x vcard -x windirs -x winlnk -x winpe -x winprefetch -x zip -e hashdb -o VxShare199_Out -S hashdb_mode=import -S hashdb_import_repository_name=VxShare199 -S hashdb_block_size=512 -S hashdb_import_sector_size=512 -R VirusShare_00199
The following steps help with the reduction of disk storage requirements and reporting cleanliness for the sector block hash database. It is also a similar process for migrating from hashdb version one to two. One improvement that I need to make is to use the JSON support released at OSDFCon 2015 by Bruce Allen instead of DFXML.
We need to export the sector block hashes out of the database so that the suggested modifications can be made to the flat file output.
hashdb export VxShare199_Out/hashdb.hdb VxShare199.out
  • hashdb – executed application
  • export – export sector block hashes as a dfxml file
  • VxShare199_Out/ – relative folder path to the hashdb
  • hashdb.hdb – default hashdb name created by bulk_extractor
  • VxShare199.out – flat file output in dfxml format
Copy the first two lines of the VxShare199.out file into a new VxShare199.tmp flat file.
head -n 2 VxShare199.out > VxShare199.tmp
Start copying the contents of the VxShare199.out file at line twenty-two, appending them to the existing VxShare199.tmp file. This removes the metadata lines near the top of the export. The line count may vary depending on the operating system or the version of bulk_extractor and hashdb installed.
tail -n +22 VxShare199.out >> VxShare199.tmp
The sed command will read the VxShare199.tmp file, then remove the path and the beginning of the file name before writing into the new VxShare199.dfxml file.
sed 's/VirusShare_00199\/VirusShare\_//g' VxShare199.tmp > VxShare199.dfxml
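If you prefer to do the cleanup in one step, here is a hedged Python sketch that mirrors the head, tail and sed commands above; the line offsets and path prefix are assumptions that may need adjusting for your bulk_extractor and hashdb versions.
# Hypothetical helper mirroring the head/tail/sed steps above.
import re
with open('VxShare199.out') as src:
    lines = src.readlines()
kept = lines[:2] + lines[21:]  # keep the first two lines, skip the metadata block
cleaned = [re.sub(r'VirusShare_00199/VirusShare_', '', line) for line in kept]
with open('VxShare199.dfxml', 'w') as dst:
    dst.writelines(cleaned)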
Create an empty hashdb with the sector size of 512 using the -p option. The default size is 4096 if no option is provided.
hashdb create -p 512 VxShare199
Import the processed VxShare199.dfxml file into the newly created VxShare199 hashdb database.
hashdb import VxShare199 VxShare199.dfxml
I compress and upload the hashdb database for distribution, saving these steps for everyone.
Building Block Libraries
The previously generated hashdb databases can be found at the following link.
Create an empty hashdb called FileBlock.VxShare for the VirusShare.com collection.
hashdb create -p 512 FileBlock.VxShare
Add the VxShare199 database to the FileBlock.VxShare database.  This step will need to be repeated for each database. Upkeep is easier when you keep the completely built FileBlock.VxShare database for ongoing additions of new sector hashes.
hashdb add VxShare199 FileBlock.VxShare
Download the sector hashes of the NSRL from the following link.
Create an empty hashdb called FileBlock.NSRL for the NSRL collection.
hashdb create -p 512 FileBlock.NSRL
The NSRL block hashes are stored in a tab-delimited flat file format. The import_tab option is used to import each of the files, which are split by the first character of the hash value, 0-9 and A-F, as in the loop sketched below. I also keep a copy of the built FileBlock.NSRL for future updates too.
hashdb import_tab FileBlock.NSRL MD5B512_0.tab
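A small, hedged sketch of that loop, assuming the MD5B512_<0-9A-F>.tab naming shown above:
# Import each of the sixteen split NSRL tab files into FileBlock.NSRL.
import subprocess
for prefix in '0123456789ABCDEF':
    subprocess.check_call(['hashdb', 'import_tab', 'FileBlock.NSRL',
                           'MD5B512_%s.tab' % prefix])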
Remove NSRL Blocks
Create an empty hashdb called FileBlock.Info for the removal of the whitelist.
hashdb create -p 512 FileBlock.Info
This command will remove the NSRL sector hashes from the VirusShare.com collection creating the final FileBlock.Info database for block hunting.
hashdb subtract FileBlock.VxShare FileBlock.NSRL FileBlock.Info
The initial build is machine-time intensive, but once done the maintenance is a walk in the park.
Happy Block Hunting!!
John Lukach

Critical Stack Intel Feed Consumption

Critical Stack provides a free threat intelligence aggregation feed through their Intel Market for consumption by the Bro network security monitoring platform. This is a fantastic service that is provided for free!! Special thanks to those who have contributed their feeds for all to take advantage of the benefits!! Installation is beyond the scope of this post as it is super easy with decent documentation available on their website. The feed updates run roughly hourly by default into a tab-delimited file available on disk.

My goal was to make the IP address, domain and hash values accessible through a web interface for consumption by other tools in your security stack. Additionally, I didn’t want to create another database structure, but rather read the values into memory for comparison on script restarts. I decided to use Twisted Python by Twisted Matrix Labs to create the web server. Twisted is an event-driven networking engine written in Python. The script provides a basic foundation without entering into the format debate between STIX and JSON. Kept it simple…

Twisted Python Installation

The following installation steps work on Ubuntu 14.04 as that is my preference.

apt-get install build-essential python-setuptools python-dev python-pip

pip install service_identity

wget https://pypi.python.org/packages/source/T/Twisted/Twisted-15.5.0.tar.bz2

bzip2 -d Twisted-15.5.0.tar.bz2

tar -xvf Twisted-15.5.0.tar

cd Twisted-15.5.0/

python setup.py install

The PIP package installation allows for the future usage of SSL and SSH capabilities in Twisted.

TwistedIntel.py Script

The TwistedIntel.py script comes down to a few pieces you may want to adjust:

  • The default installation file and path containing the Critical Stack Intel Feed artifacts.
  • The field separator on each line that gets loaded into the Python list in memory.
  • The output that gets displayed on the dynamically generated web page based on user input.
  • The port that the web server runs on for the end-user to access the web page.
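
The gist linked below has the actual script; as a rough idea of its shape, here is a hedged sketch. The feed path, port, and the response for a miss are assumptions, not necessarily what TwistedIntel.py uses:

# Hedged sketch of a Twisted lookup service over the Critical Stack feed.
# The feed path and port below are assumptions; adjust for your install.
from twisted.web import resource, server
from twisted.internet import reactor

FEED = '/opt/critical-stack/frameworks/intel/master-public.bro.dat'

def load_indicators(path):
    indicators = set()
    with open(path) as feed:
        for line in feed:
            if line.startswith('#'):
                continue  # skip the Bro intel header line
            fields = line.rstrip('\n').split('\t')  # tab-delimited feed
            if fields and fields[0]:
                indicators.add(fields[0])
    return indicators

class IntelLookup(resource.Resource):
    isLeaf = True

    def __init__(self, indicators):
        resource.Resource.__init__(self)
        self.indicators = indicators

    def render_GET(self, request):
        value = request.path.lstrip('/')
        return 'FOUND' if value in self.indicators else 'NA'

site = server.Site(IntelLookup(load_indicators(FEED)))
reactor.listenTCP(8080, site)  # assumed port
reactor.run()
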
TwistedIntel.py Usage

The TwistedIntel.py script can be used after execution by browsing to the web server with an IP address, domain or hash value provided in the path. If the result returned is FOUND, that value is part of the Critical Stack Intel Feed.

Download TwistedIntel.py

https://gist.github.com/jblukach/00f68e560dac78e6bd29

Feel free to change the code to meet your needs; I really appreciate any contributions back to the DFIR community.

 Happy Coding!!
John Lukach

Updated 12/15/2015

  • TwistedIntel2.py displays the feed that an IP address, domain, or hash originated from.
  • Upstart configuration file for running the Twisted Python script at startup.
  • Crontab configuration that restarts the script hourly after Critical Stack Intel updates.


Blocks in Practical Use

Last week, John Lukach put up a post about how to use some tools to find pieces of files left behind after being deleted, and even partially overwritten. I wanted to put together a short post to give a practical use of this technique for a recent case of mine.

The Case

The request came in as a result of an anomaly detected in traffic patterns. It looked like a user had uploaded a significant amount of data to a specific cloud storage website. My client identified the suspected machine, and got an image to preserve the data. Unfortunately, this anomaly wasn’t caught right away, so a couple months passed before action was taken. They took a pass on the image with some forensic tools, but they weren’t able to identify anything relating to this detection, so they asked me for a second look.

My Approach

I started off with a standard pass of the forensic tools. Not because I don’t trust their team, but because tool versions can have an effect on the artifacts that get extracted and I wanted to be sure of my findings. Besides, if you don’t charge for machine time, then this really doesn’t add much in the way of cost in the end. This process did two things for me. The first thing I got from it was that I could not find any artifacts related to the designated website. The second thing was that the history and cache looked very normal. In other words, it didn’t look like the user had cleared any history or cache in an effort to clean up after themselves.

Of course, I am now thinking that they could have done a targeted cleanup and deleted very specific items from their browser. I get some reassurance that this was not the case by using some of the deep scan options available in the forensic tools that I used. These tools will scrounge through unallocated clusters, on command, in search of data patterns that are potential matches for deleted and lost internet artifacts. I found none, again, in this more extensive search.

Let’s put on our tin foil hats now, and really go for it. The user could have wiped and deleted individual files, then gone into the history and cache lists to remove the pointers to those files. This would take care of the files that are still currently allocated. What about all those files that get cataloged in the cache, but quickly discarded because the server instructed this by sending some cache control headers? These files would have been lost to unallocated clusters at some point before the user thought about their cleanup actions. The only way to cover those tracks would be to use a tool that can wipe unallocated clusters. If the user went to this extent, dare I say they deserve to get away with it?

What If?

My client had a secondary thought in the event that we were not able to find traces of user action involved with this detection. They wanted to see if the system was infected with some malware that would have generated this traffic. I did the standard AV scans with multiple vendors and didn’t find anything. I followed that up with a review of the various registry locations that allow for malware to persist and autostart.

Verizon has a great intel team, and I asked them for information on possible campaigns involving the designated website. They gave me some great leads and IOCs to search for, but they did not pan out in this case. What a great resource to have!

After reviewing all of this, I was not able to find any indications of a malware infection, much less one that was capable of performing the suspected actions.

Points to Disprove

Here is the point where block hashing can make a real difference. I will show you how I used block hashing to disprove (as much as the available data will allow) three different points.

  1. Lost Files – User deleted individual history and cache entries to clean up their tracks and the file system lost track of the related files
  2. Unallocated – User went psycho-nutjob-crazy and wiped unallocated clusters after deleting records
  3. Malware – User wasn’t involved and malware did the dirty deed

Disprove Lost Files

In order to prove the user did not delete these selective entries from the browser history and cache, I need to find the files on disk, most likely in unallocated clusters. The file system no longer has metadata about them, so the forensic tools will not discover them as deleted or orphaned files. I could carve files from unallocated clusters using the known headers of various file types, but this is a very broad approach. I want to identify specific files related to this website, not discover any and all pictures from unallocated.

I start by using HTTrack to crawl the designated website. I want to pull as many files down as I can. These are all the files that a browser would see, and download, during normal interactions with the website. I got a collection of a few hundred files of various types: JPEG, GIF, SWF, HTML, etc.

The next step is to split and block hash these files. On this step, I used EnCase and the File Block Hash Map Analysis (FBHMA) Enscript written by Simon Key of EnCase Training. FBHMA has the ability to do both sides of the block hashing technique and presents an awesome graphic for partially located files. I applied it against my collected files, and then applied those hashes against the rest of the disk. The result was zero matches.

This technique is not affected by the amount of fragmentation or the amount of overwriting, as long as there are some pieces left behind. The only way to beat this technique is to completely overwrite the entire set of blocks for all of the files in question. A very unlikely scenario in such short time without a deliberate action.

Disprove Unallocated

This point takes a little more work to disprove, and it might be subject to your own interpretation. The idea here is that files have been lost to unallocated clusters without control. The user was not able to do a targeted wipe of the file contents before deletion, so the entire area of unallocated clusters would have to be wiped to assure cleanliness.

The user could make a pass with a tool that overwrites every cluster with 0x00 data. This is enough to clear the data from forensic tools, but it can leave behind a pretty suspicious trail. If you used block hashing against this, it would quickly become apparent that the area was zeroed out. Bulk Extractor provides some entropy calculations that make quick work of this scenario.

The other option, and more likely to be used, is to have the wipe tool write random data to the clusters. Most casual and even ‘advanced’ computer users picture common files like ZIP and JPEG as a messy clump of random data thrown together in some magic way that draws pictures or spells words. In DFIR, we learn early on that seemingly random data is not actually random. There are recognizable structures in most files, and unless the file involves some type of compression, it is not truly random when measured in terms of entropy. My point in all of this is that a wiping pass that writes random data would cause every cluster of unallocated space to show high entropy values. This would raise eyebrows because it is not normal behavior. Some blocks contain compression or encryption and would show high entropy, but others are plain text, which registers rather low on the entropy scale.
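
Entropy here is just a measure of how evenly the byte values are distributed. Here is a hedged sketch of one common way to compute it (Shannon entropy, not necessarily Bulk Extractor’s exact calculation):

# Shannon entropy of a block: near 0.0 for a zero-filled sector,
# approaching 8.0 for compressed, encrypted, or truly random data.
import collections
import math

def shannon_entropy(block):
    if not block:
        return 0.0
    counts = collections.Counter(block)
    total = float(len(block))
    return -sum((n / total) * math.log(n / total, 2) for n in counts.values())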

So with a single pass of Bulk Extractor, having it calculate hashes and entropy, I was able to determine that 1) unallocated clusters were not zeroed out and 2) unallocated clusters were not completely and truly random. Unallocated space, again, looked very normal.

Disprove Malware

The last point to disprove was the existence of malware on the system. I already established that there were no indications of allocated malware, but I can’t ignore the possibility of there having been some malware installed which later got deleted or uninstalled, for some reason. Again, without a full and controlled wiping of this data off the drive, it would leave artifacts in unallocated clusters for me to find with block hashing.

This step is a much larger undertaking, because I am employing the full 20+ million samples that are generously shared by VirusShare.com and its users. I didn’t have to do all of the work, though, because John has done that for us all. He has a GitHub repo with all the block hashes from VirusShare. Just be sure to remove the NSRL block hashes, since those malware authors can be quite lazy in copying code from other executable files. It’s a beast to get up and running, but well worth the effort. Maintaining it is much easier after it’s built.

Now I have a huge data set of known malware, adware, and spyware sample files, and various payload transports like DOC and PDF files. If there was anything bad on this system, my data set is very likely to find it. The result was no samples matching more than a single block on the disk. You have to allow for a threshold of matches with this size of sample data. A single matching block out of a possible 1,000 blocks for a sample file of 500 KB is not interesting.

Value of Blocks

This turned out to be a rather long post, but oh well. I hope this helps you to see the value in the various techniques of applying block level analysis in your cases. The incoming data in our cases is only getting larger, and we need to be smart about how we analyze it. Block hashing is just one way of letting our machines do the heavy work. Don’t be afraid to let your machine burn for a while.

You may be thinking to yourself that this is not a perfect process when it came to the malware point, and I may agree with you. We are advancing our tools, but we need to advance our understanding and interpretation of the results as well. For now, we handle that interpretation as investigators but the tools need to catch up. Harlan recently posted about this as a reflection on OSDFcon and it’s worth a read and consideration.

Happy Hunting!
James Habben
@JamesHabben

Block Hunting

In DFIR practices, we use hash algorithms to identify and validate data of all types. The typical use is applying them against an entire file, and we get a value back that represents that file as a whole. We can then use those hash values to search for Indicators of Compromise (IOC) or even eliminate files that are known to be safe as indicated by collections such as National Software Reference Library (NSRL). In this post, however, I am going to apply these hashes in a different manner.

The complete file will be broken down into smaller chunks and hashed for identification. You will primarily work with two types of blocks: clusters and sectors. A cluster block is tied to the operating system, while sector blocks correspond to the physical disk. For example, Microsoft Windows by default has a cluster size of 4,096 bytes, made up of eight 512-byte sectors, a sector size common across many operating systems. Sectors are the smallest addressable area on the disk, providing the most accuracy for block hunting.
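
Conceptually, that is all block hashing is. Here is a hedged sketch of the idea, assuming 512-byte sectors and MD5; the heavy lifting below is done by bulk_extractor and hashdb:

# Hash a file in fixed, sector-sized blocks; illustration only.
import hashlib

BLOCK_SIZE = 512

def block_hashes(path, block_size=BLOCK_SIZE):
    hashes = []
    with open(path, 'rb') as f:
        while True:
            block = f.read(block_size)
            if len(block) < block_size:
                break  # ignore a trailing partial block
            hashes.append(hashlib.md5(block).hexdigest())
    return hashes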

Here are the block hunting techniques I will demonstrate:

  1. locate sectors holding identifiable data
  2. determine if a file has previously existed

I will walk you through the command line process, and then provide links to a super nice GUI. As an extra bonus, I will tell you about some pre-built sector block databases.

Empty Image or Not

If you haven’t already, at some point you will receive an image that appears to be nothing but zeroes. Who wants to scroll through terabytes of unallocated space looking for data? A quick way to triage the image is to use bulk_extractor to identify known artifacts such as internet history, network packets, carved files, keywords and much more. What happens if the artifacts are fragmented or unrecognizable?

This is where sector hashing with bulk_extractor in conjunction with hashdb comes in handy to quickly find identifiable data. A lot of great features are being added on a regular basis, so make sure you are always using the most current versions found at: http://digitalcorpora.org/downloads/hashdb/experimental/

Starting Command

The following command will be used for both block hunting techniques.

bulk_extractor -e hashdb -o Out -S hashdb_mode=import -S hashdb_import_repository_name=Unknown -S hashdb_block_size=512 -S hashdb_import_sector_size=512 USB.dd

  • bulk_extractor – executed application
  • -e hashdb – enables usage of the hashdb application
  • -o Out – user defined output folder created by bulk_extractor
  • -S hashdb_mode=import – generates the hashdb database
  • -S hashdb_import_repository_name=Unknown – user defined hashdb repository name
  • -S hashdb_block_size=512 – size of block data to read
  • -S hashdb_import_sector_size=512 – size of block hash to import
  • USB.dd – disk image to process

Inside the Out folder that was declared by the -o option, you will find a hashdb.hdb database folder that is generated. Running the next command will extract the collected hashes into dfxml format for review.

hashdb export hashdb.hdb out.dfxml

Identifying Non-Zero Sectors

The dfxml output will provide the offset in the image where a non-low entropy sector block was identified. This is important to help limit false positives, where a low-value block could appear across multiple good and evil files. Entropy is the measurement of randomness. An example of a low entropy block would be one containing all 0x00 or 0xFF data for the entire sector.

The dfxml file will contain an entry for each identified block, including its hash and offset in the image.

Use your favorite hex editor or forensic software to review the contents of the identified sectors for recognizable characteristics. Now we have identified that the drive image isn’t empty, and it didn’t require a large amount of manual effort. Just don’t tell my boss, and I won’t tell yours!

Deleted & Fragmented File Recovery

Occasionally, I receive a request to determine if a file has ever existed on a drive. This file could be intellectual property, a customer list or a malicious executable. If the file is allocated, this can be done in short order. If the file doesn’t exist in the file system, it will be nearly impossible to find without a specialized technique. In order for this process to work, you must have a copy of the file that can be used to generate the sector hashdb database.

This command will generate a hashdb.hdb database of the BadFile.zip designated for recovery.

bulk_extractor -e hashdb -o BadFileOut -S hashdb_mode=import -S hashdb_import_repository_name=BadFile -S hashdb_block_size=512 -S hashdb_import_sector_size=512 BadFile.zip

This database will be used to scrub the drive of interest for comparisons. I am targeting a single file, but the command above can be applied to multiple files inside subfolders by using the -R option against a specific folder.

The technique will be able to identify blocks of a deleted file, as long as they haven’t been overwritten. It doesn’t even matter how fragmented the file was when it was allocated. In order to use the previously generated hashdb database to identify the file (or files) that we put into it, we need to switch the hashdb_mode from import to scan.

bulk_extractor -e hashdb -S hashdb_mode=scan -S hashdb_scan_path_or_socket=hashdb.hdb -S hashdb_block_size=512 -o USBOut USB.dd

Inside the USBOut output folder, there is a text file called identified_blocks.txt that records the matching hashes and image offset location. If the generated hashdb database contained multiple files, the count variable will tell you how many files contained a matching hash for each sector block hash.

Additional information can be obtained by using the expand_identified_blocks command option.

hashdb expand_identified_blocks hashdb.hdb identified_blocks.txt

Super Nice GUI

SectorScope is a Python 3 GUI interface for this same command line process that was presented at OSDFCon 2015 by Michael McCarrin and Bruce Allen. You definitely want to check it out: https://github.com/NPS-DEEP/NPS-SectorScope

Pre-Built Sector Block Databases

The last bit of this post is some goodies that would take you a long time to build on your own. I know, because I built one of these sets for you. The other set is provided by the same great folks at NIST that give us the NSRL hash databases. They went the extra step to provide us with a block hash list of every file contained in the NSRL that we have been using for years.

Subtracting the NSRL sector hashes from your hashdb will remove known blocks.

http://www.nsrl.nist.gov/ftp/MD5B512/

The VirusShare.com collection is also available for evil sector block hunting.

https://github.com/jblukach/FileBlock.Info#aquire-copy

Happy Block Hunting!!
John Lukach