AWS Pseudo Pipeline

I have been running my Forensic Artifact API on Ubuntu with an Nginx, Flask (Python), and MariaDB stack. I wanted to get out of the infrastructure administration business by moving to the AWS Cloud. I decided to start with the migration of my SHA256 hash library. My goals were to improve availability, allow collaboration, and keep costs down. I wound up having an expensive learning experience while importing the data into DynamoDB!

Forensic Artifact API Diagram

I decided to use the Amazon Web Services (AWS) Boto3 SDK for Python so I could read from an S3 bucket with an EC2 instance that inserts into a DynamoDB table. I read the line-delimited text file of SHA256 hashes as a stream, minimizing the amount of memory required for Python on the EC2 instance. Batch writing into a DynamoDB table accepts a maximum of twenty-five items per batch. I set the batch volume with 'range' in the for loop; it needs to match the minimum provisioned capacity at auto-scaling startup. Global tables used to replicate DynamoDB across regions also need to match the 'range' value until the first auto-scale completes.

import boto3

def import_hash(hashlist, hashtype, hashsrc, hashdesc):

    # Stream the line-delimited hash list from S3 to keep memory usage low.
    client = boto3.client('s3')
    obj = client.get_object(Bucket='bucketname', Key=hashlist)

    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('sha256')

    while True:
        # Write in batches of twenty-five, the DynamoDB batch maximum.
        with table.batch_writer() as batch:
            for i in range(25):
                # Trim the two-character line ending (CRLF) before decoding.
                item = obj['Body']._raw_stream.readline()[:-2].decode('utf-8')
                if not item:
                    break
                batch.put_item(Item={'sha256': item.upper(),
                                     'type': hashtype,
                                     'source': hashsrc,
                                     'desc': hashdesc})
        if not item:
            break

import_hash('Folder/File.txt', 'Known', 'HashSets.com', 'Windows')

DynamoDB auto-scaling has an issue: if reads and writes drop to zero, it will not scale back down to the minimum provisioned capacity. I needed to use a time-based CloudWatch event to execute a Lambda function that generates regular database activity.

import boto3

dynamodb = boto3.resource('dynamodb')

def lambda_handler(event, context):

    # Touch the table with one read and one write so activity never hits zero.
    table = dynamodb.Table('sha256')
    table.get_item(Key={'sha256': '0000000000000000000000000000000000000000000000000000000000000000',
                        'type': 'TEST'})
    table.put_item(Item={'sha256': 'FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF',
                         'type': 'TEST',
                         'source': 'JOHN',
                         'desc': 'PING'})

    return
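
If you prefer to wire up the schedule from code instead of the console, a boto3 sketch along these lines can create the time-based trigger; the rule name, rate, and Lambda ARN below are illustrative assumptions rather than values from my deployment.

import boto3

events = boto3.client('events')
awslambda = boto3.client('lambda')

# Scheduled CloudWatch Events rule so table activity never drops to zero.
events.put_rule(Name='dynamodb-keepalive', ScheduleExpression='rate(5 minutes)')

# Allow CloudWatch Events to invoke the keep-alive Lambda function.
awslambda.add_permission(FunctionName='dynamodb-keepalive',
                         StatementId='allow-cloudwatch-events',
                         Action='lambda:InvokeFunction',
                         Principal='events.amazonaws.com')

# Point the rule at the Lambda function (replace with your function's ARN).
events.put_targets(Rule='dynamodb-keepalive',
                   Targets=[{'Id': '1',
                             'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:dynamodb-keepalive'}])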

Happy Coding!

John Lukach
@jblukach

Building Ubuntu Packages

Bruce Allen with the Naval Postgraduate School released hashdb 3.0, adding some great improvements for block hashing. My block hunting is mainly done on virtualized Ubuntu, so I decided it was time to build a hashdb package. I figured I would document the steps, as they can be used for SANS SIFT, REMnux, and many other great Ubuntu distributions too.

1) Ubuntu 64-bit Server 16.04.1 hashdb Package Requirements

sudo apt-get install git autoconf build-essential libtool swig devscripts dh-make python-dev zlib1g-dev libssl-dev libewf-dev libbz2-dev libtool-bin

2) Download hashdb from GitHub

git clone https://github.com/NPS-DEEP/hashdb.git

3) Verify hashdb Version

cat hashdb/configure.ac | more

(The version number appears in the AC_INIT line near the top of configure.ac, 3.0.0 at the time of writing.)

4) Rename hashdb Folder with Version Number

mv hashdb hashdb-3.0.0

5) Enter hashdb Folder

cd hashdb-3.0.0

6) Bootstrap GitHub Download

./bootstrap.sh

7) Configure hashdb Package

./configure

8) Make hashdb Package with a Valid Email Address for the Maintainer

dh_make -s -e email@example.com --packagename hashdb --createorig

9) Build hashdb Package

debuild -us -uc

10) Install hashdb

dpkg -i hashdb_3.0.0-1_amd64.deb

John Lukach
@jblukach

Know Your Network

Do you know what is on your network?  Do you have a record of truth like DHCP logs for connected devices?  How do you monitor for unauthorized devices?  What happens if none of this information is currently available?

Nathan Crews @crewsnw1 and Tanner Payne @payneman presented Simplifying Home Security with CHIVE at the Security Onion Conference 2016, which will definitely help those with Security Onion deployed answer these questions.  Well worth the watch: https://youtu.be/zBDAjNnRiQI

My objective is to create a Python script that helps with the identification of devices on the network using Nmap with limited configuration.  I want to be able to drop a virtual machine or Raspberry Pi onto a network segment and have it perform discovery scans every minute from a cron job, generating output that can be easily consumed by a SIEM for monitoring.
I use the netifaces package to determine the network address that was assigned to the device for the discovery scans.
I use the netaddr package to generate the CIDR notation that the Nmap syntax uses for scanning subnet ranges.
The script will be executed from cron, thus running as the root account, so it is important to provide absolute paths.  Nmap also needs this permission level to listen for network responses.
I take the multi-line native Nmap output and consolidate it down to single lines.  The derived fields are labeled with equals signs (=) and separated by pipes (|).  I parse out the scan start date, scanner IP address, identified device IP address, identified device MAC address, and the vendor associated with the MAC address.
I ship the export.txt file to Loggly (https://www.loggly.com) for parsing and alerting, which allows me to focus on the analysis, not the administration.
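Here is a minimal sketch of that flow, assuming an eth0 interface and an illustrative /opt/scans/export.txt output path; the published gist differs in its details.

#!/usr/bin/env python
# Sketch only: discover hosts on the local subnet and flatten the results
# to one pipe-delimited line per device.
import subprocess

import netaddr
import netifaces

# Determine the address and netmask assigned to the scanning interface.
iface = netifaces.ifaddresses('eth0')[netifaces.AF_INET][0]
cidr = str(netaddr.IPNetwork('%s/%s' % (iface['addr'], iface['netmask'])).cidr)

# Absolute path to Nmap because the script runs from root's cron.
report = subprocess.check_output(['/usr/bin/nmap', '-sn', cidr]).decode('utf-8')

# Consolidate the multi-line Nmap report into single lines for the SIEM.
with open('/opt/scans/export.txt', 'a') as export:
    for block in report.split('Nmap scan report for ')[1:]:
        if 'MAC Address:' not in block:
            continue
        lines = block.splitlines()
        device_ip = lines[0].strip()
        mac_line = [l for l in lines if l.startswith('MAC Address:')][0]
        mac, vendor = mac_line.replace('MAC Address: ', '').split(' ', 1)
        export.write('scanner=%s|device=%s|mac=%s|vendor=%s\n' %
                     (iface['addr'], device_ip, mac, vendor.strip('()\n')))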

The full script can be found on GitHub:  https://gist.github.com/jblukach/c67c8695033ad276b4836bea58669958

John Lukach
@jblukach

MatchMeta.Info

Filenames are trivial to change.  It is still important to know which ones are common during your investigation.  You can't remember every filename, as there are already twenty-four million plus in the NSRL data set alone.  MatchMeta.Info is my way of automating these comparisons into the analysis process.  Not all investigators have Internet access on their lab machines, so I wanted to share the steps to build your own internal site.
Server Specifications
Twisted Python Installation
I prefer using Ubuntu, but feel free to use whatever operating system you are most comfortable with.  The installation process has become very simple!!
apt-get install python-dev python-pip
pip install service_identity twisted
Twisted Python Validation

NSRL Filenames
I download the NSRL data set directly from NIST, then parse out the filenames with a Python script that I have hosted on the GitHub project site.
Or feel free to download the already precompiled list of filenames that I have posted here:
https://storage.bhs1.cloud.ovh.net/v1/AUTH_bfbb205b09774544bb79dd7bf8c3a1d8/MatchMetaInfo/nsrl251.txt.zip
MatchMeta.Info Setup
First, create a folder that will contain the mmi.py file from the GitHub site and the uncompressed nsrl251.txt file from the previous section.  For example, a www folder can be created in the /opt directory for these files.
/opt/www/mmi.py
/opt/www/nsrl251.txt
Second, make the two files read-only to limit permissions.
chmod 400 mmi.py nsrl251.txt
Third, make the two files owned by the web server user and group.
chown www-data:www-data mmi.py nsrl251.txt
Fourth, restrict the www folder to read and execute so the Twisted Python script can run from it.
chmod 500 www
Fifth, make the www folder owned by the web server user and group.
chown www-data:www-data www
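With the files in place, here is a minimal sketch of the kind of Twisted script mmi.py could be; the published version on the GitHub site differs in its details, and the uppercase normalization is an assumption.

# Sketch of a Twisted filename-lookup service (Python 2 era Twisted).
from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.server import Site

# Load the NSRL filename list into memory once at startup.
with open('/opt/www/nsrl251.txt') as nsrl:
    KNOWN = set(line.strip().upper() for line in nsrl)

class Lookup(Resource):
    isLeaf = True

    def render_GET(self, request):
        # The filename to check is the last element of the URL path.
        name = request.path.split('/')[-1].upper()
        return 'OK' if name in KNOWN else 'NA'

reactor.listenTCP(8080, Site(Lookup()))
reactor.run()

Browsing to http://server/filename.ext would then return OK for a known NSRL filename and NA for anything else.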
MatchMeta.Info Service
Upstart on Ubuntu will allow the Twisted Python script to be run as a service by creating the /etc/init/mmi.conf file.  Paste these commands into the newly created file.  It is critical to use exact absolute paths in the mmi.py and mmi.conf files or the service will not start.
start on runlevel [2345]
stop on runlevel [016]

setuid www-data
setgid www-data

exec /usr/bin/python /opt/www/mmi.py
respawn
MatchMeta.Info Port Forwarding
Port 80 is privileged and we don't want to run the service as root, so port forwarding can be used.  This allows us to run the Python service as the www-data user by appending the following to the bottom of the /etc/ufw/before.rules file.
*nat
-F
:PREROUTING ACCEPT [0:0]
-A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
COMMIT
Thanks to @awhitehatter  for the tip on their GitHub site.
Configure Firewall
Please set up the firewall rules to meet your environment's requirements.  Ports 80 and 8080 are currently used by the MatchMeta.Info service.  Don't forget SSH for system access.
ufw allow 80/tcp
ufw allow 8080/tcp
ufw allow ssh
ufw enable
MatchMeta.Info Validation
Finally, all set to start the MatchMeta.Info Service!!
start mmi
Browsing to the site with a known filename in the URL path should return the word OK.
Browsing with an unknown filename should return NA.
I plan to keep moving MatchMeta.Info features from the command line version into the web interface in the future.  A morph for James Habben's Evolve project, a web interface for Volatility, has already been submitted to incorporate the analysis process.
John Lukach
@jblukach

Block Building Checklist

It is important to understand how the artifacts you use during an investigation are created. Thus I wanted to provide my block building checklist to help others recreate the process. I will walk through the commands used to prepare the blocks for distribution and how to build the block libraries with the removal of a whitelist.
Block Preparation
I have used Windows, Linux, and Mac OS X over the course of this project. I recommend using the operating system you are most comfortable with for downloading and unpacking the VirusShare.com torrents. The best performance will come from using solid state drives during the block building steps. The more memory available during whitelisting, the better. Far fewer system resources are necessary when just doing hash searches and comparisons during block hunting.
We saw this command previously in the Block Hunting post, with a new option. The -x option disables parsers so that bulk_extractor only generates the block sector hashes, reducing the generation time.
bulk_extractor -x accts -x aes -x base64 -x elf -x email -x exif -x find -x gps -x gzip -x hiberfile -x httplogs -x json -x kml -x msxml -x net -x pdf -x rar -x sqlite -x vcard -x windirs -x winlnk -x winpe -x winprefetch -x zip -e hashdb -o VxShare199_Out -S hashdb_mode=import -S hashdb_import_repository_name=VxShare199 -S hashdb_block_size=512 -S hashdb_import_sector_size=512 -R VirusShare_00199
The following steps help with the reduction of disk storage requirements and reporting cleanliness for the sector block hash database.  It is also a similar process for migrating from hashdb version one to two.  One improvement that I need to make is to use JSON instead of DFXML, support for which was released at OSDFCon 2015 by Bruce Allen.
We need to export the sector block hashes out of the database so that the suggested modifications can be made to the flat file output.
hashdb export VxShare199_Out/hashdb.hdb VxShare199.out
  • hashdb – executed application
  • export – export sector block hashes as a dfxml file
  • VxShare199_Out/ – relative folder path to the hashdb
  • hashdb.hdb – default hashdb name created by bulk_extractor
  • VxShare199.out – flat file output in dfxml format
Copy the first two lines of the VxShare199.out file into a new VxShare199.tmp flat file.
head -n 2 VxShare199.out > VxShare199.tmp
Start copying the contents of the VxShare199.out file at line twenty-two, appending them to the existing VxShare199.tmp file; this skips the intervening metadata lines. The line count may vary depending on the operating system or the versions of bulk_extractor and hashdb installed.
tail -n +22 VxShare199.out >> VxShare199.tmp
The sed command will read the VxShare199.tmp file, then remove the path and the beginning of the file name prior to writing into the new VxShare199.dfxml file, leaving just the hash portion of each file name.
sed 's/VirusShare_00199\/VirusShare\_//g' VxShare199.tmp > VxShare199.dfxml
Create an empty hashdb with the sector size of 512 using the -p option. The default size is 4096 if no option is provided.
hashdb create -p 512 VxShare199
Import the processed VxShare199.dfxml file into the newly created VxShare199 hashdb database.
hashdb import VxShare199 VxShare199.dfxml
I compress and upload the hashdb database for distribution saving these steps for everyone.
Building Block Libraries
The links to these previously generated hashdb databases can be found at the following link.
Create an empty hashdb called FileBlock.VxShare for the VirusShare.com collection.
hashdb create -p 512 FileBlock.VxShare
Add the VxShare199 database to the FileBlock.VxShare database.  This step will need to be repeated for each database. Upkeep is easier when you keep the completely built FileBlock.VxShare database for ongoing additions of new sector hashes.
hashdb add VxShare199 FileBlock.VxShare
Download the sector hashes of the NSRL from the following link.
Create an empty hashdb called FileBlock.NSRL for the NSRL collection.
hashdb create -p 512 FileBlock.NSRL
The NSRL block hashes are stored in a tab-delimited flat file format.  The import_tab option is used to import each file; the files are split by the first character of the hash value, 0-9 and A-F.  I also keep a copy of the built FileBlock.NSRL for future updates too.
hashdb import_tab FileBlock.NSRL MD5B512_0.tab
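Since the files are split into sixteen pieces, a small helper along these lines can run the import for each one; this is a sketch, with the MD5B512_*.tab naming assumed from the example above.

import subprocess

# Import each of the sixteen split NSRL tab files into FileBlock.NSRL.
for prefix in '0123456789ABCDEF':
    tab = 'MD5B512_%s.tab' % prefix
    subprocess.check_call(['hashdb', 'import_tab', 'FileBlock.NSRL', tab])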
Remove NSRL Blocks
Create an empty hashdb called FileBlock.Info for the removal of the whitelist.
hashdb create -p 512 FileBlock.Info
This command will remove the NSRL sector hashes from the VirusShare.com collection creating the final FileBlock.Info database for block hunting.
hashdb subtract FileBlock.VxShare FileBlock.NSRL FileBlock.Info
The initial build is machine time intensive but once done the maintenance is a walk in the park.
Happy Block Hunting!!
John Lukach

Critical Stack Intel Feed Consumption

Critical Stack provides a free threat intelligence aggregation feed through their Intel Market for consumption by the Bro network security monitoring platform. This is a fantastic service that is provided for free!! Special thanks to those who have contributed their feeds for all to take advantage of the benefits!! Installation is beyond the scope of this post, as it is super easy with decent documentation available on their website. The feed updates run roughly hourly by default into a tab-delimited file available on disk.

My goal was to make the IP address, domain, and hash values accessible through a web interface for consumption by other tools in your security stack. Additionally, I didn't want to create another database structure, but rather read the values into memory for comparison on script restarts. I decided to use Twisted Python by Twisted Matrix Labs to create the web server. Twisted is an event-driven networking engine written in Python. The script provides a basic foundation without entering into the format debate between STIX and JSON.  Kept it simple…

Twisted Python Installation

The following installation steps work on Ubuntu 14.04 as that is my preference.

apt-get install build-essential python-setuptools python-dev python-pip

pip install service_identity

wget https://pypi.python.org/packages/source/T/Twisted/Twisted-15.5.0.tar.bz2

bzip2 -d Twisted-15.5.0.tar.bz2

tar -xvf Twisted-15.5.0.tar

cd Twisted-15.5.0/

python setup.py install

The PIP package installation allows for the future usage of SSL and SSH capabilities in Twisted.

TwistedIntel.py Script

The script covers four pieces of configuration:

  • The default installation file and path containing the Critical Stack Intel Feed artifacts.
  • The field separator on each line that gets loaded into the Python list in memory.
  • The output that gets displayed on the dynamically generated web page based on user input.
  • The port that the web server runs on for the end-user to access the web page.

TwistedIntel.py Usage

The TwistedIntel.py script can be used after execution by browsing to the website with an IP address, domain, or hash value provided in the path.  If the result returns FOUND, that means it is part of the Critical Stack Intel Feed.
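
A minimal sketch of that idea is below; the feed path, port, and the text returned for misses are assumptions based on the description above, not the published gist.

# Sketch of a Twisted lookup service for the Critical Stack Intel feed.
from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.server import Site

# Load the tab-delimited feed into memory; the first field is the indicator.
INTEL = set()
with open('/opt/critical-stack/frameworks/intel/master-public.bro.dat') as feed:
    for line in feed:
        if line.strip() and not line.startswith('#'):
            INTEL.add(line.split('\t')[0])

class IntelLookup(Resource):
    isLeaf = True

    def render_GET(self, request):
        # The IP address, domain, or hash is taken from the URL path.
        value = request.path.strip('/')
        return 'FOUND' if value in INTEL else 'NOT FOUND'

reactor.listenTCP(8080, Site(IntelLookup()))
reactor.run()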

Download TwistedIntel.py

https://gist.github.com/jblukach/00f68e560dac78e6bd29

Feel free to change the code to meet your needs and really appreciate any contributions back to the DFIR community.

 Happy Coding!!
John Lukach

Updated 12/15/2015

  •       TwistedIntel2.py displays the feed that an IP address, domain, or hash originated from.
  •       Upstart configuration file for running the Twisted Python script at startup.
  •       Crontab configuration that restarts the script hourly after Critical Stack Intel updates.


Block Hunting

In DFIR practices, we use hash algorithms to identify and validate data of all types. The typical use is applying them against an entire file, and we get a value back that represents that file as a whole. We can then use those hash values to search for Indicators of Compromise (IOC) or even eliminate files that are known to be safe as indicated by collections such as National Software Reference Library (NSRL). In this post, however, I am going to apply these hashes in a different manner.

The complete file will be broken down into smaller chunks and hashed for identification.  You will primarily work with two types of blocks: clusters and sectors. A cluster block is tied to the operating system, while a sector block corresponds to the physical disk. For example, Microsoft Windows by default uses a 4,096-byte cluster made up of eight 512-byte sectors, a layout common across many operating systems. Sectors are the smallest addressable area on the disk, so they provide the most accuracy for block hunting.

Here are the block hunting techniques I will demonstrate:

  1. locate sectors holding identifiable data
  2. determine if a file has previously existed

I will walk you through the command line process, and then provide links to a super nice GUI. As an extra bonus, I will tell you about some pre-built sector block databases.

Empty Image or Not

If you haven’t already, at some point you will receive an image that appears to be nothing but zeroes. Who wants to scroll through terabytes of unallocated space looking for data? A quick way to triage the image is to use bulk_extractor to identify known artifacts such as internet history, network packets, carved files, keywords and much more. What happens if the artifacts are fragmented or unrecognizable?

This is where sector hashing with bulk_extractor in conjunction with hashdb comes in handy to quickly find identifiable data. A lot of great features are being added on a regular basis, so make sure you are always using the most current versions found at: http://digitalcorpora.org/downloads/hashdb/experimental/

Starting Command

The following command will be used for both block hunting techniques.

bulk_extractor -e hashdb -o Out -S hashdb_mode=import -S hashdb_import_repository_name=Unknown -S hashdb_block_size=512 -S hashdb_import_sector_size=512 USB.dd

  • bulk_extractor – executed application
  • -e hashdb – enables usage of the hashdb application
  • -o Out – user defined output folder created by bulk_extractor
  • -S hashdb_mode=import – generates the hashdb database
  • -S hashdb_import_repository_name=Unknown – user defined hashdb repository name
  • -S hashdb_block_size=512 – size of block data to read
  • -S hashdb_import_sector_size=512 – size of block hash to import
  • USB.dd – disk image to process

Inside the Out folder that was declared by the -o option, you will find the generated hashdb.hdb database folder. Running the next command will extract the collected hashes into dfxml format for review.

hashdb export hashdb.hdb out.dfxml

Identifying Non-Zero Sectors

The dfxml output will provide the offset in the image where a non-low-entropy sector block was identified. This is important to help limit false positives, where a low-value block could appear across multiple good and evil files. Entropy is a measurement of randomness. An example of a low-entropy block would be one containing all 0x00 or 0xFF data for the entire sector.

The dfxml file will contain an entry for each identified block, including its hash and the offset where it was found.

Use your favorite hex editor or forensic software to review the contents of the identified sectors for recognizable characteristics. Now we have determined that the drive image isn't empty, without a large amount of manual effort. Just don't tell my boss, and I won't tell yours!

Deleted & Fragmented File Recovery

Occasionally, I receive a request to determine if a file has ever existed on a drive. This file could be intellectual property, a customer list, or a malicious executable. If the file is allocated, this can be done in short order. If the file doesn't exist in the file system, it will be nearly impossible to find without a specialized technique. In order for this process to work, you must have a copy of the file that can be used to generate the sector hashdb database.

This command will generate a hashdb.hdb database of the BadFile.zip designated for recovery.

bulk_extractor -e hashdb -o BadFileOut -S hashdb_mode=import -S hashdb_import_repository_name=BadFile -S hashdb_block_size=512 -S hashdb_import_sector_size=512 BadFile.zip

The data will be used for scrubbing our drive of interest to run the comparisons. I am targeting a single file, but the command above can be applied to multiple files inside subfolders by using the -R option against a specific folder.

The technique will be able to identify blocks of a deleted file, as long as they haven’t been overwritten. It doesn’t even matter how fragmented the file was when it was allocated. In order to use the previously generated hashdb database to identify the file (or files) that we put into it, we need to switch the hashdb_mode from import to scan.

bulk_extractor -e hashdb -S hashdb_mode=scan -S hashdb_scan_path_or_socket=hashdb.hdb -S hashdb_block_size=512 -o USBOut USB.dd

Inside the USBOut output folder, there is a text file called identified_blocks.txt that records the matching hashes and image offset location. If the generated hashdb database contained multiple files, the count variable will tell you how many files contained a matching hash for each sector block hash.

Additional information can be obtained by using the expand_identified_blocks command option.

hashdb expand_identified_blocks hashdb.hdb identified_blocks.txt

Super Nice GUI

SectorScope is a Python 3 GUI interface for this same command line process that was presented at OSDFCon 2015 by Michael McCarrin and Bruce Allen. You definitely want to check it out: https://github.com/NPS-DEEP/NPS-SectorScope

Pre-Built Sector Block Databases

The last bit of this post is some goodies that would take you a long time to build on your own. I know, because I built one of these sets for you. The other set is provided by the same great folks at NIST that give us the NSRL hash databases. They went the extra step to provide us with a block hash list of every file contained in the NSRL that we have been using for years.

Subtracting the NSRL sector hashes from your hashdb will remove known blocks.

http://www.nsrl.nist.gov/ftp/MD5B512/

The VirusShare.com collection is also available for evil sector block hunting.

https://github.com/jblukach/FileBlock.Info#aquire-copy

Happy Block Hunting!!
John Lukach