Secret Archives of Execution Evidence: CCM_RecentlyUsedApps

UPDATE 2017-04-03: Unicode strings are used when needed. See the update post.

I seem to be running into more and more systems that have Windows Prefetch disabled for one reason or another. It is especially frustrating for me as a consultant since I cannot make the changes necessary to enforce the creation of the trace files nor can I implement any kind of central logging. Without this digital forensic artifact, it becomes increasingly difficult to build out a timeline of events across all the systems involved in an incident response.

One evidence source that has shown itself over and over comes from a connection with a Microsoft SCCM server. SCCM can collect inventory data from many sources, and tracking executable launches is one of them. Collection of this data by the SCCM server isn’t turned on by default; however, the logging occurs on the endpoints regardless of the settings configured on the server.

If you search for CCM_RecentlyUsedApps, you will find tons of articles about configuring SCCM to collect this data or how to perform queries to extract the collected data. If you have the ability to push this in your organization, I say do it! If you can’t, then read on so I can show you how to take advantage of this data anyway.

Data Source

The records holding the information behind CCM_RecentlyUsedApps are stored in the collection of files that make up the database behind WMI. The locations are consistent from Windows XP through Windows 10, and you will find them here:
c:\windows\system32\wbem\repository\
c:\windows\system32\wbem\repository\fs\

I have even seen some systems that have what appears to be an old version of the WMI database. It seems to roll like the Windows Registry ControlSet keys: when the rebuild process kicks off, a new version of the database is built, and it does not carry the previous information with it. I have seen up to 003, but it could likely go further. The previous versions look like this:
c:\windows\system32\wbem\repository.001\
c:\windows\system32\wbem\repository.001\fs\

This specific artifact was a very critical piece in a previous case. It allowed us to narrow the time window of the compromise to be much more specific. Even a single day of exposure can make a big difference in the fines against the victim company during a PCI Forensic Investigation (PFI).

You will see a handful of files in these locations. They work together to link the various records, and all of them are needed to properly parse the data. The team at FireEye did some work on reverse engineering this database and released a Python tool to extract the available classes and namespaces. You can find it here:
https://github.com/fireeye/flare-wmi/tree/master/python-cim

Using this script, you can extract this data using these parameters:
Namespace: root\ccm\SoftwareMeteringAgent
Class: CCM_RecentlyUsedApps

This script was very helpful to me in a number of previous cases, although I have to mention that it is a bit of a pain to get installed properly. The other trouble that I ran into with this script, by no fault of the FireEye team, is that it can only parse the namespaces from the database if the data is not ‘corrupted’. I have found that imaging a live system causes this ‘corruption’ almost half of the time. It is frustrating to know that there are Indicator of Compromise (IOC) hits inside that data blob, but the structure won’t allow for parsing.

Different Approach

As I manually looked over those seemingly lost IOC hits, I started to recognize patterns surrounding the hits. The fields holding all the property data seemed to be in the same order for all of the records of a certain system that I was reviewing at the time. I then pulled up a few systems with different OS’s from previous cases and found the same structure. YES!! The perfect setup for carving. Time to reverse engineer the record format.

The index uses a hash value in tracking and sorting structures that I won’t bore you with here. I mention it, though, because this hash is the piece we will use to find these records. WinXP uses MD5 and newer versions use SHA256. The hash is generated from the class name CCM_RecentlyUsedApps, except the text is first upper cased to CCM_RECENTLYUSEDAPPS and then encoded as Unicode (UTF-16LE): C\x00C\x00M\x00_\x00R\x00… (you get the point).
WinXP MD5:
6FA62F462BEF740F820D72D9250D743C
WinVista+ SHA256:
7C261551B264D35E30A7FA29C75283DAE04BBA71DBE8F5E553F7AD381B406DD8

These hashes mark the start of the records. They are themselves stored as Unicode text, for some reason: 128 bytes for the SHA256 and 64 bytes for the MD5.
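If you want to reproduce those marker values yourself, here is a minimal sketch of the derivation as described above (uppercase the class name, encode it as UTF-16LE, hash it, and upper case the hex digest):

# Sketch: derive the record marker hashes from the class name, per the
# description above (uppercase, encode as UTF-16LE, then hash).
import hashlib

name = 'CCM_RecentlyUsedApps'.upper().encode('utf-16-le')

print('WinXP MD5:     ' + hashlib.md5(name).hexdigest().upper())
print('Vista+ SHA256: ' + hashlib.sha256(name).hexdigest().upper())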

The next 16 bytes following the hash are two 8-byte FILETIMEs.

After that come 2 bytes that tell you the size of the data portion of this record. I have not seen any records needing more than 2 bytes, and the maximum a 2-byte value can hold is 65,535 unsigned or 32,767 signed. Either of those provides plenty of space for this data, so I wouldn’t expect the field to grow. Note that the data portion of the record includes these 2 size bytes.

You can see on the right in the screenshot above that the size of the data is 432. You can then see at the bottom that I have highlighted 432 bytes (Sel 432 [1B0h]). You can also see another ‘7C261…’ starting immediately after my selection, although don’t let this fool you into thinking that these records will always be contiguous.

From here, the data is broken into 2 sections. The first section consists of various 4 byte fields with some being offsets and others being property values. The second section contains all the string based property values separated by double 0x00 bytes.

There are three values we can extract from the numeric section that are helpful. The offsets below are measured from the start of the record, as the breakdowns show (hash length + 16 FILETIME bytes + position within the data portion):

  • FileSize: Vista+ 178d (128+16+34), XP 114d (64+16+34)
  • ProductLanguage: Vista+ 194d (128+16+50), XP 130d (64+16+50)
  • LaunchCount: Vista+ 202d (128+16+58), XP 138d (64+16+58)

The string section always starts with ‘CCM_RecentlyUsedApps’ and is followed by the double 0x00 separator. If there are 4 bytes of 0x00 following, then the next string field is null. If there are 6 bytes of 0x00, then the next 2 string fields are null. Follow the pattern?

The string properties are listed in the following order:
ClassName (always “CCM_RecentlyUsedApps”)
AdditionalProductCodes
CompanyName
ExplorerFilename
FileDescription
FilePropertiesHash
FileVersion
FolderPath
LastUsedTime
LastUsername
MsiDisplayName
MsiPublisher
MsiVersion
OriginalFilename
ProductCode
ProductName
ProductVersion
SoftwarePropertiesHash

There will only be a single 0x00 at the very end of the record. Wasn’t that easy?
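To tie the whole layout together, here is a minimal carving sketch for the Vista+ format based on the structures described above. It only handles the SHA256 marker (swap in the MD5 marker and the XP offsets of 114/130/138 for WinXP), and the little-endian unsigned reads, the ASCII decode of the string section, and the simple double-0x00 split are my working assumptions rather than a documented specification:

# carve_ccm_rua.py - sketch of the CCM_RecentlyUsedApps record layout above.
import struct
import sys
from datetime import datetime, timedelta

# The Vista+ marker: the SHA256 hex string stored as UTF-16LE (128 bytes).
SIG = '7C261551B264D35E30A7FA29C75283DAE04BBA71DBE8F5E553F7AD381B406DD8'.encode('utf-16-le')

STRING_FIELDS = [
    'ClassName', 'AdditionalProductCodes', 'CompanyName', 'ExplorerFilename',
    'FileDescription', 'FilePropertiesHash', 'FileVersion', 'FolderPath',
    'LastUsedTime', 'LastUsername', 'MsiDisplayName', 'MsiPublisher',
    'MsiVersion', 'OriginalFilename', 'ProductCode', 'ProductName',
    'ProductVersion', 'SoftwarePropertiesHash',
]

def filetime(raw):
    # FILETIME: 100-nanosecond intervals since 1601-01-01 UTC
    return datetime(1601, 1, 1) + timedelta(microseconds=raw / 10.0)

def carve(blob):
    pos = blob.find(SIG)
    while pos != -1:
        rec = {}
        # Two 8-byte FILETIMEs immediately after the 128-byte hash marker
        ft1, ft2 = struct.unpack_from('<QQ', blob, pos + 128)
        rec['FileTime1'], rec['FileTime2'] = filetime(ft1), filetime(ft2)
        # 2-byte size of the data portion (the size bytes are included in it)
        size = struct.unpack_from('<H', blob, pos + 144)[0]
        # Numeric fields at fixed offsets from the start of the record
        rec['FileSize'] = struct.unpack_from('<I', blob, pos + 178)[0]
        rec['ProductLanguage'] = struct.unpack_from('<I', blob, pos + 194)[0]
        rec['LaunchCount'] = struct.unpack_from('<I', blob, pos + 202)[0]
        # String section: starts at 'CCM_RecentlyUsedApps', values separated by
        # double 0x00 bytes; an empty element means a null field.
        data = blob[pos + 144 : pos + 144 + size]
        start = data.find(b'CCM_RecentlyUsedApps')
        if start != -1:
            values = data[start:].split(b'\x00\x00')
            for field, value in zip(STRING_FIELDS, values):
                rec[field] = value.decode('ascii', 'replace')
        yield rec
        pos = blob.find(SIG, pos + 1)

if __name__ == '__main__':
    with open(sys.argv[1], 'rb') as f:   # e.g. an extracted OBJECTS.DATA
        for rec in carve(f.read()):
            print(rec.get('LastUsedTime'), rec.get('ExplorerFilename'), rec.get('LaunchCount'))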

New Python Tool

After I determined these structures, I was chatting with Willi Ballenthin since he was involved in the research of the database structure. He said something like “that tool sounds pretty neat” and then followed up saying “possibly similar to this” and pointed me to a blog post by David Pany at FireEye.
https://www.fireeye.com/blog/threat-research/2016/12/do_you_see_what_icc.html

Sure enough, David beat me to it with a python script to search for the classname hashes and parse the record structure. The good news is that we arrived at the same basic approach and record structures. Validation is always nice. His python script is on GitHub here:
https://github.com/davidpany/WMI_Forensics/blob/master/CCM_RUA_Finder.py

I have had some trouble running this python script against my systems, but I haven’t spent the time to determine the cause. The output is a CSV file, but I don’t have any screenshots to show because of the errors I ran into.

New EnScript Tool

I decided to implement this approach in EnScript. My cases have involved upwards of 500 systems for analysis, and a Python-based approach would force me to either extract all those files or use a mounting or parsing solution to expose them. By using EnScript in EnCase v7 or v8, I can run it over all system images in one pass. I was able to do this successfully in testing on a recent case with 73 systems. EnCase proved to be a powerful tool in this specific scenario.

The EnScript starts off with a GUI to give you the option of running against all files in the case or a smaller subset designated by a blue check or tag selection.

I found records existing in OBJECTS.DATA and INDEX.BTR files. Some seem to be in areas of the file that have been deallocated from the active records of the database. Additionally, I have found quite a large number of records in the PAGEFILE.SYS file as well. You will see a selection option in the GUI for these common filenames.

The output of this EnScript is a CSV file. It includes a few columns in addition to the properties that were parsed from the records: evidence filename to indicate the system source, item path to show which file it was found in, and file offset to manually validate the data later if needed.

I encourage you to use Excel’s data deduplication function since I ran into a number of bugs in EnCase trying to make this EnScript work. There are some hacky workarounds in the code currently. Dedupe on all columns except item path and file offset. This will remove dupes that are found in both pagefile.sys and objects.data files.
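If you would rather script that dedupe step than do it in Excel, a quick sketch with pandas works too. The column names 'Item Path' and 'File Offset' below are assumptions; match them to the actual headers in your CSV:

# Dedupe the EnScript CSV on every column except the two location columns.
import pandas as pd

df = pd.read_csv('ccm_rua_output.csv')
keep = [c for c in df.columns if c not in ('Item Path', 'File Offset')]
df.drop_duplicates(subset=keep).to_csv('ccm_rua_deduped.csv', index=False)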

I suspect we might be able to pull some of these records from unallocated clusters, but I haven’t found any there yet. Please let me know if you do!

You can grab the latest version of the EnScript on GitHub:
https://github.com/JamesHabben/ccm-rua-enscript

See the follow-up post about the forensic meaning of these records.

James Habben
@JamesHabben

BSides Los Angeles – Experience and Slides

BSides LA – the only security con on the beach. If you haven’t had the opportunity to attend, you should make the effort. The volunteer staff are dedicated to making sure the conference goes well, and this year was another example of their hard work.

I enjoyed attending and learning from a number of sessions and was tickled to see so many presenters referencing the Verizon DBIR to give weight to their message. The corpus of data gives us so many ways to improve our industry. You should consider contributing data from your own investigations through the VERIS framework, if you don’t already.

My presentation was titled “USB Device Analysis” and I had a lot of great conversations afterwards because of it. It was great meeting new faces that are both young and old in the industry. The enthusiasm is infectious!

Many asked me for my slides, so I thought I would share them here along with some of these thoughts. Thanks to everyone for attending my talk and to the BSides organizers for having me.

One thing I talked about that isn’t in the slides is the need for user security awareness training. The studies mentioned in the slides show that from 2011 to 2016, not much has changed in the public’s awareness of the danger of plugging in unknown USB drives. It has been demonstrated too many times that this is an easy and effective way for attackers to infiltrate a chosen target.

For those of you that are in the Incident Response role, you don’t even have a chance to get involved unless your users realize the threat.

My slides

diff-pnp-devices.ps1 on GitHub
Feel free to reach out with questions.

James Habben
@JamesHabben

Building Ubuntu Packages

Bruce Allen with the Naval Postgraduate School released hashdb 3.0, adding some great improvements for block hashing. My block hunting is mainly done on virtualized Ubuntu, so I decided it was time to build a hashdb package. I figured I would document the steps, as they can be used for the SANS SIFT, REMnux, and many other great Ubuntu-based distributions too.

1) Ubuntu 64-bit Server 16.04.1 hashdb Package Requirements

sudo apt-get install git autoconf build-essential libtool swig devscripts dh-make python-dev zlib1g-dev libssl-dev libewf-dev libbz2-dev libtool-bin

2) Download hashdb from GitHub

git clone https://github.com/NPS-DEEP/hashdb.git

3) Verify hashdb Version

cat hashdb/configure.ac | more

4) Rename hashdb Folder with Version Number

mv hashdb hashdb-3.0.0

5) Enter hashdb Folder

cd hashdb-3.0.0

6) Bootstrap GitHub Download

./bootstrap.sh

7) Configure hashdb Package

./configure

8) Make hashdb Package with a Valid Email Address for the Maintainer

dh_make -s -e email@example.com --packagename hashdb --createorig

9) Build hashdb Package

debuild -us -uc

10) Install hashdb

sudo dpkg -i ../hashdb_3.0.0-1_amd64.deb

John Lukach
@jblukach

Windows Elevated Programs with Mapped Network Drives

This post is about a lesson that just won’t seem to sink into my own head. I run into the issue time and time again, but I can’t seem to cement it in place to prevent it from coming up again. I am hoping that writing this post will help you all too, but mostly this is an attempt to really nail it into my own memory. Thanks for the ride along! It involves the Windows feature called User Account Control (UAC) and mapped network drives.

Microsoft changed the security behavior around programs that require elevated privileges to run properly. Some of you may already be wondering why I haven’t just disabled UAC entirely, and I can understand that thought. I have done this on some of my machines, but I keep others with UAC at the default level for a couple of reasons: 1) it does provide an additional level of security for machines that interface with the internet, and 2) I do development with a number of different scripts and languages, and it is helpful to have a machine with default UAC to run tests against to ensure that my scripts and programs will behave as intended.

One of those programs that I use occasionally is EnCase. You can create a case and then drag-and-drop an evidence file into the window. When you try this from a network share, however, you get an error message stating that the path is not accessible. The cause of this is that Windows holds a different login token for each mode of your user session. When you click that ‘yes’ button to allow a program to run with elevated privileges, you have essentially logged out and logged back in as a completely different user. That part is just automated in the background for user convenience so you don’t have to actually perform the logout.

Microsoft has a solution that involves opening an elevated command prompt to use ‘net use’ to perform a drive mapping under the elevated token, but there is another way to avoid this that makes things a little more usable. It just involves a bit of registry mumbo jumbo to apply the magic.

You can see in the following non-elevated command prompt that I have a mapped drive inside my VM that exposes my shared folders.

Now in this elevated command prompt, you will find the lack of a mapped drive. Again, this is a shared folder through VMware Fusion, but the same applies for any mapped drive you might encounter.

The registry value that unlocks easy mode lives in the following location:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableLinkedConnections

Give that reg value a DWORD value of 0x1 and your mapped network drives will now show up in the elevated programs just the same as the non-elevated programs.

Here is the easy way to make this change. Run the following command at the command prompt:
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v EnableLinkedConnections /t REG_DWORD /d 1

Then you can run the following command to confirm the addition of the reg value:
reg query HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System

Mostly I am hoping that this helps me to remember this without having to spend time consulting Auntie Google, but I also hope it might give you some help as well.

James Habben
@JamesHabben

Know Your Network

Do you know what is on your network?  Do you have a record of truth like DHCP logs for connected devices?  How do you monitor for unauthorized devices?  What happens if none of this information is currently available?

Nathan Crews @crewsnw1 and Tanner Payne @payneman presented on Simplifying Home Security with CHIVE at the Security Onion Conference 2016, which will definitely help those with Security Onion deployed answer these questions. Well worth the watch: https://youtu.be/zBDAjNnRiQI

My objective is to create a Python script that helps identify devices on the network using Nmap with limited configuration. I want to be able to drop a virtual machine or Raspberry Pi onto a network segment and have it perform discovery scans every minute from a cron job, generating output that can be easily consumed by a SIEM for monitoring.
I use the netifaces package to determine the network address that was assigned to the device for the discovery scans.
I use the netaddr package to generate the network CIDR notation that Nmap’s syntax uses for scanning subnet ranges.
The script is executed from cron and runs as the root account, so it is important to provide absolute paths. Nmap also needs elevated permissions to listen for network responses, which running as root provides.
I take the multi-line native Nmap output and consolidate it down to single lines. The derived fields use equals signs (=) to attach labels and pipes (|) to separate the values. I parse out the scan start date, scanner IP address, identified device IP address, identified device MAC address, and the vendor associated with the MAC address.
I ship the export.txt file to Loggly (https://www.loggly.com) for parsing and alerting, as that allows me to focus on the analysis, not the administration.
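To make that concrete, here is a stripped-down sketch of the approach. The interface name, output path, and exact field labels are illustrative assumptions on my part; the full script linked below is the real thing:

#!/usr/bin/env python
# Sketch of the discovery-scan approach described above.
import subprocess
from datetime import datetime

import netifaces
from netaddr import IPNetwork

IFACE = 'eth0'                      # assumption: adjust to the scanning interface
OUTPUT = '/opt/scans/export.txt'    # absolute path, since cron runs with a minimal environment

# Determine the local address and netmask, then derive the CIDR Nmap will scan.
addr = netifaces.ifaddresses(IFACE)[netifaces.AF_INET][0]
cidr = str(IPNetwork('{0}/{1}'.format(addr['addr'], addr['netmask'])).cidr)

# Ping/ARP discovery scan (-sn) of the local subnet; requires root.
nmap = subprocess.check_output(['/usr/bin/nmap', '-sn', cidr]).decode('utf-8', 'replace')

# Collapse the multi-line Nmap output into one line per host:
# label=value pairs separated by pipes, ready for a SIEM to parse.
scanned = datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S')
with open(OUTPUT, 'a') as out:
    host_ip = None
    for line in nmap.splitlines():
        if line.startswith('Nmap scan report for'):
            host_ip = line.split()[-1].strip('()')
        elif line.startswith('MAC Address:') and host_ip:
            parts = line.split(' ', 3)
            mac, vendor = parts[2], parts[3].strip('()')
            out.write('scanned={0}|scanner={1}|ip={2}|mac={3}|vendor={4}\n'.format(
                scanned, addr['addr'], host_ip, mac, vendor))
            host_ip = None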

The full script can be found on GitHub:  https://gist.github.com/jblukach/c67c8695033ad276b4836bea58669958

John Lukach
@jblukach

GUIs are Hard – Python to the Rescue – Part 1

I consider myself an equal opportunity user of tools, but in the same respect I am also an equal opportunity critic of tools. There are both commercial and open source digital forensic and security tools that do a lot of things well, and a lot of things not so well. What makes for a good DFIR examiner is the ability to sort through the marketing fluff to learn what these tools can truly do and also figure out what they can do very well.

One of the things that I find limiting in many of the tools is the Graphical User Interface (GUI). We deal with a huge amount of data, and we sometimes analyze it in ways we couldn’t have predicted ourselves. GUI tools make a lot of tasks easy, but they can also make some of the simplest tasks seem impossible.

My recommendation? Every tool should offer output in a few formats: CSV, JSON, SQLite. Give me the ability to go primal!

Tool of the Day

I have had a number of cases lately that have started as ‘malware’ cases. Evil traffic tripped an alarm, and that means there must be malware on the disk. It shouldn’t be surprising to you, as a DFIR examiner, that this is not always the case. Sometimes there is a bonehead user browsing stupid websites.

Internet Evidence Finder (IEF) is the best tool I have available right now to parse and carve the broad variety of internet activity artifacts from drive images. It does a pretty good job searching over the disk to find browser, web app, and website artifacts (though the documentation doesn’t make clear exactly which artifact versions are supported, but I will digress on that in a different post).

Let me first cover some of IEF’s basic storage structure that I have worked out. The artifacts are stored in a SQLite database named ‘IEFv6.db’ in the folder that you designate. Every artifact type that is found creates at least one table in the DB, and because each artifact has different properties, each of the tables has a different schema. The dev team at Magnet does seem to have settled on some conventions that allow for a bit of consistency: if a column holds URL data, the column name has ‘URL’ in it; similarly for dates, the column name will have ‘date’ in it.

IEF provides a search function that allows you to do some basic string searching, or you can get a bit more advanced by providing a RegEx (GREP) pattern for the search. When you kick off this search, IEF creates a new SQLite db file named ‘Search.db’ and stores it in the same folder. You can only have one search at a time, since kicking off a new search will cause IEF to overwrite any previous db that was created. The search db, from what I can tell anyway, has an identical schema to the main db; it has simply been filtered down to the records that match the keywords or patterns you provided.

There is another feature called filter, and I will admit that I have only recently found it. It allows you to apply various criteria to the dataset, the major one being a date range. There are other things you can filter on, but I haven’t needed to explore those just yet. When you kick off this process, you end up with yet another SQLite database filled with a reduced number of records based on the criteria, and again it seems identical in schema to the main db. This one is named ‘filter.db’, which indicates that the dev team doesn’t have much creativity. 😉

Problem of the Month

The major issue I have with the tool is in the way it presents data to me. The interface has a great way of digging into forensic artifacts as they are categorized and divided by the artifact type. You can dig into the nitty gritty details of each browser artifact. For the cases that I have used IEF for lately, and I suspect many of you in your Incident Response cases as well, I really don’t actually care *which* browser was the source of the traffic. I just need to know if that URL was browsed by bonehead so I can get him fired and move on. Too harsh? 🙂

IEF doesn’t give you the ability to have a view where all of the URLs are consolidated. You have to click, click, click down through the many artifacts and look through tons of duplicate URLs. The problem stems from the design of the artifact storage: multiple tables with different schemas in a relational database. A document-based database, such as MongoDB, would have provided an easier search approach, but there are trade-offs that I don’t need to tangent on here. I will just say that there is no 100% clear winner.

To search across multiple tables in a SQL-based DB, you have to implement it in program code, because a single SQL query covering them all is almost impossible to construct. SQLite makes it even more difficult with its reduced list of native functions and its lack of stored procedures or user-defined functions written in SQL; it just wasn’t meant for that. IEF handles this task for the search and filter process in C# code, and creates those new DB files as a sort of cache mechanism.

Solution of the Year

Alright, I am sensationalizing my own work a bit too much, but it is easy to do when it makes your work so much easier. That is the case with the python script I am showing you here. It was born out of necessity and tweaked to meet my needs for different cases. This one has saved me a lot of time, and I want to share it with you.

This Python script can take as input (-i) any of the 3 IEF database files that I mentioned above, since they share schema structures. The output (-o) is another SQLite database file (I know, like you need yet another one) in the location of your choosing.

The search (-s) parameter allows you to provide a string, and only records with that string present in one of their URL fields are transferred. I added this one because the search function of IEF doesn’t allow me to direct the keyword at a URL field; my keywords were hitting on several other metadata fields that I had no interest in.

The limit (-l) parameter was added because of a bug I found in IEF with some of the artifacts. I think it was mainly in the carved artifacts, so I really can’t fault the tool too much, but it was causing a size and time issue for me. The bug is that the URL field for a number of records was pushing over 3 million characters long. Let me remind you that each ASCII character is a byte, so 3 million of those makes a URL that is 3 megabytes in size. Keep in mind that URLs are allowed to be Unicode, so go ahead and x2 that. I found that most browsers start choking if you give them a URL over 2,000 characters, so I decided to cut off the URL field at 4,000 characters by default to give just a little wiggle room. Magnet is aware of this and will hopefully solve the issue in an upcoming version.

This Python script will open the IEF DB file and work its way through each of the tables looking for any columns that have ‘URL’ in the name. When one is found, it grabs the artifact type and the URL value and creates a new record in the new DB file. Some of the records in the IEF artifacts have multiple URL fields, and this will take each one of them into the new file as a simple URL value. The source column is the name of the table (artifact type) and the name of the column where that value came from.
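To make the approach concrete, here is a condensed sketch of that logic. The single ‘urls’ output table and the argument handling here are simplifications of the real ief-find-url.py, so treat it as an illustration rather than the actual tool:

# Sketch: consolidate every URL column from an IEF SQLite db into one table.
import argparse
import sqlite3

parser = argparse.ArgumentParser(description='Consolidate URL columns from an IEF SQLite db')
parser.add_argument('-i', '--input', required=True, help='IEFv6.db, Search.db, or filter.db')
parser.add_argument('-o', '--output', required=True, help='new SQLite db to create')
parser.add_argument('-s', '--search', default=None, help='only keep URLs containing this string')
parser.add_argument('-l', '--limit', type=int, default=4000, help='truncate URLs to this length')
args = parser.parse_args()

src = sqlite3.connect(args.input)
dst = sqlite3.connect(args.output)
dst.execute('CREATE TABLE IF NOT EXISTS urls (source TEXT, url TEXT)')

# Walk every table, find columns with URL in the name, and copy the values out.
tables = [r[0] for r in src.execute("SELECT name FROM sqlite_master WHERE type='table'")]
for table in tables:
    columns = [r[1] for r in src.execute('PRAGMA table_info("{0}")'.format(table))]
    for column in [c for c in columns if 'url' in c.lower()]:
        for (value,) in src.execute('SELECT "{0}" FROM "{1}"'.format(column, table)):
            if not value:
                continue                                  # skip null fields
            value = str(value)[:args.limit]               # work around the 3M-character URLs
            if args.search and args.search.lower() not in value.lower():
                continue                                  # keyword filter aimed only at URL fields
            dst.execute('INSERT INTO urls VALUES (?, ?)',
                        ('{0}.{1}'.format(table, column), value))

dst.commit()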

This post has gotten rather long, so this will be the end of part 1. In part 2, I will go through the new DB structure to explain the SQL views that are created and then walk through some of the code in Python to see how things are done.

In the meantime, you can download the ief-find-url.py Python script and take a look for yourself. You will have to supply your own IEF.

James Habben
@JamesHabben

Reporting: Benefits of Peer Reviews

Now that you are writing reports to get a personal and professional benefit, let’s look at some other ways that you can get benefits from these time suckers. You need the help of others on this one, since you will be giving your reports to them in seeking a review. You need this help from outside of your little bubble to ensure that you are pushing yourself adequately.

You need a minimum of 2 reviews on the reports you write. The first review is a peer review, and the other is a manager review. You can throw additional reviews on top of these if you have the time and resources available, and that is icing on the cake.

Your employer benefits from reviews for these reasons:

  • Reduced risk and liability
  • Improved quality and accuracy
  • Thorough documentation

There are more personal benefits here too:

  • Being held to a higher standard
  • Gauge on your writing improvement
  • You get noticed

Let me explain more about these benefits in the following sections.

Personal Benefits

Because the main intention of this post is to show the personal benefits and improvements, I will start here.

Higher Standards

The phrase ‘You are your own worst critic’ gets used a lot, and I do agree with it for the most part. For those of us with a desire to learn and improve, we have that internal drive to be perfect. We want to be able to bust out a certain task and nail it 110% all of the time. When we don’t meet our high standards, we get disappointed in ourselves and note the flaws so we can do better next time.

Here is where I disagree with that statement just a bit. We can’t hold ourselves to a standard that we don’t understand or even have knowledge about. If you don’t know proper grammar, it is very difficult for you to expect better. Similarly in DFIR, if you don’t know a technique to find or parse an artifact, you don’t know that you are missing out on it.

Having a peer examiner review your report is a great way of getting a second pair of eyes on the techniques you used and the processes you performed. They can review all of your steps and ask you questions to cover any potential gaps. In doing this, you then learn how the other examiners think and approach these scenarios, and can take pieces of that into your own thinking process.

Gauging Your Improvement

Your first few rounds of peer review will likely be rough with a lot of suggestions from your peers. Don’t get discouraged, even if the peer is not being positive or kind about the improvements. Accept the challenge, and keep copies of these reviews. As time goes on, you should find yourself with fewer corrections and suggestions. You now have a metric to gauge your improvement.

Getting Noticed

This is one of the top benefits, in my opinion. Being on a team with more experienced examiners can be intimidating and frustrating when you are trying to prove your worth. This is especially hard if you are socially awkward or shy since you won’t have the personality to show off your skills.

Getting your reports reviewed by peers gives you the chance to covertly show off your skills. It’s not boasting. It’s not bragging. It’s asking for a check and suggestions on improvements. Your peers will review your cases and they will notice the effort and skill you apply, even if they don’t overtly acknowledge it. This will build the respect between examiners on the team.

Having your boss as a required part of the review process ensures that they see all the work you put in. All those professional benefits I wrote about in my previous post on reporting go to /dev/null if your boss doesn’t see your work output. If your boss doesn’t want to be a part of it, maybe it’s a sign that you should start shopping for a new boss.

Employer Benefits

You are part of a team, even if you are a solo examiner. You should have pride in your work, and pride in the work of your team. Being a part of the team means that you support other examiners in their personal goals, and you support the department and its business goals as well. Here are some reasons why your department will benefit as a whole from having a review process.

Reduced Risk and Liability

I want to hit the biggest one first. Business operations break down to assets and liabilities. Our biggest role in the eyes of our employers is to be an asset to reduce risk and liability. Employees in general introduce a lot of liability to a company and we do a lot to help in that area, but we also introduce some amount of risk ourselves in a different way.

We are trusted to be an unbiased authority when something has gone wrong, be it an internal HR issue or an attack on the infrastructure. Who are we really to be that authority? Have you personally examined every DLL in that Windows OS to know what is normal and what is bad? Not likely! We have tools (assets) that our employers invest in to reduce the risk of us missing that hidden malicious file. Have you browsed every website on the internet to determine which are malicious, inappropriate for work, a waste of time, or valid for business purposes? Again, not a chance. Our employers invest in proxy servers and filters (assets) from companies that specialize in exactly that to reduce the risk of us missing one of those URLs. Why shouldn’t your employer put a small investment in a process (asset) that brings another layer of protection against the risk of us potentially missing something because we haven’t experienced that specific scenario before?

Improved Accuracy and Quality

This is a no-brainer, really. It is embarrassing to show a report that is full of spelling, grammar, or factual errors. Your entire management chain will be judged when people outside of that chain read through your reports. The best conclusions and recommendations in the world can be thrown out like yesterday’s garbage if they are filled with easy-to-find errors. It happens, though, because of the amount of time it takes to write these reports. You can become blind to some of those errors, and a fresh set of eyes can spot things much quicker and easier. Having your report reviewed gives both you and your boss that extra assurance against the risk of sending out errors.

Thorough Documentation

We have another one of those ‘reducing risk’ things on this one. Having your report reviewed doesn’t give you any extra documentation in itself, but it helps to ensure that the documentation given in the report is thorough.

You are typically writing the report for the investigation because you were leading it, or at least involved in some way. Because you were involved, you know the timeline of events and the various twists and turns that you inevitably had to take. It is easy to leave out what seem like pretty minor events in your own mind, because they don’t appear to make much difference in the story. With a report review, you get someone else’s understanding of the timeline. Even better is someone who wasn’t involved in that case at all. They can identify any holes left by omitting those minor events and help you build a more comprehensive story. It can also help to identify unnecessary pieces of the timeline that only add complexity by giving too much detail.

Part of the Process

Report reviews need to be a standard part of your report writing process. They benefit both you and your employer in many ways. The only reason against having your reports reviewed is the extra time required by everyone involved in that process. The time is worth it, I promise you. Everyone will benefit and grow as a team.

If you have any additional thoughts on helping others sell the benefits of report reviews, feel free to leave them in the comments. Good luck!

James Habben
@JamesHabben