Layers Are Important

We in InfoSec chant it often, and for some of us it might even be a daily mantra: “Use Multi-Factor Authentication!” (MFA). Sometimes called Two-Factor Authentication (2FA), it adds an additional layer of security to your organization, one that almost allows for the use of ‘password’ as a password.

If you keep up with the Verizon Data Breach Investigations Report, you should already know that user credentials are the most sought-after piece of information across all the incidents. With that kind of data supporting a solution, it is still a bit surprising how many organizations out there expose services to the public internet without the extra layer(s) of authentication.

More Layers

As great as MFA/2FA is, it will not eliminate all of your problems. I had a troublesome case recently that involved phishing, exposed web services, Remote Access Tools (RAT), stolen credentials, and more. The part that made it really scary was how the attackers were able to figure out the infrastructure enough to almost get VPN access.

The attackers got access to email. Through email they were able to social engineer their way into quite a few areas. One of those areas was how employees obtain the token software and keys for VPN access. Let me restate that with a little more clarity. The attackers requested and got access to VPN tokens used as a part of the MFA/2FA protection.

The process of getting approved for VPN access was quite a lengthy one; I know because I had to go through it for remote access as part of the incident management. After struggling to get myself access, I was astounded that the attackers were able to get so far. It took me quite a while to work through the protections even with the guys on the phone walking me through it all.

Simple Works

You know what stopped the attackers? A registry key. Nothing functional. Just a simple registry key that the company injects on its assets. The VPN login process performs a full posture check, validating your patches, anti-virus program version, firewall configuration, agent installs, etc., and part of that process includes checking for the existence of a simple registry key.

It might sound silly amidst discussions about all this high tech prevention and machine learning analysis, but sometimes simple works. Don’t overlook the basic protections. They add layers of protection that just might actually be the one piece that saves the day.

James Habben
@JamesHabben

Soft Skills: Be Present

On the heels of an industry conference, there are so many emotions running through me. Excitement – to apply new techniques and tools to my work. Frustration – that I didn’t get over my shyness to engage with others that also looked shy. Happiness – that I got to see friends from around the world that would otherwise be logistically difficult. Pride – that I didn’t screw up too badly while talking in my sessions. Exhaustion – that I didn’t get enough sleep because there are only 24 hours in a day. This time for me, it was Enfuse 2017.

In reflection, there was one trend that I noticed quite a lot during the conference. Many people were not being present in their conversations with others. I saw this in hallways between sessions, during mealtimes, and at the various parties. I wasn’t immune either, as I caught myself a couple times as well. There is always a lot going on at conferences, and that makes it especially hard to stay focused on the current engagement. This is one of the best times to either start building or further reinforce a connection with other like-minded folks in the industry. Some call it networking, although I prefer the word connecting because I feel that ‘networking’ doesn’t convey the right meaning.

Networking is when you go to an evening mixer party with a stack of business cards hoping that the numbers will work for you. The larger the number of people that have your card, the more likely you are to get contacted about something. That something might be a sales lead, a job opportunity, or even a free meal. This is not a bad thing.

Connecting is when you spend time to get to know a person. The key difference is how you engage. You focus on the one or few people in the circle and you pay attention to those people. You listen to the conversation and interact.

Some focus points to be present:

  1. Keep your phone in your pocket, purse or bag
  2. Turn your phone alerts off if you are too easily distracted
  3. Look at the person talking, not behind or beside
  4. Point your feet at the person (or group) to help keep your body engaged

Some points to help others be present:

  1. In a networking/connecting event, don’t latch onto one person and prevent them from being able to make other connections
  2. If you notice another person drifting away from you, politely bring it into conversation to either lock in attention or give the opportunity to disengage
  3. Pay attention to your own behavior to ensure you aren’t causing someone to drift
  4. Respect other people’s conversations – don’t barge in and take over

Any other tips you have to be present?

UPDATE: Reading Material

How To Win Friends & Influence People by Dale Carnegie
Part Two, Section 6 – How to Make People Like You Instantly

Key point: Make the other person feel important – and do it sincerely.

This book was originally written in 1936 and is still considered one of the best on this subject. It is referenced by almost every book that presents thoughts and ideas on the topic. You will serve yourself well by reading this book, and not just once.

This chapter gives many examples of situations on both sides of this recommendation – making yourself the most important and showing others that they are important. It is a great read with a lot of perspective.

There is nothing more frustrating to a person than to feel like the other person doesn’t value the discussion. Although some people do love to talk for hours regardless of anyone actually listening, I will hold off that discussion for another time. If you don’t want to be there, respectfully disengage. If you want to be there, be there.

James Habben
@JamesHabben

Real Self Improvement

This Digital Forensics and Incident Response (DFIR) industry attracts a lot of hard-working individuals. Curiosity is what has stood out to me the most in all the people I have talked to. We have an internal drive to find out how things work, and it is not satisfied until we know every part. This is a big part of what makes us stick to a job that can sometimes seem like a battle that could never be won.

The Ongoing Battle

The battle we face is a constant discovery of new artifacts and techniques. These come from both the offense side and the defense side. We don’t all have time to research these on our own, and the community is fortunately very supportive in that there are blogs detailing these findings. The offense finds a new hole and shares it with their like-minded folks. Then, oftentimes, the defense finds a way to detect or monitor it, and there is more sharing with the like-minded community. You only need to see the list of links for a one-week period on thisweekin4n6.com to understand the volume and the community we have.

Constant Improvement

Because of the community, there are tons of resources to explain all the technical loveliness that we all enjoy. Improving our technical skills is a very achievable task. The reality is that some of the skills I learned to examine Win2k systems are (thankfully) starting to fade. Our tech changes with rapid speed.

What about our non-technical skills? Do you make any effort to improve how you interact with other people? These are often referred to as ‘soft skills’ and you will find them listed, in some form or another, on every job opening.

  • Strong communication skills
  • Ability to convey technical concepts to others
  • Be a team player
  • Comfortable speaking to a crowd

In fact, you might have witnessed a peer getting a promotion instead of you, even though you have proven multiple times that you are far more technically capable than this peer. Your technical skills were likely not even part of the consideration for that promotion, as the soft skills matter much more when moving up.

Steps

The first step is always to realize. I won’t call this a problem because I don’t see it as such. It is a deficiency, and one that can easily be corrected if you will first make that realization.

Next, make a commitment to improve. I mean a real commitment. You won’t make much progress if you don’t take it seriously. Improving soft skills is a whole lot harder than improving your technical skills. You cannot do it alone.

Find someone to help you be accountable. This can be a sibling, friend, classmate, coworker, workout partner, or even someone you just met at a local association meetup. The important thing to find in this person is the willingness to call you on the carpet if you are not following through. You know yourself best and what type of person you would be most receptive to.

Find a mentor (or two). This mentor doesn’t have to be someone in the DFIR industry since soft skills are pretty universal. In fact, you might find some extra insight from someone outside your circles. Don’t be afraid to aim high either. For the most part, I have found that people are very willing to give advice all the way up through the C-suite. If there is someone who you admire for a certain trait, go talk to them and find out about the struggle they had to gain that trait. There is an interesting program called infosecmentors.com that might be a good start.

Lastly, don’t waste time. This is one of the only things in this world that we can’t just make more of. We can make more money. We can learn more things. We can drink more whiskey. We can’t take back the hour that we sat listening to that one guy who just wanted to blabber on and on about the things only he thought were important. Be respectful of your time and of anyone else you ask for time from. These people will want to see improvements made, or they will start to see time spent with you as a waste. Set an expectation of time with a person and don’t waste it.

More to Come

I have seen and heard a lot of discussion about soft skills in more recent times. I initially wanted to put together another ‘must read book list’, but I decided that I would take a little more time and talk about some various soft skills that we can work on improving together. I will be writing about these in future posts and I will provide information about some of the books that I continue to use in my path of improvement. This can be an intimidating set of skills to improve, and I want to help you do it.

James Habben
@JamesHabben

CCM_RecentlyUsedApps Update on Unicode Strings

The research and development that I did previously for the CCM_RecentlyUsedApps record structure and EnScript carving tool was done against case data I was using during investigations. Unfortunately, I had no data available with any of the string data written in Unicode characters. With the thought that Windows has been designed with international languages in mind, I used the UTF8 codepage when reading, to hopefully catch any switch to Unicode-type characters. UTF8 is a very safe alternative to ASCII because it matches plain ASCII in the lower ranges and starts expanding to multiple bytes when values get higher. I have an update, however, because I got a volunteer from Twitter who graciously did some testing. Thanks @MattNels for the help!

The Tests

The first test that he ran was using characters that were not in the standard ASCII range. Characters like ä or ö are Latin-based characters with the umlaut dots above, and they fall within the scope of extended ASCII when you include both the low and the high ranges.

He created a testing directory on his system, which is under the management of his company’s SCCM deployment services. If you recall from my prior posts on this subject, this artifact is triggered simply by being a member. In this directory, he renamed an executable to include the above-mentioned characters from the high ASCII range. The results show that the record stored those high characters exactly the same as the low-range characters. You can see what that looks like in the following image.

The next test he ran was to rename that executable again to something high enough in the Unicode range to get clear of the ASCII characters. He went with “秘密”, which consists of the two glyphs 0x79d8 and 0x5bc6. Keeping in mind our little-endian CPU architecture, we know that those bytes have to be swapped when written to disk as Unicode (UTF-16LE) characters. The text would translate to four bytes on disk: d8 79 c6 5b.

Another option, going with my earlier assumption/guess, is for the string to be written using UTF8. In my experience, the use of UTF8 is pretty common on OS X and less common on Windows. Nevertheless, it is worth being prepared and knowing what the bytes would look like in UTF8. The above glyphs translate into six bytes on disk, three for each character, and we don’t swap the bytes around like we did with Unicode. Confusing, right? Anyway, those bytes would look like this on disk: E7 A7 98 E5 AF 86.
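As a quick sanity check of the byte-level claims above, Python’s built-in codecs can reproduce both encodings. This is purely an illustration; the record itself is produced by SCCM, not Python.

```python
# "Unicode" in these on-disk records means UTF-16 little-endian, which is why
# each glyph's bytes appear swapped relative to its code point.
text = "秘密"  # glyphs U+79D8 and U+5BC6

utf16le = text.encode("utf-16-le")  # the byte order seen in the record
utf8 = text.encode("utf-8")         # the alternative guess

print(utf16le.hex(" "))  # d8 79 c6 5b
print(utf8.hex(" "))     # e7 a7 98 e5 af 86
```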

Drumroll please…

The result was evidence of a switch to Unicode. You can immediately recognize it as Unicode because of the 0x00 bytes between the characters of the “.exe” extension of that file. If you use a hex-to-ASCII converter on the Unicode bytes from above (d8 79 c6 5b), you get back “ØyÆ[”, which lines up nicely with the following image.

Now you ask: How do we programmatically determine if the string was written using Unicode or ASCII? Excellent question, and I am glad that you are tracking with me!

Let’s expand the view of this record a bit and recall the structure of the format from the last post. Strings in Windows are typically followed by a 0x00 (null) byte to indicate where the string data stops. These are referred to as C-style strings because this is how the C programming language stores strings in memory. In this record, however, the strings were separated by two 0x00 bytes. Take a close look at the following image of the expanded record with the Unicode string.

Did you spot the indicator? Look again at the byte immediately preceding the highlighted string data, and you will see that it is a 0x01 value. This byte had been a 0x00 value in all of my testing because I didn’t have any strings with Unicode text in them, or at least not to my knowledge. Since executables need to have these Latin-based extensions, the property will actually appear to end with three 0x00 bytes. The first of those is actually part of the preceding ‘e’. Since this string has been written entirely in Unicode, the null terminating character mentioned just above gets expanded as well. The next byte is then either a 0x00 or 0x01, indicating the codepage for the next string property.
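To make that logic concrete, here is a minimal Python sketch of how a parser might honor the indicator byte. The layout is simplified for illustration (a one-byte codepage flag followed by the string and its null terminator in the matching width); the real record has additional fields around each string, and `read_string_property` is a hypothetical helper, not part of the EnScript.

```python
def read_string_property(buf: bytes, offset: int):
    """Return (decoded_string, offset_past_terminator).

    Assumes the byte at offset is the codepage flag:
    0x00 = ASCII string, 0x01 = UTF-16LE ("Unicode") string.
    """
    is_unicode = buf[offset] == 0x01
    offset += 1
    if is_unicode:
        end = offset
        while buf[end:end + 2] != b"\x00\x00":  # two-byte null terminator
            end += 2
        return buf[offset:end].decode("utf-16-le"), end + 2
    end = buf.index(0x00, offset)               # single-byte null terminator
    return buf[offset:end].decode("ascii"), end + 1

# Hypothetical fragment: an ASCII string followed by a Unicode string.
blob = b"\x00test.exe\x00" + b"\x01" + "秘密.exe".encode("utf-16-le") + b"\x00\x00"
name1, pos = read_string_property(blob, 0)
name2, _ = read_string_property(blob, pos)
print(name1, name2)  # test.exe 秘密.exe
```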

An interesting side note on a situation that Matt ran into: using the path “c:\test2\秘密\秘密.exe” for the executable resulted in no records indicating execution. He ran a number of tests surrounding that scenario, and there is something about that path that prevents the recording.

He continued with changing the path to “c:\秘密\秘密.exe”, and the artifact was back. We wanted to get confirmation of that 0x01 indicator byte using another string value. Sure enough, we got it in the following image.

Tool Update

The EnScript that I wrote to carve and parse these records has been updated to properly look for the 0x00 and 0x01 bytes indicating ASCII or Unicode usage. Please reach out to me if you find any problems or have any questions.

Additionally, Matt is adding this artifact to his irFARTpull PowerShell collection script. These artifacts can be collected by having PowerShell perform a WMI query against the namespace and class where these records are stored. It should look something like this:
Get-WmiObject -namespace root\ccm\SoftwareMeteringAgent -class CCM_RecentlyUsedApps

Lessons Learned

This is a perfect example of being aware of what your tools are doing behind the scenes and always validating and testing them. Many of the artifacts that we search for and use to show patterns of behavior are detailed through reverse engineering. This process can be helpful, but it can also be a bit blind in not being able to analyze what we don’t have available.

If you aren’t a programmer, you can still contribute with testing, or even just thoughts on possible scenarios of failure. Hopefully the authors of the tools out there will be accepting of the feedback, as it will only provide more benefit for the community.

James Habben
@JamesHabben

Windows Prefetch: Tech Details of New Research in Section A & B

I wrote previously with an overview about the research into Windows prefetch I have been working on for years. This post will be getting more into the technical details of what I know to help others take the baton and get us all a better understanding of these files and the windows prefetch system.

I will be using my fork of the Windows-Prefetch-Parser to display the outputs in parsing this data. Some of the trace files I use below are public, but I didn’t have certain characteristics in my generated sample files to show all the scenarios.

Section A Records

I will just start off with a table of properties for the section A records, referred to as the file metrics. The records are different sizes depending on the version. I have been working with the newer version (Windows Vista+), and it has just a tad more info than the XP version.

Section A Version 17 format (4-byte fields)

0 trace chain starting index id
4 total count of trace chains in section B
8 offset in section C to filename
12 number of characters in section C string
16 flags

Section A Version 23 format (4-byte fields, except as noted)

0 trace chain starting index id
4 total count of trace chains in section B
8 count of blocks that should be prefetched
12 offset in section C to filename
16 number of characters in section C string
20 flags
24 (6) $MFT record id
30 (2) $MFT record sequence update

As you can see between the tables, the records grew a bit starting with Windows Vista to include more data. The biggest difference is the $MFT record reference. It is very handy to know the record number and the sequence update value to be able to track down previous instances of files in $LogFile or $UsnJrnl records. The other added field is a count of blocks to be prefetched. There is a flag setting in the trace chain records that allows the program to specify if a block (or group) should be pulled fresh every time, somewhat like a web browser.
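For illustration, the version 23 table above could be decoded with a short Python sketch. The field names are my own labels, and the 32-byte layout follows this research rather than any official specification.

```python
import struct

def parse_metrics_v23(rec: bytes):
    """Decode one 32-byte version 23 file metrics record (little-endian)."""
    (chain_start, chain_count, prefetch_blocks,
     name_offset, name_chars, flags) = struct.unpack_from("<6I", rec, 0)
    mft_record = int.from_bytes(rec[24:30], "little")      # 6-byte record id
    mft_sequence = struct.unpack_from("<H", rec, 30)[0]    # 2-byte sequence
    return {
        "chain_start": chain_start,        # trace chain starting index id
        "chain_count": chain_count,        # count of chains in section B
        "prefetch_blocks": prefetch_blocks,
        "name_offset": name_offset,        # offset into section C
        "name_chars": name_chars,
        "flags": flags,
        "mft_record": mft_record,
        "mft_sequence": mft_sequence,
    }
```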

The flag values seem to be consistent between the two versions of files. This is an area that applies a general setting to all of the blocks (section B) loaded from the referenced file, but I have seen times where the blocks in section B were assigned a different flag value. Mostly, they line up. Here are the flag values:

Flag values (integer bytes have been flipped from disk)
0x0200    X    blocks (section B) will be loaded into executable memory sections
0x0002    R    blocks (section B) will be loaded as resources, non-executable
0x0001    D    blocks should not be prefetched

You can see these properties and the associated filenames in the output below. You will notice that the $MFT has been marked as one that shouldn’t be prefetched, which makes a lot of sense to not have stale data there. The other thing is that there are a couple DLL files that are referenced with XR because they are being requested to provide both executable code and non-executable resources.
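A tiny sketch of turning the flags field into the letter codes shown in that output. The bit values come from the table above; combinations such as XR simply OR together.

```python
# Section A flag bits and the letter codes used in this post.
FLAG_LETTERS = [(0x0200, "X"), (0x0002, "R"), (0x0001, "D")]

def decode_flags(flags: int) -> str:
    """Return the letter codes for whichever flag bits are set."""
    return "".join(letter for bit, letter in FLAG_LETTERS if flags & bit)

print(decode_flags(0x0202))  # XR - executable code and resource data
print(decode_flags(0x0001))  # D  - do not prefetch (e.g. $MFT)
```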

Section B Records

This section has records that are much smaller, but there is so much more going on. The most exciting part to me is the bitfields that show a record of usage over the last eight program runs. You have probably seen these bitfields printed next to the file resource list of the python output when running the tool, but that data is not associated with either the filename in section C or the file metrics records in section A. These bitfields are actually tracking each of the block clusters in section B, so the output is actually a calculated value combined from all associated section B records. I will get to that later. Let’s build that property offset table first. These records have stayed the same over all versions of prefetch so far.

Section B record format

0 (4) next trace record number (-1 if last block in chain)
4 (4) memory block offset
8 (1) Flags1
9 (1) Flags2
10 (1) usage bitfield
11 (1) prefetched bitfield

The records in this section typically point to clusters of eight 512-byte blocks that are loaded from the file on disk. Most of the time, you will find the block offset property walking up in values of 8. It isn’t a requirement, though, so you will find smaller intervals as well.
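Using the offsets tabled above, a single 12-byte section B record could be unpacked like this. This is a sketch based on this research; the field names are mine.

```python
import struct

def parse_trace_chain(rec: bytes):
    """Decode one 12-byte section B trace chain record (little-endian)."""
    next_idx, block_offset = struct.unpack_from("<iI", rec, 0)
    flags1, flags2, usage_bits, prefetched_bits = rec[8:12]
    return {
        "next": next_idx,              # -1 marks the last record in a chain
        "block_offset": block_offset,  # usually steps up in multiples of 8
        "flags1": flags1,
        "flags2": flags2,              # observed to always be 1
        "usage": f"{usage_bits:08b}",          # last 8 runs, newest on right
        "prefetched": f"{prefetched_bits:08b}",
    }
```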

Here is an example of these records walking by 8.

Here is an example of one record jumping in after 2.

Here is an example of a couple sequential records, jumping only by 1.

I broke the two flag fields up early on just to be able to determine what was going on with each of them. What I found is that Flags2 always holds a value of 1; I have never seen it change. Without a change, it is very difficult to determine the meaning of this field. I have kept it separate still because of that lack of change.

The Flags1 field is similar to the Flags field that is found in the section A records. It holds values for the same purposes (XRD), though the number values representing those properties aren’t necessarily the same. It also has a property that forces a block cluster to be prefetched as long as it has been used at least once in the last eight runs. I will get into more later about the patterns of prefetching that I have observed, but for now let’s build the table for the properties and their values.

0x02    X    blocks are loaded as executable
0x04    R    blocks are loaded as resources
0x08    F    blocks are forced to be prefetched
0x01    D    blocks will not be prefetched

Now I get to show my favorite part: the bitfields for usage and prefetch. They are each single-byte values that hold eight slots in the form of bits. Every time the parent program executes, the bits are all shifted to the left. If this block cluster is used or fetched, the rightmost bit gets a 1; otherwise it remains 0. When a block cluster usage bitfield ends up with all 0s, that block record is removed and the chain is resettled without it.

Imagine yourself sitting in front of a scrabble tile holder. It has the capacity to hold only eight tiles, and it is currently filled with all 0 tiles. Each time the program runs and that block cluster is used, you put a 1 tile on from the right side. If the program runs and the block cluster is not used, then you place a 0 tile. Either way, you are going to push a tile off the left side because there isn’t enough room to hold that ninth tile. That tile is now gone and forgotten.
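The tile-holder behavior can be expressed in a couple of lines. This is an illustration of the mechanism as described, not code from the prefetcher itself.

```python
def update_bitfield(bitfield: int, used_this_run: bool) -> int:
    """Shift the 8-bit field left, dropping the oldest bit, and record
    this run's usage in the low (rightmost) bit."""
    return ((bitfield << 1) & 0xFF) | int(used_this_run)

usage = 0b00000000
for used in [True, False, True, True]:   # four runs of the parent program
    usage = update_bitfield(usage, used)
print(f"{usage:08b}")  # 00001011 - most recent run is the rightmost bit
```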

Prefetch Patterns

The patterns listed below occur in section B since this is where the two bitfields are housed. Remember that these are for block clusters and not for entire files. Here are some various scenarios around the patterns that I have seen. The assumption is that neither the D nor the F property is assigned unless specified. Also, none of these are guaranteed; I have simply observed them and noted the pattern at some point.

Block with the F (force prefetch) property assigned, after 1 use on 8th run:
10000000    11111111

Block with the D (don’t prefetch) property assigned, after a few uses:
01001011    00000000

Block that is generally used, but missed on one:
11011111    11111111

Block on first use:
00000001    00000000

Block on second run, single use:
00000010    00000001

Block on third run, single use:
00000100    00000011

Block on fourth run, single use:
00001000    00000110

Block used every other run:
01010101    00111111

Block used multiple times, then not:
01110000    00111111

Block used multiple times, but only one use showing:
10000000    11100000

More Work

I am excited to see what else can be learned about these files. My hope is that some of you take this data to test it and break it. You don’t have to be the best DFIR person out there to do that. All you need is that drive to learn.

James Habben
@JamesHabben

Windows Prefetch: Overview of New Research in Sections A & B


The data stored in Prefetch trace files (those with a .pf extension) is a topic discussed quite a bit in digital forensics and incident response, and for good reason. It provides a great record of the executables that have been used, and Windows is configured to store them by default for workstation systems. In this article, I am going to add just a little bit more to the type of information that we can glean from one of these trace files.

File Format Review

The file format of prefetch trace files has changed a bit over the years, and those changes have generally included more information for us to take advantage of in our analysis. In Windows 10, for example, we were thrown a curve ball in that the prefetch trace files are now, for the most part, stored compressed.

The image below shows just the top portion of the trace files. The header and file information sections have been the recipient of the most version changes over the years. The sections following are labeled with letters as well as names according to Joachim’s document on the prefetch trace file format. The document does state that the name of section B is only based on what is known to this point, so it might change in the future. I hope that image isn’t too offensive. Drawing graphics is not a specialty of mine.

New Information, More Work

The information that I am writing about here is the result of many drawn-out years of noncontiguous research. I have spent way too much time in IDA trying to analyze kernel-level code (I probably should just bite the bullet and learn WinDbg) and even more time watching patterns emerge as I stared deeply into the trace file contents. It is not fully baked, so I am hoping that what I explain here can lead others, smarter than me, to run with this even further. I think there are more exciting things still to be discovered. I have added code to my fork of the windows-prefetch-parser python module, which I forked a while back to add SQLite output, and I will get a pull request into the main project in short time. This code adds just a bit of extra information to the standard display output, but there is also a -v option to get a full dump of the record parsing. (Warning: lots of data.)

File Usage – When

The first and major thing that I have determined is that we can get additional information about the files used (section C) in that we can specify which of the last 8 program executions took advantage of each file. We have to combine data from all three sections (A, B, and C) in order to get this more complete picture, something that the windows prefetcher refers to as a scenario. This can also help to explain why files can show up in trace files and randomly disappear some time later. Take a look at this image for a second.

This trace file is for Programmer’s Notepad (pn.exe) and was executed on a Windows 8 virtual machine. I created several small, unique text files to have distinct records for each program execution. I used the command line to execute pn.exe while passing it the name of each of those text files. I piped the output into grep to minimize the display data for easier understanding here.

There are two groups of 8 digits, and these are bitfields. The left group represents the program triggering a page fault (soft or hard) to request data from the file. The right group represents the prefetcher doing a proactive grab of the data from that file, as this is the whole point: to have data ready for the soft fault and to prevent the much more costly hard fault. In typical binary representation, a zero is false and a one is true. Each time the program is executed, these fields are bitshifted to the left. This makes the right side the most recent execution, with each column working left the scenario prior, going up to eight total.

If you focus on an imaginary single file being used by an imaginary program, the bitfield would look like this over eight runs.
00000001
00000010
00000100
00001000
00010000
00100000
01000000
10000000

What happens after eight runs? I am glad you asked. If the value of this bitfield ends up being all zeros, the file is removed from section C, and all associated records are removed from sections A and B. Interestingly, the file is not removed from the layout.ini file that sits beside all these trace files; not immediately, at least, from what I have been able to determine.

If the file gets used again before that 1 gets pushed out, then the sections referencing that file will remain in the trace file.
00000001
00000010
00000100
00001000
00010001
00100010
01000100
10001000
00010000
00100001
01000010
10000100
00001000
etc.
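The retention rule described above can be sketched as a small simulation. This is an illustration of the observed behavior, not the prefetcher’s actual code.

```python
def next_run(bitfield: int, used: bool) -> int:
    """One program execution: shift the usage history left and record
    whether this file was used on the newest run."""
    return ((bitfield << 1) & 0xFF) | int(used)

bf = 0b00000001              # file used once, on the most recent run
for _ in range(7):           # seven more runs without touching the file
    bf = next_run(bf, False)
print(f"{bf:08b}")           # 10000000 - one more unused run and it drops

bf = next_run(bf, False)     # eighth unused run pushes the 1 out
print("removed" if bf == 0 else f"{bf:08b}")  # removed
```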

File Usage – How

The second part, and the one that needs more research, is how each file was used by the executing program. There are some flag fields in both sections A and B that hold a few values that have stuck out to me. There are other values that I have observed in these flag fields as well, but I have not been able to make a full determination about their designation yet.

The flag field that I have focused on is housed in section A. The three values that I have found purpose behind seem to represent 1) if a file was used to import executable code, 2) if the file was used just to reference some data, perhaps strings or constants, and 3) if the file was requested to not be prefetched. You will mostly see DLL files with the executable flag, although there are some that are referenced as a resource. You will find most of the other files being used as a resource.

In the output of windowsprefetch, I have indicated these properties as follows:
X    Executable code
R    Resource data
D    Don’t Prefetch

See some examples of these properties in the output below from pn.exe.

More Tech to Follow

I am going to stop this post here because I wanted this to be more of a higher level overview about the ways we can use these properties. I will be writing another blog post that gets into a little more gory detail of the records for those that might be interested.

Please help the community in this by testing the tool and the data that I am presenting here. Samples are in the GitHub repo. This has all been my own research, and we need to validate my findings or correct my mistakes. Take a few minutes to explore some of your system’s prefetch files.

You can comment below, DM me on twitter, or email me first@last.net if you have feedback. Thanks for reading!

James Habben
@JamesHabben

BsidesSLC Experience and Offer to Help

I was given the privilege of speaking at the BsidesSLC conference this month, and it was a very enjoyable conference for me. The people in the SLC area are very welcoming and the crew that puts the conference on did an amazing job. The name of the conference is changing for next year, but the format is staying pretty much the same. If you have the ability to attend next year, I would highly encourage you to do so.

Here are some points that I picked up during my attendance:

Bryce talked about a well-known issue of developers posting secrets to code repositories such as GitHub or BitBucket. The funniest part is that these developers realize their mistake and commit a revision to remove the secret. What happens to the previous commit? Exactly! The same mistake is made by even more developers when you include other cloud technologies like S3 storage. That WordPress vulnerability that allows file injection can lead to a complete meltdown when the attacker accesses all of your data stored inside S3 or other systems. Keep your secrets secret.

Bri explained the challenges in compromising Industrial Control System (ICS) devices. Getting the highest level of privilege on a system doesn’t automatically mean the compromise of the connected devices. There is a secondary payload required to further infiltrate and that secondary payload requires expert knowledge of the ICS being targeted. We aren’t yet at the point of having commoditized malware for ICS.

JC walked us through how he operates tabletop exercises for his clients. There wasn’t anything new for me in this one, but it was a great reassurance that I have been facilitating quality exercises for all of my clients. I think the attendees should take away that there really needs to be an externally hired facilitator to run some of their exercises, to work around internal politics or bias. Mr. ‘Junior Infosec’ may not feel comfortable calling out the CEO for a wrong answer, but I am happy to do it.

Chad gave us an earful of all the various ways that Windows credentials can be picked and harvested by attackers, both on the wire and on the disk. He even provided a handout with all the additional notes he talked about. This is a very important topic to be aware of because the DBIR has consistently shown that credentials are the most targeted in incidents and breaches. Defenders need to be aware of every possibility of credential compromise in order to put safeguards in place.

Lastly, Lesley gave an inspiring talk about how we as an industry have the collective skill to land a plane while not being professional pilots (at least most of us). She went through a great demonstration showing how every person (not an exaggeration) can contribute in some way to improving the security field. We just have to look at ourselves, identify the skills we have, and offer help to others that are trying to learn. No one in this field is an expert at everything, even though that’s hard to believe given the reputations that follow many people. We all have skills, and we all have something we want to learn.

My Offer to Help

I consistently see advice given to new folks in the field, or those trying to get into the field, that blogging is one of the best ways to get started. It allows you to demonstrate the skills you have and gives you a reference for your resume. You don’t have to post about the latest research on the newest malware. Focus on the skills you have that you can share with others, or document your journey of learning a new skill. Communication is a critical skill in this industry, and I challenge you to find a job listing that doesn’t ask for someone with ‘good communication skills’ or the ‘ability to explain technical concepts’. Blogging is pure demonstration of that ability.

I want to put the offer out there to anyone who wants to get into blogging but is too shy to get it rolling. If you enjoy my style and reading my posts, then reach out to me so that I can help you. I can help you to organize your thoughts into a post that flows. I can help you come up with topics. I can help you improve on your writing skills. I am even happy to have you post on this blog.

My DMs are open on twitter, and my email is first@last.net. Your move.

James Habben
@JamesHabben