PLASO – Google and Timelines

by Destruct_Icon
Categories: Analysis, Host Forensics

PLASO – When Google Met Timelines

Many moons ago (ok, not that many moons ago) log2timeline was the go-to tool for easily building a timeline from a forensic image. Log2timeline is an amazing application that builds out a timeline perspective of an image using any timestamps it can identify. It does this through a combination of reading the registry, the file system, event logs, and even browsing history. PLASO is the back-end engine that powers tools like log2timeline. For the purpose of this post, we are going to show how to use log2timeline and psort to create a CSV from an EnCase image.


  • You will need a copy of PLASO (http://plaso.kiddaland.net/). I’ve had some trouble with the Linux version, but the Windows release hasn’t given me any issues so far.
  • A forensic image, such as an EnCase E01.
    • Alternatively, you may just have a raw image or a VHD. You can just as easily mount the partition using a program such as FTK Imager and run log2timeline against the entire partition.

Starting the Process

Your first step is to build out the dump file from log2timeline. This may take quite a bit of time as it’s pulling all of the data out of the image you have pointed to.


The breakdown of the command is as follows:

  • log2timeline.exe: the location of your executable.
  • -z UTC: sets the timezone to UTC.
  • newfile.dump: the file in which all of your information will be stored.
  • C.E01: the source image you are investigating. If you are using a mounted partition, this is where you would designate its location, such as F:.
  • -o was not used here, but it allows you to point to the offset of the partition within the image. Generally -o 63 is the typical offset for an imaged system.
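Assembled from the breakdown above, the full invocation looks roughly like this; the file names are examples, not the exact ones from my run:

```shell
# Build the dump file from an EnCase image (storage file first, then source).
log2timeline.exe -z UTC newfile.dump C.E01

# Or, against a partition mounted with FTK Imager, point at the drive letter:
log2timeline.exe -z UTC newfile.dump F:
```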

After running log2timeline, we find that it asks which VSS snapshots to process. These are the Volume Shadow Snapshots (Volume Shadow Copies) it may have discovered during initial triage of the image. I added --vss_stores 1,2,3 to my command and began the parsing.
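With the snapshot stores specified, the command becomes something like the following (again, file names are illustrative):

```shell
# Parse the image, including Volume Shadow Copies 1 through 3.
log2timeline.exe -z UTC --vss_stores 1,2,3 newfile.dump C.E01
```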


This can take quite a bit of time (hours), so be patient. The end result should be similar to the file below.


The image I was parsing was roughly 80 gigabytes, and we received a comparatively small dump file of ~400 MB. If you think that file is large, the files originally dumped out of l2t were gigs and gigs worth. So we’ve got our dump file; what’s the next step?


PSORT, which is part of the PLASO suite, is the next program we will interact with. This will allow us to pinpoint what data we may want out of the dump file based on the criteria we provide.


The breakdown of the command is as follows:

  • psort.exe: the location of your executable.
  • -z UTC: sets the timezone to UTC.
  • -o l2tcsv: your output format. You may create an Elasticsearch output or simply create a log2timeline CSV.
  • -w l2timeline.csv: your output file name and location.
  • --slicer: tells psort to apply a filter against the file it will be parsing.
  • newfile.dump: the dump file we created using log2timeline.
  • "date > '2014-08-01' AND date < '2014-10-01'": the filter we have in place, looking for anything between August 1st and October 1st, 2014. I will dive deeper into these filters at a later time, but there is some great information on PLASO’s page regarding how to set up proper filters.
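Putting those pieces together in the order listed, the command looks roughly like this (output file name is an example):

```shell
# Filter the dump file down to a two-month window and write an l2tcsv file.
psort.exe -z UTC -o l2tcsv -w l2timeline.csv --slicer newfile.dump "date > '2014-08-01' AND date < '2014-10-01'"
```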

When processing has finished, you should see an output overview like the one below reflecting what has been dumped into your output file.


Let’s take a quick gander at the file and what’s inside.


We have created a ~200 MB file from only a few months’ worth of data. As this image has been in use for over two years, this should give you some perspective on just how big a CSV could be if, let’s say, you were parsing a 500 GB drive with years’ worth of writes. Learn the psort filters; they will save you a lot of time. Diving a bit deeper into the CSV, we can start to understand the sources of data, the data identified, and the timeframes of events.


One quick thought: a ~200 MB CSV! Yikes! So we have our data, and we have an idea of what we are looking for. Is there any way to make the analysis easier? I’d like to introduce you to the ELK stack, which is comprised of Elasticsearch, Logstash, and Kibana. This is a free log aggregator that will give you an immense amount of power through searching. Please follow this link to download: http://www.elasticsearch.org/overview/elkdownloads/. Installation and configuration is a bit tricky, but every time they update the stack it gets easier and easier to set up. There are both Windows and Linux versions of the software, so by all means pick your poison. I will be making a basic configuration post when Elasticsearch 2 and Kibana 4 are released, as they are just on the horizon.

Assuming you have already installed and configured your ELK stack environment, you can dump your log2timeline CSV file into a directory being monitored by Logstash. It will then be 100% searchable through Elasticsearch using the Kibana interface. Let’s say, out of the 200 MB CSV, I wanted to look for my user, NDMaster:
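A minimal Logstash configuration for this setup might look like the sketch below. The watch path and Elasticsearch host are assumptions you would adjust for your environment, and the column list is the standard field order the l2tcsv output module writes:

```
input {
  file {
    path => "/var/timelines/*.csv"     # directory Logstash watches (example path)
    start_position => "beginning"
  }
}

filter {
  csv {
    separator => ","
    # Standard l2tcsv column order produced by psort.
    columns => ["date","time","timezone","MACB","source","sourcetype","type",
                "user","host","short","desc","version","filename","inode",
                "notes","format","extra"]
  }
}

output {
  elasticsearch { host => "localhost" }  # point at your Elasticsearch node
}
```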


We see the query for NDMaster has identified over 13,000 logs in the database. Kibana has an implied OR, which is a tad different than some other log aggregators, which have implied ANDs. So if I were specifically looking for all MP4 files that were on this system under my user, I would build a dashboard with NDMaster AND mp4 in the query.
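Because of that implied OR, these two query strings behave very differently (the terms are just the examples from above):

```
NDMaster mp4         matches events containing NDMaster OR mp4
NDMaster AND mp4     matches only events containing both terms
```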

I hope this information has given you some ideas on how to work PLASO and the ELK stack into your typical analysis. If you have any questions, please feel free to contact me at destruct_icon@malwerewolf.com or leave a comment. I will be following up with some posts on Logstash filters and ELK stack configuration at a later time.
