2015/12/01

Beholder

Destruct_Icon

:The Beholder Script:

This is Destruct_Icon from MalWerewolf, and I would like to introduce you to the Beholder script. This script allows you to take advantage of free software that may help you identify malware on your network. Beholder spawned from the needs of administrators who did not have the resources to learn how these programs work, but who had the looming requirement of putting packet capture and detection systems in place to pinpoint malicious behavior.

Let’s face it, if you are an admin of one working for a small to medium sized business, it gets more and more difficult to find time for research and development on your security stack. So what does this eventually lead to? Lots of $$$. The need to burn cash on hardware/software then turns into trying to justify that cost to upper management or the owners. You are then required to keep up support contracts just to get help when appliances don’t quite work as intended. This tends to be less of an issue at a larger company, but it gets frustrating when you are trying to justify keeping a piece of security equipment, or renewing support for proprietary software, in a small business. We are hoping Beholder can alleviate some of these issues while giving you a foundation to build on for the future of your security environment.

Download Location:

https://github.com/malwerewolf/beholder

Let’s talk about what’s in the box. Don’t worry, there’s no hole in this one.

Bro (https://www.bro.org/)

Bro will be your network analyzer. We have had a lot of success at identifying malicious behavior on a network simply by using log sources from Bro.

ELK (https://www.elastic.co/)

ELK is Elasticsearch, Logstash and Kibana. Logstash will be what parses your data, preps it for Elasticsearch consumption and hands it over to Elasticsearch. Elasticsearch will store your data and give you the ability to analyze it. Kibana is your eyes into the Elasticsearch indexes.

Libtrace (http://research.wand.net.nz/software/libtrace.php)

Libtrace includes a set of tools for taking packet captures and works well in small and large environments alike.

Minimum Requirements:

  • Ubuntu 14 / 15
  • 3GB RAM
  • 64 Bit Processor
  • 40 GB Free Space

The purpose of this post is to give quick highlights of what the script is doing as it gets installed. We went with a “bootstrap” format where our script is strictly a bash shell script that downloads all the necessary dependencies while also configuring each of the applications to run from the get-go. There are only a few user input prompts throughout the script, and they are at the beginning and end. Before we start, be sure to `git clone https://github.com/MalWerewolf/beholder.git` to pick up the latest version of the Beholder script.

Step 1: Run the Script

`sudo sh beholder.sh`


  • Did you forget to SUDO? If so, no problem! The first action of the script is to make sure it is running with root privileges. If sudo was not used, you will be prompted for your password.
  • After the sudo check, the Ubuntu version check kicks in. We have tested the Beholder script on Ubuntu 14.04 and 15.10, so anything else will report back as a failure and exit the script. Ubuntu 14 and 15 have a few slightly different dependencies, so you may see a difference in how long the installation takes to complete.
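The two opening gates above can be sketched roughly like this. This is a simplified illustration, not the actual beholder.sh code; the supported version list comes straight from the post.

```shell
# Simplified sketch of the script's two opening checks (illustrative only).

# 1) Root check: re-launch ourselves under sudo if needed.
require_root() {
  if [ "$(id -u)" -ne 0 ]; then
    echo "Not root; re-launching with sudo..."
    exec sudo sh "$0" "$@"
  fi
}

# 2) Version gate: only Ubuntu 14.04 and 15.10 are tested/supported.
supported_version() {
  case "$1" in
    14.04|15.10) return 0 ;;
    *)           return 1 ;;
  esac
}

supported_version "14.04" && echo "supported"
supported_version "16.04" || echo "unsupported; exiting"
```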

Step 2: Enter a Password

`Enter new UNIX password: pancakes`


  • You will be asked to create a password for the user “beholder”. This will be the user that the ELK stack runs under. After entering the password, the magic will begin.
  • It’s time to set up the file system. `/logs/` gets prepared and will be your central point for the Elasticsearch indexes as well as the Bro logs. `/pcaps/` is where tracesplit can drop captures of your network data.
  • Next is the installation of updates and dependencies. We are forcing the installations so no action is required by the user.
  • The majority of your time with the Beholder script will be in this next sequence of events, the installation of ELK/Bro and Libtrace. Most of this is hands off and these applications are installed in `/opt/`.
    • Elasticsearch: Used for search! This houses your data and can be queried via API over port 9200. If this is your first time interacting with Elasticsearch, you may not be working directly with ES itself, as Kibana functions as the easy-to-use window into your data. Some notable commands for your Elasticsearch instance are as follows.
      • `curl -XDELETE 'localhost:9200/bro*'`
        • This will remove all of the logs in a bro index of Elasticsearch. Be careful when using this.
      • `curl -XGET 'localhost:9200/_template/*'`
        • Returns all of the mappings which Elasticsearch uses to organize the data that is sent to it.
      • The default configuration of Beholder adds the following information into the Elasticsearch yml file:
        • cluster.name: beholder
        • node.name: beholder (you may change the node name; if you expand your ELK instance to multiple hosts, they will begin to replicate data between each other)
        • path.data: /logs/index (stores all of your indices)
        • path.logs: /logs/elasticsearch (stores any logs from Elasticsearch itself)
    • Logstash: Parses and transports your data. Logstash will mostly be a behind-the-scenes effort and is configured by the script to monitor files in the /logs/ directory. As data is dumped from Bro, Logstash reviews the log, builds fields around that data and sends it off into its new home in Elasticsearch.
      • The bro Logstash configuration file is located in /opt/logstash/config/bro.conf. This file does two things:
        • filter{} uses grok for field extraction.
        • output{} tells Logstash where to send the data; in this case it sends it back into localhost:9200, which is the Elasticsearch service. You will also notice that it separates the index by date and uses an Elasticsearch mapping template JSON file. The template provides a couple of required functions.
          • First, it creates a “raw” form of each identified field. By default, Elasticsearch will “analyze” strings as they are fed into it. This creates complications if you are trying to use a URL. For example, if you have “cows-and-pants” in the URI, Elasticsearch will break the URI field apart into the individual terms “cows”, “and” and “pants”. To remediate this, we created the “raw” (not analyzed) duplicate of each field. Now when we run queries against uri.raw, we can properly search for the full string “cows-and-pants”.
          • Secondly, the template tells Elasticsearch which fields need to be integer based. Without this, you would be unable to create dashboards that rely on any kind of math such as averages or sums.
    • Kibana: View your data! This is what you may interact with the most after a full installation. Use Kibana to search through your Elasticsearch instance and find the data you are looking for. Build visualizations such as the most or least used user agent strings in your environment. Get a total of bytes in/out based on IP. When you are satisfied with your visualizations, build them all into one sleek dashboard to impress your execs!
      • The script installs Kibana with all defaults. The one thing to keep in mind is that when you first browse to Kibana (localhost:5601), you will be required to specify an index. In the textbox, add “bro*” and set your timestamp field to @timestamp.
      • NOTE: If you notice that “bro*” doesn’t identify as an index, it’s possible that Bro hasn’t booted up yet. Do a `sudo ps waux | grep bro` and you should see about three or four Bro services running. If not, wait five minutes; a cronjob set up by the script forces it to run.
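The “raw” duplicate fields and integer mappings described for the Logstash template above can be sketched as a mapping template. This is a hedged illustration using the Elasticsearch 1.x-era syntax and example field names (`uri`, `orig_bytes`); the real template ships with the Beholder repo.

```json
{
  "template": "bro*",
  "mappings": {
    "_default_": {
      "properties": {
        "uri": {
          "type": "string",
          "fields": {
            "raw": { "type": "string", "index": "not_analyzed" }
          }
        },
        "orig_bytes": { "type": "long" }
      }
    }
  }
}
```

The `raw` sub-field is stored verbatim (not analyzed), which is what makes queries against uri.raw match the full “cows-and-pants” string, while the `long` type is what lets dashboards do sums and averages on byte counts.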

Step 3: Watch an Interface


  • Bro: This will be your network analyzer. You don’t directly interact with Bro but there will be a few configurations you should be aware of.
    • You will need to specify an interface that you want Bro to be monitoring.
      • Your interface list will be provided; choose the one you want to monitor. Keep the interfaces in mind if you are tapping or mirroring your traffic and have multiple controllers.
    • The script will set your log directory to /logs/bro. Here you can browse through and see the data that is being extracted. The files are separated based on specific functions such as HTTP, SMTP, DNS, Files and DHCP.
  • Libtrace: Great for packet captures! In a future release, we will be adding some functionality to make the packet capture side of things a little more user friendly. As it stands, you can use the example command below as a reference.
    • `sudo /opt/libtrace/tools/tracesplit/tracesplit -z 6 -Z gzip int:eth0 erf:/pcaps/capture.gz`
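Until the friendlier tooling lands, the reference command above can be wrapped in a tiny helper (a sketch; the tracesplit path is wherever your install put it, /opt/libtrace here per the script):

```shell
# Sketch: build the tracesplit capture command for a given interface and output.
# -z 6 sets the compression level, -Z gzip the compression method;
# int: is a live-interface URI, erf: a compressed on-disk trace format.
TRACESPLIT=/opt/libtrace/tools/tracesplit/tracesplit

capture_cmd() {
  iface=$1; out=$2
  echo "sudo $TRACESPLIT -z 6 -Z gzip int:$iface erf:$out"
}

capture_cmd eth0 /pcaps/capture.gz
```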

Step 4: Profit


That’s all folks! You should have a brand spanking new setup of the ELK stack alongside Bro. All the configurations have been provided and should get you up and running quickly.
A few things to reiterate:

  • Please bounce (reboot) the host.
  • Please wait about 5 minutes after the box has been rebooted for Bro to kick off.
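If you want to script the “is Bro up yet?” check, here is a minimal sketch. The three-or-four process count comes from the note earlier in this post; in practice you would feed it a live count such as `pgrep -cf bro`.

```shell
# Sketch: interpret a Bro process count (e.g. from: pgrep -cf bro).
# The installer's cron job should bring Bro up within ~5 minutes of boot.
bro_status() {
  if [ "$1" -ge 3 ]; then
    echo "running"
  else
    echo "waiting (give the cron job up to 5 minutes)"
  fi
}

bro_status 4   # a healthy install shows three or four Bro processes
bro_status 0
```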

After you have waited a few moments, you can jump into Kibana located at localhost:5601 and set your “bro*” index. This will allow you to start reviewing all of the data that Bro has been identifying. Thank you for taking the time to read through this wall of text and I hope the Beholder script is useful in your environment. If you have any questions, please don’t hesitate to ask me by e-mail at destruct_icon@malwerewolf.com or leave a comment. I will be updating this script with new features as I can get around to them as well as consistently updating the tools when necessary. I will be providing more information about the tools in some upcoming posts so watch out for it! For now, Soundwave, Jam this transmission!




