2015/11/03

Elasticsearch 2.0!

by Destruct_Icon
Categories: Analysis, News

:Elasticsearch 2.0:

Last week brought the release of Elasticsearch 2.0, Logstash 2.0, and Kibana 4.2. Please visit elastic.co to obtain a copy. The three products have seen multiple improvements and changes, which has also required some heavy rework of the ELK shell script I was writing a post about. In its current state, the script is functional; however, some of the methods for mapping data have been deprecated, which is a slight blocker for some of the automation. It’s a busy time, but I am hoping to get the post and installation video up and available in the next two weeks. For the moment, below is some information regarding the Elastic products:

  • Elasticsearch – Data Storage “for search”
    • REST APIs to easily interact with JSON-formatted data (see the curl sketch just after this list).
    • Uses Apache Lucene for text-based searching.
    • Simple scalability. Too much data on one node? Spin up a fresh Elasticsearch instance, point the new node at the cluster, and BAM! One extra line in a yml file is all you need (a sample snippet follows this list).
    • Data replication across current and new nodes is automatic! As new nodes are introduced into the environment, shards are rebalanced without any interaction required.
  • Logstash – Monitoring and Parsing
    • Monitor files or entire directories for changes.
    • As new data is introduced, Logstash workers will use your custom filters to extract fields such as source IP, destination IP, and MAC address (a sample filter follows this list).
    • Use pattern matching to create your custom parsers!
    • 2.0 introduces better out-of-the-box defaults, such as changes to the worker settings, which improve throughput.
  • Kibana – Search and Visualize
    • Your web GUI for all things Elastic. Create dashboards for infrastructure monitoring or security event identification.
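
As a quick illustration of the JSON APIs mentioned above, here is a minimal curl sketch of indexing and then searching a document against a default local instance; the index name, type, and field names are made up for the example:

    # Index a document (index "logs", type "event", and the fields are illustrative)
    curl -XPUT 'http://localhost:9200/logs/event/1' -d '{
      "timestamp": "2015-11-03T12:00:00",
      "src_ip": "10.0.0.5",
      "message": "authentication failure"
    }'

    # Search the index for "failure" in the message field
    curl -XGET 'http://localhost:9200/logs/_search?q=message:failure'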
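
The "one extra line" for scaling is the cluster name in elasticsearch.yml on the new node. Note that 2.0 moved multicast discovery out of core, so in practice you will likely also point the node at an existing member; the cluster name and IP below are illustrative:

    # elasticsearch.yml on the new node
    cluster.name: my-elk-cluster
    discovery.zen.ping.unicast.hosts: ["10.0.0.10"]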
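
To make the filter idea concrete, here is a sketch of a Logstash 2.0 config that tails a file and uses a grok pattern to pull out the fields named above; the log path and pattern are assumptions for the example:

    input {
      file {
        path => "/var/log/firewall.log"   # illustrative source file
      }
    }
    filter {
      grok {
        # extract source IP, destination IP, and MAC address from a made-up log format
        match => { "message" => "%{IP:src_ip} -> %{IP:dest_ip} %{MAC:mac_address}" }
      }
    }
    output {
      elasticsearch {
        hosts => ["localhost:9200"]       # Logstash 2.0 renamed "host" to "hosts"
      }
    }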

In the post, I will be mentioning some of the more important files to modify for a speedy deployment. Even if you don’t want to use the shell script, the goal is for anyone to use the post as a tutorial to spin up their own instances and begin indexing data from whatever log source they choose. We are a more security-focused group, so the post will primarily cover security-related uses; however, we are planning on using the stack in our own environment for netflow and general infrastructure administration. Got any tips for us? Please leave us a comment, as we’d love to hear from you!




