Fixed code snippets and added blog+github links.

Doru Mihai 2016-01-13 16:18:47 +02:00
parent 4e36712e07
commit b533c98e96
1 changed file with 11 additions and 3 deletions

@@ -1,6 +1,6 @@
---
layout: post
-title: Log Aggregation with Fluentd, Elasticsearch und Kibana
+title: Log Aggregation with Fluentd, Elasticsearch and Kibana
subtitle: A short introduction
description: Introduction to log aggregation using Fluentd, Elasticsearch and Kibana
category: howto
@@ -81,6 +81,8 @@ Let's take a look at what fluentd sends to Elasticsearch. Here is a sample log f
A message sent to Elasticsearch from fluentd would contain these values:
*-this isn't the exact message; it is what the stdout output plugin prints-*
~~~ruby
2015-11-12 06:34:01 -0800 tag.common: {"message":"[ ajp-apr-127.0.0.1-8009-exec-3] LogInterceptor INFO ==== Request ===","time_as_string":"2015-11-12 06:34:01 -0800"}
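To make the sample reproducible, here is a minimal fluentd `match` sketch that prints events to stdout while still shipping them to Elasticsearch. This is an illustration, not the post's actual configuration; the tag is taken from the sample, and everything else (the `copy` output, host and port) is assumed:
~~~
# Hypothetical tap: print each event to stdout and still ship it to
# Elasticsearch (assumes the fluent-plugin-elasticsearch gem is installed).
<match tag.common>
  type copy
  <store>
    type stdout
  </store>
  <store>
    type elasticsearch
    host localhost
    port 9200
    logstash_format true
  </store>
</match>
~~~
Routing through `copy` is a common way to keep a debugging tap open without interrupting the main pipeline.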
@@ -108,6 +110,8 @@ Next you need to parse the timestamp of your logs into separate date, time and m
~~~
The result is that the above sample will come out like this:
~~~ruby
2015-12-12 05:26:15 -0800 akai.common: {"date_string":"2015-11-12","time_string":"06:34:01","msec":"471","message":"[ ajp-apr-127.0.0.1-8009-exec-3] LogInterceptor INFO ==== Request ===","@timestamp":"2015-11-12T06:34:01.471Z"}
2015-12-12 05:26:15 -0800 akai.common: {"date_string":"2015-11-12","time_string":"06:34:01","msec":"473","message":"[ ajp-apr-127.0.0.1-8009-exec-3] LogInterceptor INFO GET /monitor/broker/ HTTP/1.1\n","@timestamp":"2015-11-12T06:34:01.473Z"}
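The configuration that produces these fields lies outside the lines shown in this hunk. For illustration only, one way to assemble `@timestamp` from the already-split parts is a `record_transformer` filter; the tag and field names mirror the sample, the rest is assumed:
~~~
# Hypothetical filter: build an ISO 8601 @timestamp out of the separate
# date, time and millisecond fields produced by the input parser.
<filter akai.common>
  type record_transformer
  enable_ruby true
  <record>
    @timestamp ${record["date_string"] + "T" + record["time_string"] + "." + record["msec"] + "Z"}
  </record>
</filter>
~~~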
@@ -145,6 +149,8 @@ Using this example configuration I tried to create a pie chart showing the numbe
~~~
Sample output from stdout:
~~~ruby
2015-12-12 06:01:35 -0800 clear: {"date_string":"2015-10-15","time_string":"06:37:32","msec":"415","message":"[amelJettyClient(0xdc64419)-706] jetty:test/test INFO totallyAnonymousContent: http://whyAreYouReadingThis?:)/history/3374425?limit=1","@timestamp":"2015-10-15T06:37:32.415Z","sourceProject":"Test-Analyzed-Field"}
~~~
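The `sourceProject` value itself can be stamped onto events by the pipeline. A hypothetical filter along these lines would do it (tag and value copied from the sample above, the filter choice is an assumption):
~~~
# Hypothetical sketch: stamp every event with the project it came from,
# so Kibana can build a pie chart of requests per sourceProject.
<filter clear>
  type record_transformer
  <record>
    sourceProject Test-Analyzed-Field
  </record>
</filter>
~~~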
@@ -172,6 +178,8 @@ curl -XPUT localhost:9200/_template/template_doru -d '{
~~~
The main thing to note in the whole template is this section:
~~~json
"string_fields" : {
"match" : "*",
@@ -191,6 +199,6 @@ This tells Elasticsearch that for any field of type string that it receives it s
The field with the `not_analyzed` suffix is the one you can safely use in visualizations. Do keep in mind, though, that this creates the scenario mentioned before: storage requirements can inflate by up to 40%, because both the analyzed and the not_analyzed version of each string field are stored.
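The diff cuts the `string_fields` entry short. For reference, the customary Elasticsearch 1.x/2.x multi-field dynamic template that creates such a not_analyzed twin looks roughly like this; treat it as a sketch, not necessarily the post's exact template:
~~~json
"string_fields" : {
  "match" : "*",
  "match_mapping_type" : "string",
  "mapping" : {
    "type" : "string", "index" : "analyzed", "omit_norms" : true,
    "fields" : {
      "raw" : { "type" : "string", "index" : "not_analyzed", "ignore_above" : 256 }
    }
  }
}
~~~
With this mapping every string field is indexed twice: the analyzed version for full-text search and a `raw` twin for exact-match aggregations.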
# Have fun
-So, now you know what we went through here at Haufe and what problems we faced and how we can overcome them.
+So now you know what we went through here at [HaufeDev](http://haufe-lexware.github.io/), the problems we faced, and how we overcame them.
-If you want to give it a try you can take a look at our docker templates on github, there you will find a template for an EFK setup + a shipper that can transfer messages securely to the EFK solution and you can have it up and running in a matter of minutes: https://github.com/Haufe-Lexware/docker-templates/tree/master/logaggregation
+If you want to give it a try, take a look at [our docker templates on github](https://github.com/Haufe-Lexware/docker-templates); there you will find a [logaggregation template](https://github.com/Haufe-Lexware/docker-templates/tree/master/logaggregation) for an EFK setup, plus a shipper that can transfer messages securely to the EFK solution. You can have it up and running in a matter of minutes.
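A quick start could look like the following; this is an assumption, and the exact file layout and commands live in the repository's README:
~~~
# Assumed quick start; verify against the repo's README.
git clone https://github.com/Haufe-Lexware/docker-templates.git
cd docker-templates/logaggregation
docker-compose up -d    # starts Elasticsearch, Fluentd and Kibana
~~~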