Merge pull request #27 from hlgr360/master

Adding Haufe picture, updating markdown processor
This commit is contained in:
Holger Reinhardt 2016-02-03 15:02:24 +01:00
commit 1f7b6a6a6b
13 changed files with 33 additions and 35 deletions


@@ -56,8 +56,8 @@ baseurl: ""
# !! You don't need to change any of the configuration flags below !!
#
-markdown: redcarpet
+markdown: kramdown
-highlighter: pygments
+highlighter: rouge
permalink: /:title/
# default pagination


@@ -23,7 +23,8 @@
<!-- suppress all category and tag meta pages -->
{% if page.url contains '/meta/' %}
{% else %}
-{% if page.url != '/404.html' %}
+{% if page.url contains '404' %}
+{% else %}
<li>
<a href="{{ page.url | prepend: site.baseurl }}">{{ page.title }}</a>
</li>


@@ -14,15 +14,15 @@ It was an impressive conference with a lot of new information and also excellent
In the following I want to focus on my personal highlights.
-##Docker Basis Workshop
+## Docker Basis Workshop
I joined the workshop **Der Docker Basis Workshop** by [Peter Rossbach](http://www.bee42.com/). Until now I had managed to stay away from Docker because other colleagues in our company have more enthusiasm for tools like that. A **Basis Workshop** offered a good way to get familiar with Docker. The workshop itself focused on pure Docker. Peter introduced the intention and basic structure of the Docker environment and the relationship between Docker images, containers, daemon, registry etc. Peter created his slides with markdown and shipped them using containers. This guy really is a Docker evangelist and is convinced of the stuff he presents. For most of the workshop we worked in the terminal on a virtual machine running Docker and learned about the different commands. It wasn't that easy for me because the workshop was clearly designed for people who are familiar with Linux. I struggled, for example, with creating a simple Dockerfile in vi (I don't know how anybody can work with this editor).
One of the reasons I joined the workshop was to watch Peter present Docker and to see whether it would be a good fit for an in-house workshop. I'm sure this would work out great. I'm also sure that it's a good idea to meet with Peter to review our own Docker journey and to get feedback from him.
-##Microservices and DevOps Journey at Wix.com
+## Microservices and DevOps Journey at Wix.com
Aviran Mordo from [Wix.com](http://de.wix.com/) presented how Wix.com split their existing monolithic application into different microservices. This was the session I enjoyed the most. Aviran explained how they broke up the existing application into just two services as a first step. They learned a lot about database separation, independent deployment etc. They also learned that it's not a good idea to do too much at the same time. I loved what he labeled **YAGNI** (You ain't gonna need it). It allowed them to focus on business value, handle the task and get the job done. Wix.com did not implement API versioning, distributed logging and some of the other stuff we are talking about. Aviran emphasized more than once that they strictly focused on tasks that had to be done and cut away the "nice-to-have" things. Nevertheless it took a year to split the monolith into two services! After that they had more experience and they got faster. Later, when the need for distributed logging arose, they took care of it. After three years Wix.com now has 140 microservices. For me it was an eye-opener that it is absolutely OK to start with a small set of requirements in favor of getting the job done and learning. Every journey begins with a single step!
-##Spreadshirts way to continuous delivery
+## Spreadshirts way to continuous delivery
Samuel Ferraz-Leite from [Spreadshirt.com](www.spreadshirt.de) presented their way to continuous delivery. They started with a matrix organisation. The teams were separated into
* Frontend DEV
@@ -38,5 +38,5 @@ I was really amazed by the power of organization restructuring. Of course I know
* What about ourselves organized as one architect team? Isn't that an antipattern?
* What about SAP, SSMP? CorpDev and SBC? BTS and H2/H3?
-##Conclusion
+## Conclusion
It was a good conference and I especially appreciated learning from other companies' experiences, where they failed and where they succeeded. I hope that in three years we can look back and share our own success story with others.


@@ -8,6 +8,7 @@ author: ThomasSc
author_email: thomas.schuering@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
Once upon a time in a ~~galaxy~~ container far, far away ... We, a bunch of ~~rebels~~ Haufe employees, entered the halls of container wisdom: DockerCon EU 2015 in Barcelona, Spain. Hailing from different departments and locations (Freiburg AND Timisoara, CTOs, ICT, DevOps, ...), our common goal was to learn about the current state of Docker, the technology behind it and its evolving eco-system.
The unexpectedly high-level catering (at least on the first day, plus the DockerCon party in the Maritime Museum) called for more activity than moving from one session to the next, but we had to bear that burden (poor us!).
@@ -48,10 +49,8 @@ Here's what I found most important (no specific order and might be intermixed wi
In the meantime, have a look at the [great overview](https://github.com/docker-saigon/dockercon-eu-2015) of what happened during both days, with links to all/most of the presentations, slides and videos.
-# Things "inbetween"
+## Things "inbetween"
... were quite interesting, too. We met with some guys from Zalando (yes, the guys who're screaming a lot in their adverts), who explained how they use a home-brewed (available on git) facade (or tool, if you like) to ease the pain of running a custom PaaS on AWS. The project [STUPS](https://stups.io/) uses a plethora of Docker containers and can be found on its own web page and on [github](https://github.com/zalando-stups).
-(To be extended :-))
+(To be continued :-))
-#
-There's more to come (from other colleagues, too :-)


@@ -15,19 +15,19 @@ First of all, some background, although [Docker](https://www.docker.com/what-doc
So I was excited to visit DockerCon and see how Docker continued to evolve as a platform into a very flexible, lightweight virtualization platform. The Docker universe indeed made big steps under the hood, with the tooling around it, and also with a growing number of third-party adopters improving many aspects of what Docker is and wants to be. Docker may and will revolutionize the way we build and deploy software in the future. And the future starts now, in the projects we bring ahead.
-##Virtualization and Docker##
+### Virtualization and Docker
The past waves of virtualization are now commodity; virtualization has reached IT and is no longer the domain of development, as it was years ago when we started with VMware for development and testing. It is the basis for today's deployments. Virtualization has many aspects and flavours, but one thing in common: building up a virtualization platform is rather heavyweight, and using it causes some performance reduction compared to deploying software artifacts directly to physical machines - which was still done for exactly this reason, to have maximum throughput and optimal performance for the business. But with virtualization we gain flexibility, being able to move a virtualized computing unit onto the hardware below, especially from an older system to a newer one, without having to rebuild, repackage or deploy anything. And there is already a big, well-known industry behind virtualization infrastructure and technology.
So what is new with Docker? First of all, Docker is *very lightweight*. It fits well into modern Unix environments as it builds upon kernel features like CGroups, LXC and more to separate the runtime environment of the application components from the base OS, drivers and hardware below. But Docker is not Linux-only; there is movement in the non-Linux part of our world implementing Docker and Docker-related services as well. Important is: Docker is not about VMs, it is about containers. Docker as technology and platform promises to become a radical shift in view. But as I am no authority in this domain, I just refer to a recent article on why Docker is [the biggest disruption in Linux virtualization](http://www.nextplatform.com/2015/11/06/linux-containers-will-disrupt-virtualization-incumbents/).
-##Docker fundamentals##
+### Docker fundamentals
There was one session that made a deep impression on me. It was the session titled ["Cgroups, namespaces and beyond: what are containers made from"](http://de.slideshare.net/Docker/cgroups-namespaces-and-beyond-what-are-containers-made-from) by Jerome Petazzoni. Jerome showed how Docker builds on and evolves from Linux features like cgroups, namespaces and LXC (Linux containers). Whereas early Docker releases were based on LXC, it now uses its own abstraction of the underlying OS and Linux kernel called libcontainer. In an impressive demo he showed how containers can be built from out-of-the-box Linux features. The main message I took from this presentation: Docker introduces no overhead compared to direct deployment on a Linux system, as the mechanisms used by Docker are inherent to the system - they are in place and in effect even when one uses Linux without Docker. Docker is lightweight, really, and has nearly no runtime overhead, so it should be the natural way to deploy and use non-OS software components on Linux.
-##Docker and Security##
+### Docker and Security
When I started in 2014, one message from IT was: Docker is insecure and not ready for production use, we cannot support it. Indeed there are a couple of security issues related to Docker, especially if the application to deploy depends on NFS, needed to share configuration data and provide a central pool of storage to be accessed by a multi-node system (as HRS is, for reasons of scaling and load balancing). In a Docker container you are root, and this also implies root access to underlying services, such as NFS-mounted volumes. This is, unfortunately, still true today. You will find the discussions in various discussion groups on the internet, for example in ["NFS Security in Docker"](https://groups.google.com/forum/#!topic/docker-user/baFYhFZp0Uw) and many more.
But there are big advances with Docker that may slip into the next planned versions. One of them, which I yearn to have, is called user namespace mapping. It was announced at DockerCon in more than one presentation, but I remember it from "Understanding Docker Security", presented by two members of the Docker team, Nathan McCauley and Diogo Monica. The reason why it is not yet final is that it requires further improvements and testing, so it is currently only available in the experimental branch of Docker.
The announcement can be read here: ["User namespaces have arrived in Docker"](http://integratedcode.us/2015/10/13/user-namespaces-have-arrived-in-docker). The concept of user namespaces in Linux itself is described in the [Linux manpages](http://man7.org/linux/man-pages/man7/user_namespaces.7.html) and may be supported by a few up-to-date Linux kernels. So it is something for the hopefully near future. See also the known restrictions section in the [github project 'Experimental: User namespace support'](https://github.com/docker/docker/blob/master/experimental/userns.md).
@@ -36,20 +36,20 @@ An other progress with container security is the project [Notary and docker cont
More technical information on it can be read in the blog article [Docker Content Trust](https://blog.docker.com/2015/08/content-trust-docker-1-8/).
See also the [InfoQ article](http://www.infoq.com/news/2015/11/docker-security-containers) and the [presentation](https://blog.docker.com/2015/05/understanding-docker-security-and-best-practices/) from May 2015.
-##Stateless vs Persistency##
+### Stateless vs Persistency
One thing that struck me last year, when I worked on my Docker prototype implementation, was that Docker is perfect for stateless services. But trouble is ahead, as in real-world projects many services tend to be stateful, with more or less heavy dependencies on configuration and data. This has to be handled with care when constructing Docker containers - and I indeed ran into problems with that in my experiments.
I hoped to hear more on this topic, as I guess I am probably not the only one who has run into issues while constructing Docker containers.
Advances in Docker volumes were indeed mentioned. Here I'd mention the session "Persistent, stateful services with docker clusters, namespaces and docker volume magic" by Michael Neale.
-##Usecase and Messages##
+### Usecase and Messages
A contrast to the large number of rather technology-focused sessions was the one held by Ian Miell - author of 'Docker in Practice' - on ["Cultural Revolution - How to Manage the Change Docker brings"](http://de.slideshare.net/Docker/cultural-revolution-how-to-mange-the-change-docker-brings).
A use-case presentation was "Continuous Integration with Jenkins, Docker and Compose", held by Sandro Cirulli, Platform Tech Lead of Oxford University Press (OUP). He presented the DevOps workflow used at OUP for building and deploying two websites providing resources for digitally under-represented languages. The infrastructure runs on Docker containers, with Jenkins used to rebuild the Docker images for the API based on a Python Flask application, and Docker Compose to orchestrate the containers. The CI workflow and a demo of how continuous integration was achieved were given in the presentation. It is available on [slideshare](http://de.slideshare.net/Docker/continuous-integration-with-jenkins-docker-and-compose), too.
One big message hovered over the whole conference: Docker is evolving ... as an open source project that is based not only on a core team but also heavily on many contributors making it grow and become a success story. Here I'd mention the presentation ["The Missing Piece: when Docker networking unleashing soft architecture 2.0"](http://de.slideshare.net/Docker/the-missing-piece-when-docker-networking-unleashing-soft-architecture-v15). And "Intro to the Docker Project: Engine, Networking, Swarm, Distribution", which raised some expectations that were unfortunately not met by the speaker.
-##Session Overview##
+### Session Overview
An overview of the sessions held at DockerCon 2015 in Barcelona can be found [here](https://github.com/ngtuna/dockercon-eu-2015/blob/master/README.md), together with many links to the announcements made, presentations for most sessions on slideshare, and links to youtube videos of the general sessions, of which I recommend viewing the one for the [day 2 closing general session](https://www.youtube.com/watch?v=ZBcMy-_xuYk) with a couple of demonstrations of what can be done using Docker. It is entertaining and amazing.


@@ -9,8 +9,6 @@ author_email: doru.mihai@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
-# Approaches to log parsing
When you start to deploy your log shippers to more and more systems, you will encounter the issue of adapting your solution to parse whatever log format and source each system uses. Luckily, fluentd has a lot of plugins, and you can approach the problem of parsing a log file in different ways.
@@ -28,7 +26,7 @@ The simplest approach is to just parse all messages using the common denominator
In the case of a typical log file a configuration can be something like this (but not necessarily):
-~~~xml
+~~~ xml
<source>
type tail
path /var/log/test.log
@@ -40,7 +38,7 @@ In the case of a typical log file a configuration can be something like this (bu
#a timestamp in front of it, the rest is just stored in the field 'message'
format1 /(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{3}) (?<message>(.|\s)*)/
</source>
~~~
You will notice we still do a bit of parsing; the minimal level would be to just have a multiline format to split the log contents into separate messages and then push the contents on.
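For context, the two hunks above only show fragments of that tail source. A minimal, self-contained sketch of such a catch-all source could look roughly like this; the pos_file, tag and firstline pattern are assumptions added for illustration, not part of the diff:

~~~
<source>
  type tail
  path /var/log/test.log
  pos_file /var/log/test.log.pos
  tag test.app
  # every line starting with a timestamp begins a new (possibly multiline) message
  format multiline
  format_firstline /\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{3}/
  # capture only the timestamp; everything else lands in the 'message' field
  format1 /(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{3}) (?<message>(.|\s)*)/
</source>
~~~

Everything that does not match format_firstline is appended to the buffered message, which is exactly the buffering behaviour the warning further down refers to.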
@@ -51,7 +49,7 @@ If more pieces are common to all messages, it can be included in the regex for s
As the name suggests, with this approach you try to create an internal routing that allows you to precisely target log messages based on their content later on downstream.
An example of this is shown in the configuration below:
-~~~ruby
+~~~
#Sample input:
#2015-10-15 08:19:05,190 [testThread] INFO testClass - Queue: update.testEntity; method: updateTestEntity; Object: testEntity; Key: 154696614; MessageID: ID:test1-37782-1444827636952-1:1:2:25:1; CorrelationID: f583ed1c-5352-4916-8252-47298732516e; started processing
#2015-10-15 06:44:01,727 [ ajp-apr-127.0.0.1-8009-exec-2] LogInterceptor INFO user-agent: check_http/v2.1.1 (monitoring-plugins 2.1.1)
@@ -101,7 +99,7 @@ Fluentd will continue to read logfile lines and keep them in a buffer until a li
Looking at the example, all our log messages (single or multiline) will take the form:
-~~~json
+~~~ json
{ "time":"2015-10-15 08:21:04,716", "message":"[ ttt-grp-127.0.0.1-8119-test-11] LogInterceptor INFO HTTP/1.1 200 OK" }
~~~
@@ -114,7 +112,7 @@ You can use *fluent-plugin-multi-format-parser* to try to match each line read f
This approach probably comes with performance drawbacks because fluentd will try to match using each regex pattern sequentially until one matches.
An example of this approach can be seen below:
-~~~ruby
+~~~
<source>
type tail
path /var/log/aka/test.log
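The hunk above cuts the example off after the path line. For orientation, here is a sketch of how *fluent-plugin-multi-format-parser* is typically wired into such a tail source; the concrete pattern list (apache, json, none) and the pos_file/tag values are assumed illustrations, not taken from the diff:

~~~
<source>
  type tail
  path /var/log/aka/test.log
  pos_file /var/log/aka/test.log.pos
  tag aka.test
  # provided by fluent-plugin-multi-format-parser
  format multi_format
  # patterns are tried top to bottom until one matches
  <pattern>
    format apache
  </pattern>
  <pattern>
    format json
  </pattern>
  <pattern>
    format none
  </pattern>
</source>
~~~

The further down the list a message matches, the more regexes have already been tried against it, which is where the performance drawback mentioned above comes from.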
@@ -171,21 +169,19 @@ When choosing this path there are multiple issues you need to be aware of:
The biggest issue with this approach is that it is very, very hard to handle multi-line log messages if there are significantly different log syntaxes in the log.
__Warning:__ Be aware that the multiline parser continues to store log messages in a buffer until it matches another firstline token; when it does, it packages and emits the multiline log it has collected.
This approach is useful when you have good control and know-how about the format of your log source.
## Order & Chaos
Introducing Grok!
Slowly but surely, covering all your different syntaxes, for which you will have to define different regular expressions, will make your config file look very messy, filled with regexes that get longer and longer; just relying on multiple format lines to split them up doesn't bring much readability, nor does it help with maintainability. Reusability is something we cannot even discuss in the case of pure regex formatters.
Grok allows you to define a library of regexes that can be reused and referenced via identifiers. It is structured as a list of key-value pairs and can also contain named capture groups.
An example of such a library can be seen below. (Note this is just a snippet and does not contain all the minor expressions that are referenced from within the ones enumerated below.)
-~~~ruby
+~~~
###
# AKA-I
###
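Since the hunk ends right after the header comment of that pattern file, here is a rough, hypothetical sketch of what such a library entry and its use from a source could look like; the pattern names, the assumed log layout and the plugin wiring (fluent-plugin-grok-parser with grok_pattern and custom_pattern_path) are illustrations only:

~~~
###
# AKA-I (hypothetical entries)
###
TIMESTAMP_AKA %{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND},%{INT}
THREAD_AKA \[\s*(?<thread>[^\]]+)\]
LOGLINE_AKA %{TIMESTAMP_AKA:time} %{THREAD_AKA} %{WORD:class} %{LOGLEVEL:level} %{GREEDYDATA:message}

# referencing the library from a source
<source>
  type tail
  path /var/log/aka/test.log
  format grok
  grok_pattern %{LOGLINE_AKA}
  custom_pattern_path /etc/fluent/patterns/aka-i
  tag aka.test
</source>
~~~

The named identifiers keep the source block short and let several sources share the same pattern definitions, which is the readability and reusability gain the paragraph above is after.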


@@ -12,11 +12,11 @@ The Haufe Group stands for integrated cloud and desktop solutions in the areas o
The main target groups are large and midsize firms, small businesses and freelancers, tax advisers and lawyers, the public sector, real estate agents, and nonprofit organizations. A number of our brands, such as Haufe, Haufe Academy, and Lexware, are market leaders in these target groups.
-####Corporate background
+#### Corporate Background
The Haufe Group emerged from a successful publisher and the provider of Lexware, a leading source of commercial software for small firms; over time, it grew to offer a comprehensive portfolio of digital and web-based solutions. Its wide range of products and services has been merged into integrated workstation solutions that allow customers to work effectively.
-####Current market Position
+#### Current Market Position
The Haufe Group is now considered one of the most innovative media and software vendors in Germany. Its solutions use state-of-the-art technology and are very user-friendly and practice-oriented. More than one million customers, including all DAX 30 firms, generate over 266 million euros in revenue. The Freiburg-based firm currently has some 1,500 employees within Germany and abroad. The Group's international growth strategy is based on its current product portfolio, and it expands on an ongoing basis thanks to the synergies which evolve from the core competencies and strengths of the individual firms and brands within the Group.


@@ -144,8 +144,11 @@ hr.small {
margin-bottom: 50px;
}
.intro-header .site-heading,
-.intro-header .post-heading,
.intro-header .page-heading {
+padding: 100px 0 50px;
+color: black;
+}
+.intro-header .post-heading {
padding: 100px 0 50px;
color: white;
}

Two binary image files changed (contents not shown): one grew from 169 KiB to 181 KiB, the other shrank from 66 KiB to 45 KiB. A new binary file images/bg-post.old.jpg (169 KiB) was added.

@@ -7,7 +7,7 @@ permalink: /impressum/
Holger Reinhardt
(verantwortlich i.S.d. § 55 Abs. 2 RStV)
-###Anschrift:
+### Anschrift:
Haufe-Lexware GmbH & Co. KG
@@ -33,7 +33,7 @@ Steuernummer: 06392/11008
Umsatzsteuer-Identifikationsnummer: DE 812398835
-###Public Relations:
+### Public Relations:
Haufe-Lexware GmbH & Co. KG
@@ -47,7 +47,7 @@ Telefon: 0761 898 3940
Telefax: 0761 898 3900
E-Mail: presse(at)haufe-lexware.com
-###Bitte wenden Sie sich bei Fragen an:
+### Bitte wenden Sie sich bei Fragen an:
Haufe Service Center GmbH


@@ -4,7 +4,6 @@ title: Resources
permalink: /resources/
---
----
### API Style Guide
A list of rules, best practices, resources and our way of creating REST APIs in the Haufe Group. The style guide addresses API designers, mostly developers and architects, who want to design an API.