Merge remote-tracking branch 'refs/remotes/Haufe-Lexware/master'

Robert Fitch committed on 2016-04-12 10:07:04 +02:00
commit 887bf9e370
47 changed files with 1326 additions and 37 deletions

@@ -6,7 +6,10 @@ The short version of it is to simply clone this repo into (a) a repo of your own
Support for Categories and Tags was inspired by [this blog entry](http://www.minddust.com/post/tags-and-categories-on-github-pages/). A list of the defined categories and tags can be found at `_data/categories.yml` and `_data/tags.yml` respectively. If you want to add new categories or tags, you need to add them to the corresponding `.yml` file and add the matching template into the `meta/category` or `meta/tag` directories. Please do not go overboard with adding new categories and tags but try to stay within the ones we have. On the other hand - if you feel strongly about adding one, feel free to submit a pull request.
Author support was inspired by [this blog entry](https://blog.sorryapp.com/blogging-with-jekyll/2014/02/06/adding-authors-to-your-jekyll-site.html). In order to add information on a new author, edit the `_data/authors.yml` file, then use the new key as the `author` link in the posts. If an author cannot be found in `authors.yml`, the content of the `author` tag will be used verbatim. In that case, no links to any social media (Twitter, GitHub and LinkedIn are currently supported) will be included.
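For illustration, a minimal sketch of what this looks like - the author key and data below are made-up examples; the real entries live in `_data/authors.yml` (added in this very commit):

```yaml
# _data/authors.yml -- hypothetical example entry
jane_doe:
  name: Jane Doe
  email: jane.doe@example.com
  twitter: janedoe
  github: janedoe
  linkedin: janedoe

# front matter of a post referencing that key via the `author` field
---
layout: post
title: An example post
author: jane_doe
---
```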
If you want to find out more about using `github-pages` for blogging or want to improve our blog, the following links might be good starting points:
* [Jekyll documentation, i.e. how to include images](http://jekyllrb.com/docs/posts/)
* [Github pages powered by Jekyll](https://github.com/jekyll/jekyll/wiki/sites)
* Liquid Documentation [here](https://docs.shopify.com/themes/liquid-documentation/basics) and [here](https://github.com/Shopify/liquid/wiki/Liquid-for-Designers)
@@ -18,3 +21,64 @@ Please note to set the proxy if you are working from within the Haufe Intranet
If you find bugs or issues you can [open an issue](https://github.com/Haufe-Lexware/Haufe-Lexware.github.io/issues/new) describing the problem that you're looking to resolve and we'll go from there.
### Setting up jekyll on Mac OS X
If you happen to have a Mac OS X device, it is a lot simpler to test your additions using the `jekyll` command line directly; you don't have to set up github pages, and you can still verify everything is fine.
To install `jekyll`, issue the following command in Terminal (this assumes you have the Mac OS X developer command line tools installed, which include ruby/gem):
```
$ sudo gem install jekyll
```
That will take a while. After that, `cd` into your `Haufe-Lexware.github.io` git clone (on your own fork obviously) and issue a
```
$ jekyll build
```
This will throw a couple of errors due to missing gems; install them one after the other in the order they occur:
```
$ sudo gem install jekyll-paginate
$ ...
```
Eventually (and hopefully) your `jekyll build` will succeed. After the build has succeeded, you can do a `jekyll serve`, and after that, you can browse the site locally on [`http://127.0.0.1:4000`](http://127.0.0.1:4000).
**Note**: The `https_proxy` setting is also needed on Mac OS X if you're inside the Haufe intranet:
```
$ export http_proxy=http://10.12.1.236:8083
$ export https_proxy=https://10.12.1.236:8083
```
### Setting up jekyll on Windows
The short version of this is: It's complicated, and not actually advisable.
The most promising path to doing this is most probably to set up a Linux VM and do it from there; that involves setting up ruby correctly, which may also be challenging, but it's still a lot simpler (and more supported) than directly on Windows.
But you can try this:
### Setting up jekyll using docker
**Note**: This will work both on Windows and Mac OS X, in case you do not want to "pollute" your local machine with ruby packages.
If you have a working `docker` setup on your machine, you can use the prepackaged docker image by the jekyll team to try out the blog generation using that image.
Pull the `jekyll/jekyll:pages` image to get something which behaves almost exactly like (or really close to) the GitHub Pages generation engine:
```sh
$ docker pull jekyll/jekyll:pages
```
Inside the docker Quickstart terminal, `cd` into your `Haufe-Lexware.github.io` fork containing your changes, and then issue the following command:
```sh
$ docker run --rm --label=jekyll --volume=$(pwd):/srv/jekyll \
-it -p $(docker-machine ip `docker-machine active`):4000:4000 \
jekyll/jekyll:pages
```
If everything works out, the jekyll server will serve the blog preview on `http://<ip of your docker machine>:4000`. More information on running jekyll inside docker can be found here: [github.com/jekyll/docker](https://github.com/jekyll/docker).

_data/authors.yml (new file)

@@ -0,0 +1,60 @@
# Author details.
holger_reinhardt:
  name: Holger Reinhardt
  email: holger.reinhardt@haufe-lexware.com
  twitter: hlgr360
  github: hlgr360
  linkedin: hrreinhardt
martin_danielsson:
  name: Martin Danielsson
  email: martin.danielsson@haufe-lexware.com
  twitter: donmartin76
  github: donmartin76
  linkedin: martindanielsson
marco_seifried:
  name: Marco Seifried
  email: marco.seifried@haufe-lexware.com
  twitter: marcoseifried
  github: marc00s
  linkedin: marcoseifried
thomas_schuering:
  name: Thomas Sch&uuml;ring
  email: thomas.schuering@haufe-lexware.com
  github: thomsch98
  linkedin: thomas-schuering-205a8780
rainer_zehnle:
  name: Rainer Zehnle
  email: rainer.zehnle@haufe-lexware.com
  github: Kodrafo
  linkedin: rainer-zehnle-09a537107
  twitter: RainerZehnle
doru_mihai:
  name: Doru Mihai
  email: doru.mihai@haufe-lexware.com
  github: Dutzu
  linkedin: doru-mihai-32090112
  twitter: dcmihai
eike_hirsch:
  name: Eike Hirsch
  email: eike.hirsch@haufe-lexware.com
  twitter: stagzta
axel_schulz:
  name: Axel Schulz
  email: axel.schulz@semigator.de
  github: axelschulz
  linkedin: luckyguy
carol_biro:
  name: Carol Biro
  email: carol.biro@haufe-lexware.com
  github: birocarol
  linkedin: carol-biro-5b0a5342
frederik_michel:
  name: Frederik Michel
  email: frederik.michel@haufe-lexware.com
  github: FrederikMichel
  twitter: frederik_michel
tora_onaca:
  name: Teodora Onaca
  email: teodora.onaca@haufe-lexware.com
  github: toraonaca
  twitter: toraonaca

@@ -35,4 +35,7 @@
  name: Smartsteuer
- slug: logging
  name: Logging
- slug: automation
  name: Automation

@@ -36,6 +36,30 @@ layout: default
{% else %}
{% assign tags_content = '' %}
{% endif %}
<!-- author links -->
{% if page.author %}
{% assign author = site.data.authors[page.author] %}
{% if author %}
<!-- {% capture author_content_temp %}<a href="mailto:{{ author.email }}" target="_blank">{{ author.name }}</a>{% endcapture %} -->
<!-- This would be a great place to insert a link to all posts by an author. If I knew how. -->
{% capture author_content_temp %}{{ author.name }}{% endcapture %}
{% assign author_content = author_content_temp %}
{% if author.twitter %}
{% capture author_twitter %}<a href="https://twitter.com/{{ author.twitter }}" target="_blank"><i class="fa fa-twitter-square">&nbsp;</i></a>{% endcapture %}
{% endif %}
{% if author.linkedin %}
{% capture author_linkedin %}<a href="https://www.linkedin.com/in/{{ author.linkedin }}" target="_blank"><i class="fa fa-linkedin-square"> </i></a>{% endcapture %}
{% endif %}
{% if author.github %}
{% capture author_github %}<a href="https://github.com/{{ author.github }}" target="_blank"><i class="fa fa-github-square">&nbsp;</i></a>{% endcapture %}
{% endif %}
{% else %}
{% assign author_content = page.author %}
{% endif %}
{% else %}
{% assign author_content = page.author %}
{% endif %}
<div class="container"> <div class="container">
<div class="row"> <div class="row">
@ -45,7 +69,7 @@ layout: default
{% if page.subtitle %} {% if page.subtitle %}
<h2 class="subheading">{{ page.subtitle }}</h2> <h2 class="subheading">{{ page.subtitle }}</h2>
{% endif %} {% endif %}
<span class="meta">Posted by {% if page.author %}{{ page.author }}{% else %}{{ site.title }}{% endif %} on {{ page.date | date: "%B %-d, %Y" }} {{ category_content }}{{ tags_content }}</span> <span class="meta">Posted by {{ author_content }} {{ author_twitter }}{{ author_github }}{{ author_linkedin }} on {{ page.date | date: "%B %-d, %Y" }} {{ category_content }}{{ tags_content }}</span>
</div> </div>
</div> </div>
</div> </div>

@@ -39,7 +39,17 @@ layout: default
{% endif %}
{% endif %}
-->
{% if post.author %}
{% assign author = site.data.authors[post.author] %}
{% if author %}
{% assign author_name = author.name %}
{% else %}
{% assign author_name = post.author %}
{% endif %}
{% else %}
{% assign author_name = site.title %}
{% endif %}
<div class="post-preview"> <div class="post-preview">
<a href="{{ post.url | prepend: site.baseurl }}"> <a href="{{ post.url | prepend: site.baseurl }}">
<h2 class="post-title"> {{ post.title }} <h2 class="post-title"> {{ post.title }}
@ -50,7 +60,7 @@ layout: default
</h3> </h3>
{% endif %} {% endif %}
</a> </a>
<p class="post-meta">Posted by {% if post.author %}{{ post.author }}{% else %}{{ site.title }}{% endif %} on {{ post.date | date: "%B %-d, %Y" }}</p> <p class="post-meta">Posted by {{ author_name }} on {{ post.date | date: "%B %-d, %Y" }}</p>
</div> </div>
<hr> <hr>

@@ -39,7 +39,17 @@ layout: default
{% endif %}
{% endif %}
-->
{% if post.author %}
{% assign author = site.data.authors[post.author] %}
{% if author %}
{% assign author_name = author.name %}
{% else %}
{% assign author_name = post.author %}
{% endif %}
{% else %}
{% assign author_name = site.title %}
{% endif %}
<div class="post-preview"> <div class="post-preview">
<a href="{{ post.url | prepend: site.baseurl }}"> <a href="{{ post.url | prepend: site.baseurl }}">
<h2 class="post-title"> {{ post.title }} <h2 class="post-title"> {{ post.title }}
@ -50,7 +60,7 @@ layout: default
</h3> </h3>
{% endif %} {% endif %}
</a> </a>
<p class="post-meta">Posted by {% if post.author %}{{ post.author }}{% else %}{{ site.title }}{% endif %} on {{ post.date | date: "%B %-d, %Y" }}</p> <p class="post-meta">Posted by {{ author_name }} on {{ post.date | date: "%B %-d, %Y" }}</p>
</div> </div>
<hr> <hr>

@@ -4,7 +4,7 @@ title: We are live or How to start a developer blog
subtitle: The 'Hello World' Post
category: general
tags: [cto, culture]
-author: Holger
+author: holger_reinhardt
author_email: holger.reinhardt@haufe-lexware.com
header-img: "images/bg-post.jpg"
---

@@ -4,7 +4,7 @@ title: OSCON Europe 2015
subtitle: Notes from OSCON Europe 2015
category: conference
tags: [open-source]
-author: Marco
+author: marco_seifried
author_email: marco.seifried@haufe-lexware.com
header-img: "images/bg-post.jpg"
---

@@ -4,7 +4,7 @@ title: The beginnings of our API Journey
subtitle: Intro to our API Style Guide
category: api
tags: [api]
-author: Holger
+author: holger_reinhardt
author_email: holger.reinhardt@haufe-lexware.com
header-img: "images/bg-post.jpg"
---

@@ -4,7 +4,7 @@ title: Impressions from DevOpsCon 2015
subtitle: Notes from DevOpsCon 2015
category: conference
tags: [docker, devops]
-author: Rainer
+author: rainer_zehnle
author_email: rainer.zehnle@haufe-lexware.com
header-img: "images/bg-post.jpg"
---

@@ -4,7 +4,7 @@ title: Impressions from DockerCon 2015 - Part 1
subtitle: Insights, Outlooks and Inbetweens
category: conference
tags: [docker, security]
-author: ThomasSc
+author: thomas_schuering
author_email: thomas.schuering@haufe-lexware.com
header-img: "images/bg-post.jpg"
---

@@ -4,7 +4,7 @@ title: APIdays Paris - From Philosophy to Technology and back again
subtitle: A biased report from APIdays Global in Paris
category: conference
tags: [api]
-author: Martin Danielsson
+author: martin_danielsson
author_email: martin.danielsson@haufe-lexware.com
header-img: "images/bg-post.jpg"
---

@@ -4,7 +4,7 @@ title: Using 'Let's Encrypt' Certificates with Azure
subtitle: Create free valid SSL certificates in 20 minutes.
category: howto
tags: [security, cloud]
-author: Martin Danielsson
+author: martin_danielsson
author_email: martin.danielsson@haufe-lexware.com
header-img: "images/bg-post.jpg"
---

@@ -4,7 +4,7 @@ title: Creating the Smartsteuer 'Snap' App
subtitle: A behind the scenes view of the birth of our youngest creation.
category: product
tags: [smartsteuer, mobile, custdev]
-author: Eike Hirsch
+author: eike_hirsch
author_email: eike.hirsch@smartsteuer.de
header-img: "images/bg-post.jpg"
---

@@ -4,7 +4,7 @@ title: Log Aggregation with Fluentd, Elasticsearch and Kibana
subtitle: Introduction to log aggregation using Fluentd, Elasticsearch and Kibana
category: howto
tags: [devops, docker, logging]
-author: Doru Mihai
+author: doru_mihai
author_email: doru.mihai@haufe-lexware.com
header-img: "images/bg-post.jpg"
---

@@ -4,7 +4,7 @@ title: Better Log Parsing with Fluentd
subtitle: Description of a couple of approaches to designing your fluentd configuration.
category: howto
tags: [devops, logging]
-author: Doru Mihai
+author: doru_mihai
author_email: doru.mihai@haufe-lexware.com
header-img: "images/bg-post.jpg"
---

@@ -4,7 +4,7 @@ title: Providing Secure File Storage through Azure API Management
subtitle: Shared Access Signatures with Azure Storage
category: howto
tags: [security, cloud, api]
-author: Martin Danielsson
+author: martin_danielsson
author_email: martin.danielsson@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
@@ -53,9 +53,8 @@ Luckily, Azure already provides a means of anonymous and restricted access to st
We leverage the SAS feature to explicitly grant **write** access to one single blob (file) on the storage for which we define the file name. The access is granted for 60 minutes (one hour), which is enough to transfer large scale files. Our Content API exposes an end point which returns an URL containing the SAS token which can immediately be used to do a `PUT` to the storage.
-<center>
+{:.center}
-![Azure Storage SAS - Diagram]({{ site.url }}/images/azure-storage-sas-1.png)
+![Azure Storage SAS - Diagram]({{ site.url }}/images/azure-storage-sas-1.png){:style="margin:auto"}
-</center>
The upload to the storage can either be done using any http library (using a `PUT`), or using an Azure Storage SDK ([available for multiple languages](https://github.com/Azure?utf8=%E2%9C%93&query=storage), it's on github), which in turn enables features like parallel uploading or block uploading (for more robust uploading).

@@ -1,15 +1,15 @@
---
layout: post
-title: Reisekosten App - Proof of Concept
+title: Extending On-Premise Products With Mobile Apps - Part 1
-subtitle: Read/write Lexware on-premise data from a smartphone
+subtitle: Modernizing on-premise application using Azure Service Bus Relay
category: general
-tags: [mobile, custdev]
+tags: [mobile, cloud]
author: Robert Fitch
author_email: robert.fitch@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
-### What is this about? ###
+### What is this about?
This was a proof-of-concept project to find out what it takes to access on-premise data (in the form of our Lexware pro product line) from an internet client, even though that data resides behind company firewalls, without forcing our customers to open an incoming port to the outside world. This would be a different approach than, say, "Lexware mobile", which synchronizes data into the cloud, from where it is accessed by client devices.
@@ -17,7 +17,7 @@ There was a fair amount of work already done in the past, which made the job eas
These HTTP REST Api's are currently used by "Lexware myCenter".
-### Okay, what is Lexware myCenter? ###
+### Okay, what is Lexware myCenter?
Typically, Lexware pro is installed in the HR department of our customer's company. This means that the other employees of this company have no access to the application. However, there are plenty of use-cases which would make some kind of communication attractive, for example:
@@ -38,7 +38,7 @@ With myCenter, every employee (and her boss) may be given a browser link and can
![myCenter - Apply for Vacation]({{ site.url }}/images/reisekosten-app/mycenter.jpg){:style="margin:auto"}
-### Enter Azure Service Bus Relay ###
+### Enter Azure Service Bus Relay
Azure Service Bus Relay allows an on-premise service to open a WCF interface to servers running in the internet. Anyone who knows the correct url (and passes any security tests you may implement) has a proxy to the interface which can be called directly. Note that this does **not** relay HTTP requests, but uses the WCF protocol via TCP to call the methods directly. This works behind any company firewall. Depending on how restrictive the firewall is configured, the IT department may need to specifically allow outgoing access to the given Azure port.
@@ -53,23 +53,23 @@ For the Reisekosten-App, we decided on the first method. Using a "smart" interne
However, the other method also works well. I have used it during a test to make the complete myCenter web-site available over the internet.
-### Putting it all together ###
+### Putting it all together
With the tools thus available, we started on the proof-of-concept and decided to implement the use-case "Business traveller wants to record her travel receipts". So while underway, she can enter the basic trip data (dates, from/to) and for that trip enter any number of receipts (taxi, hotel, etc.). All of this information should find its way in real-time into the on-premise database where it can be processed by the HR department.
-### Steps along the way ###
+### Steps along the way
-#### The on-premise service must have a unique ID ####
+#### The on-premise service must have a unique ID
This requirement comes from the fact that the on-premise service must open a unique endpoint for the Azure Service Bus Relay. Since every Lexware pro database comes with a unique GUID (and this GUID will move with the system if it gets reinstalled on different hardware), we decided to use this ID as the unique connection ID.
-#### The travelling employee must be a "user" of the Lexware pro application ####
+#### The travelling employee must be a "user" of the Lexware pro application
The Lexware pro application has the concept of users, each of whom has certain rights to use the application. Since the employee will be accessing the database, she must exist as a user in the system. She must have very limited rights, allowing access only to her own person and given the single permission to edit trip data. Because myCenter has similar requirements, the ability for HR to automatically add specific employees as new users, each having only this limited access, was already implemented. So, for example, the employee "Andrea Ackermann" has her own login and password to the system. This, however, is **not** the identity with which she will log in to the App. The App login has its own requirements regarding:
- Global uniqueness of user name
- Strength of password
- The possibility to use, for example, a Facebook identity instead of username/password
-#### The user must do a one-time registration and bind the App identity to the unique on-premise ID and to the Lexware pro user identity ####
+#### The user must do a one-time registration and bind the App identity to the unique on-premise ID and to the Lexware pro user identity
We developed a small web-site for this one-time registration. The App user specifies her own e-mail as user name and can decide on her own password (with password strength regulations enforced). Once registered, she makes the connection to her company's on-premise service:
@@ -103,11 +103,11 @@ And here is a screenshot of one of the views, entering actual receipt data:
{:.center}
![Reisekosten App - Receipt input]({{ site.url }}/images/reisekosten-app/receipt.jpg){:style="margin:auto"}
-### Developing the Front-End ###
+### Developing the Front-End
The front-end development (HTML5, AngularJS, Apache Cordova) was done by our Romanian colleague Carol, who is going to write a follow-up blog about that experience.
-### What about making a Real Product? ###
+### What about making a Real Product?
This proof-of-concept goes a long way towards showing how we can connect to on-premise data, but it is not yet a "real product". Some aspects which need further investigation and which I will be looking into next:

@@ -0,0 +1,115 @@
---
layout: post
title: Securing Backend Services behind Azure API Management
subtitle: Different approaches to securing API implementations
category: howto
tags: [security, cloud, api]
author: martin_danielsson
author_email: martin.danielsson@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
We are currently planning our first round of published APIs, and in the course of this process, we obviously had to ask ourselves how we can secure our backend services which we will surface using [Azure API Management](https://azure.microsoft.com/en-us/services/api-management/). This may sound like a trivial problem, but it turns out it actually isn't. This blog post will show the different options you have (or don't) using Azure API Management as a front end to your APIs.
### The problem
A key property of the Azure API Management solution is that it is not possible to deploy the APIm instance to some sort of pre-defined virtual network. The Azure APIm instance will always reside in its own "cloudapp" kind of virtual machine, and you can only select which region it is to run in (e.g. "North Europe" or "East US").
As an effect, you will always have to talk to your backend services via a public IP address (except in the VPN case, see below). You can't simply deploy APIm and your backend services together within a virtual network and only open up a route over port 443 to your APIm instance. This means it is normally possible to also talk "directly" to your backend service, which is something you do not want. You will always want consumers to go over API Management to be able to use the APIm security/throttling/analytics on the traffic. Thus, we have to look at different approaches how to secure your backend services from direct access.
We will check out the following possibilities:
* Security by obscurity
* Basic Auth
* Mutual SSL
* Virtual Networks and Network Security Groups
* VPNs
What is not part of this blog post is how you can also use OAuth-related techniques to secure backend services. The focus of this article is how to technically secure the backends, not how to do so using means such as OAuth.
### Security by obscurity
For some very non-critical backend services running in the same Azure region (and only in those cases), it may be enough to secure the backend via obscurity; some have suggested that it can be enough to check for the `Ocp-Apim-Subscription-Key` header which will by default be passed on from the client via the API gateway to the backend service (unless you filter it out via some policy).
This is quite obviously not actually secure by any security standard, but it may deter the occasional nosy port scan by returning a 401 or similar.
Other variants of this could be to add a second header to the backend call, using an additional secret key which tells the backend service that it is actually Azure APIm calling the service. The drawbacks of this are quite obvious:
* You have to implement the header check in your backend service
* You have a shared secret between Azure APIm and your backend service (you have coupled them)
* The secret has to be deployed to both Azure APIm and your backend service
* It is only secure if the connection between Azure APIm and the backend service is using https transport (TLS)
### Basic Auth
The second variant of "Security by obscurity" is actually equivalent to using Basic Authentication between Azure APIm and your backend service. Support for Basic Auth, though, is built into Azure APIm directly, so you do not have to create a custom policy which inserts the custom header into the backend communication; Azure APIm can automatically add the `Authorization: Basic ...` header to the backend call.
Once more, the very same drawbacks apply as for the above case:
* You have to implement the Basic Auth in the backend (some backends do have explicit support for this, so it may be easy)
* You have a shared secret between the APIm and the backend
* If you are not using `https` (TLS), this is not by any means actually secure
### Mutual SSL
One step up from Basic Auth and Security by Obscurity is to use Mutual SSL between Azure APIm and the backend. This also is directly supported by Azure APIm, so that you "only" have to upload the client certificate to use for communication with the backend service, and then check the certificate in the backend. In this case, using a self-signed certificate will work. I tested it using [this blog post with nginx](https://pravka.net/nginx-mutual-auth). The only thing that had to be done additionally was to create a PFX client certificate using `openssl`, as Azure APIm will only accept PFX certificates.
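For reference, a hedged sketch of that last step - converting an existing PEM certificate and key into a PFX bundle with `openssl` (file names are placeholders):

```sh
# Combine an existing client certificate and private key (both PEM) into a
# PFX/PKCS#12 bundle that can then be uploaded to Azure APIm.
$ openssl pkcs12 -export -in client.crt -inkey client.key \
    -out client.pfx -name "apim-client"
```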
Checking the certificate in the backend can be simple or challenging, depending on which kind of backend service you are using:
* nginx: See above link to the tutorial on how to verify the client certificate; SSL termination with nginx is probably quite a good idea
* Apache web server also directly supports Client Certificate verification
* Spring Boot: Intended way of securing the service, see e.g. [Spring Boot Security Reference (v4.0.4)](http://docs.spring.io/spring-security/site/docs/4.0.4.CI-SNAPSHOT/reference/htmlsingle/#x509).
* Web API/.NET: Funnily, in the case of .NET applications, verifying a client certificate is quite challenging. There are various tutorials on how to do this, but unfortunately I don't like any of them particularly:
* [Suggestion from 'Designing evolvable Web APIs using ASP.NET'](http://chimera.labs.oreilly.com/books/1234000001708/ch15.html#example_ch15_cert_handler)
* [How to use mutual certificates with Azure API Management](https://azure.microsoft.com/en-us/documentation/articles/api-management-howto-mutual-certificates/)
* [Azure App Services - How to configure TLS Mutual Authentication](https://azure.microsoft.com/en-us/documentation/articles/app-service-web-configure-tls-mutual-auth/)
* For node.js and similar, I would suggest using nginx for SSL termination (as a reverse proxy in front of node)
All in all, using mutual SSL is a valid approach to securing your backend; it offers real security. It will still be possible to flood the network interface with requests (which will be rejected immediately due to the SSL certificate mismatch), so it could and possibly should additionally be combined with the method below.
I am waiting for simpler ways of doing this directly in Azure, but currently you can't decouple it from your API implementation.
### Virtual Networks and Network Security Groups
In case your backend service runs in an Azure VM (deployed using ARM, Azure Resource Manager), you can make use of the built in firewall, the Network Security Groups. As of the Standard Tier (which is the "cheapest" one you are allowed to use in production), your Azure APIm instance will get a static IP; this IP in turn you can use to define a NSG rule to only allow traffic from that specific IP address (the APIm Gateway) to go through the NSG. All other traffic will be silently discarded.
As mentioned above, it's unfortunately not (yet) possible to add an Azure APIm instance to a virtual network (and thus put it inside an ARM NSG), but you can still restrict traffic into the NSG by doing IP address filtering.
The following whitepaper suggests that Azure virtual networks are additionally safeguarded against IP spoofing: [Azure Network Security Whitepaper](http://download.microsoft.com/download/4/3/9/43902ec9-410e-4875-8800-0788be146a3d/windows%20azure%20network%20security%20whitepaper%20-%20final.docx).
This means that if you create an NSG rule which only allows the APIm gateway to enter the virtual network, most attack vectors are already eliminated by the firewall: Azure filters out IP-spoofed packets coming from outside Azure when they enter the Azure network, and additionally packets from inside Azure are inspected to validate that they actually originate from the IP address they claim to come from. Combined with Mutual SSL, this should provide sufficient backend service protection:
* On a security level, making sure only APIm can call the backend service, and
* On a DDoS prevention level, making sure that the backend service cannot be flooded with calls, even if they are immediately rejected
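As an illustration only - shown with today's cross-platform Azure CLI rather than the classic `azure` CLI that was current at the time of writing, so the exact flags may differ - such an inbound rule could look roughly like this (resource names and the IP are placeholders):

```sh
# Allow HTTPS from the APIm gateway's static IP only; other inbound internet
# traffic then falls through to the NSG's default DenyAllInBound rule.
$ az network nsg rule create --resource-group my-rg --nsg-name backend-nsg \
    --name allow-apim-only --priority 100 --direction Inbound --access Allow \
    --protocol Tcp --source-address-prefixes 40.112.0.10 \
    --destination-port-ranges 443
```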
#### Azure Web Apps and Virtual Networks
Using standard Web Apps/API Apps (the PaaS approach in Azure), it is not possible to add those services to a virtual network. This in turn makes the above method of securing the backend services moot. There is a workaround for this which lets you combine the advantages of using Web Apps with the possibility to put the hosting environment of such Web Apps inside a virtual network, and that's called [App Service Environments](https://azure.microsoft.com/en-us/documentation/articles/app-service-app-service-environment-intro). In short, an App Service Environment is a set of dedicated virtual machines deployed into a specific virtual network which is used only by your own organization to deploy Web Apps/API Apps into. You have to deploy at least four virtual machines for use with the App Service Environment (two front ends and two worker machines), and these are the costs that you actually pay. In return, you can deploy into a virtual network, and additionally you can be sure that you get the power you pay for, as nobody else will be using the same machines.
### VPNs
As a last possibility to secure the backend services, it is possible to create a VPN connection from a "classic" virtual network to the APIm instance. By doing so, you can connect the APIm instance directly to a closed subnet/virtual network, just as you would expect it to be possible using Azure Resource Manager virtual networks.
This approach has the following severe limitations which render it difficult to use as the "go to solution" it sounds like it is:
* Connecting VPNs to Azure APIm only works when using the Premium Tier, priced well over 2500€ per month; this is difficult to motivate in many cases, given that producing 5 TB of traffic per month is not something which will happen immediately
* Only Azure Service Manager ("Classic") virtual networks can be used for this, not the more recent Azure Resource Manager virtual networks
* In order to build up a VPN connection, you will need a Gateway virtual appliance inside your virtual network, which also comes at an additional cost (around 70€/month).
* You can't use VPN connections cross-region; if your APIm resides in West Europe, you can only connect to VNs in West Europe.
In theory, it would even be possible to first [bridge an ARM virtual network to a classic virtual network](https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-arm-asm-s2s/), and then in turn use that VN and an additional Gateway appliance to connect to APIm, but this setup gives me bad dreams.
### Conclusion and Recommendation
For critical backend services, use a combination of
* Mutual SSL
* Inbound NSG rules limiting traffic to the Azure APIm IP address
In case you need to use Web Apps/API Apps, consider provisioning an App Service Environment which you can deploy into a virtual network, and then in turn use an NSG (as suggested above).
For less critical backend services (such as read-only APIs), choosing only the NSG rule option may also be a lightweight and easy-to-implement approach. The only prerequisites for this are:
* Your backend service runs inside an Azure Resource Manager Virtual Network
* ... in the same region as your APIm instance
If you have further suggestions and/or corrections, please feel free to comment below.

@@ -0,0 +1,64 @@
---
layout: post
title: Being a Microservice or Cattle over pets
subtitle: A personal recap on QCon 2016
category: conference
tags: [qcon, microservices, devops]
author: axel_schulz
author_email: axel.schulz@semigator.de
header-img: "images/bg-post.jpg"
---
# Being a Microservice or Cattle over pets
First thing I did after receiving the invitation for QCon 2016 was of course to take a look at the schedule.
And to be honest: I was kinda disappointed by the seemingly missing link between all the tracks and sessions. Though it offered you a variety of interesting areas to dive into, I was missing the glue that should keep a conference and its attendees together.
Turned out - the glue was the mysterious microservices, or at least they were supposed to be. I attended seemingly endless talks in which people were almost desperately trying to find some connection between microservices and their actual topic:
* Chaos testing a microservice infrastructure? _Well, to be honest: we don't test microservices, we test instances - but we do have > 700 microservices_
* Test-driven microservices? _Nice topic, but I'd rather speak about how important and awesome microservices can be_
* Modern Agile Development? _Yea, we'll just present you some lean management stuff and btw, we do microservices as well!_
But when there is shadow, there has to be light that casts the shadow and I stumbled upon some talks and lessons that I will really carry with me back to my team.
## "Treat your machines as cattle - not as pets!
At [Semigator](http://www.semigator.de) we're still hosting our production environment in a pretty conservative way. We have a bunch of virtual resources (CPU, RAM, HDD etc.) that we combined into virtual machines, and we take care of everything on these machines - starting from OS updates to fine-tuning the application configuration on every machine. We're really pampering them like pets, because that's how system administration works, right? But why would we want to spend time on things that actually have nothing to do with our business? We are a webshop for further education and our business is to provide our customers with lots of training offers - not to do server management!
Today's technology stacks enable you to ship your application either as an (almost) fully working instance (Axel Fontaine of Boxfuse demonstrated in his talk "Rise of the machine images" how easily an application including a complete OS image can be created with only 15MB and deployed to AWS, including propagating the new IP to the DNS) or at least as a container that bundles all dependencies and leaves it to the host to provide them. So if you need to deploy a new version of your application - or your microservice - you just create a new image, deploy it and delete the old one. No more pampering of Linux or Windows machines! Just deploy what you need and where you need it! Of course this requires some preparation: you'll need to get rid of everything that you don't need on your machines, like:
* **Package Managers** - we're not going to install anything on this instance, so just get rid of it
* **Compilers** - This instance is supposed to run our application and not serve as a developer machine and we don't plan to update it either - so beat it gcc, javac and the rest!
* **Logging / Monitoring** - all logging (system and application side) should be centralized using fluentd, logstash or whatever anyways
* **User Management** - we don't want anybody to work on these machines, why would we need user management?
* **Man pages** - if no one's working on it, no one will have to look things up
* **SSH** - if nobody is to connect to these machines, we don't need SSH
and you could continue this list until it fits your use case, as long as you only take as much with you as you need.
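As a rough sketch of what such trimming could look like for a Debian-based image (the package list is just an example, not a recommendation):

```sh
# Example cleanup for a Debian-based image: drop compilers, man pages and SSH,
# then clear the package caches; adjust the list to whatever your image really needs.
$ apt-get purge -y --auto-remove gcc g++ make openssh-server man-db
$ rm -rf /usr/share/man /usr/share/doc
$ apt-get clean && rm -rf /var/lib/apt/lists/*
```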
Right now, we're wasting lots of time on monitoring available system updates, root logins or passwd changes. Our servers are overloaded with editors, drivers and other things that are absolutely superfluous for their actual job.
So, it's not like we'll be switching to this kind of slimline image deployment by snapping our fingers - I tried it - but it's no rocket science either. We see the obstacles in our way; some are minor - like routing the rest of our logs to our logging instance - and some are bigger, like figuring out how to build our images for an individual fit: what do we need and what's just an impediment for us.
We will not start with automatic deployment on our hypervisor, but we feel that doing this will give us ultimate control of our application and the environment it's running in, and it's a crucial part of our tech strategy @Semigator.
## Talk by Aviran Mordo on his microservices and DevOps Journey
The reason why I liked this talk by Aviran Mordo from WIX.com is simple: he had the answer - it's that simple! He had the answer to my burning question: How...? How the heck do you go from your fat, ugly, scary monolith to microservices? His answer is: be pragmatic - if you split your monolith into two services, 50% of your application will still be available if one of the services dies!
Aviran described how WIX.com started to work on their microservice architecture: by splitting their monolith in two, drawing a firm border between these two and going further down from there, which helped them build up experience steadily along the way. The team drew the cutting line for the services at the data access level - one service focused more on reading data while the other focused on writing data. To get the data from the writing service to the reading service, they just copied it. Well, you might like this particular solution or not (I don't), but the point is: find this one - and only one - border that goes through your system and separates it. The other important part of his talk was the ubiquitous question of which technology to use for orchestrating the microservices, event messaging systems, API versioning and distributed logging - and it's:
**YAGNI, you ain't gonna need it! - Default to the stack you know how to operate!**
So that was like an oasis among the zillions of grains of sand of today's kafka, akka, amqp, fluentd, logstash, graylog, zookeeper, consul etc. What he meant was: if you didn't need it before with 1 monolith - you still won't need it with 2 services. Or with 3 or 4 or 5... Now that they've got up to 200 microservices, they think about adding some of these stacks - but why add further complexity in the beginning when you've got your hands and minds full with other things?
Why would I start thinking about, e.g., how to implement Service Discovery or which API Management System to choose, if I only have 2 services running and I know exactly where they run and how to access them?
When you're splitting your monolith in two, you have other problems to take care of, like how to make sure the two new services still get to communicate with the existing other components. How do they get their data, since they were probably doing cross-domain data access before? Where to deploy the services? Same site as before? That might require re-configuring your web server. So solve these problems first and play around with the rest later.
For WIX.com already this first step of splitting the monolith brought significant benefits:
* separation by product lifecycle brought deployment independence and gave developers the assurance that one change could no longer bring down the whole system (at worst only half of it)
* separation by service level made it possible to scale independently and optimize the data to their respective use cases (read vs write)
What I particularly liked about this talk was that he showed a real, practical methodology that everybody could follow - literally and practically - and that aligns with the hands-on mentality you need to have if you're taking on a problem like this.
This rarely heard pragmatic approach, wrapped up with the ubiquitous remark that each service must be owned by one team and this team has to take on the responsibility for it (_You build it, you run it!_), frankly did not contain any new insights at all, but it served so well as a real-life experience that I would really love to try it out myself - watch out, Semigator-monolith!

@@ -0,0 +1,330 @@
---
layout: post
title: Extending On-Premise Products With Mobile Apps - Part 2
subtitle: Creating a Single Page App using Apache Cordova and AngularJS
category: howto
tags: [mobile, cloud]
author: carol_biro
author_email: carol.biro@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
### What is this about?
This is a blog post about a proof-of-concept project I worked on together with my colleague Robert Fitch to find out what it takes to access on-premise data from an internet client. Robert created the server-side API (you can read about it in [part 1](http://dev.haufe-lexware.com/Reisekosten-App/)) and my role was to create a mobile app which consumes the methods exposed by this API. The technologies used to create the app were HTML5, AngularJS, Bootstrap CSS and Apache Cordova.
### Why Apache Cordova?
Apache Cordova targets multiple platforms with one code base (HTML, CSS and JavaScript) and it is open source. There are many pros and cons to using this technology stack. We needed to reach a wide range of users with as little effort as possible; that is why we didn't use a pure native approach. For the POC app we targeted Android and iOS devices as potential consumers.
### What else do we need beside Apache Cordova?
Apache Cordova offers a good way to organize your project. It takes care of the OS-specific customizations and the OS-specific mobile app builds (the .apk for Android and the .ipa for iOS). Once set up, the Apache Cordova part is modified very little during the lifetime of the project. For the actual development we used AngularJS and Bootstrap.css.
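For readers who don't use Visual Studio, the plain Apache Cordova CLI covers the same ground; a minimal sketch (project name and app id are placeholders):

```sh
# Create a Cordova project, add the target platforms and produce the native builds
$ npm install -g cordova
$ cordova create travel-expenses com.example.travelexpenses TravelExpenses
$ cd travel-expenses
$ cordova platform add android ios
$ cordova build android   # produces the .apk
$ cordova build ios       # produces the iOS build (requires a Mac with Xcode)
```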
It is worth mentioning that when I started to work on this project I had no experience with the above-mentioned technologies: neither Apache Cordova, nor AngularJS or Bootstrap.css. I am a fairly experienced web developer who had worked mainly on jQuery-based projects before. Learning AngularJS, I discovered a new way of thinking about web development, more precisely how to develop a web application without resorting to jQuery. The main idea of AngularJS is to create dynamic views driven by JavaScript controllers. AngularJS lets you extend HTML with your own directives, the result being a very expressive, readable and quick-to-develop environment.
In my day-to-day job I mostly use Microsoft technologies like C#, with Visual Studio as my IDE. That is why Visual Studio Tools for Apache Cordova was a good choice for setting up this project. When I started to work on the project, Update 2 of these tools was available; now, a couple of months later, Update 7 can be downloaded with a lot of improvements.
{:.center}
![Reisekosten App Frontend - Visual Studio Tools for Apache Cordova]( /images/reisekosten-app/visualstudioupdate7.jpg){:style="margin:auto"}
Once this is installed you have what you need to start working. It installs the Android SDK and a lot of other dependencies you might need during development. I will not go into detail here, since this has been covered by others before. If you want to read up on it, there are plenty of resources available, such as:
[http://taco.visualstudio.com/en-us/docs/get-started-first-mobile-app/](http://taco.visualstudio.com/en-us/docs/get-started-first-mobile-app/)
### Front-End requirements
In a few words we needed:
* a login page
* a list of trips
* the possibility to create/edit a trip
* add/edit/delete receipts assigned to a trip
* the receipts form needed visibility rules for some fields (depending on the category)
### The project
Knowing the above, here is a screenshot of how I ended up structuring the project:
{:.center}
![Reisekosten App Frontend - Visual Studio Tools for Apache Cordova]( /images/reisekosten-app/projectstructure.jpg){:style="margin:auto"}
In the upper part of the solution we can see some folders created by the Visual Studio project structure where OS-specific things are kept. For instance, the **merges** folder has android and ios subfolders, each of them containing a css folder in which a file called overrides.css resides. Of course these files have different content depending on the OS. What is cool about this is that at build time Visual Studio places the corresponding override in each OS-specific build (in the .apk or the .ipa in our case).
The **plugins** folder is nice too; this is where the plugins reside which extend the web application with native app capabilities. For instance, we can install a plugin here which is able to access the device camera.
The **res** folder contains OS-specific icons (e.g. rounded icons for iOS and square icons for Android) and splash screens. Finally, in the upper part of the solution there is a **test** folder where the unit and integration tests will reside.
The next folder, **www**, contains the project itself, the common code base. We see here a bunch of files which are nicely and clearly organized - maybe not obviously so for you yet, but hopefully things become clearer with the next code snippet of index.html, which is the core of the SPA and where the whole app runs:
```html
<!doctype html>
<html ng-app="travelExpensesApp" ng-csp>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0, minimum-scale=1.0, maximum-scale=1.0, user-scalable=no">
<meta name="format-detection" content="email=no">
<meta name="format-detection" content="telephone=no">
<link href="css/bootstrap.css" rel="stylesheet">
<link href="css/bootstrap-additions.css" rel="stylesheet" />
<link href="css/index.css" rel="stylesheet"/>
<!-- Cordova reference, this is added to the app when it's built. -->
<link href="css/overrides.css" rel="stylesheet"/>
<!-- Angular JS -->
<script src="scripts/frameworks/angular.js"></script>
<script src="scripts/frameworks/angular-resource.js"></script>
<script src="scripts/frameworks/angular-route.js"></script>
<script src="scripts/frameworks/angular-strap.js"></script>
<script src="scripts/frameworks/angular-strap.tpl.js"></script>
<script src="scripts/frameworks/angular-input-masks-standalone.min.js"></script>
<!-- Cordova reference, this is added to the app when it's built. -->
<script src="cordova.js"></script>
<script src="scripts/platformOverrides.js"></script>
<!-- Initialize all the modules -->
<script src="scripts/index.js"></script>
<!-- Services -->
<script src="scripts/services/cordova.js"></script>
<script src="scripts/services/global.js"></script>
<script src="scripts/services/httpInterceptor.js"></script>
<!-- Controllers -->
<script src="scripts/controllers/loginController.js"></script>
<script src="scripts/controllers/tripsController.js"></script>
<script src="scripts/controllers/tripDetailsController.js"></script>
<script src="scripts/controllers/receiptsController.js"></script>
<script src="scripts/controllers/receiptDetailsController.js"></script>
</head>
<body >
<div ng-view></div>
</body>
</html>
```
Being a SPA, everything runs in one place. All the necessary files are included in the header: first the app-specific CSS files, then the CSS override file, which is replaced with the OS-specific one at build time. Next, AngularJS and the AngularJS-specific libraries are included, followed by the platform-specific JavaScript overrides. Up to this point we have only included libraries and overrides; what comes now is the base of the app, index.js.
A code snippet from here helps to understand the AngularJS application:
```javascript
(function () {
    "use strict";

    var travelExpensesApp = angular.module("travelExpensesApp", ["ngRoute", "mgcrea.ngStrap", "ui.utils.masks", "travelExpensesControllers", "travelExpensesApp.services"]);
    angular.module("travelExpensesControllers", []);
    angular.module("travelExpensesApp.services", ["ngResource"]);

    travelExpensesApp.config([
        "$routeProvider",
        function ($routeProvider) {
            $routeProvider.
                when("/", {
                    templateUrl: "partials/login.html",
                    controller: "LoginControl"
                }).
                when("/companies/:companyId/employees/:employeeId/trips", {
                    templateUrl: "partials/trips.html",
                    controller: "TripsControl"
                }).
                when("/companies/:companyId/employees/:employeeId/trips/:tripId", {
                    templateUrl: "partials/tripDetails.html",
                    controller: "TripDetailsControl"
                }).
                when("/companies/:companyId/employees/:employeeId/trips/:tripId/receipts/:receiptId", {
                    templateUrl: "partials/receiptDetails.html",
                    controller: "ReceiptDetailsControl"
                }).
                otherwise({
                    redirectTo: "/"
                });
        }
    ]);
})();
```
The AngularJS application is organized using modules. We can think of modules as containers for controllers, services, directives, etc.; they are reusable and testable. What we can see above is that I have declared an application-level module (travelExpensesApp) which depends on other modules.
I have created a separate module for controllers (travelExpensesControllers) and one for services (travelExpensesApp.services). I have also used some libraries as modules: ngRoute (from angular-route.js), mgcrea.ngStrap (from angular-strap.js) and ui.utils.masks (from angular-input-masks-standalone.min.js). The module declarations link everything together and create the base of the app.
Besides modules, the above code snippet contains the routing configuration. ngRoute gives us routing, which helps a lot in defining a clean and clear structure for the frontend app. AngularJS routing permits a good separation of concerns: the GUI resides in the HTML templates referenced by the templateUrl properties, and the logic behind it is kept in the controller JavaScript files.
To understand this a bit better, let's take a look at the trips page, which is built from trips.html as the template and TripsControl as the controller:
```html
<div class="navbar navbar-inverse">
<div class="navbar-header pull-left">
<button type="button" class="btn-lx-hamburger navbar-toggle pull-left" data-toggle="collapse" data-target="#myNavbar" bs-aside="aside" data-template-url="partials/aside.html" data-placement="left" data-animation="am-slide-left" data-container="body">
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand padding-rl10">
<span class="glyphicon glyphicon-user"></span> {% raw %}{{employeeData.first_name}} {{employeeData.last_name}}{% endraw %}
</a>
</div>
<div class="navbar-header pull-right">
<a class="navbar-brand" ng-disabled="true" ng-click="newTravel()"><span class="glyphicon glyphicon-plus"></span></a>
</div>
</div>
<div class="container">
<div waithttp ng-show="!waitingHttp">
<div class="table-responsive">
<table class="table table-striped">
<thead>
<tr>
<th>Datum</th>
<th>Von</th>
<th>Nach</th>
</tr>
</thead>
<tbody>
<tr ng-repeat="trip in trips | orderBy : departure_date : reverse" ng-click="editTravel(trip.id)">
<td>{% raw %}
{{trip.departure_date | date : 'dd.MM.yyyy'}}{% endraw %}
</td>
<td>{% raw %}
{{trip.departure}}{% endraw %}
</td>
<td>{% raw %}
{{trip.destination}}{% endraw %}
</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
```
The above HTML contains a navbar and a simple list of trips. What we notice right away are the double curly braces (e.g. '\{\{ trip.departure \}\}') and the directives starting with the "ng" prefix. These are the main ways in which the template interacts with the controller:
```javascript
(function () {
    "use strict";

    angular.module("travelExpensesControllers").controller("TripsControl", ["$scope", "$http", "$routeParams", "$location", "global", TripsControl]);

    function TripsControl($scope, $http, $routeParams, $location, global) {
        $scope.companyId = $routeParams.companyId;
        $scope.employeeId = $routeParams.employeeId;
        $scope.employeeData = global.user.employeeData;
        $scope.aside = {
            "title": global.user.employeeData.first_name + " " + global.user.employeeData.last_name
        };

        var tripsRequest = {
            method: "GET",
            url: global.baseUrl + "companies/" + $scope.companyId + "/employees/" + $scope.employeeId + "/trips",
            headers: {
                "Content-Type": "application/hal+json",
                "Session-Id": global.sessionId
            }
        };

        $http(tripsRequest).then(
            function (data) {
                $scope.trips = data.data.ResourceList;
            },
            function (data) {
                $scope.error = data;
                // session probably bad, go back to login page
                $location.path("/");
            });

        $scope.newTravel = function () {
            delete global.tripDetails;
            var url = "/companies/" + global.user.companyData.id + "/employees/" + global.user.employeeData.id + "/trips/0";
            $location.url(url);
        };

        $scope.editTravel = function (travelId) {
            var url = "/companies/" + global.user.companyData.id + "/employees/" + global.user.employeeData.id + "/trips/" + travelId;
            $location.url(url);
        };

        $scope.doLogout = function () {
            $location.path("/").search({ invalidate: true });
        };
    }
})();
```
Above we can see how the TripsControl is defined. To expose something usable in the template, we just extend the $scope variable with a new property or function. For example, we set the list of trips once the $http request defined by tripsRequest returns a successful response.
In a nutshell, the above should give you an idea of how Angular works and how it is used in this POC.
### What else?
Since I spent most of my time during this POC working with AngularJS, you might wonder whether I had to implement anything else that is worth mentioning.
Yes. For example, implementing hardware back button support for Android devices was pretty interesting and challenging. To achieve it I needed to create an Angular service that uses Cordova capabilities. The result looks like this:
```javascript
(function () {
    "use strict";

    angular.module("travelExpensesApp.services").factory("cordova", ["$q", "$window", "$timeout", cordova]);

    /**
     * Service that allows access to Cordova when it is ready.
     *
     * @param {!angular.Service} $q
     * @param {!angular.Service} $window
     * @param {!angular.Service} $timeout
     */
    function cordova($q, $window, $timeout) {
        var deferred = $q.defer();
        var resolved = false;

        // Listen to the 'deviceready' event to resolve Cordova.
        // This is when Cordova plugins can be used.
        document.addEventListener("deviceready", function () {
            resolved = true;
            deferred.resolve($window.cordova);
            console.log("deviceready fired");
        }, false);

        // If the 'deviceready' event didn't fire after a delay, continue.
        $timeout(function () {
            if (!resolved && $window.cordova) {
                deferred.resolve($window.cordova);
            }
        }, 1000);

        return {
            ready: deferred.promise,
            // the factory function shadows the global cordova object here,
            // so read the platform id from $window instead
            platformId: $window.cordova && $window.cordova.platformId,
            eventHandlers: [],
            backAction: function (callback) {
                if (typeof (callback) === "function") {
                    var callbackAction;
                    var removeCallbackAction = function () {
                        document.removeEventListener("backbutton", callbackAction);
                    };
                    callbackAction = function () {
                        callback();
                        removeCallbackAction();
                    };
                    // remove a previously added event handler if it didn't remove itself already;
                    // this can happen when navigating deeper than one level, for example:
                    // 1. trips -> tripDetails :: back action is added
                    // 2. tripDetails -> receiptDetails :: back action from step 1 is removed, current action is added
                    if (this.eventHandlers.length > 0) {
                        document.removeEventListener("backbutton", this.eventHandlers[0]);
                        this.eventHandlers.splice(0, 1);
                    }
                    document.addEventListener("backbutton", callbackAction, false);
                    this.eventHandlers.push(callbackAction);
                }
            }
        };
    }
})();
```
The above is not the simplest code and it could certainly be improved, but for the POC it did a great job.
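To show how a controller is meant to consume this service, here is a small, hypothetical snippet (the controller name and route are invented; only the `cordova.backAction` call reflects the service above):
```javascript
// Hypothetical consumer of the cordova service: navigate back to the start page
// when the Android back button is pressed.
angular.module("travelExpensesControllers").controller("SomeDetailsControl",
    ["$scope", "$location", "cordova", function ($scope, $location, cordova) {

        cordova.backAction(function () {
            // The Cordova "backbutton" event fires outside of Angular,
            // so wrap the navigation in $apply to trigger a digest cycle.
            $scope.$apply(function () {
                $location.path("/");
            });
        });
    }]);
```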
Another thing worth mentioning is that testing is pretty easy and straightforward with this stack. I will not go into detail here, but it is good to know that within one day I managed to set up the environment and write some unit tests for receiptDetailsController.js using karma.js, and it took one more day to set up the environment and create some end-to-end tests using protractor.js.
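Just to illustrate the idea (this is not the actual test suite of the POC), a Jasmine spec for the TripsControl shown earlier could look roughly like this; it assumes Karma loads angular, angular-mocks and the application scripts, and it stubs the app's `global` service with made-up values:
```javascript
describe("TripsControl", function () {
    // Stub for the app's "global" service so the spec does not depend on its real shape.
    var globalStub = {
        baseUrl: "http://localhost/api/",
        sessionId: "test-session",
        user: {
            employeeData: { first_name: "Jane", last_name: "Doe" },
            companyData: { id: "1" }
        }
    };

    beforeEach(module("travelExpensesControllers"));

    it("puts the trips returned by the backend onto the scope", inject(function ($controller, $rootScope, $httpBackend) {
        $httpBackend.expectGET("http://localhost/api/companies/1/employees/2/trips")
            .respond({ ResourceList: [{ id: "t1", departure: "Freiburg", destination: "Berlin" }] });

        var $scope = $rootScope.$new();
        $controller("TripsControl", {
            $scope: $scope,
            $routeParams: { companyId: "1", employeeId: "2" },
            global: globalStub
        });

        $httpBackend.flush();

        expect($scope.trips.length).toBe(1);
        expect($scope.trips[0].destination).toBe("Berlin");
    }));
});
```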
Overall, this stack of technologies allowed us to lay a healthy and solid base for a project which can grow into a complex mobile app in the future. Development was quick and a nice POC app came out of it. Is this stack a good choice for future mobile apps? At the moment I think it is. Let's see what the future will bring :).

@@ -0,0 +1,160 @@
---
layout: post
title: The Automated Monolith
subtitle: Build, Deploy and Testing using Docker, Docker Compose, Docker Machine, go.cd and Azure
category: howto
tags: [devops]
author: marco_seifried
author_email: marco.seifried@haufe-lexware.com
header-img: "images/bg-post.old.jpg"
---
Let's be honest: systems age, and while we try hard not to accumulate technical debt, sometimes you realize it's time for a bigger change. In this case, we looked at a Haufe-owned platform providing services like user, licence and subscription management for internal and external customers. Written in Java, based on various open source components, somewhat automated, fairly monolithic.
Backed by our technical strategy, we try to follow the microservices approach (a good read is [Sam Newman's book](http://shop.oreilly.com/product/0636920033158.do)). We treat infrastructure as code and automate wherever possible.
So whenever we start from scratch, it's fairly straightforward to apply those principles.
But what if you already have your system and it has grown over the years? How do you start? Keep in mind that we have a business-critical system, a tight budget and a busy team. Now try to convince the business that it's time for a technical face lift...
We decided to look at the current pain points and start with something that shows *immediate business results in a reasonably short timeframe*.
### Rough Idea
The team responsible for this platform has to develop, maintain and run the system. A fair amount of their time went into deploying environments for internal clients and helping them get up and running. This gets even trickier when different clients use an environment for testing simultaneously. Setting up a test environment from scratch - build, deploy, test - takes 5 man days. That's the reality we tried to improve.
We wanted a one-click deployment of our system per internal client directly onto Azure. Everything should be built from scratch, all the time, and we wanted some automated testing in there as well.
To make it more fun, we decided to fix our first go-live date to 8 working weeks later by hosting a public [meetup](http://www.meetup.com/de-DE/Timisoara-Java-User-Group/events/228106103/) in Timisoara and presenting what we did! The pressure (or fun, depending on your viewpoint) was on...
So time was an issue and we wanted something to work with fast. That meant we didn't spend much time evaluating every little component we used, but made sure we were flexible enough to change them easily - evolutionary refinement instead of initial perfection.
### How
Our guiding principles:
* **Infrastructure as code** - EVERYTHING. IN CONFIG FILES. CHECKED IN. No implicit knowledge in people's heads.
* **Immutable Servers** - We build from scratch, the whole lot. ALWAYS. NO UPDATES, HOT FIX, NOTHING.
* **Be independent of underlying infrastructure** - it shouldn't matter where we deploy to. So we picked Azure just for the fun of it.
Main components we used:
* [go.cd](https://www.go.cd/) for continuous delivery
* [Docker](https://www.docker.com/): All our components run within docker containers
* [Bitbucket](https://bitbucket.org/) as repository for config files and scripts
* [Team Foundation Server](https://www.visualstudio.com/en-us/products/tfs-overview-vs.aspx) as code repository
* [Artifactory](https://www.jfrog.com/open-source/#os-arti) as internal docker hub
* [ELK stack](https://www.elastic.co/webinars/introduction-elk-stack) for logging
* [Grafana](http://grafana.org/) with [InfluxDB](http://grafana.org/features/#influxdb) for basic monitoring
The flow:
{:.center}
![go.cd Flow]( /images/automated-monolith/automated_monolith_flow.jpg){:style="margin:auto"}
Let's first have a quick look at how go.cd works:
Within go.cd you model your workflows using pipelines. Those pipelines contain stages, which you use to run jobs, which themselves contain tasks. Stages run in order, and if one fails, the pipeline stops. Jobs within a stage run in parallel; go.cd takes care of that.
The trigger for a pipeline run is called a material - this can be a git repository, where a commit starts the pipeline, but also a timer which starts the pipeline regularly.
You can also define variables on multiple levels - we have used them on the pipeline level - where you can store things like host names and the like. There is also an option to store secure variables.
In our current setup we use three pipelines: the first one creates a docker image for every component in our infrastructure - database, message queue, application server. It also builds images for the logging part - Elasticsearch, Kibana and Fluentd - as well as for monitoring and testing.
We also pull an EAR file out of our Team Foundation Server and deploy it onto the application server.
Haufe has written and open sourced a [plugin](https://github.com/Haufe-Lexware/gocd-plugins/wiki/Docker-pipeline-plugin) to help ease the task of creating docker images.
Here is how to use it:
Put in an image name and point to the dockerfile:
![go.cd Flow]( /images/automated-monolith/docker_plugin_1.jpg){:style="margin:auto"}
You can also tag your image:
![go.cd Flow]( /images/automated-monolith/docker_plugin_2.jpg){:style="margin:auto"}
Our docker images get stored in our internal Artifactory which we use as a docker hub. You can add your repository and the credentials for that as well:
![go.cd Flow]( /images/automated-monolith/docker_plugin_3.jpg){:style="margin:auto"}
Those images are based on our [docker guidelines](https://github.com/Haufe-Lexware/docker-style-guide).
The next step is to deploy our environment onto Azure. For that purpose we use a second go.cd pipeline with these stages:
![go.cd Flow]( /images/automated-monolith/deploy_stages.jpg){:style="margin:auto"}
The first step is to create a VM on Azure. In this case we define a custom command in go.cd and simply run a shell script:
![go.cd Flow]( /images/automated-monolith/custom_command.jpg){:style="margin:auto"}
The core of the script is a docker-machine command which creates an Ubuntu-based VM that will serve as a Docker host:
~~~bash
docker-machine -s ${DOCKER_LOCAL} create -d azure --azure-location="West Europe" --azure-image=${AZURE_IMAGE} --azure-size="Standard_D3" --azure-ssh-port=22 --azure-username=<your_username> --azure-password=<password> --azure-publish-settings-file azure.settings ${HOST}
~~~
Once the VM is up and running, we run docker compose commands to pull our images from Artifactory (in this case the setup of the logging infrastructure):
~~~yml
version: '2'

services:
  elasticsearch:
    image: registry.haufe.io/atlantic_fs/elasticsearch:v1.0
    hostname: elasticsearch
    expose:
      - "9200"
      - "9300"
    networks:
      - hgsp

  fluentd:
    image: registry.haufe.io/atlantic_fs/fluentd:v1.0
    hostname: fluentd
    ports:
      - "24224:24224"
    networks:
      - hgsp

  kibana:
    env_file: .env
    image: registry.haufe.io/atlantic_fs/kibana:v1.0
    hostname: kibana
    expose:
      - "5601"
    links:
      - elasticsearch:elasticsearch
    networks:
      - hgsp

  nginx:
    image: registry.haufe.io/atlantic_fs/nginx:v1.0
    hostname: nginx
    ports:
      - "4443:4443"
    restart: always
    networks:
      - hgsp

networks:
  hgsp:
    driver: bridge
~~~
As a last step, we have one more pipeline that simply deletes everything we've just created.
### Outcome
We kept our timeline, presented what we did and were super proud of it! We even got cake!!
![go.cd Flow]( /images/automated-monolith/cake.jpg){:style="margin:auto"}
Setting up a test environment now only takes 30 minutes, down from 5 days. And even that can be improved by running stuff in parallel.
We also have a solid base to work with - and many ideas on how to take it further. More testing will be included soon, such as additional code and security tests. We will introduce gates so that the pipeline only proceeds once the code has reached a certain quality or has improved in a defined way since the last test. And we will not stop at automating the test environment, but look at our other environments as well.
All the necessary steps are captured in code, which makes them repeatable and fast. There is no dependency on anything else. This enables our internal clients to set up their personal environments on their own, in a fast and bulletproof way.
---
Update: You can find slides of our talk [here](http://www.slideshare.net/HaufeDev/the-automated-monolith)

@@ -0,0 +1,27 @@
---
layout: post
title: CQRS, Eventsourcing and DDD
subtitle: Notes from Greg Young's CQRS course
category: conference
tags: [microservice]
author: frederik_michel
author_email: frederik.michel@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
In these notes I would like to share my thoughts about the course on the [above mentioned topics](http://lmgtfy.com/?q=greg+young+cqrs) which I took together with Rainer Michel and Raul-Andrei Firu in London. Over three days last November, Greg Young explained, with many practical examples from his career, the benefits especially of CQRS and how it relates to things like Event Sourcing, which is a way of reaching eventual consistency.
### CQRS and DDD
So let's get to the content of the course. It all starts with some thoughts about Domain Driven Design (DDD), especially about how to get to a design. This included strategies for getting the information out of domain experts and for arriving at a ubiquitous language between different departments. All in all, Greg pointed out that the software simply does not have to solve every problem there is, which is why the resulting domain model is very different from the ERM that might come to mind when solving such problems. One should think more about the actual use cases of the software than about solving each and every corner case that will never actually happen. He showed very interesting strategies for breaking up relations between domains in order to minimize the amount of getters and setters used between them. At the end Greg spoke briefly about Domain Services, which deal with logic that spans different aggregates and keeps transactions consistent. More often than not, though, one should evaluate eventual consistency instead of such domain services, as the latter explicitly break the rule of not using more than one aggregate within one transaction. In this part Greg only touched on CQRS very briefly, describing it as a step on the way towards an architecture with eventual consistency.
### Event Sourcing
This topic was about applying event sourcing to a pretty common architecture: a relational database with some OR mapper on top and, above that, domains/domain services. On the other side there is a thin read model based on a DB with data in first normal form. He showed that this architecture would eventually fail in production. The problem is keeping these instances in sync, especially after problems in production have occurred; in such cases it is sometimes very hard to get the read and write model back on the same page. To move this kind of architecture towards event sourcing, there has to be a transition to more command-based communication between the components/containers within the architecture. This can generally be realized by introducing an event store which gathers all the commands coming from the frontend. This approach eventually leads to a point where the aforementioned third-normal-form database (which up to that point has been the write model) is completely dropped in favor of the event store. There are two reasons for this. First, the event store already holds all the information that is present in the database. Second, and maybe more important, it stores more information than the database, as the latter generally just keeps the current state; the event store, on the other hand, also stores every event in between, which might be relevant for analyzing the data, reporting, and so on. What the architecture we ended up with also brings to the table is eventual consistency, as the command sent by the UI takes some time until its effect is visible in the read model. The main point about eventual consistency is that the data in the read model is not wrong data; it might just be old data, which in most cases is not critical. However, there are cases where consistency is required. For these situations there are strategies to simulate consistency, for example by making the time the server needs to get the data into the read model smaller than the time the client needs to request the data again. Mostly this is done by just telling the user that the changes have been received by the server, or the UI simply fakes the output.
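To make the idea a bit more concrete, here is a minimal, purely illustrative sketch (not code from the course; all names are invented) of an append-only event store and a read model that is nothing but a projection over the stored events:
```javascript
// Append-only event store plus a read-model projection, reduced to the bare minimum.
var eventStore = [];

// Write side: handling a command produces an event instead of updating a row in place.
function handleDepositCommand(accountId, amount) {
    eventStore.push({ type: "MoneyDeposited", accountId: accountId, amount: amount });
}

// Read side: the current state is derived by replaying all relevant events.
// In a real system this projection would be updated asynchronously, which is
// exactly where the eventual consistency described above comes from.
function projectBalance(accountId) {
    return eventStore
        .filter(function (e) { return e.type === "MoneyDeposited" && e.accountId === accountId; })
        .reduce(function (balance, e) { return balance + e.amount; }, 0);
}

handleDepositCommand("account-1", 100);
handleDepositCommand("account-1", 50);
console.log(projectBalance("account-1")); // 150
```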
To sum this up: the pros of an approach like this are, in particular, that every point in time can be restored (no data loss at all) and that the system can pretend it still works even if the database is down (we just show the user that we received the message, and everything can be replayed once the database is up again). In addition, if a SEDA-like approach is used, it is very easy to monitor the solution and determine where the time-consuming processes are. One central point in this course was that by all means we should prevent widespread outages, meaning errors that make the complete application crash or stall with an effect on many or all users.
### Process Managers
This topic was essentially about separation of concerns, in the sense that one should separate process logic from business logic. This should be done as much as possible, as the system can then easily be changed to use a workflow engine in the longer run. Greg showed two ways of building a process manager. The first one just knows in what sequence the business logic has to be run and triggers each step one after the other. In the second approach the process manager creates a list of the processes that should be run in the correct order; it then hands this list over to the first process, which passes the list on to the next, and so forth. In this case the process logic lives in the list, or in the creation of the list.
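One possible reading of those two styles, as a purely illustrative sketch (all names invented, steps reduced to console output):
```javascript
// Style 1: the process manager knows the sequence and drives every step itself.
function sequenceManager(steps, message) {
    steps.forEach(function (step) { step(message); });
}

// Style 2: the manager only builds a "routing slip"; each step hands the rest of
// the slip to the next step, so the process logic lives in the list itself.
function routingSlipManager(steps, message) {
    function next(msg, remaining) {
        if (remaining.length === 0) { return; }
        remaining[0](msg, function (m) { next(m, remaining.slice(1)); });
    }
    next(message, steps);
}

// Dummy business-logic steps; a real system would dispatch commands instead.
function reserveStock(msg, done) { console.log("reserve stock for", msg.id); if (done) { done(msg); } }
function chargeCard(msg, done) { console.log("charge card for", msg.id); if (done) { done(msg); } }

sequenceManager([reserveStock, chargeCard], { id: 42 });
routingSlipManager([reserveStock, chargeCard], { id: 42 });
```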
### Conclusion
Even though Greg sometimes switched pretty fast from very abstract thoughts to going deep into source code, the course was never boring - actually rather exciting and absolutely fun to follow. The different ways of approaching a problem were shown using very good examples - Greg really did a great job there. I can absolutely recommend this course to people wanting to know more about these topics. From my point of view this kind of strategy was very interesting, as I see many people trying to create the "perfect" piece of software, paying attention to cases that just won't happen or spending a lot of time on cases that happen very, very rarely, rather than defining them as known business risks.

@@ -0,0 +1,176 @@
---
layout: post
title: Generating Swagger from your API
subtitle: How to quickly generate the swagger documentation from your existing API.
category: howto
tags: [api]
author: tora_onaca
author_email: teodora.onaca@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
If you already have an existing API and you just want to generate the swagger documentation from it, there are a couple of easy steps to make it work. First off, you should be familiar with Swagger and, in particular, with [swagger-core](https://github.com/swagger-api/swagger-core). Assuming that you coded your REST API using JAX-RS, there are several [guides](https://github.com/swagger-api/swagger-core/wiki/Swagger-Core-JAX-RS-Project-Setup-1.5.X) available to get you set up very fast, depending on which library you chose (Jersey or RESTEasy).
In our case, working with RESTEasy, it was a matter of adding the maven dependencies:
    <dependency>
        <groupId>io.swagger</groupId>
        <artifactId>swagger-jaxrs</artifactId>
        <version>1.5.8</version>
    </dependency>
    <dependency>
        <groupId>io.swagger</groupId>
        <artifactId>swagger-jaxrs</artifactId>
        <version>1.5.8</version>
    </dependency>
Note: please make sure to set the jar version to the latest one available, so that the latest bug fixes are included.
In order to hook up swagger-core in the application, there are multiple solutions, the easiest of which is to just use a custom `Application` subclass.
``` java
public class SwaggerTestApplication extends Application {

    public SwaggerTestApplication() {
        BeanConfig beanConfig = new BeanConfig();
        beanConfig.setVersion("1.0");
        beanConfig.setSchemes(new String[] { "http" });
        beanConfig.setTitle("My API");
        beanConfig.setBasePath("/TestSwagger");
        beanConfig.setResourcePackage("com.haufe.demo.resources");
        beanConfig.setScan(true);
    }

    @Override
    public Set<Class<?>> getClasses() {
        HashSet<Class<?>> set = new HashSet<Class<?>>();
        set.add(Resource.class);
        set.add(io.swagger.jaxrs.listing.ApiListingResource.class);
        set.add(io.swagger.jaxrs.listing.SwaggerSerializers.class);
        return set;
    }
}
```
Once this is done, you can access the generated `swagger.json` or `swagger.yaml` at the location: `http(s)://server:port/contextRoot/swagger.json` or `http(s)://server:port/contextRoot/swagger.yaml`.
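A quick way to verify the setup, purely as a hypothetical example (host, port and context root are assumptions based on the `BeanConfig` above), is to fetch the generated file from a browser console and inspect it:
```javascript
// Hypothetical smoke test: fetch the generated Swagger document and log a few fields.
// The URL is an assumption; adjust host, port and context root to your deployment.
fetch("http://localhost:8080/TestSwagger/swagger.json")
    .then(function (response) { return response.json(); })
    .then(function (swagger) {
        console.log(swagger.info.title);         // should print "My API"
        console.log(Object.keys(swagger.paths)); // the documented resource paths
    });
```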
Note that the `title` element for the API is mandatory, so a missing one will generate an invalid swagger file. Also, any misuse of the annotations will generate an invalid swagger file. Any existing bugs of swagger-core will have the same effect.
In order for a resource to be documented, other than including it in the list of classes that need to be parsed, it has to be annotated with @Api. You can check the [documentation](https://github.com/swagger-api/swagger-core/wiki/Annotations-1.5.X) for the existing annotations and use any of the described fields.
A special case that might give you some headaches is the use of subresources. The REST resource code usually goes something like this:
``` java
@Api
@Path("resource")
public class Resource {

    @Context
    ResourceContext resourceContext;

    @GET
    @Produces("application/json")
    @ApiOperation(value = "Returns something")
    public String getResource() {
        return "GET";
    }

    @POST
    @Produces("application/json")
    public String postResource(String something) {
        return "POST" + something;
    }

    @Path("/{subresource}")
    @ApiOperation(value = "Returns a subresource")
    public SubResource getSubResource() {
        return resourceContext.getResource(SubResource.class);
    }
}

@Api
public class SubResource {

    @PathParam("subresource")
    private String subresourceName;

    @GET
    @Produces("application/json")
    @ApiOperation(value = "Returns subresource something")
    public String getSubresource() {
        return "GET " + subresourceName;
    }

    @POST
    @Produces("application/json")
    @ApiOperation(value = "Posts subresource something")
    public String postSubresource(String something) {
        return "POST " + subresourceName + something;
    }
}
```
The swagger parser works like a charm if it finds the @Path, @GET and @POST annotations where it thinks they should be. In the case depicted above, the subresource is returned from the parent resource and does not have a @Path annotation at the class level. Older versions of swagger-core generate an invalid swagger file in this case, so please use the latest version for correct generation. If you want to make your life a bit harder and you have a path that goes deeper, something like /resource/{subresource}/{subsubresource}, things might get a bit more complicated.
In the Subresource class, you might have a @PathParam for holding the value of the {subresource}. The Subsubresource class might want to do the same. In this case, the generated swagger file will contain the same parameter twice, which results in an invalid swagger file. It will look like this:
    parameters:
      - name: "subresource"
        in: "path"
        required: true
        type: "string"
      - name: "subsubresource"
        in: "path"
        required: true
        type: "string"
      - in: "body"
        name: "body"
        required: false
        schema:
          type: "string"
      - name: "subresource"
        in: "path"
        required: true
        type: "string"
In order to fix this, use `@ApiParam(hidden=true)` for the subresource `@PathParam` in the `Subsubresource` class. See below.
``` java
@Api
public class SubSubResource {

    @ApiParam(hidden=true)
    @PathParam("subresource")
    private String subresourceName;

    @PathParam("subsubresource")
    private String subsubresourceName;

    @GET
    @Produces("application/json")
    @ApiOperation(value = "Returns subsubresource something")
    public String getSomethingw() {
        return "GET " + subresourceName + "/" + subsubresourceName;
    }

    @POST
    @Produces("application/json")
    @ApiOperation(value = "Posts subsubresource something")
    public String postSomethingw(String something) {
        return "POST " + subresourceName + "/" + subsubresourceName + " " + something;
    }
}
```
There might be more tips and tricks that you will discover once you start using the annotations for your API, but the learning curve is not steep, and once you are familiar with Swagger (both the spec and swagger-core) you will be able to document your API really fast.

@@ -0,0 +1,30 @@
---
layout: post
title: SAP CodeJam on May 12th, 2016
subtitle: Calling all SAP ABAP developers in the Freiburg area
category: general
tags: [culture]
author: holger_reinhardt
author_email: holger.reinhardt@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
On Thursday, May 12th, 2016, it is that time again: we will be hosting another SAP CodeJam at our offices.
The topic: ABAP in Eclipse.
{:.center}
![SAP JAM]({{ site.url }}/images/sap_codejam.jpg){:style="margin:auto"}
An exciting date for all ABAP developers who are interested in the current development tools and want to understand where the journey is heading. It is a great opportunity to gather first hands-on experience with Eclipse as an IDE and to get an outlook on what is coming next. You will work on your own notebook against the latest SAP Netweaver stack (ABAP 7.50), using an instance provided by SAP via AWS.
This invitation is not only aimed at Haufe-internal developers, but also at ABAP gurus from other companies in the region. Please forward the invitation to ABAP developers at other companies.
Participation is free of charge via this [registration link](https://www.eventbrite.com/e/sap-codejam-freiburg-registration-24300920708).
There are 30 seats, first come, first served.
Best regards from the Haufe SAP team
PS: Yes, ABAP skills are required to participate.

@@ -0,0 +1,205 @@
---
layout: post
title: API Management Components
subtitle: What's inside that API Management box?
category: general
tags: [cloud, api]
author: martin_danielsson
author_email: martin.danielsson@haufe-lexware.com
header-img: "images/bg-post-api.jpg"
---
### Introduction
API Management is one of the more hyped-up buzzwords you can hear all over the place: at conferences, in various blog posts, in the space of the internet of things, containers and microservices. At first sight it looks like a brilliant idea, simple and easy, and indeed it is! Unfortunately, though, it is not quite as simple as it might look when you draw up your first architectural diagrams.
### Where do we start?
We're accustomed to architecting large-scale systems, and we are trying to move in the microservices direction. It's tempting to put in API Management as one of the components used for encapsulating and insulating the microservices, in a fashion like this:
![API Management in front of "something"]( /images/apim-components/apim-as-a-simple-layer.png)
This definitely helps in laying out the deployment architecture of your system(s), but in many cases, it falls too short. When you are accustomed to introducing API Management components into your microservice architecture, and you already have your blueprints in place, this may be enough, but to reach that point, you will need to do some more research on what you actually want to achieve with an API Management Solution (in short: "APIm").
### Common Requirements for an APIm
Another "problem" is that it's easy to just look at the immediate requirements for API Management solutions and compare to various solutions on the market. Obviously, you need to specify your functional requirements first and check whether they match to the solution you have selected; common APIm requirements are for example the following:
* Proxying and securing backend services
* Rate limiting/throttling of API calls
* Consumer identification
* API Analytics
* Self-service API subscriptions
* API Documentation Portals
* Simple mediations (transformations)
* Configurability over API (APIm APIs, so to say)
* Caching
The nature of these requirements is very diverse, and not all of them are usually equally important. Nor is it always the case that all features are equally well covered by all APIm solutions, even if most solutions obviously try to cover them all to some extent. Some do this via an "all inclusive" type of offering, some have a more fine-granular approach.
In the next section, I will try to show which types of components usually can be found inside an API Management Solution, and where the interfaces between the different components are to be found.
### A closer look inside the box
If we open up that blue box simply called "API Management", we can find a plethora of sub-components, which may or may not be present and/or more or less feature-packed depending on the actual APIm solution you choose. The following diagram shows the most usual components inside APIm solutions on the market today:
![API Management Components]( /images/apim-components/apim-in-reality.png)
When looking at an API Management Solution, you will find that in most cases, one or more components are missing in one way or the other, or some component is less elaborate than with other solutions. When assessing APIms, checking the different components can help to find whether the APIm actually matches your requirements.
We will look at the following components:
* [API Gateway](#apigateway)
* [API Identity Provider (IdP)](#apiidp)
* [Configuration Database](#configdb)
* [Cache](#cache)
* [Administration UI](#adminui)
* [Developer Portal](#devportal)
* [Portal Identity Provider (IdP)](#portalidp)
* [Logging](#logging)
* [Analytics](#analytics)
* [Audit Log](#audit)
<a name="apigateway"></a>
#### API Gateway
The core of an APIm is quite obviously the API Gateway. It's the component of the APIm solution through which the API traffic is routed, and which usually ensures that the backend services are secured. Depending on the architecture of the APIm solution, the API Gateway can be more or less tightly integrated with the Gateway Identity Provider ("API IdP" in the picture), which provides an identity for the consuming client.
APIm solution requirements usually focus on this component, as it's the core functionality. This component is always part of the APIm solution.
<a name="apiidp"></a>
#### API Identity Provider
A less obvious part of the APIm conglomerate is the API Identity Provider. Depending on your use case, you may only want to know which API consumers are using your APIs via the API Gateway, or you may want full-featured OAuth support. Most vendors have direct support for API key authentication (on a machine/application to API gateway basis), but not all have built-in support for OAuth mechanisms and/or support pluggable OAuth providers.
In short: make sure you know what your requirements are regarding the API Identity Provider *on the API plane*; this is to be treated separately from the *API Portal users*, who may have [their own IdP](#portalidp).
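To make "consumer identification" a little more tangible, here is a hypothetical client-side call; the endpoint, the key and the header name are all made up, since every product has its own conventions:
```javascript
// Hypothetical API call: the consuming application identifies itself towards the
// API Gateway with an API key it received when subscribing to the API.
// Header name, key and URL are invented for illustration only.
fetch("https://api.example.com/v1/orders", {
    headers: {
        "X-Api-Key": "d41d8cd98f00b204",
        "Accept": "application/json"
    }
})
    .then(function (response) { return response.json(); })
    .then(function (orders) { console.log(orders); });
```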
<a name="configdb"></a>
#### Configuration Database
In most cases, the API Gateway draws its configuration from a configuration database. In some cases the configuration is completely separated from the API Gateway, in others it is integrated into the API Gateway (this is especially true for SaaS offerings).
The configuration database may contain the following things:
* API definitions
* Policy rules, e.g. throttling settings, Access Control lists and similar
* API Consumers, if not stored separately in the [API IdP](#apiidp)
* API Portal Users, if not separately stored in an [API Portal IdP](#portalidp)
* API Documentation, if not stored in separate [portal](#devportal) database
The main point to understand regarding the configuration database is that in most cases, the API Gateway and/or its corresponding datastore is a stateful service which carries information which is not only coming from source code (policy definitions, API definitions and such things), but also potentially from users. Updating and deploying API management solutions must take this into account and provide for migration/upgrade processes.
<a name="cache"></a>
#### Cache
When dealing with REST APIs, it is often useful to have a dedicated caching layer. Some (actually most) APIm solutions provide such a component out of the box, while others do not. How caches are incorporated varies between the different solutions, ranging from pure `varnish` installations to key-value stores such as redis or similar. Different systems have different approaches to how and what is cached during API calls, and which kinds of calls are cacheable.
It is worth paying attention to which degree of automation is offered, and to which extent you can customize the behaviour of the cache, e.g. depending on the value of headers or `GET` parameters. What you need is obviously highly depending on your requirements. In some situations you will not care about the caching layer being inside the APIm, but for high throughput, this is definitely worth considering, to be able to answer requests as high up in the chain as possible.
<a name="adminui"></a>
#### Administration UI
In order to configure an APIm, many solutions provide an administration UI to configure the API Gateway. In some cases (like with [Mashape Kong](http://www.getkong.org)), there isn't any administration UI, but only an API to configure the API Gateway itself. But usually there is some kind of UI which helps you configure your Gateway.
The Admin UI can incorporate many things from other components, such as administering the [API IdP](#apiidp) and [Portal IdP](#portalidp), or viewing [analytics information](#analytics), among other things.
<a name="devportal">
#### Developer Portal
The Developer Portal is, in addition to the API Gateway, what you usually think of when talking about API Management: it is the place you as a developer go to when looking for information on an API. Depending on how elaborate the Portal is, it will let you do things like:
* View API Documentation
* Read up on How-tos or best practices documents
* Self-sign up for use of an API
* Interactively try out an API using your own credentials ([Swagger UI](http://swagger.io/swagger-ui/) style)
Not all APIm systems actually provide an API Portal, and for quite some use cases (e.g. Mobile API gateways, pure website APIs), it's not even needed. Some systems, especially SaaS offerings, provide a fully featured Developer Portal out of the box, while some others only have very simple portals, or even none at all.
Depending on your own use case, you may need one or multiple instances of a Developer Portal. It's normal practice that an API Portal is tied to a single API Gateway, even if there are some solutions which allow more flexible deployment layouts. Checking your requirements on this point is important to make sure you get what you expect, as Portal feature sets vary wildly.
<a name="portalidp"></a>
#### Portal Identity Provider
Using an API Developer Portal (see above) usually requires the developer to sign in to the portal using some kind of authentication. This is what's behind the term "Portal Identity Provider", as opposed to the IdP which is used for the actual access to the API (the [API IdP](#apiidp)). Depending on your requirements, you will want to enable logging in using
* Your own LDAP/ADFS instance
* Social logins, such as Google, Facebook or Twitter
* Developer logins, such as BitBucket or GitHub.
Most solutions will use those identities to federate to an automatically created identity inside the API Portal; i.e. the API Developer Portal will link their Portal IdP users with a federated identity and let developers use those to log in to the API Portal. Usually, enabling social or developer logins will require you to register your API Portal with the corresponding federated identity provider (such as Google or Github). Adding Client Secrets and Credentials for your API Portal is something you will want to be able to do, depending on your requirements.
<a name="logging"></a>
#### Logging
Another puzzle piece in APIm is the question of how to handle logging, as logs can be emitted by most APIm components separately. Most solutions do not offer an out-of-the-box answer for this (I actually haven't found any APIm with built-in logging functionality), but most allow for plugging in any kind of log aggregation mechanism, such as [log aggregation with fluentd, elastic search and kibana](/log-aggregation).
Depending on your requirements, you will want to look at how to aggregate logs from the at least following components:
* API Gateway (API Access logs)
* API Portal
* Administration UI (overlaps with [audit logs](#audit))
You will also want to verify that you don't introduce unnecessary latencies when logging, e.g. by using queueing mechanisms close to the log emitting party.
<a name="analytics"></a>
#### The Analytics Tier
The area "Analytics" is also something where the different APIm solutions vary significantly in functionality, when it's present at all. Depending on your requirements, the analytics can be handled when looking at logging, e.g. by leveraging elastic search and kibana, or similar approaches. Most SaaS offerings have pre-built analytics solutions which offer a rich variety of statistics and drill-down possibilites without having to put in any extra effort. Frequent analytics are the following:
* API Usage by API
* API Calls
* Bandwidth
* API Consumers by Application
* Geo-location of API users (mobile applications)
* Error frequency and error types (4xx, 5xx,...)
<a name="audit"></a>
#### The Audit Log
The Audit Log is a special case of logging, which may or may not be separate from the general logging components. The Audit log stores changes done to the configuration of the APIm solution, e.g.
* API Configuration changes
* Additions and deletions of APIm Consumers (clients)
* Updates of API definitions
* Manually triggered restarts of components
* ...
Some solutions have built-in auditing functionality, e.g. the AWS API Gateway has this type of functionality. The special nature of audit logs is that such logs must be tamper-proof and must never be changeable after the fact. In case of normal logs, they may be subject to cleaning up, which should not (so easily) be the case with audit logs.
### API Management Vendors
{:.center}
![API Management Providers]( /images/apim-components/apim-providers.png){:style="margin:auto"}
Incomplete list of API Management Solution vendors:
* [3scale](https://www.3scale.net)
* [Akana API Management](https://www.akana.com/solution/api-management)
* [Amazon AWS API Gateway](https://aws.amazon.com/api-gateway)
* [API Umbrella](https://apiumbrella.io)
* [Axway API Management](https://www.axway.com/en/enterprise-solutions/api-management)
* [Azure API Management](https://azure.microsoft.com/en-us/services/api-management/)
* [CA API Gateway](http://www.ca.com/us/products/api-management.html)
* [Dreamfactory](https://www.dreamfactory.com)
* [IBM API Connect](http://www-03.ibm.com/software/products/en/api-connect)
* [Mashape Kong](https://getkong.org)
* [TIBCO Mashery](http://www.mashery.com)
* [Tyk.io](https://tyk.io)
* [WSO2 API Management](http://wso2.com/api-management/)
---
<small>
The [background image](/images/bg-post-api.jpg) was taken from [flickr](https://www.flickr.com/photos/rituashrafi/6501999863) and adapted using GIMP. You are free to use the adapted image according the linked [CC BY license](https://creativecommons.org/licenses/by/2.0/).
</small>

BIN  favicon.ico (new binary file)
BIN  images/bg-post-api.jpg (new binary file)
BIN  images/sap_codejam.jpg (new binary file)
(plus further binary image files, added or changed, not shown)
@@ -13,7 +13,17 @@ layout: page
         </h3>
         {% endif %}
       </a>
-      <p class="post-meta">Posted by {% if post.author %}{{ post.author }}{% else %}{{ site.title }}{% endif %} on {{ post.date | date: "%B %-d, %Y" }}</p>
+      {% if post.author %}
+        {% assign author = site.data.authors[post.author] %}
+        {% if author %}
+          {% assign author_name = author.name %}
+        {% else %}
+          {% assign author_name = post.author %}
+        {% endif %}
+      {% else %}
+        {% assign author_name = site.title %}
+      {% endif %}
+      <p class="post-meta">Posted by {{ author_name }} on {{ post.date | date: "%B %-d, %Y" }}</p>
       </div>
       <hr>
     {% endfor %}

@@ -4,7 +4,9 @@ title: Resources
 permalink: /resources/
 ---
-### API Style Guide
+### [API Style Guide](http://htmlpreview.github.io/?https://raw.githubusercontent.com/Haufe-Lexware/api-style-guide/gh-pages/index.html)
 A List of rules, best practices, resources and our way of creating REST APIs in the Haufe Group. The style guide addresses API Designers, mostly developers and architects, who want to design an API.
-Goto our [API Style Guide](http://htmlpreview.github.io/?https://raw.githubusercontent.com/Haufe-Lexware/api-style-guide/gh-pages/index.html)
+### [Docker Style Guide](http://htmlpreview.github.io/?https://raw.githubusercontent.com/Haufe-Lexware/docker-style-guide/gh-pages/index.html)
+
+A set of documents representing mandatory requirements, recommended best practices and informational resources for using Docker in official (public or internal) Haufe products, services or solutions.
