proper commit now

This commit is contained in:
Jeremy Kidwell 2017-01-13 21:19:18 +00:00
parent 3d7206ba1e
commit e79dea21a7
56 changed files with 25 additions and 6096 deletions


@@ -1,89 +1,7 @@
# Author details.
holger_reinhardt:
  name: Holger Reinhardt
  email: holger.reinhardt@haufe-lexware.com
  twitter: hlgr360
  github: hlgr360
  linkedin: hrreinhardt
martin_danielsson:
  name: Martin Danielsson
  email: martin.danielsson@haufe-lexware.com
  twitter: donmartin76
  github: donmartin76
  linkedin: martindanielsson
marco_seifried:
  name: Marco Seifried
  email: marco.seifried@haufe-lexware.com
  twitter: marcoseifried
  github: marc00s
  linkedin: marcoseifried
thomas_schuering:
  name: Thomas Schüring
  email: thomas.schuering@haufe-lexware.com
  github: thomsch98
  linkedin: thomas-schuering-205a8780
rainer_zehnle:
  name: Rainer Zehnle
  email: rainer.zehnle@haufe-lexware.com
  github: Kodrafo
  linkedin: rainer-zehnle-09a537107
  twitter: RainerZehnle
doru_mihai:
  name: Doru Mihai
  email: doru.mihai@haufe-lexware.com
  github: Dutzu
  linkedin: doru-mihai-32090112
  twitter: dcmihai
eike_hirsch:
  name: Eike Hirsch
  email: eike.hirsch@haufe-lexware.com
  twitter: stagzta
axel_schulz:
  name: Axel Schulz
  email: axel.schulz@semigator.de
  github: axelschulz
  linkedin: luckyguy
carol_biro:
  name: Carol Biro
  email: carol.biro@haufe-lexware.com
  github: birocarol
  linkedin: carol-biro-5b0a5342
frederik_michel:
  name: Frederik Michel
  email: frederik.michel@haufe-lexware.com
  github: FrederikMichel
  twitter: frederik_michel
tora_onaca:
  name: Teodora Onaca
  email: teodora.onaca@haufe-lexware.com
  github: toraonaca
  twitter: toraonaca
eric_schmieder:
  name: Eric Schmieder
  email: eric.schmieder@haufe-lexware.com
  github: EricAtHaufe
  twitter: EricAtHaufe
scott_speights:
  name: Scott Speights
  email: scott.speights@haufe-lexware.com
  github: SSpeights
  twitter: ScottSpeights
esmaeil_sarabadani:
  name: Esmaeil Sarabadani
  email: esmaeil.sarabadani@haufe-lexware.com
  twitter: esmaeils
daniel_wehrle:
  name: Daniel Wehrle
  email: daniel.wehrle@haufe-lexware.com
  github: DanielHWe
anja_kienzler:
  name: Anja Kienzler
  email: anja.kienzler@haufe-lexware.com
filip_fiat:
  name: Filip Fiat
  email: filip.fiat@haufe-lexware.com
daniel_bryant:
  name: Daniel Bryant
  email: daniel.bryant@opencredo.com
  github: danielbryantuk
  twitter: danielbryantuk
jeremy:
  name: Jeremy Kidwell
  email: j.kidwell@bham.ac.uk
  twitter: kidwellj
  github: kidwellj
  linkedin: kidwellj


@@ -1,23 +0,0 @@
---
layout: post
title: We are live or How to start a developer blog
subtitle: The 'Hello World' Post
category: general
tags: [cto, culture]
author: holger_reinhardt
author_email: holger.reinhardt@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
So how does one start a developer blog? It is pretty intimidating to look at a blank editor (BTW, I use [Mou](http://25.io/mou/) to write this post ;) and think about some witty content, some heading which would rope you in and make you want to read what we have to say. But why should you? And who are we anyhow?
So let's start with first things first. **Welcome to our Haufe Developer Blog**.
We - that is, our development and design community at Haufe. And we have a problem. No, it is not that we are sitting in Freiburg on the edge of the beautiful Black Forest in Germany. Nor is it that we are a software company with close to 300 million EUR in yearly revenue that you have probably never heard of (you might have heard of Lexware though).
No, our problem is that we are actually doing quite some cool stuff (and planning even more in the future) and no one in the developer community knows about it. When I joined Haufe-Lexware as CTO back in March of this year, the first thing I did was search for Haufe on my usual (developer) channels: Github (nope), Twitter (nope), SlideShare (nope). Well, you see - I think that is a problem. If a tree falls in the forest but no one sees it - did the tree fall in the forest? How are you ever going to find out about us and get so excited that you want to join us - right now! (And yes - we do have plenty of dev openings if you are interested).
During the 'Meet the new guy' meeting with my team I drew a triangle with the three areas I would like to focus on first: Architecture, Technology and Developer Culture. I figured developing Developer Culture was the easiest - and boy was I naive (and wrong). Fast forward six months and I think that developer culture is the number one factor which will determine whether we as a team and as a company will succeed or fail. No matter what technology or architecture we use, it is the culture which permeates every decision we make at work day in and day out. It is culture which influences whether we go left or right, whether we speak up or remain silent. Without the right kind of culture, everything else is just band-aid.
You see, I am a pretty opinionated guy. I can probably talk for hours about Microservices, APIs, Docker and so on. But if you ask me today what I think my biggest lever to effect lasting change will be, then shaping and influencing the direction of our dev culture is my answer. Technology and architecture are just manifestations of that culture. Sure they need to be aligned, but **culture eats strategy for breakfast**. And how we share our stories, how we talk about what we learned and what worked and what didn't, are important first steps of this cultural journey.
I would like to invite you to listen in to our stories and hopefully find a few nuggets which are helpful for your own journey. Hopefully we can return a bit of knowledge to the community, inasmuch as we are learning from the stories of so many other great teams out there who share their struggles, triumphs and learnings. So here is the beginning of our story ...


@@ -1,28 +0,0 @@
---
layout: post
title: OSCON Europe 2015
subtitle: Notes from OSCON Europe 2015
category: conference
tags: [open-source]
author: marco_seifried
author_email: marco.seifried@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
This is a personal and opinionated summary of my attendance of the [OSCON](http://conferences.oreilly.com/oscon/open-source-eu-2015) conference this year.
I was looking forward to this conference, hoping to learn what is hot and trendy in the open source world right now. To some extent I got that. I was very impressed by the talks of Sam Aaron - live coding with [Sonic Pi](http://sonic-pi.net/). A very interesting approach: live coding to make music with a Raspberry Pi. Plus, Sam is a very enthusiastic character who makes talking about technical stuff a fun thing. Do you know why he does it? When he was asked about his work in a pub over a beer and said he's a developer, he got reactions he wasn't happy with - now he can say he is a DJ, which is way cooler ;-)
(That's only part of the story. He also does that to teach kids about coding as well as music, he works together with schools and institutions in England.)
Another inspiring session was the [Inner Source](http://www.infoq.com/news/2015/10/innersource-at-paypal) one by Paypal: Let's apply open source practices to your own organization internally. Have others (outside your project, product core team etc.) participate - while having trusted committers (recommendation is 10% of your engineers) to keep direction and control. This might be an approach for us internally, to share code and knowledge. Also, to avoid finger pointing: We all can participate and can identify ourselves with code.
Also on my top list of sessions: [Growth Hacking](http://conferences.oreilly.com/oscon/open-source-eu-2015/public/schedule/detail/46945) by David Arnoux. Again, partly because David is someone who can talk and present and is passionate about what he does (and that's something I missed in other talks). Growth hacking is a modern approach to marketing, focused on growth; everything else is secondary. It uses unconventional approaches to achieve that. An example is Airbnb, which used to piggyback on Craigslist (without Craigslist knowing), which was way more popular at the beginning.
[Writing code that lasts](http://de.slideshare.net/rdohms/writing-code-that-lasts-or-writing-code-you-wont-hate-tomorrow-54396256) as a session topic is not something that attracts me. It's another session about how to write better code, some low-level coding guidelines we all agree on and way too often ignore. But for lack of better alternatives on the conference schedule, I went - and was surprised. Again, Rafael is a guy who knows how to engage with people, and that helped a lot ;-)
One of his rules: Do not use *else*. Let your code handle one thing and focus on that. Also, focus on the major use case first and don't try to anticipate every little possibility up front. A bit like the microservice approach (do one thing and one thing well), but on a smaller scale.
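In shell terms (just to make the rule concrete - a made-up sketch, not from Rafael's talk), the idea is to use guard clauses and return early instead of nesting an *else* around the happy path:
```
# Guard clauses: reject the edge cases first, then handle the one main case.
process_order() {
    local order_file="$1"   # hypothetical input file

    [ -n "$order_file" ] || { echo "usage: process_order <file>" >&2; return 1; }
    [ -r "$order_file" ] || { echo "cannot read: $order_file" >&2; return 1; }

    # Major use case only - no speculative branches for cases we don't have yet
    echo "processing $(wc -l < "$order_file") order lines"
}
```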
All in all a worthwhile session.
Apart from that I was excited about day 3, the tutorial day. I booked myself into the Go workshop in the morning and Kubernetes in the afternoon.
Well, Go was ok, but very junior level and basically the same as the tutorials you can already find on the web. Kubernetes might have been interesting, but it was assumed you had a Google Cloud account with a credit card attached - which I didn't have and didn't want just for the sake of a tutorial. So the instructor lost me after 10 minutes and I was behind from the start...
Overall I enjoyed my time at OSCON. It's always good to meet up with others, get inspired. But in total the quality of the sessions differed a lot and the tutorials, as stated, were disappointing.


@@ -1,25 +0,0 @@
---
layout: post
title: The beginnings of our API Journey
subtitle: Intro to our API Style Guide
category: api
tags: [api]
author: holger_reinhardt
author_email: holger.reinhardt@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
Before joining [HaufeDev](http://twitter.com/HaufeDev) I was fortunate to work in the [API Academy](http://apiacademy.co) consultancy with some of the smartest guys in the API field. So it was quite predictable that I would advocate for APIs as one of the cornerstones in our technology strategy.
Fast forward a few months and we open sourced the initial release of our [API style guide](http://haufe-lexware.github.io/resources/). It is a comprehensive collection across a wide range of API design resources and best practices. Credit for compiling this incredible resource has to go to our very own [Rainer Zehnle](https://github.com/Kodrafo) who probably cursed me a hundred times over for having to use Markdown to write it.
But this was just the starting point. In parallel we started with formalized API Design Reviews to create the necessary awareness in the development teams. After a couple of those reviews we are now revising and extending our guide to reflect the lessons we have learnt.
The design reviews in turn triggered discussions on the various tradeoffs when designing APIs. One of the most compelling discussions was about [the right use of schema to enable evolvable interfaces](https://github.com/Haufe-Lexware/api-style-guide/blob/master/message-schema.md). In that section we discuss how [Postel's Law](https://en.wikipedia.org/wiki/Robustness_principle) (or Robustness Principle) can guide us towards robust and evolvable interfaces, but also how our default approach to message schemas can instead result in tightly coupled and brittle interfaces.
Another new section was triggered by our Service Platform teams asking for [feedback on the error response of our Login API](https://github.com/Haufe-Lexware/api-style-guide/blob/master/error-handling.md#error-response-format).
While we are not claiming that our API design guidance and best practices are foolproof, having this document gives us an incredible leg up on having the right kind of conversations with engineering. And step by step we will be improving it.
This is also one of the reasons why we open sourced our API guide from the start - we have gained so much knowledge from the community that we hope we can give something back. We would love to hear your feedback or get pull requests if you are willing to contribute. This is the genius of Github - making it a journey and a conversation with the larger engineering community out there, and not just a point release. :)


@@ -1,42 +0,0 @@
---
layout: post
title: Impressions from DevOpsCon 2015
subtitle: Notes from DevOpsCon 2015
category: conference
tags: [docker, devops]
author: rainer_zehnle
author_email: rainer.zehnle@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
Elias Weingaertner, Helmut Strasser and I attended [DevOpsCon](http://devopsconference.de/de/) in Munich from 23rd to 25th November 2015.
It was an impressive conference with a lot of new information and also excellent food :-).
In the following I want to focus on my personal highlights.
## Docker Basis Workshop
I joined the workshop **Der Docker Basis Workshop** by [Peter Rossbach](http://www.bee42.com/). Until now I had managed to stay away from Docker because other colleagues in our company have more enthusiasm for tools like that. A **Basis Workshop** offered a good way to get familiar with Docker. The workshop itself focused on pure Docker. Peter introduced the intention and basic structure of the Docker environment and the relationship between Docker images, containers, the daemon, the registry etc. Peter created his slides with Markdown and shipped them using containers. This guy really is a Docker evangelist and is convinced about the stuff he presents. For most of the workshop we worked in the terminal of a virtual machine running Docker and learned about the different commands. It wasn't that easy for me because the workshop was clearly designed for people who are familiar with Linux. I struggled, for example, with creating a simple Dockerfile with vi (I don't know how anybody can work with this editor).
One of the reasons I joined the workshop was to watch Peter present Docker and to see whether it would be a good fit for an in-house workshop. I'm sure this would work out great. I'm also sure that it's a good idea to meet with Peter to review our own Docker journey and get feedback from him.
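For readers who have never touched Docker, the flavour of the exercises was roughly the following (a sketch from memory, not Peter's actual material; the image and container names are made up):
```
# Pull a base image from the registry and start a throwaway interactive container
docker pull ubuntu:14.04
docker run -it --rm ubuntu:14.04 /bin/bash

# List running containers and locally stored images
docker ps
docker images

# Build an image from a Dockerfile in the current directory and run it detached
docker build -t myapp:latest .
docker run -d --name myapp myapp:latest
```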
## Microservices and DevOps Journey at Wix.com
Aviran Mordo from [Wix.com](http://de.wix.com/) presented how Wix.com split their existing monolithic application into separate microservices. This was the session I enjoyed the most. Aviran explained how they broke the existing application into just two services as a first step. They learned a lot about database separation, independent deployment etc. They also learned that it's not a good idea to do too much at the same time. I loved hearing what he labelled **YAGNI** (You ain't gonna need it). It allowed them to focus on business value and get the job done. Wix.com did not implement API versioning, distributed logging and some of the other stuff we are talking about. Aviran emphasized more than once that they strictly focused on tasks that had to be done and cut away the "nice-to-have" things. Nevertheless it took a year to split the monolith into two services! After that they had more experience and they got faster. Once the need for distributed logging arose, they took care of it. After three years Wix.com now has 140 microservices. For me it was an eye-opener that it is absolutely ok to start with a small set of requirements in favor of getting the job done and learning. Every journey begins with a single step!
## Spreadshirts way to continuous delivery
Samuel Ferraz-Leite from [Spreadshirt.com](http://www.spreadshirt.de) presented their way to continuous delivery. They started with a matrix organisation. The teams were separated into
* Frontend DEV
* Backend DEV
* QA
* Operation
The QA team was located in a different building than the DEV team, and so was the Ops team. This setup led to a monolithic app and architecture that didn't scale. Symptoms were ticket ping-pong or phrases like "the feature lies with QA". The deployment of DEV was different from QA. Ops deployed manually. The cycle to deploy even a single feature took days. Samuel quoted [Conway's law](https://en.wikipedia.org/wiki/Conway%27s_law): they got results that mirrored their organization structure. So they reorganized. They created teams with a product owner, DEV and QA. Ops was not included in the first step. Each team got the full responsibility for a service/product. They also got the authority to decide. One outcome was the end of ticket ping-pong, and the whole team felt responsible for product quality. They also started building a microservice architecture and began to reduce technical debt. After the first successful reorganization they integrated Ops into each team. This resulted in excellent telemetry and monitoring capabilities, infrastructure as code (Puppet) and continuous delivery (Rundeck). Cross-team topics like Puppet are addressed in so-called **FriendsOf** groups. Product owners and the whole management foster these groups. Additionally they have weekly standups with representatives of each team.
I was really amazed by the power of restructuring an organization. Of course I know Conway's law. But seeing how heavily it influences the outcome of a whole company really made me think. I mulled it over for our own company.
* How is our company structured regarding the product teams? How do we setup the teams?
* What about ourselves organized as one architect team? Isn't that an antipattern?
* What about SAP, SSMP? CorpDev and SBC? BTS and H2/H3?
## Conclusion
It was a good conference and I especially appreciated learning from the experiences of other companies where they failed and where they are successful. I hope that in three years we can look back and share our successful way with others.


@@ -1,56 +0,0 @@
---
layout: post
title: Impressions from DockerCon 2015 - Part 1
subtitle: Insights, Outlooks and Inbetweens
category: conference
tags: [docker, security]
author: thomas_schuering
author_email: thomas.schuering@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
Once upon a time in a ~~galaxy~~ container far, far away ... We, a bunch of ~~rebels~~ Haufe employees, entered the halls of container wisdom: DockerCon EU 2015 in Barcelona, Spain. Hailing from different departments and locations (Freiburg AND Timisoara, CTO's, ICT, DevOps, ...), the common goal was to learn about the current state of Docker, the technology behind it and its evolving eco-system.
The unexpectedly high-level catering (at least on the first day and at the DockerCon party in the Marine Museum) was asking for more activity than moving from one session to the next, but we had to bear that burden (poor us!).
We met with a couple of companies (Rancher Labs, Sysdig, Zalando and some more) to get a feeling for what is already available on the market and how mature the solutions feel.
The recordings of the [Day 1 General Session](http://blog.docker.com/2015/11/dockercon-eu-2015-day-1-general-session/) and [Day 2 General Session](http://blog.docker.com/2015/11/dockercon-eu-2015-day-2-general-session/) contain the major news and most important things.
Here's what I found most important (in no specific order, and it might be intermixed with other sessions or presentations - but you know that I'm deeply interested in security and stuff ;-)):
- Docker delivers "Enterprise" Features and is getting more mature:
- Security
- Docker Content Trust is getting stronger by hardware signing
Since Docker 1.8, strong cryptographic guarantees over the content of a Docker image can be established by using signing procedures. Here is an excerpt from their [blog](https://blog.docker.com/2015/08/content-trust-docker-1-8/):
[...] Docker Engine uses the publisher's public key to verify that the image you are about to run is exactly what the publisher created, has not been tampered with and is up to date. [...]
At DockerCon EU 2015, support for hardware signing via YubiKey was announced, which strengthens the signing process even more. There's an elaborate [article](https://blog.docker.com/2015/11/docker-content-trust-yubikey/) available.
- Security scans for Images: Project Nautilus
Trusting an image is good and well, but what if the binaries (or packages) used in the image are vulnerable? Usually, distribution maintainers provide information and updates for packages. But what about a Dockerfile that installs its artifacts without using a package manager? Project Nautilus takes care of this situation not only by checking "all" of the vulnerability databases, but by scanning the files of an image. There's not much public information (link :-)) available yet, but it's a promising approach.
- No more exception from isolation: User namespaces (available in Docker 1.10 "dev")
Almost everything in Docker is isolated / abstracted from the host OS. One crucial exception was still present: user ids were used "as is". For example, this would allow the root user of a container to modify a read-only mounted file (mounted via the "volume" command) that is owned by the root user of the host. In the future this will no longer be possible, because the user ids will be "mapped" and the root id "0" inside the container will effectively be treated as "xxxx + 0" outside the container.
- SecComp
Seccomp is a filter technology to restrict the set of system calls available to processes. Imagine a container being compromised: the Docker engine would simply not allow a process inside that container to execute "unwanted" system calls. Setting the system (host) time? Modifying swap params? Such calls (and more) can be "eliminated". (A short command-line sketch of these security features follows after this list.)
- Security made easy
This is mentioned in the recording of [Day 1 General Session](http://blog.docker.com/2015/11/dockercon-eu-2015-day-1-general-session/). Basically it says: If security is hard to do, nobody will do it ...
Docker tries to ease the "security pain", and I ask you to look for that brief mention in the session. Maybe you agree with the points mentioned there :-)
- Does it (Docker) scale?
- Live [scale testing](https://blog.docker.com/2015/11/scale-testing-docker-swarm-30000-containers/) wasn't something I was really looking forward to see, but it was impressive anyway.
- Managing containers
- I DO like Rancher, but I'd love to have something even more powerful ... and there comes "Docker UCP (Universal Control Plane) Beta". UCP and the Docker Trusted Registry are two of the "commercial" products I've seen from Docker. Hopefully, the basic tools are staying on the "Force's light side" of free open source - at least for developers and private users.
- Using Docker in production
- At Haufe, we sometimes seem to be lagging behind new technologies. Some of the presentations at DockerCon put a pretty strong emphasis on being careful and preserving successful processes (esp. dev, ops and security). A quick compilation of presentation links and topics will follow.
In the meantime, have a look at the [great overview](https://github.com/docker-saigon/dockercon-eu-2015) of what happened during both days, with links to most of the presentations, slides and videos.
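To make the security items above a bit more concrete, here is a minimal command-line sketch of the three features (my own illustration, not taken from the sessions; the image name is made up, and the user-namespace option refers to the experimental Docker 1.10 builds mentioned above):
```
# Docker Content Trust: refuse to pull or run unsigned images
export DOCKER_CONTENT_TRUST=1
docker pull myregistry.example.com/myimage:1.0

# User namespaces (experimental at the time): remap container root to an
# unprivileged user on the host when starting the daemon
docker daemon --userns-remap=default

# Seccomp: start a container with a custom syscall filter profile
docker run --security-opt seccomp=/path/to/profile.json myregistry.example.com/myimage:1.0
```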
## Things "inbetween"
... were quite interesting, too. We met with some guys from Zalando (yes, the guys who're screaming a lot in their adverts), who explained how they are using a home-brewed (git-available) facade (or tool if you like) to ease the pain of running a custom PaaS on AWS. The project [STUPS](https://stups.io/) uses a plethora of Docker containers and can be found on its own web page and on [github](https://github.com/zalando-stups).
(To be continued :-))


@@ -1,100 +0,0 @@
---
layout: post
title: APIdays Paris - From Philosophy to Technology and back again
subtitle: A biased report from APIdays Global in Paris
category: conference
tags: [api]
author: martin_danielsson
author_email: martin.danielsson@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
Having just recently come home from the [APIdays](http://www.apidays.io) conference in Paris (Dec 8-9th 2015), memories are still quite fresh. It was a crowded event, the first day hosting a whopping 800 API enthusiasts, ranging from the geekiest of the geeks to a fair amount of suited business people, showing that talking about APIs is no longer something only the most avant-garde of companies, the most high tech of the tech companies, spend their time with. *Au contraire* (we were in Paris after all), APIs are mainstream, and they contribute to the advance of collaboration and automation of the (digital) world as such.
{:.center}
![Eiffel Tower - Paris]({{ site.url }}/images/2015-12-11-paris-eiffeltower.jpg){:style="margin:auto"}
<small>(Image by Martin Danielsson, [CC BY 4.0 License](https://creativecommons.org/licenses/by/4.0/))</small>
This was also one of the reasons the topic of APIdays was chosen as such: **Automating IT, Business and the whole society with APIs**. The partly non-techy twist of the subtitle of APIdays was also reflected in the sessions: split into roughly three (or four) categories, you had a choice between real tech stuff, business-related sessions and quite a few workshops. In addition to that, the opening and ending keynotes were kept in a more philosophical tone, featuring [Christian Fauré](http://www.christian-faure.net/) (in the opening keynote) and renowned French philosopher [Bernard Stiegler](https://en.wikipedia.org/wiki/Bernard_Stiegler) (in the ending keynote), presenting their takes on digital automation, collaboration and its effects on society, with respect to APIs. Even [Steven Willmott](http://twitter.com/njyx) pulled off a rather non-techy, and even non-businessy, talk - rather unusual for a CEO of one of the big players in the API space ([3scale](http://www.3scale.net)).
### API Philosophy
In their talks, both Fauré and Stiegler spoke about the effects of automation on society, but with quite contradictory underlying sentiments, even if the message - in the end - seems similar. But more on that later.
Fauré's topic was "Automation in the 21st century", and the fear of many people that software/robots/automated processes will replace humans in tasks which were previously accomplished manually - the simple fear of becoming superfluous. This is what he calls *Opposition* to automation in society, and it is our main task to instead encourage a culture of *Composition* in order to leverage the good and focus on the capabilities to be creative (and yes, he included a nod to [Peter Drucker](https://en.wikipedia.org/wiki/Peter_Drucker)'s "Culture eats strategy for breakfast" quote). This is where he sees the realm of APIs: as an area of creativity. Composing APIs to create value in ways we have not thought of before.
> Designing an API is an act of creativity.
> <hr>
> <small>Christian Fauré ([@ChristianFaure](https://twitter.com/ChristianFaure))</small>
This act of composition is creativity, as well as designing an API is an act of creativity. Good APIs take time to design, APIs which encourage creative use of them even more so. Fauré also stresses that even with enhanced tooling (and we're just seeing the first big wave of API management and development tools yet), the actual designing of the API is still where the main work lies, or, at least the main lever.
> API management solutions have great benefits, but you still cannot switch your brain off!
> <hr>
> <small>Christian Fauré ([@ChristianFaure](https://twitter.com/ChristianFaure))</small>
Growing ground for such creativity lies for Fauré in the "Hacking Culture". Try out things, be creative, and use APIs as a means to bring your ideas into reality faster and simpler.
Steven Willmott's ([@njyx](http://twitter.com/njyx)) main message in the session ([slides](http://www.slideshare.net/3scale/apis-and-the-creation-of-wealth-in-the-digital-economy-apidays-paris-2015-keynote)) following Christian Fauré's gives the idea of enabling creativity a new spin, but still points in a similar direction: as of now, APIs are still a technical topic. You need to be a developer to be able to really leverage the functionality (see also [twilio's](http://www.twilio.com) billboard ad campaign, e.g. [here](https://twitter.com/ctava1/status/608451693110550529)). Steven thinks the "next big thing" in the API space will be enabling business users to interact more easily with APIs, without needing fundamental engineering skills. Or, as he put it:
> I want to buy my flowers from a florist, not from an engineer!
> <hr>
> <small>Steven Willmott ([@njyx](http://twitter.com/njyx))</small>
The last but not least session of APIdays was by [Bernard Stiegler](https://en.wikipedia.org/wiki/Bernard_Stiegler); drawing a lot from his book "Automatic Society" ([*La Société Automatique*](http://www.amazon.fr/La-Soci%C3%A9t%C3%A9-automatique-Lavenir-travail/dp/2213685657), not yet available in English), he also talked about the need to create new jobs out of automation. His claim is that a closed system, in which automation does not generate value and new opportunities, is doomed to self-destruction by *entropy*. Only a living system, allowing for biological processes (read: life, or life-like organisms), can survive. This is a main reason he sees automation not only as something positive, but also views it highly critically: automating to free up time only makes sense if the free time is used in a sensible way. And no, Facebook is not, according to Stiegler. The search for opportunities to create *disentropy* (as the opposite of entropy) has to be what humankind pursues, albeit the road there is not clear.
### API Technology
This blog post may until now have given the impression I attended a philosophy conference, which was of course not the case. It set an interesting frame to the conference though, opening up a new view on a topic which tended to be extremely techy until now.
Many of the more technical talks dealt with the usual suspects [Microservices and DevOps](https://haufe-lexware.github.io/microservices-devopscon/), these being an integral part of the API economy and architecture style. Some were very enthusiastic; some, such as [Ori Pekelman](http://platform.sh), have had enough of it, tooting the same horn as our Elias, saying it's no news and that he can't stand seeing "unicorns farting containers into microservice heaven" anymore. He had a quite drastic slide accompanying that one (yes, it's an actual quote), but I wasn't quick enough with the camera.
To return to more serious topics: *Hypermedia* was an astonishingly big topic at the conference. Not that it's not a seriously good idea, but adoption now seems to be finding its way into real-world scenarios, with practical and working specifications popping up, which are being adopted at an increasing rate. Hypermedia is leaving the state of a research topic (see the picture below on [HATEOAS](https://en.wikipedia.org/wiki/HATEOAS) - bless you!) and is actually being used.
{:.center}
![HATEOAS - Bless you!]({{ site.url }}/images/2015-12-11-hateoas.jpg){:style="margin:auto"}
<small>(Courtesy of [CommitStrip](http://www.commitstrip.com/en/2015/12/03/apiception/))</small>
Many people are perhaps scared of the seemingly intransparent topic, but there are a lot of really good use cases for hypermedia. Jason Harmon of PayPal/Braintree ([@jharmn](http://twitter.com/jharmn)) pointed to some of the most prominent ones in his talk:
* Paging links inside result sets (*first*, *last*, *previous*, *next*)
* Actions and permissions on actions: If an action is contained within the result, it's allowed, otherwise it isn't
* Self links for caching and refreshing purposes (you know where the result came from)
Adopting Hypermedia techniques for these use cases can help do the heavy lifting of e.g. paging for all clients at once, as opposed to forcing each client to find its own pattern of implementation. The adoption of hypermedia techniques is also due to the existence of (more or less) pragmatic specifications, such as
* [HAL](http://stateless.co/hal_specification.html) (actually [Mike Kelly](http://stateless.co) also attended APIdays)
* [JSON-LD](http://json-ld.org) ([Elf Pavlik](https://twitter.com/elfpavlik) also attended APIdays)
* [Collection+JSON](http://amundsen.com/media-types/collection) ([Mike Amundsen](http://amundsen.com))
* [SIREN](https://github.com/kevinswiber/siren) (by [Kevin Swiber](https://github.com/kevinswiber))
But, to reiterate the theme of "no actual news":
> Hypermedia is actually already in Fielding's dissertation on REST, if you read until the end.
> <hr>
> <small>Benjamin Young ([@BigBlueHat](http://twitter.com/BigBlueHat)), organizer of [RESTFest](http://www.restfest.org)</small>
In order to keep this blog post from getting exceedingly long (I bet nobody's reading this far anyway), I'll just mention a couple of the more interesting topics I had the pleasure to check out in one or more sessions:
* [RDF and SPARQL](http://www.w3.org/TR/rdf-sparql-query/) seems to get adopted more and more; new interesting techniques to offload work to clients make scaling easier (support only simpler queries, not full SPARQL language, let clients assemble results): Ruben Verborgh ([@rubenverborgh](https://twitter.com/rubenverborgh)) - [Slides](http://www.slideshare.net/RubenVerborgh/hypermedia-apis-that-make-sense).
* [GraphQL](https://facebook.github.io/graphql/) looks very promising in terms of providing a very flexible querying language which claims to be backend agnostic (I have to check that out in more detail, despite it being by Facebook): [Slides](http://www.slideshare.net/yann_s/introduction-to-graphql-at-api-days)
### API Hackday
Despite being tempted by a packed agenda of talks on the second day, I chose to participate in the "mini RESTFest" which was organized at the conference venue. Darrel Miller ([@darrel_miller](http://twitter.com/darrel_miller)) of Microsoft (yes, that Microsoft) and Benjamin Young ([@BigBlueHat](http://twitter.com/BigBlueHat)) did a great job in organizing and taming the different opinions which gathered in the hackday space on the second floor of the [*Tapis Rouge*](http://www.tapisrouge.fr/).
The scene setting was, in short, the following: starting with an RFC-style definition of a "Conference Talk Proposal" media type which was conceived by Darrel, what can we do with that?
I *think* Darrel had a master plan of creating something quite lightweight, a transfer media type for conference sessions corresponding to iCal or vCard, but boy, did discussions come up on this. We had [Elf Pavlik](https://twitter.com/elfpavlik) taking part, bringing a lot of ideas into play regarding Hypermedia and JSON-LD. Additionally, [Paul Chavard](https://github.com/tchak) from Captain Train participated in the lively discussion. Darrel explicitly did not want to *boil the ocean* by adopting some larger-scale specification like JSON-LD; he wanted something lean and well specified to make adoption of the media type simple. After a good while, we *sort of* agreed on something in between...
In the end, we did finish a couple of presentable things, such as a translator of the format into JSON-LD (guess who implemented that?), a cool Jekyll template for displaying the proposals on a static website (by Shelby Switzer, [@switzerly](https://twitter.com/switzerly)). My own contribution was to create a [JSON schema](http://json-schema.org/) matching the media type, and implementing an HTML form using [Jeremy Dorn](https://github.com/jdorn)'s quite cool [JSON Editor component](https://github.com/jdorn/json-editor).
The results (and possibly also further proceedings with this) can be viewed on [RESTFests github repository](https://github.com/RESTFest/2015-apidays-conference-talk-api); some of the results are still in the branches.
### Conclusion
I had a good time at the APIdays; the sessions had overall good quality, and the audience was fantastic. I had a lot of opportunities to meet people I knew, and even more important, people I did not yet know. I would definitely recommend going there again.
{:.center}
![APIdays]({{ site.url }}/images/2015-12-11-apidays-logo.png){:style="margin:auto"}
<small>[APIdays](http://www.apidays.io)</small>


@@ -1,55 +0,0 @@
---
layout: post
title: Impressions from DockerCon 2015 - Part 2
subtitle: Highlights and picks from DockerCon 2015
category: conference
tags: [docker]
author: Peter Frey
author_email: peter.frey@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
It has already been some weeks since the last [DockerCon 2015](http://europe-2015.dockercon.com/), and I want to share some ideas and thoughts that I took from there. The conference took place on November 16th and 17th this year in Barcelona and was attended by over 1000 participants from Europe and beyond. The estimate is based on the size of the large [forum auditorium](http://www.ccib.es/spaces/forum-auditorium), which holds up to 3,140 people and was filled for the three plenary sessions.
First of all, some background, although [Docker](https://www.docker.com/what-docker) as a technology or hype or platform - however you conceive it - is by now a well-known term, and a large number of articles have already been published on it in the last two years. Docker was [initially released in 2013](https://en.wikipedia.org/wiki/Docker_\(software\)#History), so it is relatively new. My first experience with Docker was last year, in Spring 2014, when I was asked to do a prototype implementation for the Haufe editorial system (internally known as HRS). Docker was new, and it was new to me, and I struggled with a lot of dos and don'ts when transforming an environment - even the small part chosen for the prototype - that had grown over years and is heavily data-centric.
So I was excited to visit DockerCon and see how Docker has continued to evolve into a very flexible, lightweight virtualization platform. The Docker universe has indeed made big steps under the hood, with the tooling around it, and also with a growing number of third-party adopters improving many aspects of what Docker is and wants to be. Docker may and will revolutionize the way we build and deploy software in the future. And the future starts now, in the projects we bring ahead.
### Virtualization and Docker
The past waves of virtualization are now a commodity; virtualization has reached IT operations and is no longer the domain of development, as it was years ago when we started with VMware for development and testing. It is the basis for today's deployments. Virtualization has many aspects and flavours, but one thing is common to all of them: building up a virtualization platform is rather heavyweight, and using it causes some performance loss compared to deploying software artifacts directly to physical machines - which was still done for exactly this reason, to get maximum throughput and optimal performance for the business. But with virtualization we gain flexibility, being able to move a virtualized computing unit across the hardware below it, especially from an older system to a newer one, without having to rebuild, repackage or redeploy anything. And there is already a big, well-known industry behind virtualization infrastructure and technology.
So what is new with Docker? First of all, Docker is *very lightweight*. It fits well into modern Unix environments as it builds upon kernel features like cgroups, LXC and more to separate the runtime environment of the application components from the base OS, drivers and hardware below. But Docker is not Linux-only; there is also movement in the non-Linux part of our world towards implementing Docker and Docker-related services. The important point is: Docker is not about VMs, it is about containers. Docker as a technology and platform promises to become a radical shift in perspective. But as I am no authority in this domain, I just refer to a recent article on why Docker is [the biggest disruption in Linux virtualization](http://www.nextplatform.com/2015/11/06/linux-containers-will-disrupt-virtualization-incumbents/).
### Docker fundamentals
There was one session that made a deep impression on me. It was the session titled ["Cgroups, namespaces and beyond: what are containers made from"](http://de.slideshare.net/Docker/cgroups-namespaces-and-beyond-what-are-containers-made-from) by Jerome Petazzoni. Jerome showed how Docker builds on and evolved from Linux features like cgroups, namespaces and LXC (Linux containers). Whereas early Docker releases were based on LXC, it now uses its own abstraction from the underlying OS and Linux kernel called libcontainer. In an impressive demo he showed how containers can be built from out-of-the-box Linux features. The main message I took from this presentation: Docker introduces no overhead in comparison to direct deployment on a Linux system, as the mechanisms used by Docker are inherent to the system - they are there and in effect even when one uses Linux without Docker. Docker is lightweight, really, and has nearly no runtime overhead, so it should be the natural way to deploy and use non-OS software components on Linux.
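To get a feeling for what "containers are just kernel features" means, you can build a poor man's container yourself with nothing but stock util-linux and the cgroup filesystem (a rough sketch of the idea, not Jerome's actual demo):
```
# Start a shell in new PID and mount namespaces; inside, it sees itself as PID 1
sudo unshare --pid --fork --mount-proc /bin/bash

# In another terminal: cap the memory of that shell via a cgroup
# (cgroup v1 layout, as used at the time)
sudo mkdir /sys/fs/cgroup/memory/demo
echo $((64*1024*1024)) | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
SHELL_PID=12345   # placeholder: the process id of the unshared shell, taken from 'ps'
echo "$SHELL_PID" | sudo tee /sys/fs/cgroup/memory/demo/cgroup.procs
```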
### Docker and Security
When I started in 2014, one message from IT was: Docker is insecure and not ready for production use, we cannot support it. Indeed there are a couple of security issues related to Docker, especially if the application to deploy depends on NFS to share configuration data and to provide a central pool of storage accessed by a multi-node system (as HRS is, for reasons of scaling and load balancing). In a Docker container you are root, and this also implies root access to underlying services, such as NFS-mounted volumes. This is unfortunately still true today. You will find the discussions in various groups on the internet, for example in ["NFS Security in Docker"](https://groups.google.com/forum/#!topic/docker-user/baFYhFZp0Uw) and many more.
But there are big advances coming with Docker that may make it into the next planned versions. One of them, which I yearn to have, is called user namespace mapping. It was announced at DockerCon in more than one presentation, but I remember it from "Understanding Docker Security", presented by two members of the Docker team, Nathan McCauley and Diogo Monica. The reason why it is not yet final is that it requires further improvements and testing, so it is currently only available in the experimental branch of Docker.
The announcement can be read here: ["User namespaces have arrived in Docker"](http://integratedcode.us/2015/10/13/user-namespaces-have-arrived-in-docker). The concept of user namespaces in Linux itself is described in the [linux manpages](http://man7.org/linux/man-pages/man7/user_namespaces.7.html) and is supported by a few up-to-date Linux kernels. So it is something for the hopefully near future. See also the known restrictions section in the [github project 'Experimental: User namespace support'](https://github.com/docker/docker/blob/master/experimental/userns.md).
Another advance in container security is the [Notary project and Docker Content Trust](https://github.com/docker/notary). It was briefly presented at DockerCon, and I would have to dive deeper into this topic to say more about it. Interesting news is also the support for hardware-based security. To promote it, every participant in one of the general sessions got a YubiKey 4 Nano device, and its use for two-factor authentication with code in a Docker repository was demonstrated in the session. The announcement can be found in ["Yubico Launches YubiKey 4 and Touch-to-Sign Functionality at DockerCon Europe 2015"](http://www.marketwired.com/press-release/yubico-launches-yubikey-4-and-touch-to-sign-functionality-at-dockercon-europe-2015-2073790.htm).
More technical information on it can be read in the blog article [Docker Content Trust](https://blog.docker.com/2015/08/content-trust-docker-1-8/).
See also the [InnoQ article](http://www.infoq.com/news/2015/11/docker-security-containers) and the [presentation](https://blog.docker.com/2015/05/understanding-docker-security-and-best-practices/) from May 2015.
### Stateless vs Persistency
One thing that struck me last year, when I worked on my Docker prototype implementation, was that Docker is perfect for stateless services. But trouble is ahead: in real-world projects, many services tend to be stateful, with more or less heavy dependencies on configuration and data. This has to be handled with care when constructing Docker containers - and I indeed ran into problems with that in my experiments.
I had hoped to hear more on this topic, as I am probably not the only one who has run into issues while constructing Docker containers.
Advances in Docker volumes were indeed mentioned; here I'll point to the session "Persistent, stateful services with docker clusters, namespaces and docker volume magic" by Michael Neale.
### Use Cases and Messages
In contrast to the large number of rather technology-focused sessions was the one held by Ian Miell - author of 'Docker in Practice' - on ["Cultural Revolution - How to Manage the Change Docker brings"](http://de.slideshare.net/Docker/cultural-revolution-how-to-mange-the-change-docker-brings).
A use-case presentation was "Continuous Integration with Jenkins, Docker and Compose", held by Sandro Cirulli, Platform Tech Lead of Oxford University Press (OUP). He presented the DevOps workflow used at OUP for building and deploying two websites providing resources for digitally under-represented languages. The infrastructure runs on Docker containers, with Jenkins used to rebuild the Docker images for the API (based on a Python Flask application) and Docker Compose to orchestrate the containers. The CI workflow and a demo of how continuous integration was achieved were given in the presentation. It is available on [slideshare](http://de.slideshare.net/Docker/continuous-integration-with-jenkins-docker-and-compose), too.
One big message hovered over the whole conference: Docker is evolving ... as an open source project that is based not only on a core team but also heavily on the many contributors making it grow and become a success story. Worth mentioning here is the presentation ["The Missing Piece: when Docker networking unleashing soft architecture 2.0"](http://de.slideshare.net/Docker/the-missing-piece-when-docker-networking-unleashing-soft-architecture-v15), and "Intro to the Docker Project: Engine, Networking, Swarm, Distribution", which raised some expectations that unfortunately were not met by the speaker.
### Session Overview
An overview of the sessions held at DockerCon 2015 in Barcelona can be found [here](https://github.com/ngtuna/dockercon-eu-2015/blob/master/README.md), together with many links to the announcements made, presentations for most sessions on slideshare, and links to youtube videos of the general sessions, of which I recommend viewing the [day 2 closing general session](https://www.youtube.com/watch?v=ZBcMy-_xuYk) with a couple of demonstrations of what can be done using Docker. It is entertaining and amazing.


@@ -1,178 +0,0 @@
---
layout: post
title: Using 'Let's Encrypt' Certificates with Azure
subtitle: Create free valid SSL certificates in 20 minutes.
category: howto
tags: [security, cloud]
author: martin_danielsson
author_email: martin.danielsson@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
[Let's Encrypt](https://letsencrypt.org/) is a new Certificate Authority which has a couple of benefits almost unheard of before: It's free, automated and open. This means you can actually use Let's Encrypt to create real SSL certificates which will be accepted as valid by web browsers and others.
This post will describe how you can leverage a simple [Azure](http://azure.com) Ubuntu VM to create a certificate for any domain you can control yourself (create a CNAME entry for). This how-to really starts with Adam and Eve, so if you already have an Ubuntu machine you can play with, and which is reachable from the public internet, you can skip the provisioning part on Azure.
Please note that Let's Encrypt in no way depends on the VM being provisioned on Azure. It's just what I currently have at hand (as I have an MSDN subscription).
### Prerequisites
You will need the following things, which this how-to does not provide you with:
* A valid Azure Account and some credit to play with (depending on how fast you are, you will need something between 10 Cents and a Euro/Dollar).
* Know-how on how to create a CNAME (DNS entry) for your machine (I usually delegate this to our friendly IT, they know that by heart).
* You need to know which DNS name you want for your certificate. I will assume `myserver.contoso.com` throughout this blog post.
### Provision an Ubuntu VM on Azure
To start things, open up the [Azure Portal](https://portal.azure.com) using your favorite web browser, and log in so that you have access to the Azure portal. Then click *Virtual machines (Classic)*, then *Add +*.
{:.center}
![New VM]({{ site.url }}/images/letsencrypt-1-new-vm.png){:style="margin:auto"}
Then, search for `ubuntu` and select *Ubuntu Server 14.04 LTS* (I think you can choose a regular Ubuntu, too, but this one definitely works).
{:.center}
![Select Ubuntu]({{ site.url }}/images/letsencrypt-2-select-ubuntu.png){:style="margin:auto"}
Specify the correct settings for the VM. I chose the following specs for the VM:
* Hostname `letsencrypt` (Azure will pick a name for you starting with `letsencrypt`)
* User name `azure` (or whatever you want)
* Choose *Password* and enter a valid password
* Standard A1 (1,75 GB of RAM, 1 Core, completely sufficient)
* Add two endpoints: http (tunnel port 80) and https (tunnel port 443). See image below.
* Leave the rest of the settings at their defaults
{:.center}
![VM Settings]({{ site.url }}/images/letsencrypt-3-vm-settings.png){:style="margin:auto"}
When you're done and all your settings have been confirmed (*OK*), click the *Create* button to provision your VM.
**Note**: You will be charged for running the VM on Azure. This is the only cost you will generate when creating the certificate, though.
This will take some time (around 5 minutes), but after that, you will find the information on your machine in the following way:
{:.center}
![Azure VM Provisioned]({{ site.url }}/images/letsencrypt-4-azure-name.png){:style="margin:auto"}
The automatically created DNS entry for your machine is displayed there, and this is the name you can use to connect to the machine using your favorite SSH tool (`ssh` if you're on Linux or Mac OS X, e.g. PuTTY if you're on Windows).
### Set the CNAME to your VM
Now that you have a running Ubuntu machine we can play with, make sure the desired host name resolves to the Ubuntu VM DNS name from the above picture. Pinging `myserver.contoso.com` must resolve to your Ubuntu machine.
If you don't know how to do this, contact your IT department or somebody else who knows how to do it. This is highly depending on your DNS provider, so this is left out here.
### Setting up your Ubuntu machine for Let's Encrypt
Now, using an SSH client, log into your machine (using the user name and password you provided when provisioning it). I will assume that your user is allowed to `sudo`, which is the case if you provisioned the Ubuntu machine according to the above.
First, install a `git` client:
```
azure@letsencrypt:~$ sudo apt-get install git
```
Then, clone the `letsencrypt` GitHub repository into the `azure` user's home directory:
```
azure@letsencrypt:~$ git clone https://github.com/letsencrypt/letsencrypt
```
Get into that directory, and call the `letsencrypt-auto` script using the `certonly` parameter. This means Let's Encrypt will just create a certificate, but not install it onto some machine. Out of the box, Let's Encrypt is able to automatically create and install a certificate onto a web server (currently, Apache is supported, nginx support is on its way), but that requires the web server to run on the very same machine. But as I said, we'll just create a certificate here:
```
azure@letsencrypt:~$ cd letsencrypt/
azure@letsencrypt:~$ ./letsencrypt-auto certonly
```
This will install quite a few additional packages onto your machine, which is also in part why I prefer to do this on a separate machine. The installation process and creation of the Let's Encrypt environment takes a couple of minutes. Don't get nervous, it will work.
Using only the default values, you will end up with a 2048 bit certificate valid for 3 months. If you issue `./letsencrypt-auto --help all` you will see extensive documentation of the various command line parameters. The most useful one is presumably `--rsa-key-size`, which you can use to e.g. create a 4096 bit certificate.
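For example, a request for a stronger key for our example domain would look like this (a sketch; the client was still in beta at the time of writing, so double-check the flags against `--help all`):
```
azure@letsencrypt:~$ ./letsencrypt-auto certonly --rsa-key-size 4096 -d myserver.contoso.com
```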
### Using Let's Encrypt
In the first step, Let's Encrypt will ask for an administration email address; this is the email address which will be used if some problems occur (which normally doesn't happen). You will only have to provide this address once, subsequent calls of `letsencrypt-auto` will not ask for it (it's stored in `/etc/letsencrypt`).
After that, you will have to accept the license terms:
{:.center}
![License Terms]({{ site.url }}/images/letsencrypt-5-terms.png){:style="margin:auto"}
In the next step, enter the domain name(s) you want to create the certificate for:
{:.center}
![Domain Name]({{ site.url }}/images/letsencrypt-6-domain-name.png){:style="margin:auto"}
Usually, you will create one certificate per domain you will use. Exceptions will be for example when creating a certificate which is both valid for `www.contoso.com` and `contoso.com`, if your web server answers to both. In this case, we will just provide `myserver.contoso.com` (this might be a web service or similar).
If everything works out, Let's Encrypt will have created the certificate files for you in the `/etc/letsencrypt/live` folder. If you run into trouble, see below section of common problems.
### Getting the certificates to a different machine
In order to get the certificates off the Ubuntu VM, issue the following commands (first, we'll go `root`):
```
azure@letsencrypt:~$ sudo bash
root@letsencrypt:~# cd /etc/letsencrypt/live
root@letsencrypt:/etc/letsencrypt/live# ll
total 20
drwx------ 5 root root 4096 Dec 16 13:50 ./
drwxr-xr-x 8 root root 4096 Dec 15 14:38 ../
drwxr-xr-x 2 root root 4096 Dec 16 13:50 myserver.contoso.com/
root@letsencrypt:/etc/letsencrypt/live# ll myserver.contoso.com
total 8
drwxr-xr-x 2 root root 4096 Dec 16 13:50 ./
drwx------ 5 root root 4096 Dec 16 13:50 ../
lrwxrwxrwx 1 root root 43 Dec 16 13:50 cert.pem -> ../../archive/myserver.contoso.com/cert1.pem
lrwxrwxrwx 1 root root 44 Dec 16 13:50 chain.pem -> ../../archive/myserver.contoso.com/chain1.pem
lrwxrwxrwx 1 root root 48 Dec 16 13:50 fullchain.pem -> ../../archive/myserver.contoso.com/fullchain1.pem
lrwxrwxrwx 1 root root 46 Dec 16 13:50 privkey.pem -> ../../archive/myserver.contoso.com/privkey1.pem
```
You should see the four files belonging to the certificate inside the `/etc/letsencrypt/live` folder. We will tar these up and make sure you can access them (securely) from the outside:
```
root@letsencrypt:/etc/letsencrypt/live# tar cfvzh ~azure/keys_contoso_com.tgz myserver.contoso.com/*
myserver.contoso.com/cert.pem
myserver.contoso.com/chain.pem
myserver.contoso.com/fullchain.pem
myserver.contoso.com/privkey.pem
root@letsencrypt:/etc/letsencrypt/live# chown azure:azure ~azure/keys_contoso_com.tgz
root@letsencrypt:/etc/letsencrypt/live# exit
azure@letsencrypt:~$
```
Now you'll have a file called `keys_contoso_com.tgz` in the home directory of the `azure` user. Pick your favorite tool to get the file off the machine, e.g. WinSCP on Windows or `scp` on Linux or Mac OS X machines.
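For example, from a Linux or Mac OS X machine (a sketch; replace the host name with the DNS name Azure generated for your VM):
```
$ scp azure@letsencrypt-xyz.cloudapp.net:keys_contoso_com.tgz .
$ tar xzf keys_contoso_com.tgz
```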
### Backing up `/etc/letsencrypt`
If you plan to re-use the settings of Let's Encrypt, please also back up the entire `/etc/letsencrypt` folder and store that in a safe place.
### Converting to other formats
In some cases, you can just use the PEM certificate files (e.g. for nginx or Apache). In other cases, you will need to convert these certificate files into a different format, like `PFX`. For more information on that, please see the following website: [https://www.sslshopper.com/ssl-converter.html](https://www.sslshopper.com/ssl-converter.html).
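If you have OpenSSL at hand, the PFX conversion can also be done locally on the command line (a sketch; OpenSSL will prompt for an export password):
```
$ openssl pkcs12 -export -out myserver_contoso_com.pfx \
    -inkey privkey.pem -in cert.pem -certfile chain.pem
```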
### Using the certificates
Now you're all set and done. You can now use the certificates on the actual machine you want to use them on. Before you do that, make sure the CNAME is set to the correct machine (your web server, web service,...). Depending on the TTL of the DNS setting, this may take some time, but your friendly DNS responsible will be able to tell you this.
**Side note**: I successfully used certificates created in this way with the Azure API Management service to get nicer looking names for my API end points (e.g. `api.contoso.com` instead of `api-983bbc2.azure-api.net` or similar) and developer portal (e.g. `https://portal.contoso.com`).
### VM disposal
After you have finished all steps, make sure you power off your virtual machine (using the Azure Portal). In case you want to re-use it for other certificates, just power it off but keep its storage, so that you can power it back on again. This will still generate some running cost, but it is almost negligible (a couple of cents per month).
If you want to get rid of all running costs for the VM, delete the VM altogether, including the storage (the Azure Portal will ask you whether you want to do this automatically).
### Common Problems
* **Let's Encrypt cannot connect**: Let's Encrypt starts its own small web server which is used to verify that the CNAME actually belongs to the machine. If ports 80 and/or 443 are already occupied, this will obviously not work. Likewise, if ports 80 and 443 are not reachable from the public internet (you forgot to specify the endpoints?), Let's Encrypt will also fail.
* **Domain name blacklisted**: If you try to create a certificate for a domain name whose top-level domain belongs to one of the larger providers, chances are that the request will be rejected (`Name is blacklisted`). This also applies to any machine names directly on Azure (`*.cloudapp.net`). You will need your own domain for this to work.
### Disclaimer
At the time of writing, Let's Encrypt is in public *beta*, which means I would not recommend using these certificates for production. For testing SSL-related things, they may very well be useful anyhow.
Additionally, by default the certificates are only valid for three months. If you need to keep renewing a certificate, you should probably think of either getting a paid certificate valid for a longer period of time, or actually installing Let's Encrypt on your real web server. On that machine, you could create a `cron` job to renew the certificate every two months, for example as sketched below.
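A rough sketch of such a `cron` entry is shown below (the client path and flags are assumptions and depend on how and where you installed the Let's Encrypt client):
```
# /etc/crontab entry: re-run issuance at 03:00 on the 1st of every second month (placeholder paths)
0 3 1 */2 * root /opt/letsencrypt/letsencrypt-auto certonly --standalone -d myserver.contoso.com >> /var/log/letsencrypt-renew.log 2>&1
```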

View File

@ -1,53 +0,0 @@
---
layout: post
title: DevOpsCon 2015 - Is it really about the tools?
subtitle: My opinionated findings from DevOpsCon 2015 in Munich
category: conference
tags: [devops, microservice]
author: Elias Weingaertner
author_email: elias.weingaertner@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
Two weeks ago, I attended the DevOps Conference (DevOpsCon) in Munich. As expected, it turned out to be the Mecca for Docker fans and microservice enthusiasts. While I really enjoyed the conference, I drew two somewhat controversial conclusions that are open for debate:
1. *Microservices are dinosaurs:* When people spoke about Microservices at the conference, they were often very convinced that Microservices are a new and bleeding-edge concept. I disagree. In my opinion, Microservices are a new name for concepts that have been partially known for forty years.
2. *DevOps is not about technology:* Listening to many talks, I got the impression DevOps is all about containers. People somehow suggested that one just needs to get Docker + Docker Compose + Consul up and running to transform into a DevOps shop. Or CoreOS + Rocket. Or Whizzwatz plus Watzwitz. I disagree. Introducing DevOps concepts to your organization is mainly about getting your organizational structure right.
### What is new about Microservices?
To be honest, I really like the design principles of Microservices. A microservice does one particular job (*separation of concerns*). It is cleanly separated from other services (*isolation*), up to a degree where a microservice is responsible for its own data persistence and hence is integrated with a dedicated database.
The rest of the microservice story is told quickly. Divide your system into individual functional units that are available via REST APIs. Label them microservices. Build applications by composing functionality from different microservices. As of 2015, wrapping individual microservices into containers using a Docker-based toolchain makes it super easy to run your microservice-based ecosystem at almost any infrastructure provider, no matter if it is Amazon, Azure, Rackspace, DigitalOcean or your personal hosting company that has installed Docker on its machines. I totally agree that Docker and related technologies help a lot in making Microservices actually work. I also think that wrapping functional units into containers is an exciting pattern for building service landscapes these days. But is it a novel concept? Not at all.
In fact, many concepts that are nicely blended in today's microservices cocktail are more like dinosaurs that have escaped from the Jurassic Park of computer science - not necessarily a bad thing, remembering that dinosaurs are still some of the most powerful and interesting creatures that have ever lived on earth. But which concepts am I thinking of?
First of all, Micro Kernels! The basic idea of micro kernels was to design a modular operating system, in which basic functionalities like device drivers, file system implementations and applications are implemented as services that operate on top of a thin operating system kernel providing basic primitives like scheduling, inter-process communication, device isolation and hardware I/O. In essence, the micro kernel is a general execution context, and not more. All high-level operating system functionality, no matter if it is a VFAT driver or a window manager, would operate on top of the micro kernel. And guess what: the operating system works simply because all services on top of the microkernel cleverly interact with each other, using an API delivered by the microkernel. The idea of micro kernels was first introduced in 1970 by Hansen[1], with a lot of research having been carried out in this domain since then. Replace the micro kernel with a container run-time of choice (CoreOS, Docker, Docker plus Docker Compose) - and it becomes clear that Docker can be seen as a microkernel infrastructure for distributed applications, of course at a higher abstraction level.
Another fundamental cornerstone of Microservices as they are considered today is the REST API. Computer scientists have been discussing APIs for just as long. For example, modern operating systems (OS) like Windows or Linux do a great job in maintaining long-standing APIs that enable loose coupling between software and the OS. While we don't even notice it anymore, this is the reason why we can download pre-compiled software binaries or "Apps" to a computer or a smartphone, install them, and run them. One of the reasons this works like a charm are standardization efforts like POSIX[2] that were carried out long before people even thought about Linux containers.
In the distributed systems domain, we have had a lot of discussions about how to do evolvable interface design over the past 20 years, mostly connected to technologies like CORBA, Java RMI, XML-RPC or newer stuff like Apache Thrift, Protocol Buffers and now REST. At their core, the discussions have always tackled the same questions: How do we best version interfaces? Should we version at all? Or simply keep the old interfaces? In the OS domain, Microsoft is a good example: Windows still allows unmodified Win32 software from the mid-nineties to be executed on today's versions of Windows - in the year 2015.
At DevOpsCon, I voiced this opinion during the Microservice Lifecycle workshop given by Viktor Farcic. Many people agreed and also said that we're constantly re-inventing the wheel and struggling with the same questions. We had a nice discussion about how the modern REST and microservice world is related to SOAP. And this was in fact the motivation to write this article.
### DevOps is not about technology
First of all, I am not the first person to make this claim. In fact, at the conference there were a number of people who reported that they needed to adapt their organization's structure to effectively work with microservice and DevOps concepts.
Many speakers at the conference cited the famous observation by Melvin Conway from 1967 that is commonly referred to as Conway's Law.
> Organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations
> <hr>
> <small>Melvin Conway, 1967</small>
As Rainer Zehnle also mentioned, this led me to the assumption that effectively doing Microservices and DevOps somehow doesn't work well in matrix-based organizations. Matrix-based organizations are often monoliths themselves, in which a lot of projects are tightly coupled due to shared responsibilities of project teams and individuals.
As already mentioned by Rainer in his blog post, I was really impressed by how the folks at Spreadshirt - thank you, Samuel, for sharing this! - restructured their once matrix-based organization, which produced a huge monolith, into a company that is able to effectively develop a microservice-based enterprise architecture. I hope that success stories like this are not only shared among software architects and developers in the future, as a faster time to market for software artifacts does not only make a developer happy, but also the manager who carries the wallet.
### Conclusion ###
I took a lot from the conference - and afterwards I have constantly asked myself whether we're ready yet for DevOps and Microservices as an organization. Are we? Probably not yet, although we're certainly on the right track. And we're in good company: from many talks at the coffee table I got the feeling that many companies in the German IT industry are in the same phase of transition as we are. How do we get more agile? How do we do microservices? Should we have a central release engineering team? Or leave that to DevOps? I am excited to see which answers we will find at Haufe. We'll keep you updated. Promised.
[1] Per Brinch Hansen. 1970. The nucleus of a multiprogramming system. Commun. ACM 13, 4 (April 1970), 238-241. DOI=[http://dx.doi.org/10.1145/362258.362278](http://dx.doi.org/10.1145/362258.362278)
[2] [http://www.opengroup.org/austin/papers/posix_faq.html](http://www.opengroup.org/austin/papers/posix_faq.html)

View File

@ -1,205 +0,0 @@
---
layout: post
title: Log Aggregation with Fluentd, Elasticsearch and Kibana
subtitle: Introduction to log aggregation using Fluentd, Elasticsearch and Kibana
category: howto
tags: [devops, docker, logging]
author: doru_mihai
author_email: doru.mihai@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
With an increasing number of systems decoupled and scattered throughout the landscape, it becomes increasingly difficult to track and trace events across all of them.
Log aggregation solutions provide a series of benefits to distributed systems.
The problems they tackle are:
- Centralized, aggregated view over all log events
- Normalization of log schema
- Automated processing of log messages
- Support for a great number of event sources and outputs
One of the most prolific open source solutions on the market is the [ELK stack](https://www.elastic.co/videos/introduction-to-the-elk-stack) created by Elastic.
{:.center}
![Log aggregation Elk]({{ site.url }}/images/logaggregation-elk.png){:style="margin:auto; width:70%"}
ELK stands for Elasticsearch, Logstash and Kibana, which are respectively Elastic's search engine, log shipper and visualization front end.
Elasticsearch becomes the nexus for gathering and storing the log data, and it is not exclusive to Logstash.
Another very good data collection solution on the market is Fluentd, and it also supports Elasticsearch (amongst others) as the destination for its gathered data. Using the same data repository and front-end solutions, this becomes the EFK stack. If you do a bit of searching you will discover that many people have chosen to substitute Elastic's Logstash with Fluentd, and we will talk about why that is in a minute.
{:.center}
![Log aggregation Efk]({{ site.url }}/images/logaggregation-efk.png){:style="margin:auto; width:40%"}
# Logstash vs FluentD
Both of them are very capable, have [hundreds](https://www.elastic.co/guide/en/logstash/current/input-plugins.html) and [hundreds](http://www.fluentd.org/plugins) of plugins available and are being maintained actively by corporation backed support.
### Technology - Fluentd wins
The big elephant in the room is that Logstash runs on JRuby, while Fluentd is [written in Ruby with performance sensitive parts in C](http://www.fluentd.org/faqs). As a result, the overhead of running a JVM for the log shipper translates into large memory consumption, especially when you compare it to the footprint of Fluentd. The only advantages that Logstash can still invoke are the good parallelism support that the JVM brings and very good [Grok](https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html) support.
The only downside for Fluentd used to be the lack of support for Windows, but even that has been [solved](https://github.com/fluent/fluentd/pull/674). [Grok](https://github.com/kiyoto/fluent-plugin-grok-parser) support is also available for Fluentd, and you can even re-use the grok libraries you have already built, including [Logstash grok patterns](https://github.com/elastic/logstash/tree/v1.4.2/patterns).
### Shippers - Fluentd wins
Both of them, however, offer the option of deploying lightweight components that only read and send the log messages to a fully fledged instance that does the necessary processing. These are called log forwarders, and both projects have lightweight forwarders written in Go. As of this writing, Elastic has released a replacement for its [logstash-forwarder](https://github.com/elastic/logstash-forwarder) (formerly called Lumberjack); it is built on top of its new data shipper platform [Beats](https://www.elastic.co/products/beats) and is called [Filebeat](https://github.com/elastic/beats/tree/master/filebeat).
This new Logstash forwarder allows for TLS-secured communication with the log shipper, something the old one was not capable of, but it still lacks a very valuable feature that Fluentd offers: buffering.
### Resiliency - Fluentd wins
As mentioned previously, Fluentd offers buffering, something you get "for free", and coupled with active client-side load balancing you get a very competent solution without a large footprint.
On the other side, Logstash doesn't have buffering and only has an in-memory queue of messages that is fixed in length (20 messages), so if messages can't get through, they are lost. To alleviate this weakness, the common practice is to set up an external queue (like [Redis](http://www.logstash.net/docs/1.3.2//tutorials/getting-started-centralized)) for persistence of the messages in case something goes wrong at either end. They are [working on it](https://github.com/elastic/logstash/issues/2605) though, so in the future we might see an improvement in this area.
Fluentd offers in-memory or file based buffering coupled with [active-active and active-standby load balancing and even weighted load balancing](http://docs.fluentd.org/articles/high-availability) and last but not least it also offers [at-most-once and at-least-once](http://docs.fluentd.org/articles/out_forward#requireackresponse) semantics.
# Additional considerations
Logstash benefits from a more chiselled, mature implementation, due to the fact that the core and a lot of the essential plugins are maintained by Elastic. Some may argue that it's easier to deploy a JRE and the Logstash jar and be done with it, while others would consider it overkill to have a JVM running for such a small task - plus there is the need to deploy and maintain a separate queueing solution for resiliency.
Fluentd provides just the core and a couple of input/output plugins and filters; the rest of the large number of plugins available are community-driven, so you are exposed to the risk of potential version incompatibilities and a lack of documentation and support.
I have personally seen that there is a bit of chaos, since each plugin creator defines his own set of configuration input variables and there isn't a sense of consistency when you look at different plugins. You will encounter variables that are optional and have different default values, variables that are not properly documented but whose usage you can deduce from the examples the author offers, and virtually every known naming convention will appear in your config file.
# What next?
Well, as you can probably already tell, I have chosen to go with Fluentd, and it quickly became apparent that I needed to integrate it with Elasticsearch and Kibana to have a complete solution. That wasn't a smooth ride, due to two issues:
- Timestamps were sent to Elasticsearch without milliseconds
- All field values were by default analyzed fields
For communicating with Elasticsearch I used the plugin [fluent-plugin-elasticsearch](https://github.com/uken/fluent-plugin-elasticsearch) as presented in one of their very helpful [use case tutorials](http://docs.fluentd.org/articles/free-alternative-to-splunk-by-fluentd).
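Installing the plugin is a one-liner for a gem-based fluentd setup (a sketch; use `td-agent-gem` instead if you run td-agent):
~~~
# Install the Elasticsearch output plugin for fluentd
fluent-gem install fluent-plugin-elasticsearch
~~~
{: .language-bash}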
This plugin allows fluentd to impersonate Logstash by just enabling the setting `logstash_format` in the configuration file. I snooped around a bit and found that basically the only difference is that the plugin makes sure the message sent has a timestamp field named `@timestamp`.
And here we arrive at our first problem....
### Timestamp fix
This is a pain because if you want to properly visualize a set of log messages gathered from multiple systems, in sequence, to be able to see exactly which step followed which... well, you see the problem.
Let's take a look at what fluentd sends to Elasticsearch. Here is a sample log file with 2 log messages:
~~~
2015-11-12 06:34:01,471 [ ajp-apr-127.0.0.1-8009-exec-3] LogInterceptor INFO ==== Request ===
2015-11-12 06:34:01,473 [ ajp-apr-127.0.0.1-8009-exec-3] LogInterceptor INFO GET /monitor/broker/ HTTP/1.1
~~~
{: .language-java}
A message sent to Elasticsearch from fluentd would contain these values:
*-this isn't the exact message, this is the result of the stdout output plugin-*
~~~
2015-11-12 06:34:01 -0800 tag.common: {"message":"[ ajp-apr-127.0.0.1-8009-exec-3] LogInterceptor INFO ==== Request ===","time_as_string":"2015-11-12 06:34:01 -0800"}
2015-11-12 06:34:01 -0800 tag.common: {"message":"[ ajp-apr-127.0.0.1-8009-exec-3] LogInterceptor INFO GET /monitor/broker/ HTTP/1.1\n","time_as_string":"2015-11-12 06:34:01 -0800"}
~~~
{: .language-java}
I added the `time_as_string` field in there just so you can see the literal string that is sent as the time value.
This is a known issue; initially it was the fault of fluentd for not supporting that level of granularity, but it has been [fixed](https://github.com/fluent/fluentd/issues/461). Sadly, the fix has not made its way into the Elasticsearch plugin, and so [alternatives have appeared](https://github.com/shivaken/fluent-plugin-better-timestamp).
The fix basically involves manually formatting the `@timestamp` field to have the format `YYYY-MM-ddThh:mm:ss.SSSZ`. So you can either bring the previously mentioned `fluent-plugin-better-timestamp` into your log processing pipeline to act as a filter that fixes your timestamps, OR you can build it yourself.
In order to build it yourself you only need the `record_transformer` filter that is part of the core of plugins that fluentd comes with and that I anyway would recommend you use for enriching your messages with things like the source hostname for example.
Next you need to parse the timestamp of your logs into separate date, time and millisecond components (which is basically what the better-timestamp plugin asks you to do, to some extent), and then create a filter that matches all the messages you will send to Elasticsearch and creates the `@timestamp` value by concatenating the three components. This makes use of the fact that fluentd also allows you to run Ruby code within your record_transformer filters to accommodate more special log manipulation tasks.
~~~
<filter tag.**>
type record_transformer
enable_ruby true
<record>
@timestamp ${date_string + "T" + time_string + "." + msec + "Z"}
</record>
</filter>
~~~
{: .language-xml}
The result is that the above sample will come out like this:
~~~
2015-12-12 05:26:15 -0800 akai.common: {"date_string":"2015-11-12","time_string":"06:34:01","msec":"471","message":"[ ajp-apr-127.0.0.1-8009-exec-3] LogInterceptor INFO ==== Request ===","@timestamp":"2015-11-12T06:34:01.471Z"}
2015-12-12 05:26:15 -0800 akai.common: {"date_string":"2015-11-12","time_string":"06:34:01","msec":"473","message":"[ ajp-apr-127.0.0.1-8009-exec-3] LogInterceptor INFO GET /monitor/broker/ HTTP/1.1\n","@timestamp":"2015-11-12T06:34:01.473Z"}
~~~
{: .language-java}
*__Note__: you can use the same record_transformer filter to remove the 3 separate time components after creating the `@timestamp` field via the `remove_keys` option.*
### Do not analyse
There are 2 reasons why you shouldn't want your fields to be analyzed in this scenario:
- It will potentially increase the storage requirements
- It will make it impossible to do proper analysis and visualization on your data if you have field values that contain hyphens, dots or other special characters.
Ok, so first, why does it increase the storage requirements?
Well, while researching the proper hardware sizing requirements for setting up our production EFK installation, I stumbled upon [this](http://peter.mistermoo.com/2015/01/05/hardware-sizing-or-how-many-servers-do-i-really-need/) post that goes into detail about what happens, why, and how big the problem can become.
Worst case scenario, you could be using up to **40% more** disk space than you really need. Pretty bad huh?
The second issue, which becomes apparent much quicker than the first, is that when you try to use Kibana to visualize your data, fields that contain hyphens, for example, will appear split up and duplicated when used in visualizations.
For instance, by using the record_transformer I would send the hostname and also a statically specified field called `sourceProject`, to be able to group together messages that came from different identical instances of a project application.
Using this example configuration I tried to create a pie chart showing the number of messages per project for a dashboard. Here is what I got.
~~~
<filter tag.**>
type record_transformer
enable_ruby true
<record>
@timestamp ${date_string + "T" + time_string + "." + msec + "Z"}
sourceProject Test-Analyzed-Field
</record>
</filter>
~~~
{: .language-xml}
Sample output from stdout:
~~~
2015-12-12 06:01:35 -0800 clear: {"date_string":"2015-10-15","time_string":"06:37:32","msec":"415","message":"[amelJettyClient(0xdc64419)-706] jetty:test/test INFO totallyAnonymousContent: http://whyAreYouReadingThis?:)/history/3374425?limit=1","@timestamp":"2015-10-15T06:37:32.415Z","sourceProject":"Test-Analyzed-Field"}
~~~
{: .language-java}
And here is the result of trying to use it in a visualization:
{:.center}
![Log aggregation analyzed]({{ site.url }}/images/logaggregation-analyzed-field.png){:style="margin:auto; width:35%"}
I should mention that what you are seeing is the result of 6 messages that all have the field sourceProject set to the value "Test-Analyzed-Field".
Sadly, once you put some data into Elasticsearch, indices are automatically created (by the fluent-plugin-elasticsearch) and mappings along with them, and once a field is mapped as being analyzed [it cannot be changed](https://www.elastic.co/blog/changing-mapping-with-zero-downtime).
Curiously, this did not happen when using Logstash, which made me look into how it handles this problem. I discovered the issue had also been discussed in the context of the fluent-plugin-elasticsearch, and [the solution was posted there](https://github.com/uken/fluent-plugin-elasticsearch/issues/33) along with a request to include it in future versions of the plugin.
And the solution is: when Elasticsearch creates a new index, it relies on the existence of a template to create that index. Logstash comes with a template of its own that tells Elasticsearch to create not-analyzed copies of the fields it sends, so that users can benefit from the analyzed fields for searching and the not-analyzed fields when doing visualizations. That template can be found [here](https://github.com/logstash-plugins/logstash-output-elasticsearch/blob/master/lib/logstash/outputs/elasticsearch/elasticsearch-template.json).
What you basically need to do is a curl PUT with that JSON content to Elasticsearch; afterwards, all newly created indices prefixed with `logstash-*` will use that template. Be aware that with the fluent-plugin-elasticsearch you can specify your own index prefix, so make sure to adjust the template to match your prefix:
~~~
curl -XPUT localhost:9200/_template/template_doru -d '{
"template" : "logstash-*",
"settings" : {....
}'
~~~
{: .language-bash}
The main thing to note in the whole template is this section:
~~~
"string_fields" : {
"match" : "*",
"match_mapping_type" : "string",
"mapping" : {
"type" : "string", "index" : "analyzed", "omit_norms" : true,
"fielddata" : { "format" : "disabled" },
"fields" : {
"raw" : {"type": "string", "index" : "not_analyzed", "doc_values" : true, "ignore_above" : 256}
}
}
}
~~~
{: .language-json}
This tells Elasticsearch that for any string field it receives, it should create a mapping of type string that is analyzed, plus an additional field with a `.raw` suffix that is not analyzed.
The `.raw` (not analyzed) field is the one you can safely use in visualizations, but do keep in mind that this creates the scenario mentioned before, where you can have up to 40% inflation in storage requirements, because you will store both analyzed and not-analyzed versions of each string field.
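To illustrate, here is a sketch of an Elasticsearch terms aggregation that counts messages per project using the `.raw` copy of the field (host and index prefix are assumptions):
~~~
# Aggregate message counts per project using the not_analyzed .raw field
curl -XGET 'http://localhost:9200/logstash-*/_search?pretty' -d '{
  "size": 0,
  "aggs": {
    "by_project": {
      "terms": { "field": "sourceProject.raw" }
    }
  }
}'
~~~
{: .language-bash}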
# Have fun
So, now you know what we went through here at [HaufeDev](http://haufe-lexware.github.io/), what problems we faced and how we overcame them.
If you want to give it a try, take a look at [our docker templates on github](https://github.com/Haufe-Lexware/docker-templates); there you will find a [logaggregation template](https://github.com/Haufe-Lexware/docker-templates/tree/master/logaggregation) for an EFK setup plus a shipper that can transfer messages securely to the EFK solution, and you can have it up and running in a matter of minutes.
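Assuming you have Docker and Docker Compose installed, getting started might look roughly like this (the compose file location inside the repository is an assumption; check the repository's README for the authoritative steps):
~~~
# Clone the templates and bring up the log aggregation stack (sketch, not a definitive guide)
git clone https://github.com/Haufe-Lexware/docker-templates.git
cd docker-templates/logaggregation
docker-compose up -d
~~~
{: .language-bash}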

View File

@ -1,218 +0,0 @@
---
layout: post
title: Better Log Parsing with Fluentd
subtitle: Description of a couple of approaches to designing your fluentd configuration.
category: howto
tags: [devops, logging]
author: doru_mihai
author_email: doru.mihai@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
When you start to deploy your log shippers to more and more systems, you will encounter the issue of adapting your solution to parse whatever log format and source each system is using. Luckily, fluentd has a lot of plugins, and you can approach the problem of parsing a log file in different ways.
The main reason you may want to parse a log file, and not just pass along its contents, is that you may have multi-line log messages that you want to transfer as a single element rather than split up into an incoherent sequence.
Another reason would be log files that contain multiple log formats that you would want to parse into a common data structure for easy processing.
And last but not least, there is the case where you have multiple log sources (perhaps each using a different technology) and you want to parse them and aggregate all information into a common data structure for coherent analysis and visualization of the data.
Below I will enumerate a couple of strategies that can be applied for parsing logs.
## One Regex to rule them all
The simplest approach is to just parse all messages using the common denominator. This leads to a very black-box type of approach to your messages, deferring any parsing efforts to a later time or to another component further downstream.
In the case of a typical log file, a configuration can look something like this (but not necessarily):
~~~
<source>
type tail
path /var/log/test.log
read_from_head true
tag test.unprocessed
format multiline
format_firstline /\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{3}/
#we go with the most generic pattern where we know a message will have
#a timestamp in front of it, the rest is just stored in the field 'message'
format1 /(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{3}) (?<message>(.|\s)*)/
</source>
~~~
{: .language-xml}
You will notice we still do a bit of parsing; the minimal level is to have a multiline format that splits the log contents into separate messages, and then to push the contents on.
The reason we do not just put everything into a single field with a greedy regex pattern is to have the correct timestamp pushed along with the rest of the message, showing the time of the log event and not the time when the log message was read by the log shipper.
If more pieces are common to all messages, they can be included in the regex for separate extraction, if they are of interest of course.
## Divide & Conquer
As the name suggests, with this approach you try to create an internal routing that allows you to precisely target log messages based on their content further downstream.
An example of this is shown in the configuration below:
~~~
#Sample input:
#2015-10-15 08:19:05,190 [testThread] INFO testClass - Queue: update.testEntity; method: updateTestEntity; Object: testEntity; Key: 154696614; MessageID: ID:test1-37782-1444827636952-1:1:2:25:1; CorrelationID: f583ed1c-5352-4916-8252-47298732516e; started processing
#2015-10-15 06:44:01,727 [ ajp-apr-127.0.0.1-8009-exec-2] LogInterceptor INFO user-agent: check_http/v2.1.1 (monitoring-plugins 2.1.1)
#connection: close
#host: test.testing.com
#content-length: 0
#X-Forwarded-For: 8.8.8.8
#2015-10-15 08:21:04,716 [ ttt-grp-127.0.0.1-8119-test-11] LogInterceptor INFO HTTP/1.1 200 OK
<source>
type tail
path /test/test.log
tag log.unprocessed
read_from_head true
format multiline
format_firstline /\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{3}/
format1 /(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{3}) (?<message>(.|\s)*)/
</source>
<match log.unprocessed.**>
type rewrite_tag_filter
rewriterule1 message \bCorrelationID\b correlation
rewriterule2 message .* clear
</match>
<match clear>
type null
</match>
<filter correlation>
type parser
key_name message
format / * (.*method:) (?<method>[^;]*) *(.*Object:) (?<object>[^;]*) *(.*Key:) (?<objectkey>[^;]*) *(.*MessageID:) (?<messageID>[^;]*) *(.*CorrelationID:) (?<correlationID>[^;]*).*/
reserve_data yes
</filter>
<match correlation>
type stdout
</match>
~~~
{: .language-ruby}
This approach is useful when we have multiline log messages within our logfile and the messages themselves have different formats for their content. Still, the important thing to note is that all log messages are prefixed by a standard timestamp; this is key to splitting messages correctly.
The break-down of the approach with the configuration shown is that all entries in the log are first parsed into individual events to be processed. The key separator here is the timestamp, and it is marked by the *format_firstline* key/value pair as a regex pattern.
Fluentd will continue to read logfile lines and keep them in a buffer until a line is reached that starts with text matching the regex pattern specified in the *format_firstline* field. After detecting a new log message, the one already in the buffer is packaged and sent to the parser defined by the regex patterns stored in the `format<n>` fields.
Looking at the example, all our log messages (single or multiline) will take the form:
~~~
{ "time":"2015-10-15 08:21:04,716", "message":"[ ttt-grp-127.0.0.1-8119-test-11] LogInterceptor INFO HTTP/1.1 200 OK" }
~~~
{: .language-json}
Being tagged with `log.unprocessed`, all the messages will be caught by the *rewrite_tag_filter* match block, and it is at this point that we can pinpoint what type of content each message has and re-tag it for individual processing.
This module is key to the whole mechanism, as the *rewrite_tag_filter* takes the role of a router. You can use this module to redirect messages to different processing modules or even outputs, depending on the rules you define in it.
## Shooting fish in a barrel
You can use *fluent-plugin-multi-format-parser* to try to match each line read from the log file with a specific regex pattern (format).
This approach probably comes with performance drawbacks because fluentd will try to match using each regex pattern sequentially until one matches.
An example of this approach can be seen below:
~~~
<source>
type tail
path /var/log/aka/test.log
read_from_head true
keep_time_key true
tag akai.unprocessed
format multi_format
# 2015-10-15 08:19:05,190 [externalPerson]] INFO externalPersonToSapSystem - Queue: aka.update.externalPerson; method: ; Object: externalPerson; Key: ; MessageID: ID:test1-37782-1444827636952-1:1:2:25:1; CorrelationID: f583ed1c-5352-4916-8252-47698732506e; received
<pattern>
format /(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{3}) \[(?<thread>.*)\] (?<loglevel>[A-Z]*) * (.*method:) (?<method>[^;]*) *(.*Object:) (?<object>[^;]*) *(.*Key:) (?<objectkey>[^;]*) *(.*MessageID:) (?<messageID>[^;]*) *(.*CorrelationID:) (?<correlationID>[^;]*); (?<status>.*)/
</pattern>
# 2015-10-13 12:30:18,475 [ajp-apr-127.0.0.1-8009-exec-14] LogInterceptor INFO Content-Type: text/xml; charset=UTF-8
# Authorization: Basic UFJPRE9NT1xyZXN0VGVzdFVzZXI6e3tjc2Vydi5wYXNzd29yZH19
# breadcrumbId: ID-haufe-prodomo-stage-51837-1444690926044-1-1731
# checkoutId: 0
# Content-Encoding: gzip
# CS-Cache-Minutes: 0
# CS-Cache-Time: 2015-10-13 12:30:13
# CS-Client-IP: 172.16.2.51
# CS-Inner-Duration: 207.6 ms
# CS-Outer-Duration: 413.1 ms
# CS-Project: PRODOMO
# CS-UserID: 190844
# CS-UserName: restTestUser
# Expires: Thu, 19 Nov 1981 08:52:00 GMT
# Server: Apache
# SSL_CLIENT_S_DN_OU: Aka-Testuser
# User-Agent: check_http/v2.1.1 (monitoring-plugins 2.1.1)
# Vary: Accept-Encoding
# workplace: 0
# X-Forwarded-For: 213.164.69.219
# X-Powered-By: PHP/5.3.21 ZendServer/5.0
# Content-Length: 2883
# Connection: close
<pattern>
format multiline
format_firstline /\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{3}/
format1 /(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{3}) \[(?<thread>.*)\] (?<class>\w*) * (?<loglevel>[A-Z]*) (?<message>.*)/
</pattern>
#Greedy default format
<pattern>
format /(?<time>\d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2},\d{3}) (?<message>(.|\s)*)/
</pattern>
</source>
~~~
{: .language-ruby}
When choosing this path there are multiple issues you need to be aware of:
* The pattern matching is done sequentially and the first pattern that matches the message is used to parse it and the message is passed along
* You need to make sure the most specific patterns are higher in the list and the more generic ones lower
* Make sure to create a very generic pattern to use as a default at the end of the list of patterns.
* Performance will probably decrease due to the trial-and-error approach to finding the matching regex
The biggest issue with this approach is that it is very, very hard to handle multi-line log messages if there are significantly different log syntaxes in the log.
__Warning:__ Be aware that the multiline parser continues to store log messages in a buffer until it matches another firstline token; only when it does will it package and emit the multiline log it has collected.
This approach is useful when you have good control of and know-how about the format of your log source.
## Order & Chaos
Introducing Grok!
Slowly but surely, covering all your different syntaxes, for each of which you will have to define a different regular expression, will make your config file look very messy, filled with regexes that grow longer and longer. Just relying on the multiple format lines to split them up doesn't bring that much readability, nor does it help with maintainability. Reusability is something that we cannot even discuss in the case of pure regex formatters.
Grok allows you to define a library of regexes that can be reused and referenced via identifiers. It is structured as a list of key-value pairs and can also contain named capture groups.
An example of such a library can be seen below. (Note this is just a snippet and does not contain all the minor expressions that are referenced from within the ones enumerated below)
~~~
###
# AKA-I
###
# Queue: aka.update.externalPerson; method: updateExternalPerson; Object: externalPerson; Key: ; MessageID: ID:test1-37782-1444827636952-1:1:2:25:1; CorrelationID: f583ed1c-5352-4916-8252-47698732506e; received
AKA_AKAI_CORRELATIONLOG %{AKA_AKAI_QUEUELABEL} %{AKA_AKAI_QUEUE:queue}; %{AKA_AKAI_METHODLABEL} %{AKA_AKAI_METHOD:method}; %{AKA_AKAI_OBJECTLABEL} %{AKA_AKAI_OBJECT:object}; %{AKA_AKAI_KEYLABEL} %{AKA_AKAI_KEY:key}; %{AKA_AKAI_MESSAGEIDLABEL} %{AKA_AKAI_MESSAGEID:messageId}; %{AKA_AKAI_CORRELATIONLABEL} %{AKA_AKAI_CORRELATIONID:correlationId}; %{WORD:message}
#CorrelationId log message from AKAI
#2015-10-15 08:19:05,190 [externalPerson]] INFO externalPersonToSapSystem - Queue: aka.update.externalPerson; method: updateExternalPerson; Object: externalPerson; Key: ; MessageID: ID:test1-37782-1444827636952-1:1:2:25:1; CorrelationID: f583ed1c-5352-4916-8252-47698732506e; received
AKA_AKAI_CORRELATIONID %{AKAIDATESTAMP:time} %{AKA_THREAD:threadName} %{LOGLEVEL:logLevel} * %{JAVACLASS:className} %{AKA_AKAI_CORRELATIONLOG}
#Multiline generic log pattern
# For detecting that a new log message has been read we will use AKAIDATESTAMP as the pattern and then match with a greedy pattern
AKA_AKAI_GREEDY %{AKAIDATESTAMP:time} %{AKA_THREAD:threadName} %{JAVACLASS:class} * %{LOGLEVEL:loglevel} %{AKA_GREEDYMULTILINE:message}
# Variation since some log messages have loglevel before classname or vice-versa
AKA_AKAI_GREEDY2 %{AKAIDATESTAMP:time} %{AKA_THREAD:threadName} * %{LOGLEVEL:loglevel} %{JAVACLASS:class} %{AKA_GREEDYMULTILINE:message}
###
# ARGO Specific
###
AKA_ARGO_DATESTAMP %{MONTHDAY}-%{MONTH}-%{YEAR} %{TIME}
#17-Nov-2015 07:53:38.786 INFO [www.lexware-akademie.de-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /var/www/html/www.lexware-akademie.de/ROOT has finished in 44,796 ms
AKA_ARGO_LOG %{AKA_ARGO_DATESTAMP:time} %{LOGLEVEL:logLevel} %{AKA_THREAD:threadName} %{JAVACLASS:className} %{AKA_GREEDYMULTILINE:message}
#2015-11-17 07:53:51.606 RVW INFO AbstractApplicationContext:515 - Refreshing Root WebApplicationContext: startup date [Tue Nov 17 07:53:51 CET 2015]; root of context hierarchy
AKA_ARGO_LOG2 %{AKAIDATESTAMP2:time} %{WORD:argoComponent} *%{LOGLEVEL:logLevel} * %{AKA_CLASSLINENUMBER:className} %{AKA_GREEDYMULTILINE:message}
#[GC (Allocation Failure) [ParNew: 39296K->4351K(39296K), 0.0070888 secs] 147064K->114083K(172160K), 0.0071458 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
#[CMS-concurrent-abortable-preclean: 0.088/0.233 secs] [Times: user=0.37 sys=0.02, real=0.23 secs]
AKA_ARGO_SOURCE (GC|CMS)
AKA_ARGO_GC \[%{AKA_ARGO_SOURCE:source} %{AKA_GREEDYMULTILINE:message}
~~~
{: .language-bash}
To use Grok you will need to install the *fluent-plugin-grok-parser* plugin; then you can use grok patterns with any of the regex-based techniques described previously: multiline and multi-format.
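Installation is again a one-liner for a gem-based fluentd setup (a sketch; use `td-agent-gem` instead if you run td-agent):
~~~
# Install the grok parser plugin for fluentd
fluent-gem install fluent-plugin-grok-parser
~~~
{: .language-bash}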
# Go Get'em!
Now you should have a pretty good idea of how you can approach different log formats and how you can structure your config file using a couple of plugins from the hundreds of plugins available.

View File

@ -1,120 +0,0 @@
---
layout: post
title: Providing Secure File Storage through Azure API Management
subtitle: Shared Access Signatures with Azure Storage
category: howto
tags: [security, cloud, api]
author: martin_danielsson
author_email: martin.danielsson@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
Continuing our API journey, we're currently designing an API for one of our most valuable assets: Our content, such as law texts and commentaries. Let's call this project the "Content Hub". The API will eventually consist of different sub-APIs: content search, retrieval and ingestion ("upload"). This blog post will shed some light on how we will support bulk ingestion (uploading) of documents into our content hub using out-of-the-box Azure technology: [Azure Storage SAS - Shared Access Signatures](https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-shared-access-signature-part-1/).
### Problem description
In order to create new content, our API needs a means to upload content into the Content Hub, both single documents and bulk ZIP files, which for example correspond to updated products (blocks of content). Ingesting single documents via an API is less of a problem (and not covered in this blog post), but supporting large ZIP files (up to 2 GB and even larger) is a different story, for various reasons:
* Large http transfers need to be supported by all layers of the web application stack (chunked transfer), which potentially introduces additional complexity
* Transferring large files is a rather difficult problem we do not want to solve on our own (again)
* Most SaaS API gateways (such as [Azure API Management](https://azure.microsoft.com/en-us/services/api-management/)) have traffic limits and/or traffic costs on their API gateways
### First approach: Setting up an sftp server
Our first architectural solution approach was to set up an (s)ftp server for use with the bulk ingestion API. From a high level perspective, this looks like a valid solution: We make use of existing technology which is known to work well. When detailing the solution, we found a series of caveats which made us look a little further:
* Providing secure access to the sftp server requires dynamic creation of users; this would also - to make the API developer experience smooth - have to be integrated into the API provisioning process (via an API portal)
* Likewise: After a document upload has succeeded, we would want to revoke the API client's rights on the sftp server to avoid misuse/abuse
* Setting up an (s)ftp server requires an additional VM, which introduces operations efforts, e.g. for OS patching and monitoring.
* The sftp server has to provide reasonable storage space for multiple ZIP ingest processes, and this storage would have to be provided up front and paid for subsequently.
### Second approach: Enter Azure Storage
Knowing we will most probably host our content hub on the [Microsoft Azure](https://azure.microsoft.com) cloud, our attention immediately went to services offered via Azure, and in our case we took a closer look at [Azure Storage](https://azure.microsoft.com/en-us/services/storage/). The immediate benefits of having such a storage as a SaaS offering are striking:
* You only pay for the storage you actually use, and the cost is rather low
* Storage capacity is unlimited for most use cases (where you don't actually need multiple TB of storage), and for our use case (document ingest) more than sufficient
* Storage capacity does not need to be defined up front, but adapts automatically as you upload more files ("blobs") into the storage
* Azure Storage has a large variety of SDKs for use with it, all leveraging a standard REST API (which you can also use directly fairly easily)
What remains is the question of securing access to the storage, which was one of the main reasons why an ftp server seemed like a less than optimal idea.
### Accessing Azure Storage
Accessing an Azure Storage account usually involves passing a storage identifier and an access key (the application secret), which in turn grants full access to the storage. Having an API client have access to these secrets is obviously a security risk, and as such not advisable. Similarly to the ftp server approach, it would in principle be possible to create multiple users/roles which have limited access to the storage, but this would again mean additional administrative effort, and/or an extra implementation effort to make it automatic.
#### Azure Storage Shared Access Signatures
Luckily, Azure already provides a means of anonymous and restricted access to storage accounts using a technique which is known e.g. from JWT tokens: signed access tokens with a limited time span, a.k.a. "Shared Access Signatures" ("SAS"). These SAS tokens actually match our requirements:
* Using a SAS, you can limit access to the storage to either a "container" (similar to a folder/directory) or a specific "blob" (similar to a file)
* The SAS only has a limited validity which you can define freely, e.g. from "now" to "now plus 30 minutes"; after the validity of the token has expired, the storage can no longer be accessed
* Using an Azure Storage SDK, creating SAS URLs is extremely simple. Tokens are created without Storage API interaction, simply by *signing* the URL with the application secret key. This in turn can be validated by Azure Storage (which obviously also has the secret key).
We leverage the SAS feature to explicitly grant **write** access to one single blob (file) on the storage, for which we define the file name. The access is granted for 60 minutes (one hour), which is enough to transfer large files. Our Content API exposes an endpoint which returns a URL containing the SAS token; this URL can immediately be used to do a `PUT` to the storage.
{:.center}
![Azure Storage SAS - Diagram]({{ site.url }}/images/azure-storage-sas-1.png){:style="margin:auto"}
The upload to the storage can either be done using any HTTP library (issuing a `PUT`), or using an Azure Storage SDK ([available for multiple languages](https://github.com/Azure?utf8=%E2%9C%93&query=storage), it's on github), which in turn enables features like parallel or block uploading (for more robust transfers).
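For illustration, here is a sketch of the raw HTTP variant using `curl`; the account name, container and SAS query string are placeholders for the values returned by the token endpoint described below:

```bash
# Upload a ZIP file to the blob addressed by the SAS URL; the x-ms-blob-type header is required
curl -X PUT \
     -H "x-ms-blob-type: BlockBlob" \
     --data-binary @bulk-upload.zip \
     "https://myaccount.blob.core.windows.net/bulkingest/<GUID>.zip?<sas-token>"
```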
#### How does this look in code?
The best part of all this is that it's not only simple in theory to use the Storage API, it's actually simple in practice, too. When I tried to do this, I chose [node.js](https://nodejs.org) to implement a service which issues SAS tokens. Azure Storage has an `npm` package for that: `azure-storage`, which can be installed just like any other `npm` package using `npm install azure-storage [--save]`.
To get things up and running fast, I created a simple [Express](https://expressjs.com) application and replaced a couple of lines. The actual code for issuing a token is just the following:
```javascript
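// Assumptions not shown in this snippet: the app was set up with something like
//   var azure = require('azure-storage');
//   var uuidGen = require('node-uuid');
// and the storage credentials are provided via environment variables
// (AZURE_STORAGE_ACCOUNT / AZURE_STORAGE_ACCESS_KEY or AZURE_STORAGE_CONNECTION_STRING),
// which createBlobService() picks up when called without arguments.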
app.post('/bulk/token', function(req, res) {
var blobService = azure.createBlobService();
var startDate = new Date();
var expiryDate = new Date(startDate);
expiryDate.setMinutes(startDate.getMinutes() + 100);
startDate.setMinutes(startDate.getMinutes() - 10);
var filename = uuidGen.v4() + ".zip";
var container = 'bulkingest';
if (process.env.AZURE_STORAGE_SAS_CONTAINER)
{
container = process.env.AZURE_STORAGE_SAS_CONTAINER;
}
var sharedAccessPolicy = {
AccessPolicy: {
Permissions: azure.BlobUtilities.SharedAccessPermissions.READ +
azure.BlobUtilities.SharedAccessPermissions.WRITE,
Start: startDate,
Expiry: expiryDate
},
};
var token = blobService.generateSharedAccessSignature(container, filename, sharedAccessPolicy);
var sasUrl = blobService.getUrl(container, filename, token);
res.jsonp({ storageUrl: sasUrl,
filename: filename,
headers: [ { header: "x-ms-blob-type", value: "BlockBlob" } ],
method: "PUT" });
});
```
So, what does this do?
* Creates a Blob Service (from the `azure-storage` package), using defined credentials
* Defines Start and End dates for the token validity (here, from 10 minutes ago, until in 100 minutes from now)
* Defines the container for which the token is to be created
* Creates a GUID as the file name for which to grant write access to
* Defines a shared access policy which combines permissions with start and end dates
* Generates a shared access signature (this happens locally in the SDK, without a server round trip) and assembles it into a URL
* Returns a JSON structure containing information on how to access the storage (hypermedia-like)
For more information on how to actually try this out, see the link to a github sample below.
### Can I haz the codez?
My sample project can be found on github:
[https://github.com/DonMartin76/azure-storage-sas](https://github.com/DonMartin76/azure-storage-sas)
Have fun!

View File

@ -1,117 +0,0 @@
---
layout: post
title: Extending On-Premise Products With Mobile Apps - Part 1
subtitle: Modernizing on-premise application using Azure Service Bus Relay
category: general
tags: [mobile, cloud]
author: Robert Fitch
author_email: robert.fitch@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
### What is this about?
This was a proof-of-concept project to find out what it takes to access on-premise data (in the form of our Lexware pro product line) from an internet client, even though that data resides behind company firewalls, without forcing our customers to open an incoming port to the outside world. This is a different approach from, say, "Lexware mobile", which synchronizes data into the cloud, from where it is accessed by client devices.
There was a fair amount of work already done in the past, which made the job easier. For a couple of years now, Lexware pro installs a service in the LAN, running on the same computer on which the database server resides. This service opens a port and accepts HTTP requests originating from the local network. These REST requests, depending on the URL, are relayed to various business services. Again we profit from work which began many years ago, to encapsulate business logic in modules which can run in their own processes (separate from the actual Lexware pro application) and with their own database connections.
These HTTP REST APIs are currently used by "Lexware myCenter".
### Okay, what is Lexware myCenter?
Typically, Lexware pro is installed in the HR department of our customer's company. This means that the other employees of this company have no access to the application. However, there are plenty of use-cases which would make some kind of communication attractive, for example:
- The entire workflow for applying for vacation (that's "holiday" for you Brits)
- Employee applies for vacation from his boss
- Boss approves (or not) and relays the application to HR
- HR approves (or not) and informs the employee
&nbsp;
- The entire workflow for employee business trips
- Similar approval workflow to above
- In addition, employee gathers travel receipts which must be entered into the system
Previously, these workflows took place "outside the system". Everything was organized via e-mail, paper (yes, Virginia, paper exists), telephone, and face-to-face, until it was ready for HR to enter into the system.
With myCenter, every employee (and her boss) may be given a browser link and can carry out her part of the workflow independently. The necessary data is automatically transferred from/to the on-premise server via the HTTP REST API. Of course, it only works within the company LAN.
{:.center}
![myCenter - Apply for Vacation]({{ site.url }}/images/reisekosten-app/mycenter.jpg){:style="margin:auto"}
### Enter Azure Service Bus Relay
Azure Service Bus Relay allows an on-premise service to open a WCF interface to servers running on the internet. Anyone who knows the correct URL (and passes any security tests you may implement) gets a proxy to the interface which can be called directly. Note that this does **not** relay HTTP requests, but uses the WCF protocol via TCP to call the methods directly. This works behind any company firewall. Depending on how restrictively the firewall is configured, the IT department may need to specifically allow outgoing access to the given Azure port.
So we have two options to access our business services on the desktop.
1. Call our business service interfaces (which just happen to be WCF!) directly and completely ignore the HTTP REST Api.
2. Publish a new "generic" WCF interface which consists of a single method, which simply accepts a "url"-argument as a string and hands this over the HTTP REST Api, then relays the response back to the caller.
Both options work. The second option works with a "stupid" internet server which simply packs the URL of any request it gets from its clients into a string and calls the "generic" WCF method. The first option works with a "smart" internet server (which may have advantages), this server having enough information to translate the REST calls it gets from its clients into business requests on the "real" WCF interfaces.
For the Reisekosten-App, we decided on the first method. Using a "smart" internet server, we could hand-craft the API to fit the task at hand, and we could easily implement valid mock responses on the server, so that the front-end developer could get started immediately.
However, the other method also works well. I have used it during a test to make the complete myCenter web-site available over the internet.
### Putting it all together
With the tools thus available, we started on the proof-of-concept and decided to implement the use-case "Business traveller wants to record her travel receipts". So while underway, she can enter the basic trip data (dates, from/to) and for that trip enter any number of receipts (taxi, hotel, etc.). All of this information should find its way in real-time into the on-premise database where it can be processed by the HR department.
### Steps along the way
#### The on-premise service must have a unique ID
This requirement comes from the fact that the on-premise service must open a unique endpoint for the Azure Service Bus Relay. Since every Lexware pro database comes with a unique GUID (and this GUID will move with the system if it gets reinstalled on different hardware), we decided to use this ID as the unique connection ID.
#### The travelling employee must be a "user" of the Lexware pro application
The Lexware pro application has the concept of users, each of whom has certain rights to use the application. Since the employee will be accessing the database, she must exist as a user in the system. She must have very limited rights, allowing access only to her own person and given the single permission to edit trip data. Because myCenter has similar requirements, the ability for HR to automatically add specific employees as new users, each having only this limited access, was already implemented. So, for example, the employee "Andrea Ackermann" has her own login and password to the system. This, however, is **not** the identity with which she will log in to the App. The App login has its own requirements regarding:
- Global uniqueness of user name
- Strength of password
- The possibility to use, for example, a Facebook identity instead of username/password
#### The user must do a one-time registration and bind the App identity to the unique on-premise ID and to the Lexware pro user identity
We developed a small web site for this one-time registration. The App user specifies her own e-mail address as the user name and can decide on her own password (with password strength regulations enforced). Once registered, she makes the connection to her company's on-premise service:
Here is the registration of a new App user:
{:.center}
![Reisekosten App - Register as a new user]({{ site.url }}/images/reisekosten-app/login1.jpg){:style="margin:auto"}
And, once registered and logged in, the specification of details for the Lexware pro connection:
{:.center}
![Reisekosten App - Specify the Lexware pro details]({{ site.url }}/images/reisekosten-app/login2.jpg){:style="margin:auto"}
The three values are given to the employee by her HR manager:
`Lexware pro Serverkennung`: The unique on-premise service ID
`Lexware pro Benutzername`: The employee's user name in the Lexware pro application
`Lexware pro Passwort`: This user's password in the Lexware pro application
The connection and the Lexware pro login are immediately tested, so the password does not need to be persisted. As mentioned, this is a one-time process, so the employee never needs to return to this site. From this point on, she can log in to the smartphone App using her new credentials:
{:.center}
![Reisekosten App - Login]({{ site.url }}/images/reisekosten-app/login3.jpg){:style="margin:auto"}
The Reisekosten-App server looks up the corresponding 'Lexware pro Serverkennung' in its database and connects to the on-premise service via the Relay endpoint with that path. From that point on, the user is connected to the on-premise service of her company and logged in there as the proper Lexware pro user. Note that the App login does not need to be repeated on that smartphone device, because the login token can be saved in the device's local storage.
And here is a screenshot of one of the views, entering actual receipt data:
{:.center}
![Reisekosten App - Receipt input]({{ site.url }}/images/reisekosten-app/receipt.jpg){:style="margin:auto"}
### Developing the Front-End
The front-end development (HTML5, AngularJS, Apache Cordova) was done by our Romanian colleague Carol, who is going to write a follow-up blog about that experience.
### What about making a Real Product?
This proof-of-concept goes a long way towards showing how we can connect to on-premise data, but it is not yet a "real product". Some aspects which need further investigation and which I will be looking into next:
- How can we use a Haufe SSO (or other authentication sources) as the identity?
- How do we register customers so that we can monetize (or track) the use?
- Is the system secure along the entire path from the device through Azure and on to the on-premise service?
@ -1,115 +0,0 @@
---
layout: post
title: Securing Backend Services behind Azure API Management
subtitle: Different approaches to securing API implementations
category: howto
tags: [security, cloud, api]
author: martin_danielsson
author_email: martin.danielsson@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
We are currently planning our first round of published APIs, and in the course of this process, we obviously had to ask ourselves how we can secure our backend services which we will surface using [Azure API Management](https://azure.microsoft.com/en-us/services/api-management/). This may sound like a trivial problem, but it turns out it actually isn't. This blog post will show the different options you have (or don't) using Azure API Management as a front end to your APIs.
### The problem
A key property of the Azure API Management solution is that it is not possible to deploy the APIm instance to some sort of pre-defined virtual network. The Azure APIm instance will always reside in its own "cloudapp" kind of virtual machine, and you can only select which region it is to run in (e.g. "North Europe" or "East US").
As a result, you will always have to talk to your backend services via a public IP address (except in the VPN case, see below). You can't simply deploy APIm and your backend services together within a virtual network and only open up a route over port 443 to your APIm instance. This means it is normally also possible to talk "directly" to your backend service, which is something you do not want: you always want consumers to go through API Management so that you can apply the APIm security/throttling/analytics to the traffic. Thus, we have to look at different approaches to securing your backend services from direct access.
We will check out the following possibilities:
* Security by obscurity
* Basic Auth
* Mutual SSL
* Virtual Networks and Network Security Groups
* VPNs
What is not part of this blog post is how you can additionally use OAuth-related techniques to secure backend services; the focus of this article is how to technically secure the backends, not means such as OAuth.
### Security by obscurity
For some very non-critical backend services running in the same Azure region (and only in those cases), it may be enough to secure the backend via obscurity: some have suggested simply checking for the `Ocp-Apim-Subscription-Key` header, which will by default be passed on from the client via the API gateway to the backend service (unless you filter it out via some policy).
This is quite obviously not actually secure by any standard, but it may rule out the occasional nosy port scan by returning a 401 or similar.
Other variants of this could be to add a second header to the backend call, using an additional secret key which tells the backend service that it is actually Azure APIm calling the service. The drawbacks of this are quite obvious:
* You have to implement the header check in your backend service
* You have a shared secret between Azure APIm and your backend service (you have coupled them)
* The secret has to be deployed to both Azure APIm and your backend service
* It is only secure if the connection between Azure APIm and the backend service is using https transport (TLS)
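To make the first of these drawbacks concrete, here is a hedged sketch of such a check as a JAX-RS filter (the header name and the way the secret is provided are my own assumptions for illustration, not an Azure APIm convention):

``` java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

@Provider
@PreMatching
public class SharedSecretFilter implements ContainerRequestFilter {

    // The same value is configured as an outbound header in the APIm policy (shared secret = coupling).
    private static final String HEADER_NAME = "X-Gateway-Secret";                 // assumed header name
    private static final String EXPECTED = System.getenv("GATEWAY_SHARED_SECRET"); // assumed to be configured

    @Override
    public void filter(ContainerRequestContext requestContext) {
        String presented = requestContext.getHeaderString(HEADER_NAME);
        // Constant-time comparison so the secret does not leak via timing differences.
        boolean ok = EXPECTED != null && presented != null
                && MessageDigest.isEqual(EXPECTED.getBytes(StandardCharsets.UTF_8),
                                         presented.getBytes(StandardCharsets.UTF_8));
        if (!ok) {
            requestContext.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
        }
    }
}
```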
### Basic Auth
The second variant of "security by obscurity" is actually equivalent to using Basic Authentication between Azure APIm and your backend service. Support for Basic Auth, though, is implemented directly in Azure APIm, so you do not have to create a custom policy which inserts a custom header into the backend communication: Azure APIm can automatically add the `Authorization: Basic ...` header to the backend call.
Once more, the very same drawbacks apply as for the above case:
* You have to implement the Basic Auth in the backend (some backends do have explicit support for this, so it may be easy)
* You have a shared secret between the APIm and the backend
* If you are not using `https` (TLS), this is not by any means actually secure
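On the backend side, verifying the Basic Auth credentials boils down to decoding and comparing the header. A sketch along the same lines as the previous filter (user name and password handling are illustrative assumptions):

``` java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

@Provider
@PreMatching
public class BasicAuthFilter implements ContainerRequestFilter {

    private static final String EXPECTED_USER = "apim-gateway";                       // assumption
    private static final String EXPECTED_PASSWORD = System.getenv("APIM_BASIC_PASSWORD"); // assumption

    @Override
    public void filter(ContainerRequestContext ctx) {
        String auth = ctx.getHeaderString(HttpHeaders.AUTHORIZATION);
        if (auth == null || !auth.startsWith("Basic ")) {
            ctx.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
            return;
        }
        try {
            // Decode "user:password" and compare against the shared credentials.
            String decoded = new String(Base64.getDecoder().decode(auth.substring("Basic ".length())),
                    StandardCharsets.UTF_8);
            String[] parts = decoded.split(":", 2);
            boolean ok = parts.length == 2
                    && parts[0].equals(EXPECTED_USER)
                    && parts[1].equals(EXPECTED_PASSWORD);
            if (!ok) {
                ctx.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
            }
        } catch (IllegalArgumentException badBase64) {
            ctx.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
        }
    }
}
```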
### Mutual SSL
One step up from Basic Auth and security by obscurity is to use mutual SSL between Azure APIm and the backend. This is also directly supported by Azure APIm, so you "only" have to upload the client certificate to use for communication with the backend service, and then check the certificate in the backend. In this case, using a self-signed certificate will work. I tested it using [this blog post with nginx](https://pravka.net/nginx-mutual-auth). The only thing that had to be done additionally was to create a PFX client certificate using `openssl`, as Azure APIm will only accept PFX certificates.
Checking the certificate in the backend can be simple or challenging, depending on which kind of backend service you are using (a generic Java sketch follows the list below):
* nginx: See above link to the tutorial on how to verify the client certificate; SSL termination with nginx is probably quite a good idea
* Apache web server also directly supports Client Certificate verification
* Spring Boot: Intended way of securing the service, see e.g. [Spring Boot Security Reference (v4.0.4)](http://docs.spring.io/spring-security/site/docs/4.0.4.CI-SNAPSHOT/reference/htmlsingle/#x509).
* Web API/.NET: Funnily, in the case of .NET applications, verifying a client certificate is quite challenging. There are various tutorials on how to do this, but unfortunately I don't like any of them particularly:
* [Suggestion from 'Designing evolvable Web APIs using ASP.NET'](http://chimera.labs.oreilly.com/books/1234000001708/ch15.html#example_ch15_cert_handler)
* [How to use mutual certificates with Azure API Management](https://azure.microsoft.com/en-us/documentation/articles/api-management-howto-mutual-certificates/)
* [Azure App Services - How to configure TLS Mutual Authentication](https://azure.microsoft.com/en-us/documentation/articles/app-service-web-configure-tls-mutual-auth/)
* For node.js and similar, I would suggest using nginx for SSL termination (as a reverse proxy in front of node)
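As a generic illustration for Java-based backends (a sketch of my own, not tied to any of the frameworks above), assuming the servlet container terminates TLS and is configured to request the client certificate, the application then only compares the certificate's thumbprint against the one uploaded to Azure APIm:

``` java
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.security.cert.X509Certificate;

import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

@Provider
public class ClientCertificateFilter implements ContainerRequestFilter {

    // SHA-1 thumbprint of the client certificate uploaded to Azure APIm (assumed to be configured).
    private static final String EXPECTED_THUMBPRINT = System.getenv("EXPECTED_CLIENT_CERT_THUMBPRINT");

    @Context
    private HttpServletRequest servletRequest;

    @Override
    public void filter(ContainerRequestContext ctx) throws IOException {
        // Standard servlet attribute holding the client certificate chain presented during the TLS handshake.
        X509Certificate[] chain = (X509Certificate[])
                servletRequest.getAttribute("javax.servlet.request.X509Certificate");
        try {
            boolean ok = chain != null && chain.length > 0
                    && thumbprint(chain[0]).equalsIgnoreCase(EXPECTED_THUMBPRINT);
            if (!ok) {
                ctx.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
            }
        } catch (GeneralSecurityException e) {
            ctx.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
        }
    }

    private static String thumbprint(X509Certificate cert) throws GeneralSecurityException {
        byte[] digest = MessageDigest.getInstance("SHA-1").digest(cert.getEncoded());
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02X", b));
        }
        return hex.toString();
    }
}
```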
All in all, using mutual SSL is a valid approach to securing your backend; it offers real security. It will still be possible to flood the network interface with requests (which will be rejected immediately due to the SSL certificate mismatch), so it could - and possibly should - additionally be combined with the method below.
I am waiting for simpler ways of doing this directly in Azure, but currently you can't decouple it from your API implementation.
### Virtual Networks and Network Security Groups
In case your backend service runs in an Azure VM (deployed using ARM, the Azure Resource Manager), you can make use of the built-in firewall, the Network Security Groups. From the Standard Tier upwards (which is the "cheapest" one you are allowed to use in production), your Azure APIm instance gets a static IP; you can in turn use this IP to define an NSG rule that only allows traffic from that specific IP address (the APIm gateway) to pass through the NSG. All other traffic will be silently discarded.
As mentioned above, it's unfortunately not (yet) possible to add an Azure APIm instance to a virtual network (and thus put it inside an ARM NSG), but you can still restrict traffic into the NSG by doing IP address filtering.
The following whitepaper suggests that Azure virtual networks are additionally safeguarded against IP spoofing: [Azure Network Security Whitepaper](http://download.microsoft.com/download/4/3/9/43902ec9-410e-4875-8800-0788be146a3d/windows%20azure%20network%20security%20whitepaper%20-%20final.docx).
This means that if you create an NSG rule which only allows the APIm gateway to enter the virtual network, most attack vectors are already eliminated by the firewall: Azure filters out IP-spoofed packets coming from outside Azure when they enter the Azure network, and additionally it inspects packets from inside Azure to validate that they actually originate from the IP address they claim to. Combined with mutual SSL, this should provide sufficient backend service protection,
* On a security level, making sure only APIm can call the backend service, and
* On a DDoS prevention level, making sure that the backend service cannot be flooded with calls, even if they are immediately rejected
#### Azure Web Apps and Virtual Networks
Using standard Web Apps/API Apps (the PaaS approach in Azure), it is not possible to add those services to a virtual network. This in turn makes the above method of securing the backend services moot. There is a workaround which lets you combine the advantages of Web Apps with the possibility of putting their hosting environment inside a virtual network, and it's called [App Service Environments](https://azure.microsoft.com/en-us/documentation/articles/app-service-app-service-environment-intro). In short, an App Service Environment is a set of dedicated virtual machines deployed into a specific virtual network which is used only by your own organization to deploy Web Apps/API Apps into. You have to deploy at least four virtual machines for use with the App Service Environment (two front ends and two workers), and these are the costs you actually pay. In return, you can deploy into a virtual network, and additionally you can be sure you get the power you pay for, as nobody else will be using the same machines.
### VPNs
As a last possibility to secure the backend services, it is possible to create a VPN connection from a "classic" virtual network to the APIm instance. By doing so, you can connect the APIm instance directly to a closed subnet/virtual network, just as you would expect it to be possible using Azure Resource Manager virtual networks.
This approach has the following severe limitations which render it difficult to use as the "go to solution" it sounds like it is:
* Connecting VPNs to Azure APIm only works when using the Premium Tier, priced well over 2500€ per month; this is difficult to justify in many cases, given that producing 5 TB of traffic per month is not something which will happen immediately
* Only Azure Service Manager ("Classic") virtual networks can be used for this, not the more recent Azure Resource Manager virtual networks
* In order to build up a VPN connection, you will need a Gateway virtual appliance inside your virtual network, which also comes at an additional cost (around 70€/month).
* You can't use VPN connections cross-region; if your APIm resides in West Europe, you can only connect to VNs in West Europe.
In theory, it would even be possible to first [bridge an ARM virtual network to a classic virtual network](https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-arm-asm-s2s/), and then in turn use that VN and an additional Gateway appliance to connect to APIm, but this setup gives me bad dreams.
### Conclusion and Recommendation
For critical backend services, use a combination of
* Mutual SSL
* Inbound NSG rules limiting traffic to the Azure APIm IP address
In case you need to use Web Apps/API Apps, consider provisioning an App Service Environment, which you can deploy into a virtual network, and then in turn use the NSG (as suggested above).
For less critical backend services (such as read-only APIs), relying on the NSG rules alone may also be a lightweight and easy-to-implement option. The only prerequisites for this are:
* Your backend service runs inside an Azure Resource Manager Virtual Network
* ... in the same region as your APIm instance
If you have further suggestions and/or corrections, please feel free to comment below.
@ -1,64 +0,0 @@
---
layout: post
title: Being a Microservice or Cattle over pets
subtitle: A personal recap on QCon 2016
category: conference
tags: [qcon, microservices, devops]
author: axel_schulz
author_email: axel.schulz@semigator.de
header-img: "images/bg-post.jpg"
---
# Being a Microservice or Cattle over pets
First thing I did after receiving the invitation for QCon 2016 was of course to take a look at the schedule.
And to be honest: I was kinda disappointed by the seemingly missing link between all the tracks and sessions. Though it offered you a variety of interesting areas to dive into, I was missing the glue that should keep a conference and its attendees together.
Turned out - the glue was the mysterious microservices, or at least they were supposed to be. I attended seemingly endless talks in which people were almost desperately trying to find some connection between microservices and their actual topic:
* Chaos testing a microservice infrastructure? _Well, to be honest: we don't test microservices, we test instances - but we do have > 700 microservices_
* Test-driven microservices? _Nice topic, but I'd rather speak about how important and awesome microservices can be_
* Modern Agile Development? _Yea, we'll just present you some lean management stuff and btw, we do microservices as well!_
But when there is shadow, there has to be light that casts the shadow and I stumbled upon some talks and lessons that I will really carry with me back to my team.
## "Treat your machines as cattle - not as pets!
At [Semigator](http://www.semigator.de) we're still doing the hosting of our production environment in a pretty conservative way. We have a bunch of virtual resources (CPU, RAM, HDD etc.) that we combine into virtual machines, and we take care of everything on these machines - from OS updates to fine-tuning the application configuration on every machine. We're really pampering them like pets, because that's how system administration works, right? But why would we want to spend time on things that actually have nothing to do with our business? We are a webshop for further education and our business is to provide our customers with lots of training offers - not to do server management!
Today's technology stacks enable you to ship your application either as an (almost) fully working instance (Axel Fontaine of Boxfuse demonstrated in his talk "Rise of the machine images" how easily an application including a complete OS image can be created with only 15MB and deployed to AWS, including propagating the new IP to the DNS) or at least as a container that bundles all dependencies and leaves only the runtime to the host. So if you need to deploy a new version of your application - or your microservice - you just create a new image, deploy it and delete the old one. So no more pampering of Linux or Windows machines! Just deploy what you need and where you need it! Of course this requires some preparation: you'll need to get rid of everything that you don't need on your machines, like:
* **Package Managers** - we're not going to install anything on this instance, so just get rid of it
* **Compilers** - This instance is supposed to run our application and not serve as a developer machine and we don't plan to update it either - so beat it gcc, javac and the rest!
* **Logging / Monitoring** - all logging (system and application side) should be centralized using fluentd, logstash or whatever anyways
* **User Management** - we don't want anybody to work on these machines, why would we need user management?
* **Man pages** - if no one's working on it, no one will have to look things up
* **SSH** - if nobody is to connect to these machines, we don't need SSH
and you could continue this list until it fits your use case, as long as you only take as much with you as you need.
Right now, we're wasting lots of time on monitoring available system updates, root logins or passwd changes. Our servers are overloaded with editors, drivers and other things that are absolutely superfluous for their actual job.
So, it's not like we'll be switching to this kind of slimline image deployment by snapping our fingers - I tried it - but it's no rocket science either. We see the obstacles in our way; some are minor - like routing the rest of our logs to our logging instance - and some are bigger, like figuring out how to build our images for an individual fit: what do we need and what's just an impediment for us.
We will not start with an automatic deployment on our hypervisor, but we feel that doing this will give us ultimate control of our application and the environment it's running in, and it's a crucial part of our tech strategy @Semigator.
## Talk by Aviran Mordo on his microservices and DevOps Journey
The reason why I liked this talk by Aviran Mordo from WIX.com is simple: he had the answer - it's that simple! He had the answer to my burning question: How...? How the heck do you go from your fat, ugly, scary monolith to microservices? His answer is: be pragmatic - if you split your monolith into two services, 50% of your application will still be available if one of the services dies!
Aviran described how WIX.com started to work on their microservice architecture: by splitting its monolith in two, drawing a firm border between these two parts and working down further from there, which helped them build up experience steadily along the way. The team drew the cutting line for the services at the data access level - one service focused more on reading data while the other focused on writing data. To get the data from the writing service to the reading service, they just copied it. Well, you might like this particular solution or not (I don't), but the point is: find this one - and only one - border that goes through your system and separates it. The other important part of his talk was the ubiquitous question of which technology to use for orchestrating the microservices, the event messaging system, API versioning and distributed logging - and the answer is:
**YAGNI, you ain't gonna need it! - Default to the stack you know how to operate!**
So that was like an oasis among the zillions of grains of sand that are today's Kafka, Akka, AMQP, fluentd, logstash, graylog, ZooKeeper, Consul etc. What he meant was: if you didn't need it before with 1 monolith - you still won't need it with 2 services. Or with 3 or 4 or 5... Now that they've got up to 200 microservices, they are thinking about adding some of these stacks - but why add further complexity at the beginning when you've got your hands and minds full with other things?
Why would I start thinking about, e.g. how to implement Service Discovery or which API Management System to choose, if I only have 2 services running and I know exactly where they run and how to access them?
When you're splitting your monolith in two, you have other problems to take care of, like how to make sure the two new services still get to communicate with the existing other components. How do they get their data, since they were probably doing cross-domain data access before? Where to deploy the services? Same site as before? That might require re-configuring your web server. So solve these problems first and play around with the rest later.
For WIX.com already this first step of splitting the monolith brought significant benefits:
* separation by product lifecycle brought deployment independence and gave developers the assurance that one change could no longer bring down the whole system (only half of it at worst)
* separation by service level made it possible to scale independently and optimize the data to their respective use cases (read vs write)
What I particularly liked about this talk was that he showed a real, practical methodology that everybody could follow - literally and practically - and that aligns with the hands-on mentality you need to have if you're taking on a problem like this.
This rarely heard pragmatic approach, wrapped up with the ubiquitous remark that each service must be owned by one team and this team has to take on the responsibility for it (_You build it, you run it!_), frankly did not contain any new insights at all, but it served so well as a real-life experience that I would really love to try it out myself - watch out, Semigator monolith!
@ -1,330 +0,0 @@
---
layout: post
title: Extending On-Premise Products With Mobile Apps - Part 2
subtitle: Creating a Single Page App using Apache Cordova and AngularJS
category: howto
tags: [mobile, cloud]
author: carol_biro
author_email: carol.biro@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
### What is this about?
This is a blog-post about a proof-of-concept project I have worked on together with my colleague Robert Fitch to find out what it takes to access on-premise data from an internet client. Robert has created the server side API (you can read about it in [part 1](http://dev.haufe-lexware.com/Reisekosten-App/)) and my role was to create a mobile app which consumes the methods exposed by this API. The technologies used in order to create the app were HTML5, AngularJS, Bootstrap css and Apache Cordova.
### Why Apache Cordova?
Apache Cordova targets multiple platforms with one code base (HTML, CSS and JavaScript) and it is open source. There are many pros and cons to using this technology stack. We needed to get the app to a wide range of users with as little effort as possible, which is why we didn't use a pure native approach. For the POC app we targeted Android and iOS devices as potential consumers.
### What else do we need beside Apache Cordova?
Apache Cordova offers a good way to organize your project. It takes care of OS-specific customizations and the OS-specific mobile app builds (the .apk for Android and the .ipa for iOS). Once set up, the Apache Cordova part is modified very little during the lifetime of the project. For the actual development we used AngularJS and Bootstrap.css.
It is worth mentioning that when I started to work on this project I had no experience with the above-mentioned technologies: neither Apache Cordova, nor AngularJS, nor Bootstrap.css. I am a pretty experienced web developer who had previously worked mainly on jQuery-based projects. Learning AngularJS, I discovered a new way of thinking about web development - more precisely, how to develop a web application without using jQuery. The main idea of AngularJS is to create dynamic views driven by JavaScript controllers. AngularJS lets you extend HTML with your own directives, the result being a very expressive, readable and quick-to-develop environment.
In my day-to-day job I mostly use Microsoft technologies like C#, with Visual Studio as my IDE. That is why using the Visual Studio Tools for Apache Cordova was a good choice for setting up this project. By the time I started to work on the project, Update 2 of these tools was available; now, a couple of months later, Update 7 can be downloaded with a lot of improvements.
{:.center}
![Reisekosten App Frontend - Visual Studio Tools for Apache Cordova]( /images/reisekosten-app/visualstudioupdate7.jpg){:style="margin:auto"}
Once this is installed, you have what you need to start the work. It installs the Android SDK and a lot of other dependencies you might need during development. I will not detail this since it has been done before by others. If you want to read about it, there are a lot of resources available, for example:
[http://taco.visualstudio.com/en-us/docs/get-started-first-mobile-app/](http://taco.visualstudio.com/en-us/docs/get-started-first-mobile-app/)
### Front-End requirements
In a few words we needed:
* a login page
* a list of trips
* the possibility to create/edit a trip
* add/edit/delete receipts assigned to a trip
* the receipts form needed visibility rules for some fields (depending on a category)
### The project
Knowing the above, here is a screenshot of how I ended up structuring the project:
{:.center}
![Reisekosten App Frontend - Visual Studio Tools for Apache Cordova]( /images/reisekosten-app/projectstructure.jpg){:style="margin:auto"}
In the upper part of the solution we can see some folders created by the Visual Studio project structure, where OS-specific things are kept. For instance, the **merges** folder has android and ios subfolders, each of them containing a css folder in which a file called overrides.css resides. Of course these files have different content depending on the OS. What is cool about this is that at build time Visual Studio places the corresponding override in each OS-specific build (in the .apk or the .ipa in our case).
The **plugins** folder is nice too; here reside the plugins which help extend the web application with native app capabilities. For instance, we can install a plugin here which is able to access the device camera.
The **res** folder contains OS-specific icons (e.g. rounded icons for iOS and square icons for Android) and screens. Finally, in the upper part of the solution there is a **test** folder where the unit and integration tests reside.
The next folder, **www**, contains the project itself, the common code base. We see here a bunch of files which are nicely and clearly organized - maybe not for you yet, but hopefully things become clearer with the next code snippet of index.html, which is the core of the SPA and where the whole app runs:
```html
<!doctype html>
<html ng-app="travelExpensesApp" ng-csp>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0, minimum-scale=1.0, maximum-scale=1.0, user-scalable=no">
<meta name="format-detection" content="email=no">
<meta name="format-detection" content="telephone=no">
<link href="css/bootstrap.css" rel="stylesheet">
<link href="css/bootstrap-additions.css" rel="stylesheet" />
<link href="css/index.css" rel="stylesheet"/>
<!-- Cordova reference, this is added to the app when it's built. -->
<link href="css/overrides.css" rel="stylesheet"/>
<!-- Angular JS -->
<script src="scripts/frameworks/angular.js"></script>
<script src="scripts/frameworks/angular-resource.js"></script>
<script src="scripts/frameworks/angular-route.js"></script>
<script src="scripts/frameworks/angular-strap.js"></script>
<script src="scripts/frameworks/angular-strap.tpl.js"></script>
<script src="scripts/frameworks/angular-input-masks-standalone.min.js"></script>
<!-- Cordova reference, this is added to the app when it's built. -->
<script src="cordova.js"></script>
<script src="scripts/platformOverrides.js"></script>
<!-- Initialize all the modules -->
<script src="scripts/index.js"></script>
<!-- Services -->
<script src="scripts/services/cordova.js"></script>
<script src="scripts/services/global.js"></script>
<script src="scripts/services/httpInterceptor.js"></script>
<!-- Controllers -->
<script src="scripts/controllers/loginController.js"></script>
<script src="scripts/controllers/tripsController.js"></script>
<script src="scripts/controllers/tripDetailsController.js"></script>
<script src="scripts/controllers/receiptsController.js"></script>
<script src="scripts/controllers/receiptDetailsController.js"></script>
</head>
<body >
<div ng-view></div>
</body>
</html>
```
Being a SPA, everything runs in one place. All the necessary files are included in the header: first the app-specific CSS files, then the CSS override file, which is replaced with the OS-specific one at build time. Next, AngularJS and the AngularJS-specific libraries are included, followed by the platform-specific JavaScript overrides. Up to now we have included only libraries and overrides; what comes next is the base of the app, index.js.
A code snippet from here helps to understand the AngularJS application:
```javascript
(function () {
"use strict";
var travelExpensesApp = angular.module("travelExpensesApp", ["ngRoute", "mgcrea.ngStrap", "ui.utils.masks", "travelExpensesControllers", "travelExpensesApp.services"]);
angular.module("travelExpensesControllers", []);
angular.module("travelExpensesApp.services", ["ngResource"]);
travelExpensesApp.config([
"$routeProvider",
function($routeProvider) {
$routeProvider.
when("/", {
templateUrl: "partials/login.html",
controller: "LoginControl"
}).
when("/companies/:companyId/employees/:employeeId/trips", {
templateUrl: "partials/trips.html",
controller: "TripsControl"
}).
when("/companies/:companyId/employees/:employeeId/trips/:tripId", {
templateUrl: "partials/tripDetails.html",
controller: "TripDetailsControl"
}).
when("/companies/:companyId/employees/:employeeId/trips/:tripId/receipts/:receiptId", {
templateUrl: "partials/receiptDetails.html",
controller: "ReceiptDetailsControl"
}).
otherwise({
redirectTo: "/"
});
}
]);
```
The AngularJS application is organized using modules. We can think of modules as containers for controllers, services, directives, etc. These modules are reusable and testable. What we can see above is that I have declared an application-level module (travelExpensesApp) which depends on other modules.
I have created a separate module for controllers (travelExpensesControllers) and one for services (travelExpensesApp.services). I have also used some libraries as modules, like ngRoute (from angular-route.js), mgcrea.ngStrap (from angular-strap.js) and ui.utils.masks (from angular-input-masks-standalone.min.js). The module declarations link everything together and create the base of the app.
Besides modules, the above code snippet contains the routing configuration. Using ngRoute gives us routing, which helps a lot in defining a nice and clear structure for the front-end app. AngularJS routing permits a good separation of concerns: the GUI resides in HTML templates defined by the templateUrls, and the logic behind them is separated into the controller JavaScript files.
To understand this a bit better, let's take a look at the trips page, which is built up from trips.html as the template and TripsControl as the controller:
```html
<div class="navbar navbar-inverse">
<div class="navbar-header pull-left">
<button type="button" class="btn-lx-hamburger navbar-toggle pull-left" data-toggle="collapse" data-target="#myNavbar" bs-aside="aside" data-template-url="partials/aside.html" data-placement="left" data-animation="am-slide-left" data-container="body">
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand padding-rl10">
<span class="glyphicon glyphicon-user"></span> {% raw %}{{employeeData.first_name}} {{employeeData.last_name}}{% endraw %}
</a>
</div>
<div class="navbar-header pull-right">
<a class="navbar-brand" ng-disabled="true" ng-click="newTravel()"><span class="glyphicon glyphicon-plus"></span></a>
</div>
</div>
<div class="container">
<div waithttp ng-show="!waitingHttp">
<div class="table-responsive">
<table class="table table-striped">
<thead>
<tr>
<th>Datum</th>
<th>Von</th>
<th>Nach</th>
</tr>
</thead>
<tbody>
<tr ng-repeat="trip in trips | orderBy : departure_date : reverse" ng-click="editTravel(trip.id)">
<td>{% raw %}
{{trip.departure_date | date : 'dd.MM.yyyy'}}{% endraw %}
</td>
<td>{% raw %}
{{trip.departure}}{% endraw %}
</td>
<td>{% raw %}
{{trip.destination}}{% endraw %}
</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
```
The above HTML contains a navbar and a simple list of trips. What we notice from the start are the double curly braces (e.g. '\{\{ trip.departure \}\}') and the directives starting with the "ng" prefix. These are the main ways in which the template interacts with the controller:
```javascript
(function () {
"use strict";
angular.module("travelExpensesControllers").controller("TripsControl", ["$scope", "$http", "$routeParams", "$location","global", TripsControl]);
function TripsControl($scope, $http, $routeParams, $location, global) {
$scope.companyId = $routeParams.companyId;
$scope.employeeId = $routeParams.employeeId;
$scope.employeeData = global.user.employeeData;
$scope.aside = {
"title" : global.user.employeeData.first_name + " " + global.user.employeeData.last_name
};
var tripsRequest = {
method : "GET",
url : global.baseUrl + "companies/" + $scope.companyId + "/employees/" + $scope.employeeId + "/trips",
headers : {
"Content-Type": "application/hal+json",
"Session-Id": global.sessionId
}
};
$http(tripsRequest).then(
function (data) {
$scope.trips = data.data.ResourceList;
},
function (data) {
$scope.error = data;
// session probably bad, go back to login page
$location.path("/");
});
$scope.newTravel = function () {
delete global.tripDetails;
var url = "/companies/" + global.user.companyData.id + "/employees/" + global.user.employeeData.id + "/trips/0";
$location.url(url);
}
$scope.editTravel = function (travelId) {
var url = "/companies/" + global.user.companyData.id + "/employees/" + global.user.employeeData.id + "/trips/" + travelId;
$location.url(url);
}
$scope.doLogout = function () {
$location.path("/").search({invalidate: true});
}
}
})();
```
Above we can see how TripsControl is defined. To expose something usable in the template, we just extend/add a new property or function to the $scope variable. For example, we get the list of trips when we receive a successful answer to the $http request defined by the tripsRequest variable.
In a nutshell, the above should give you an idea of how Angular works and how it is used in this POC.
### What else?
Since I spent most of my time during this POC working with AngularJS, you might be wondering whether I had to implement something else worth mentioning.
Yes - for example, implementing hardware back button support for Android devices was pretty interesting and challenging. In order to achieve this I needed to create an Angular service which uses Cordova capabilities. The result looks like this:
```javascript
(function () {
"use strict";
angular.module("travelExpensesApp.services").factory("cordova", ["$q", "$window", "$timeout", cordova]);
/**
* Service that allows access to Cordova when it is ready.
*
* @param {!angular.Service} $q
* @param {!angular.Service} $window
* @param {!angular.Service} $timeout
*/
function cordova($q, $window, $timeout) {
var deferred = $q.defer();
var resolved = false;
// Listen to the 'deviceready' event to resolve Cordova.
// This is when Cordova plugins can be used.
document.addEventListener("deviceready", function () {
resolved = true;
deferred.resolve($window.cordova);
console.log("deviceready fired");
}, false);
// If the 'deviceready' event didn't fire after a delay, continue.
$timeout(function () {
if (!resolved && $window.cordova) {
deferred.resolve($window.cordova);
}
}, 1000);
return {
ready: deferred.promise,
platformId: cordova.platformId,
eventHandlers:[],
backAction: function (callback) {
if (typeof (callback) === "function") {
var callbackAction;
var removeCallbackAction = function () {
document.removeEventListener("backbutton", callbackAction);
}
callbackAction = function () {
callback();
removeCallbackAction();
}
// remove previously added event handler if it didn't removed itself already
// this can happen when a navigating deeper than 1 level for example :
// 1. trips - > tripDetails :: back action is added
// 2. tripDetails -> receiptDetials :: back action from step 1 removed , current action is added
if (this.eventHandlers.length > 0) {
document.removeEventListener("backbutton", this.eventHandlers[0]);
this.eventHandlers.splice(0,1);
}
document.addEventListener("backbutton", callbackAction, false);
this.eventHandlers.push(callbackAction);
}
}
};
}
})();
```
The above is not the simplest code and it might be improved, but for the POC it did a great job.
Another thing worth mentioning is that testing is pretty easy and straightforward using this stack. I will not go into detail, but it is worth knowing that in one day I managed to set up the environment and write some unit tests for receiptDetailsController.js using karma.js, and it took one more day to set up the environment and create some end-to-end tests using protractor.js.
Overall, this technology stack allowed us to lay a healthy and solid base for a project which can grow into a complex mobile app in the future. Development was quick and a nice POC app resulted from it. Is this stack a good choice for future mobile apps? At this moment I think it is. Let's see what the future will bring :).
@ -1,160 +0,0 @@
---
layout: post
title: The Automated Monolith
subtitle: Build, Deploy and Testing using Docker, Docker Compose, Docker Machine, go.cd and Azure
category: howto
tags: [devops]
author: marco_seifried
author_email: marco.seifried@haufe-lexware.com
header-img: "images/bg-post.alt.jpg"
---
Let's be honest: systems age, and while we try hard not to accumulate technical debt, sometimes you realize it's time for a bigger change. In this case, we looked at a Haufe-owned platform providing services like user, licence and subscription management for internal and external customers - written in Java, based on various open source components, somewhat automated, fairly monolithic.
Backed by our technical strategy, we try to follow the microservices approach (a good read is [Sam Newman's book](http://shop.oreilly.com/product/0636920033158.do)). We treat infrastructure as code and automate wherever possible.
So whenever we start from scratch, it's fairly straightforward to apply those principles.
But what if you already have your system, and it's grown over the years? How do you start? Keeping in mind we have a business critical system on a tight budget and a busy team. Try to convince business it's time for a technical face lift...
We decided to look at the current pain points and start with something that shows *immediate business results in a reasonably short timeframe*.
### Rough Idea
The team responsible for this platform has to develop, maintain and run the system. A fair amount of their time went into deploying environments for internal clients and helping them get up and running. This gets even trickier when different clients use an environment for testing simultaneously. Setting up a test environment from scratch - build, deploy, test - takes 5 man-days. That's the reality we tried to improve.
We wanted to have a one click deployment of our system per internal client directly onto Azure. Everything should be built from scratch, all the time and we wanted some automated testing in there as well.
To make it more fun, we decided to fix our first go-live date to 8 working weeks later by hosting a public [meetup](http://www.meetup.com/de-DE/Timisoara-Java-User-Group/events/228106103/) in Timisoara and presenting what we did! The pressure (or fun, depending on your viewpoint) was on...
So time was an issue; we wanted to be fast and have something to work with. That means we didn't spend much time evaluating every little component we used, but made sure we were flexible enough to change it easily - evolutionary refinement instead of initial perfection.
### How
Our guiding principles:
* **Infrastructure as code** - EVERYTHING. IN CONFIG FILES. CHECKED IN. No implicit knowledge in people's heads.
* **Immutable Servers** - We build from scratch, the whole lot. ALWAYS. NO UPDATES, HOT FIX, NOTHING.
* **Be independent of underlying infrastructure** - it shouldn't matter where we deploy to. So we picked Azure just for the fun of it.
Main components we used:
* [go.cd](https://www.go.cd/) for continous delivery
* [Docker](https://www.docker.com/): All our components run within docker containers
* [Bitbucket](https://bitbucket.org/) as repository for config files and scripts
* [Team Foundation Server](https://www.visualstudio.com/en-us/products/tfs-overview-vs.aspx) as code repository
* [Artifactory](https://www.jfrog.com/open-source/#os-arti) as internal docker hub
* [ELK stack](https://www.elastic.co/webinars/introduction-elk-stack) for logging
* [Grafana](http://grafana.org/) with [InfluxDB](http://grafana.org/features/#influxdb) for basic monitoring
The flow:
{:.center}
[![go.cd Flow]( /images/automated-monolith/automated_monolith_flow.jpg)](http://dev.haufe.com/images/automated-monolith/automated_monolith_flow.jpg){:style="margin:auto"}
Let's first have a quick look at how go.cd works:
Within go.cd you model your workflows using pipelines. Those pipelines contain stages which you use to run jobs, which themselves contain tasks. Stages run in order, and if one fails, the pipeline will stop. Jobs run in parallel; go.cd is taking care of that.
The trigger for a pipeline to run is called a material - this can be a git repository where a commit starts the pipeline, but also a timer which starts the pipeline regularly.
You can also define variables on multiple levels - we have used them at the pipeline level - where you can store things like host names and the like. There is also an option to store secure variables.
In our current setup we use three pipelines: the first one creates a Docker image for every component in our infrastructure - database, message queue, application server. It builds images for the logging part - Elasticsearch, Kibana and Fluentd - as well as for monitoring and testing.
We also pull an EAR file out of our Team Foundation Server and deploy it onto the application server.
Haufe has written and open sourced a [plugin](https://github.com/Haufe-Lexware/gocd-plugins/wiki/Docker-pipeline-plugin) to help ease the task to create docker images.
Here is how to use it:
Put in an image name and point to the dockerfile:
![go.cd Flow]( /images/automated-monolith/docker_plugin_1.jpg){:style="margin:auto"}
You can also tag your image:
![go.cd Flow]( /images/automated-monolith/docker_plugin_2.jpg){:style="margin:auto"}
Our docker images get stored in our internal Artifactory which we use as a docker hub. You can add your repository and the credentials for that as well:
![go.cd Flow]( /images/automated-monolith/docker_plugin_3.jpg){:style="margin:auto"}
Those images are based on our [docker guidelines](https://github.com/Haufe-Lexware/docker-style-guide).
The next step is to deploy our environment onto Azure. For that purpose we use a second go.cd pipeline with these stages:
![go.cd Flow]( /images/automated-monolith/deploy_stages.jpg){:style="margin:auto"}
The first step is to create a VM on Azure. In this case we create a custom command in go.cd and simply run a shell script:
![go.cd Flow]( /images/automated-monolith/custom_command.jpg){:style="margin:auto"}
The core of the script is a docker-machine command which creates an Ubuntu-based VM that will serve as a Docker host:
~~~bash
docker-machine -s ${DOCKER_LOCAL} create -d azure --azure-location="West Europe" --azure-image=${AZURE_IMAGE} --azure-size="Standard_D3" --azure-ssh-port=22 --azure-username=<your_username> --azure-password=<password> --azure-publish-settings-file azure.settings ${HOST}
~~~
Once the VM is up and running, we run docker compose commands to pull our images from Artifactory (in this case the setup of the logging infrastructure):
~~~yml
version: '2'
services:
elasticsearch:
image: registry.haufe.io/atlantic_fs/elasticsearch:v1.0
hostname: elasticsearch
expose:
- "9200"
- "9300"
networks:
- hgsp
fluentd:
image: registry.haufe.io/atlantic_fs/fluentd:v1.0
hostname: fluentd
ports:
- "24224:24224"
networks:
- hgsp
kibana:
env_file: .env
image: registry.haufe.io/atlantic_fs/kibana:v1.0
hostname: kibana
expose:
- "5601"
links:
- elasticsearch:elasticsearch
networks:
- hgsp
nginx:
image: registry.haufe.io/atlantic_fs/nginx:v1.0
hostname: nginx
ports:
- "4443:4443"
restart:
always
networks:
- hgsp
networks:
hgsp:
driver: bridge
~~~
As a last step, we have one pipeline to simply delete everything we've just created.
### Outcome
We kept our timeline, presented what we did and were super proud of it! We even got cake!!
![go.cd Flow]( /images/automated-monolith/cake.jpg){:style="margin:auto"}
Setting up a test environment now only takes 30 minutes, down from 5 days. And even that can be improved by running stuff in parallel.
We also have a solid base we can work with - and we have many ideas on how to take it further. More testing will be included soon, such as more code and security tests. We will include gates so that the pipeline only proceeds once the code has reached a certain quality or has improved in a certain way since the last test. We will not stop at automating the test environment, but will look at our other environments as well.
All the necessary steps are in code, which makes the process repeatable and fast. There are no dependencies on anything else. This enables our internal clients to set up their personal environments on their own in a fast and bulletproof way.
---
Update: You can find slides of our talk [here](http://www.slideshare.net/HaufeDev/the-automated-monolith)
@ -1,27 +0,0 @@
---
layout: post
title: CQRS, Eventsourcing and DDD
subtitle: Notes from Greg Young's CQRS course
category: conference
tags: [microservice]
author: frederik_michel
author_email: frederik.michel@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
In these notes I would like to share my thoughts about the course on the [above mentioned topics](http://lmgtfy.com/?q=greg+young+cqrs) which I took together with Rainer Michel and Raul-Andrei Firu in London. Over three days last November, Greg Young explained, with many practical examples from his career, the benefits especially of CQRS and how it relates to concepts like Event Sourcing, which is a way of reaching eventual consistency.
### CQRS and DDD
So let's get to the content of the course. It all starts with some thoughts about Domain Driven Design (DDD), especially about how to arrive at a design. This included strategies for getting information out of domain experts and for reaching a ubiquitous language between different departments. All in all, Greg pointed out that the software simply does not have to solve every problem there is, which is why the resulting domain model is nothing like the ERM that might come to mind when solving such problems. One should think more about the actual use cases of the software than about solving each and every corner case that will just never happen. He showed very interesting strategies for breaking up relations between domains in order to minimize the number of getters and setters used across domain boundaries (one common technique is sketched below). At the end Greg spoke briefly about Domain Services, which deal with logic that spans different aggregates and keeps those transactions consistent. More often than not, however, one should evaluate eventual consistency instead of such domain services, as the latter explicitly break the rule of not using more than one aggregate within one transaction. In this part Greg only touched on CQRS very briefly, describing it as a step on the way towards an architecture with eventual consistency.
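As an illustration of such a technique (my own Java sketch, not code from the course): instead of holding a navigable object reference to another aggregate, an aggregate only stores that aggregate's identity, so a single transaction never reaches across aggregate boundaries.

``` java
import java.util.UUID;

// Instead of holding a full Customer object (with getters/setters reaching into another aggregate),
// the Order aggregate only stores the Customer's identity.
public class Order {

    private final UUID orderId;
    private final UUID customerId;   // reference by ID, not by object

    public Order(UUID orderId, UUID customerId) {
        this.orderId = orderId;
        this.customerId = customerId;
    }

    public UUID getOrderId() {
        return orderId;
    }

    public UUID getCustomerId() {
        return customerId;
    }
}
```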
### Event Sourcing
This topic was about applying event sourcing to a pretty common architecture: basically a relational database with an OR-mapper on top and, above that, domains/domain services, plus a thin read model on the side based on a DB with data in 1st normal form. Greg showed that this architecture would eventually fail in production. The problem is keeping these instances in sync, especially after problems have occurred in production; in those cases it is occasionally very hard to get the read and the write model back on the same page. To move this kind of architecture towards event sourcing, communication between the components/containers has to become more command-based. This is generally realized by introducing an event store which gathers all the commands coming from the frontend (a minimal sketch of the idea follows below). That approach eventually leads to a point where the aforementioned 3rd-normal-form database (which up to that point has been the write model) is dropped completely in favor of the event store. There are two reasons for this: first, the event store already holds all the information that is in the database; second, and maybe more importantly, it stores more information than the database, because the latter generally only keeps the current state, while the event store also keeps every event in between, which can be relevant for analyzing the data, reporting, … What the resulting architecture also brings to the table is eventual consistency, since a command sent by the UI takes some time until its effect is visible in the read model. The main point about eventual consistency is that the data in the read model is not wrong data, it might just be old data, which in most cases is not critical. However, there are cases where consistency is required, and for these situations there are strategies to simulate consistency. This can be done by making the time the server needs to get the data into the read model smaller than the time the client needs before it retrieves the data again. Mostly this is done by simply telling the user that the changes have been received by the server, or the UI just fakes the output.
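As a minimal illustration of the mechanics (my own Java sketch, not course material): the aggregate appends immutable events, and its state - or any read model - can be rebuilt at any point in time by replaying them.

``` java
import java.util.ArrayList;
import java.util.List;

public class EventSourcedAccount {

    /** Events are immutable facts; in a real system they would live in the event store, never in a mutable row. */
    interface Event {}
    record Deposited(long amount) implements Event {}
    record Withdrawn(long amount) implements Event {}

    private final List<Event> changes = new ArrayList<>();  // appended, never updated
    private long balance;                                   // derived state, always recomputable

    public void deposit(long amount) {
        apply(new Deposited(amount));
    }

    public void withdraw(long amount) {
        if (amount > balance) {
            throw new IllegalStateException("insufficient funds");
        }
        apply(new Withdrawn(amount));
    }

    private void apply(Event event) {
        // State transitions happen only by applying events ...
        if (event instanceof Deposited d) balance += d.amount();
        if (event instanceof Withdrawn w) balance -= w.amount();
        changes.add(event);
    }

    /** ... which is also how the aggregate (or a read model) is rebuilt from the stored history. */
    public static EventSourcedAccount replay(List<Event> history) {
        EventSourcedAccount account = new EventSourcedAccount();
        history.forEach(account::apply);
        return account;
    }

    public long balance() {
        return balance;
    }
}
```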
To sum this up: the pros of an approach like this are above all that every point in time can be restored (no data loss at all) and that you can pretend the system still works even if the database is down (we just show the user that we received the message, and everything can be restored once the database is up again). In addition, if a SEDA-like approach is used it is very easy to monitor the solution and determine where the time-consuming processes are. One central point in this course was that by all means we should prevent widespread outages - meaning errors that make the complete application crash or stall and affect many or all users.
### Process Managers
This topic was essentially about separation of concerns, in the sense that one should separate process logic from business logic. This should be done as much as possible, because the system can then easily be changed over to a workflow engine in the longer run. Greg showed two ways of building a process manager. The first one knows in which sequence the business logic has to run and triggers each step one after the other. In the second approach, the process manager creates a list of the processes that should be run in the correct order; it then hands this list over to the first process, which passes it on to the next, and so forth. In this case the process logic lives in the list, or rather in the creation of the list (a rough sketch follows below).
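Here is a rough sketch of the second variant (my own Java illustration with hypothetical step names): the process manager only assembles the routing slip; each step performs its business logic and forwards the rest of the slip.

``` java
import java.util.LinkedList;
import java.util.Queue;

public class RoutingSlip {

    /** A business-logic step: it does its work and then passes the remaining slip on. */
    public interface Step {
        void process(RoutingSlip remainingSlip);
    }

    private final Queue<Step> steps = new LinkedList<>();

    public RoutingSlip add(Step step) {
        steps.add(step);
        return this;
    }

    /** Hands the slip to the next step; the process logic lives entirely in the order of this list. */
    public void forward() {
        Step next = steps.poll();
        if (next != null) {
            next.process(this);
        }
    }

    public static void main(String[] args) {
        new RoutingSlip()
                .add(slip -> { System.out.println("reserve seats");     slip.forward(); })
                .add(slip -> { System.out.println("charge credit card"); slip.forward(); })
                .add(slip -> { System.out.println("send confirmation");  slip.forward(); })
                .forward();
    }
}
```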
### Conclusion
Even though Greg sometimes switched pretty fast from very abstract thoughts to going deep into source code, the course was never boring - rather exciting, actually, and absolutely fun to follow. The different ways of approaching a problem were shown using very good examples - Greg really did a great job there. I can absolutely recommend this course to people who want to know more about these topics. From my point of view this kind of strategy was very interesting, as I see many people trying to create the "perfect" piece of software, paying attention to cases that just won't happen or spending a lot of time on cases that happen very, very rarely rather than defining them as known business risks.
@ -1,176 +0,0 @@
---
layout: post
title: Generating Swagger from your API
subtitle: How to quickly generate the swagger documentation from your existing API.
category: howto
tags: [api]
author: tora_onaca
author_email: teodora.onaca@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
If you already have an existing API and you just want to generate the swagger documentation from it, there are a couple of easy steps to make it work. First off, you should be familiar with Swagger and, in particular, with [swagger-core](https://github.com/swagger-api/swagger-core). Assuming that you coded your REST API using JAX-RS, there are several [guides](https://github.com/swagger-api/swagger-core/wiki/Swagger-Core-JAX-RS-Project-Setup-1.5.X) available, depending on your library of choice (Jersey or RESTEasy), to get you set up very fast.
In our case, working with RESTEasy, it was a matter of adding the maven dependency:

``` xml
<dependency>
    <groupId>io.swagger</groupId>
    <artifactId>swagger-jaxrs</artifactId>
    <version>1.5.8</version>
</dependency>
```
Note: please make sure to set the jar version to the latest one available, so that the latest bug fixes are included.
In order to hook up swagger-core in the application, there are multiple solutions, the easiest of which is to just use a custom `Application` subclass.
``` java
public class SwaggerTestApplication extends Application {
public SwaggerTestApplication() {
BeanConfig beanConfig = new BeanConfig();
beanConfig.setVersion("1.0");
beanConfig.setSchemes(new String[] { "http" });
beanConfig.setTitle("My API");
beanConfig.setBasePath("/TestSwagger");
beanConfig.setResourcePackage("com.haufe.demo.resources");
beanConfig.setScan(true);
}
@Override
public Set<Class<?>> getClasses() {
HashSet<Class<?>> set = new HashSet<Class<?>>();
set.add(Resource.class);
set.add(io.swagger.jaxrs.listing.ApiListingResource.class);
set.add(io.swagger.jaxrs.listing.SwaggerSerializers.class);
return set;
}
}
```
Once this is done, you can access the generated `swagger.json` or `swagger.yaml` at the location: `http(s)://server:port/contextRoot/swagger.json` or `http(s)://server:port/contextRoot/swagger.yaml`.
Note that the `title` element for the API is mandatory, so a missing one will generate an invalid swagger file. Also, any misuse of the annotations will generate an invalid swagger file. Any existing bugs of swagger-core will have the same effect.
In order for a resource to be documented, other than including it in the list of classes that need to be parsed, it has to be annotated with @Api. You can check the [documentation](https://github.com/swagger-api/swagger-core/wiki/Annotations-1.5.X) for the existing annotations and use any of the described fields.
A special case that might give you some headaches is the use of subresources. The REST resource code usually goes something like this:
``` java
@Api
@Path("resource")
public class Resource {
@Context
ResourceContext resourceContext;
@GET
@Produces("application/json")
@ApiOperation(value = "Returns something")
public String getResource() {
return "GET";
}
@POST
@Produces("application/json")
public String postResource(String something) {
return "POST" + something;
}
@Path("/{subresource}")
@ApiOperation(value = "Returns a subresource")
public SubResource getSubResource() {
return resourceContext.getResource(SubResource.class);
}
}
@Api
public class SubResource {
@PathParam("subresource")
private String subresourceName;
@GET
@Produces("application/json")
@ApiOperation(value = "Returns subresource something")
public String getSubresource() {
return "GET " + subresourceName;
}
@POST
@Produces("application/json")
@ApiOperation(value = "Posts subresource something")
public String postSubresource(String something) {
return "POST " + subresourceName + something;
}
}
```
The swagger parser works like a charm if it finds the @Path, @GET and @POST annotations where it thinks they should be. In the case depicted above, the subresource is returned from the parent resource and does not have a @Path annotation at the class level. A lower version of swagger-core will generate an invalid swagger file, so please use the latest version for correct code generation. If you want to make your life a bit harder and you have a path that goes deeper, something like /resource/{subresource}/{subsubresource}, things might get a bit more complicated.
In the Subresource class, you might have a @PathParam for holding the value of the {subresource}. The Subsubresource class might want to do the same. In this case, the generated swagger file will contain the same parameter twice, which results in an invalid swagger file. It will look like this:
``` yaml
parameters:
  - name: "subresource"
    in: "path"
    required: true
    type: "string"
  - name: "subsubresource"
    in: "path"
    required: true
    type: "string"
  - in: "body"
    name: "body"
    required: false
    schema:
      type: "string"
  - name: "subresource"
    in: "path"
    required: true
    type: "string"
```
In order to fix this, use `@ApiParam(hidden=true)` for the subresource `@PathParam` in the `Subsubresource` class. See below.
``` java
@Api
public class SubSubResource {
@ApiParam(hidden=true)
@PathParam("subresource")
private String subresourceName;
@PathParam("subsubresource")
private String subsubresourceName;
@GET
@Produces("application/json")
@ApiOperation(value = "Returns subsubresource something")
public String getSomethingw() {
return "GET " + subresourceName + "/" + subsubresourceName;
}
@POST
@Produces("application/json")
@ApiOperation(value = "Posts subsubresource something")
public String postSomethingw(String something) {
return "POST " + subresourceName + "/" + subsubresourceName + " " +something;
}
}
```
There might be more tips and tricks that you will discover once you start using the annotations for your API, but the learning curve is not steep, and once you are familiar with swagger (both the spec and swagger-core) you will be able to document your API really fast.

View File

@ -1,30 +0,0 @@
---
layout: post
title: SAP CodeJam on May 12th, 2016
subtitle: Calling all SAP ABAP developers in the Freiburg area
category: general
tags: [culture]
author: holger_reinhardt
author_email: holger.reinhardt@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
On Thursday, May 12th, 2016, it is that time again: we will be hosting another SAP CodeJam at our offices.
The topic: ABAP in Eclipse.
{:.center}
![SAP JAM]({{ site.url }}/images/sap_codejam.jpg){:style="margin:auto"}
An exciting date for all ABAP developers who are interested in the current development tools and want to understand where things are heading. It is a great opportunity to gain first hands-on experience with Eclipse as an IDE and to get an outlook on what is coming next. You will work on your own notebook against the latest SAP Netweaver stack (ABAP 7.50), using an instance provided by SAP via AWS.
This invitation is addressed not only to Haufe-internal developers, but also to ABAP gurus from other companies in the region. Please forward it to ABAP developers at other companies.
Participation is free of charge via this [registration link](https://www.eventbrite.com/e/sap-codejam-freiburg-registration-24300920708).
There are 30 seats, first come, first served.
Best regards from the Haufe SAP team
PS: Yes, ABAP skills are required to participate.

View File

@ -1,205 +0,0 @@
---
layout: post
title: API Management Components
subtitle: What's inside that API Management box?
category: general
tags: [cloud, api]
author: martin_danielsson
author_email: martin.danielsson@haufe-lexware.com
header-img: "images/bg-post-api.jpg"
---
### Introduction
API Management is one of the more hyped-up buzzwords you hear all over the place: at conferences, in various blog posts, in the space of the internet of things, containers and microservices. At first sight it looks like a brilliant idea, simple and easy, and indeed it is! But unfortunately it is not quite as simple as it might look when you draw up your first architectural diagrams.
### Where do we start?
We're accustomed to architecting large-scale systems, and we are trying to move into the microservice direction. It's tempting to put in API Management as one of the components used for encapsulating and insulating the microservices, in a fashion like this:
![API Management in front of "something"]( /images/apim-components/apim-as-a-simple-layer.png)
This definitely helps in laying out the deployment architecture of your system(s), but in many cases it falls short. If you are accustomed to introducing API Management components into your microservice architecture and you already have your blueprints in place, this may be enough; but to reach that point, you will need to do some more research on what you actually want to achieve with an API Management solution (in short: "APIm").
### Common Requirements for an APIm
Another "problem" is that it's easy to just look at the immediate requirements for API Management solutions and compare them to the various solutions on the market. Obviously, you need to specify your functional requirements first and check whether they match the solution you have selected; common APIm requirements are, for example, the following:
* Proxying and securing backend services
* Rate limiting/throttling of API calls
* Consumer identification
* API Analytics
* Self-service API subscriptions
* API Documentation Portals
* Simple mediations (transformations)
* Configurability over API (APIm APIs, so to say)
* Caching
These requirements are very diverse in nature, and they are usually not all equally important. Nor are all features equally well covered by all APIm solutions, even if most solutions obviously try to cover them all to some extent. Some do this via an "all inclusive" type of offering, some take a more fine-grained approach.
In the next section, I will try to show which types of components can usually be found inside an API Management solution, and where the interfaces between the different components are located.
### A closer look inside the box
If we open up that blue box simply called "API Management", we can find a plethora of sub-components, which may or may not be present and/or more or less feature-packed depending on the actual APIm solution you choose. The following diagram shows the most usual components inside APIm solutions on the market today:
![API Management Components]( /images/apim-components/apim-in-reality.png)
When looking at an API Management solution, you will find that in most cases one or more components are missing in one way or another, or some component is less elaborate than in other solutions. When assessing APIms, checking the different components can help you find out whether the APIm actually matches your requirements.
We will look at the following components:
* [API Gateway](#apigateway)
* [API Identity Provider (IdP)](#apiidp)
* [Configuration Database](#configdb)
* [Cache](#cache)
* [Administration UI](#adminui)
* [Developer Portal](#devportal)
* [Portal Identity Provider (IdP)](#portalidp)
* [Logging](#logging)
* [Analytics](#analytics)
* [Audit Log](#audit)
<a name="apigateway"></a>
#### API Gateway
The core of an APIm is quite obviously the API Gateway. It's the component of the APIm solution through which the API traffic is routed, and which usually ensures that the backend services are secured. Depending on the architecture of the APIm solution, the API Gateway can be more or less integrated with the Gateway Identity Provider ("API IdP" in the picture), which provides an identity for the consuming client.
APIm solution requirements usually focus on this component, as it's the core functionality. This component is always part of the APIm solution.
<a name="apiidp"></a>
#### API Identity Provider
A less obvious part of the APIm conglomerate is the API Identity Provider. Depending on your use case, you may only want to know which API consumers are using your APIs via the API Gateway, or you may want full-featured OAuth support. Most vendors have direct support for API key authentication (on a machine/application-to-API-gateway basis), but not all have built-in support for OAuth mechanisms and/or allow plugging in OAuth support.
In short: Make sure you know what your requirements are regarding the API Identity Provider *on the API plane*; this is to be treated separately from the *API Portal users*, which may have [their own IdP](#portalidp).
<a name="configdb"></a>
#### Configuration Database
In most cases, the API Gateway draws its configuration from a configuration database. In some cases, the configuration is completely separated from the API Gateway, in others it's integrated into the API Gateway (this is especially true for SaaS offerings).
The configuration database may contain the following things:
* API definitions
* Policy rules, e.g. throttling settings, Access Control lists and similar
* API Consumers, if not stored separately in the [API IdP](#apiidp)
* API Portal Users, if not separately stored in an [API Portal IdP](#portalidp)
* API Documentation, if not stored in separate [portal](#devportal) database
The main point to understand regarding the configuration database is that in most cases, the API Gateway and/or its corresponding datastore is a stateful service which carries information that not only comes from source code (policy definitions, API definitions and such things), but potentially also from users. Updating and deploying API Management solutions must take this into account and provide for migration/upgrade processes.
<a name="cache"></a>
#### Cache
When dealing with REST APIs, it is often useful to have a dedicated caching layer. Some (actually most) APIm solutions provide such a component out of the box, while others do not. How caches are incorporated varies between the different solutions, but it ranges from pure `varnish` installations to key-value stores such as Redis. Different systems have different approaches to how and what is cached during API calls, and which kinds of calls are cacheable.
It is worth paying attention to which degree of automation is offered, and to what extent you can customize the behaviour of the cache, e.g. depending on the value of headers or `GET` parameters. What you need is obviously highly dependent on your requirements. In some situations you will not care about the caching layer being inside the APIm, but for high throughput this is definitely worth considering, to be able to answer requests as high up in the chain as possible.
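To make that customization point a bit more concrete, here is a minimal, product-agnostic sketch (purely illustrative, not tied to any particular APIm) of how a cache key could be derived from the path, the query string and a small whitelist of headers, with entries expiring after a TTL:
~~~csharp
using System;
using System.Collections.Concurrent;
using System.Linq;

// Purely illustrative response cache: the key varies by path, query string
// and a whitelist of headers; entries expire after a configurable TTL.
public class ResponseCache
{
    private class Entry { public string Body; public DateTime Expires; }

    private readonly ConcurrentDictionary<string, Entry> _entries = new ConcurrentDictionary<string, Entry>();
    private readonly string[] _varyHeaders = { "Accept", "Accept-Language" }; // assumption: vary on these headers

    public string BuildKey(string path, string query, Func<string, string> getHeader)
    {
        var headerPart = string.Join("|", _varyHeaders.Select(h => h + "=" + (getHeader(h) ?? "")));
        return path + "?" + query + "|" + headerPart;
    }

    public bool TryGet(string key, out string body)
    {
        body = null;
        Entry entry;
        if (!_entries.TryGetValue(key, out entry) || entry.Expires < DateTime.UtcNow)
        {
            return false; // miss or expired: the gateway would forward the call to the backend
        }
        body = entry.Body;
        return true;
    }

    public void Put(string key, string body, TimeSpan ttl)
    {
        _entries[key] = new Entry { Body = body, Expires = DateTime.UtcNow.Add(ttl) };
    }
}
~~~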
<a name="adminui"></a>
#### Administration UI
In order to configure an APIm, many solutions provide an administration UI to configure the API Gateway. In some cases (like with [Mashape Kong](http://www.getkong.org)), there isn't any administration UI, only an API to configure the API Gateway itself. But usually there is some kind of UI which helps you configure your Gateway.
The Admin UI can incorporate many things from other components, such as administering the [API IdP](#apiidp) and [Portal IdP](#portalidp), or viewing [analytics information](#analytics), among other things.
<a name="devportal"></a>
#### Developer Portal
The Developer Portal is, in addition to the API Gateway, what you usually think about when talking about API Management: the API Developer Portal is the place you as a developer go to when looking for information on an API. Depending on how elaborate the Portal is, it will let you do things like:
* View API Documentation
* Read up on How-tos or best practices documents
* Self-sign up for use of an API
* Interactively trying out of an API using your own credentials ([Swagger UI](http://swagger.io/swagger-ui/) like)
Not all APIm systems actually provide an API Portal, and for quite a few use cases (e.g. mobile API gateways, pure website APIs) it's not even needed. Some systems, especially SaaS offerings, provide a fully featured Developer Portal out of the box, while others only have very simple portals, or even none at all.
Depending on your own use case, you may need one or multiple instances of a Developer Portal. It's normal practice that an API Portal is tied to a single API Gateway, even if there are some solutions which allow more flexible deployment layouts. Checking your requirements on this point is important to make sure you get what you expect, as Portal feature sets vary wildly.
<a name="portalidp"></a>
#### Portal Identity Provider
Using an API Developer Portal (see above) usually requires the developer to sign in to the portal using some kind of authentication. This is what's behind the term "Portal Identity Provider", as opposed to the IdP which is used for the actual access to the API (the [API IdP](#apiidp)). Depending on your requirements, you will want to enable logging in using
* Your own LDAP/ADFS instance
* Social logins, such as Google, Facebook or Twitter
* Developer logins, such as BitBucket or GitHub.
Most solutions will use those identities to federate to an automatically created identity inside the API Portal; i.e. the API Developer Portal will link their Portal IdP users with a federated identity and let developers use those to log in to the API Portal. Usually, enabling social or developer logins will require you to register your API Portal with the corresponding federated identity provider (such as Google or Github). Adding Client Secrets and Credentials for your API Portal is something you will want to be able to do, depending on your requirements.
<a name="logging"></a>
#### Logging
Another puzzle piece in APIm is the question of how to handle logging, as logs can be emitted by most APIm components separately. Most solutions do not offer an out-of-the-box answer for this (I actually haven't found any APIm with built-in logging functionality at all), but most allow for plugging in any kind of log aggregation mechanism, such as [log aggregation with fluentd, elastic search and kibana](/log-aggregation).
Depending on your requirements, you will want to look at how to aggregate logs from at least the following components:
* API Gateway (API Access logs)
* API Portal
* Administration UI (overlaps with [audit logs](#audit))
You will also want to verify that you don't introduce unnecessary latency when logging, e.g. by using queueing mechanisms close to the log-emitting party.
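As a minimal sketch of such a queueing mechanism (illustrative only, not a production logger): the request path only enqueues a log line, and a background worker ships it to whatever aggregation backend you use:
~~~csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Illustrative only: callers just enqueue a log line (cheap), a single
// background task forwards the entries to the log aggregation backend.
public class AsyncLogShipper : IDisposable
{
    private readonly BlockingCollection<string> _queue = new BlockingCollection<string>(boundedCapacity: 10000);
    private readonly Task _worker;
    private readonly Action<string> _ship; // injected: e.g. send to fluentd/ELK, details out of scope here

    public AsyncLogShipper(Action<string> ship)
    {
        _ship = ship;
        _worker = Task.Run(() =>
        {
            foreach (var line in _queue.GetConsumingEnumerable())
            {
                try { _ship(line); } catch { /* never let logging break the request path */ }
            }
        });
    }

    // Called from the hot path: does not wait for the backend.
    public void Log(string line)
    {
        _queue.TryAdd(line); // drops the entry if the queue is full instead of blocking the request
    }

    public void Dispose()
    {
        _queue.CompleteAdding();
        _worker.Wait();
    }
}
~~~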
<a name="analytics"></a>
#### The Analytics Tier
The area "Analytics" is also something where the different APIm solutions vary significantly in functionality, when it's present at all. Depending on your requirements, analytics can be handled when looking at logging, e.g. by leveraging elastic search and kibana, or similar approaches. Most SaaS offerings have pre-built analytics solutions which offer a rich variety of statistics and drill-down possibilities without having to put in any extra effort. Common analytics include the following:
* API Usage by API
* API Calls
* Bandwidth
* API Consumers by Application
* Geo-location of API users (mobile applications)
* Error frequency and error types (4xx, 5xx,...)
<a name="audit"></a>
#### The Audit Log
The Audit Log is a special case of logging, which may or may not be separate from the general logging components. The audit log stores changes made to the configuration of the APIm solution, e.g.
* API Configuration changes
* Additions and deletions of APIm Consumers (clients)
* Updates of API definitions
* Manually triggered restarts of components
* ...
Some solutions have built-in auditing functionality; the AWS API Gateway, for example, offers this. The special nature of audit logs is that they must be tamper-proof and must never be changeable after the fact. Normal logs may be subject to clean-up, which should not (so easily) be the case with audit logs.
### API Management Vendors
{:.center}
![API Management Providers]( /images/apim-components/apim-providers.png){:style="margin:auto"}
Incomplete list of API Management Solution vendors:
* [3scale](https://www.3scale.net)
* [Akana API Management](https://www.akana.com/solution/api-management)
* [Amazon AWS API Gateway](https://aws.amazon.com/api-gateway)
* [API Umbrella](https://apiumbrella.io)
* [Axway API Management](https://www.axway.com/en/enterprise-solutions/api-management)
* [Azure API Management](https://azure.microsoft.com/en-us/services/api-management/)
* [CA API Gateway](http://www.ca.com/us/products/api-management.html)
* [Dreamfactory](https://www.dreamfactory.com)
* [IBM API Connect](http://www-03.ibm.com/software/products/en/api-connect)
* [Mashape Kong](https://getkong.org)
* [TIBCO Mashery](http://www.mashery.com)
* [Tyk.io](https://tyk.io)
* [WSO2 API Management](http://wso2.com/api-management/)
---
<small>
The [background image](/images/bg-post-api.jpg) was taken from [flickr](https://www.flickr.com/photos/rituashrafi/6501999863) and adapted using GIMP. You are free to use the adapted image according the linked [CC BY license](https://creativecommons.org/licenses/by/2.0/).
</small>

View File

@ -1,202 +0,0 @@
---
layout: post
title: How to use an On-Premise Identity Server in ASP.NET
subtitle: Log in to an ASP.NET application with ADFS identity and check membership in specific groups
category: howto
tags: [cloud]
author: Robert Fitch
author_email: robert.fitch@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
This article shows you how to develop an ASP.NET application to:
- Log in with an on-premise ADFS Identity
- Check whether the user belongs to a given group (for example, a certain mailing list)
# Prepare the Project
## Create ##
Create a new ASP.NET Web Application, for example:
{:.center}
![]( /images/adfs-identity/pic26.jpg){:style="margin:auto"}
On the next page, select MVC, then click on "Change Authentication":
{:.center}
![]( /images/adfs-identity/pic27.jpg){:style="margin:auto"}
You will be sent to this dialog:
{:.center}
![]( /images/adfs-identity/pic28.jpg){:style="margin:auto"}
- Select **Work and School Accounts**
- Select **On-Premises**
- For the **On-Premises Authority**, ask IT for the public URL of your FederationMetadata.xml on the identity server, e.g.
`https://xxxxxxxxxx.com/FederationMetadata/2007-06/FederationMetadata.xml`
- For the **App ID URI**, you must enter an identifier for your app. This is not a real URL address, just a unique identifier, for example `http://haufe/mvcwithadfs`.
**Important:** The **App ID URI** identifies your app with the on-premise ADFS identity server. This same App ID must be registered on the ADFS identity server by IT as a **Relying Party Trust** identifier (sometimes known as **Realm**), so that the server will accept requests.
Finish up the project creation process.
## Edit some Settings
Make sure that the project is set to run as HTTPS:
{:.center}
![]( /images/adfs-identity/pic29.jpg){:style="margin:auto"}
Compile the project.
## The authentication code ##
If you are wondering where all of the authentication code resides (or if you need to modify an existing project!), here are the details:
The App ID URI and the On-Premises Authority URL are stored in the `<appSettings>` node of web.config:
~~~xml
<add key="ida:ADFSMetadata" value="https://xxxxxxxxxx.com/FederationMetadata/2007-06/FederationMetadata.xml" />
<add key="ida:Wtrealm" value="http://haufe/mvcwithadfs" />
~~~
And the OWIN-Code to specify the on-premise authentication is in `Startup.Auth.cs`:
~~~csharp
public partial class Startup
{
private static string realm = ConfigurationManager.AppSettings["ida:Wtrealm"];
private static string adfsMetadata = ConfigurationManager.AppSettings["ida:ADFSMetadata"];
public void ConfigureAuth(IAppBuilder app)
{
app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);
app.UseCookieAuthentication(new CookieAuthenticationOptions());
app.UseWsFederationAuthentication(
new WsFederationAuthenticationOptions
{
Wtrealm = realm,
MetadataAddress = adfsMetadata
});
}
}
~~~
# Configure the On-Premise Identity Server (Job for IT) #
On the identity server, these are the critical configuration pages for a new **Relying Party Trust**.
## Identifiers ##
{:.center}
![]( /images/adfs-identity/pic31.jpg){:style="margin:auto"}
**Display Name:** This is the name under which IT sees the **Relying Party Trust**.
**Relying Party identifiers:** This is a list of relying party identifiers, known on "our" ASP.NET side as **App ID URI**. The only important one is the **App ID URI** we assigned to our app when creating it. On this screen, you also see `https://localhost:44306`. This was automatically set by the Relying Party Trust Wizard when it asked for the first endpoint, since it assumed that the endpoint is also a default identifier. But since we specified a custom **App ID URI** (which gets transmitted by the user's browser), the `http://haufe/mvcwithadfs` entry is the only one which really matters.
## Endpoints ##
{:.center}
![]( /images/adfs-identity/pic32.jpg){:style="margin:auto"}
This is the page which lists all browser source endpoints which are to be considered valid by the identity server. Here you see the entry which comes into play while we are debugging locally. Once your application has been uploaded to a server, e.g. Azure, you must add the new endpoint, e.g.:
`https://xxxxxxxxxx.azurewebsites.net/`
(not shown in the screen shot)
## Claim Rules ##
**Issuance Authorization Rules**
{:.center}
![]( /images/adfs-identity/pic33.jpg){:style="margin:auto"}
**Issuance Transform Rules**
This is where we define which identity claims will go out to the requesting application.
Add a rule named e.g. **AD2OutgoingClaims**
{:.center}
![]( /images/adfs-identity/pic34.jpg){:style="margin:auto"}
and edit it like this:
{:.center}
![]( /images/adfs-identity/pic35.jpg){:style="margin:auto"}
The last line is the special one (the others are fairly standard): it causes AD to export every group that the user belongs to as a role, which can then be queried on the application side.
# Run #
At this point, the app can be compiled and will run. You can log in (or you might be automatically logged in if you are running from a browser in your company's domain).
# Check Membership in a certain Group #
Because we have configured the outgoing claims to include a role for every group that the user belongs to, we can now check membership. We may, for example, want to limit a given functionality to members of a certain group.
## Create an Authorizing Controller ##
You may create a controller with the Authorize attribute like this:
~~~csharp
[Authorize]
public class RoleController : Controller
{
}
~~~
The **Authorize** attribute forces the user to be logged in before any requests are routed to this controller. The login dialog will be opened automatically if necessary.
It is also possible to use the **Authorize** attribute not on the entire controller, but just on those methods which need authorization.
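For example, a minimal sketch (with a hypothetical `InfoController`) where only one action requires a login:
~~~csharp
public class InfoController : Controller
{
    // reachable without logging in
    public ActionResult Index()
    {
        return View();
    }

    // the attribute on the method forces a login for this action only
    [Authorize]
    public ActionResult Members()
    {
        return View();
    }
}
~~~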
Once inside a controller (or method) requiring authorization, you have access to the security information of the user. In particular, you can check membership in a given role (group) like this:
~~~csharp
if (User.IsInRole("_Architects"))
{
// do something
}
else
{
// do something else
}
~~~
Within a `cshtml` file, you may also want to react to user membership in a certain role. One way to do this is to bind the cshtml file to a model class which contains the necessary boolean flags. Set those flags in the controller, e.g.:
~~~csharp
model.IsArchitect = User.IsInRole("_Architects");
~~~
Pass the model instance to the view, then evaluate those flags in the cshtml file:
~~~csharp
@if (Model.IsArchitect)
{
<div style="color:#00ff00">
<text><b>Yes, you are in the Architect group.</b></text>
</div>
}
else
{
<div style="color:#ff0000">
<text><b>No, you are not in the Architect group.</b></text>
</div>
}
~~~
Instead of using flags within the data binding model, it may be easier to have the controller just assign a property to the ViewBag and evaluate the ViewBag in the cshtml file.
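A minimal sketch of that variant, reusing the role check from above: the controller writes the flag to the ViewBag, and the view reads it directly.
~~~csharp
// in the controller action, before returning the view:
ViewBag.IsArchitect = User.IsInRole("_Architects");
return View();
~~~
~~~csharp
@if (ViewBag.IsArchitect)
{
    <div style="color:#00ff00">
        <text><b>Yes, you are in the Architect group.</b></text>
    </div>
}
~~~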

View File

@ -1,60 +0,0 @@
---
layout: post
title: IRC and the Age of Chatops
subtitle: How developer culture, devops and ux are influenced by the renaissance of IRC
category: general
tags: [culture, devops, cto]
author: holger_reinhardt
author_email: holger.reinhardt@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
Since April 8th, Haufe Group has had a new group-wide tool to facilitate internal communication between individuals and teams: [Rocket.chat](https://rocket.chat). If you have heard about Slack, then Rocket.chat is just like it.
### What is it
Rocket.chat is a group chat tool you can use to communicate internally in projects, exchange information on different topics in open channels and integrate tooling via bots. If you were around for the beginning of the internet, it's like IRC but with history. If you know Slack… then it's exactly like that.
### Another tool?
… but we already have so many!
We know. But Slack has taken the software industry by storm over the last 3 years. We felt that IRC-style communication fits into a niche where social tools don't. We experimented with Slack, and many of us loved it so much that we used it daily. We got a lot of good feedback from our Slack pilot over the last year, and already more than 100 colleagues registered in the first 24h after our Rocket.chat instance went live.
If you are curious why we felt the need to support this very distinct form of communication, you might find some interesting information and ideas in the following articles:
* [Modelling mediums of communication](http://techcrunch.com/2015/04/07/modeling-mediums-of-communication/)
* [IRC - The secret to success of Open Source](https://developer.ibm.com/opentech/2015/12/20/irc-the-secret-to-success-in-open-source/)
* [Is Slack the new LMS](https://medium.com/synapse/is-slack-the-new-lms-7d1c15ff964f#.m6r5c1b31)
IRC-style communication has been around since the dawn of the Internet and continues to draw a large group of active users. As we strive to create an open and collaborative culture at Haufe, we felt that there was a need to complement the linear social-media style form of communication of something like Yammer with an active IRC-style chat model. As mentioned above, IRC style chat seems to encourage the active exchange of knowledge and helps us in creating a [learning organisation](https://en.wikipedia.org/wiki/Learning_organization).
But there is more. Based on the phenomenal success of Slack in the software industry, companies are starting to experiment with Chatops as a new take on devops:
* [What is Chatops](https://www.pagerduty.com/blog/what-is-chatops/)
* [Chatops Adaption Guide](http://blogs.atlassian.com/2016/01/what-is-chatops-adoption-guide/)
And last but not least, there is even a trend in the UX community to leverage chat (or so called `conversational interfaces`) as a new User Experience paradigm:
* [On Chat as an interface](https://medium.com/@acroll/on-chat-as-interface-92a68d2bf854#.vhtlcvkxj)
* [The next phase of UX is designing chatbots](http://www.fastcodesign.com/3054934/the-next-phase-of-ux-designing-chatbot-personalities)
Needless to say, we felt that there was not just a compelling case for a tool matching the communication needs of our developer community, but even more a chance to experience first-hand, through our daily work, some of the trends shaping our industry.
### So why not Slack
I give Slack full credit for reimagining what IRC can look like in the 21st century. But for our needs as a forum across our developer community it has two major drawbacks. The price tag rises very quickly if we wanted to roll it out across our entire company. Even more importantly, we could not get approval from our legal department due to Germany's strict data privacy rules.
Rocket.chat, on the other hand, is Open Source and we are hosting it in our own infrastructure. We are keeping costs extremely low by having operations completely automated (which has the welcome side effect of giving our ops team a proving ground to support our Technology Strategy around Docker and CI/CD). And we got full approval from our legal department on top.
### How to use it?
We don't have many rules, and we hope we won't need many. The language tends to be English in open channels and in #general (where everyone is by default). We strive to keep in mind that there might be colleagues who don't speak German. Beyond that we ask everyone to be courteous, open, helpful, respectful and welcoming, the same way we would want to be treated.
### Beyond chat
Chat and chat bots are very trendy this year, and there is plenty of experimentation around leveraging them as a new channel for commerce, marketing, products, customers and services. Microsoft, Facebook, Slack: they are all trying it out. We now have the platform to do so as well, if we want to.
But don't take our word for it; check out the following links:
* [2016 will be the year of conversational commerce](https://medium.com/chris-messina/2016-will-be-the-year-of-conversational-commerce-1586e85e3991#.aathpymsh)
* [Conversational User Interfaces](http://www.wired.com/2013/03/conversational-user-interface/)
* [Microsoft to announce Chatbots](http://uk.businessinsider.com/microsoft-to-announce-chatbots-2016-3)
* [Facebook's Future in Chatbots](http://www.platformnation.com/2016/04/15/a-future-of-chatbots/)
Rocket.chat comes with a simple but good API and [a framework for building bots](https://github.com/RocketChat/hubot-rocketchat). We are already looking at integrating with our internal tools like Git, Confluence, Jira, Jenkins and Go.CD.

View File

@ -1,992 +0,0 @@
---
layout: post
title: Secure Internet Access to an On-Premise API
subtitle: Connect an ASP.NET identity to an on-premise API login identity, then relay all requests through the Azure Service Bus
category: howto
tags: [cloud]
author: Robert Fitch
author_email: robert.fitch@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
This article shows you how to use the Microsoft Azure Service Bus to relay requests to an on-premise API through the internet in a secure manner.
# Preparation
You will need a Microsoft Azure account. Create a new "Service Bus Namespace" (in this example it is "HaufeMessageBroker"). Under the "Configure" tab, add a new shared access policy, e.g. "OnPremiseRelay":
{:.center}
![]( /images/secure-internet-access/pic43.jpg){:style="margin:auto"}
Use the namespace, the policy name, and the primary key in the code samples below.
# The On-Premise API
We make some assumptions about the on-premise API. These are not prerequisites in the sense that otherwise no access would be possible, but they should apply to many standard situations. It should also be fairly clear which aspects of the solution would have to be adapted to other situations.
- The on-premise API is an HTTP-REST-API.
- There is a Login-method, taking user name and password as parameters.
- The Login-method returns a Session-Id, which must be included in a header in all subsequent calls to the API to identify the user.
- The password is necessary in the on-premise API to identify the user, but it does not otherwise have an internal meaning.
- Counterexample: If the password is also necessary, for example, as credentials for a database login, then we have a problem.
- Reason: The solution binds an external identity (ASP.NET, Facebook, etc.) with the on-premise User-Id and allows the user to login with that identity, so the on-premise password is superfluous.
- Solution: If the on-premise password is actually necessary (e.g. for the database login), then it would have to be entered as part of or after the external login, which is of course possible but not really what we are hoping for in an SSO solution.
- The same API may be running on on-premise servers in different locations. For example a Lexware-API accessing the Lexware-Pro database would be running on every customer's server.
One easy way to create an on-premise API is using the self-host capabilities of ASP.NET with Owin. There are many how-tos available for doing this. However, this solution does not dictate how the on-premise API is to be implemented, and any one will do.
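As a rough sketch of such a self-hosted API (assuming the `Microsoft.AspNet.WebApi.OwinSelfHost` NuGet package; the port matches the base address used by the relay service later in this article, everything else is illustrative):
~~~csharp
using System;
using System.Web.Http;
using Microsoft.Owin.Hosting;
using Owin;

public class Startup
{
    // OWIN calls this method to configure the HTTP pipeline
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();
        config.MapHttpAttributeRoutes(); // enables [Route]/[RoutePrefix] attributes on the controllers
        app.UseWebApi(config);
    }
}

public class Program
{
    static void Main(string[] args)
    {
        // the same base address that the on-premise relay service forwards to
        using (WebApp.Start<Startup>("http://localhost:9000/"))
        {
            Console.WriteLine("On-premise API listening on http://localhost:9000/ ...");
            Console.ReadLine();
        }
    }
}
~~~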
# Microsoft Azure Service Bus
The Azure Service Bus provides an easy way to access an on-premise WCF (Windows Communication Foundation) interface from any cloud server. Of course, we do not want to rewrite our entire business API to become a WCF interface, so part of the solution is to develop a small and generic WCF interface, which resides in a new on-premise service and simply relays HTTP request/response information to and from the actual on-premise API. This is the "On-Premise Relay Service" below.
We also need two ASP.NET applications running in the cloud:
1. An ASP.NET web site ("Identity Portal") where a user can create a web identity (possibly using another identity like Facebook), then connect that identity to the on-premise login of the API running on his/her company's server. For this one-time action, the user needs to:
- enter a Host Id, which is the identification of the on-premise relay service running at his/her company location. This is necessary to tell the Azure Service Bus which of the many existing on-premise relay services this user wants to connect to.
- enter his on-premise user name and password. These get relayed to the on-premise API to confirm that the user is known there.
- From this time on, the web identity is connected to a specific on-premise relay service and to a specific on-premise identity, allowing SSO-access to the on-premise API.
2. An ASP.NET WebApi ("Cloud Relay Service") allowing generic access via the Service Bus to the on-premise API. This means, for example, that an application which consumes the on-premise API only need change the base address of the API to become functional through the Internet.
- Example: A browser app, which previously ran locally and called the API at, say:
`http://192.168.10.10/contacts/v1/contacts`
can now run anywhere and call:
`https://lexwareprorelay.azurewebsites.net/relay/contacts/v1/contacts`
with the same results.
- The only difference is that the user must first log in using his web credentials instead of his on-premise credentials. The application then gets a token which identifies the user for all subsequent calls. The token contains appropriate information (in the form of custom claims) to relay each call to the appropriate on-premise relay service.
So there are actually two relays at work, neither of which has any business intelligence, but simply route the http requests and responses:
1. The ASP.NET WebApi "Cloud Relay Service", hosted in the cloud, which:
- receives an incoming HTTP request from the client, e.g. browser or smartphone app.
- converts it to a WCF request object, then relays this via the Azure Service Bus to the proper on-premise relay service.
- receives a WCF response object back from the on-premise relay service.
- converts this to a true HTTP response, and sends it back to the caller.
2. The "On-Premise Relay Service", which:
- receives an incoming WCF request object.
- converts it to a true HTTP request, then relays this to the endpoint of the on-premise API.
- receives the HTTP response from the on-premise API.
- converts it to a WCF response object and returns it via the Azure Service Bus to the ASP.NET WebApi "Cloud Relay Service".
In addition, there is the Azure Service Bus itself, through which the "Cloud Relay Service" and the "On-Premise Relay Service" communicate with each other.
# Sequence Diagrams
## On-Premise Solution
Here we see a local client logging in to the on-premise API, thereby receiving a session-id, and using this session-id in a subsequent call to the API to get a list of the user's contacts.
{:.center}
![]( /images/secure-internet-access/pic36.jpg){:style="margin:auto"}
## One-Time Registration
This shows registration with the Identity Portal in two steps:
1. Create a new web identity.
2. Link that web identity to a certain on-premise API and a certain on-premise user id.
*(Please right-click on image, "open in new tab" to see better resolution)*
{:.center}
![]( /images/secure-internet-access/pic37.jpg){:style="margin:auto"}
After this process, the identity database contains additional information linking the web identity to a specific on-premise API (the "OnPremiseHostId") and to a specific on-premise identity (the "OnPremiseUserId"). From now on, whenever a client logs in to the ASP.NET Cloud Relay with his/her web credentials, this information will be added to the bearer token in the form of claims.
## Client now uses the Cloud Relay Service
Now the client activity shown in the first sequence diagram looks like this:
*(Please right-click on image, "open in new tab" to see better resolution)*
{:.center}
![]( /images/secure-internet-access/pic38.jpg){:style="margin:auto"}
What has changed for the client?
- The client first logs in to the ASP.NET Cloud Relay:
`https://lexwareprorelay.azurewebsites.net/api/account/externallogin` using its web identity credentials
- The client then logs in to the on-premise API:
`https://lexwareprorelay.azurewebsites.net/relay/account/v1/external_login` instead of `http://192.168.10.10/account/v1/login`
and does not include any explicit credentials at all, since these are carried by the bearer token.
- The client then makes "normal" API calls, with two differences:
- The base URL is now `https://lexwareprorelay.azurewebsites.net/relay/` instead of http://192.168.10.10/
- The client must include the authorization token (as a header) in all API calls.
What has changed for the on-premise API?
- The API provides a new method `accounts/v1/user_id` (used only once during registration!), which checks the provided credentials and returns the internal user id for that user. This is the value which will later be added as a claim to the bearer token.
- The API provides a new method `accounts/v1/external_login`, which calls back to the ASP.NET WebApi to confirm the user id, then does whatever it used to do in the original `accounts/v1/login` method. In this sample, that means starting a session linked to this user id and returning the new session-id to the caller.
- The other API methods do not change at all, though it should be noted that an authorization header is now always included, so that if, for example, the session-id should be deemed not secure enough, the on-premise API could always check the bearer token within every method.
# Code
The following sections show the actual code necessary to implement the above processes. Skip all of this if it's not interesting for you, but it is documented here to make the job easier for anyone actually wanting to implement such a relay.
## New Methods in the On-Premise API
Here are the new methods in the accounts controller of the on-premise API which are necessary to work with the external relay.
~~~csharp
#region New Methods for External Access
// base url to the ASP.NET WebApi "Cloud Relay Service"
// here local while developing
// later hosted at e.g. https://lexwareprorelay.azurewebsites.net/
static string secureRelayWebApiBaseAddress = "https://localhost:44321/";
/// <summary>
/// confirm that the bearer token comes from the "Cloud Relay Service"
/// </summary>
/// <param name="controller"></param>
/// <returns></returns>
/// <remarks>
/// Call this from any API method to get the on-premise user id
/// </remarks>
internal static UserInfo CheckBearer(ApiController controller)
{
// get the Authorization header
var authorization = controller.Request.Headers.Authorization;
Debug.Assert(authorization.Scheme == "Bearer");
string userId = null;
try
{
HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(secureRelayWebApiBaseAddress + "api/account/OnPremiseUserId");
webRequest.Headers.Add("Authorization", authorization.Scheme + " " + authorization.Parameter);
using (var hostResponse = (HttpWebResponse)webRequest.GetResponse())
{
string content = null;
using (StreamReader reader = new StreamReader(hostResponse.GetResponseStream()))
{
content = reader.ReadToEnd();
}
userId = content;
userId = JsonConvert.DeserializeObject<string>(userId);
}
}
catch (Exception)
{
throw new UnauthorizedAccessException();
}
var userInfo = Users.UserInfos.Values.FirstOrDefault(u => u.UserId.Equals(userId));
if (userInfo == null)
{
throw new UnauthorizedAccessException();
}
return userInfo;
}
/// <summary>
/// GetUserId
/// </summary>
/// <param name="credentials"></param>
/// <returns></returns>
/// <remarks>
/// This method returns the internal user id for the given credentials.
/// The method is called during the registration process so that
/// the user id can be added to the claims of any future bearer tokens.
/// </remarks>
[HttpPost]
[Route("userid")]
[ResponseType(typeof(string))]
public IHttpActionResult GetUserId([FromBody] LoginCredentials credentials)
{
var userInfo = Users.UserInfos.Values.SingleOrDefault(u => u.UserName.Equals(credentials.UserName) && u.Password.Equals(credentials.Password));
if (userInfo != null)
{
return Ok(userInfo.UserId);
}
else
{
return Unauthorized();
}
}
/// <summary>
/// ExternalLogin
/// </summary>
/// <returns></returns>
/// <remarks>
/// This is called by the client via the relays and replaces the "normal" login.
/// </remarks>
[HttpGet]
[Route("external_login")]
[ResponseType(typeof(string))]
public IHttpActionResult ExternalLogin()
{
try
{
// get the user info from the bearer token
// This also confirms for us that the bearer token comes from
// "our" Cloud Relay Service
var userInfo = CheckBearer(this);
// create session id, just like the "normal" login
string sessionId = Guid.NewGuid().ToString();
SessionInfos.Add(sessionId, userInfo);
return Ok(sessionId);
}
catch (Exception)
{
return Unauthorized();
}
}
#endregion
~~~
## The On-Premise Relay Service
In `IRelay.cs`, define the WCF service (consisting of a single method "Request"). Also, define the WCF Request and Response classes.
~~~csharp
/// <summary>
/// IRelay
/// </summary>
[ServiceContract]
public interface IRelay
{
/// <summary>
/// A single method to relay a request and return a response
/// </summary>
/// <param name="requestDetails"></param>
/// <returns></returns>
[OperationContract]
ResponseDetails Request(RequestDetails requestDetails);
}
/// <summary>
/// The WCF class to hold all information for an HTTP request
/// </summary>
public class RequestDetails
{
public Verb Verb { get; set; }
public string Url { get; set; }
public List<Header> Headers = new List<Header>();
public byte[] Content { get; set; }
public string ContentType { get; set; }
}
/// <summary>
/// The WCF class to hold all information for an HTTP response
/// </summary>
public class ResponseDetails
{
public HttpStatusCode StatusCode { get; set; }
public string Status { get; set; }
public string Content { get; set; }
public string ContentType { get; set; }
}
/// <summary>
/// an HTTP header
/// </summary>
public class Header
{
public string Key { get; set; }
public string Value { get; set; }
}
/// <summary>
/// the HTTP methods
/// </summary>
public enum Verb
{
GET,
POST,
PUT,
DELETE
}
~~~
And the implementation in `Relay.cs`
~~~csharp
public class Relay : IRelay
{
// the local base url of the on-premise API
string baseAddress = "http://localhost:9000/";
/// <summary>
/// Copy all headers from the incoming HttpRequest to the WCF request object
/// </summary>
/// <param name="requestDetails"></param>
/// <param name="webRequest"></param>
private void CopyIncomingHeaders(RequestDetails requestDetails, HttpWebRequest webRequest)
{
foreach (var header in requestDetails.Headers)
{
string key = header.Key;
if ((key == "Connection") || (key == "Host"))
{
// do not copy
}
else if (key == "Accept")
{
webRequest.Accept = header.Value;
}
else if (key == "Referer")
{
webRequest.Referer = header.Value;
}
else if (key == "User-Agent")
{
webRequest.UserAgent = header.Value;
}
else if (key == "Content-Type")
{
webRequest.ContentType = header.Value;
}
else if (key == "Content-Length")
{
webRequest.ContentLength = Int32.Parse(header.Value);
}
else
{
webRequest.Headers.Add(key, header.Value);
}
}
}
/// <summary>
/// Relay a WCF request object and return a WCF response object
/// </summary>
/// <param name="requestDetails"></param>
/// <returns></returns>
public ResponseDetails Request(RequestDetails requestDetails)
{
HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(baseAddress + requestDetails.Url);
CopyIncomingHeaders(requestDetails, webRequest);
switch (requestDetails.Verb)
{
case Verb.GET:
webRequest.Method = "GET";
break;
case Verb.POST:
webRequest.Method = "POST";
break;
case Verb.PUT:
webRequest.Method = "PUT";
break;
case Verb.DELETE:
webRequest.Method = "DELETE";
break;
default:
webRequest.Method = "GET";
break;
}
var responseDetails = new ResponseDetails();
if ((requestDetails.Verb == Verb.POST) || (requestDetails.Verb == Verb.PUT))
{
// serialize the body object for POST and PUT
byte[] bytes = requestDetails.Content;
webRequest.ContentType = requestDetails.ContentType;
webRequest.ContentLength = bytes.Length;
// relay the body object to the request stream
try
{
using (Stream requestStream = webRequest.GetRequestStream())
{
requestStream.Write(bytes, 0, bytes.Length);
requestStream.Flush();
requestStream.Close();
}
}
catch (WebException ex)
{
responseDetails.StatusCode = HttpStatusCode.ServiceUnavailable;
responseDetails.Status = ex.Message;
return responseDetails;
}
}
// send request and get response
try
{
using (HttpWebResponse hostResponse = (HttpWebResponse)webRequest.GetResponse())
{
string content = null;
string contentType = null;
using (StreamReader reader = new StreamReader(hostResponse.GetResponseStream()))
{
content = reader.ReadToEnd();
}
contentType = hostResponse.ContentType.Split(new char[] { ';' })[0];
// build the response object
responseDetails.StatusCode = hostResponse.StatusCode;
responseDetails.ContentType = contentType;
responseDetails.Content = content;
}
}
catch (WebException ex)
{
if (ex.Response == null)
{
responseDetails.StatusCode = HttpStatusCode.ServiceUnavailable;
}
else
{
responseDetails.StatusCode = ((HttpWebResponse)ex.Response).StatusCode;
}
responseDetails.Status = ex.Message;
}
return responseDetails;
}
}
~~~
And finally, the code that runs when the service starts and connects to the Azure Service Bus under a unique path.
This code could be in `program.cs` of a console application (as shown) or in the start method of a true service:
~~~csharp
static void Main(string[] args)
{
// instantiate the Relay class
using (var host = new ServiceHost(typeof(Relay)))
{
// the unique id for this location, hard-coded for this sample
// (could be e.g. a database id, or a customer contract id)
string hostId = "bf1e3a54-91bb-496b-bda6-fdfd5faf4480";
// tell the Azure Service Bus that our IRelay service is available
// via a path consisting of the host id plus "\relay"
host.AddServiceEndpoint(
typeof(IRelay),
new NetTcpRelayBinding(),
ServiceBusEnvironment.CreateServiceUri("sb", "haufemessagebroker", hostId + "/relay"))
.Behaviors.Add(
new TransportClientEndpointBehavior(
TokenProvider.CreateSharedAccessSignatureTokenProvider("OnPremiseRelay", "7Mw+Njy52M95axVlCzHdk4QxxxYUPxPORCKRbGk9bdM=")));
host.Open();
Console.WriteLine("On-Premise Relay Service running...");
Console.ReadLine();
}
}
~~~
Notes:
- The hostId must be unique for each on-premise location.
- The service bus credentials (here, the namespace "haufemessagebroker" and the policy name "OnPremiseRelay") must all be prepared via the Azure Portal by adding a new service bus namespace, as described in the introduction. In a live environment, you might want some kind of Service Bus Management API, so that each on-premise relay service could get valid credentials after, say, its company signed up for the relay service, and not have them hard-coded (see the configuration sketch below).
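Until such a management API exists, a small improvement over hard-coding is to read the Service Bus settings from the service's configuration file. A minimal sketch (the appSetting key names are made up for this example):
~~~csharp
using System;
using System.Configuration;
using Microsoft.ServiceBus;

// Minimal sketch: read the Service Bus settings from app.config instead of
// hard-coding them. The appSetting key names ("sb:Namespace" etc.) are made up.
static class RelayConfig
{
    public static Uri ServiceUri()
    {
        string sbNamespace = ConfigurationManager.AppSettings["sb:Namespace"]; // e.g. "haufemessagebroker"
        string hostId = ConfigurationManager.AppSettings["sb:HostId"];         // unique per on-premise location
        return ServiceBusEnvironment.CreateServiceUri("sb", sbNamespace, hostId + "/relay");
    }

    public static TokenProvider Credentials()
    {
        string policyName = ConfigurationManager.AppSettings["sb:PolicyName"]; // e.g. "OnPremiseRelay"
        string policyKey = ConfigurationManager.AppSettings["sb:PolicyKey"];   // the shared access key
        return TokenProvider.CreateSharedAccessSignatureTokenProvider(policyName, policyKey);
    }
}
~~~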
Once the on-premise relay service is running, you will see it listed with its host id in the Azure Management Portal under the "Relays" tab:
{:.center}
![]( /images/secure-internet-access/pic44.jpg){:style="margin:auto"}
## ASP.NET Identity Portal
Create a new ASP.NET Project (named e.g. "IdentityPortal") and select "MVC". Before compiling and running the first time, change the class ApplicationUser (in `IdentityModels.cs`) as follows:
~~~csharp
public class ApplicationUser : IdentityUser
{
public string OnPremiseHostId { get; set; }
public string OnPremiseUserId { get; set; }
public async Task<ClaimsIdentity> GenerateUserIdentityAsync(UserManager<ApplicationUser> manager)
{
// Note the authenticationType must match the one defined in CookieAuthenticationOptions.AuthenticationType
var userIdentity = await manager.CreateIdentityAsync(this, DefaultAuthenticationTypes.ApplicationCookie);
// Add custom user claims here
userIdentity.AddClaim(new Claim("OnPremiseHostId", OnPremiseHostId ?? String.Empty));
userIdentity.AddClaim(new Claim("OnPremiseUserId", OnPremiseUserId ?? String.Empty));
return userIdentity;
}
}
~~~
This adds two fields to the user identity, which we will need later to link each user to a specific on-premise API and specific on-premise user id. And, importantly, it adds the content of the two new fields as custom claims to the ApplicationUser instance.
By adding this code **before** running for the first time, the fields will automatically be added to the database table. Otherwise, we would need to add them as a code-first migration step. So this just saves a bit of trouble.
Now compile and run, and you should immediately be able to register a new web identity and log in with that identity.
*Prepare to register with the on-premise API*
Use `NuGet` to add "WindowsAzure.ServiceBus" to the project.
Also, add a reference to the OnPremiseRelay DLL, so that the IRelay WCF Interface, as well as the Request and Response classes, are known.
In `AccountViewModels.cs`, add these classes:
~~~csharp
public class RegisterWithOnPremiseHostViewModel
{
[Required]
[Display(Name = "On-Premise Host Id")]
public string HostId { get; set; }
[Required]
[Display(Name = "On-Premise User Name")]
public string UserName { get; set; }
[Required]
[DataType(DataType.Password)]
[Display(Name = "On-Premise Password")]
public string Password { get; set; }
}
public class LoginCredentials
{
[JsonProperty(PropertyName = "user_name")]
public string UserName { get; set; }
[JsonProperty(PropertyName = "password")]
public string Password { get; set; }
}
~~~
In `_Layout.cshtml`, add this line to the navbar:
~~~html
<li>@Html.ActionLink("Register With Host", "RegisterWithOnPremiseHost", "Account")</li>
~~~
Add the following methods to the AccountController class:
~~~csharp
// this must point to the Cloud Relay WebApi
static string cloudRelayWebApiBaseAddress = "https://localhost:44321/";
//
// GET: /Account/RegisterWithOnPremiseHost
public ActionResult RegisterWithOnPremiseHost()
{
ViewBag.ReturnUrl = String.Empty;
return View();
}
//
// POST: /Account/RegisterWithOnPremiseHost
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<ActionResult> RegisterWithOnPremiseHost(RegisterWithOnPremiseHostViewModel model, string returnUrl)
{
if (!ModelState.IsValid)
{
return View(model);
}
string userId = null;
try
{
// open the Azure Service Bus
using (var cf = new ChannelFactory<IRelay>(
new NetTcpRelayBinding(),
new EndpointAddress(ServiceBusEnvironment.CreateServiceUri("sb", "haufemessagebroker", model.HostId + "/relay"))))
{
cf.Endpoint.Behaviors.Add(new TransportClientEndpointBehavior
{
TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider("OnPremiseRelay", "7Mw+Njy52M95axVlCzHdxxxxxhYUPxPORCKRbGk9bdM=")
});
IRelay relay = null;
try
{
// get the IRelay Interface of the on-premise relay service
relay = cf.CreateChannel();
var credentials = new LoginCredentials
{
UserName = model.UserName,
Password = model.Password
};
var requestDetails = new RequestDetails
{
Verb = Verb.POST,
Url = "accounts/v1/userid",
Content = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(credentials)),
ContentType = "application/json"
};
// call the on-premise relay service
var response = await Task.Run(() =>
{
try
{
return relay.Request(requestDetails);
}
catch (EndpointNotFoundException)
{
return null;
}
});
if ((response == null) || (response.StatusCode == HttpStatusCode.ServiceUnavailable))
{
ModelState.AddModelError("", "Login is currently not possible because the local service cannot be reached.");
return View(model);
}
else if (response.StatusCode == HttpStatusCode.Unauthorized)
{
ModelState.AddModelError("", "Login failed.");
return View(model);
}
else if (response.StatusCode != HttpStatusCode.OK)
{
ModelState.AddModelError("", "Login is currently not possible.\nDetails: " + response.Status);
return View(model);
}
// everything ok
userId = response.Content;
userId = JsonConvert.DeserializeObject<string>(userId);
}
catch (Exception)
{
ModelState.AddModelError("", "Login is currently not possible because the local service cannot be reached.");
return View(model);
}
}
}
catch (CommunicationException)
{
return View(model);
}
ApplicationUser user = await UserManager.FindByIdAsync(User.Identity.GetUserId());
user.OnPremiseUserId = userId;
user.OnPremiseHostId = model.HostId;
UserManager.Update(user);
return RedirectToAction("RegisterWithOnPremiseHostSuccess");
}
// GET: Account/RegisterWithOnPremiseHostSuccess
public ActionResult RegisterWithOnPremiseHostSuccess()
{
ViewBag.ReturnUrl = String.Empty;
return View();
}
~~~
Note:
- The note about the service bus credentials (in the on-premise relay service) applies here, too, of course.
To Views\Account, add `RegisterWithOnPremiseHost.cshtml`:
~~~html
@model IdentityPortal.Models.RegisterWithOnPremiseHostViewModel
@{
ViewBag.Title = "Register With On-Premise Host";
}
<h2>Register With On-Premise Host</h2>
@using (Html.BeginForm())
{
@Html.AntiForgeryToken()
<div class="form-horizontal">
<hr />
@Html.ValidationSummary(true, "", new { @class = "text-danger" })
<div class="form-group">
@Html.LabelFor(model => model.HostId, htmlAttributes: new { @class = "control-label col-md-2" })
<div class="col-md-10">
@Html.EditorFor(model => model.HostId, new { htmlAttributes = new { @class = "form-control" } })
@Html.ValidationMessageFor(model => model.HostId, "", new { @class = "text-danger" })
</div>
</div>
<div class="form-group">
@Html.LabelFor(model => model.UserName, htmlAttributes: new { @class = "control-label col-md-2" })
<div class="col-md-10">
@Html.EditorFor(model => model.UserName, new { htmlAttributes = new { @class = "form-control" } })
@Html.ValidationMessageFor(model => model.UserName, "", new { @class = "text-danger" })
</div>
</div>
<div class="form-group">
@Html.LabelFor(model => model.Password, htmlAttributes: new { @class = "control-label col-md-2" })
<div class="col-md-10">
@Html.EditorFor(model => model.Password, new { htmlAttributes = new { @class = "form-control" } })
@Html.ValidationMessageFor(model => model.Password, "", new { @class = "text-danger" })
</div>
</div>
<div class="form-group">
<div class="col-md-offset-2 col-md-10">
<input type="submit" value="Register" class="btn btn-default" />
</div>
</div>
</div>
}
@section Scripts {
@Scripts.Render("~/bundles/jqueryval")
}
~~~
Also to Views\Account, add `RegisterWithOnPremiseHostSuccess.cshtml`:
~~~html
@{
ViewBag.Title = "Success";
}
<h2>@ViewBag.Title</h2>
<div class="row">
<div class="col-md-8">
<section id="loginForm">
@using (Html.BeginForm("HaufeLogin", "Account", new { ReturnUrl = ViewBag.ReturnUrl }, FormMethod.Post, new { @class = "form-horizontal", role = "form" }))
{
@Html.AntiForgeryToken()
<hr />
<h4>Your on-premise login credentials have been confirmed.</h4>
}
</section>
</div>
</div>
@section Scripts {
@Scripts.Render("~/bundles/jqueryval")
}
~~~
Now you can log in to the Identity Portal and select "Register With Host".
Assuming:
- the on-premise relay service has a host id = bf1e3a54-91bb-496b-bda6-fdfd5faf4480
- the on-premise API has a user with user name = "Ackermann"
Then fill in the form appropriately:
{:.center}
![]( /images/secure-internet-access/pic39a.jpg){:style="margin:auto"}
Once this registration is successful, any client can now communicate with the on-premise API using the Cloud Relay Service, defined below.
## Cloud Relay Service
Create a new ASP.NET Project (named e.g. "CloudRelayService") and select "Web Api".
- Before compiling and running the first time, make the same changes to the ApplicationUser class as mentioned above for the Identity Portal.
- Also, edit web.config and change the connection string for "DefaultConnection" to work with the same database as the Identity Portal by copying the connection string from that project.
- Important: if the connection string contains a `|DataDirectory|` reference in the file path, you will have to replace this with the true physical path to the other project, otherwise the two projects will not point to the same database file.
Add the following method to the AccountController (for this, you must include the System.Linq namespace):
~~~csharp
// GET api/Account/OnPremiseUserId
[HostAuthentication(DefaultAuthenticationTypes.ExternalBearer)]
[Route("OnPremiseUserId")]
public IHttpActionResult GetOnPremiseUserId()
{
// get the on-premise user id
var identity = (ClaimsIdentity)User.Identity;
var onPremiseUserIdClaim = identity.Claims.SingleOrDefault(c => c.Type == "OnPremiseUserId");
if (onPremiseUserIdClaim == null)
{
return Unauthorized();
}
return Ok(onPremiseUserIdClaim.Value);
}
~~~
Use `NuGet` to add "WindowsAzure.ServiceBus" to the project.
Also, add a reference to the OnPremiseRelay DLL, so that the IRelay WCF Interface, as well as the Request and Response classes, are known.
Then add a new controller `RelayController` with this code:
~~~csharp
[Authorize]
[RoutePrefix("relay")]
public class RelayController : ApiController
{
private void CopyIncomingHeaders(RequestDetails request)
{
var headers = HttpContext.Current.Request.Headers;
// copy all incoming headers
foreach (string key in headers.Keys)
{
request.Headers.Add(new Header
{
Key = key,
Value = headers[key]
});
}
}
[HttpGet]
[Route("{*url}")]
public async Task<IHttpActionResult> Get(string url)
{
return await Relay(url, Verb.GET);
}
[HttpPost]
[Route("{*url}")]
public async Task<IHttpActionResult> Post(string url)
{
return await Relay(url, Verb.POST);
}
[HttpPut]
[Route("{*url}")]
public async Task<IHttpActionResult> Put(string url)
{
return await Relay(url, Verb.PUT);
}
[HttpDelete]
[Route("{*url}")]
public async Task<IHttpActionResult> Delete(string url)
{
return await Relay(url, Verb.DELETE);
}
private async Task<IHttpActionResult> Relay(string url, Verb verb)
{
byte[] content = null;
if ((verb == Verb.POST) || (verb == Verb.PUT))
{
// for POST and PUT, we need the body content
content = await Request.Content.ReadAsByteArrayAsync();
}
// get the host id from the token claims
var identity = (ClaimsIdentity)User.Identity;
var onPremiseHostIdClaim = identity.Claims.SingleOrDefault(c => c.Type == "OnPremiseHostId");
if (onPremiseHostIdClaim == null)
{
return Unauthorized();
}
try
{
// open the Azure Service Bus
using (var cf = new ChannelFactory<IRelay>(
new NetTcpRelayBinding(),
new EndpointAddress(ServiceBusEnvironment.CreateServiceUri("sb", "haufemessagebroker", onPremiseHostIdClaim.Value + "/relay"))))
{
cf.Endpoint.Behaviors.Add(new TransportClientEndpointBehavior
{
TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider("OnPremiseRelay", "7Mw+Njy52M95axVlCzHdxxxxxhYUPxPORCKRbGk9bdM=")
});
// get the IRelay Interface of the on-premise relay service
IRelay relay = cf.CreateChannel();
var requestDetails = new RequestDetails
{
Verb = verb,
Url = url
};
// copy the incoming headers
CopyIncomingHeaders(requestDetails);
if ((verb == Verb.POST) || (verb == Verb.PUT))
{
requestDetails.Content = content;
var contentTypeHeader = requestDetails.Headers.FirstOrDefault(h => h.Key == "Content-Type");
if (contentTypeHeader != null)
{
requestDetails.ContentType = contentTypeHeader.Value;
}
}
// call the on-premise relay service
var response = await Task.Run(() =>
{
try
{
return relay.Request(requestDetails);
}
catch (EndpointNotFoundException)
{
// set response to null
// this will be checked after the await, see below
// and result in ServiceUnavailable
return null;
}
});
if (response == null)
{
return Content(HttpStatusCode.ServiceUnavailable, String.Empty);
}
// normal return
return Content(response.StatusCode, response.Content);
}
}
catch (CommunicationException)
{
return Content(HttpStatusCode.ServiceUnavailable, String.Empty);
}
}
}
~~~
Note:
- The note about the service bus credentials (in the on-premise relay service) applies here, too, of course.
The Cloud Relay WebApi should now be ready to return an authorization token for the web identity, and also to relay HTTP requests via WCF and the Azure Service Bus to the on-premise relay service.
Note that all relay methods are protected by the class's Authorize attribute.
*Examples using Chrome Postman:*
Get a token using a web identity (Note the path `/Token`, the content-type, and the content):
{:.center}
![]( /images/secure-internet-access/pic40.jpg){:style="margin:auto"}
Using the token, with prefix "Bearer", log in to the on-premise API and receive a session-id:
{:.center}
![]( /images/secure-internet-access/pic41.jpg){:style="margin:auto"}
Now use the session-id to make normal calls to the API:
{:.center}
![]( /images/secure-internet-access/pic42.jpg){:style="margin:auto"}
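Purely as an illustration of the same flow outside Postman, here is a minimal node.js sketch using the `request` library; the base URL, the credentials and the relayed path are placeholders you would replace with your own values, and the on-premise API's own login/session handling is left out:
~~~javascript
var request = require('request'); // https://github.com/request/request

var baseUrl = 'https://localhost:44300';   // placeholder: your CloudRelayService URL
var credentials = {
    grant_type: 'password',
    username: 'someuser@example.com',      // placeholder web identity
    password: 'secret'
};

// 1. Get a bearer token from the /Token endpoint (form-encoded body).
request.post({ url: baseUrl + '/Token', form: credentials }, function (err, res, body) {
    if (err) return console.error(err);
    var token = JSON.parse(body).access_token;

    // 2. Call the on-premise API through the relay, passing the token with the "Bearer" prefix.
    request.get({
        url: baseUrl + '/relay/some/on-premise/path',   // placeholder on-premise route
        headers: { Authorization: 'Bearer ' + token }
    }, function (err, res, body) {
        if (err) return console.error(err);
        console.log(res.statusCode, body);
    });
});
~~~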
View File
@ -1,91 +0,0 @@
---
layout: post
title: Software Architecture Day Timisoara on May 18th, 2016
subtitle: Architecture Strategies for Modern Web Applications
category: conference
tags: [api, microservice]
author: doru_mihai
author_email: doru.mihai@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
This year, a couple of my colleagues from Timisoara and I again attended the Software Architecture Day conference, a yearly event that in recent years has brought in big-name speakers.
Last year, [Neal Ford](http://nealford.com/abstracts.html) was the speaker and he introduced us to concepts relating to continuous delivery and microservices, some of which we have already applied within our company.
This year, it was [Stefan Tilkov's](https://www.innoq.com/blog/st/) turn to grace us with his presence.
{:.center}
![Software Architecture Day 2016]({{ site.url }}/images/software-arch-day/doru_badge.jpg){:style="margin:auto"}
The title of this year's talk was pretty ambiguous, *Architecture strategies for modern web applications*. Still, the organizers sent us a list of topics that would be discussed within this whole-day event and they are as follows: modularization, REST & Web APIs, single page apps vs. ROCA, pros and cons of different persistence options, scaling in various dimensions.
## Start
{:.center}
![Software Architecture Day 2016]({{ site.url }}/images/software-arch-day/stefan_tilkov.jpg){:style="margin:auto"}
The presentation kicked off with a rant about how different enterprises have struggled over the years to provide frameworks and tools that would abstract away the complexities of the web.
He used the Java EE stack as an example, enumerating the different layers one would have in an enterprise application, with the example use case of receiving a JSON payload and sending another one out. Of course, the point of all of this was to show what a ridiculous amount of effort has been put into abstracting away the web.
It is at this point that he expressed his hatred for Java and .Net because of all the problems that were created by trying to simplify things.
## Backend
After the initial rant, the purpose of which was to convince us that it is better to work with a technology that sticks closer to what is really there all along (a request, a header, cookie, session etc.), he continued with a talk about the different choices one may have when dealing with the backend. Below are my notes:
- Process vs Thread model for scaling
- .Net I/O Completion Ports
- Request/Response vs Component based frameworks
- Async I/O
- Twisted (Python)
- Event Machine (Ruby)
- Netty
- NodeJS
- [Consistent hashing](http://michaelnielsen.org/blog/consistent-hashing/) - for cache server scaling
- Eventual consistency
- The CAP theorem
- Known issues with prolific tools. Referenced [Aphyr](https://aphyr.com/posts/317-jepsen-elasticsearch) as a source of examples of failures of such systems.
- NoSQL scaling
- N/R/W mechanisms
- BASE vs ACID dbs
## REST
This was the same presentation that I had seen on [infoq](https://www.infoq.com/presentations/rest-misconceptions) some time ago.
He basically rants about how many people think or say they are doing REST when actually they are not, or how many people spend a lot of time discussing how the URL should be formed when that actually has nothing to do with REST.
One thing in particular was interesting for me: when he was asked about REST API documentation tools, he didn't have a preference for any one of them, but he did mention explicitly that he is against Swagger, for the sole reason that Swagger doesn't allow hypermedia in your API definition.
After the talk I asked him about validation, since he mentioned Postel's Law. In the days of WS-* we would use XML as the format and do XSD validation (he commented that XSD validation is costly and that in large-scale projects he would skip it), but now we mainly use JSON as the format, and [JSON Schema](http://json-schema.org/documentation.html) is still in a draft stage. Sadly, he didn't have a solution for me :)
## Frontend
Towards the end of the day he talked to us about what topics you should be concerned with when thinking about the frontend.
Noteworthy among them was the part about CSS architecture and how it is becoming more and more important, to the extent that his company has a dedicated CSS architect. He also raised the awareness that when adopting a framework for the frontend, you must be aware that you are inheriting the decisions taken within that framework: that framework's architecture becomes your architecture.
For CSS he mentioned the following CSS methodologies:
- BEM
- OOCSS
- SMACSS
- Atomic-CSS
- Solid CSS
After presenting solutions for the different aspects one may need to consider for the frontend, he went on to discuss Single Page Applications and the drawbacks of that approach, and presented [Resource Oriented Client Architecture](http://roca-style.org/).
## Modularization
The last part of the day was dedicated to modularization, and here he proposed an approach that is close to microservices, can be used in tandem with them, but is slightly different.
He called them [Self Contained Systems](http://scs-architecture.org/vs-ms.html) and you can read all about them following the link (it will explain things better than I can :) ).
## Conclusion
It was a lot of content to take in, and because he presented content from several whole-day workshops he has in his portfolio, none of the topics were covered in much depth. If you want to get an idea of what was presented, feel free to watch the presentations below.
- [Web development Techniques](https://www.infoq.com/presentations/web-development-techniques)
- [Rest Misconceptions](https://www.infoq.com/presentations/rest-misconceptions)
- [Breaking the Monolith](https://www.infoq.com/presentations/Breaking-the-Monolith)
- [NodeJS Async I/O](https://www.infoq.com/presentations/Nodejs-Asynchronous-IO-for-Fun-and-Profit)
View File
@ -1,140 +0,0 @@
---
layout: post
title: Rocket.Chat Integrations
subtitle: How to integrate social media and other information streams into your Rocket.Chat instance via Webhooks
category: howto
tags: [automation, devops]
author: doru_mihai
author_email: doru.mihai@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
As you probably [already know](http://dev.haufe.com/irc-chatops/), within Haufe we have decided to switch to Rocket.chat from Slack.
One of the things we had grown accustomed to with Slack was its various built-in integrations. Rocket.Chat still lacks in that department, but it does have support for generic [Webhook Integrations](https://rocket.chat/docs/administrator-guides/integrations/), and there are guides and how-tos for the usual integrations you might need or want.
## Azure App Insight Alerts
One of the cool things you can do in Azure is to have alerts set on different kinds of conditions.
You can test whether your website or web application is accessible, whether it returns what you expect, or whether the response time or load is within certain bounds. You can read more about them [here](https://azure.microsoft.com/en-us/documentation/articles/app-insights-alerts/).
I submitted a pull request with this guide to the official Rocket.Chat documentation, and as a result you can now also find it in the [Rocket.Chat.Docs](https://rocket.chat/docs/administrator-guides/integrations/azurealerts-md/).
In order to do the necessary configuration in Rocket.Chat you need administrative rights.
Just follow these steps:
1. In Rocket.Chat go to "Administration"->"Integrations" and create "New Integration"
2. Choose Incoming WebHook.
3. Follow all instructions like Enable, give it a name, link to channel etc.
4. **Most important step**: Set "Enable Script" to true and enter the JavaScript snippet below in the "Script" box.
5. Press "Save changes" and copy the *Webhook URL* (added just below the script box).
6. Go to the Azure portal and, on the specific resource you want to enable alerts for, follow the steps for enabling alerts, setting the previously copied URL as the webhook URL for the Azure alert. You can follow the steps shown here: https://azure.microsoft.com/en-us/documentation/articles/insights-webhooks-alerts/
Paste the following JavaScript into the "Script" textarea in the Rocket.Chat webhook settings:
```javascript
class Script {
process_incoming_request({ request }) {
// console is a global helper to improve debug
console.log(request.content);
// Bail out early if the payload does not look like an Azure alert.
if (!request.content || !request.content.context) {
return {
error: {
success: false,
message: 'Error'
}
};
}
var alertColor = "warning";
if(request.content.status === "Resolved"){ alertColor = "good"; }
else if (request.content.status === "Activated") { alertColor = "danger"; }
var condition = request.content.context.condition;
return {
content:{
username: "Azure",
text: "Azure Alert Notification",
attachments: [{
title: request.content.context.name,
pretext: request.content.context.description,
title_link: request.content.context.portalLink,
text: condition.failureDetails,
color: alertColor,
fields: [
{
title: "Status",
value: request.content.status + " @ " + request.content.context.timestamp
},
{
title: "Condition",
value: condition.metricName + ": " + condition.metricValue + " " + condition.metricUnit + " for more than " + condition.windowSize + " min."
},
{
title: "Threshold",
value: condition.operator + " " + condition.threshold
}
]
}]
}
};
}
}
```
This example shows basic processing of Azure alerts: it gives you the necessary information about what happened and what the current status is, along with a status color so you can get an idea of the message at a quick glance.
The schema of the incoming message, as documented in the official [Azure Alert Webhook Docs](https://azure.microsoft.com/en-us/documentation/articles/insights-webhooks-alerts/), is:
```json
{
"status": "Activated",
"context": {
"timestamp": "2015-08-14T22:26:41.9975398Z",
"id": "/subscriptions/s1/resourceGroups/useast/providers/microsoft.insights/alertrules/ruleName1",
"name": "ruleName1",
"description": "some description",
"conditionType": "Metric",
"condition": {
"metricName": "Requests",
"metricUnit": "Count",
"metricValue": "10",
"threshold": "10",
"windowSize": "15",
"timeAggregation": "Average",
"operator": "GreaterThanOrEqual"
},
"subscriptionId": "s1",
"resourceGroupName": "useast",
"resourceName": "mysite1",
"resourceType": "microsoft.foo/sites",
"resourceId": "/subscriptions/s1/resourceGroups/useast/providers/microsoft.foo/sites/mysite1",
"resourceRegion": "centralus",
"portalLink": "https://portal.azure.com/#resource/subscriptions/s1/resourceGroups/useast/providers/microsoft.foo/sites/mysite1"
},
"properties": {
"key1": "value1",
"key2": "value2"
}
}
```
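If you want to sanity-check the transformation outside Rocket.Chat, you can feed a payload like the one above into the script with plain Node; a minimal sketch, assuming the `Script` class from the snippet above is pasted right before it (inside Rocket.Chat the integration sandbox instantiates the class for you):
```javascript
// Assumes the Script class defined above is available in this file.
var samplePayload = {
    status: 'Activated',
    context: {
        timestamp: '2015-08-14T22:26:41.9975398Z',
        name: 'ruleName1',
        description: 'some description',
        portalLink: 'https://portal.azure.com/#resource/...', // shortened placeholder
        condition: {
            metricName: 'Requests',
            metricUnit: 'Count',
            metricValue: '10',
            threshold: '10',
            windowSize: '15',
            operator: 'GreaterThanOrEqual'
        }
    }
};

// This is roughly what Rocket.Chat would post into the channel for that payload.
var message = new Script().process_incoming_request({ request: { content: samplePayload } });
console.log(JSON.stringify(message, null, 2));
```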
The result is that you will see messages in your specified channel or direct message looking something like this:
{:.center}
![Rocket.Chat Azure Alert]({{ site.url }}/images/rocket-chat-integrations/rocket_azure_alerts.JPG){:style="margin:auto"}
You see "Azure" and a cloud icon there because, when configuring the Rocket.Chat incoming webhook, I specified "Azure" as the alias and set :cloud: as the emoji. Both of these settings are optional.
### Caution
Be aware that I have sometimes noticed a latency between the e-mail notifications and the webhook notifications, in the sense that I would receive an e-mail sooner than the webhook call from Azure.
So be wary of relying on this mechanism to trigger other automations meant to fix potential issues: if the issue is transient, you might find that by the time you receive the webhook call telling you your application is no longer responding, you have already received an e-mail telling you it has recovered and is doing well.
Enjoy :)
View File
@ -1,29 +0,0 @@
---
layout: post
title: DevOps Day and Meetup @Haufe on June 1st, 2016
subtitle: A full day of talks on continuous delivery, test automation, Docker, cloud and much more
category: conference
tags: [devops, docker, automation]
author: marco_seifried
author_email: marco.seifried@haufe-lexware.com
header-img: "images/bg-post.alt.jpg"
---
What started as a little get-together in one of our locations last year grew a little bit bigger - last week 150 Haufe colleagues met in Freiburg for our internal DevOps Day 2016.
{:.center}
![DevOps Day 2016]({{ site.url }}/images/devopsday2016/audience.jpg){:style="margin:auto"}
As special guest we had [Moritz Heiber](https://twitter.com/moritzheiber) from ThoughtWorks, talking about Continuous Delivery and immutable infrastructure. He also talked about how DevOps empowers teams, not just developers, to work together, leverage technology, to deliver software in a better, more effective way - and that's exactly what we are seeing within Haufe as well.
Small teams, getting full responsibility over the complete software delivery process, can change things massively. We had several talks from our projects where the build and deploy cycle came down from several person-days to minutes, fully automated!
We are experimenting a lot with [Go.CD](https://www.go.cd/), an open source continuous delivery tool by ThoughtWorks, but we are also still using Jenkins and other tools. The point for Haufe at this stage is not just the tooling though, it's bringing it all together: Understanding what your software process is, including for example testing, made us realise how many different people, departments and processes are typically involved, and not always 100% in sync... Writing it all down and making it explicit, through config files stored in a repository, is the foundation for improving things and automating them. Following the concept of immutable infrastructure, we pack all components into Docker containers and build from scratch, every time. That makes us independent of where we deploy to - locally on a dev machine, hosted, or onto the cloud. So far we have stayed away from using any cloud-specific offerings, like Amazon EC2 or similar, and manage it all ourselves, using Docker Machine, Compose and Swarm.
Automated testing was another topic for discussion - is there a need for manual testing? How can others deploy several times a day and still test everything? Again, testing, development and deployment all move closer together, and therefore automating your test processes across the board is a natural goal - start with unit tests and system tests, don't forget security, include API testing, end-to-end and UI. Include it in your pipelines, set up the test environments from scratch and pull the test cases out of a repository (immutable infrastructure!). Then just make sure you can run it all fast - and here comes the challenge for us again. Test automation is not new for us; we have automated test cases - a lot in some cases. But combine all of those and you realize there is not enough time in a night to run them all!
So apart from the different tools and frameworks you can use - we talked about SoapUI, Selenium, HP Lean FT and others - we have to think about when to test what. And maybe restrict ourselves more, leave some tests out, not run them all, all the time. After all, testing being part of the build and deploy cycle, which is being streamlined, we can always redeploy. Combined with the Microservices approach, you deploy small pieces - so there is only so much which can go wrong in one deployment.
We ended the day with the public meetup - finally being able to have a beer helped to relax a bit - and were happy to see so many guests from Freiburg and the surrounding area. It's so good and important to network, talk about the experiences others have had, and realize how many of us work on the same topics!
We're already gathering topics for another DevOps day, hopefully soon!
View File
@ -1,268 +0,0 @@
---
layout: post
title: Summary of QCon New York, 2016
subtitle: Impressions, links and summaries of QCon New York
category: conference
tags: [qcon, culture, devops, microservices]
author: holger_reinhardt
author_email: holger.reinhardt@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
Here is a quick summary of my highlights of QCon New York from June 13th to June 15th, 2016. The following are the notes from sessions I attended. To make it easy to pick the most interesting ones, I grouped them according to topics. Slides from other presentations can be downloaded directly from the [QCon Schedule](https://qconnewyork.com/ny2016/schedule/tabular)
##### Culture
* [How to optimize your culture for learning](#how-to-optimize-your-culture-for-learning)
* [Learnings from a culture first startup](#learnings-from-a-culture-first-startup)
##### Devops
* [Implementing Infrastructure as Code](#implementing-infrastructure-as-code)
* [Think before you tool](#think-before-you-tool)
* [Container mgmt at Netflix](#container-mgmt-at-netflix)
##### Architecture
* [What they do not tell you about microservices](#what-they-do-not-tell-you-about-microservices)
* [Lessons learned on Uber's journey into microservices](#lessons-learned-on-ubers-journey-into-microservices)
* [The deadly sins of microservices](#the-deadly-sins-of-microservices)
##### Security
* [Security in cloud environments](#security-in-cloud-environments)
* [Cryptocurrency key storage](#cryptocurrency-key-storage)
---
### Implementing Infrastructure as Code
* [Description](https://qconnewyork.com/ny2016/presentation/implementing-infrastructure-code) and [slides](https://qconnewyork.com/system/files/presentation-slides/implementing_iac_-_qcon_nyc_2016.pdf)
* Website at <http://infrastructure-as-code.com>
* Motivations:
* speed: get something to market fast, iterate, continuously improve it
* heavy process to reduce danger vs everything goes
* goal: be able to make changes rapidly, frequently and responsibly
* challenges
* server sprawl: config drift, automation fear cycle
* Infrastructure-as-Code:
* Applying software engineering tools to managing the infrastructure
* Unattended automation (enforces discipline, discourages out-of-band changes)
* Changes need to be tested as well, before doing a **DevOoops**
* See <http://serverspec.org>
* The process for applying changes is auditable (the responsible part)
* Changes are tracked by commit
* Automation enforces that processes are executed
* See <http://47ron.in/blog/2015/01/16/terraform-dot-io-all-hail-infrastructure-as-code.html>
* Think about duplication
* Re-use by forking: divergence vs decoupling
* Sharing elements avoid monoliths - optimize to simplify changes
---
### Think before you tool
* [Description](https://qconnewyork.com/ny2016/presentation/think-before-you-tool-opinionated-journey)
* Centralized Log Analysis: <https://prometheus.io>
* Microservice dependency graphing and monitoring: <http://zipkin.io>
---
### Security in cloud environments
* [Description](https://qconnewyork.com/ny2016/presentation/access-secret-management-cloud-services) and [Slides](https://qconnewyork.com/system/files/presentation-slides/identity_access_and_secret_management_-_ryan_lane_-_qcon.pdf)
* Additional links for password and secret managers
* <http://docs.ansible.com/ansible/playbooks_vault.html>
* <https://gist.github.com/tristanfisher/e5a306144a637dc739e7>
* <http://cheat.readthedocs.io/en/latest/ansible/secrets.html>
* <https://github.com/DavidAnson/PassWeb>
* <https://passwork.me/info/enterprise/>
* <https://lyft.github.io/confidant/>
* Detecting secrets in source code: <https://eng.lyft.com/finding-a-needle-in-a-haystack-b7e0627b01f6#.f0lazahyo>
---
### How to optimize your culture for learning
* [Description](https://qconnewyork.com/ny2016/presentation/optimize-your-culture-learning)
* About creating high learning environments in [Recurse](https://www.recurse.com)
* Company mantra 'You are doing your thing at your time, and we bring the place and the community'
* RC is partnering with companies:
* value for participants: improve their software skills
* value to companies: hiring access
* Motivation
* Fear is an obstacle to learning
* People don't want to look stupid
* Create a positive feeling around "I do not know"
* RC social rules to reduce fear
* No feigning surprise (What, you don't know?)
* No “well, actually” (do not correct details which are irrelevant for the conversation)
* No backseat driving (lobbing criticism over the wall without participating)
* No subtle-isms (no racism or sexism, even in a subtle way: "where do you really come from?")
* What works for us
* Being transparent about our beliefs re-enforces learning
* Being vocal about our values
* Treat people like adults
* Don't need to check in on people every day
* Choice to participate in activities, meetings, etc vs mandating participation
* Key message
* Hire attitude over skill
* You can learn any skill, but you can't learn curiosity
---
### Learnings from a culture first startup
* [Description](https://qconnewyork.com/ny2016/presentation/learnings-culture-first-startup)
* About creating the right culture at [Buffer](http://buffer.com)
* How do we know how to build a good culture
* What is culture
* In every team: the explicit and implicit behaviors which are valued by the team
* Evolves and changes with each hiring
* Best teams carefully manage culture
* At Buffer, culture is as important as the product
* The result are our [buffer values](https://open.buffer.com/buffer-values/)
* Crafting culture is hard: you hire for culture, you should be firing for culture
* Build the core team which aligns on culture
* Interviews/hiring around culture fit
* Spend less time convincing people, more time finding people who are already convinced
* In order to hire for cultural fit, the team has to be on the same page
* Lessons learnt from experimenting with culture
* Transparency breeds trust (for team and customers)
* See <https://buffer.com/transparency>
* Buffer transparency salary calculator
* See <https://buffer.com/salary?r=1&l=10&e=2&q=0>
* Term sheet and valuation of round A are public
* See <https://open.buffer.com/raising-3-5m-funding-valuation-term-sheet/>
* It is even more important to be transparent when things don't go well
* Culture is truly tested and defined during hard times
* Implementing culture for a globally distributed team
* Can hire the best people in the world
* Hard to brainstorm (teams need mini-retreats)
* Harder to get on the same page
* Hard to disengage from work when working through Timezones
* Cultivate culture for remote work
* Need to be self-motivated and genuinely passionate about your work
* Need to be resourceful, can get through roadblocks
* But hard time to hire juniors/interns
* Written communication is our main medium
* But can't replace in-person interactions: we have retreats
* Make mistakes, keep tight feedback loops, iterate fast
* Growing a remote team without managers was a bad choice
* Instead of hiding mistakes, we share them openly
* There are no balanced people, only balanced teams
* Culture fit -> culture contribution
* It's the leader's job to hire for balance
* Hiring for culture fit assumes that culture is perfect and static
* See <http://diversity.buffer.com>
* A/B testing to attract different demographics
* Taking hiring risks consciously (instead of reducing it)
* Everyone is hired for a 45 day work bootcamp (full-time contracting period)
* Can't copy other cultures
* Culture as differentiator (from 300 to 4000 job applicants)
---
### Container mgmt at Netflix
* [Description](https://qconnewyork.com/ny2016/presentation/scheduling-fuller-house) and [Slides](https://qconnewyork.com/system/files/presentation-slides/schedulingfullerhouse_nflx.pdf)
* Running containers on AWS results in losing EC2 metadata and IAM control
* Lesson: making container act like VMs
* Implemented EC2 Metadata Proxy to bridge EC2 metadata into container
* Why?
* Amazon service sdks work
* Service discovery continues to work
* Lesson: Image Layering
* Engineering tools team generates base images (blessed, secured) for app envs (e.g. node.js)
* Application images are derived from base image
---
### Cryptocurrency key storage
* [Description](https://qconnewyork.com/ny2016/presentation/banking-future-cryptocurrency-key-storage)
* How cryptocurrency is stored at [Coinbase](https://www.coinbase.com)
* Sharding of crypto keys using [shamir secret sharing](https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing)
* Using cold and hot storage (consensus access)
* Cold storage
* Most of the crypto currency is stored in cold storage (disconnected)
* Generated on hardware never connected to the internet
* Stored on usb
* Private key is being split into shards and encrypted independently
* Restoring private key requires majority of shards (individual parts can go rogue)
* Example: Ethereum Cold Storage for smart contracts
* 4 of 7 can retrieve the contract
* 6 of 7 can change the contract
* Hot storage
* Fully insured
* Single server requiring a quorum of senior engineers to unlock/unscramble
* Multisig Vault
* <https://www.coinbase.com/multisig>
* Cold Storage as-a-Service (User Key, Shared Key, Coinbase Key)
* User needs paper and passphrase
* m-of-n sharding of key is possible
---
### What they do not tell you about microservices
* [Description](https://qconnewyork.com/ny2016/presentation/what-they-dont-tell-you-about-micro-services) and [Slides](https://qconnewyork.com/system/files/presentation-slides/qcon-microservices_talk_v7_for_web_upload.pdf)
* See also
* <http://www.slideshare.net/DanielRolnick/microservices-and-devops-at-yodle>
* <http://www.yodletechblog.com/2016/04/25/yodle-hackathon-april-2016-edition/>
* Good pragmatic steps for evolving from monolith to microservice architecture
* After the split, Postgres started to break down because of connection pooling; they used an external connection pooler like <https://pgbouncer.github.io>
* Chose Mesos/Marathon
* Thrift-based macro services
* Smart pipes vs context-aware apps
* Decoupling application from service discovery
* (v1) curator framework from Netflix brought into Zookeeper
* (v2) hibachi by dotCloud (dedicated routing hosts)
* (v3) haproxy
* Marathon has built-in routing concept using haproxy (generates haproxy config)
* Started using qubit bamboo
* Can iterate routing and discovery independently from the application, but ran into a scaling problem around 300 services (quadratic growth: every service needs to know about every other service and its health)
* Moving back to topology of (v2) but with HAProxy
* Continuous Integration / Continuous Deployment (CI/CD)
* Using Sentinel to manage services
* Concept of [canary release](https://www.infoq.com/news/2013/03/canary-release-improve-quality)
* Containers make things simpler but leave a mess behind
* Need to clean up container images: [garbage collection in registry?](http://www.yodletechblog.com/2016/01/06/docker-registry-cleaner/)
* Monitoring
* Graphite and Grafana
* Did not scale, since every team had to build own dashboard
* Too much manual effort and no alerting
* Switched to New Relic
* Fully monitored if agent is present
* Goal was 100 apps in 100 days
* Source code management
* Using [Hound](https://github.com/houndci/hound) to help with code searching
* Using [GitRepo](https://code.google.com/p/git-repo/) to help keep repos up to date
* Human service discovery
* Using [Sentinel](http://www.yodletechblog.com/2015/12/14/yodles-continuous-improvement-of-continuous-delivery/) for developer finding services
---
### Lessons learned on Uber's journey into microservices
* [Description](https://qconnewyork.com/ny2016/presentation/project-darwin-uber-jourbey-microservices) and [Slides](https://qconnewyork.com/system/files/presentation-slides/uber-journey_to_microservices_public.pdf)
* See also <https://eng.uber.com/building-tincup/>
* Very good presentation on the motivators to break apart the monolith
---
### The deadly sins of microservices
* [Description](https://qconnewyork.com/ny2016/presentation/seven-deadly-sins-microservices) and [Slides](https://qconnewyork.com/system/files/presentation-slides/qcon_nyc_2016_-_seven_more_deadly_sins_final.pdf)
* See also
* <https://speakerdeck.com/acolyer/making-sense-of-it-all>
* <http://philcalcado.com/2015/09/08/how_we_ended_up_with_microservices.html>
* <http://www.slideshare.net/dbryant_uk/craftconf-preview-empathy-the-hidden-ingredient-of-good-software-development>
* <https://acaseyblog.wordpress.com/2015/11/18/guiding-principles-for-an-evolutionary-architecture/>
* Strategic goals <-> architecture principles <-> design and delivery practices
* Neal Ford: MSA as evolutionary architecture
* Architecture is hard to change, so make architecture itself evolvable
* The spine model
* Needs -> Values -> Principles -> Practices -> Tools
* going up to the spine to break deadlock
* Cargo culting
* Understand the practices, principles, and values
* But getting lazy with non-functional requirements
* Just Enough Software Architecture
* Recommended book [Just enough software architecture](https://www.amazon.com/Just-Enough-Software-Architecture-Risk-Driven/dp/0984618104/)
* Ebook format available through <http://georgefairbanks.com/e-book/>
* Embrace BDD-Security framework for BDD-style security testing
* <https://www.continuumsecurity.net/bdd-intro.html>
* Devops
* Topologies: <http://web.devopstopologies.com>
* Testing
* Continuous Delivery: <https://dzone.com/articles/continuously-delivering-soa>
* Service virtualization: <https://github.com/SpectoLabs/hoverfly>
* Hoverfly is a proxy written in Go. It can capture HTTP(s) traffic between an application under test and external services, and then replace the external services. It can also generate synthetic responses on the fly.
View File
@ -1,218 +0,0 @@
---
layout: post
title: A node.js Primer for us Old School Developers
subtitle: Things in node.js that caught me on the wrong foot when I saw them the first time
category: general
tags: [cto, open-source, culture]
author: martin_danielsson
author_email: martin.danielsson@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
For long years, I developed almost exclusively for the Windows platform. We're talking desktop applications, a mix of C, C++ and later on C#, mixed with COM and .NET interoperability. Throw in MFC and WinForms, and you know approximately what I am talking about. Additionally, I have been fiddling with many other frameworks and languages, but the above were until now my area of expertise.
Recently, I have been doing a lot of development in node.js, and as the <s>old school developer I am/have been/will always be/never was</s> Solution Architect I am, a couple of things struck some nerves with me, and I would like to share them with you.
### What you require, you always get the same thing back (or: Pointers, pointers, oh, I mean references)
Your upbringing and development experience tend to make you identify things in new programming languages (and I have seen quite a few over the years) with things in languages you know really well, which is why I immediately equated the node function `require` with the C preprocessor directive `#include`. This sort of makes sense, as you use both to include libraries and other files into your current code.
What I had totally missed out on is how `require` actually works under the hood, and how you can use that to do really nasty stuff if you are inclined to. From C, I was used to the fact that `#include` actually always does something; it "copies" in the include file to where you put your `#include` directive, in the preprocessor. Not so with node.js; it works a lot more elaborately. I can't actually tell what it does exactly under the hood, but this is how it behaves: When it first sees a `require` call, it will check in its "required files" map if it has already read that file from disk or not. If it hasn't, it will read and evaluate the code, and otherwise it will just return a reference to the object it has in its "require map".
This is an important thing to remember: You will always get the same object back, regardless of how often you `require` the same file. This also means: If you change anything in the object you get back, all other references to this object will also change. Well, actually and more correctly, as it's the same object, everybody else will also see the changes. This can be intended, but sometimes it makes for a good debugging session when it's not.
If you're into C/C++ pointers or C# references (or Java references), you will feel quite comfortable with this. Just keep in mind that everything which is not an atomic element (numbers and strings, which are immutable, just like in C#) is a reference. Any copying is shallow copying; to do deep copying of objects, you have to jump through a couple of hoops.
Oh, another nifty thing: You can also `require` JSON files. As JSON is JavaScript code (JavaScript Object Notation), it will work just fine to do this:
~~~javascript
var jsonData = require('./data/settings.json');
~~~
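To make the caching behaviour described above concrete, here is a tiny sketch (the file names are made up): requiring the same file twice hands you back the very same object, so a change made through one reference is visible through the other.
~~~javascript
// counter.js - a tiny module with some mutable state
module.exports = { hits: 0 };

// app.js - require the module twice; both names point to the *same* object
var counterA = require('./counter');
var counterB = require('./counter');

counterA.hits++;                     // mutate via the first reference
console.log(counterB.hits);          // prints 1 - the second reference sees the change
console.log(counterA === counterB);  // prints true - it is literally the same object
~~~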
### Remember the Win32 message loop?
When starting to work with node.js, I was always confused when seeing all these callback functions and nested looks and things that look recursive at first sight. The following things helped me lose the fear of these calls:
* Node.js is single threaded, and works very much like the old Windows message loop (for those who were unfortunate enough to have to actually work with that): Asynchronous calls are put on a message queue, and then they are called one after the other in the order they were put on there
* The following notion helped me "get" it: An async call is very much like `::PostMessage()`, and direct function calls are much like `::SendMessage()`
Node.js relies on everybody "playing by the rules": Anything I/O should be done asynchronously, and if you have to do long-running processing, you should split it up into pieces, so that you don't block everybody else. **Remember**: Node.js is single threaded. In case you have **really time-consuming stuff**, you should consider splitting your work onto different servers (UI server and Worker server) to make sure your web UI always responds. This is the same thing as when you did long-running tasks in your message handlers in Win32: Everything freezes until you return from your processing.
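One common way of "splitting it up into pieces" is to process a batch of items and then hand control back to the message queue with `setImmediate` before continuing; a minimal sketch (the function name and the batch size are made up):
~~~javascript
// Process a long list in chunks so the event loop stays responsive.
function processAll(items, done) {
    var index = 0;
    function processChunk() {
        var end = Math.min(index + 100, items.length); // 100 items per turn, then yield
        for (; index < end; index++) {
            // ... do the actual (synchronous) work for items[index] here ...
        }
        if (index < items.length) {
            setImmediate(processChunk); // give other queued callbacks a chance to run
        } else {
            done(null); // all done - err is null by convention
        }
    }
    processChunk();
}
~~~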
The fact that a single `node` instance is always running just on a single thread is both the bliss and problem with node.js: Things which work on a single node.js instance are not 100% guaranteed to run nicely when you load balance multiple instances. In case you have state in your application (be it just a session), you will have to make sure you get your persistence layer right. Fortunately, support for various databases (redis, postgres, mysql,... you name it) is very good. Putting your sessions into a `redis` instance is extremely simple for example (especially when deploying with `docker`).
### Don't fear the Async - Embrace it
So, what does it mean that we should "play by the rules"? It means that we should utilize asynchronous functions where they make sense, and that you'd better get used to those callback methods rather quickly, or at least understand how they work, and why they are important (see above on the message queue as well).
**Example**: In the beginning you then end up with these kinds of structures in your code whenever you are doing subsequent asynchronous calls (this is code calling some REST services one after the other):
~~~javascript
var request = require('request'); // https://github.com/request/request
app.get('/users/statistics', function (req, res, next) {
request.get({
url: 'https://api.contoso.com/endpoint/v1/users'
}, function(err, apiRes, apiBody) {
if (err) {
// Oooh, an error, what now?
console.log('Something went wrong.');
return;
}
// Assume body is a JSON array of user ids
var users = JSON.parse(apiBody);
var userDataList = [];
for (var i = 0; i < users.length; i++) {
request.get({
url: 'https://api.contoso.com/endpoint/v1/users/' + users[i]
}, function (err, apiRes, apiBody) {
// Now, what do I do with the results?
// When do I get those results?
userDataList.push(JSON.parse(apiBody));
});
}
// As an "old school" developer, I would like to use
// userDataList here, but it's still empty!
// Try to render with jade/pug
res.render('user_statistics', {
userList: userDataList
});
});
});
~~~
This kind of code is where it gets really interesting and challenging to work asynchronously: We want to do a series of REST calls to a backend service, but we don't know how many of them, and we want to gather the results and continue working on them as soon as we have them.
The above code has a lot of problems. To just list a few of them:
* The error handling after the first `request.get()` call is suboptimal; it just outputs something to the console and returns; this will result in a "hanging web page" for the end user, as the `GET` handler defined on the URL `/users/statistics` never outputs anything to the `res` response variable.
* This is due to the fact that (here it's express) the request gets routed into this call; `next()` is never called (which would in the end render a `404` if no other route exists), and `res` is never filled.
* Keep in mind: As everything is asynchronous, the framework **cannot know**, at the time this function returns, whether it will be returning anything useful, or will have failed! Async calls may have been put on the message queue, but may have not yet rendered any result.
* We're doing async calls in a `for` loop; this is not a real no-go, but it has some problematic properties you have to be aware of:
* Each `request.get()` call inside the `for` loop is asynchronous; this means that the request will be issued sometime in the future, and will return sometime in the future; it feels parallel.
* The callback looks "inline" and nice, but you can't tell when it will be executed, as it depends on when the call to the backend service finishes. By the way: We don't get into trouble because we `push` into the `userDataList` variable - we're single threaded, so no race conditions or threading problems there, that's fine.
* In the program execution, we want to render the `userDataList` when we have all the data back from the REST service, but right after the `for` loop, `userDataList` will **still be empty**. Every time. This is because the `userDataList` isn't filled until the callbacks from the requests inside the `for` loop are called, and that will **never** be until the current function has finished (the message loop principle, you recall?).
OK, so, we're doomed, right?
Fortunately not. Many people have encountered these problems and have written super-useful libraries to remedy them. Some prefer using "Promises" (see [Promise JS](https://www.promisejs.org) for example), and some things are more easily solved by using a library like `async` ([http://caolan.github.io/async/](http://caolan.github.io/async/)). In this case, I will use `async` to rewrite the above code.
~~~javascript
var request = require('request'); // https://github.com/request/request
var async = require('async'); // http://caolan.github.io/async/
app.get('/users/statistics', function (req, res, next) {
request.get({
url: 'https://api.contoso.com/endpoint/v1/users'
}, function(err, apiRes, apiBody) {
if (err) {
// Pass on error to error handler (express defines one by default, see app.js).
return next(err);
}
// Assume body is a JSON array of user ids
var users = JSON.parse(apiBody);
async.map(
users,
function (user, callback) {
request.get({
url: 'https://api.contoso.com/endpoint/v1/users/' + user
}, function (err, apiRes, apiBody) {
if (err)
return callback(err);
callback(null, JSON.parse(apiBody));
});
},
function (err, results) {
// results is an array of the results of the async calls
// inside the mapped function
if (err)
return next(err);
res.render('user_statistics', {
userData: results
});
}
);
});
});
~~~
In the new version of the code, `async.map()` does the heavy lifting: It calls the (anonymous) mapping function (signature `function (user, callback)`) once per user ID, and then automatically assembles all the error messages and results into a single `err` and `results` return parameter. In the background, it will remember all calls it did and wait until all calls have returned correctly. Only after data for all user IDs has been retrieved, you will be presented with the entire result list (or with an error if something went wrong). The `async` library has a lot of different ways of calling async functions reliably and conveniently, e.g. in series (one after the other), in parallel or as a waterfall (passing on results to the next function).
The code is a lot clearer now, and you get back what you need without doing too much syncing overhead; that's abstracted away in the `async` implementation. [Roam the documentation](http://caolan.github.io/async/), it's totally worth it.
### Functional programming, anyone?
What ought to strike you when looking at node.js code is the plethora of anonymous function definitions strewn all over the place. If you're used to C, C++, Java or C#, you may be unaccustomed to those, even if many such constructs are in place for those languages (at least Java and C#) nowadays as well, like anonymous delegates or lambda functions.
The difference is that, in node.js, everybody is using anonymous functions all over the place and implicitly expects you to know intuitively how closures work. If, like me, you learnt the hard way that variables have a certain life span and scope, and that it's not easy to pass stuff in and out of function definitions, the concept of closures and currying function calls is something one may have a hard time with. If you are not "contaminated" with these other (older) languages, chances are you find it super intuitive and don't get why I find it so hard.
So, closures. What does that mean? In short (and in my words, which may be wrong, but the notion works for me), it means that everything you pass into a function definition will be remembered the way it was (I have a footnote for that) at the time the function was defined. If the variable is in scope, you can use it, even if the actual function you define will be executed asynchronously, which, as we have learnt, is the default case in node.js.
Looking at the code above, we are implicitly using a ton of closures. The most striking example is perhaps the `res` variable we're using after having gathered all the results. We're inside an anonymous callback function which defines another callback function,... This means we're by no means in the same execution context ("stack trace") as the top-most function by the time the `res.render()` call is made. Still, it will just work. This is the concept of closures. In C++ (at least pre-2015), you could not create such a construct without resorting to keeping large state structures someplace else and passing them around. But as I said: If you haven't seen it the old school way, you wouldn't even wonder why this is "magic" to some.
The promised **Footnote**: Keep in mind that even the closures are only keeping the references. If you change the *content* of the object you're referring to, it will have changed (whilst the reference has not).
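A tiny illustration of that footnote (the object and the timeout are made up): the closure captures the reference, so a later change to the object's content is what the callback ends up seeing.
~~~javascript
var settings = { color: 'red' };

// The callback closes over a *reference* to settings, not a copy of it.
setTimeout(function () {
    console.log(settings.color); // prints 'blue' - the later change is visible
}, 100);

settings.color = 'blue'; // mutate the object after the closure was created
~~~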
The principle of closures can also be used for "currying" functions: Creating parametrized functions. Even if this perhaps isn't something you will use every day, all day, it's important to have seen the concept. It's quite frequently used in libraries such as passport.js, where not understanding the concepts makes you go slightly nuts with the syntax. It looks like magic or super weird syntax, but actually it isn't.
Let's look at an example of how that might look. Here, we want to run a couple of shell commands (it's a simplified snippet from a node.js component of mine which will run on docker, so this is okay, we control the environment).
~~~javascript
const exec = require('child_process').exec;
var async = require('async');
var backupExec = "tar cfz ...";
var rmExec = "rm -rf ...";
var untarExec = "tar xfz ...";
var options = {}; // exec options (e.g. cwd or env); left empty in this simplified snippet
var execHandler = function (desc, callback) {
return function (err, stdout, stderr) {
if (err) {
console.error(desc + ' failed.');
console.error(err.stack);
return callback(err);
}
console.log(desc + ' succeeded.');
callback(null);
}
}
async.series([
function (callback) {
exec(backupExec, options, execHandler('Backup', callback));
},
function (callback) {
exec(rmExec, options, execHandler('Deleting previous configuration', callback));
},
function (callback) {
exec(untarExec, options, execHandler('Unpacking imported configuration', callback));
}
], function (err) {
if (err)
return cleanupAfterFailure(err);
// We're done.
// ...
});
~~~
The really interesting bit here is the `execHandler`. It's defining a function which in turn returns a function which has the correct signature needed for the callback of the `exec()` calls (`function (err, stdout, stderr)`). The "currying" takes place where we pass in the `desc` and `callback` parameters into the function calls (this is again closures), so we end up with a parameterized function we can pass in to `exec`. This makes the code a lot more readable (if you do it right) and compact, and it can help to pull out recurring code you couldn't pull out otherwise, due to minimal differences (like here the description and callback). Misuse this concept, and everybody will hate your code because they don't understand what it's doing.
On a side note, we're once more using `async` here, this time the `series()` functionality, which calls the async functions one after the other and returns the results after the last one has finished, or stops immediately if one fails.
**Footnote 2**: If you're in nitpicking mode, what I describe above is not the classical "currying" you might know from real functional languages such as Haskell or SML, where currying means automatic partial parameter evaluation. This is something you may also do in JavaScript, but you don't get it as a language construct as in Haskell. Perhaps I should just call it "parameterized function definition" or something similar, as that's more to the point.
### On `callback` and `err` parameters, exceptions
There's a last small thing which I found out about most node.js libraries, and probably I'm just too thick to find this written out someplace, so I do it here: If you're dealing with async functions (and, remember, you are, most of the time), follow these conventions (a small sketch follows the list):
* The first parameter of a callback function is `err`. Check for errors the first thing you do and react sensibly, e.g. by passing on the error to an upstream callback or rendering an error message. If you play by this convention, error handling turns out to be fairly painless. Try to fight the system and you're quickly doomed in error handling hell.
* If you need a callback for an asynchronous function, this goes into the **last parameter**. This is quite obvious when you think about it; in many cases you will have an anonymous function serving as the callback, and passing parameters after this anonymous definition just looks weird.
* If you make use of exceptions, make sure you catch them inside the same execution context (message loop handler) as they were thrown. This means, if you're in an async function (and you will be most of the time), wrap things which might go wrong in a try/catch and pass the exception as an `err` to your callback. Otherwise you may end up with a crashing node.js instance very easily, as soon as you have an uncaught exception bubbling up to the main message loop.
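To tie these conventions together, here is a small, self-contained sketch (the file name is just an example): the error goes into the first callback parameter, the callback itself is the last parameter, and the exception `JSON.parse` might throw is caught inside the async handler and passed on as `err`.
~~~javascript
var fs = require('fs');

// err comes first in the callback; the callback itself goes last in the parameter list.
function readJsonFile(fileName, callback) {
    fs.readFile(fileName, 'utf8', function (err, text) {
        if (err)
            return callback(err);      // pass errors upstream, first thing
        var data;
        try {
            data = JSON.parse(text);   // may throw on invalid JSON
        } catch (ex) {
            return callback(ex);       // don't let the exception escape - pass it on as err
        }
        callback(null, data);          // success: err is null
    });
}

// Usage - the path is just an example:
readJsonFile('./data/settings.json', function (err, settings) {
    if (err)
        return console.error('Could not read settings:', err.message);
    console.log(settings);
});
~~~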
### Conclusion: It's not (that much) magic in the end
These were a couple of the main "gotchas" I encountered in the last couple of months. I hope you found them interesting and perhaps, if you're <s>an old school</s> a developer with experience in other domains, it could enlighten you a little regarding some peculiarities of node.js. Some things look like magic, but actually aren't. But if you are not aware of the concepts that lie behind them, you will try to fight the system, and that is most probably not going to end well.
I might also throw in a couple of experiences and "Aha!" moments I had with [express.js](http://expressjs.com) in a future post. But as this post is already too long, that will have to wait ;-)
View File
@ -1,44 +0,0 @@
---
layout: post
title: Build an Eloqua Action Service and make it Open-Source
subtitle:
category: general
tags: [cto, open-source, culture]
author: melania_andrisan
author_email: melania.andrisan@haufe-lexware.com
header-img: "images/bg-post.alt.jpg"
---
[Here is our open-source][github] project on GitHub, after some digging into the Eloqua documentation and building some Docker files. Take a couple of minutes and read the entire story.
Some time ago... the story of building an [Eloqua App][eloqua app] begins. First, [Bogdan][bogdan] (one of my colleagues) starts to investigate what can be done, and after digging and digging in the old and new documentation he realizes what is possible. After 2 weeks of building and debugging, the first version of our Eloqua App appears.
Our Eloqua App is a service meant to provide a small box in a campaign which can receive a list of contacts from the campaign and deliver emails with a form with their data. We are using Eloqua to create different Marketing Campaigns and in case you are not familiar with it you can have a look at the [official Oracle Page][Oracle].
To be able to do this we needed to build a Node Service with Express (it could be any type of REST service) which can serve the needed Endpoints:
- Create - the endpoint is called when the App is initialized, and this is happening when the marketer drags the app box into the campaign
- Configure - is called when the marketer chooses to configure the app by double clicking the App box in the campaign. This Endpoint delivers some HTML to make the configuration possible.
- Notify - is called automatically by Eloqua when the campaign is active and the list of contacts ends up into the App box
- Delete - is called when the App is deleted from the campaign
And back to the story now... we deployed the App in Azure, and we started using it in a campaign. Some weeks later, Eloqua changed the API and some static fields which were configured in the needed form were not appearing anymore.
Here I enter the story and start investigating; it looks like Eloqua no longer offers the possibility to store fields other than the ones attached to an Eloqua entity. Having this problem to solve, I added [MongoDB][Mongodb] with [Mongoose][Mongoose] to the project and saved the needed fields there. Doing this, I realized that we could improve our code, and instead of using the old callbacks I switched to promises.
I also built some Docker scripts to containerize the app and made everything open source.
[On Github][github] you can find the Server, the docker containers and a Readme file which explains everything we learned from building this App.
Enjoy! and Happy cloning!
[bogdan]:https://github.com/cimpoesub
[eloqua app]:https://docs.oracle.com/cloud/latest/marketingcs_gs/OMCAB/#Developers/AppCloud/Develop/develop-action-service.htm%3FTocPath%3DAppCloud%2520Development%2520Framework%7CDevelop%2520Apps%7C_____3
[Mongodb]:https://www.mongodb.com/
[Mongoose]:http://mongoosejs.com/
[Oracle]:https://www.oracle.com/marketingcloud/products/marketing-automation/index.html
[github]:https://github.com/Haufe-Lexware/eloqua-contract-to-form-action-service
View File
@ -1,56 +0,0 @@
---
layout: post
title: Summer Internship @Haufe
subtitle: An experience that greatly helped me to improve myself
category: general
tags: [culture, docker]
author: Bogdan Bledea
author_email: bogdan.b19c@gmail.com
header-img: "images/summerInternships1.jpg"
---
I'm Bogdan Bledea, and together with Patricia Atieyeh (we are both second-year students at the Polytechnic University of Timisoara) I built a feedback box app for Haufe-Lexware.
Today is my last day at Haufe. And I must admit, this summer internship was simply awesome. My colleagues were so friendly and helped me whenever I was in trouble. I never thought it would be so much fun to go to work.
Below is a screenshot of the application we built during this internship with the Meteor framework. Meteor is a fairly new framework that covers both the front end and the back end, which makes it a great framework to build with.
{:.center}
![Screenshot of the app](/images/summerInternships2.jpg){:style="margin:auto"}
This application is for internal company use: it takes feedback from a user, posts it on the site, and sorts the feedback entries by the number of votes. Users can only delete their own feedback, and an entry can only be modified as long as it has no reply or vote.
The development process was a little bit tricky for us, not least because it was the first time we used Meteor to develop web apps. But in the end we proved that nothing is impossible, and that if you truly want to, you can learn new things anytime.
As you can see, this feedback box app has a user login & registration form, but we restricted account creation to the company email domain, haufe-lexware.com. After logging in, a user can post feedback and reply to the other entries.
And when I say it was tricky, I mean that the support on the internet for Docker and Meteor was quite poor. I ran into a lot of bugs, the Meteor image for Docker wasn't even official, and the problems we encountered had already been posted on the internet with a lot of troubleshooting, but nothing for our case.
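For anyone curious, the domain restriction can be done with a server-side accounts hook in Meteor; the following is a rough sketch of the approach (not necessarily the exact code of our app):

```typescript
// Server-side sketch: only accept sign-ups from the company email domain.
import { Meteor } from 'meteor/meteor';
import { Accounts } from 'meteor/accounts-base';

const ALLOWED_DOMAIN = '@haufe-lexware.com';

Accounts.validateNewUser((user: any) => {
  // Reject the new account unless its first email address ends with the company domain.
  const email = user.emails && user.emails[0] && user.emails[0].address;
  if (!email || !email.toLowerCase().endsWith(ALLOWED_DOMAIN)) {
    throw new Meteor.Error(403, 'Please sign up with a haufe-lexware.com address');
  }
  return true;
});
```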

View File

@ -1,58 +0,0 @@
---
layout: post
title: The state of our API Strategy
subtitle: From a response to a sales call by an API Management vendor.
category: api
tags: [api, cto]
author: holger_reinhardt
author_email: holger.reinhardt@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
This is a (slightly adapted) version of a response to a sales enquiry of an API Management vendor. We had contacted them a year ago but the sales lead back then felt that our focus on 'just enough API management' was too narrow and not addressing the larger needs (and bigger deal) of the 'Digital Transformation' of the Haufe Group.
I am sure that sales person would have been more than happy to help us find out exactly what transformational impact his entire product portfolio would have had on our enterprise architecture if we had just let him (which we did not).
*Disclaimer: I personally know some of the key folks working at this vendor and I have nothing but the highest respect for what they are building. So this exchange does not try to show disrespect to their product and team, but rather illustrate how not going beyond the sales script can sometimes lead to unintended consequences.*
---
Dear XXX
Yes, indeed a year has passed. Well, back then you guys kind of blew it when your sales lead insisted on discussing an entire enterprise transformation strategy, while our distributed API-first architecture and planned budget weren't quite in your general (deal) ballpark. Since our Technology Strategy for the Haufe Group calls for being [like the web and not behind the web](http://martinfowler.com/articles/microservices.html), your commercial model apparently made it quite difficult to engage on such a small scale.
But I was delighted to see your most recent enterprise architecture white papers closely tracking our Technology Strategy. I think it fully validates our approach of providing decentralized API management on the basis of a bounded (business) context (Conway's law applies to API Management too).
In the meantime we have settled on [Mashape's Kong](https://github.com/Mashape/kong) and our [own API Mgmt Portal](http://wicked.haufe.io) (one developer fulltime for 3 months) for our internal API deployments. I think you will find that our portal approaches API Management from quite a different perspective than most traditional API Mgmt vendors - it fully embraces `infrastructure as code` and `immutable servers`. In our opinion it simply doesn't make any sense to (re)introduce long-running API gateway and portal servers to manage and service APIs from microservices which are deployed fully automatically through our CI/CD pipeline. We like to think that this brings us closer to the concept of [APIOps](http://www.slideshare.net/jmusser/why-api-ops-is-the-next-wave-of-devops-62440606) - applying the same basic concepts of DevOps but to API operations.
You can find more details at <http://wicked.haufe.io>.
On the design governance side we also progressed rather nicely. You might find our [API Styleguide](https://github.com/Haufe-Lexware/api-style-guide) of interest - I think it represents some of the best practices from the industry. We are planning to use [Gitbook](https://www.gitbook.com) or [ReadTheDocs](https://readthedocs.org) to publish it in a better e-book style format. We took a lot of inspiration from the [Zalando API Styleguide](http://zalando.github.io/restful-api-guidelines/).
The one remaining piece missing in our API story is an API registry. But again I am not looking for a repeat of the fallacy of a centralized UDDI or WSRR registry, but rather to take the Web as an example and work along the lines of <http://apis.io> (source code available at <https://github.com/apisio/apis.io>). Central registries never worked, but Google does. Hence an API search engine with a choice or combination of
* a single git repo (containing API definitions) supporting pull requests and/or
* the ability to register commit web hooks to many git repos (each representing one or more API definitions) and/or
* an active crawler which actively looks for new API definitions (similar to [ATOM Pub service document](http://bitworking.org/projects/atom/rfc5023.html#find-collections) at the root or a well known location of the service URL)
will do. I found [Zalando's API Discovery](http://zalando.github.io/restful-api-guidelines/api-discovery/ApiDiscovery.html) strategy to be very inspiring, but we might start with a Repo-based approach to learn and iterate.
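To make the web-hook option above a little more concrete, here is a purely hypothetical sketch (Express, GitHub-style push payload; the endpoint and the indexing function are placeholders, nothing like this exists yet):

```typescript
// Hypothetical sketch of the "commit web hook" option: a repo containing API definitions
// notifies this endpoint on push, and we re-index whatever OpenAPI/Swagger files it holds.
import express from 'express';

const app = express();
app.use(express.json());

// Placeholder for the actual crawling/indexing of API definition files in the repo.
async function indexApiDefinitions(cloneUrl: string): Promise<void> {
  console.log(`re-indexing API definitions from ${cloneUrl}`);
}

app.post('/webhooks/push', async (req, res) => {
  // Shape follows a GitHub push payload; other git hosts would differ.
  const cloneUrl = req.body && req.body.repository && req.body.repository.clone_url;
  if (cloneUrl) {
    await indexApiDefinitions(cloneUrl);
  }
  res.status(204).end();
});

app.listen(3000);
```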
I am still looking for contributors for that last piece of our API strategy to fall in place. But based on the already existing work in <http://apis.io> from [3Scale](https://www.3scale.net) and the [API Evangelist](http://apievangelist.com) I think we are not that far off from where we need to be .. and if necessary we will develop the missing functionality and provide it as open source to the community.
I hope this gives you a good overview over the current status of the API piece in our Technology Strategy. You can follow us at [@HaufeDev](https://twitter.com/haufedev) or find up to date information on our [Developer Blog](http://dev.haufe-lexware.com). We are tentatively planning to make an announcement of our API portal in the September time frame.
BTW our API Portal is written such that it can be placed on top of other API Gateways. So if you (or another vendor) are interested in trying it out to make it work for your API gateway, ping us.
Cheers,
Holger (CTO Haufe.Group)
---
While this blog post was largely spontaneous, our offer to provide the API Management Portal as Open Source to API Gateway vendors is serious.
Our industry has benefited greatly from the openness and sharing of knowledge not just within the API community but also through the commercial sponsorship of API Management vendors like [Mulesoft](https://www.mulesoft.com), [Layer7](http://www.ca.com/us/products/api-management.html), [Apigee](http://apigee.com), [3Scale](https://www.3scale.net) and many others. (Disclosure: I am a former member of the Layer7 sponsored [API Academy](http://www.apiacademy.co))
---
PS: If you are like me you might be curious why we called our API Portal `wicked` - well, we first had a different name but the Mashape folks asked us to change it so as not to confuse it with their commercial offerings. Since Mashape has been very supportive and also provided Kong as Open Source, we felt that we owed them. We then thought of our goal to provide `wicked (good) APIops` and hence the name `Wicked` was born. It helps that it is also a play on [Wicket](http://www.thefreedictionary.com/wicket) as in "..1. A small door or gate, especially one built into or near a larger one. .."

View File

@ -1,180 +0,0 @@
---
layout: post
title: Summary of PayPal InnerSource Summit, 2016
subtitle: Summary of the PayPal InnerSource Summit in London.
category: conference
tags: [devops, culture, open-source]
author: holger_reinhardt
author_email: holger.reinhardt@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
From April 21st to 22nd, 2016 we were fortunate to attend the [PayPal Inner Source Summit](http://paypal.github.io/InnerSourceCommons/events/) in London.
We first got exposed to the InnerSource concept through a talk by PayPal at the [OSCON 2015](https://hlgr360.wordpress.com/2015/11/04/notes-from-oscon-europe-2015/) in Amsterdam in fall of 2015. At that point we were struggling to resolve a multitude of project dependencies on our backend platform team. The modus operandi of the team was highly interrupt driven, reactive and managed largely through tickets raised by other teams looking for changes or additions to existing services. There was precious little time in which the team could proactively reduce technical debt or improve operational efficiency. Needless to say the foundation of the so called 'Haufe Group Service Platform' (HGSP) continued to deteriorate. (The HGSP was also the topic of [my recent talk on the Automated Monolith](http://www.apiacademy.co/resources/api360-microservices-summit-the-automated-monolith/) at the API360 Microservice conference in New York).
Before I dive more into the topic, let me first summarize what InnerSource stands for: To apply the concepts of Open Source to the internal software development inside of your company. You can read more about it at the [InnerSource Commons](http://paypal.github.io/InnerSourceCommons/) and/or download a free copy of the [InnerSource eBook](http://www.oreilly.com/programming/free/getting-started-with-innersource.csp). If you google it you will also find some articles, for instance [here](http://thenewstack.io/github-bloomberg-talk-using-innersource-build-open-source-project-development-behind-company-firewalls/) and [here](https://www.infoq.com/news/2015/10/innersource-at-paypal).
What made us excited about the InnerSource concept was the premise of unwinding, or at least greatly reducing, those external dependencies, thereby freeing up the core team to focus on evolving the platform itself. It does so by offering dependent projects the ability to add their required enhancements to the platform code base themselves vs. having to wait for the platform team to do it for them.
While it appears to be counter-intuitive at first, remember that (a) this is how Open Source works (and you can hardly argue that it does not scale) and (b) the external team regains control over their own project schedule in exchange for extra work. The latter is an extremely powerful motivator, especially if you consider that the change might be small or incremental, but keeps being deprioritized by the platform team due to some other feature from some other project.
To me there are two macro patterns at work here, which seem to point in the same direction:
* Change the perspective from a point solution (add more developers) to changing the motivators in the system (enable external teams to take care of themselves). The same can be said about the [Netflix approach of 'Chaos Engineering'](http://readwrite.com/2014/09/17/netflix-chaos-engineering-for-everyone/). Instead of pretending that QA can catch every bug (and thereby contributing to an illusion of bug-free systems), Netflix deliberately introduces failure into the system to force engineers to design software anticipating the presence of failure.
* Efficiency and speed are goals at opposite ends (I owe that insight to the folks from Thoughtworks). You cannot have both of them at the same time. Microservices (MSA) embrace speed over efficiency through their emphasis on a `shared nothing` approach. In effect MSA is saying that databases and app servers are commodities by now and that you do not gain significant business value by using them efficiently. MSA emphasizes duplication and the reduction of cross dependencies over having a central instance which will become the bottleneck.
I would like to thank Danese Cooper and her team for so openly sharing their lessons and knowledge.
Here are my notes from the various sessions under the [Chatham House Rule](https://www.chathamhouse.org/about/chatham-house-rule).
---
### How does InnerSource work at Paypal
* concept of a trusted [committer](https://en.wikipedia.org/wiki/Committer) (TC) within core team
* define a formal contributor document
* pull request builder
* based on jenkins
* generates metrics before/after pull
* metric can not be worse after merge
* contains code checking, style, fortify etc
* both for internal and external pull requests
* peer reviewed pull requests internal
* rule: pull requestor can not be pull committer
* pay attention to InnerSource activation and incentive
* need to have documentation
* system documentation in markdown in the repo
* so keep documentation and source together in same pull request
* the teams should have a chance to meet (there is a difference between inner and open source)
* extrinsic vs intrinsic motivation - accept the difference
* Core Motivation: It takes too long, let's do it ourselves
* Learning penalty vs intrinsic understanding of the system
* Motivation: customer (only) sees product as a whole
* regardless of how many system boundaries are hidden within
* Take a customer centric view - take responsibility for the whole stack
* pull request helps to improve code structure
* InnerSource as company policy
* do to others what you want them to do to you
* Security concerns
* developer does not have production access
* legal information is isolated
* production access only via an audited tool
* *Example of such an auditing framework on AWS from Zalando at <https://stups.io>*
### Workshop
* Existing model: variations of 'big cheese gets stuff done'
* *I could not find a good explanation for it, but the expression means that some individual's identity and self-worth are tied to 'being the one who gets stuff done'.*
* OSS Apache Model:
* ratio of users/contributors/trusted committers/lead is 1000/100/10/1
* How can we keep the trusted committer from becoming the chokepoint
* super powers come with responsibilities
* code mentorship (not rewriting)
* it's like on-boarding new team members
* if it is not written down, it does not exist
* think of rewards for trusted committer and team
* in open source the submissions to projects stay with the contributor
* refactoring clues for core team
* lazy documentation through discussion threads
* also extrinsic rewards
* e.g. I give you a beer for that, or badges
* Could there be rewards for achieving committer status on external projects?
* Tooling for inner source
* <http://innersourcecommons.org>
* <https://www.youtube.com/watch?v=r4QU1WJn9f8>
* <http://www.inner-sourcing.com>
* KPIs to measure the success of the openness
* To change culture, you can not just do it from the inside, but also create pressure from the outside
* Create transparency by making all code repos by default visible/public within the company
* Challenges
* How to get PO bought into it (most of them like management by exception and `big cheese`)
* **If ownership is culture, part of it is keeping others out**
* this code is mine, this is yours
* it must be ok to fail for ownership to stop being exclusive
* Operational responsibility
* agreement on the time window of operational responsibility for merged patch by contributor
* Training of trusted committers
* keeper of the flame
* not everybody will be good at this (rotating)
* it's all about mentorship (did you get that far by yourself?)
* what mentor do you want to be
* growing a new culture
* do it one sprint at a time
* have rules of engagement
* why - because it is leadership (it is about mentoring)
* visible rewards
* <http://openbadges.org>
* <https://en.wikipedia.org/wiki/Mozilla_Open_Badges>
### Interviews and Lessons Learnt
#### Company 1
* from central dev to separate dev per business units
* resulting in a lot of redundancy over the years
* challenge:
* how can we speed up product development
* AND keep the place interesting for engineers to join the company
#### Company 2
* optimize developer productivity
* low friction, high adoption
* ease of use, ease of contribution
* developer community is something to opt-in individually, can not be mandatory
* how to motivate people:
* is it a personality trait or can it be taught?
* are people not motivated, or do they not know how?
* was/is there a hiring bias discouraging the right devs from joining
* developer happiness through transparency
* **if you have a PO who is only focused on his goals, he will eventually lose the team**
#### Company 3
* someone critiquing your code is like someone reading your journal
* someone critiquing your service is like someone complaining about your children (one step removed)
* there is an implicit cultural hierarchy/snobbism among programmers, depending on how hard something is to learn and how many years you needed to put into it
* law of unintended consequences (start with experiments)
* modularize software such that it becomes more intelligible for other people
* accept discrete contributions and mentor them through it, observe to learn what to document and what to modularize
* run experiments long enough to gather useful data (engineers tend to rather write code than listen to feedback)
* trusted committers need to be taken out of sprint rotation for the duration and focus on mentoring (but can be escalated in with clear tracking of costs)
#### Company 4
* paper comparing different approaches ["Inner Source - Adapting Open Source Within Organizations"](https://www.computer.org/csdl/mags/so/preprint/06809709.pdf)
* factors for success:
* Candidate product
* Stakeholders
* Modularity,
* Bazaar-style Development
* maintenance is continuous: moving target or dead corpse
* caters to the individual style
* quick turnaround but potentially incoherent approach
* Bazaar-style quality assurance
* (true) peer review of contributions
* releasing regularly improves quality
* no rushing in code
* releasing becomes no big deal
* Standardized or at least compatible tools
* incompatible toolsets inhibit collaboration
* Coordination & leadership to support meritocracy
* advocate and evangelists
* emerging leadership
* Transparency
* needed for visibility
* cultural fit to accept working in a fishbowl
* 1:n communication over 1:1
* management support
* the importance of slack (pict of number slide)
* Motivation
* There is **'learned helplessness' afflicting teams without slack and some sense of self-determination**
---
On a side note: For me personally it was eye opening to discuss the implications of an institutional bias towards `ownership` and 'single responsibility' and how this can counteract sharing and agility. It appears that too much focus on ownership might directly contribute towards risk avoidance and lack of openness:
* because being the owner means `if it breaks it is on me` and therefore I will do everything in my power to limit my risk
* which counteracts agility and controlled risk taking
The key here seems to be an institutional bias on 'if it breaks'. If the default assumption is that it can go wrong, it is clear that we would prefer to have one person responsible. But obviously embracing the possibility of (controlled) failure is what makes all the difference in execution speed between a startup and an enterprise, between a 'fail fast' and a risk avoidance culture. But this is a topic worthy of a separate blog post.

View File

@ -1,59 +0,0 @@
---
layout: post
title: Open Tabs No 1
subtitle: This week in Open Tabs.
category: opinion
tags: [custdev, culture, cto]
author: holger_reinhardt
author_email: holger.reinhardt@haufe-lexware.com
header-img: "images/bg-post.alt.jpg"
---
[This week in Open Tabs](http://dev.haufe.com/meta/category/opinion/) is my new weekly column to share the list of links I have (or had) open in my browser tabs to read during this week.
I was looking for a format to easily share these with the larger developer and business community here at Haufe, but it might also be of interest to others. My goal is not to be comprehensive or most up to date, but to provide a cross section of the topics I find interesting and worth investing my time in.
Depending on the week you might find some interesting links or nothing at all. And that is ok. I might even include some stories from the trenches or decide that they warrant a separate column. That is ok too. I might even include links I collected in Evernote as reference material for the future. I am sure you don't mind that, do you? The only constraint I would like to set myself is that it should only be material from a single week. Not more and not less.
So without much ado, here is the first **Open Tabs** edition for the week of August 29th.
##### Lean
* <http://www.allaboutagile.com/7-key-principles-of-lean-software-development-2/>
* <https://social-biz.org/2013/12/27/goldratt-the-theory-of-constraints/>
* <http://leanmagazine.net/lean/cost-of-delay-don-reinertsen/>
* <https://blog.leanstack.com/expose-your-constraints-before-chasing-additional-resources-cc17929cfac4>
##### Product
* <http://www.romanpichler.com/blog/product-roadmap-vs-release-plan/>
* <https://hbr.org/2016/08/what-airbnb-understands-about-customers-jobs-to-be-done>
* <https://hbr.org/2016/09/know-your-customers-jobs-to-be-done>
##### Business
* <http://disruptorshandbook.com/disruptors-handbooks/>
* <http://blog.gardeviance.org/2016/08/on-being-lost.html>
* <http://blog.gardeviance.org/2016/08/finding-path.html>
* <http://blog.gardeviance.org/2016/08/exploring-map.html>
* <http://blog.gardeviance.org/2016/08/doctrine.html>
* <http://blog.gardeviance.org/2016/08/the-play-and-decision-to-act.html>
* <http://blog.gardeviance.org/2016/08/getting-started-yourself.html>
##### Project
* [OpenCredo: Evolving Project Management from the Sin to the Virtue](https://www.youtube.com/watch?v=BpwjDcl8Ae8)
* <http://www.romanpichler.com/blog/product-roadmap-vs-release-plan/>
##### Culture
* <http://www.strategy-business.com/feature/10-Principles-of-Organizational-Culture>
* <https://github.com/blog/2238-octotales-mailchimp>
* <http://www.oreilly.com/webops-perf/free/files/release-engineering.pdf>
##### Technology
* [IT-Trends 2016 for the insurance industry](https://www.munichre.com/en/reinsurance/magazine/topics-online/2016/04/it-trends-2016/index.html)
* <http://raconteur.net/technology/blockchain-is-more-than-the-second-coming-of-the-internet>
* <http://blog.getjaco.com/jaco-labs-nodejs-docker-missing-manual/>
* <http://techblog.netflix.com/2016/08/vizceral-open-source.html>
* <https://readthedocs.org>
* <http://learning.blogs.nytimes.com/2015/11/12/skills-and-strategies-annotating-to-engage-analyze-connect-and-create/>
Now I just need to find the time to read them. :)

View File

@ -1,31 +0,0 @@
---
layout: post
title: Building a Highly-Available PostgreSQL Cluster on Azure
subtitle:
category: howto
tags: [cloud, automation]
author: esmaeil_sarabadani
author_email: esmaeil.sarabadani@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
The possibility to create a PostgreSQL cluster on your Azure subscription is now only a few clicks away. The use of the PostgreSQL database (a.k.a. the most advanced open-source database) has increased in different software development projects at Haufe, and setting it up as a cluster on Azure in an easy and convenient way (preferably as-a-service) has always been a wish for developers.
We would like to announce that with our new Azure template it is now possible to automate the creation of a highly-available PostgreSQL cluster on your Azure subscription.
It uses Ubuntu 14.04 LTS machines with 128 GB SSD data disks for high performance. A [Zookeeper] ensemble of three machines is used to orchestrate the behavior of the PostgreSQL cluster. For automated PostgreSQL server management and leader election the open source solution [Patroni] (developed by Zalando) is used and installed side by side with PostgreSQL 9.5 on the machines.
To use this template simply click [here] and you will be redirected to the Azure login page where you can log in and provide values for the following parameters:
- ClusterName: The name of the cluster to create. Avoid spaces and special characters in this name
- InstanceCount: The number of PostgreSQL servers to create. Minimum: 2, Maximum: 5
- AdminUsername: Name for user account with root privileges. Can be used to connect to the machines using ssh
- AdminPassword: Password for admin user account
After deployment, you can connect to clusterName.regionName.cloudapp.azure.com on the PostgreSQL default port 5432 using the username "admin" and the password you set as a parameter value in the template.
In order to connect to the individual PostgreSQL instances, use any SSH client on port 10110 for instance postgres0, 10111 for postgres1, and so on.
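As a quick sanity check after deployment, a simple connection against the cluster endpoint should succeed. Here is a sketch using node-postgres; the host name, database and environment variable are placeholders for your own values:

```typescript
// Connectivity check against the cluster endpoint; host, database and env variable are placeholders.
import { Client } from 'pg';

async function checkCluster(): Promise<void> {
  const client = new Client({
    host: 'myclustername.westeurope.cloudapp.azure.com', // clusterName.regionName.cloudapp.azure.com
    port: 5432,                                          // PostgreSQL default port, as described above
    user: 'admin',
    password: process.env.PG_ADMIN_PASSWORD,             // the AdminPassword you chose in the template
    database: 'postgres'
  });
  await client.connect();
  const result = await client.query('SELECT version()');
  console.log(result.rows[0]);
  await client.end();
}

checkCluster().catch(console.error);
```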
We hope this brings some joy and of course convenience on your journey to the cloud.
[Zookeeper]: <http://zookeeper.apache.org/>
[Patroni]: <https://github.com/zalando/patroni>
[here]: <https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Ftangibletransfer.blob.core.windows.net%2Fpublic%2Fpostgresha%2FPostgresHA.json>

View File

@ -1,54 +0,0 @@
---
layout: post
title: Open Tabs No 2
subtitle: This week in Open Tabs.
category: opinion
tags: [custdev, culture, cto]
author: holger_reinhardt
author_email: holger.reinhardt@haufe-lexware.com
header-img: "images/bg-post.alt.jpg"
---
[This week in Open Tabs](http://dev.haufe.com/meta/category/opinion/) is my weekly column to share links and commentary based on the list of my open (browser) tabs.
I finished reading [last week's Open Tab](http://dev.haufe.com/open-tabs-1/) and I would like to highlight Simon Wardley's [Introduction to Wardley maps](http://blog.gardeviance.org/2016/08/on-being-lost.html) as my absolute favourite. I have been deeply immersed within [Lean and Business Modeling](https://4launchd.wordpress.com/2013/08/14/lean-entrepreneurship-reading/) for the last couple of years, but I can see how [Value Chain Mapping using Wardley maps](http://blog.gardeviance.org/2015/02/an-introduction-to-wardley-value-chain.html) adds yet another perspective.
So the coveted first spot of my open browser tabs is yet another of his articles about [Other Tools I Use With Mapping](http://blog.gardeviance.org/2015/03/other-tools-i-use-with-mapping.html), which opens up some other interesting use cases on how to apply mapping to product, business and technology strategy. You might also be interested in checking out [Atlas](https://github.com/cdaniel/wardleymapstool), an open-source Wardley mapping tool (and yes, it [runs in Docker](https://github.com/cdaniel/wardleymapstool/wiki/Running-your-own-instance) too).
Regarding lean, you might have found yourself at the receiving end of one of my frequent rants about how a lot of folks in business love to use the term [Minimal Viable Product without bothering to understand what it means](https://www.quora.com/What-is-a-minimum-viable-product). Hint - it is NOT a [Minimal Marketable Product](http://www.romanpichler.com/blog/minimum-viable-product-and-minimal-marketable-product/). In that regard you might find the article on [Minimal Viable Problem](http://tynerblain.com/blog/2016/07/22/minimum-valuable-problem/) for product design interesting.
And since we are on the topic of 'listening to your customer' check out the article on [Using On-Site Customer Feedback Surveys To Get Inside Your Customer's Mind At The Point of Purchase](http://www.growandconvert.com/conversion-rate-optimization/customer-feedback-survey/).
And to top this list off, one of my favourite business strategy bloggers just published a new article on [The Evolution of Transportation-as-a-Service](https://stratechery.com/2016/google-uber-and-the-evolution-of-transportation-as-a-service/).
But enough of the business stuff, let's move on to my favorite field of API strategy. There were a couple of interesting links I stumbled upon last week:
* [Mike Amundsen's excellent talk on Hypermedia patterns in API design](http://amundsen.com/talks/2016-04-sacon-patterns/2016-04-sacon-patterns.pdf)
* [Internal API Design for Distributed Teams](https://www.lullabot.com/articles/internal-api-design-for-distributed-teams)
* [API Evangelist is keeping an open mind on GraphQL](http://apievangelist.com/2016/09/02/i-am-keeping-my-mind-open-and-looking-forward-to-learning-more-about-graphql/) (and yes, me too)
One of the key lessons I learned is that simply building an API is not going to be enough. You will need to evangelize its use (and that is true both for internal and external APIs). You will find a pretty good role description of a developer advocate in [What does a developer evangelist/advocate do?](https://www.christianheilmann.com/2016/08/29/what-does-a-developer-evangelistadvocate-do/).
Continuing with "big picture" topics, head over to O'Reilly and read up on [The critical role of system thinking in software development](https://www.oreilly.com/ideas/the-critical-role-of-systems-thinking-in-software-development).
Let's hop over to Devops and check out those links in my open tabs:
* [From DevOps to BizDevOps: Its All About the People](https://opencredo.com/key-takeaways-devops-enterprise-summit-2016-eu/)
* [Jenkins makes a UX splash with Blue Ocean](http://blog.alexellis.io/jenkins-splashes-with-blue-ocean/)
* [How To Build Docker Images Automatically With Jenkins Pipeline](http://blog.nimbleci.com/2016/08/31/how-to-build-docker-images-automatically-with-jenkins-pipeline/)
A fair number of open tabs point to projects I would like to explore:
* [Setting up my own instance of Gitbook](https://github.com/GitbookIO/gitbook)
* [A free to use web-based music making app](https://github.com/BlokDust/BlokDust)
* [How To Create a Calibre Ebook Server on Ubuntu 14.04](https://www.digitalocean.com/community/tutorials/how-to-create-a-calibre-ebook-server-on-ubuntu-14-04)
* [Install Docker 1.12 on the $9 C.H.I.P. computer](http://blog.hypriot.com/post/install-docker-on-chip-computer/)
On the topic of Docker I keep having to read up on the new load balancing features built into Docker 1.12.0: [Improved Options for Service Load Balancing in Docker 1.12.0](https://www.infoq.com/news/2016/08/docker-service-load-balancing). Did you know that Docker comes with an embedded DNS server which can be used to map aliases to container IP addresses (since 1.10)? And since version 1.11 it also supports round-robin DNS based load balancing. Well, version 1.12 might have some other goodies for you.
For the 'Internet of Things' (IoT) the circle closes back to Simon Wardley and his talk on being [In Search of Spime Script](http://blog.gardeviance.org/2012/02/in-search-of-spime-script.html), a talk inspired by an (out of print) book [Shaping Things](https://mitpress.mit.edu/books/shaping-things). Once again, I would highly recommend picking up [Makers](http://craphound.com/category/makers/) from Cory Doctorow, which is the underground manifesto on how 'Makers' might have a similar impact on our economic system as the steam engine had on feudal society. (Fun fact - [Edward Snowden was reading Cory's book 'Homeland'](http://craphound.com/homeland/2014/12/02/when-ed-snowden-met-marcus-yallow/) during the interview filmed for [Citizen Four](https://www.rottentomatoes.com/m/citizenfour/))
On a personal note: Like so many of my peers, I struggle to carve out enough uninterrupted time to work vs attending meetings. I found the article on [Maker Schedule vs Manager Schedule](http://www.paulgraham.stfi.re/makersschedule.html?sf=yrezkzg#aa) very enlightening.
This should cover it for this week. Plenty to read and catch up on. See you again next week.

View File

@ -1,47 +0,0 @@
---
layout: post
title: Open Tabs No 3
subtitle: This week in Open Tabs.
category: opinion
tags: [custdev, culture, cto]
author: holger_reinhardt
author_email: holger.reinhardt@haufe-lexware.com
header-img: "images/bg-post.alt.jpg"
---
[Open Tabs](http://dev.haufe.com/meta/category/opinion/) is my weekly column to share links and commentary based on the list of my open (browser) tabs.
Last week was slow since it is prime vacation time here in Southern Germany. It has been, in turn, a good time to step outside the daily email, telephone and meeting vortex and do something truly revolutionary - work hands-on with some of the technologies on my personal short list.
And hands-on work I did! I finally installed [Rancher](http://rancher.com) on my Digital Ocean cluster and put it to good use to bring up my application stacks. I have to say I am very impressed - I used to be a big [Tutum](https://blog.tutum.co) fan but Docker's pricing has decisively moved it outside of my hobby range. With Rancher I have finally found an easy-to-use Docker management tool, which is open source and can be installed locally. And in addition it uses native docker-compose file syntax for application deployment. What more could I wish for?
{:.center}
![Rancher - Docker Host]({{ site.url }}/images/open-tabs/ot-3-rancher-engine.png){:style="margin:auto"}
The experience has been so smooth that I am wondering if Rancher could have a place next to our CI/CD pipeline - not for automated deployments, but for ease of experimentation with various application stacks. I now run my own instances of [RocketChat](https://gist.github.com/hlgr360/d7f6ae9452f77c193fea81fc94e5c730), [ownCloud](https://gist.github.com/hlgr360/d8832ee7d02ca6fa4ab6be4857bac26d), [Calibre](https://gist.github.com/hlgr360/39ee1f7c45ec39cf4c4832df3219fb4e), and - yes - my very own [Minecraft](https://gist.github.com/hlgr360/c8cfc249de2e6f4548e9ad231051187f).
{:.center}
![Rancher - Deployed Stacks]({{ site.url }}/images/open-tabs/ot-3-rancher-stacks.png){:style="margin:auto"}
But rolling my own is not without risk - so the first entry of my Open Tabs is [Hacking Developers](http://bouk.co/blog/hacking-developers/). I am currently looking at securing my setup using SSL as described in [How To Secure HAProxy with Let's Encrypt](https://www.digitalocean.com/community/tutorials/how-to-secure-haproxy-with-let-s-encrypt-on-ubuntu-14-04) and Rancher's own [Load Balancer Service](http://docs.rancher.com/rancher/v1.1/zh/cattle/adding-load-balancers/). [Adding certificates](http://docs.rancher.com/rancher/v1.1/zh/environments/certificates/) to Rancher seems to be rather straightforward. Last but not least I am also looking into [Securing Container Orchestration](http://blogs.adobe.com/security/2016/08/security-considerations-for-container-orchestration.html).
Next up are [Lessons from Launching Billions of Containers](http://www.infoworld.com/article/3112875/application-development/lessons-from-launching-billions-of-docker-containers.html) from the folks at <http://iron.io>. And even though I used Rancher for launching my stacks manually, I also want to read up on [Tips for an effective Docker-based Workflow](https://www.oreilly.com/ideas/4-tips-for-an-effective-docker-based-workflow).
In the Devops corner I stumbled over a Thoughtworks article on [When to Automate and Why](https://www.thoughtworks.com/insights/blog/when-to-automate-and-why). I really like their concept of `ruthless automation`.
On the API and architecture side of the house I would like to read the new article from Netflix about the [Engineering Tradeoffs and the Netflix API Re-architecture](http://techblog.netflix.com/2016/08/engineering-trade-offs-and-netflix-api.html) and the corresponding blog entry from Apievangelist on [Netflix Public API Was The Most Successful API Failure Ever](http://apievangelist.com/2016/09/07/the-netflix-public-api-was-the-most-successful-api-failure-ever/).
Even though we keep talking about our Cloud journey, a large portion of our business continues to come from Desktop products - and based on my own experiences using the Apple App Store on my Macbook, I tend to view predictions of the desktop dying anytime soon very sceptically. This is why I am so excited about cross-OS desktop platforms like [Electron](http://electron.atom.io), which originated from Github's [Atom Editor](https://atom.io) project. This is definitely an area I would like to get more hands-on with. Since I have a fairly large number of eBooks which I manage with [Calibre](https://calibre-ebook.com), I was thinking to maybe try my hand at an [Open Publication Distribution System (OPDS)](http://opds-spec.org/about/) desktop client for digital libraries.
Which actually gets us back into the API story, since OPDS is nothing but an [Atom](http://www.ietf.org/rfc/rfc4287.txt) protocol tailored for digital publications. You can learn the story behind OPDS by reading the wonderful article on [How the New York Public Library made ebooks open, and thus one trillion times better](https://boingboing.net/2016/08/21/how-the-new-york-public-librar.html) which points to another wonderful article on [The Enterprise Media Distribution Platform At The End Of This Book](https://www.crummy.com/writing/speaking/2015-RESTFest/), and yes - Hypermedia as an inspiration for OPDS. In that latter presentation you will find a link to [Library Simplified](http://www.librarysimplified.org), an open source eBook library system with beautifully designed mobile clients.
For Internet of Things (IoT) I have currently two articles in my tabs: [Javascript in the Realm of IoT with NodeRed](https://blog.pusher.com/javascript-in-the-realm-of-iot-with-node-red/) and [Deploying an IoT Swarm with Docker Machine](http://blog.hypriot.com/post/deploy-swarm-on-chip-with-docker-machine/).
You probably have read about the Apple event last week. Stratechery's latest blog post looks at [Beyond the iPhone](https://stratechery.com/2016/beyond-the-iphone/). Which brings us to business strategy in general. I am currently reading Kotter's book [Accelerate](http://www.kotterinternational.com/book/accelerate/) which has some interesting parallels to the debate on Gartner's model of [Bimodal-IT](http://www.gartner.com/it-glossary/bimodal).
If you work a lot with eBooks like me, you probably end up with a large number of "highlights". To make those highlights usable in my normal workflow I currently go to my account on <http://kindle.amazon.com> and take a snapshot of the page in [Evernote](http://evernote.com). This makes it searchable. But I also haven't given up my idea of converting them (in markdown format) to beautiful mindmaps. You can find my "weekend" project [here](https://github.com/hlgr360/mindmap.js).
Last but not least I had a very interesting discussion over at the [API Academy](http://www.apiacademy.co) Slack channel about how the use of incentives can skew the results, as best seen in the recent news of [Wells Fargo employees opening fake accounts](https://twitter.com/ritholtz/status/774236789624205312). An interesting pointer in that discussion was to [Goodhart's Law](https://en.m.wikipedia.org/wiki/Goodhart%27s_law), which I had not heard of before. It states that `When a measure becomes a target, it ceases to be a good measure.`
This should cover it for this week. Plenty to read, think and catch up on. See you again next week.

View File

@ -1,136 +0,0 @@
---
layout: post
title: Introducing wicked.haufe.io
subtitle: Why we wrote our own Open Source API Management Stack based on Mashape Kong and node.js.
category: api
tags: [cto, open-source, api, devops]
author: martin_danielsson
author_email: martin.danielsson@haufe-lexware.com
header-img: "images/bg-post-clover.jpg"
---
As you will have noticed over the last year or so, we are currently working on making our company composable and flexible, and a main building block of that strategy is APIs as enablers. Tightly connected to APIs are the questions of how to document, promote and publish the APIs using suitable means, such as API Portals.
There are quite a few solutions for API Portals (most including the API Gateway and Analytics parts as one do-it-all API Management solution), such as (non-comprehensive list!) [Azure API Management](https://azure.microsoft.com/de-de/services/api-management/), [Apigee](https://apigee.com) (recently acquired by Google), [3scale](https://3scale.net) (recently acquired by Red Hat), [Mashape](https://mashape.com) or [CA API Management](http://www.ca.com/de/products/api-management.html).
So, why did we (in parts) roll our own? This blog post will try to shed some additional light on what led to this, in addition to what Holger already wrote about in his blog post on the current [state of our API strategy](/state-of-our-api-strategy).
### What is wicked.haufe.io and what features does it offer?
[Wicked is an API Portal/API Management](http://wicked.haufe.io) package built around the [API Gateway Kong](https://getkong.org) by Mashape. Kong itself is a "headless" component sporting only a REST style interface for configuration. Our API Portal makes using Kong a lot easier (we think), plus it provides the following features on top of the API Gateway Kong offers. Our claim is: **Wicked Good API Management** ;-)
{:.center}
![Wicked Logo](/images/introducing-wicked/wicked-256.png){:style="margin:auto"}
* **API Gateway**: Leveraging Mashape Kong, Wicked delivers a powerful API Gateway behind which you can secure your APIs
* **API Self Service**: Using the Portal, Developers can sign up for using your APIs themselves; they will be provisioned API Keys/OAuth Credentials they can use to access the APIs via the API Gateway (i.e., Kong)
* **API Documentation**: Inside the API Portal, you may document your APIs using OpenAPI Spec (aka "Swagger"), and this documentation is automatically served using a hosted Swagger UI distribution inside the API Portal
* **Additional Documentation**: In addition to OpenAPI Specs, you may add additional markdown or HTML content to the portal which is automatically served and secured (if desired)
A more extensive list of features can be found here: [wicked.haufe.io/features](http://wicked.haufe.io/features.html).
To illustrate what Wicked does in more detail, please have a look at the following picture:
{:.center}
![Wicked Usage](/images/introducing-wicked/application-usage.png){:style="margin:auto"}
The main use case of the API Portal goes as follows:
1. The developer is currently developing an application, for which he needs access to a specific API
2. The dev goes to the API Portal (Wicked) and browses the API documentation until he finds the API he needs
3. To use the API, the developer registers his application with the API Portal and signs up for the application to use the API
4. The API Portal will provide the developer with API credentials (OAuth Client ID/Secret or a plain API Key, depending on the API)
5. The developer incorporates the credentials into his application and subsequently uses the API
The operator of the API and API Gateway can thus be sure that nobody unknown to the API Gateway is able to use the API.
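As an illustration of step 5, assuming the API is protected with Kong's key-auth plugin (which by default reads an `apikey` header) and using a placeholder gateway host and path:

```typescript
// Hypothetical client call through the API Gateway with the key provisioned by the Portal.
// Host, path and environment variable are placeholders; other auth plugins (e.g. OAuth 2.0
// Client Credentials) would use an Authorization header instead.
async function fetchThroughGateway(): Promise<unknown> {
  const response = await fetch('https://api.example.com/some-api/v1/resources', {
    headers: { apikey: process.env.API_KEY ?? '' }
  });
  if (!response.ok) {
    throw new Error(`Gateway returned HTTP ${response.status}`);
  }
  return response.json();
}
```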
### What kinds of problems does Wicked solve?
The most compelling "feature" of Wicked, though, is not what the software can do, but rather how it can be deployed. With most other API Management solutions we struggled to make them fit inside our Tech Strategy, mostly regarding the following topics, which we regard as very important:
* **DevOps Support**: Can we deploy API Management like any other application, i.e. using CI/CD pipelines (Jenkins, Travis, GoCD,...)? Preferably - if needed - including infrastructure as code (Phoenix Deployments), and/or using [Blue/Green Deployment techniques](https://github.com/Haufe-Lexware/wicked.haufe.io/blob/master/doc/continuous-deployment.md).
* **Configuration as Code**: Can we store the entire configuration of the API Management solution inside source control? This was in many cases a main stopper for adopting other solutions; either extracting/deploying configuration was not simple, or only partially possible.
* **Docker Support**: As a rule, we want to be able to run everything in containers, i.e. using Docker. This we accepted as the only restriction on the runtime environment; supporting Docker means we are free to deploy to any infrastructure supporting Docker hosts, including *our own premises*.
The rest of the things Wicked "solves" are the normal use cases solved by most any API Gateway, and, as already pointed out, this is done by leveraging the already existing Kong API Gateway (we mentioned we really like Kong, right?).
By really enforcing everything to be in "code", e.g. in a git repository, it is possible to completely adapt the deployment model of your API Management system to the way you work, and not the other way around. You deploy from configuration-as-code (from your code repository), and this means you are free to version your code as you like it, or, as it suits your needs best.
The documentation of Wicked contains a more thorough discussion of possible [configuration versioning strategies](https://github.com/Haufe-Lexware/wicked.haufe.io/blob/master/doc/versioning-strategies.md).
Another thing which was a main reason driving the development of our own API Portal in combination with Kong was that we want to enable individual teams to deploy and run their own instances of API Management Systems. We explicitly **did not want to centralize** API Management, and this also turned out to be a real cost driver for commercial solutions (see also below). In our opinion and according to our tech strategy, your API and thus also API Management belongs to the teams implementing and running services (in the DevOps sense). Cutting off the operation at a centralized API Management hub leads in the wrong direction. Instead we want a decentralized approach, which made it even more pressing to be able to deploy API Management in a more or less infrastructure-agnostic way (hence Docker).
### What alternatives are available we considered using instead?
As APIs are gaining traction, there are also many contenders on the market. We looked at quite a few of those, and some of them we are still using for specific use cases (i.e. Azure API Management for APIs deployed to Azure). The following list (not including all alternatives, obviously) gives one or two short reasons why they weren't considered fit to implement our Tech Strategy:
* **3scale API Management**: 3scale has a magnificent cloud solution for API Management, but it is a SaaS solution, which means you're always running one or more parts of the API Management on 3scales premises. This is by no means bad, but in some cases our data is such that we aren't allowed to do that for regulatory reasons. Additionally, as a default, 3scale sees itself as a centralized API Hub, which is not what we wanted. If you want to get the "good features" and flexible deployments, things will quickly get costly.
* **Azure API Management**: Azure APIm is also a SaaS-only solution, and a quite potent one as well. Considering the Azure DE offerings, Azure APIm is still a valid approach for our company, but there are still some drawbacks which we do not particularly like: DevOps features are present, but not complete (additional documentation cannot be added/deleted using REST interfaces, for example), and setting up the API Gateway itself is not completely automatable: it is intended to be long running, not to be deployed anew on changes.
* **CA API Management**: For the CA API Management solution, we quickly ran into cost bounds and problems regarding how we want to be able to deploy things (for each team individually). In short: it was far too expensive. Running on your own premises is not a problem, though; deploying into Docker was (at that time at least). The cost aspects and the more traditional licensing offers we received made us not even look much further into the product (which itself looks very good though).
* **AWS API Gateway**: Another SaaS-only offering; if you are on AWS, this may be very interesting, as it's by all means automatable and configurable from the outside, but it has quite a strong lock-in to AWS (not surprisingly). For authentication, it resorts either to very basic API keys or to AWS IAM, which may be fine if you are already on AWS. Otherwise it's rather complicated. And: it does not (yet) have a Developer Portal, at least not of the kind we wanted to have.
We also evaluated a couple of other open source solutions, such as API Umbrella and Tyk.io.
* **API Umbrella**: API Umbrella looked really promising at first, but when working more with it, we did not quite like how it was built up; it is also intended to be long running, and the deployment strategies did not match our tech strategy. We managed to run it in Docker, but we weren't able to split up the installation into the different components as we wanted. In addition to this, API Umbrella (half a year ago) was in the middle of a major rewrite from node.js to Lua.
* **Tyk.io**: Tyk.io is also a very promising product, and in the (commercial) version 2.0 even more so. The version we evaluated before we decided to go for our own portal was the 1.x version, and there we also encountered the "usual" problems regarding how to configure and deploy the instances. The operating model of Tyk needs Tyk to be long-running, which was a main no-go here.
**Conclusion**: Main show stoppers were deployment/operation topics, cost aspects and the lack of on premise support.
### What kinds of technologies and products does it build upon?
When we built Wicked, we deliberately picked one of the "newer" languages/frameworks to get some hands-on experience with it as well; in this case the API Portal is built entirely using node.js, which turned out to be extremely productive for this kind of application. We'll look in some more detail at the deployment architecture:
{:.center}
![Deployment Architecture](/images/introducing-wicked/architecture-components.png){:style="margin:auto"}
Each box in this diagram depicts (at least) one Docker container, so this is the first bullet point on the list:
* **Docker**: All Wicked components run (or can run) in a Docker container. This ensures you are able to deploy onto almost any kind of infrastructure (Azure, AWS or your own premises), as long as you can provide a Docker host (or Swarm) to run on.
The other components are built as follows:
* **HAProxy**: In front of both the Portal and the Gateway sits a Docker HAProxy container which distributes and load balances the incoming requests; this component is using the official `docker-haproxy` implementation which also Docker Swarm is using.
* **Portal Components**: All Portal components (the UI/the actual portal parts) are implemented using node.js, more specifically using (among others) the following standard frameworks:
* Express
* Jade/Pug for HTML templating
* **Kong**: The API Gateway is a plain vanilla Mashape Kong docker image. We did not have to make any changes at all to the Kong source code; we are really using Kong "as-is", which was what we had hoped for, to make upgrading scenarios as simple as possible
* **PostgreSQL**: Likewise, we're using a standard PostgreSQL docker image without any kinds of changes (currently version 9.4). The PostgreSQL instance is needed by Kong to store runtime data (e.g. for rate limiting) and configuration data; please note that we *never* talk directly to the database, but only to the Kong REST interface.
We are deliberately **not using** any database for storing the configuration or runtime data. This is saved in plain JSON files (encrypted where applicable) as data-only docker containers for the API Portal API Container. This makes deploying, extracting and restoring configuration extremely simple, once more taking our deployment tech strategy into account.
### Why did we decide to offer wicked.haufe.io as open source?
We decided early on that our API Portal was going to be open source. This has various reasons, of which I will state a few:
* We are standing on the shoulders of many other open source projects, such as node.js, Express and first and foremost on Mashape Kong (which in turns stands on NGINX and Lua); we feel obliged to give back something for the effort everybody else has put in to the ground work
* API Management software is quite obviously not the core business of Haufe-Lexware (we're a media and publishing company), and thus an API Management Solution will be quite difficult to sell and/or put into any kind of portfolio
* We hope to gain a little attention in the API community by also pitching in our work on what we think is a promising approach
* Hopefully, we will be able to attract other developers also interested in "APIOps", so that we can really make Wicked into a great go-to solution in terms of Open Source API Management.
### What is on the roadmap for future releases?
Whereas Wicked (at version 0.9.1) is already in a very usable state, there are still things on our plate which we will try to address over the next couple of weeks and months, among which are the following topics:
* Currently, Wicked only supports machine-to-machine authentication (using API Keys or the OAuth 2.0 Client Credentials flow); one main research topic will be how to integrate Kong/Wicked with our existing SAML user authentication, based on OpenAM. Additionally, leveraging Kong's support for the other OAuth 2.0 Flows (such as the Authorization Flow) will be looked at.
* Further integration testing suites, especially for checking Kong upgrade compatibilities need to be implemented to further gain trust in the build and deployment automation.
* Tagging, Search support for APIs and Documentation
* Some Social Component for the Portal, such as Feedback forms and optionally Github Issue integration
* Better support and documentation of the logging features (both of Kong and the Portal)
There are many other major and minor ideas flying around, and in the course of the next couple of days we will add Github issues for the things we already know of, so that we can start a discussion and find good solutions to any problems coming up.
### How can you get involved?
As already stated: Wicked is totally open source, and you are perfectly free to participate in developing it, or even just in giving feedback on what you would like to see in it. We have published the source code under the very permissive Apache 2.0 license.
We are currently building/finishing the first version of the documentation, which includes instructions on how to build the API Portal on your local machine, so that you can get started quickly. A good starting point for reading up on technical details is the main Github page: [github.com/Haufe-Lexware/wicked.haufe.io](https://github.com/Haufe-Lexware/wicked.haufe.io) or the [documentation index page](https://github.com/Haufe-Lexware/wicked.haufe.io/blob/master/doc/index.md). There you will also find further information on how to get involved.
We do hope you like what we have to offer and consider having a peek and test drive of [wicked.haufe.io](http://wicked.haufe.io).
Cheers, Martin
### Links
* [wicked.haufe.io](http://wicked.haufe.io) - The wicked.haufe.io micro site
* [github/wicked.haufe.io](https://github.com/Haufe-Lexware/wicked.haufe.io) - The main GitHub repository for Wicked, containing all the documentation and further links to the other components
* [github/wicked.haufe.io/doc](https://github.com/Haufe-Lexware/wicked.haufe.io/blob/master/doc/index.md) - The documentation index for Wicked.

View File

@ -1,34 +0,0 @@
---
layout: post
title: Open Tabs No 4
subtitle: On Microservice Benefits, API Design and Offline-first Mobile Apps.
category: opinion
tags: [devops, culture, cto]
author: holger_reinhardt
author_email: holger.reinhardt@haufe-lexware.com
header-img: "images/bg-post.alt.jpg"
---
Here is the latest from [Open Tabs](http://dev.haufe.com/meta/category/opinion/) - my weekly column with links and commentary on my browser tabs.
The recently published paper on [The Hidden Dividends of Microservices](http://queue.acm.org/detail.cfm?id=2956643) caught my attention. It inspired me to open an issue against our tech strategy to incorporate some of its conclusions. And it is only fitting that [The Five Principles of Monitoring Microservices](http://thenewstack.io/five-principles-monitoring-microservices/) is open right next to it.
You might have seen our [Haufe API Styleguide](https://github.com/Haufe-Lexware/api-style-guide/blob/master/readme.md). We were honored to be included in the new [API Stylebook](http://apistylebook.com/design/guidelines/) which was published last week. I still want to read up on [The New API Design And Deployment Solution Materia](http://apievangelist.com/2016/09/12/the-new-api-design-and-deployment-solution-materia-is-pretty-slick/) and the [Restful API Versioning Insights](https://dzone.com/articles/restful-api-versioning-insights-1).
Last week's post talked about me playing with [Rancher](http://rancher.com) as a container management solution. After finishing my private setup on Digital Ocean I am now replicating part of it on AWS. My goal here is a complete CI/CD environment including [Building docker images with Jenkins](http://blog.nimbleci.com/2016/08/31/how-to-build-docker-images-automatically-with-jenkins-pipeline/) and a private [Docker Registry](https://docs.docker.com/registry/deploying/). In order to make the latter accessible from beyond 'localhost' I need to set it up with TLS. While the registry appears to support [Let's Encrypt](https://letsencrypt.org/docs/) out of the box, I nevertheless started researching projects providing a containerized and automated SSL termination proxy: [Dead-simple HTTPS Set up with Docker and Let's Encrypt](http://steveltn.me/2015/12/18/nginx-acme/) pointing to <https://github.com/steveltn/https-portal> and [Docker Registry 2.0 proxy with SSL and authentication](https://github.com/ContainerSolutions/docker-registry-proxy).
On a somewhat related note - the following initiative at the intersection of Container and DevOps caught my eye: [Label Schema: A New Standard Approach to Container Metadata](http://thenewstack.io/label-schema-launches-provide-standard-approach-container-metadata/) pointing to [Label Schema Specification DRAFT (RC1)](http://label-schema.org/rc1/).
In this week's IoT corner we have [Build your own robotic arm out of cardboard](https://blog.arduino.cc/2016/09/14/build-your-own-robotic-arm-out-of-cardboard/) and [Add Motion to Your Project](http://thenewstack.io/off-shelf-hacker-add-motion-project/).
And I finally came across someone who seems to be as passionate about 'Offline-first' in mobile apps as I am - Check out his post at [Build More Reliable Web Apps with Offline-First Principles](http://thenewstack.io/build-better-customer-experience-applications-using-offline-first-principles/).
Node.js is hardly an emerging technology anymore, but it is sometimes worth remembering how it all began with [Ryan Dahl: Original Node.js presentation](https://www.youtube.com/watch?v=ztspvPYybIY).
[Build more and Manage less With the Serverless Framework](https://serverless.com) on the other hand is cutting edge and definitely worth keeping an eye on. A bit more meta yet equally important is [The need for algorithmic accountability](https://techcrunch.com/2016/09/08/the-need-for-algorithmic-accountability/) for the software industry in general.
Two more links on cultural topics: I definitely recommend checking out the
[Open Innovation Toolkit by Mozilla](https://toolkit.mozilla.org/methods/) and [How WD-40 Created a Learning-Obsessed Company Culture](https://hbr.org/2016/09/how-wd-40-created-a-learning-obsessed-company-culture).
This should cover it for this week. Plenty to read, think and catch up on. See you again next week.

View File

@ -1,30 +0,0 @@
---
layout: post
title: Azure Active Directory and Authentication the Cloud Way
subtitle:
category: howto
tags: [cloud, automation]
author: esmaeil_sarabadani
author_email: esmaeil.sarabadani@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
Authenticating our users securely to our applications at Haufe has always been important for us, and these days, with more and more cloud-based apps, it is essential to be able to provide an authentication method in the cloud as a service. That is exactly where Azure Active Directory comes into play.
Azure AD provides identity as a service and supports industry-standard protocols such as OAuth 2.0, OpenID Connect, WS-Federation, or SAML 2.0. It uses public key cryptography to sign tokens and to ensure their validity. Azure AD issues security tokens which include information about the authenticated user/subject and their authorizations. These tokens are then used by applications to allow access for different tasks. [Here] you can find more information about the information included in a token.
One of the questions I am often asked is whether Azure AD is only suitable for internally used applications to which our internal users need to authenticate.
The answer is clearly no. You can create multiple directories and use them for different applications. Of course our corporate Active Directory is one of them and is in constant synchronization with our on-premise AD database. It is also possible to design a multi-tenant application (in terms of authentication) that can authenticate against different Azure AD directories.
To be able to use Azure AD you need to register your application in the target directory (or directories). To register the application, Azure requires the following information to be able to communicate with it:
- Application ID URI: The application identifier.
- Reply URL and Redirect URI: The location which Azure AD sends the authentication response to.
- Client ID: Application ID generated by Azure AD
- Key: Generated by Azure AD
You are then even able to set custom permissions to allow the application to access directory data and that is pretty much it.
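To make this a bit more concrete, here is a minimal sketch of a client credentials (app-to-app) token request using the registration values above. It assumes the classic Azure AD token endpoint; all identifiers below are placeholders, not real values from our directories:

```typescript
// Sketch only: requesting a token with the values from the app registration.
// Tenant, Client ID, Key and Application ID URI below are placeholders.
const tenant = "contoso.onmicrosoft.com";
const clientId = "11111111-2222-3333-4444-555555555555";   // Client ID from Azure AD
const clientSecret = process.env.AAD_KEY ?? "";             // Key generated by Azure AD
const resource = "https://myapi.example.com";               // Application ID URI of the target app

async function getAccessToken(): Promise<string> {
  const response = await fetch(`https://login.microsoftonline.com/${tenant}/oauth2/token`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: clientId,
      client_secret: clientSecret,
      resource,
    }),
  });
  if (!response.ok) {
    throw new Error(`Token request failed with status ${response.status}`);
  }
  const payload = await response.json();
  return payload.access_token; // the signed security token described above
}
```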
I personally believe Azure AD is a very convenient way to authenticate our users to our applications. For any questions please do not hesitate to contact me.
[here]: <https://azure.microsoft.com/en-us/documentation/articles/active-directory-token-and-claims/>

View File

@ -1,51 +0,0 @@
---
layout: post
title: Open Tabs No 5
subtitle: On Innovation, Emerging Technology and a Hitchhikers Guide to APIs.
category: opinion
tags: [api, culture, cto]
author: holger_reinhardt
author_email: holger.reinhardt@haufe-lexware.com
header-img: "images/bg-post.alt.jpg"
---
Here is the latest from [Open Tabs](http://dev.haufe.com/meta/category/opinion/) - my weekly column with links and commentary on my browser tabs.
One week overdue and from the road, this week's edition will have less commentary than usual.
#### Innovation
I presented an overview of [Corporate Innovation](http://www.slideshare.net/HolgerReinhardt/blue-ocean-corporate-innovation) methodology at the BWCon Blue Ocean Meetup. Preparing for my presentation I (re)discovered a very good article from Steve Blank on [Why Internal Ventures are Different from External Startups](https://steveblank.com/2014/03/26/why-internal-ventures-are-different-from-external-startups/).
#### Emerging Technology
Serverless is all the rage as cloud vendors like AWS try to prevent Docker from commoditizing their platforms. Check out [Serverless Computing & Machine Learning](https://blog.alexcasalboni.com/serverless-computing-machine-learning-baf52b89e1b0#.68s3z3gpb) if that is of interest to you. And somewhat related: [With one click you'll have a sandboxed JavaScript environment where you can instantly switch node versions, use every npm module without having to wait to install it, and even visualize your results.](https://runkit.com/home)
I keep a close watch on blockchains since I truly believe in their disruptive potential. So I took note that [Microsoft delivered version 1 of 'Bletchley' Azure blockchain as a service middleware](http://www.zdnet.com/article/microsoft-delivers-version-1-of-bletchley-azure-blockchain-as-a-service-middleware/).
#### Development
You have kids (like me) and like gaming (like me)? How about [Teaching Unity to non-programmers](https://blogs.unity3d.com/2016/09/20/teaching-unity-to-non-programmers-playground-project/). And yes, not strictly development, but you have to admit that the [RPi](https://www.raspberrypi.org) is a great way to get 'code where code has never gone before': [Turn Your Raspberry Pi into Out-of-band Monitoring Device using Docker](http://collabnix.com/archives/1885) and [Build your PiZero Swarm with OTG networking](http://blog.alexellis.io/pizero-otg-swarm/?).
#### Frontend
Check out [How Twitter deploys its webpage widgets](https://blog.twitter.com/2016/how-twitter-deploys-its-widgets-javascript).
#### Product & Marketing
If you are not content building products for the customers you already have, check out [The Power of Designing Products for Customers You Don't Have Yet](https://hbr.org/2016/08/the-power-of-designing-products-for-customers-you-dont-have-yet). And yes - [The Presentation of Your Value Proposition Matters](http://conversionxl.com/research-study/value-proposition-study/).
#### Data
How about [Five Ways To Create Engaging Data-Driven Stories](http://buzzsumo.com/blog/how-to-write-data-driven-stories-5-core-narratives/)? And if you are like me constantly being amazed what people do with [Github](http://github.com), check out [How We Turned Our GitHub README Model Into a Microservice](http://blog.algorithmia.com/how-we-hosted-our-model-as-a-microservice/).
#### Culture
Yes, [How much communication is too much?](https://blog.intercom.com/qa-how-much-communication-is-too-much/) is primarily concerned about marketing, but I am looking for patterns on how to fine tune my own communication to my team and the company at large. And talking of patterns: [A personal view on the 800+ results of a tech due diligence survey to early stage startups](https://medium.com/point-nine-news/12-observations-from-a-tech-due-diligence-survey-8fe32f650b50#.x7cq2fuof).
#### API
'Thank you for the fish' - I am looking forward to reading the [Hitchhikers Guide to Twilio Programmable Voice](https://www.twilio.com/blog/2016/09/hitchhikers-guide-to-twilio-programmable-voice.html) as well as [GraphQL Subscriptions in Apollo Client - Experimental web socket system for near-realtime updates](https://medium.com/apollo-stack/graphql-subscriptions-in-apollo-client-9a2457f015fb#.tsqinhn4i).
Over in the API developer ecosystem corner we find a good writeup on how [Virtual Assistants Harness Third Party Developer Power](http://nordicapis.com/virtual-assistants-harness-third-party-developer-power/).
The folks at [NordicAPIs](http://nordicapis.com) keep publishing amazing content, like how to [Decouple User Identity from API Design to Build Scalable Microservices](http://nordicapis.com/decouple-user-identity-from-api-design-to-build-scalable-microservices/).
And the [@apievangelist](https://twitter.com/apievangelist) scores two tabs this week: [My Forkable Minimum API Portal Definition](http://apievangelist.com/2016/09/19/my-forkable-minimum-api-portal-definition/) and [Providing YAML driven XML, JSON, and Atom using Jekyll And Github](http://apievangelist.com/2016/09/19/providing-yaml-driven-xml-json-and-atom-using-jekyll-and-github/).
Last but not least - an article on [A Twilio Process To Emulate Within Your Own API Operations](http://apievangelist.com/2016/09/19/a-twilio-process-to-emulate-within-your-own-api-operations/).
This should cover it for this week. Plenty to read, think and catch up on. See you again next week.

View File

@ -1,81 +0,0 @@
---
layout: post
title: Project - B#1
subtitle: Insights from the Freiburg Hackathon - New online Queue Management in Bürgeramt
category: product
tags: [Mobile, Open Source]
author: anja_kienzler
author_email: anja.kienzler@haufe-lexware.com
header-img: "images/bg-post.alt.jpg"
---
A Hackathon is a kind of software prototype development marathon. This year the second Freiburg hackathon took place. The target was to develop an application for “Newcomers” to the city of Freiburg in 48 hours (Friday evening to Sunday morning). We accepted this challenge with the idea to create a less stressful way to manage the necessary administrative procedures at the “Bürgeramt Freiburg” (city government administrative services) for foreigners and employees alike, with the possibility to extend the system to other official city departments at a later point.
![Hackathon Logo]({{ site.url }}/images/FR-hackathon-2016/2016_09_08_11_22_05_Hackathon_2016.png){:style="margin:auto"}
### The technologies
Among newcomers, smartphones are more common than personal computers, so we decided to build a smartphone app instead of a website, because the solution required personal user settings and an offline mode, which we shall address below. The project goals we defined were:
1. An online function for receiving a number (ticket) for a queue at one of the city offices. The application should also provide users with information about current waiting times so they are on time for their appointments. If you use this app, you should feel like you are the first in line upon your arrival.
2. Retrieval of digital forms in various languages, including a checklist of the required documents and forms needed for “Bürgeramt” appointments. This ensures that you do not have to come a second time because you forgot something. This feature also includes a database of translated forms that are normally only available in German. Even if they have to fill out the German forms, newcomers may see the questions in a language they are familiar with.
3. An App-UX that is easy to understand and comfortable to use even if the user is not always online.
Due to the short timeframe, our team had to use a technology the whole team already knew, so we decided at the very beginning to develop in C#.
As platform options, we had Xamarin and the Universal Windows Platform (Universal App), and we decided to use Universal App because the team was more familiar with this technology.
### Technical approach
The project was then split into the subprojects UX-concept, Data Layer, Office and Queue Management, QRCode Scanning, and Document Storage. One developer was responsible for one subproject. Several times during the Hackathon, developers switched subprojects.
### UX-concept
Working on the UX-concept and on the interface design started some days before the Hackathon. This allowed us to complete our UX-concept shortly after the event started so the developers could get the most out of the short timeframe of the hackathon.
The aim of the UX-concept was to make an interface that is easy to understand - the exact opposite of German bureaucracy. The user should always be guided to the best action, moving through the application in a few simple steps and reaching the desired goal quickly and easily.
To achieve the UX-concept, it was important to start from the user's point of view by reorganizing and resequencing the existing appointment categories of the “Bürgeramt” to meet newcomer needs.
The design is friendly and welcoming and supports easy navigation by using flat buttons and a clear menu prompt. Because of the app's name, B#1, and its characteristic behavior (diligent and useful), the mascot is a friendly bee.
### Data layer
While the design was being finalized, the developers started on the data layer. This subproject is the major backend component, and the data layer holds all user data, offices, lines, advices (representations of Bürgeramt appointments) and requirements - like required documents - for these advices. Because advices are built in a tree structure we built in a self-reference from advice to advice.
![Hackathon class model]({{ site.url }}/images/FR-hackathon-2016/2017-09-09_HackathonClassDiagram.png){:style="margin:auto"}
A simple XML solution for the data, stored directly on the phone, was our first choice to get the version of this main component running quickly. This was important so other developers could start using data for our app very early.
The next step was to connect via WebService to a server and to update this data. The information should always stay on the phone to enable offline browsing of the data. Only the functions of receiving a line number, checking current waiting information and downloading documents should require an internet connection. This was clearly defined because newcomers often are dependent on Wi-Fi which is generally not available.
The offices' addresses and required data were taken from the city of Freiburg homepage. For the demonstration, we implemented a test data generator to create the rest of the data (number of people currently waiting, next number in line and so on).
### Queue management
For each possible type of appointment, we set a fixed waiting time; in a later version, we will improve this by adding self-learning. For each office, we defined one FIFO (First in, First out) queue filled with waiting tickets.
By using the queue information, it was possible to calculate the current waiting time and the next ticket number.
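As a rough illustration (the hackathon code was C#; this sketch uses TypeScript, and the advice types and time slots are invented), the queue logic boiled down to something like this:

```typescript
// Sketch of the FIFO queue per office with a fixed time slot per appointment type.
interface Ticket { number: number; adviceType: string; }

const fixedMinutesPerAdvice: Record<string, number> = { registration: 15, passport: 10 };

class OfficeQueue {
  private waiting: Ticket[] = [];   // First in, First out
  private nextNumber = 1;

  draw(adviceType: string): Ticket {
    const ticket = { number: this.nextNumber++, adviceType };
    this.waiting.push(ticket);
    return ticket;
  }

  // Current waiting time = sum of the fixed slots of everyone still in line.
  estimatedWaitMinutes(): number {
    return this.waiting.reduce((sum, t) => sum + (fixedMinutesPerAdvice[t.adviceType] ?? 10), 0);
  }

  callNext(): Ticket | undefined {
    return this.waiting.shift();
  }
}
```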
#### QRCode scanning
To realize this function, we included the ZXing.Net component NuGet package. The usage was very straightforward; the only thing that was a little bit tricky was improving the focusing of the phone camera. But in the end we figured it out.
In the QRCodes, we embedded the IDs that are associated with links to specific documents stored in our data layer. So for "advice requirements", we were able to use the same ID and link for QR Codes and associated documents and to avoid storing duplicate data.
The ID made it possible to change the storage locations for documents without needing to reprint the QR Codes.
### Document storage
Document storage was a nice-to-have feature, which is why we started working on it last. We settled on a very quick solution: we only stored IDs (used for QR Codes) along with HTTP links to the documents, so we could use cloud storage to address the documents.
Because we did not know at that point whether centralized storage or individual storage per office should be used, the document storage solution works with both local and remote storage.
### Conclusion
Getting a project done in two days is not an easy task. It requires great effort and teamwork. Building a team consisting of members from different Haufe Group departments is also not easy - especially considering that each member is busy with his/her own projects.
In the end, we completed the project and it was possible to produce a prototype within 48 hours. For this kind of a project, our takeaway is that you have to concentrate on the core functions and use as much external code as possible (Nuget, OpenSource Libraries etc.). Another takeaway was that it is good to have a finished design concept at the beginning of the project. The completed UX-concept helped guide our coding.
Freiburg Hackathon was a good challenge to learn new things and test our skills in a “new project environment”. We all liked the concept of the hackathon.
Finally, we believe our idea is a good concept that will become a really nice, working product. Because of the short timeframe there is still a lot of work to do, turning many of the functions that are right now “dummies” into full-fledged features.

View File

@ -1,48 +0,0 @@
---
layout: post
title: Open Tabs No 6
subtitle: On Makers, Microservices, and Balanced Teams.
category: opinion
tags: [api, culture, cto, devops]
author: holger_reinhardt
author_email: holger.reinhardt@haufe-lexware.com
header-img: "images/bg-post.alt.jpg"
---
Here is the latest from [Open Tabs](http://dev.haufe.com/meta/category/opinion/) - my weekly column with links and commentary on my browser tabs.
#### Innovation
Let's start this week's edition with a survey about [PropTech-Startups und Digitalizierung in der Immobilienwirtschaft](http://www.zia-deutschland.de/pressemeldung/studie-von-zia-und-ey-proptech-startups-und-grownups-entern-immobilienwirtschaft/) (sorry, German only). If you are more inclined towards the latest fashion in wearables, check out the latest in
[SnapChat Spectacles and the Future of Wearables](https://stratechery.com/2016/snapchat-spectacles-and-the-future-of-wearables/).
#### Emerging Technology
Machine Learning is an area we are actively investigating - here is a somewhat dated but hopefully still relevant survey on [What are the most powerful open source sentiment analysis tools?](https://breakthroughanalysis.com/2012/01/08/what-are-the-most-powerful-open-source-sentiment-analysis-tools/). There has been literally an explosion of voice-enabled interfaces - and some interesting thoughts from the CEO of IFTTT can be found in
[Why voice is the catalyst for compatibility](https://medium.com/startup-grind/why-voice-is-the-catalyst-for-compatibility-bec7cc7e5d57#.p2cnovhdl). And obviously no week goes by without one more blockchain-based service showing up on my radar - this week's featured blip is [Pikcio - Blockchain-based messaging and transaction platform](https://www.matchupbox.com). And for an interesting new entry under the serverless topic, check out [webtask.io](https://webtask.io/docs/how).
#### Devops
It might as well have been posted under Microservices or Container, but [Automating the continuous deployment pipeline with containerized microservices](http://public.ludekvesely.cz/the-devops-2-toolkit.pdf) is broader than just Containers or Microservices. Something similar could be said about [API First Transformation at Etsy Operations](https://codeascraft.com/2016/09/26/api-first-transformation-at-etsy-operations/), but again API here is just a means to a cultural shift in operations.
#### Microservices
One of the most vexing questions in Microservice Architecture is how to find bounded contexts. I owe Daniel from OpenCredo the link to [Code as a Crime Scene](http://www.adamtornhill.com/articles/crimescene/codeascrimescene.htm). My friend and former colleague Irakli also keeps exploring it in [Microservices: Rule of Twos](http://www.freshblurbs.com/blog/2016/10/09/microservicies-rule-of-twos.html). And even though I have mentioned it already in [my report from QCon 16 in NYC](http://dev.haufe.com/qcon-ny-summary/#think-before-you-tool), you really should [Meet Zipkin: A Tracer for Debugging Microservices](http://thenewstack.io/meet-zipkin-tracer-debugging-microservices/).
#### Maker
For those of you who are not content in building things with bits and bytes, check out [The complete 3D guide to joinery](https://twitter.com/TheJoinery_jp).
#### Frontend
On the topic of conversational interfaces, this weeks frontend link goes to [Amazing Chat Interface Inspiration](https://medium.muz.li/amazing-chat-interface-inspiration-9ce35222b93a#.mti7whgp5).
#### Product & Marketing
In the product section, I definitely plan to read [On Writing Product Specs](https://goberoi.com/on-writing-product-specs-5ca697b992fd#.q706rrtke) as well as [Drive development with budgets not estimates](https://signalvnoise.com/posts/3746-drive-development-with-budgets-not-estimates). And for hiring the new Product Owner for our Foundational Services team I would like to brush up on my interview skills with [The Ultimate Guide to Product Manager Interview Questions](http://www.venturegrit.com/how-to-interview-a-product-manager-the-ultimate-guide/).
#### Container
Always a popular topic for containers in production: [Assessing the current state of container security](http://thenewstack.io/assessing-the-state-current-container-security/)
#### Culture
Here is a somewhat more controversial article on [Agile in management and leadership](http://alistair.cockburn.us/Agile+in+management+and+leadership). The principle which caught my eye was the first one. The one I struggle with is number 3: who am I to know who the right team members are for any given situation? Instead I prefer to take [a lesson from Buffer (again from QCon)](http://dev.haufe.com/qcon-ny-summary/#learnings-from-a-culture-first-startup) and accept the responsibility to create a balanced team.
#### API
[Atlassian joins Open API Initiative, open sources RADAR doc generator](https://developer.atlassian.com/blog/2016/05/open-api-initiative/) is the first link under the API heading. GraphQL continues to occupy significant tab space in my browser: [5 Potential Benefits of Integrating GraphQL](http://nordicapis.com/5-potential-benefits-integrating-graphql/) and
[GitHub Dumps REST Calls for Facebook's GraphQL](http://thenewstack.io/github-dumps-rest-graphql-api/). It is also always a good sport to keep an eye on our competition: [Introducing Postman for the QuickBooks Online API](https://developer.intuit.com/hub/blog/2016/09/19/introducing-postman-quickbooks-online-api). APIEvangelist is at it again with [Github Needs Client OAuth Proxy For More Complete Client-Side Apps On Pages](http://apievangelist.com/2016/09/27/github-needs-client-oauth-proxy-for-more-complete-clientside-apps-on-pages/).
This should cover it for this week. Plenty to read, think and catch up on. See you again next week.

View File

@ -1,48 +0,0 @@
---
layout: post
title: Two factor authentication with Windows Hello and Google Authenticator
subtitle: Exploring new ways to make customer login more secure
category: howto, product
tags: [Security, Mobile, Open Source, API]
author: daniel_wehrle
author_email: daniel.wehrle@haufe-lexware.com
header-img: "images/bg-post.alt.jpg"
---
Currently all of our Lexware "on-premise" products use the well-known username/password login for authentication. But in the last couple of years, new techniques for authentication have become available, and we tested some of these technologies - Windows Hello and Google Authenticator - to make proposals for alternative authentication and authorization technologies for Lexware products, especially for our "on premise" products.
### Windows Hello
"Windows Hello" has been available since the release of Windows 10 and is integrated into Microsofts sign-on service "Microsoft Passport". Windows uses this service to enable login by face recognition or by other biometric methods, like fingerprint recognition. The face recognition requires a special camera ("Intel RealSense"), consisting of two cameras for visible light (for 3D scanning), and one infrared camera, to ensure the face recognition is not run on a photograph. These cameras are not widely distributed across the laptop market.
I started to check and go through the information Microsoft provides for integrating "Passport" and "Hello" into applications.
I also started recoding the sample from those pages, creating a simple Universal Windows Platform (UWP) app that performed identification by face recognition - [see the sample on MSDN](https://msdn.microsoft.com/en-us/windows/uwp/security/microsoft-passport-login-auth-service). The sample was short and pretty straightforward. It contains a simple XML serialization framework that would need to be replaced by a more secure data layer for productive usage. But to get started it was a really good resource.
The next step I had planned was to transfer this sample from UWP into a normal desktop app. Here I was confronted with a show stopper: the Microsoft Passport and Windows Hello components are located in the WinRT framework, but I planned to use the .Net Framework. I found a lot of information on how to use WinRT components in normal .Net applications - [e.g. on CodeProject](http://www.codeproject.com/Articles/457335/How-to-call-WinRT-APIs-from-NET-desktop-apps). There is also a [compatibility list](https://msdn.microsoft.com/en-us/library/windows/desktop/dn554295(v=vs.85).aspx), but Microsoft Passport and Windows Hello are not part of it, so there is no guarantee that it will work. After I finished the import I was faced with the fact that it was impossible to initialize the Passport framework.
We verified this by asking our Microsoft contact, who gave us the same information: Hello was only supported for UWP, not for old style Windows applications.
After learning that not all WinRT features can be used in the .Net Framework we had to put the project on hold.
### Google Authenticator
Now we had to go back and rethink how this project was defined: the main goal was more secure authentication. So we checked for other possibilities and remembered that there was a time-limited token system from Google.
Time-limited tokens are also known as TOTP (Time-Based One-Time Password Algorithm, see RFC 6238). Such systems generate passwords that are only valid for a limited time; those passwords are also called tokens. Normally the generation of a token is limited to one hardware device. In the past, token generators were a small piece of hardware with an LCD display showing the current token. Google Authenticator does not rely on dedicated hardware and makes it possible to turn every smartphone into a security token generator.
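For illustration, the RFC 6238 algorithm itself is small; the following sketch (TypeScript with Node's built-in crypto module, not the .NET library mentioned below) generates a six-digit token from a shared secret:

```typescript
import { createHmac } from "crypto";

// Illustrative RFC 6238 TOTP generation; the secret exchange and its safe
// storage are deliberately left out of this sketch.
function totp(secret: Buffer, timeStepSeconds = 30, digits = 6, now = Date.now()): string {
  // 1. Counter = number of time steps since the Unix epoch.
  const counter = Math.floor(now / 1000 / timeStepSeconds);
  const counterBuf = Buffer.alloc(8);
  counterBuf.writeBigUInt64BE(BigInt(counter));

  // 2. HMAC-SHA1 over the counter, keyed with the shared secret.
  const hmac = createHmac("sha1", secret).update(counterBuf).digest();

  // 3. Dynamic truncation (RFC 4226) to a 31-bit integer.
  const offset = hmac[hmac.length - 1] & 0x0f;
  const binary =
    ((hmac[offset] & 0x7f) << 24) |
    (hmac[offset + 1] << 16) |
    (hmac[offset + 2] << 8) |
    hmac[offset + 3];

  // 4. Reduce to the requested number of digits.
  return (binary % 10 ** digits).toString().padStart(digits, "0");
}
```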
So I began to research how to use Google Authenticator with .Net. I found out that there are open source .Net projects on [GitHub](https://github.com/brandonpotter/GoogleAuthenticator). I integrated those into my failed port of the Windows Hello app and was happily up and running with very little effort.
It was clear that biometric authentication can definitely make authentication more secure, and so I did a little more research on recommendations for secure authentication.
### Takeaways for moving forward with two factor authentication technologies
Two factor authentication can indeed bring a lot more security to applications. Data thieves not only have to get the password but also the token or biometrical information. And this information cannot be replicated as easily as a password.
But the technology that holds the second factor must also be secure itself. Windows Hello and Google Authenticator seem to be secure technologies, so it makes sense to use them as a second factor for higher-priority security issues. It also makes sense to use these technologies to build an up-to-date, secure authorization service. In any case, two factor authorization should be adopted. Both technologies are easy to use, both for integrating into software and from the customer-use standpoint. This way, the security of an authorization process can be tightened with just a few simple steps.
It's too bad that Windows Hello does not work for classic desktop apps. Another drawback is that the availability of hardware (cameras) may limit the number of possible users. But, with Google Authenticator, there is an available technology that can be used on most smartphones and with all kinds of applications.
Two factor authentication may not be a requirement for each simple login. But at administrator login or for a task with a higher security risk, it makes much sense to perform a second authentication step, at least as an option for the user. This does not require more effort or extra steps from the users but does heighten the security for critical operations.
Since it requires only little effort to integrate and use, I would recommend that every developer enhance the security of their applications by making use of Windows Hello or Google Authenticator. I am also proposing two factor authentication to our product management because it would definitely be a product-feature quick win for us and our customers.

View File

@ -1,118 +0,0 @@
---
layout: post
title: SCS - Self-Contained Systems
subtitle: Thoughts about Self-Contained Systems architecture pattern
category: api
tags: [cto, microservice, devops]
author: rainer_zehnle
author_email: rainer.zehnle@haufe-lexware.com
header-img: "images/bg-post-clover.jpg"
---
In September 2016 I attended the [Software Architecture Summit 2016](http://software-architecture-summit.de/) in Berlin.
I listened to a talk from [Eberhard Wolff](https://www.innoq.com/de/staff/eberhard-wolff/) about "**Self-contained Systems: Ein anderer Ansatz für Microservices**" (Self-contained Systems: a different approach to microservices).
The idea behind the [SCS](http://scs-architecture.org/) approach is really convincing. It's like a recipe with valuable ingredients.
Use the mindset of microservices as a basis, add your web application know-how, season it with asynchronous communication and finally decorate it with its own UI per SCS.
The result is a concept to split a monolithic application into many smaller web applications collaborating with one another.
The website [scs-architecture.org](http://scs-architecture.org/) describes the architecture pattern and contains a self-explanatory [slidedeck](https://speakerdeck.com/player/e74a068d06a949cdb358a55ca17d2dc5#).
It's not possible to give a better introduction. Please read the website [scs-architecture.org](http://scs-architecture.org/).
Nevertheless I copied the main characteristics:
### SCS Characteristics
1. Each SCS is an autonomous web application
2. Each SCS is owned by one team
3. Communication with other SCSs or 3rd party systems is asynchronous wherever possible
4. An SCS can have an optional service API
5. Each SCS must include data and logic
6. An SCS should make its features usable to end-users by its own UI
7. To avoid tight coupling an SCS should share no business code with other SCSs
{:.center}
![SCS parts]({{ site.url }}/images/scs-parts.png){:style="margin:auto"}
### Akademie and SCS
The more I heard about Self-contained Systems the more I was convinced that the pattern describes the way the Akademie Domain is reorganized.
Quote from the [Frequently Asked Questions page](http://scs-architecture.org/faq.html):
> Each SCS is responsible for a part of the domain. Dividing the domain into bounded contexts and understanding their relationships is what we refer to as domain architecture. Ideally, there is one SCS per bounded context.
This is the example of SCS systems from Eberhard Wolff.
{:.center}
![SCS modules example]({{ site.url }}/images/scs-modules.png){:style="margin:auto"}
Looks a lot like the Akademie domain strategy.
Isn't it cool to refactor a whole domain and finally find a named pattern for it?
### Start with a small amount of systems
What attracts me most about SCS is the approach of dividing an existing monolith into a small number of separate web applications.
You divide into e.g. 2-5 SCSs. I'm sure that you learn a lot even while splitting a monolith into two parts.
I believe it's easier to define a clear boundary for a few systems than to divide into ten or even fifty microservices.
After you have learned your lessons I'm sure you can safely increase the number of microservices.
### Admin versus end user
Some of our systems have two explicit user roles: an admin role and an end user role (Learning Management System, Content Management System, API Management, Travel Expenses etc.).
* The functionality for the end user is often only a small subset of the admin functionality.
* The requirements for a cool and easy to use UI are different from those for a functional driven admin UI.
* Mobile support for end users might be a Must but not for the admin functionality.
* The number of admins is often a fraction of the end users.
* System availability during normal office hours for admins might be ok but a no go for end users.
Therefore it is a reasonable first step to divide the system into an end user system and an admin system.
I'm sure you will encounter a lot of obstacles to solve.
* divide the data storage and rethink the storage approach for each system
* do not share the business code
* establish devops chain for two (now) independent systems
* introduce asynchronous communication
* and much more
### Corporate application mashups
The goal of the SCS pattern is to help split monolithic applications.
But I think it is also a pattern for composing existing applications into a combined offering.
And that's exactly what we strive for when we talk about our **corporate strategy**.
The [Frequently Asked Questions page](http://scs-architecture.org/faq.html) of scs-architecture.org contains some explanations that fit to our strategy.
> “Self-Contained System” describes best what's at the core of the concept: Each system should work by itself. Together they form a “System of Systems”.
{:.center}
![SCS modules example]({{ site.url }}/images/scs-realsystems.png){:style="margin:auto"}
SCS divides micro and macro architecture
> ### SCSs are very isolated — how can they still form one system?
>
> The goal of the SCS approach is to localize decisions. However, to make a system of SCSs work together, some decisions affecting all of them need to be made. We can distinguish:
>
> * Local decisions in one SCS are called micro architecture. This includes almost all technical decisions e.g. the programming language or the frameworks.
> * Decisions that can only be made on the global level are called macro architecture. Actually, only very few things fall into this category, most importantly the protocol SCSs can use to communicate with each other, the approach used for UI integration, and possibly a way to do data replication.
SCS also includes statements about UIs.
> We believe [ROCA](http://roca-style.org/) is a good approach for web front ends for SCS because that approach makes it easy to combine UIs from several SCSs to one.
The article [Transclusion in self-contained systems](https://www.innoq.com/en/blog/transclusion/) contains a good discussion about different possible UI approaches.
### Conclusion
I like the SCS approach. It describes how a migration can happen in small, manageable steps with minimized risk of failure.
It leads to an evolutionary modernization of big and complex systems.
I think we should give it a try!

View File

@ -1,138 +0,0 @@
---
layout: post
title: wicked.haufe.io 0.10.0 released
subtitle: Release Notes and Hints
category: api
tags: [cloud, api, open-source]
author: martin_danielsson
author_email: martin.danielsson@haufe-lexware.com
header-img: "images/bg-post-api.jpg"
---
### Introduction
Both [Holger](/state-of-our-api-strategy/) and I have been [writing quite a bit](/introducing-wicked-haufe-io/) about our Open Source API Management System [wicked.haufe.io](http://wicked.haufe.io) ([GitHub Repository](https://github.com/Haufe-Lexware/wicked.haufe.io)), and today we want to announce the new release 0.10.0 of wicked, which was published yesterday.
We have been quite busy with various things, mostly concerning
* [Support for the OAuth 2.0 Implicit Grant and Resource Owner Password Grant](#oauth-20-implicit-grant-and-resource-owner-password-grant)
* [Upgrading to the latest Kong version (as of writing 0.9.4)](#upgrading-to-latest-kong-094)
* [Stability and Diagnostic Improvements](#stability-and-diagnostics)
* Integration testing Kong
* Displaying Version information
* [API Lifecycle Support (deprecating and unsubscribing by Admin)](#api-lifecycle-support)
I would like to tell you some more about the things we did, in the order of the list above.
{:.center}
![Wicked Logo](/images/introducing-wicked/wicked-256.png){:style="margin:auto"}
### OAuth 2.0 Implicit Grant and Resource Owner Password Grant
In the previous versions, wicked was focussing on machine-to-machine type communication, where both parties (API consumer and API provider) are capable of keeping secrets (i.e., in most cases server side implementations). As I wrote in my post introducing wicked.haufe.io, the next logical step would be to ease up not only the Client Credentials (M2M) OAuth 2.0 flow, but also additional flows which are needed in modern SPA and Mobile App development:
* Implicit Grant Flow
* Resource Owner Password Grant Flow
#### What do we want to achieve by using these flows?
The main problem with SPAs (Single Page Applications) and Mobile Apps is that they are not capable of keeping secrets. Using an approach like the Client Credentials flow would require e.g. the SPA to keep both Client ID and Client Secret inside its JavaScript code (remember: SPAs are often pure client side implementations, they don't have a session with their server). All code, as it is in the hands of either the browser (SPA) or the mobile device (Mobile App), has to be considered public.
What we want to achieve is a way to let the SPA (or Mobile App) get an Access Token without the need to keep Username and Password of the end user in its storage, so that, using the Access Token, the API can be accessed on behalf of this user:
![Using an Access Token](/images/wicked-0-10-0/using-an-access-token.png){:style="margin:auto"}
The SPA only has an access token, which (a) is an opaque string, and (b) does not contain any manipulatable content, so that the API Gateway can inject the desired information into additional headers based on this access token (here: `X-Authenticated-Userid` and `X-Authenticated-Scope`). The backend REST API can then be 100% sure the data which comes via the API Gateway is reliable: we know the end user, and we know what he is allowed to do, because it's tied to the Access Token.
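As a small sketch of what this looks like from the backend's perspective (Express and the `read_orders` scope are illustrative choices here, not part of wicked itself), the service simply trusts the headers injected by the gateway:

```typescript
import express from "express";

// Sketch of a backend service sitting behind the API Gateway. The gateway has
// already validated the access token and injected the headers read below.
const app = express();

app.get("/orders", (req, res) => {
  const userId = req.header("X-Authenticated-Userid");               // set by the gateway
  const scopes = (req.header("X-Authenticated-Scope") ?? "").split(" ");

  if (!userId) {
    // Should not happen if the service is only reachable through the gateway.
    res.status(401).json({ message: "Missing authenticated user" });
    return;
  }
  if (!scopes.includes("read_orders")) {
    res.status(403).json({ message: "Insufficient scope" });
    return;
  }
  res.json({ user: userId, orders: [] });                            // dummy payload
});

app.listen(3000);
```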
#### How does the Implicit Grant work?
The way the Implicit Grant pushes an Access Token to an SPA works as follows, always assuming the end user has a user agent (i.e. a browser or a component which behaves like one, being able to follow redirects and get `POST`ed to):
1. The SPA realizes it needs an Access Token to access the API (e.g. because the current one returns a `4xx`, or because it does not have one yet)
2. The end user is redirected away from the SPA onto an Authorization server, or is made to click a link to the Authorization Server (depending on your UI)
3. The Authorization Server checks the identity of (authenticates) the end user (by whatever means it wants),
4. The Authorization Server checks whether the authenticated end user is allowed to access the API, and with which scopes (this is the Authorization Step)
5. An access token tied to the end user's identity (authenticated user ID) and scopes is created
6. ... and passed back to the SPA using a last redirect, giving the `access_token` in the fragment of the redirect URI
Applied to e.g. an Authorization Server which relies on a SAML IdP, the simplified version of a swim lane diagram could look as follows:
![Implicit Grant](/images/wicked-0-10-0/implicit-flow.png){:style="margin:auto"}
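On the SPA side, the last step of the flow boils down to reading the token out of the URL fragment after the redirect and sending it as a Bearer token with every API call. A minimal browser-side sketch (the parameter names are the usual OAuth 2.0 fragment parameters; the API host is a placeholder):

```typescript
// Sketch: extract the access token from the redirect fragment (#access_token=...&expires_in=...).
function readTokenFromFragment(): { accessToken?: string; expiresIn?: number } {
  const params = new URLSearchParams(window.location.hash.replace(/^#/, ""));
  const accessToken = params.get("access_token") ?? undefined;
  const expiresIn = params.get("expires_in") ? Number(params.get("expires_in")) : undefined;
  if (accessToken) {
    // Remove the fragment so the token does not linger in the browser history.
    history.replaceState(null, "", window.location.pathname);
  }
  return { accessToken, expiresIn };
}

// The token is then sent with every request to the API Gateway.
async function callApi(path: string, accessToken: string): Promise<Response> {
  return fetch(`https://api.example.com${path}`, {        // placeholder API host
    headers: { Authorization: `Bearer ${accessToken}` },
  });
}
```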
#### How does the Resource Owner Password Grant work?
In contrast to the Implicit Grant Flow, where the user does **not** have to enter his username and password into the actual SPA/Mobile App, the Resource Owner Password Grant requires you to do just that: The user authenticates using his username and password, and these are exchanged for an access token and refresh token which can subsequently be used instead of the actual credentials. The username and password should actually be thrown away explicitly after they have been used for authenticating. This flow is intended for use with trusted Mobile Applications, not with Web SPAs.
The flow goes like this, and does not require any user agent (Browser):
1. The Mobile App realizes it does not have a valid Access Token
2. A UI prompting the user for username and password to the service is displayed
3. The Mobile App passes on username, password and its client ID to the Authorization Server
4. The Authorization Server authenticates the user (by whatever means it wants), and authorizes the user (or rejects the user)
5. Access token and refresh token are created and passed back to the Mobile App
6. The Access Token is stored in a volatile memory, whereas the Refresh Token should be stored as safely as possible in the Mobile App
7. The API can now be accessed on behalf of the end user, using the Access Token (and the user credentials can be discarded from memory)
In case the Access Token has expired, the Refresh Token can be used to refresh the Access Token.
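In HTTP terms, steps 3 to 5 are a single form-encoded token request. A generic OAuth 2.0 sketch (the token endpoint URL below is a placeholder, not the concrete endpoint of a wicked Authorization Server):

```typescript
// Generic OAuth 2.0 Resource Owner Password Grant request (sketch only).
async function passwordGrant(username: string, password: string, clientId: string) {
  const response = await fetch("https://auth.example.com/oauth2/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "password",
      username,
      password,
      client_id: clientId,
    }),
  });
  if (!response.ok) {
    throw new Error(`Authentication failed with status ${response.status}`);
  }
  // Keep the access token in volatile memory, the refresh token in safe device
  // storage, and discard the user's credentials right after this call.
  const { access_token, refresh_token, expires_in } = await response.json();
  return { access_token, refresh_token, expires_in };
}
```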
#### wicked and Authorization Servers
Now, how does this tie in with wicked? Creating an Authorization Server which knows how to authenticate and authorize a user is something you still have to implement, as this has to be part of your business logic (licensing, who is allowed to do what), but [wicked now has an SDK](https://www.npmjs.org/package/wicked-sdk) which makes it a lot easier to implement such an Authorization Server. There is also a sample implementation for a simple Authorization Server which just authenticates with Google and deems that enough to be authorized to use the API: [wicked.auth-google](https://github.com/Haufe-Lexware/wicked.auth-google). In addition to that, there is also a SAML SDK in case you want to do SAML SSO federation with wicked (which we have successfully done for one of our projects in house): [wicked-saml](https://www.npmjs.com/package/wicked-saml).
After having implemented an Authorization Server, this component simply has to be deployed alongside the other wicked components, as - via the wicked SDK - the authorization server needs to talk to both wicked's backend API and the kong adapter service, which does the heavy lifting in talking to the API Gateway ([Mashape Kong](https://getkong.org)).
In case you want to implement an Authorization Server using other technology than the proposed node.js, you are free to just use the API which the Kong Adapter implements:
* [Kong Adapter OAuth2 Helper API](https://github.com/Haufe-Lexware/wicked.portal-kong-adapter/blob/master/resources/swagger/oauth2-swagger.yml)
Details of the API can be found in the [wicked SDK documentation](https://www.npmjs.com/package/wicked-sdk). Especially the functions `oauth2AuthorizeImplicit`, `oauth2GetAccessTokenPasswordGrant` and `oauth2RefreshAccessToken` are the interesting ones there.
### Upgrading to latest Kong (0.9.4)
Not only has wicked evolved and gained features over the last months; the actual core of the system, Mashape Kong, has also been released in newer versions since. We have done quite some testing with the newer versions, and found out that some changes had to be made to make migration from older versions work (previously, wicked used Kong 0.8.3).
In short: We have migrated to Kong 0.9.4, and you should be able to just upgrade your existing wicked deployment to all latest versions of the wicked components, including `wicked.kong`, which just adds `dockerize` to the official `kong:0.9.4` docker image.
In addition to upgrading the Kong version, we have now also an integration test suite in place which at each checkin checks that Kong's functionality still works as we expect it. Running this integration test suite (based on docker) gives us a good certainty that an upgrade from one Kong version to the next will not break wicked's functionality. And if it does (as was the case when migrating from 0.8.x to 0.9.x), we will notice ;-)
### Stability and Diagnostics
As mentioned above, we have put some focus on integration testing the components with the actual official docker images, using integration tests "from the outside", i.e. using black box testing via APIs. Currently, there are test suites for the `portal-api` (quite extensive), the `portal` and the `portal-kong-adapter`. These tests run at every checkin to both the `master` and `next` branches of wicked, so that we always know if we accidentally broke something.
We know, this is standard development practice. We just wanted you to know we actually do these things, too.
If you have a running portal instance, you will now also be able to see which version of which component is currently running in your portal deployment:
{:.center}
![System Health](/images/wicked-0-10-0/systemhealth.png){:style="margin:auto"}
### API Lifecycle Support
By request of one of the adopters of wicked, we introduced some simple API lifecycle functionality, which we think eases up some of the trickier topics of running an API:
* How do you phase out the use of an API?
* How can you make sure an API does not have any subscriptions anymore?
* How can you contact the consumers of an API?
To achieve this, we implemented the following functionality:
* It is now possible to deprecate an API; developers will not be able to create new subscriptions to that API, but existing subscriptions are still valid
* You can now - as an administrator - see the list of subscribed applications on the API Page
* It's also possible to download a CSV file with all subscribed applications, and the owner's email addresses, e.g. for use with mass mailing systems (or just Outlook)
* Finally, you can delete all subscriptions to a deprecated API, to be able to safely remove it from your API Portal configuration (trying to delete an API definition which still has existing subscriptions in the database will end up with `portal-api` not starting due to the sanity check failing at startup)
#### A deprecated API in the API Portal
![Deprecated API](/images/wicked-0-10-0/deprecated-api.png){:style="margin:auto"}
#### CSV Download and Subscription Deletion
![CSV download and Subscription deletion](/images/wicked-0-10-0/api-lifecycle.png){:style="margin:auto"}
### What's next?
Right now, we have reached a point where we think that most features we need on a daily basis to make up a "drop in" API Management Solution for Haufe-Lexware have been implemented. We know how to do OAuth 2.0 with all kinds of Identity Providers (including our own "Atlantic" SAML IdP and ADFS), and we can do both machine-to-machine and end-user facing APIs. If you watched carefully, we do not yet have explicit support for the OAuth 2.0 Authorization Code Grant, so this might be one point we'd add in the near future. It's (for us) not that pressing, as we don't have that many APIs containing end user resources we want to share with actual third party developers (like Google, Twitter or Facebook and such), but rather APIs which require authorization, where we (as Haufe-Lexware) are the actual resource owners.
The next focus points will rather go in the direction of making running wicked in production simpler and more flexible, e.g. by exploring docker swarm and its service layer. But that will be the topic of a future blog post.
Man, why do I always write such long posts? If you made it here and you liked or disliked what you read, please leave a short comment below and tell me why. Thanks!

View File

@ -1,59 +0,0 @@
---
layout: post
title: Using HALBuilder with german characters
subtitle: How to make sure that umlauts are properly displayed
category: Howto
tags: [devops]
author: Filip Fiat
author_email: filip.fiat@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
Just to set the scene on this post, a couple of definitions and links, before I present a solution to a very annoying problem when dealing with "umlauts":
>HATEOAS (Hypermedia as the Engine of Application State) is a constraint of the REST application architecture. A hypermedia-driven site provides information to navigate the site's REST interfaces dynamically by including hypermedia links with the responses.
[Understanding HATEOAS - Spring](https://spring.io/understanding/HATEOAS)
>Hypertext Application Language (HAL) is a standard convention for defining hypermedia such as links to external resources within JSON or XML code. [...] The two associated MIME types are media type `application/hal+xml` and media type `application/hal+json`.
[Hypertext Application Language (HAL)](https://en.wikipedia.org/wiki/Hypertext_Application_Language)
[MeinKonto's RESTful API](https://github.com/Haufe-Lexware/api-meinkonto-mylexware) was designed with this constraint in mind. Thus it was decided to use HAL representations to implement HATEOAS.
### TheoryInPractice's HalBuilder 4.x
There are a lot of builders for this standard JSON representation, and the chosen one was TheoryInPractice's HAL implementation builder (http://www.theoryinpractice.net/).
The [HalBuilder](https://github.com/HalBuilder) library provides an API for both generating and consuming HAL based resource representations.
These days I was doing some updates on the MeinKonto RESTful API using this HAL implementation and, like every time I work with some new technology, I inevitably ended up at the following theme: "umlauts".
Basically this library is very robust and we are using it without any major impediments for generating and consuming HAL resources in pure Java, but it seems to have some flaws in working with UTF-8 encoded special characters.
For instances of
```java
com.theoryinpractise.halbuilder.api.Representation
```
they provide this method
```java
toString(RepresentationFactory.HAL_JSON)
```
for generating the JSON string from a `JsonRepresentation` object.
Unfortunately, for strings with German characters the outcome is not quite the expected one, i.e. "ä" becomes "Ã¤" and so on.
Searching the documentation, some relevant tech forums and the codebase, I was not able to find a way to specify the correct encoding, thus I ended up using the following construction.
### "Good old JAVA" solves any problem
Create a String object from the outcome of this method with the right encoding (UTF-8), i.e. my code looks like this
```java
String halRepresentationUTF8 = new String(halRepresentation.toString(RepresentationFactory.HAL_JSON).getBytes(), "UTF-8");
```
And this does the trick ;)... the string `halRepresentationUTF8` is properly UTF-8 encoded.
In fact this is a simple way to make sure the outcome (`HAL_JSON` representation), which is basically a string, will be properly encoded as UTF-8.
Comments and suggestions are more than welcome. **Happy coding!**

View File

@ -1,39 +0,0 @@
---
layout: post
title: Microservices day wrap up
subtitle:
category: conference
tags: [culture, microservice, api]
author: scott_speights
author_email: scott.speights@haufe-lexware.com
header-img: "images/bg-post-api.jpg"
---
### A start with microservices
Keeping with the trend of having events based on technical topics that we are getting into at Haufe Group, we just had our internal Microservices Architecture Day event - #msaday. We are currently looking to adopt microservices design patterns for new software, to refactor our older systems towards microservices, and to make it easier to manage and grow our software. #msaday was a lot of fun and we were joined by many colleagues from other parts of Germany and from our offices in Switzerland, Romania and Spain.
{:.center}
![Microservices audience](/images/Microservices-Day/Satisfied-Microservices-Customer.JPG){:style="margin:auto"}
### Help from outside
For #msaday our event partners, who are also helping us to tackle decomposing some of our monolithic systems into service domains, were Daniel Bryant and Lorenzo Nicora of Open Credo. The guys from Open Credo delivered talks on creating successful microservices environments, what can go wrong with microservices, and how you can avoid these pitfalls. There were also detailed talks on how to design reactive microservices and how to perform concurrent and non-blocking programming through specialized programming models.
### We presented our own technology too
In addition to these excellent presentations, some of our own people showcased Haufe Group technology solutions at an equally high level. Andreas Plaul and Thomas Schüring presented the Haufe Group IT strategy towards a cloud first approach and the upcoming central logging solution based on FluentD and Graylog. The API Management presentation by Martin Danielsson showed how to perform authentication and authorization for registered APIs with the OAuth protocol against any service that supports OAuth - using Haufe Group's own Open Source API Management Platform [wicked.haufe.io](https://github.com/Haufe-Lexware/wicked.haufe.io)!
### Haufe's dev culture is changing
When I think back on where we were five years ago, it's great to see that the technical development and Dev Community culture have come far enough along to even start thinking about developing using microservices patterns. And in fact, Haufe Group is actively targeting several systems as good candidates to begin the process of breaking down our monolithic code into service domains.
This is definitely going to be a longer process and we may never even reach a full microservices based architecture for some of these systems, but adopting a microservices view will definitely make further development easier and will be an interesting challenge.
### Part of the Freiburg dev community
We ended #msaday by hosting the Freiburg DevsMeetup. There were interesting presentations again, and kicking back with something to eat and drink got us talking; there were more guests than ever before from Freiburg and around. The Akademie team talked about learnings and best practices on their way to decoupled systems, and Daniel Bryant showed us that it's not all rosy in microservices-land and which antipatterns to avoid. Finally, Martin Danielsson gave a condensed version of his previous presentation about Haufe's free and open source API Management solution wicked.haufe.io, inviting the Freiburg dev community to contribute to further development. By the way, this time around we posted content to [YouTube](https://www.youtube.com/channel/UCLyuIumQe2DjYIuwnvCo4uA) and [SlideShare](http://www.slideshare.net/HaufeDev/presentations)!
It was great to see colleagues who usually work somewhere else, and to get to know Open Credo and the Freiburg dev community folks better and share microservices experiences with them. A long day came to a close, and we are proud of what we could show to prove that the Haufe Group is on its way.
We're not Netflix. We're Haufe.

View File

@ -1,94 +0,0 @@
---
layout: post
title: Updates in wicked 0.11.0
subtitle: Enabling alternative deployment orchestrations
category: api
tags: [cloud, api, open-source]
author: martin_danielsson
author_email: martin.danielsson@haufe-lexware.com
header-img: "images/bg-post-api.jpg"
---
### Introduction
Last Friday we released version 0.11.0 of our Open Source API Management System [wicked.haufe.io](http://wicked.haufe.io) ([GitHub Repository](https://github.com/Haufe-Lexware/wicked.haufe.io), [Release Notes](https://github.com/Haufe-Lexware/wicked.haufe.io/blob/master/doc/release-notes.md#0110-beta)). Over the course of the last couple of weeks our focus has been on making deployments to production easier and on enabling, or at least simplifying, deployments to alternative runtimes other than `docker-compose`.
This blog post will explain some of the changes and enhancements that were made and how you can update your existing API Configurations to benefit from them.
### Recap - Deployment Architecture
To set the scene a little, let's recap what the deployment architecture of a typical wicked deployment looks like:
{:.center}
![Deployment Architecture](/images/wicked-0-11-0/deployment-architecture.png){:style="margin:auto"}
The two main blocks are the portal components and the Kong components (the actual API Gateway). Both blocks are made up of further smaller containers which fulfill specific tasks, such as the "mailer" or the "portal". The Kong components consist of _n_ Kong Gateways (depending on your scalability needs) and a Postgres database in which Kong stores its configuration.
In front of both blocks resides the load balancer. This can be any load balancer capable of doing SSL termination and resolving VHOSTs (i.e. almost any LB). The default implementation is leveraging `docker-haproxy` for this purpose, but using an Apache LB, nginx proxy, or even an ELB or Azure LB is equally possible (albeit not preconfigured).
### Configuration and Persistent Data
The API Portal needs to keep its configuration somewhere, and also needs to keep some data persistent. The configuration data is static and is usually retrieved from a source code repository (preferably git). This means this data does not need to be persisted, as it can be recreated from scratch/from code any time ("STATIC CONFIG"). Data which comes from the usage of the API Portal (users, applications, subscriptions and such) does need to be persisted in some way ("PERSISTENT DATA"):
{:.center}
![Portal API configuration](/images/wicked-0-11-0/portal-api-data.png){:style="margin:auto"}
How persistent storage is provided varies from orchestration to orchestration, but this is usually not a big issue: as long as the runtime is able to mount a persistent volume at `/var/portal-api/dynamic` which remains the same over the entire lifecycle of the `portal-api` service container, all is fine.
With regards to the static configuration, which usually resides in the `/var/portal-api/static` folder, the default deployment using `docker-compose` has always assumed that you are able to build a private image using `docker build` **on the docker host/swarm you are deploying to**. This is quite an issue with e.g. Kubernetes or Mesosphere, which always assume that images are downloaded ("pulled") onto the system, and not built on the cluster.
The first workaround to circumvent this is to build a private image derived from `haufelexware/wicked.portal-api` which prepopulates the `/var/portal-api/static` directory with your API Configuration, and to push that to a private repository, from where the orchestration runtime (e.g. Kubernetes or Mesos) pulls it when deploying. This has a couple of obvious drawbacks, such as needing a private docker repository, and having to build a new docker image each time you want to deploy a new API configuration.
### Static Configuration using `git clone`
In order to make the use case of updating the API configuration easier, we enhanced the startup script of the `portal-api` container so that it can clone a git repository into the `/var/portal-api/static` folder by itself. By specifying the following environment variables for the portal API container, this is done at each container creation and restart:
| Variable | Description |
| ---| --- |
| `GIT_REPO` | The git repository of the API configuration, e.g. `bitbucket.org/yourorg/apim.config.git` |
| `GIT_CREDENTIALS` | The git credentials, e.g. `username:password` |
| `GIT_BRANCH` | The branch to clone; if specified, `HEAD` of that branch is retrieved, otherwise `master` |
| `GIT_REVISION` | The exact SHA1 of the commit to retrieve; mutually exclusive with `GIT_BRANCH` |
This enables running the "vanilla" portal API image also in environments which do not support building docker images on the docker host, such as Kubernetes.
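To make this more concrete, here is a minimal sketch of how the vanilla image could be run on Kubernetes with these variables. The image name and the environment variable names are taken from above; the deployment name, the secret holding the git credentials and the claim for the dynamic data are purely illustrative assumptions, not part of the official wicked documentation:

```yaml
apiVersion: extensions/v1beta1      # Deployment API group as of Kubernetes 1.4/1.5
kind: Deployment
metadata:
  name: portal-api                  # illustrative name
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: portal-api
    spec:
      containers:
      - name: portal-api
        image: haufelexware/wicked.portal-api
        env:
        - name: GIT_REPO
          value: "bitbucket.org/yourorg/apim.config.git"
        - name: GIT_CREDENTIALS
          valueFrom:                       # keep "username:password" out of the manifest
            secretKeyRef:
              name: apim-git-credentials   # illustrative secret
              key: credentials
        - name: GIT_BRANCH
          value: "master"
        volumeMounts:
        - name: portal-api-dynamic
          mountPath: /var/portal-api/dynamic   # persistent data lives here
      volumes:
      - name: portal-api-dynamic
        persistentVolumeClaim:
          claimName: portal-api-dynamic        # illustrative claim
```

In a CI/CD pipeline you would typically set `GIT_REVISION` instead of `GIT_BRANCH` to pin the configuration to an exact commit, as described in the update flow below.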
### Automatic Configuration Refresh
The portal API container now also calculates a "config hash" MD5 value of the content of the configuration directory at `/var/portal-api/static` as soon as the container starts. This does not depend on the git clone method; it is also done if the configuration is prepopulated using a derived docker image, or if you still use the data-only container approach to mount the static configuration directory into the container.
This configuration hash is used by the other containers to detect whether a configuration change has taken place. Except for the portal API container, all other wicked containers are stateless and draw their configuration from the API configuration (if necessary). This means they need to re-pull the settings as soon as the configuration has changed. When using the `docker-compose` method of deploying, this wasn't a big issue: you just did a `docker-compose up -d --force-recreate` after creating the new configuration container. With runtimes such as Kubernetes, however, this would introduce quite some overhead each time the configuration changed: all dependent containers would need to be restarted one by one.
This is how the core wicked containers behave now:
{:.center}
![API configuration update](/images/wicked-0-11-0/config-update.png){:style="margin:auto"}
The diagram above depicts how the portal API interacts with e.g. the portal container (Kong adapter, mailer and chatbot also behave exactly the same):
* At initial startup, the portal container retrieves the current config hash via the `/confighash` end point of the portal API; this hash is kept for comparison in the container
* The configuration is updated, which triggers a CI/CD pipeline
* The CI/CD pipeline extracts the current git revision SHA1 hash and updates the `GIT_REVISION` environment variable for the `portal-api` container (how this is done depends heavily on the orchestration runtime)
* The CI/CD pipeline triggers a recreate on the `portal-api` which in turn clones the new revision of the API configuration repository at startup (via the `GIT_REVISION` env variable)
* At the next check interval (every 10 seconds) the portal detects that a configuration change has taken place, as the config hash returned by `/confighash` has changed
* The portal then triggers a restart of itself; when starting anew, it retrieves the new config hash and resumes polling the `/confighash` end point
This means that all wicked containers can be treated as individually deployable units, which makes your life a lot easier e.g. when working with Kubernetes.
The polling functionality is built into the [wicked Node SDK](https://www.npmjs.org/package/wicked-sdk), which means that any plugin or authorization server built on top of this SDK benefits from the config hash checking automatically; example implementations of authorization servers are [wicked.auth-passport](https://github.com/Haufe-Lexware/wicked.auth-passport) (for social logins like Twitter, Facebook, Google+ and GitHub) and [wicked.auth-saml](https://github.com/Haufe-Lexware/wicked.auth-saml) (for SAML identity providers).
### Upgrading to wicked 0.11.0
In case you are running wicked in a version prior to 0.11.0, you do not necessarily **have to** update any of your deployment pipelines or `docker-compose.yml` files. The `git clone` method does, however, make updating the API configuration a lot simpler, so we recommend changing your `docker-compose.yml` in the following way (a sketch of the result follows the list):
* Remove the `portal-api-data-static` service entirely; it is no longer needed
* Add `GIT_REPO`, `GIT_CREDENTIALS`, `GIT_REVISION` and/or `GIT_BRANCH` to the `variables.env` file
* Remove the entry `portal-api-data-static` from the `volumes_from` section of the `portal-api` service
* Additionally provide values for the `GIT_*` env vars at deployment time, in your CI/CD scripting solution
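As a rough sketch of where this ends up, the `portal-api` service could then look roughly like the following. This is illustrative only - your real `docker-compose.yml` contains more services and options, and how you mount the dynamic data volume depends on your existing setup:

```yaml
# Sketch only - surrounding services and options omitted.
version: "2"
services:
  portal-api:
    image: haufelexware/wicked.portal-api
    restart: unless-stopped
    env_file: variables.env   # now also carries GIT_REPO, GIT_CREDENTIALS and GIT_BRANCH or GIT_REVISION
    # Note: no more "volumes_from: portal-api-data-static" - the static configuration
    # is cloned into /var/portal-api/static at startup; keep whatever volume you
    # already use for the persistent data at /var/portal-api/dynamic.
```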
You can find more [detailed information in the documentation](https://github.com/Haufe-Lexware/wicked.haufe.io/blob/master/doc/deploying-to-docker-host.md#updating-the-api-configuration).
### Oh, and one more thing
Since the [last blog post](http://dev.haufe.com/wicked-0-10-0-released/), we have also implemented the promised support for the OAuth 2.0 Authorization Code Grant (it actually shipped in 0.10.1). Just for the sake of completeness.
If you have any questions or suggestions regarding wicked, please [file an issue on GitHub](https://github.com/Haufe-Lexware/wicked.haufe.io/issues/new). Hearing what you think and what kinds of scenarios you are trying to implement using wicked is super exciting!

View File

@ -1,49 +0,0 @@
---
layout: post
title: Lessons Learned from the Haufe Dev Microservices Architecture Day
subtitle:
category: conference
tags: [culture, microservice, api]
author: daniel_bryant
author_email: daniel.bryant@opencredo.com
header-img: "images/pexels-photo-112934_800px_dark.jpg"
---
Two weeks ago we (Lorenzo Nicora and Daniel Bryant from OpenCredo) had the pleasure of visiting the Haufe Group architect and development teams in Freiburg, Germany, to contribute to the Microservices Architecture Day conference. Topics covered by the Haufe and OpenCredo teams included microservice development and antipatterns, reactive principles, actor-based systems, event sourcing, and API management. We've summarised our key learnings below, and we are always keen to receive feedback and questions.
### Looking at the Big Picture of Microservices
Daniel kicked off the conference by presenting [“Building a Microservices Ecosystem”](http://www.slideshare.net/opencredo/haufe-msaday-building-a-microservice-ecosystem-by-daniel-bryant), and examined what it takes to build and integrate microservices, from development through to production. [Local development](https://opencredo.com/working-locally-with-microservices/) of more than a few services can be challenging (and is a big change in comparison with coding against a single monolith), and as such Daniel recommended using tooling like [HashiCorp's Vagrant](https://www.vagrantup.com/about.html), [Docker](https://github.com/docker/docker), mocking and stubbing, and service virtualisation applications like [Hoverfly](http://hoverfly.io/). Creating and utilising a build pipeline suitable for the build, test and deployment of microservices is also vital, and must be done as early as possible within a microservices project.
Next, Lorenzo discussed the design and operation of systems following principles presented within the [Reactive Manifesto](http://www.reactivemanifesto.org/) in a talk entitled [“Reactive Microservices”](http://www.slideshare.net/opencredo/reactive-microservices-by-lorenzo-nicora). Reactive can be thought of as a set of architectural patterns and principles that can be applied both at the microservice system (macro) and individual service (micro) level. Key concepts discussed included the need for non-blocking processing, message-based communication, asynchronous delegation, resilience, isolation and replication, elasticity, and location transparency. Lorenzo shared lots of anecdotes and lessons learned from working with this type of technology at the coal face, and was keen to stress that the audience can still apply and benefit from reactive principles without using an explicit reactive technology.
### To Infinity and the Cloud
After a quick break [Andreas Plaul](https://www.linkedin.com/in/andreasplaul) and [Thomas Schuering](https://twitter.com/thomsch98) from the Haufe team presented a deep dive into the Haufe cloud strategy, as well as an approach to providing tooling (and a platform) for unified logging across the organisation. The focus on using containers was clear, as were the benefits of providing centralised support for a specific toolset without the need to enforce its use.
Lorenzo was then back on stage presenting a short talk about the [Actor Model](http://www.slideshare.net/opencredo/haufe-msaday-the-actor-model-an-alternative-approach-to-concurrency-by-lorenzo-nicora), an alternative approach to concurrency. Alan Kay's original idea with object-oriented programming (OOP) centered around little computers passing messages between themselves, and Lorenzo explained the similarities with the actor paradigm - actors interact only by messaging, and actors react to messages. An actor handles one message at a time (never concurrently) and has internal state; therefore, an actor is inherently thread-safe. Using the actor pattern does require learning a new paradigm, but there is the benefit of easily managed asynchronous communication and thread-safety.
### The Deadly Sins of Microservices, and the Heavenly Virtues of API Management
After lunch Daniel stepped back onto the stage and talked about some of the [“deadly sins” antipatterns](http://www.slideshare.net/HaufeDev/haufe-seven-deadly-sins-final) that the OpenCredo team have seen when working on microservice projects. Some of the key takeaways included: evaluate the latest and greatest technologies before using them; standardise on communication approaches; realise that implementing microservices is as much about people (and organisation design) as it is about the technology; be wary of creating a distributed monolith; implement fault tolerance within your services and system; don't create a canonical data model (look instead at bounded contexts); and make sure you adapt your testing practices when working with a distributed system.
[Martin Danielsson](https://twitter.com/donmartin76) from Haufe was next, presenting a new approach to [API management within Haufe](https://www.youtube.com/watch?v=2lyADLYnXc0) using the Wicked API gateway framework. [Wicked](http://wicked.haufe.io/) is available as open source software, and is built upon the open source Mashape [Kong API Gateway](https://github.com/Mashape/kong) (which in turn is built upon nginx). Martin began the talk by examining the role of API management: providing access to APIs, providing usage insights, implementing cross-cutting security (authentication), controlling traffic, and decoupling the inside systems from the outside so that external interfaces are shielded to some degree from changes. Wicked also uses Docker, Node.js and Swagger, and the code can be found in the [wicked.haufe.io](https://github.com/Haufe-Lexware/wicked.haufe.io) repo under the Haufe-Lexware GitHub account.
### Closing the Event with ES and CQRS...
The final talk of the day was presented by Lorenzo, and focused on [A Visual Introduction to Event Sourcing and CQRS](http://www.slideshare.net/opencredo/a-visual-introduction-to-event-sourcing-and-cqrs-by-lorenzo-nicora). After a brief introduction to the concept of an aggregate from Domain-driven Design (DDD), Lorenzo walked the audience through various models of data storage and access, from synchronous access using an RDBMS to event sourcing and CQRS (via asynchronous message-driven command sourcing). Benefits of event sourcing include easy eventual business consistency (via corrective events), robustness to data corruption, the ability to store history (“for free”) so that state can be rebuilt at any point in time, and scalability (e.g. utilising distributed k/v stores and asynchronous processing). Lorenzo cautioned that there are also drawbacks, for example no ACID transactions, no “one-size-fits-all”, and additional complexity (and developer skill required).
### Wrapping up a Great Day!
The day concluded with the Haufe Group running a public meetup in the same venue, with Daniel presenting his updated “Seven (More) Deadly Sins of Microservices” talk, Martin reprising his presentation on Wicked.io, and the Haufe Akademie team talking about their journey with DevOps processes, infrastructure as code and dockerising an existing suite of applications.
There was lots to think about after watching all of the talks and chatting to attendees, and we concluded that there are many challenges in implementing changes like moving to a microservices architecture or migrating to the cloud within a company with the successful history and size of Haufe. The primary issue for a leadership team is defining the role that IT will play within any transformation, and being very clear about what the organisation is optimising for - the drive to minimise costs and the drive to maximise innovation are typically mutually exclusive.
From the technical perspective of transformation, the most difficult challenge is knowing what questions to ask, and when.
To ask the right question at the right time, developers must accept that constraints - centralised solutions, “guide rails” and platforms - are necessary to deliver and conduct maintenance at a sustainable pace. They must also focus on fundamentals: agile processes and correctly applied architectural principles. And finally, they must constrain themselves to bounded experimentation and learning, rather than being distracted by the shiny new technology.
We are confident that the Haufe leadership is well aware of these challenges, and we could see the effects of plans put in place to mitigate the risks, but it's always beneficial to remind ourselves of them from time to time!
As far as we saw, everyone left the event with lots to think about, and there were many great conversations throughout the day. Here's to next year's event!

View File

@ -1,155 +0,0 @@
---
layout: post
title: State of Kubernetes on Azure
subtitle: Assessment of the Azure Container Service regarding Kubernetes support
category: general
tags: [cloud, microservice, devops, docker]
author: martin_danielsson
author_email: martin.danielsson@haufe-lexware.com
header-img: "images/bg-post-clover.jpg"
---
**IMPORTANT NOTE**: This blog post was written with the state of Kubernetes Support in Azure as of end of December 2016. If you're reading this significantly later, things will most probably have changed.
#### Table of Contents
* [Azure Container Services](#acs)
* [Deployment Modes](#deployment)
* [What you get](#whatyouget)
* [What works well](#whatworkswell)
* [Standard use of `kubectl`](#kubectl)
* [Ingress Configurations](#ingress)
* [Automatic Load Balancer Configuration](#lbconfig)
* [Combining with other Kubernetes Components](#prometheus)
* [What does not work well (yet)](#whatworkswellnot)
* [Scaling Agent VMs](#scaling)
* [Setting up H/A masters](#hamasters)
* [Upgrading Kubernetes (e.g. to 1.5.1)](#upgrading)
* [Persistent Storage](#storage)
* [Future work](#futurework)
* [tl;dr](#tldr)
* [Links](#links)
As we all know, containers are all the rage, and of course Microsoft also takes a shot at implementing a reliable runtime for running those Docker containers, called "[Azure Container Service](https://azure.microsoft.com/de-de/services/container-service/)". Fairly recently, they also [introduced support for one of the most interesting - and mature - runtime orchestration layers for containers: Kubernetes](https://azure.microsoft.com/en-us/blog/azure-container-service-the-cloud-s-most-open-option-for-containers/) (in addition to Docker Swarm and DC/OS).
This blog post will cover how well Kubernetes is supported on Azure today, and where there is still room to improve.
### Azure Container Services {#acs}
![Azure Container Service](https://azurecomcdn.azureedge.net/cvt-556d5b89aaa50eecaebfddb9370f603d4de2b20a9a4d3aa06639b68db336f1c8/images/page/services/container-service/01-create.png)
First, just a few words on what Azure Container Services (ACS) does: It's not a proprietary implementation of some container orchestration runtime, but more of a collection of best practice templates on how to deploy and operate container clusters on Azure, very much like [`kops` (Kubernetes Operations) for AWS](https://github.com/kubernetes/kops). Azure tries to stick to standards, and just leverages Azure Infrastructure where it fits. All efforts are open sourced (you've come a long way, Microsoft!) and [available on GitHub](https://github.com/Azure/acs-engine). For most things, ACS leverages Azure Resource Manager (ARM) templates.
### Deployment Modes {#deployment}
If you want to deploy a Kubernetes Cluster to Azure, you have the choice to do it in two ways: either using the ["new" Portal](https://portal.azure.com), or using the [Azure Command Line Interface (Azure-CLI 2.0)](https://docs.microsoft.com/en-us/cli/azure/install-az-cli2). Both ways work - under the hood - in a similar fashion, by creating ARM templates and instantiating those into an Azure subscription. In the end you will get the same result: a resource group containing all the necessary IaaS components you need to run a Kubernetes cluster.
The main difference between the two deployment methods is that the command line is able to automatically create a Service Principal for the Kubernetes cluster, which has to be done in advance when deploying via the Portal. The command line will also let you specify an existing Service Principal, so that method is the more flexible one. Before you ask: the Service Principal is needed to be able to change infrastructure components during the runtime of the cluster. This is important for changing Load Balancer settings to expose services. In the future this may also be important for scaling Agents, but currently that's not (yet) implemented (more on that later).
* [Creating a cluster via the Portal](https://docs.microsoft.com/de-de/azure/container-service/container-service-deployment)
* [Creating a cluster via the CLI 2.0](https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough)
**Important Note**: Getting the Service Principal right is extremely important. Passing on wrong credentials (client ID and secret) will not make any error messages surface, but some things will simply not work (such as accessing the deployed cluster, ...). The Client ID is the Application ID of the Service Principal, and the Client Secret is a key created for that Application.
#### What you get {#whatyouget}
Using either of those ways you end up with a correctly configured Kubernetes cluster with a predefined number of Worker Agents and a single Master. The Agents' VM size can be selected from the available ones in Azure, whereas the Master VM is currently(?) set to the `Standard_D2` size (and cannot be changed).
The Master node also runs the `etcd` database, which is the place where Kubernetes stores all its data. This is a native install of `etcd` directly on the Master VM, and it thus uses the VM's Virtual Hard Drive (VHD) directly for storing its data. This choice is both good and bad: Kubernetes recommends running `etcd` in a clustered mode using three (or more) dedicated machines for really high availability, whereas ACS runs `etcd` on a single VM, not dedicated, but shared with the Kubernetes Master.
For many situations this may even be sufficient for production use in case you don't have very high demands on availability. If you can settle for a good MTTR instead of a high MTBF, this may be enough, as Azure by default stores its VHDs in a redundant and safe way. Adding a backup strategy for `etcd` may still be advisable though.
In short: instantiating a Kubernetes Cluster is very simple, and you get a fairly decent Kubernetes deployment. Now let's get on to what actually works and what does not.
### What works well {#whatworkswell}
#### Standard use of `kubectl` {#kubectl}
ACS gives you a Kubernetes Deployment which works very much as a vanilla Kubernetes 1.4.6 deployment. This means you can use the standard `kubectl` tooling to deploy and change things on your Kubernetes cluster.
Microsoft do supply a specific version of `kubectl` via their `az acs kubernetes install-cli` command, but any already installed version (which is compatible with the deployed 1.4.6 server version) will work.
The Kubernetes components seem to be unchanged versions of Kubernetes, built without any additions; just the parts which are intended to be implemented by Cloud Providers are Azure specific. This is in contrast to e.g. Rancher.io, which deploys a special version of Kubernetes containing tweaks to make it run on Rancher. This is a really good thing on Azure, as it means that an upgrade will be much more likely to succeed than if there were a need to repackage Kubernetes for a specific runtime.
#### Ingress Configurations (e.g. nginx) {#ingress}
The above on `kubectl` also means that standard configurations of e.g. Ingress Controllers are much more likely to "just work", which is the case for current nginx Ingress Controllers (tested: 0.8.3). The tests I did using the plain Kubernetes nginx Ingress Controller 0.8.3 were all successful and behaved as advertised.
Ingress Controllers on Kubernetes are used to route traffic from the "border" of the Kubernetes Cluster to services on the inside. Typical use cases for ingress controllers are TLS termination and Host routing. This works out of the box with the Azure Kubernetes Cluster.
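To illustrate, a standard Ingress resource of the kind these tests exercised looks roughly like the following; the host, secret and backend service names are made up for the sketch. The nginx Ingress Controller picks it up and handles TLS termination and host routing:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: portal-ingress            # illustrative name
spec:
  tls:
  - hosts:
    - api.example.com             # illustrative host
    secretName: api-example-tls   # TLS cert/key stored as a Kubernetes secret
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service # illustrative backend service
          servicePort: 80
```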
#### Automatic Load Balancer Configuration {#lbconfig}
Hand in hand with the ingress configuration comes the exposure of ports on the cluster via external load balancers. This is a Kubernetes concept which needs to be taken care of by the Cloud Provider which runs the cluster: a special kind of service is defined to expose a port to the public internet, and the cloud provider (here: Azure) needs to take care of the load balancing of the port. This works very well, both with plain services and with ingress controllers: after exposing the service, it takes around 2-3 minutes until a route from the public load balancer to the service is built up and announced via `kubectl`, just as it's defined to work in the official Kubernetes documentation. I like!
What this does is very basic though: the Azure LB does **not** take care of TLS termination or Host routing; that has to be done via an Ingress Controller. This may change in the future; Azure does have corresponding functionality in the "Application Gateway", which is being considered for adaptation to Kubernetes. But it's not quite there yet.
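For reference, this is the vanilla Kubernetes way of requesting such a load-balanced port; there is nothing Azure-specific in the manifest (names, ports and labels are illustrative), and Azure configures the public load balancer accordingly:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb          # illustrative name
spec:
  type: LoadBalancer              # Azure provisions and wires up the public LB rule
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: nginx-ingress            # illustrative pod label
```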
#### Combining with other Kubernetes Components {#prometheus}
The Azure Kubernetes Cluster being a standard deployment of Kubernetes enables you to use other prepackaged standard tooling adapted to Kubernetes, such as [Prometheus](https://prometheus.io). A vanilla Prometheus installation is out of the box able to scrape all usual and useful metrics from the master, the agents and the containers which run on the agents, without any kind of advanced configuration. This makes it fairly easy to set up monitoring of the Kubernetes cluster, as everything behaves just as it was intended by the Kubernetes folks.
### What does not work well (yet) {#whatworkswellnot}
There are some things which do not work that well yet though, and those should also be mentioned. Kubernetes on Azure is still in "Preview", and is not deemed to be ready for production workloads just yet. The following things are the ones which stick out.
#### Scaling Agent VMs {#scaling}
One of the most compelling features of container clusters is that it's very easy to scale out whenever your infrastructure peaks in terms of workload. Simply adding more Agents and scaling out the number of containers running on them is a basic use case.
Unfortunately, this is something which is not yet implemented for Kubernetes on Azure. Using either the Portal or the CLI 2.0, it is not possible to scale the number of agent VMs. It's exposed both in the Portal UI and in the command line interface (`az acs scale`), but it just does not work. I suspect this will be implemented very soon though.
Underneath the hood in Azure there are multiple ways of implementing such a thing, and the newest Azure feature for this would be [VM Scale Sets (VMSS)](https://azure.microsoft.com/en-us/services/virtual-machine-scale-sets/). The `acs` templates currently do **not** use this feature, and I suspect this is why scaling is not yet implemented: doing it with VMSS is supposedly a lot easier and more flexible, but it takes some time to port the current templates to VMSS.
Other open issues include the support for multiple Agent Pools (e.g. for different types of workloads/containers; Kubernetes supports tagging agents/nodes and letting containers choose via "labels"). Agent Pools are exposed in the UI, which means that the infrastructure has been implemented, but the management part of Agent Pools does not seem to be fully implemented yet. It may very well be possible right now to define multiple agent pools if you write your own ARM templates, but there is no rock-solid guidance yet on how to do it.
#### Setting up H/A masters {#hamasters}
Another thing which is currently not supported by the deployment templates is the deployment of multiple masters (at least this is not possible via the Portal UI). This has the effect that it is not possible to set up a clustered `etcd` environment using the templates, and that the master nodes will not be highly available.
As mentioned above, this may or may not be an issue for you (depending on your workload), and I suspect this will be addressed in the near future. It's not a real technical issue: Azure provides all the necessary infrastructure to enable this; it's just a question of crafting suitable templates. The same applies to clustered `etcd` deployments: it's not actually difficult to create an ARM template for such a deployment, it's just not done yet.
#### Upgrading Kubernetes (e.g. to 1.5.1) {#upgrading}
The Kubernetes cluster you get is currently a 1.4.6 cluster. This is not bad - it's a stable version - but currently you can't influence it in a convenient way: it's 1.4.6 you get, period. Since Kubernetes on Azure is the standard distribution though, it's possible to do a manual upgrade of the Kubernetes version, as one of the [issues on Cole Mickens' `azure-kubernetes-status` repository](https://github.com/colemickens/azure-kubernetes-status/issues/15) suggests.
I raised an issue with Azure Support asking how this will be done once Kubernetes on Azure reaches GA (General Availability), and the answer I got was that it's a "reasonable expectation" that there will either be explicit support for automatically upgrading the Kubernetes components, **or** documentation on how to accomplish this manually, as part of the official documentation of the Azure Container Service. Both options are valid in my opinion.
#### Persistent Storage {#storage}
The last thing I investigated a bit more extensively was the topic of persistence with Kubernetes on Azure. There is explicit support for both Azure Disks and Azure Files baked into Kubernetes, so it looks like a no-brainer on Azure. But: unfortunately, that's far from true.
The most compelling option is to use the [Kubernetes Storage Class "Azure Disk"](http://kubernetes.io/docs/user-guide/persistent-volumes/#azure-disk). This is supposed to automatically provision disks on demand for use within Pods. I never managed to get this to work; it gets stuck in a "Pending" state, and does not provision anything. It may very well be that I'm too stupid to make this work, but that in itself is a data point (or it may only work on Kubernetes 1.5+). In principle, it should be possible to let Azure provision disks on the fly as soon as you need them (via the excellent [Persistent Volume (PV)/Persistent Volume Claim (PVC) mechanism](http://kubernetes.io/docs/user-guide/persistent-volumes/)).
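For context, a storage class and claim of the kind referred to here would look roughly like this on Kubernetes 1.4/1.5 (names are illustrative, optional provisioner parameters omitted); with dynamic provisioning working, the claim should get bound to a freshly provisioned Azure disk instead of staying in "Pending":

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: azure-standard            # illustrative name
provisioner: kubernetes.io/azure-disk
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim                  # illustrative name
  annotations:
    # pre-1.6 way of selecting a storage class
    volume.beta.kubernetes.io/storage-class: azure-standard
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```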
The other two options can be used in a Pod definition to [mount either an Azure Disk Volume or an Azure File Volume](http://kubernetes.io/docs/user-guide/volumes/#azurefilevolume) into a Pod. I tested mounting a VHD (Disk Volume) into a Pod, and this actually works (see the sketch after the list below). But there are several drawbacks:
* It takes around two minutes to mount the VHD into the Pod
* Upon destroying the Pod, it takes around ten minutes(!) to release the Azure Disk for re-use again
* Using the `azureDisk` volume ties a volume explicitly to a Pod, making it impossible to create provider independent configuration (like it would be possible using PVs and PVCs)
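Here is a minimal sketch of such a Pod definition; the VHD has to exist in your storage account already, and the disk name, URI and container image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: disk-test                 # illustrative name
spec:
  containers:
  - name: app
    image: nginx                  # placeholder image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    azureDisk:
      diskName: mydisk.vhd        # placeholder
      diskURI: https://mystorageaccount.blob.core.windows.net/vhds/mydisk.vhd   # placeholder
```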
The other option is using an Azure File Volume (which leverages SMB 2.1 or 3.0), but that also has the following severe drawback: [all files are created (hardcoded) with permission 777](https://github.com/kubernetes/kubernetes/issues/37005), which e.g. makes it impossible to run PostgreSQL against such a share.
In the end, [setting up your own NFS server](https://help.ubuntu.com/community/SettingUpNFSHowTo) (Kubernetes plays very well with NFS) or using any of the provider independent solutions might currently be the only reasonable solution. Possibly using [`StatefulSet`](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) in Kubernetes 1.5.x with an Azure Disk may be a valid option as well (didn't try that out). Azure can in principle solve a lot of these hard problems, but the solutions aren't yet that well integrated into ACS. I do expect this to get a lot better over time.
### Future Announced Work {#futurework}
Microsoft is handling everything regarding the state of Azure Container Service very openly, and this also means that one of the developers maintains a meta GitHub repository on the state of Kubernetes on Azure: [`azure-kubernetes-status`](https://github.com/colemickens/azure-kubernetes-status). On his to-do list you can currently find the following things:
* Leveraging Application Gateway for use as Ingress Controller (as mentioned above)
* Auto-Scaling Backends
* Support for VMSS (Virtual Machine Scale Sets)
* ... and more.
### tl;dr {#tldr}
Kubernetes on Azure already looks very promising. There are some rough edges, but it looks as if the Azure team is on the right path: they leverage standard solutions and do not introduce a lot of Azure-specific components, except where that's actually designed by the Kubernetes team to be handled in a vendor-specific way.
I will continue tracking this project and come back in a couple of months with an updated status!
### Links {#links}
* [Status of Kubernetes on Azure](https://github.com/colemickens/azure-kubernetes-status)
* [Best practices for software updates on Microsoft Azure IaaS](https://docs.microsoft.com/en-us/azure/security/azure-security-best-practices-software-updates-iaas)
* [Example for mounting an Azure VHD into a Pod](https://github.com/kubernetes/kubernetes/blob/master/examples/volumes/azure_disk/azure.yaml)
* [Kubernetes Documentation on Azure Disk (possible only as of 1.5.1)](http://kubernetes.io/docs/user-guide/persistent-volumes/#azure-disk)
* [Kubernetes Issue regarding Azure Disk timing issues](https://github.com/kubernetes/kubernetes/issues/35180)
* [StatefulSets in Kubernetes 1.5+](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/)
* [Microsoft Azure Container Service Engine - Kubernetes Walkthrough (`az` CLI deployment)](https://docs.microsoft.com/en-us/azure/container-service/container-service-kubernetes-walkthrough)

View File

@ -1,64 +0,0 @@
---
layout: page
title: Impressum
permalink: /impressum/
---
Holger Reinhardt
(responsible in accordance with § 55 (2) RStV)
### Address:
Haufe-Lexware GmbH & Co. KG
A company of the Haufe Group
Munzinger Straße 9
79111 Freiburg
**Phone**: 0761 898-0
**Fax**: 0761 898-3990
Limited partnership (Kommanditgesellschaft), registered office Freiburg
Register court Freiburg, HRA 4408
General partners: Haufe-Lexware Verwaltungs GmbH, registered office Freiburg,
Register court Freiburg, HRB 5557; Martin Laqua
Management: Isabel Blank, Markus Dränert, Jörg Frey, Birte Hackenjos, Randolf Jessl, Markus Reithwiesner, Joachim Rotzinger, Dr. Carsten Thies
Chairwoman of the advisory board: Andrea Haufe
Tax number: 06392/11008
VAT identification number: DE 812398835
### Public Relations:
Haufe-Lexware GmbH & Co. KG
Public Relations
Munzinger Straße 9
79111 Freiburg
Phone: 0761 898 3940
Fax: 0761 898 3900
E-mail: presse(at)haufe-lexware.com
### For questions, please contact:
Haufe Service Center GmbH
Munzinger Straße 9
79111 Freiburg
Phone: 0800 50 50 445 (toll-free)
Fax: 0800 50 50 446
E-mail: service(@)haufe-lexware.com
-
All rights reserved, including those of partial reprinting, photomechanical reproduction (including microcopy) and evaluation by databases, including feeding into and processing in electronic systems; the same applies to multimedia data (audio, images, programs, etc.). All information/data is provided to the best of our knowledge, but without guarantee of completeness or correctness. The use of your address data and its processing by neutral service providers is carried out by Haufe-Lexware GmbH & Co. KG and its affiliated and associated companies exclusively for this purpose and in strict compliance with data protection law.

View File

@ -1,6 +1,6 @@
---
layout: page
description: "Dev Blog by Haufe Group"
description: "creating new local low-carbon collaborations across Scotland"
---
{% for post in paginator.posts %}
@ -24,7 +24,8 @@ description: "Dev Blog by Haufe Group"
{% else %}
{% assign author_name = site.title %}
{% endif %}
<p class="post-meta">Posted by {{ author_name }} on {{ post.date | date: "%B %-d, %Y" }}</p>
<span class="post-meta">Posted by {{ author_name }} on {{ post.date | date: "%B %-d, %Y" }}</span>
<p itemprop="description">{{ post.excerpt | strip_html }}</p>
</div>
<hr>
{% endfor %}

View File

@ -4,11 +4,12 @@ title: Resources
permalink: /resources/
---
### [API Style Guide](https://github.com/Haufe-Lexware/api-style-guide/blob/master/readme.md)
A List of rules, best practices, resources and our way of creating REST APIs in the Haufe Group. The style guide addresses API Designers, mostly developers and architects, who want to design an API.
### Which Kind of Sustainability?
### [Docker Style Guide](https://github.com/Haufe-Lexware/docker-style-guide/blob/master/README.md)
A set of documents representing mandatory requirements, recommended best practices and informational resources for using Docker in official (public or internal) Haufe products, services or solutions.
The "Finding Common Ground" project aims to develop new knowledge about collaborations across different kinds of low-carbon and sustainability groups. There are a huge variety of ways to care for the earth: growing food, reducing waste, participating in the sharing economy, protecting animals amd habitats, and many different kinds of groups which combine and emphasise different practices and strategies. By cultivating a better understanding of how groups like permaculture, transition, or eco-congregations work, we thing new kinds of regional collaborations can be unlocked. Never heard of an Eco-Congregation? Transition town? No problem! On this site, we will be gathering interviews, statistics, and research into all kinds of sustainability, keep an eye out for more here soon!
### [Design Style Guide](http://do.haufe-group.com/goodlooking-haufe/)
A set of design kits and style guides for the Haufe brands: [Haufe](http://do.haufe-group.com/goodlooking-haufe/), [Lexware](http://do.haufe-group.com/goodlooking-lexware/), and [Haufe Academy](http://do.haufe-group.com/goodlooking-haufe-akademie/)
### Participatory Workshops
We love participatory workshops, and firmly believe that everyone has an important role to play in the research process. Curious to know more about this style of research? Here are a few of our favorite books:
- Open Space Technology: A User's Guide

9
workshops.md Normal file
View File

@ -0,0 +1,9 @@
---
layout: page
title: Workshops
permalink: /workshops/
---
### Want to find common ground in your area?
The most important piece of this project consists of regional workshops across Scotland which we will be running in Spring 2017. If you would like us to come and run a workshop in your area, we'd be delighted to respond to an invitation. Just email [Jeremy](mailto:j.kidwell@bham.ac.uk) and we will take it from there!