Merge remote-tracking branch 'refs/remotes/Haufe-Lexware/master'

This commit is contained in:
Marco Seifried 2016-06-03 12:03:04 +02:00
commit b393f25183
44 changed files with 1891 additions and 13 deletions

2
CNAME
View File

@@ -1 +1 @@
dev.haufe-lexware.com
dev.haufe.com

View File

@@ -57,4 +57,62 @@ $ export https_proxy=https://10.12.1.236:8083
The short version of this is: It's complicated, and not actually advisable.
The most promising path to doing this is most probably to set up a Linux VM and do it from there; that involves setting up ruby correctly, which may also be challenging, but it's still a lot simpler (and more supported) than directly on Windows.
But you can try this:
### Setting up jekyll using docker
**Note**: This will work both on Windows and Mac OS X, in case you do not want to "pollute" your local machine with ruby packages.
If you have a working `docker` setup on your machine, you can use the prepackaged docker image by the jekyll team to try out the blog generation using that image.
Pull the `jekyll/jekyll:pages` image to get something which behaves almost exactly like (or at least very close to) the GitHub Pages generation engine:
```sh
$ docker pull jekyll/jekyll:pages
```
Inside the docker Quickstart terminal, `cd` into your `Haufe-Lexware.github.io` fork containing your changes, and then issue the following command:
```sh
$ docker run --rm --label=jekyll --volume=$(pwd):/srv/jekyll \
-it -p $(docker-machine ip `docker-machine active`):4000:4000 \
jekyll/jekyll:pages
```
If everything works out, the jekyll server will serve the blog preview on `http://<ip of your docker machine>:4000`. More information on running jekyll inside docker can be found here: [github.com/jekyll/docker](https://github.com/jekyll/docker).
### Setting up jekyll using Kitematic
If you are working with Kitematic (which has fewer proxy issues behind company firewalls than the Quickstart terminal), follow these steps:
First make sure the local copy of your Haufe-Lexware.github.io clone is located somewhere under your documents folder, for example:
`C:\Users\<username>\Documents\GitHub\Haufe-Lexware.github.io`
In Kitematic, click on the "DOCKER CLI" button (lower left), which opens a PowerShell window.
Pull the `jekyll/jekyll:pages` image:
`> docker pull jekyll/jekyll:pages`
In this environment, you cannot use the shell substitutions `$(pwd)` or `$(docker-machine ...)`, so you need to enter two things explicitly:
- The path to your local repository in the following format, for example:
`/c/Users/<username>/Documents/GitHub/Haufe-Lexware.github.io`
- The IP of your docker VM. To get this, enter
`> docker-machine ip`
Now enter the following to compile the project and start the web server:
`> docker run --rm --label=jekyll --volume=/c/Users/<username>/Documents/GitHub/Haufe-Lexware.github.io:/srv/jekyll -it -p 192.168.99.100:4000:4000 jekyll/jekyll:pages`
(replace the path and IP with your own values)
The web server should now be running, so point your browser at `http://<ip>:4000` to see the results. When finished, shut down the web server with `^C` in the PowerShell window.

View File

@@ -3,7 +3,7 @@
#
# Your site
title: Design + Dev + Ops at haufe-lexware.de
title: Design, Dev and Ops at haufe-lexware.com
description: The Development, Design and Operations Blog from Haufe-Lexware
headline: Silicon Black Forest
header-img: images/bg-home.jpg
@@ -44,7 +44,7 @@ google_analytics: UA-70047300-1
# Your website URL (e.g. http://barryclark.github.io or http://www.barryclark.co)
# Used for Sitemap.xml and your RSS feed
url: http://dev.haufe-lexware.com
url: http://dev.haufe.com
# If you're hosting your site at a Project repository on GitHub pages
# (http://yourusername.github.io/repository-name)
@@ -78,6 +78,7 @@ gems:
- jekyll-sitemap # Create a sitemap using the official Jekyll sitemap gem
- jekyll-paginate
- jekyll-feed
- jekyll-seo-tag
# Exclude these files from your production _site
exclude:

View File

@ -48,3 +48,18 @@ carol_biro:
email: carol.biro@haufe-lexware.com
github: birocarol
linkedin : carol-biro-5b0a5342
frederik_michel:
name: Frederik Michel
email: frederik.michel@haufe-lexware.com
github: FrederikMichel
twitter: frederik_michel
tora_onaca:
name: Teodora Onaca
email: teodora.onaca@haufe-lexware.com
github: toraonaca
twitter: toraonaca
eric_schmieder:
name: Eric Schmieder
email: eric.schmieder@haufe-lexware.com
github: EricAtHaufe
twitter: EricAtHaufe

View File

@@ -32,4 +32,5 @@
{% feed_meta %}
{% seo %}
</head>

View File

@@ -47,6 +47,8 @@ layout: default
{% assign author_content = author_content_temp %}
{% if author.twitter %}
{% capture author_twitter %}<a href="https://twitter.com/{{ author.twitter }}" target="_blank"><i class="fa fa-twitter-square">&nbsp;</i></a>{% endcapture %}
{% capture tweet_link %} by @{{ author.twitter }}{% endcapture %}
{% capture twitter_follow_author %}<a class="twitter-follow-button" data-show-count="false" href="https://twitter.com/{{ author.twitter }}">Follow @{{ author.twitter }}</a>{% endcapture %}
{% endif %}
{% if author.linkedin %}
{% capture author_linkedin %}<a href="https://www.linkedin.com/in/{{ author.linkedin }}" target="_blank"><i class="fa fa-linkedin-square"> </i></a>{% endcapture %}
@@ -79,6 +81,15 @@ layout: default
<!-- Post Content -->
<article>
<div class="container">
<div class="row">
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
<script src="//platform.linkedin.com/in.js" type="text/javascript"> lang: en_US</script>
<script type="IN/Share"></script>
<a href="https://twitter.com/share" class="twitter-share-button" data-text='"{{ page.title }}" blog post{{ tweet_link }}' data-via="HaufeDev">Tweet</a> <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script>
<a class="twitter-follow-button" data-show-count="false" href="https://twitter.com/HaufeDev">Follow @HaufeDev</a>
{{ twitter_follow_author }}
</div>
</div>
<div class="row">
<div class="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">

View File

@@ -20,12 +20,12 @@ But what if you already have your system, and it's grown over the years? How do
We decided to look at the current pain points and start with something that shows *immediate business results in a reasonably short timeframe*.
### Rough Idea
The team responsible for this platform has to develop, maintain and run the system. A fair amount of their time went into deploying environments for internal clients and help them get up and running. This gets even trickier when different clients use an environment for testing simultanously. Setting up a test environment from scratch - build, deploy, test - takes 5 man days. That's the reality we tried to improve.
The team responsible for this platform has to develop, maintain and run the system. A fair amount of their time went into deploying environments for internal clients and helping them get up and running. This gets even trickier when different clients use an environment for testing simultaneously. Setting up a test environment from scratch - build, deploy, test - takes 5 man days. That's the reality we tried to improve.
We wanted to have a one click deployment of our system per internal client directly onto Azure. Everything should be build from scratch, all the time and we wanted some automated testing in there as well.
We wanted to have a one click deployment of our system per internal client directly onto Azure. Everything should be built from scratch, all the time and we wanted some automated testing in there as well.
To make it more fun, we decided to fix our first go-live date to 8 working weeks later by hosting a public [meetup](http://www.meetup.com/de-DE/Timisoara-Java-User-Group/events/228106103/) in Timisoara and presenting what we did! The pressure (or fun, depending on your viewpoint) was on...
So time was an issue, we wanted to be fast to have something to work with. Meaning that we didn't spend much time in evaluating every little component we used but made sure we are flexible enough to change it easily - evolutionary refinement instead of initial perfection.
So time was an issue; we wanted to be fast and have something to work with. That meant we didn't spend much time evaluating every little component we used, but made sure we were flexible enough to change it easily - evolutionary refinement instead of initial perfection.
### How
Our guiding principles:
@@ -47,7 +47,7 @@ Main components we used:
The flow:
{:.center}
![go.cd Flow]( /images/automated-monolith/automated_monolith_flow.jpg){:style="margin:auto"}
[![go.cd Flow]( /images/automated-monolith/automated_monolith_flow.jpg)](http://dev.haufe.com/images/automated-monolith/automated_monolith_flow.jpg){:style="margin:auto"}
Let's first have a quick look at how go.cd works:
Within go.cd you model your workflows using pipelines. Those pipelines contain stages which you use to run jobs, which themselves contain tasks. Stages run in order, and if one fails, the pipeline stops. Jobs within a stage run in parallel; go.cd takes care of that.
@@ -153,4 +153,8 @@ Setting up a test environment now only takes 30 minutes, down from 5 days. And e
We also have a solid base we can work with - and we have many ideas on how to take it further. More testing will be included soon, like more code and security tests. We will include gates so that the pipeline only proceeds once the code has reached a certain quality or has improved in a certain way since the last test. We will not stop at automating the test environment, but look at our other environments as well.
All the necessary steps are in code, which makes them repeatable and fast. There is no dependency on anything else. This enables our internal clients to set up their personal environments on their own, in a fast and bulletproof way.
---
Update: You can find the slides of our talk [here](http://www.slideshare.net/HaufeDev/the-automated-monolith)

View File

@@ -0,0 +1,27 @@
---
layout: post
title: CQRS, Event Sourcing and DDD
subtitle: Notes from Greg Young's CQRS course
category: conference
tags: [microservice]
author: frederik_michel
author_email: frederik.michel@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
In these notes I would like to share my thoughts about the course on the [above-mentioned topics](http://lmgtfy.com/?q=greg+young+cqrs) that I took together with Rainer Michel and Raul-Andrei Firu in London. Over these three days last November, Greg Young used many practical examples from his career to explain the benefits especially of CQRS, and how it relates to things like Event Sourcing, which is a way of reaching eventual consistency.
### CQRS and DDD
So let's get to the content of the course. It all starts with some thoughts about Domain Driven Design (DDD), especially about how to get to a design. This included strategies for getting the information out of domain experts and how to arrive at a ubiquitous language between different departments. All in all, Greg pointed out that the software simply does not have to solve every problem there is, which is why the resulting domain model is nothing like the ERM that might come to mind when solving such problems. One should think more about the actual use cases of the software than about solving each and every corner case that just will not happen. He showed very interesting strategies for breaking up relations between domains in order to minimize the number of getters and setters used between them. At the end, Greg spoke briefly about Domain Services, which deal with logic that spans different aggregates and keeps those transactions consistent. More often than not, though, one should consider eventual consistency instead of such domain services, as the latter explicitly break the rule of not using more than one aggregate within one transaction. In this part Greg only touched on CQRS very briefly, describing it as a step on the way towards an architecture with eventual consistency.
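One of those strategies can be illustrated with a tiny, invented C# sketch (not from the course material): aggregates reference each other by identity only, instead of navigating object graphs across domain boundaries.

``` csharp
// Invented example: the Order aggregate holds only the customer's id.
// There is no Customer property to traverse, so no getter/setter chains across domains,
// and a transaction never has to touch more than the Order aggregate itself.
using System;

public class Order
{
    public Guid Id { get; private set; }
    public Guid CustomerId { get; private set; }   // reference by id, not by object

    public Order(Guid customerId)
    {
        Id = Guid.NewGuid();
        CustomerId = customerId;
    }
}
```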
### Event Sourcing
This topic was about applying event sourcing to a pretty common architecture: essentially a relational database with an OR-mapper on top and, above that, domains/domain services. On the other side there is a thin read model based on a database with 1st NF data. He showed that this architecture would eventually fail in production. The problem is keeping these instances in sync, especially after problems in production have occurred. In those cases it is occasionally very hard to get the read and write model back on the same page. To move this kind of architecture towards event sourcing, there has to be a transition to a more command-based communication between the components/containers within the architecture. This can generally be realized by introducing an event store which gathers all the commands coming from the frontend. This approach eventually leads to a point where the aforementioned 3rd NF database (which up to that point has been the write model) is dropped completely in favor of the event store. There are two reasons for this. First, the event store already holds all the information that is also present in the database. Second, and maybe more important, it stores more information than the database, as the latter generally just keeps the current state. The event store, on the other hand, also stores every event in between, which can be relevant for analyzing the data, reporting, and so on. What the architecture we ended up with also brings to the table is eventual consistency, as the command sent by the UI takes some time until it is reflected in the read model. The main point about eventual consistency is that the data in the read model is not false data; it might just be old data, which in most cases is not considered critical. However, there are cases where consistency is required. For these situations there are strategies to simulate consistency. This can be done by making the time the server needs to get the data into the read model smaller than the time the client needs to retrieve the data again. Mostly this is done by just telling the user that the changes have been received by the server, or the UI simply fakes the output.
To sum this up - the pros of an approach like this are especially the fact that every point in time can be restored (no data loss at all) and the possibility to pretend that the system still works even if the database is down (we just show the user that we received the message, and everything can be replayed when the database is up again). In addition, if a SEDA-like approach is used, it is very easy to monitor the solution and determine where the time-consuming processes are. One central point of this course was that we should by all means prevent widespread outages - meaning errors that make the complete application crash or stall, affecting many or all users.
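To make this a bit more concrete, here is a minimal, hypothetical C# sketch of an event-sourced aggregate (not taken from the course material; all type names are invented): state is rebuilt by replaying the stored events, commands only validate and append new events, and any read models catch up asynchronously - which is exactly where the eventual consistency described above comes from.

``` csharp
// Minimal event-sourcing sketch (illustration only, names invented):
// the aggregate is rebuilt by replaying its event stream, behaviour appends new events.
using System;
using System.Collections.Generic;

public abstract class DomainEvent { }

public class AmountDeposited : DomainEvent { public decimal Amount { get; set; } }
public class AmountWithdrawn : DomainEvent { public decimal Amount { get; set; } }

public class Account
{
    private readonly List<DomainEvent> _uncommitted = new List<DomainEvent>();
    public decimal Balance { get; private set; }

    // Rebuild the current state from the full history stored in the event store.
    public static Account LoadFrom(IEnumerable<DomainEvent> history)
    {
        var account = new Account();
        foreach (var e in history)
            account.Apply(e);
        return account;
    }

    public void Deposit(decimal amount)
    {
        Raise(new AmountDeposited { Amount = amount });
    }

    public void Withdraw(decimal amount)
    {
        if (amount > Balance)
            throw new InvalidOperationException("Insufficient funds");
        Raise(new AmountWithdrawn { Amount = amount });
    }

    private void Raise(DomainEvent e)
    {
        Apply(e);
        _uncommitted.Add(e);   // appended to the event store; read models update asynchronously
    }

    private void Apply(DomainEvent e)
    {
        if (e is AmountDeposited deposited) Balance += deposited.Amount;
        else if (e is AmountWithdrawn withdrawn) Balance -= withdrawn.Amount;
    }

    // Events still to be appended to the event store.
    public IEnumerable<DomainEvent> UncommittedEvents
    {
        get { return _uncommitted; }
    }
}
```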
### Process Managers
This topic was essentially about separation of concerns, in the sense that one should separate process logic from business logic. This should be done as much as possible, because the system can then easily be switched to a workflow engine in the longer run. Greg showed two ways of building a process manager. The first one just knows in which sequence the business logic has to be run and triggers each step one after the other. In the second approach, the process manager creates a list of the processes that should be run, in the correct order. It then hands this list to the first process, which passes it on to the next, and so forth. In this case the process logic lives in the list, or rather in the creation of the list.
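As a rough sketch of the first variant (a hypothetical example, not code from the course), the process manager below only encodes the order of the steps, while the business logic lives in the steps themselves:

``` csharp
// Hypothetical sketch (invented names) of the first variant: a process manager that owns
// only the process logic - it knows the order of the steps, the steps contain the business logic.
using System;
using System.Collections.Generic;

public interface IProcessStep
{
    void Execute(Guid orderId);
}

public class ReserveStock : IProcessStep
{
    public void Execute(Guid orderId) { /* business logic lives here */ }
}

public class ChargeCustomer : IProcessStep
{
    public void Execute(Guid orderId) { /* business logic lives here */ }
}

public class ShipOrder : IProcessStep
{
    public void Execute(Guid orderId) { /* business logic lives here */ }
}

public class OrderProcessManager
{
    // The sequence is the only knowledge this class has - no business rules in here.
    private readonly List<IProcessStep> _steps = new List<IProcessStep>
    {
        new ReserveStock(),
        new ChargeCustomer(),
        new ShipOrder()
    };

    public void Run(Guid orderId)
    {
        foreach (var step in _steps)
            step.Execute(orderId);
    }
}
```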
### Conclusion
Even though Greg sometimes switched pretty fast from very abstract thoughts to going deep into source code, the course was never boring - actually rather exciting and absolutely fun to follow. The different ways of approaching a problem were shown using very good examples - Greg really did a great job there. I can absolutely recommend this course to people wanting to know more about these topics. From my point of view this kind of strategy was very interesting, as I see many people trying to create the "perfect" piece of software, paying attention to cases that just won't happen or spending a lot of time on cases that happen very rarely, rather than defining them as known business risks.

View File

@@ -0,0 +1,176 @@
---
layout: post
title: Generating Swagger from your API
subtitle: How to quickly generate the swagger documentation from your existing API.
category: howto
tags: [api]
author: tora_onaca
author_email: teodora.onaca@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
If you already have an existing API and you just want to generate the swagger documentation from it, there are a couple of easy steps to make it work. First off, you should be familiar with Swagger and, in particular, with [swagger-core](https://github.com/swagger-api/swagger-core). Assuming that you coded your REST API using JAX-RS, there are several [guides](https://github.com/swagger-api/swagger-core/wiki/Swagger-Core-JAX-RS-Project-Setup-1.5.X) available, depending on your library of choice (Jersey or RESTEasy), to get you set up very fast.
In our case, working with RESTEasy, it was a matter of adding the maven dependency:
<dependency>
<groupId>io.swagger</groupId>
<artifactId>swagger-jaxrs</artifactId>
<version>1.5.8</version>
</dependency>
Note: please make sure to set the jar version to the latest one available, so that the latest bug fixes are included.
In order to hook up swagger-core in the application, there are multiple solutions, the easiest of which is to just use a custom `Application` subclass.
``` java
public class SwaggerTestApplication extends Application {
public SwaggerTestApplication() {
BeanConfig beanConfig = new BeanConfig();
beanConfig.setVersion("1.0");
beanConfig.setSchemes(new String[] { "http" });
beanConfig.setTitle("My API");
beanConfig.setBasePath("/TestSwagger");
beanConfig.setResourcePackage("com.haufe.demo.resources");
beanConfig.setScan(true);
}
@Override
public Set<Class<?>> getClasses() {
HashSet<Class<?>> set = new HashSet<Class<?>>();
set.add(Resource.class);
set.add(io.swagger.jaxrs.listing.ApiListingResource.class);
set.add(io.swagger.jaxrs.listing.SwaggerSerializers.class);
return set;
}
}
```
Once this is done, you can access the generated `swagger.json` or `swagger.yaml` at the location: `http(s)://server:port/contextRoot/swagger.json` or `http(s)://server:port/contextRoot/swagger.yaml`.
Note that the `title` element for the API is mandatory, so a missing one will generate an invalid swagger file. Any misuse of the annotations, or any existing bug in swagger-core, will have the same effect.
In order for a resource to be documented, other than including it in the list of classes that need to be parsed, it has to be annotated with @Api. You can check the [documentation](https://github.com/swagger-api/swagger-core/wiki/Annotations-1.5.X) for the existing annotations and use any of the described fields.
A special case that might give you some headaches is the use of subresources. The REST resource code usually goes something like this:
``` java
@Api
@Path("resource")
public class Resource {
@Context
ResourceContext resourceContext;
@GET
@Produces("application/json")
@ApiOperation(value = "Returns something")
public String getResource() {
return "GET";
}
@POST
@Produces("application/json")
public String postResource(String something) {
return "POST" + something;
}
@Path("/{subresource}")
@ApiOperation(value = "Returns a subresource")
public SubResource getSubResource() {
return resourceContext.getResource(SubResource.class);
}
}
@Api
public class SubResource {
@PathParam("subresource")
private String subresourceName;
@GET
@Produces("application/json")
@ApiOperation(value = "Returns subresource something")
public String getSubresource() {
return "GET " + subresourceName;
}
@POST
@Produces("application/json")
@ApiOperation(value = "Posts subresource something")
public String postSubresource(String something) {
return "POST " + subresourceName + something;
}
}
```
The swagger parser works like a charm if it finds the @Path, @GET and @POST annotations where it thinks they should be. In the case depicted above, the subresource is returned from the parent resource and does not have a @Path annotation at the class level. Older versions of swagger-core will generate an invalid swagger file, so please use the latest version for correct generation. If you want to make your life a bit harder and you have a path that goes deeper, something like /resource/{subresource}/{subsubresource}, things might get a bit more complicated.
In the Subresource class, you might have a @PathParam for holding the value of the {subresource}. The Subsubresource class might want to do the same. In this case, the generated swagger file will contain the same parameter twice, which results in an invalid swagger file. It will look like this:
parameters:
- name: "subresource"
in: "path"
required: true
type: "string"
- name: "subsubresource"
in: "path"
required: true
type: "string"
- in: "body"
name: "body"
required: false
schema:
type: "string"
- name: "subresource"
in: "path"
required: true
type: "string"
In order to fix this, use `@ApiParam(hidden=true)` for the subresource `@PathParam` in the `Subsubresource` class. See below.
``` java
@Api
public class SubSubResource {
@ApiParam(hidden=true)
@PathParam("subresource")
private String subresourceName;
@PathParam("subsubresource")
private String subsubresourceName;
@GET
@Produces("application/json")
@ApiOperation(value = "Returns subsubresource something")
public String getSomethingw() {
return "GET " + subresourceName + "/" + subsubresourceName;
}
@POST
@Produces("application/json")
@ApiOperation(value = "Posts subsubresource something")
public String postSomethingw(String something) {
return "POST " + subresourceName + "/" + subsubresourceName + " " +something;
}
}
```
There might be more tips and tricks that you will discover once you start using the annotations for your API, but the learning curve is not steep, and once you are familiar with swagger (both the spec and swagger-core) you will be able to document your API really fast.

View File

@@ -0,0 +1,30 @@
---
layout: post
title: SAP CodeJam on May 12th, 2016
subtitle: Calling all SAP ABAP developers in the Freiburg area
category: general
tags: [culture]
author: holger_reinhardt
author_email: holger.reinhardt@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
On Thursday, May 12th, 2016, it is that time again: we will be hosting another SAP CodeJam here at our office.
The topic: ABAP in Eclipse.
{:.center}
![SAP JAM]({{ site.url }}/images/sap_codejam.jpg){:style="margin:auto"}
An exciting event for all ABAP developers who are interested in the current development tools and want to understand where things are heading - a chance to gather hands-on experience with Eclipse as an IDE and to get an outlook on what is coming. You will work on your own notebook against the latest SAP NetWeaver stack (ABAP 7.50), running on an instance provided by SAP via AWS.
This invitation is not only addressed to Haufe-internal developers, but also to ABAP gurus from other companies in the region. Please forward it to ABAP developers at other companies.
Participation is free of charge via this [registration link](https://www.eventbrite.com/e/sap-codejam-freiburg-registration-24300920708).
There are 30 seats; first come, first served.
Best regards from the Haufe SAP team
PS: Yes, ABAP skills are required to participate.

View File

@@ -0,0 +1,205 @@
---
layout: post
title: API Management Components
subtitle: What's inside that API Management box?
category: general
tags: [cloud, api]
author: martin_danielsson
author_email: martin.danielsson@haufe-lexware.com
header-img: "images/bg-post-api.jpg"
---
### Introduction
API Management is one of the more hyped-up buzzwords you hear all over the place: at conferences, in various blog posts, in the space of the internet of things, containers and microservices. At first sight it looks like a brilliant idea, simple and easy, and indeed it is! But unfortunately not as simple as it might look when you draw up your first architectural diagrams.
### Where do we start?
We're accustomed to architecting large-scale systems, and we are trying to move in the microservice direction. It's tempting to put in API Management as one of the components used for encapsulating and insulating the microservices, in a fashion like this:
![API Management in front of "something"]( /images/apim-components/apim-as-a-simple-layer.png)
This definitely helps in laying out the deployment architecture of your system(s), but in many cases it falls short. If you are accustomed to introducing API Management components into your microservice architecture and you already have your blueprints in place, this may be enough; but to reach that point, you will need to do some more research on what you actually want to achieve with an API Management Solution (in short: "APIm").
### Common Requirements for an APIm
Another "problem" is that it's easy to just look at the immediate requirements for API Management solutions and compare to various solutions on the market. Obviously, you need to specify your functional requirements first and check whether they match to the solution you have selected; common APIm requirements are for example the following:
* Proxying and securing backend services
* Rate limiting/throttling of API calls
* Consumer identification
* API Analytics
* Self-service API subscriptions
* API Documentation Portals
* Simple mediations (transformations)
* Configurability over API (APIm APIs, so to say)
* Caching
These requirements are very diverse in nature, and not all of them are usually equally important. Nor is it always the case that all features are equally well covered by all APIm solutions, even if most solutions obviously try to cover them all to some extent. Some do this via an "all inclusive" type of offering, some have a more fine-granular approach.
In the next section, I will try to show which types of components usually can be found inside an API Management Solution, and where the interfaces between the different components are to be found.
### A closer look inside the box
If we open up that blue box simply called "API Management", we can find a plethora of sub-components, which may or may not be present and/or more or less feature-packed depending on the actual APIm solution you choose. The following diagram shows the most usual components inside APIm solutions on the market today:
![API Management Components]( /images/apim-components/apim-in-reality.png)
When looking at an API Management Solution, you will find that in most cases one or more components are missing in one way or the other, or some component is less elaborate than in other solutions. When assessing APIms, checking the different components can help you find out whether the APIm actually matches your requirements.
We will look at the following components:
* [API Gateway](#apigateway)
* [API Identity Provider (IdP)](#apiidp)
* [Configuration Database](#configdb)
* [Cache](#cache)
* [Administration UI](#adminui)
* [Developer Portal](#devportal)
* [Portal Identity Provider (IdP)](#portalidp)
* [Logging](#logging)
* [Analytics](#analytics)
* [Audit Log](#audit)
<a name="apigateway"></a>
#### API Gateway
The core of an APIm is quite obviously the API Gateway. It's the component of the APIm solution through which the API traffic is routed, and which is usually ensuring that the backend services are secured. Depending on the architecture of the APIm solution, the API Gateway can be more or less integrated with the Gateway Identity Provider ("API IdP" in the picture), which provides an identity for the consuming client.
APIm solution requirements usually focus on this component, as it's the core functionality. This component is always part of the APIm solution.
<a name="apiidp"></a>
#### API Identity Provider
A less obvious part of the APIm conglomerate is the API Identity Provider. Depending on your use case, you may only want to know which API Consumers are using your APIs via the API Gateway, or you may want full-featured OAuth support. Most vendors have direct support for API Key authentication (on a machine/application to API gateway basis), but not all have built-in support for OAuth mechanisms and/or allow plugging in OAuth support.
In short: Make sure you know what your requirements are regarding the API Identity Provider *on the API plane*; this is to be treated separately from the *API Portal users*, which may have [their own IdP](#portalidp).
<a name="configdb"></a>
#### Configuration Database
In most cases, the API Gateway draws its configuration from a configuration database. In some cases, the configuration is completely separated from the API Gateway, in others it's integrated into the API Gateway (this is especially true for SaaS offerings).
The configuration database may contain the following things:
* API definitions
* Policy rules, e.g. throttling settings, Access Control lists and similar
* API Consumers, if not stored separately in the [API IdP](#apiidp)
* API Portal Users, if not separately stored in an [API Portal IdP](#portalidp)
* API Documentation, if not stored in separate [portal](#devportal) database
The main point to understand regarding the configuration database is that in most cases, the API Gateway and/or its corresponding datastore is a stateful service which carries information which is not only coming from source code (policy definitions, API definitions and such things), but also potentially from users. Updating and deploying API management solutions must take this into account and provide for migration/upgrade processes.
<a name="cache"></a>
#### Cache
When dealing with REST APIs, it is often useful to have a dedicated caching layer. Some (actually most) APIm solutions provide such a component out of the box, while others do not. How caches are incorporated varies between the different solutions, ranging from pure `varnish` installations to key-value stores such as redis or similar. Different systems have different approaches to how and what is cached during API calls, and which kinds of calls are cacheable.
It is worth paying attention to which degree of automation is offered, and to what extent you can customize the behaviour of the cache, e.g. depending on the value of headers or `GET` parameters. What you need obviously depends highly on your requirements. In some situations you will not care about the caching layer being inside the APIm, but for high throughput this is definitely worth considering, to be able to answer requests as high up in the chain as possible.
<a name="adminui"></a>
#### Administration UI
In order to configure an APIm, many solutions provide an administration UI for the API Gateway. In some cases (like with [Mashape Kong](http://www.getkong.org)), there isn't any administration UI, only an API to configure the API Gateway itself. But usually there is some kind of UI which helps you configure your Gateway.
The Admin UI can incorporate many things from other components, such as administering the [API IdP](#apiidp) and [Portal IdP](#portalidp), or viewing [analytics information](#analytics), among other things.
<a name="devportal">
#### Developer Portal
The Developer Portal is, in addition to the API Gateway, what you usually think about when talking about API Management: the API Developer Portal is the place you as a developer go to when looking for information on an API. Depending on how elaborate the Portal is, it will let you do things like:
* View API Documentation
* Read up on How-tos or best practices documents
* Self-sign up for use of an API
* Interactively try out an API using your own credentials ([Swagger UI](http://swagger.io/swagger-ui/) style)
Not all APIm systems actually provide an API Portal, and for quite a few use cases (e.g. mobile API gateways, pure website APIs) it's not even needed. Some systems, especially SaaS offerings, provide a fully featured Developer Portal out of the box, while others only have very simple portals, or even none at all.
Depending on your own use case, you may need one or multiple instances of a Developer Portal. It's normal practice that an API Portal is tied to a single API Gateway, even if there are some solutions which allow more flexible deployment layouts. Checking your requirements on this point is important to make sure you get what you expect, as Portal feature sets vary wildly.
<a name="portalidp"></a>
#### Portal Identity Provider
Using an API Developer Portal (see above) usually requires the developer to sign in to the portal using some kind of authentication. This is what's behind the term "Portal Identity Provider", as opposed to the IdP which is used for the actual access to the API (the [API IdP](#apiidp)). Depending on your requirements, you will want to enable logging in using
* Your own LDAP/ADFS instance
* Social logins, such as Google, Facebook or Twitter
* Developer logins, such as BitBucket or GitHub.
Most solutions will use those identities to federate to an automatically created identity inside the API Portal; i.e. the API Developer Portal will link their Portal IdP users with a federated identity and let developers use those to log in to the API Portal. Usually, enabling social or developer logins will require you to register your API Portal with the corresponding federated identity provider (such as Google or Github). Adding Client Secrets and Credentials for your API Portal is something you will want to be able to do, depending on your requirements.
<a name="logging"></a>
#### Logging
Another puzzle piece in APIm is the question of how to handle logging, as logs can be emitted by most APIm components separately. Most solutions do not offer an out-of-the-box answer for this (I haven't actually found any APIm with logging functionality at all), but most allow for plugging in any kind of log aggregation mechanism, such as [log aggregation with fluentd, elastic search and kibana](/log-aggregation).
Depending on your requirements, you will want to look at how to aggregate logs from the at least following components:
* API Gateway (API Access logs)
* API Portal
* Administration UI (overlaps with [audit logs](#audit))
You will also want to verify that you don't introduce unnecessary latencies when logging, e.g. by using queueing mechanisms close to the log emitting party.
<a name="analytics"></a>
#### The Analytics Tier
The area "Analytics" is also something where the different APIm solutions vary significantly in functionality, when it's present at all. Depending on your requirements, the analytics can be handled when looking at logging, e.g. by leveraging elastic search and kibana, or similar approaches. Most SaaS offerings have pre-built analytics solutions which offer a rich variety of statistics and drill-down possibilites without having to put in any extra effort. Frequent analytics are the following:
* API Usage by API
* API Calls
* Bandwidth
* API Consumers by Application
* Geo-location of API users (mobile applications)
* Error frequency and error types (4xx, 5xx,...)
<a name="audit"></a>
#### The Audit Log
The Audit Log is a special case of logging, which may or may not be separate from the general logging components. The Audit log stores changes done to the configuration of the APIm solution, e.g.
* API Configuration changes
* Additions and deletions of APIm Consumers (clients)
* Updates of API definitions
* Manually triggered restarts of components
* ...
Some solutions have built-in auditing functionality, e.g. the AWS API Gateway has this type of functionality. The special nature of audit logs is that such logs must be tamper-proof and must never be changeable after the fact. In case of normal logs, they may be subject to cleaning up, which should not (so easily) be the case with audit logs.
### API Management Vendors
{:.center}
![API Management Providers]( /images/apim-components/apim-providers.png){:style="margin:auto"}
Incomplete list of API Management Solution vendors:
* [3scale](https://www.3scale.net)
* [Akana API Management](https://www.akana.com/solution/api-management)
* [Amazon AWS API Gateway](https://aws.amazon.com/api-gateway)
* [API Umbrella](https://apiumbrella.io)
* [Axway API Management](https://www.axway.com/en/enterprise-solutions/api-management)
* [Azure API Management](https://azure.microsoft.com/en-us/services/api-management/)
* [CA API Gateway](http://www.ca.com/us/products/api-management.html)
* [Dreamfactory](https://www.dreamfactory.com)
* [IBM API Connect](http://www-03.ibm.com/software/products/en/api-connect)
* [Mashape Kong](https://getkong.org)
* [TIBCO Mashery](http://www.mashery.com)
* [Tyk.io](https://tyk.io)
* [WSO2 API Management](http://wso2.com/api-management/)
---
<small>
The [background image](/images/bg-post-api.jpg) was taken from [flickr](https://www.flickr.com/photos/rituashrafi/6501999863) and adapted using GIMP. You are free to use the adapted image according the linked [CC BY license](https://creativecommons.org/licenses/by/2.0/).
</small>

View File

@@ -0,0 +1,202 @@
---
layout: post
title: How to use an On-Premise Identity Server in ASP.NET
subtitle: Log in to an ASP.NET application with ADFS identity and check membership in specific groups
category: howto
tags: [cloud]
author: Robert Fitch
author_email: robert.fitch@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
This article shows you how to develop an ASP.NET application to:
- Log in with an on-premise ADFS Identity
- Check whether the user belongs to a given group (for example, a certain mailing list)
# Prepare the Project
## Create ##
Create a new ASP.NET Web Application, for example:
{:.center}
![]( /images/adfs-identity/pic26.jpg){:style="margin:auto"}
On the next page, select MVC, then click on "Change Authentication":
{:.center}
![]( /images/adfs-identity/pic27.jpg){:style="margin:auto"}
You will be sent to this dialog:
{:.center}
![]( /images/adfs-identity/pic28.jpg){:style="margin:auto"}
- Select **Work and School Accounts**
- Select **On-Premises**
- For the **On-Premises Authority**, ask IT for the public URL of your FederationMetadata.xml on the identity server, e.g.
`https://xxxxxxxxxx.com/FederationMetadata/2007-06/FederationMetadata.xml`
- For the **App ID URI**, you must enter an identifier for your app. This is not a real URL address, just a unique identifier, for example `http://haufe/mvcwithadfs`.
**Important:** The **App ID URI** identifies your app with the on-premise ADFS identity server. This same App ID must be registered on the ADFS identity server by IT as a **Relying Party Trust** identifier (sometimes known as **Realm**), so that the server will accept requests.
Finish up the project creation process.
## Edit some Settings
Make sure that the project is set to run as HTTPS:
{:.center}
![]( /images/adfs-identity/pic29.jpg){:style="margin:auto"}
Compile the project.
## The authentication code ##
If you are wondering where all of the authentication code resides (or if you need to modify an existing project!), here are the details:
The App ID URI and the On-Premises Authority URL are stored in the `<appSettings>` node of web.config:
~~~xml
<add key="ida:ADFSMetadata" value="https://xxxxxxxxxx.com/FederationMetadata/2007-06/FederationMetadata.xml" />
<add key="ida:Wtrealm" value="http://haufe/mvcwithadfs" />
~~~
And the OWIN-Code to specify the on-premise authentication is in `Startup.Auth.cs`:
~~~csharp
public partial class Startup
{
private static string realm = ConfigurationManager.AppSettings["ida:Wtrealm"];
private static string adfsMetadata = ConfigurationManager.AppSettings["ida:ADFSMetadata"];
public void ConfigureAuth(IAppBuilder app)
{
app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);
app.UseCookieAuthentication(new CookieAuthenticationOptions());
app.UseWsFederationAuthentication(
new WsFederationAuthenticationOptions
{
Wtrealm = realm,
MetadataAddress = adfsMetadata
});
}
}
~~~
# Configure the On-Premise Identity Server (Job for IT) #
On the identity server, these are the critical configuration pages for a new **Relying Party Trust**.
## Identifiers ##
{:.center}
![]( /images/adfs-identity/pic31.jpg){:style="margin:auto"}
**Display Name:** This is the name under which IT sees the **Relying Party Trust**.
**Relying Party identifiers:** This is a list of relying party identifiers, known on "our" ASP.NET side as **App ID URI**. The only important one is the **App ID URI** we assigned to our app when creating it. On this screen, you also see `https://localhost:44306`. This was automatically set by the Relying Party Trust Wizard when it asked for the first endpoint, since it assumed that the endpoint is also a default identifier. But since we specified a custom **App ID URI** (which gets transmitted by the user's browser), the `http://haufe/mvcwithadfs` entry is the only one which really matters.
## Endpoints ##
{:.center}
![]( /images/adfs-identity/pic32.jpg){:style="margin:auto"}
This is the page which lists all browser source endpoints which are to be considered valid by the identity server. Here you see the entry which comes into play while we are debugging locally. Once your application has been uploaded to a server, e.g. Azure, you must add the new endpoint, e.g.:
`https://xxxxxxxxxx.azurewebsites.net/`
(not shown in the screen shot)
## Claim Rules ##
**Issuance Authorization Rules**
{:.center}
![]( /images/adfs-identity/pic33.jpg){:style="margin:auto"}
**Issuance Transform Rules**
This is where we define which identity claims will go out to the requesting application.
Add a rule named e.g. **AD2OutgoingClaims**
{:.center}
![]( /images/adfs-identity/pic34.jpg){:style="margin:auto"}
and edit it like this:
{:.center}
![]( /images/adfs-identity/pic35.jpg){:style="margin:auto"}
The last line is the special one (the others being fairly standard). The last line causes AD to export every group that the user belongs to as a role, which can then be queried on the application side.
# Run #
At this point, the app can be compiled and will run. You can log in (or you might be automatically logged in if you are running from a browser in your company's domain).
# Check Membership in a certain Group #
Because we have configured the outgoing claims to include a role for every group that the user belongs to, we can now check membership. We may, for example, want to limit a given functionality to members of a certain group.
## Create an Authorizing Controller ##
You may create a controller with the Authorize attribute like this:
~~~csharp
[Authorize]
public class RoleController : Controller
{
}
~~~
The **Authorize** attribute forces the user to be logged in before any requests are routed to this controller. The login dialog will be opened automatically if necessary.
It is also possible to use the **Authorize** attribute not on the entire controller, but just on those methods which need authorization.
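For example (a hypothetical controller, assuming the same setup as above), the attribute can be applied per method and optionally restricted to specific roles:

~~~csharp
public class ReportController : Controller
{
    // Reachable without logging in.
    public ActionResult Index()
    {
        return View();
    }

    // Requires login and membership in the "_Architects" group (exported as a role, see above).
    [Authorize(Roles = "_Architects")]
    public ActionResult ArchitectureReport()
    {
        return View();
    }
}
~~~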
Once inside a controller (or method) requiring authorization, you have access to the security information of the user. In particular, you can check membership in a given role (group) like this:
~~~csharp
if (User.IsInRole("_Architects")
{
// do something
}
else
{
// do something else
}
~~~
Within a `cshtml` file, you may also want to react to user membership in a certain role. One way to do this is to bind the cshtml file to a model class which contains the necessary boolean flags. Set those flags in the controller, e.g.:
~~~csharp
model.IsArchitect = User.IsInRole("_Architects");
~~~
Pass the model instance to the view, then evaluate those flags in the cshtml file:
~~~csharp
@if (Model.IsArchitect)
{
<div style="color:#00ff00">
<text><b>Yes, you are in the Architect group.</b></text>
</div>
}
else
{
<div style="color:#ff0000">
<text><b>No, you are not in the Architect group.</b></text>
</div>
}
~~~
Instead of using flags within the data binding model, it may be easier to have the controller just assign a property to the ViewBag and evaluate the ViewBag in the cshtml file.
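A minimal sketch of that ViewBag variant (same role check as above, no extra model class needed):

~~~csharp
// In the controller action:
ViewBag.IsArchitect = User.IsInRole("_Architects");
~~~

and in the view:

~~~csharp
@if (ViewBag.IsArchitect)
{
    <text><b>Yes, you are in the Architect group.</b></text>
}
~~~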

View File

@@ -0,0 +1,60 @@
---
layout: post
title: IRC and the Age of Chatops
subtitle: How developer culture, DevOps and UX are influenced by the renaissance of IRC
category: general
tags: [culture, devops, cto]
author: holger_reinhardt
author_email: holger.reinhardt@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
Since April 8th Haufe Group has a new group-wide tool to facilitate internal communication between individuals and teams: [Rocket.chat](https://rocket.chat). If you have heard about Slack, then Rocket.chat is just like it.
### What is it
Rocket.chat is a group chat tool you can use to communicate internally in projects, exchange information on different topics in open channels and integrate tooling via bots. If you were around for the beginning of the internet, it's like IRC but with history. If you know Slack… then it's exactly like that.
### Another tool?
… but we already have so many!
We know. But Slack has taken the software industry by storm over the last 3 years. We felt that IRC-style communication fits into a niche where social tools don't. We experimented with Slack and many of us loved it, so we used it daily. We got a lot of good feedback from our Slack pilot over the last year, and already more than 100 colleagues registered in the first 24h after our Rocket.chat instance went live.
If you are curious why we felt the need to support this very distinct form of communication, you might find some interesting information and ideas in the following articles:
* [Modelling mediums of communication](http://techcrunch.com/2015/04/07/modeling-mediums-of-communication/)
* [IRC - The secret to success of Open Source](https://developer.ibm.com/opentech/2015/12/20/irc-the-secret-to-success-in-open-source/)
* [Is Slack the new LMS](https://medium.com/synapse/is-slack-the-new-lms-7d1c15ff964f#.m6r5c1b31)
IRC-style communication has been around since the dawn of the Internet and continues to draw a large group of active users. As we strive to create an open and collaborative culture at Haufe, we felt that there was a need to complement the linear social-media style form of communication of something like Yammer with an active IRC-style chat model. As mentioned above, IRC style chat seems to encourage the active exchange of knowledge and helps us in creating a [learning organisation](https://en.wikipedia.org/wiki/Learning_organization).
But there is more. Based on the phenomenal success of Slack in the software industry, companies are starting to experiment with Chatops as a new take on devops:
* [What is Chatops](https://www.pagerduty.com/blog/what-is-chatops/)
* [Chatops Adoption Guide](http://blogs.atlassian.com/2016/01/what-is-chatops-adoption-guide/)
And last but not least, there is even a trend in the UX community to leverage chat (or so called `conversational interfaces`) as a new User Experience paradigm:
* [On Chat as an interface](https://medium.com/@acroll/on-chat-as-interface-92a68d2bf854#.vhtlcvkxj)
* [The next phase of UX is designing chatbots](http://www.fastcodesign.com/3054934/the-next-phase-of-ux-designing-chatbot-personalities)
Needless to say, we felt that there is not just a compelling case for a tool matching the communication needs of our developer community, but even more a chance to experience first hand through our daily work some of the trends shaping our industry.
### So why not Slack
I give Slack full credit for reimagining what IRC can look like in the 21st century. But for our needs as a forum across our developer community it has two major drawbacks. The price tag would rise very quickly if we rolled it out across our entire company. And even more importantly, we could not get approval from our legal department due to Germany's strict data privacy rules.
Rocket.chat, on the other hand, is Open Source and we are hosting it in our own infrastructure. We are keeping costs extremely low by having operations completely automated (which has the welcome side effect of giving our ops team a proving ground to support our Technology Strategy around Docker and CI/CD). And we got full approval from our legal department on top.
### How to use it?
We don't have many rules, and we hope we don't have to. The language tends to be English in open channels and in #general (where everyone is by default). We strive to keep in mind that there might be colleagues who don't speak German. Beyond that we ask everyone to be courteous, open, helpful, respectful and welcoming, the same way we would want to be treated.
### Beyond chat
Chat and chat bots are very trendy this year; there is plenty of experimentation around leveraging them as a new channel for commerce, marketing, products, customers and services. Microsoft, Facebook, Slack: they are all trying it out. We now have the platform to do so as well, if we want to.
But don't take our word for it, check out the following links:
* [2016 will be the year of conversational commerce](https://medium.com/chris-messina/2016-will-be-the-year-of-conversational-commerce-1586e85e3991#.aathpymsh)
* [Conversational User Interfaces](http://www.wired.com/2013/03/conversational-user-interface/)
* [Microsoft to announce Chatbots](http://uk.businessinsider.com/microsoft-to-announce-chatbots-2016-3)
* [Facebook's Future in Chatbots](http://www.platformnation.com/2016/04/15/a-future-of-chatbots/)
Rocket.chat comes with a simple but good API and [a framework for building bots](https://github.com/RocketChat/hubot-rocketchat). We are already looking at integrating with our internal tools like Git, Confluence, Jira, Jenkins and Go.CD.

View File

@@ -0,0 +1,992 @@
---
layout: post
title: Secure Internet Access to an On-Premise API
subtitle: Connect an ASP.NET identity to an on-premise API login identity, then relay all requests through the Azure Service Bus
category: howto
tags: [cloud]
author: Robert Fitch
author_email: robert.fitch@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
This article shows you how to use the Microsoft Azure Service Bus to relay requests to an on-premise API through the internet in a secure manner.
# Preparation
You will need a Microsoft Azure account. Create a new "Service Bus Namespace" (in this example it is "HaufeMessageBroker"). Under the "Configure" tab, add a new shared access policy, e.g. "OnPremiseRelay":
{:.center}
![]( /images/secure-internet-access/pic43.jpg){:style="margin:auto"}
Use the namespace, the policy name, and the primary key in the code samples below.
# The On-Premise API
We make some assumptions about the on-premise API. These are not prerequisites in the sense that otherwise no access would be possible, but they should apply to many standard situations. It should also be fairly clear which aspects of the solution would have to be adapted to other situations.
- The on-premise API is an HTTP-REST-API.
- There is a Login-method, taking user name and password as parameters.
- The Login-method returns a Session-Id, which must be included in a header in all subsequent calls to the API to identify the user.
- The password is necessary in the on-premise API to identify the user, but it does not otherwise have an internal meaning.
- Counterexample: If the password is also necessary, for example, as credentials for a database login, then we have a problem.
- Reason: The solution binds an external identity (ASP.NET, Facebook, etc.) with the on-premise User-Id and allows the user to login with that identity, so the on-premise password is superfluous.
- Solution: If the on-premise password is actually necessary (e.g. for the database login), then it would have to be entered as part of or after the external login, which is of course possible but not really what we are hoping for in an SSO solution.
- The same API may be running on on-premise servers in different locations. For example a Lexware-API accessing the Lexware-Pro database would be running on every customer's server.
One easy way to create an on-premise API is using the self-host capabilities of ASP.NET with Owin. There are many how-tos available for doing this. However, this solution does not dictate how the on-premise API is to be implemented, and any one will do.
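For reference only, a minimal OWIN self-hosted Web API could look like the following sketch. This is not the actual Lexware API; it just illustrates the self-host approach using the Microsoft.AspNet.WebApi.OwinSelfHost package, and the port, route and types are invented to match the login method assumed above.

~~~csharp
// Minimal OWIN self-host sketch of an on-premise HTTP API.
// NuGet package: Microsoft.AspNet.WebApi.OwinSelfHost. Port, routes and types are examples.
using System;
using System.Web.Http;
using Microsoft.Owin.Hosting;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();
        config.MapHttpAttributeRoutes();   // picks up the [Route] attributes below
        app.UseWebApi(config);
    }
}

public class AccountController : ApiController
{
    // Hypothetical login method as assumed above: checks the credentials and returns a session id.
    [HttpPost, Route("account/v1/login")]
    public IHttpActionResult Login([FromBody] LoginRequest request)
    {
        // ... verify user name and password against the on-premise user store ...
        return Ok(new { sessionId = Guid.NewGuid().ToString() });
    }
}

public class LoginRequest
{
    public string UserName { get; set; }
    public string Password { get; set; }
}

public class Program
{
    public static void Main()
    {
        // In production this would typically run as a Windows service on the customer's server.
        using (WebApp.Start<Startup>("http://localhost:9000/"))
        {
            Console.WriteLine("On-premise API listening on http://localhost:9000/");
            Console.ReadLine();
        }
    }
}
~~~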
# Microsoft Azure Service Bus
The Azure Service Bus provides an easy way to access an on-premise WCF (Windows Communications Foundation) interface from any cloud server. Of course, we do not want to rewrite our entire business API to become a WCF Interface, so part of the solution is to develop a small and generic WCF Interface, which resides in a new on-premise service and simply relays HTTP request/response information to and from the actual on-premise API. This is the "On-Premise Relay Service" below.
We also need two ASP.NET applications running in the cloud:
1. An ASP.NET web site ("Identity Portal") where a user can create a web identity (possibly using another identity like Facebook), then connect that identity to the on-premise login of the API running on his/her company's server. For this one-time action, the user needs to:
- enter a Host Id, which is the identification of the on-premise relay service running at his/her company location. This is necessary to tell the Azure Service Bus which of the many existing on-premise relay services this user wants to connect to.
- enter his on-premise user name and password. These get relayed to the on-premise API to confirm that the user is known there.
- From this time on, the web identity is connected to a specific on-premise relay service and to a specific on-premise identity, allowing SSO-access to the on-premise API.
2. An ASP.NET WebApi ("Cloud Relay Service") allowing generic access via the Service Bus to the on-premise API. This means, for example, that an application which consumes the on-premise API only needs to change the base address of the API to become functional through the Internet.
- Example: A browser app, which previously ran locally and called the API at, say:
`http://192.168.10.10/contacts/v1/contacts`
can now run anywhere and call:
`https://lexwareprorelay.azurewebsites.net/relay/contacts/v1/contacts`
with the same results.
- The only difference is that the user must first login using his web credentials instead of his on-premise credentials. The application then gets a token which identifies the user for all subsequent calls. The token contains appropriate information (in the form of custom claims) to relay each call to the appropriate on-premise relay service.
So there are actually two relays at work, neither of which has any business intelligence; they simply route the HTTP requests and responses:
1. The ASP.NET WebApi "Cloud Relay Service", hosted in the cloud, which:
- receives an incoming HTTP request from the client, e.g. browser or smartphone app.
- converts it to a WCF request object, then relays this via the Azure Service Bus to the proper on-premise relay service.
- receives a WCF response object back from the on-premise relay service.
- converts this to a true HTTP response, and sends it back to the caller.
2. The "On-Premise Relay Service", which:
- receives an incoming WCF request object.
- converts it to a true HTTP request, then relays this to the endpoint of the on-premise API.
- receives the HTTP response from the on-premise API.
- converts it to a WCF response object and returns it via the Azure Service Bus to the ASP.NET WebApi "Cloud Relay Service".
In addition, there is the Azure Service Bus itself, through which the "Cloud Relay Service" and the "On-Premise Relay Service" communicate with each other.
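To make the shape of these relays a bit more concrete, here is a hypothetical sketch of the generic WCF contract and of how the on-premise side could register itself on the Service Bus. The namespace "HaufeMessageBroker" and policy "OnPremiseRelay" are the ones created above; all other names and the key are placeholders, and the actual implementation described in the code sections below is more elaborate.

~~~csharp
// Hypothetical sketch only: a generic WCF contract that carries HTTP requests/responses,
// plus the on-premise host registering itself on the Azure Service Bus relay.
// NuGet package: WindowsAzure.ServiceBus. Type names and the key are placeholders.
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;
using Microsoft.ServiceBus;

[DataContract]
public class RelayRequest
{
    [DataMember] public string Method { get; set; }                      // GET, POST, ...
    [DataMember] public string Path { get; set; }                        // e.g. "contacts/v1/contacts"
    [DataMember] public Dictionary<string, string> Headers { get; set; }
    [DataMember] public string Body { get; set; }
}

[DataContract]
public class RelayResponse
{
    [DataMember] public int StatusCode { get; set; }
    [DataMember] public Dictionary<string, string> Headers { get; set; }
    [DataMember] public string Body { get; set; }
}

[ServiceContract]
public interface IHttpRelay
{
    [OperationContract]
    RelayResponse Relay(RelayRequest request);   // no business logic, just pass-through
}

public class HttpRelayService : IHttpRelay
{
    public RelayResponse Relay(RelayRequest request)
    {
        // The real service would call the local API here (e.g. http://192.168.10.10/ + request.Path)
        // with an HttpClient and copy status code, headers and body into the response object.
        return new RelayResponse { StatusCode = 501, Body = "not implemented in this sketch" };
    }
}

public static class OnPremiseRelayHost
{
    public static ServiceHost Open(string hostId)
    {
        // "HaufeMessageBroker" and "OnPremiseRelay" match the namespace and shared access policy
        // created in the Azure portal above; the key has to be copied from the portal.
        // hostId identifies this particular on-premise installation on the Service Bus.
        var address = ServiceBusEnvironment.CreateServiceUri("sb", "HaufeMessageBroker", hostId);
        var tokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
            "OnPremiseRelay", "<shared access key>");

        var host = new ServiceHost(typeof(HttpRelayService));
        var endpoint = host.AddServiceEndpoint(typeof(IHttpRelay), new NetTcpRelayBinding(), address);
        endpoint.Behaviors.Add(new TransportClientEndpointBehavior { TokenProvider = tokenProvider });
        host.Open();
        return host;
    }
}
~~~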
# Sequence Diagrams
## On-Premise Solution
Here we see a local client logging in to the on-premise API, thereby receiving a session-id, and using this session-id in a subsequent call to the API to get a list of the user's contacts.
{:.center}
![]( /images/secure-internet-access/pic36.jpg){:style="margin:auto"}
## One-Time Registration
This shows registration with the Identity Portal in two steps:
1. Create a new web identity.
2. Link that web identity to a certain on-premise API and a certain on-premise user id.
*(Please right-click on the image and choose "open in new tab" to see it in better resolution)*
{:.center}
![]( /images/secure-internet-access/pic37.jpg){:style="margin:auto"}
After this process, the identity database contains additional information linking the web identity to a specific on-premise API (the "OnPremiseHostId") and to a specific on-premise identity (the "OnPremiseUserId"). From now on, whenever a client logs in to the ASP.NET Cloud Relay with his/her web credentials, this information will be added to the bearer token in the form of claims.
## Client now uses the Cloud Relay Service
Now the client activity shown in the first sequence diagram looks like this:
*(Please right-click on the image and choose "open in new tab" to see it in better resolution)*
{:.center}
![]( /images/secure-internet-access/pic38.jpg){:style="margin:auto"}
What has changed for the client?
- The client first logs in to the ASP.NET Cloud Relay:
`https://lexwareprorelay.azurewebsites.net/api/account/externallogin` using its web identity credentials
- The client then logs in to the on-premise API:
`https://lexwareprorelay.azurewebsites.net/relay/account/v1/external_login` instead of `http://192.168.10.10/account/v1/login`
and does not include any explicit credentials at all, since these are carried by the bearer token.
- The client then makes "normal" API calls, with two differences:
- The base URL is now `https://lexwareprorelay.azurewebsites.net/relay/` instead of `http://192.168.10.10/`
- The client must include the authorization token (as an `Authorization` header) in all API calls; a minimal client sketch follows below.
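A minimal client sketch (C#): the base address matches the sample above, and the `/Token` endpoint mirrors the Postman examples further below; the user name, password and the use of `HttpClient` plus Newtonsoft.Json for token parsing are assumptions of this example, not part of the original sample.

~~~csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class RelayClientSample
{
    static void Main()
    {
        RunAsync().GetAwaiter().GetResult();
    }

    static async Task RunAsync()
    {
        using (var client = new HttpClient { BaseAddress = new Uri("https://lexwareprorelay.azurewebsites.net/") })
        {
            // 1. Get a bearer token for the web identity from the /Token endpoint
            //    (form-encoded resource owner password credentials).
            var tokenResponse = await client.PostAsync("Token", new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "grant_type", "password" },
                { "username", "someone@example.com" },      // web identity, not the on-premise user
                { "password", "web-identity-password" }     // placeholder
            }));
            tokenResponse.EnsureSuccessStatusCode();
            string accessToken = (string)JObject.Parse(await tokenResponse.Content.ReadAsStringAsync())["access_token"];

            // 2. Every relayed call carries the token; no on-premise credentials are sent.
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

            // 3. "Log in" to the on-premise API through the relay and receive a session-id
            //    (returned JSON-encoded, i.e. surrounded by quotes).
            string sessionId = await client.GetStringAsync("relay/account/v1/external_login");
            Console.WriteLine("Session id from the on-premise API: " + sessionId);

            // 4. From here on, call the API exactly as before, just against the relay base address,
            //    e.g. relay/contacts/v1/contacts, always including the Authorization header.
        }
    }
}
~~~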
What has changed for the on-premise API?
- The API provides a new method `accounts/v1/userid` (used only once, during registration!), which checks the provided credentials and returns the internal user id for that user. This is the value which will later be added as a claim to the bearer token.
- The API provides a new method `accounts/v1/external_login`, which calls back to the ASP.NET WebApi to confirm the user id, then does whatever it used to do in the original `accounts/v1/login` method. In this sample, that means starting a session linked to this user id and returning the new session-id to the caller.
- The other API methods do not change at all. Note, however, that an authorization header is now always included, so if, for example, the session-id alone is deemed not secure enough, the on-premise API could additionally check the bearer token within every method.
# Code
The following sections show the actual code necessary to implement the above processes. Skip all of this if it is not of interest to you; it is documented here to make the job easier for anyone who actually wants to implement such a relay.
## New Methods in the On-Premise API
Here are the new methods in the accounts controller of the on-premise API which are necessary to work with the external relay.
~~~csharp
#region New Methods for External Access
// base url to the ASP.NET WebApi "Cloud Relay Service"
// here local while developing
// later hosted at e.g. https://lexwareprorelay.azurewebsites.net/
static string secureRelayWebApiBaseAddress = "https://localhost:44321/";
/// <summary>
/// confirm that the bearer token comes from the "Cloud Relay Service"
/// </summary>
/// <param name="controller"></param>
/// <returns></returns>
/// <remarks>
/// Call this from any API method to get the on-premise user id
/// </remarks>
internal static UserInfo CheckBearer(ApiController controller)
{
// get the Authorization header
var authorization = controller.Request.Headers.Authorization;
Debug.Assert(authorization.Scheme == "Bearer");
string userId = null;
try
{
HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(secureRelayWebApiBaseAddress + "api/account/OnPremiseUserId");
webRequest.Headers.Add("Authorization", authorization.Scheme + " " + authorization.Parameter);
using (var hostResponse = (HttpWebResponse)webRequest.GetResponse())
{
string content = null;
using (StreamReader reader = new StreamReader(hostResponse.GetResponseStream()))
{
content = reader.ReadToEnd();
}
userId = content;
userId = JsonConvert.DeserializeObject<string>(userId);
}
}
catch (Exception)
{
throw new UnauthorizedAccessException();
}
var userInfo = Users.UserInfos.Values.FirstOrDefault(u => u.UserId.Equals(userId));
if (userInfo == null)
{
throw new UnauthorizedAccessException();
}
return userInfo;
}
/// <summary>
/// GetUserId
/// </summary>
/// <param name="credentials"></param>
/// <returns></returns>
/// <remarks>
/// This method returns the internal user id for the given credentials.
/// The method is called during the registration process so that
/// the user id can be added to the claims of any future bearer tokens.
/// </remarks>
[HttpPost]
[Route("userid")]
[ResponseType(typeof(string))]
public IHttpActionResult GetUserId([FromBody] LoginCredentials credentials)
{
var userInfo = Users.UserInfos.Values.SingleOrDefault(u => u.UserName.Equals(credentials.UserName) && u.Password.Equals(credentials.Password));
if (userInfo != null)
{
return Ok(userInfo.UserId);
}
else
{
return Unauthorized();
}
}
/// <summary>
/// ExternalLogin
/// </summary>
/// <returns></returns>
/// <remarks>
/// This is called by the client via the relays and replaces the "normal" login.
/// </remarks>
[HttpGet]
[Route("external_login")]
[ResponseType(typeof(string))]
public IHttpActionResult ExternalLogin()
{
try
{
// get the user info from the bearer token
// This also confirms for us that the bearer token comes from
// "our" Cloud Relay Service
var userInfo = CheckBearer(this);
// create session id, just like the "normal" login
string sessionId = Guid.NewGuid().ToString();
SessionInfos.Add(sessionId, userInfo);
return Ok(sessionId);
}
catch (Exception)
{
return Unauthorized();
}
}
#endregion
~~~
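As mentioned above, any other API method could additionally validate the bearer token. Here is a minimal sketch of what that might look like (the `ContactsController`, its route, and the `AccountsController` class name are hypothetical; only the `CheckBearer` helper comes from the code above):

~~~csharp
using System;
using System.Web.Http;

// Hypothetical controller of the on-premise API, shown only to illustrate the use of CheckBearer.
[RoutePrefix("contacts/v1")]
public class ContactsController : ApiController
{
    [HttpGet]
    [Route("contacts")]
    public IHttpActionResult GetContacts()
    {
        try
        {
            // Throws UnauthorizedAccessException if the token was not issued by our Cloud Relay Service.
            var userInfo = AccountsController.CheckBearer(this);

            // ... normal method body, now with a verified on-premise user id ...
            return Ok("contacts for user " + userInfo.UserId);
        }
        catch (UnauthorizedAccessException)
        {
            return Unauthorized();
        }
    }
}
~~~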
## The On-Premise Relay Service
In `IRelay.cs`, define the WCF service (consisting of a single method "Request"). Also, define the WCF Request and Response classes.
~~~csharp
/// <summary>
/// IRelay
/// </summary>
[ServiceContract]
public interface IRelay
{
/// <summary>
/// A single method to relay a request and return a response
/// </summary>
/// <param name="requestDetails"></param>
/// <returns></returns>
[OperationContract]
ResponseDetails Request(RequestDetails requestDetails);
}
/// <summary>
/// The WCF class to hold all information for an HTTP request
/// </summary>
public class RequestDetails
{
public Verb Verb { get; set; }
public string Url { get; set; }
public List<Header> Headers = new List<Header>();
public byte[] Content { get; set; }
public string ContentType { get; set; }
}
/// <summary>
/// The WCF class to hold all information for an HTTP response
/// </summary>
public class ResponseDetails
{
public HttpStatusCode StatusCode { get; set; }
public string Status { get; set; }
public string Content { get; set; }
public string ContentType { get; set; }
}
/// <summary>
/// an HTTP header
/// </summary>
public class Header
{
public string Key { get; set; }
public string Value { get; set; }
}
/// <summary>
/// the HTTP methods
/// </summary>
public enum Verb
{
GET,
POST,
PUT,
DELETE
}
~~~
And the implementation in `Relay.cs`:
~~~csharp
public class Relay : IRelay
{
// the local base url of the on-premise API
string baseAddress = "http://localhost:9000/";
/// <summary>
/// Copy all headers from the incoming HttpRequest to the WCF request object
/// </summary>
/// <param name="requestDetails"></param>
/// <param name="webRequest"></param>
private void CopyIncomingHeaders(RequestDetails requestDetails, HttpWebRequest webRequest)
{
foreach (var header in requestDetails.Headers)
{
string key = header.Key;
if ((key == "Connection") || (key == "Host"))
{
// do not copy
}
else if (key == "Accept")
{
webRequest.Accept = header.Value;
}
else if (key == "Referer")
{
webRequest.Referer = header.Value;
}
else if (key == "User-Agent")
{
webRequest.UserAgent = header.Value;
}
else if (key == "Content-Type")
{
webRequest.ContentType = header.Value;
}
else if (key == "Content-Length")
{
webRequest.ContentLength = Int32.Parse(header.Value);
}
else
{
webRequest.Headers.Add(key, header.Value);
}
}
}
/// <summary>
/// Relay a WCF request object and return a WCF response object
/// </summary>
/// <param name="requestDetails"></param>
/// <returns></returns>
public ResponseDetails Request(RequestDetails requestDetails)
{
HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(baseAddress + requestDetails.Url);
CopyIncomingHeaders(requestDetails, webRequest);
switch (requestDetails.Verb)
{
case Verb.GET:
webRequest.Method = "GET";
break;
case Verb.POST:
webRequest.Method = "POST";
break;
case Verb.PUT:
webRequest.Method = "PUT";
break;
case Verb.DELETE:
webRequest.Method = "DELETE";
break;
default:
webRequest.Method = "GET";
break;
}
var responseDetails = new ResponseDetails();
if ((requestDetails.Verb == Verb.POST) || (requestDetails.Verb == Verb.PUT))
{
// serialize the body object for POST and PUT
byte[] bytes = requestDetails.Content;
webRequest.ContentType = requestDetails.ContentType;
webRequest.ContentLength = bytes.Length;
// relay the body object to the request stream
try
{
using (Stream requestStream = webRequest.GetRequestStream())
{
requestStream.Write(bytes, 0, bytes.Length);
requestStream.Flush();
requestStream.Close();
}
}
catch (WebException ex)
{
responseDetails.StatusCode = HttpStatusCode.ServiceUnavailable;
responseDetails.Status = ex.Message;
return responseDetails;
}
}
// send request and get response
try
{
using (HttpWebResponse hostResponse = (HttpWebResponse)webRequest.GetResponse())
{
string content = null;
string contentType = null;
using (StreamReader reader = new StreamReader(hostResponse.GetResponseStream()))
{
content = reader.ReadToEnd();
}
contentType = hostResponse.ContentType.Split(new char[] { ';' })[0];
// build the response object
responseDetails.StatusCode = hostResponse.StatusCode;
responseDetails.ContentType = contentType;
responseDetails.Content = content;
}
}
catch (WebException ex)
{
if (ex.Response == null)
{
responseDetails.StatusCode = HttpStatusCode.ServiceUnavailable;
}
else
{
responseDetails.StatusCode = ((HttpWebResponse)ex.Response).StatusCode;
}
responseDetails.Status = ex.Message;
}
return responseDetails;
}
}
~~~
Finally, here is the code that runs at service startup to connect to the Azure Service Bus under a unique path.
This code could live in `Program.cs` of a console application (as shown) or in the start method of a true Windows service:
~~~csharp
static void Main(string[] args)
{
// instantiate the Relay class
using (var host = new ServiceHost(typeof(Relay)))
{
// the unique id for this location, hard-coded for this sample
// (could be e.g. a database id, or a customer contract id)
string hostId = "bf1e3a54-91bb-496b-bda6-fdfd5faf4480";
// tell the Azure Service Bus that our IRelay service is available
// via a path consisting of the host id plus "\relay"
host.AddServiceEndpoint(
typeof(IRelay),
new NetTcpRelayBinding(),
ServiceBusEnvironment.CreateServiceUri("sb", "haufemessagebroker", hostId + "/relay"))
.Behaviors.Add(
new TransportClientEndpointBehavior(
TokenProvider.CreateSharedAccessSignatureTokenProvider("OnPremiseRelay", "7Mw+Njy52M95axVlCzHdk4QxxxYUPxPORCKRbGk9bdM=")));
host.Open();
Console.WriteLine("On-Premise Relay Service running...");
Console.ReadLine();
}
}
~~~
Notes:
- The hostId must be unique for each on-premise location.
- The service bus credentials (here, the namespace "haufemessagebroker" and the shared access key name "OnPremiseRelay") must all be prepared via the Azure Portal by adding a new service bus namespace, as described in the introduction. In a live environment, you might want some kind of Service Bus management API, so that each on-premise relay service could obtain valid credentials after, say, its company signed up for the relay service, rather than having them hard-coded; a configuration-based sketch follows below.
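For illustration, here is a minimal sketch of reading the namespace and SAS credentials from configuration instead of hard-coding them (the appSettings key names and the `RelayCredentials` helper are made up for this example):

~~~csharp
using System;
using System.Configuration;
using Microsoft.ServiceBus;

/// <summary>
/// Hypothetical helper that reads the Service Bus settings from App.config/web.config
/// instead of hard-coding them (appSettings keys "ServiceBusNamespace", "SasKeyName", "SasKey").
/// </summary>
static class RelayCredentials
{
    public static TokenProvider CreateTokenProvider()
    {
        string keyName = ConfigurationManager.AppSettings["SasKeyName"];
        string key = ConfigurationManager.AppSettings["SasKey"];
        return TokenProvider.CreateSharedAccessSignatureTokenProvider(keyName, key);
    }

    public static Uri CreateRelayUri(string hostId)
    {
        string ns = ConfigurationManager.AppSettings["ServiceBusNamespace"];
        return ServiceBusEnvironment.CreateServiceUri("sb", ns, hostId + "/relay");
    }
}
~~~

The `ServiceHost` above (and the `ChannelFactory` code further below) could then call `RelayCredentials.CreateRelayUri(hostId)` and `RelayCredentials.CreateTokenProvider()` instead of using the literal values.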
Once the on-premise relay service is running, you will see it listed with its host id in the Azure Management Portal under the "Relays" tab:
{:.center}
![]( /images/secure-internet-access/pic44.jpg){:style="margin:auto"}
## ASP.NET Identity Portal
Create a new ASP.NET Project (named e.g. "IdentityPortal") and select "MVC". Before compiling and running the first time, change the class ApplicationUser (in `IdentityModels.cs`) as follows:
~~~csharp
public class ApplicationUser : IdentityUser
{
public string OnPremiseHostId { get; set; }
public string OnPremiseUserId { get; set; }
public async Task<ClaimsIdentity> GenerateUserIdentityAsync(UserManager<ApplicationUser> manager)
{
// Note the authenticationType must match the one defined in CookieAuthenticationOptions.AuthenticationType
var userIdentity = await manager.CreateIdentityAsync(this, DefaultAuthenticationTypes.ApplicationCookie);
// Add custom user claims here
userIdentity.AddClaim(new Claim("OnPremiseHostId", OnPremiseHostId ?? String.Empty));
userIdentity.AddClaim(new Claim("OnPremiseUserId", OnPremiseUserId ?? String.Empty));
return userIdentity;
}
}
~~~
This adds two fields to the user identity, which we will need later to link each user to a specific on-premise API and a specific on-premise user id. And, importantly, it adds the content of the two new fields as custom claims to the `ClaimsIdentity` generated for the user.
If you add this code **before** running for the first time, the fields will automatically be added to the database table. Otherwise, we would need to add them in a code-first migration step (a sketch of such a migration is shown below), so this just saves a bit of trouble.
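If you did add the fields later, the code-first migration would look roughly like this (just a sketch: the migration name is made up, and the `dbo.AspNetUsers` table name assumes the default ASP.NET Identity schema):

~~~csharp
using System.Data.Entity.Migrations;

// Sketch of the migration that "Add-Migration AddOnPremiseFields" would scaffold.
public partial class AddOnPremiseFields : DbMigration
{
    public override void Up()
    {
        AddColumn("dbo.AspNetUsers", "OnPremiseHostId", c => c.String());
        AddColumn("dbo.AspNetUsers", "OnPremiseUserId", c => c.String());
    }

    public override void Down()
    {
        DropColumn("dbo.AspNetUsers", "OnPremiseUserId");
        DropColumn("dbo.AspNetUsers", "OnPremiseHostId");
    }
}
~~~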
Now compile and run, and you should immediately be able to register a new web identity and log in with that identity.
*Prepare to register with the on-premise API*
Use `NuGet` to add "WindowsAzure.ServiceBus" to the project.
Also, add a reference to the OnPremiseRelay DLL, so that the IRelay WCF Interface, as well as the Request and Response classes, are known.
In `AccountViewModels.cs`, add these classes:
~~~csharp
public class RegisterWithOnPremiseHostViewModel
{
[Required]
[Display(Name = "On-Premise Host Id")]
public string HostId { get; set; }
[Required]
[Display(Name = "On-Premise User Name")]
public string UserName { get; set; }
[Required]
[DataType(DataType.Password)]
[Display(Name = "On-Premise Password")]
public string Password { get; set; }
}
public class LoginCredentials
{
[JsonProperty(PropertyName = "user_name")]
public string UserName { get; set; }
[JsonProperty(PropertyName = "password")]
public string Password { get; set; }
}
~~~
In `_Layout.cshtml`, add this line to the navbar:
~~~html
<li>@Html.ActionLink("Register With Host", "RegisterWithOnPremiseHost", "Account")</li>
~~~
Add the following methods to the AccountController class:
~~~csharp
// this must point to the Cloud Relay WebApi
static string cloudRelayWebApiBaseAddress = "https://localhost:44321/";
//
// GET: /Account/RegisterWithOnPremiseHost
public ActionResult RegisterWithOnPremiseHost()
{
ViewBag.ReturnUrl = String.Empty;
return View();
}
//
// POST: /Account/RegisterWithOnPremiseHost
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<ActionResult> RegisterWithOnPremiseHost(RegisterWithOnPremiseHostViewModel model, string returnUrl)
{
if (!ModelState.IsValid)
{
return View(model);
}
string userId = null;
try
{
// open the Azure Service Bus
using (var cf = new ChannelFactory<IRelay>(
new NetTcpRelayBinding(),
new EndpointAddress(ServiceBusEnvironment.CreateServiceUri("sb", "haufemessagebroker", model.HostId + "/relay"))))
{
cf.Endpoint.Behaviors.Add(new TransportClientEndpointBehavior
{
TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider("OnPremiseRelay", "7Mw+Njy52M95axVlCzHdxxxxxhYUPxPORCKRbGk9bdM=")
});
IRelay relay = null;
try
{
// get the IRelay Interface of the on-premise relay service
relay = cf.CreateChannel();
var credentials = new LoginCredentials
{
UserName = model.UserName,
Password = model.Password
};
var requestDetails = new RequestDetails
{
Verb = Verb.POST,
Url = "accounts/v1/userid",
Content = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(credentials)),
ContentType = "application/json"
};
// call the on-premise relay service
var response = await Task.Run(() =>
{
try
{
return relay.Request(requestDetails);
}
catch (EndpointNotFoundException)
{
return null;
}
});
if ((response == null) || (response.StatusCode == HttpStatusCode.ServiceUnavailable))
{
ModelState.AddModelError("", "Login zur Zeit nicht möglich, weil der lokale Dienst nicht erreichbar ist.");
return View(model);
}
else if (response.StatusCode == HttpStatusCode.Unauthorized)
{
ModelState.AddModelError("", "Login fehlgeschlagen.");
return View(model);
}
else if (response.StatusCode != HttpStatusCode.OK)
{
ModelState.AddModelError("", "Login zur Zeit nicht möglich.\nDetails: " + response.Status);
return View(model);
}
// everything OK
userId = response.Content;
userId = JsonConvert.DeserializeObject<string>(userId);
}
catch (Exception)
{
ModelState.AddModelError("", "Login zur Zeit nicht möglich, weil der lokale Dienst nicht erreichbar ist.");
return View(model);
}
}
}
catch (CommunicationException)
{
return View(model);
}
ApplicationUser user = await UserManager.FindByIdAsync(User.Identity.GetUserId());
user.OnPremiseUserId = userId;
user.OnPremiseHostId = model.HostId;
UserManager.Update(user);
return RedirectToAction("RegisterWithOnPremiseHostSuccess");
}
// GET: Account/RegisterWithOnPremiseHostSuccess
public ActionResult RegisterWithOnPremiseHostSuccess()
{
ViewBag.ReturnUrl = String.Empty;
return View();
}
~~~
Note:
- The note about the service bus credentials (in the on-premise relay service) applies here, too, of course.
To Views\Account, add `RegisterWithOnPremiseHost.cshtml`:
~~~html
@model IdentityPortal.Models.RegisterWithOnPremiseHostViewModel
@{
ViewBag.Title = "Register With On-Premise Host";
}
<h2>Register With On-Premise Host</h2>
@using (Html.BeginForm())
{
@Html.AntiForgeryToken()
<div class="form-horizontal">
<hr />
@Html.ValidationSummary(true, "", new { @class = "text-danger" })
<div class="form-group">
@Html.LabelFor(model => model.HostId, htmlAttributes: new { @class = "control-label col-md-2" })
<div class="col-md-10">
@Html.EditorFor(model => model.HostId, new { htmlAttributes = new { @class = "form-control" } })
@Html.ValidationMessageFor(model => model.HostId, "", new { @class = "text-danger" })
</div>
</div>
<div class="form-group">
@Html.LabelFor(model => model.UserName, htmlAttributes: new { @class = "control-label col-md-2" })
<div class="col-md-10">
@Html.EditorFor(model => model.UserName, new { htmlAttributes = new { @class = "form-control" } })
@Html.ValidationMessageFor(model => model.UserName, "", new { @class = "text-danger" })
</div>
</div>
<div class="form-group">
@Html.LabelFor(model => model.Password, htmlAttributes: new { @class = "control-label col-md-2" })
<div class="col-md-10">
@Html.EditorFor(model => model.Password, new { htmlAttributes = new { @class = "form-control" } })
@Html.ValidationMessageFor(model => model.Password, "", new { @class = "text-danger" })
</div>
</div>
<div class="form-group">
<div class="col-md-offset-2 col-md-10">
<input type="submit" value="Register" class="btn btn-default" />
</div>
</div>
</div>
}
@section Scripts {
@Scripts.Render("~/bundles/jqueryval")
}
~~~
Also to Views\Account, add `RegisterWithOnPremiseHostSuccess.cshtml`:
~~~html
@{
ViewBag.Title = "Success";
}
<h2>@ViewBag.Title</h2>
<div class="row">
<div class="col-md-8">
<section id="loginForm">
@using (Html.BeginForm("HaufeLogin", "Account", new { ReturnUrl = ViewBag.ReturnUrl }, FormMethod.Post, new { @class = "form-horizontal", role = "form" }))
{
@Html.AntiForgeryToken()
<hr />
<h4>Your on-premise login credentials have been confirmed.</h4>
}
</section>
</div>
</div>
@section Scripts {
@Scripts.Render("~/bundles/jqueryval")
}
~~~
Now you can log in to the Identity Portal and select "Register With Host".
Assuming:
- the on-premise relay service has a host id = bf1e3a54-91bb-496b-bda6-fdfd5faf4480
- the on-premise API has a user with user name = "Ackermann"
Then fill in the form appropriately:
{:.center}
![]( /images/secure-internet-access/pic39a.jpg){:style="margin:auto"}
Once this registration is successful, any client can now communicate with the on-premise API using the Cloud Relay Service, defined below.
## Cloud Relay Service
Create a new ASP.NET Project (named e.g. "CloudRelayService") and select "Web Api".
- Before compiling and running the first time, make the same changes to the ApplicationUser class as mentioned above for the Identity Portal.
- Also, edit web.config and change the connection string for "DefaultConnection" to work with the same database as the Identity Portal by copying the connection string from that project.
- Important: if the connection string contains a `|DataDirectory|` reference in the file path, you will have to replace this with the true physical path to the other project, otherwise the two projects will not point to the same database file.
Add the following method to the AccountController (for this, you must include the System.Linq namespace):
~~~csharp
// GET api/Account/OnPremiseUserId
[HostAuthentication(DefaultAuthenticationTypes.ExternalBearer)]
[Route("OnPremiseUserId")]
public IHttpActionResult GetOnPremiseUserId()
{
// get the on-premise user id
var identity = (ClaimsIdentity)User.Identity;
var onPremiseUserIdClaim = identity.Claims.SingleOrDefault(c => c.Type == "OnPremiseUserId");
if (onPremiseUserIdClaim == null)
{
return Unauthorized();
}
return Ok(onPremiseUserIdClaim.Value);
}
~~~
Use `NuGet` to add "WindowsAzure.ServiceBus" to the project.
Also, add a reference to the OnPremiseRelay DLL, so that the IRelay WCF Interface, as well as the Request and Response classes, are known.
Then add a new controller `RelayController` with this code:
~~~csharp
[Authorize]
[RoutePrefix("relay")]
public class RelayController : ApiController
{
private void CopyIncomingHeaders(RequestDetails request)
{
var headers = HttpContext.Current.Request.Headers;
// copy all incoming headers
foreach (string key in headers.Keys)
{
request.Headers.Add(new Header
{
Key = key,
Value = headers[key]
});
}
}
[HttpGet]
[Route("{*url}")]
public async Task<IHttpActionResult> Get(string url)
{
return await Relay(url, Verb.GET);
}
[HttpPost]
[Route("{*url}")]
public async Task<IHttpActionResult> Post(string url)
{
return await Relay(url, Verb.POST);
}
[HttpPut]
[Route("{*url}")]
public async Task<IHttpActionResult> Put(string url)
{
return await Relay(url, Verb.PUT);
}
[HttpDelete]
[Route("{*url}")]
public async Task<IHttpActionResult> Delete(string url)
{
return await Relay(url, Verb.DELETE);
}
private async Task<IHttpActionResult> Relay(string url, Verb verb)
{
byte[] content = null;
if ((verb == Verb.POST) || (verb == Verb.PUT))
{
// for POST and PUT, we need the body content
content = await Request.Content.ReadAsByteArrayAsync();
}
// get the host id from the token claims
var identity = (ClaimsIdentity)User.Identity;
var onPremiseHostIdClaim = identity.Claims.SingleOrDefault(c => c.Type == "OnPremiseHostId");
if (onPremiseHostIdClaim == null)
{
return Unauthorized();
}
try
{
// open the Azure Service Bus
using (var cf = new ChannelFactory<IRelay>(
new NetTcpRelayBinding(),
new EndpointAddress(ServiceBusEnvironment.CreateServiceUri("sb", "haufemessagebroker", onPremiseHostIdClaim.Value + "/relay"))))
{
cf.Endpoint.Behaviors.Add(new TransportClientEndpointBehavior
{
TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider("OnPremiseRelay", "7Mw+Njy52M95axVlCzHdxxxxxhYUPxPORCKRbGk9bdM=")
});
// get the IRelay Interface of the on-premise relay service
IRelay relay = cf.CreateChannel();
var requestDetails = new RequestDetails
{
Verb = verb,
Url = url
};
// copy the incoming headers
CopyIncomingHeaders(requestDetails);
if ((verb == Verb.POST) || (verb == Verb.PUT))
{
requestDetails.Content = content;
var contentTypeHeader = requestDetails.Headers.FirstOrDefault(h => h.Key == "Content-Type");
if (contentTypeHeader != null)
{
requestDetails.ContentType = contentTypeHeader.Value;
}
}
// call the on-premise relay service
var response = await Task.Run(() =>
{
try
{
return relay.Request(requestDetails);
}
catch (EndpointNotFoundException)
{
// set response to null
// this will be checked after the await, see below
// and result in ServiceUnavailable
return null;
}
});
if (response == null)
{
return Content(HttpStatusCode.ServiceUnavailable, String.Empty);
}
// normal return
return Content(response.StatusCode, response.Content);
}
}
catch (CommunicationException)
{
return Content(HttpStatusCode.ServiceUnavailable, String.Empty);
}
}
}
~~~
Note:
- The note about the service bus credentials (in the on-premise relay service) applies here, too, of course.
The Cloud Relay WebApi should now be ready to return an authorization token for the web identity, and also to relay HTTP requests via WCF and the Azure Service Bus to the on-premise relay service.
Note that all relay methods are protected by the class's `Authorize` attribute.
*Examples using Chrome Postman:*
Get a token using a web identity (Note the path `/Token`, the content-type, and the content):
{:.center}
![]( /images/secure-internet-access/pic40.jpg){:style="margin:auto"}
Using the token, with prefix "Bearer", log in to the on-premise API and receive a session-id:
{:.center}
![]( /images/secure-internet-access/pic41.jpg){:style="margin:auto"}
Now use the session-id to make normal calls to the API:
{:.center}
![]( /images/secure-internet-access/pic42.jpg){:style="margin:auto"}

View File

@ -0,0 +1,91 @@
---
layout: post
title: Software Architecture Day Timisoara on May 18th, 2016
subtitle: Architecture Strategies for Modern Web Applications
category: conference
tags: [api, microservice]
author: doru_mihai
author_email: doru.mihai@haufe-lexware.com
header-img: "images/bg-post.jpg"
---
This year, a couple of my colleagues from Timisoara and I once again attended the Software Architecture Day conference, a yearly event that in recent years has brought in big names as speakers.
Last year, [Neal Ford](http://nealford.com/abstracts.html) was the speaker and he introduced us to concepts relating to continuous delivery and microservices, some of which we have already applied within our company.
This year, it was [Stefan Tilkov's](https://www.innoq.com/blog/st/) turn to grace us with his presence.
{:.center}
![Software Architecture Day 2016]({{ site.url }}/images/software-arch-day/doru_badge.jpg){:style="margin:auto"}
The title of this year's talk was pretty ambiguous: *Architecture strategies for modern web applications*. Still, the organizers sent us a list of topics that would be discussed during this whole-day event: modularization, REST & Web APIs, single page apps vs. ROCA, pros and cons of different persistence options, and scaling in various dimensions.
## Start
{:.center}
![Software Architecture Day 2016]({{ site.url }}/images/software-arch-day/stefan_tilkov.jpg){:style="margin:auto"}
The presentation kicked off with a rant about how different enterprises have struggled over the years to provide frameworks and tools that would abstract away the complexities of the web.
As an example, he walked through the Java EE stack, enumerating the different layers one would have in an enterprise application, using the example use case of receiving a JSON payload and sending another one out. Of course, the point of all of this was to show what a ridiculous amount of effort has been put into abstracting away the web.
It was at this point that he expressed his hatred for Java and .NET because of all the problems that were created by trying to simplify things.
## Backend
After the initial rant, the purpose of which was to convince us that it is better to work with a technology that sticks closer to what is really there all along (a request, a header, cookie, session etc.), he continued with a talk about the different choices one may have when dealing with the backend. Below are my notes:
- Process vs Thread model for scaling
- .Net I/O Completion Ports
- Request/Response vs Component based frameworks
- Async I/O
- Twisted (Python)
- Event Machine (Ruby)
- Netty
- NodeJS
- [Consistent hashing](http://michaelnielsen.org/blog/consistent-hashing/) - for cache server scaling
- Eventual consistency
- The CAP theorem
- Known issues with prolific tools. Referenced [Aphyr](https://aphyr.com/posts/317-jepsen-elasticsearch) as a source of examples of failures of such systems.
- NoSQL scaling
- N/R/W mechanisms
- BASE vs ACID dbs
## REST
This was the same presentation that I had seen on [infoq](https://www.infoq.com/presentations/rest-misconceptions) some time ago.
He basically rants about how many people think or say they are doing REST when actually they are not, or how they spend a lot of time discussing how the URL should be formed when that actually has nothing to do with REST.
One thing in particular was interesting to me: when he was asked about REST API documentation tools, he didn't name a favorite, but he did mention explicitly that he is against Swagger, for the sole reason that Swagger doesn't allow hypermedia in your API definition.
After the talk I asked him about validation, since he had mentioned Postel's Law. In the days of WS-* we would use XML as the format and do XSD validation (he commented that XSD validation is costly and that in large-scale projects he would skip it), but now we mainly use JSON as the format, and [JSON Schema](http://json-schema.org/documentation.html) is still in a draft stage. Sadly, he didn't have a solution for me :)
## Frontend
Towards the end of the day he talked to us about what topics you should be concerned with when thinking about the frontend.
Among them, the remarks about CSS architecture were noteworthy: it is becoming more and more important, to the extent that his company has a dedicated CSS architect. He also raised awareness that when you adopt a frontend framework, you inherit the decisions taken within that framework, and that framework's architecture becomes your architecture.
For CSS he mentioned the following CSS methodologies:
- BEM
- OOCSS
- SMACSS
- Atomic-CSS
- Solid CSS
After presenting solutions for the different aspects one may need to consider for the frontend, he went on to discuss Single Page Applications and the drawbacks of that approach, and presented [Resource Oriented Client Architecture](http://roca-style.org/).
## Modularization
The last part of the day was dedicated to modularization, and here he proposed a methodology that is close to microservices, can be used in tandem with them, but is slightly different.
He called them [Self Contained Systems](http://scs-architecture.org/vs-ms.html), and you can read all about them by following the link (it will explain things better than I can :) ).
## Conclusion
It was a lot of content to take in, and because he presented material from several whole-day workshops in his portfolio, none of the topics was covered in much depth. If you want to get an idea of what was presented, feel free to watch the presentations below.
- [Web development Techniques](https://www.infoq.com/presentations/web-development-techniques)
- [Rest Misconceptions](https://www.infoq.com/presentations/rest-misconceptions)
- [Breaking the Monolith](https://www.infoq.com/presentations/Breaking-the-Monolith)
- [NodeJS Async I/O](https://www.infoq.com/presentations/Nodejs-Asynchronous-IO-for-Fun-and-Profit)

View File

@ -13,7 +13,8 @@ p {
line-height: 1.5;
margin: 30px 0;
}
p a {
p a,
li a {
text-decoration: underline;
}
h1,

View File
@ -4,7 +4,11 @@ title: Resources
permalink: /resources/
---
### API Style Guide
### [API Style Guide](https://github.com/Haufe-Lexware/api-style-guide/blob/master/readme.md)
A list of rules, best practices, resources and our way of creating REST APIs in the Haufe Group. The style guide addresses API designers, mostly developers and architects, who want to design an API.
Goto our [API Style Guide](http://htmlpreview.github.io/?https://raw.githubusercontent.com/Haufe-Lexware/api-style-guide/gh-pages/index.html)
### [Docker Style Guide](https://github.com/Haufe-Lexware/docker-style-guide/blob/master/README.md)
A set of documents representing mandatory requirements, recommended best practices and informational resources for using Docker in official (public or internal) Haufe products, services or solutions.
### [Design Style Guide](http://do.haufe-group.com/goodlooking-haufe/)
A set of design kits and style guides for the Haufe brands: [Haufe](http://do.haufe-group.com/goodlooking-haufe/), [Lexware](http://do.haufe-group.com/goodlooking-lexware/), and [Haufe Academy](http://do.haufe-group.com/goodlooking-haufe-akademie/)
