Coursera Full Stack Web Development Capstone Project

A couple of days ago I finished my capstone project for the Full Stack Web Development specialization on Coursera. It marks the end of the six-course specialization covering Bootstrap CSS, AngularJS and NodeJS.

The Assignment

The assignment itself is simple:

Build a web application with the tools taught in the course (Bootstrap CSS, AngularJS, NodeJS, MongoDB).

Everybody is free to choose what to implement; however, it should be a small app, as it must be built in a short period of time. I decided to implement a message board where users can create boards and post messages (more below).

The Schedule

The capstone project takes 8 weeks in total, while only 2 or 3 weeks are dedicated to actual coding. Each week has about 3 hours of workload, so you can spend approximately 9 hours on coding – but you will definitely spend more time!

Week | Topic | Assignment
-----|-------|-----------
1 | Ideation |
2 | Ideation Report | Report (PDF, 2 pages) about your idea
3 | UI Design and Prototyping |
4 | UI Design and Prototyping Report | Report (PDF, 2 pages) with UI mockups
5 | Architecture Design and Software Structure |
6 | Architecture Design and Software Structure Report | Report (PDF, 3 pages) with architecture, structure, REST URLs, data model
7 | Project Implementation and Final Report |
8 | Final Submission and Report | Report (PDF, 2 pages), code on GitHub, running app on IBM Bluemix

The Workload

You will spend most of your time on coding, but a surprising amount of time also goes into organisational tasks, for example:

  • Creating a GitHub project, including a .gitignore and so on
  • Setting up a Travis CI build
  • Deploying to IBM Bluemix (reading tutorials, writing a .cfignore and a manifest.yml)
  • Doing all the dependency management
  • Installing and configuring MongoDB
  • Managing different configs for your local development and for IBM Bluemix

While all of those tasks are absolutely necessary for any project setup, they can nevertheless easily eat up a couple of hours. This cuts into your coding time.

My Project

I called my project MEBO, which simply stands for Message Board. It’s a small application where users can create boards and post messages on them. Everyone who knows the link to a board can access it and edit all messages. There’s no login or anything like that. You can find the project on GitHub.

Lessons Learned

  • Make it small! Of course, everybody wants to build this super fancy application with its own user management, administration view and all kinds of features. But make your life easy and do only what’s needed to demonstrate your skills.
  • Start early with the deployment and integration to IBM Bluemix. No matter how well your project is implemented, if it doesn’t run on IBM Bluemix (or anywhere else) where I can see it, you will not pass the assignment. Get things up and running from the beginning.
  • Make it look nice. The first impression is very important. If your project looks nice and your GitHub project has a decent README, you’re on the right path.

Best regards,

Coursera Full Stack Web Development Course Review

During the last 6 months I did the Full Stack Web Development course on Coursera. Since I’m currently about to finish the course by implementing my final capstone project, I wanted to share my thoughts about the course and its pros and cons.

About the course

The Full Stack Web Development course consists of 6 single courses. Altogether, they make up the complete course which Coursera calls a specialization.

  1. HTML, CSS and JavaScript (3 weeks)
  2. Front-End Web UI Frameworks and Tools (4 weeks)
  3. Front-End JavaScript Frameworks: AngularJS (4 weeks)
  4. Multiplatform Mobile App Development with Web Technologies (4 weeks)
  5. Server-side Development with NodeJS (4 weeks)
  6. Full Stack Web Development Specialization Capstone Project (8 weeks)

It’s possible to take single courses, but of course it’s recommended to take all of them and do them one after another in the given order.

The complete specialization takes 27 weeks and costs 70 € per course, so 420 € in total.

HTML, CSS and JavaScript

This course teaches the basics of HTML, CSS and JavaScript. It is made for beginners, so if you already have a little knowledge of those topics you will probably be bored. On the other hand, if you have absolutely no knowledge of HTML, CSS and JavaScript at all, you will have a hard time learning everything in just 3 weeks, because that’s how long the course takes.

IMHO: I don’t recommend this one. If you know HTML, CSS and JavaScript you won’t learn anything from the course. And if you are absolutely new to those technologies, the course is far too short.

Front-End Web UI Frameworks and Tools

This course focuses on Bootstrap CSS. Although Bootstrap CSS isn’t too complicated, a lot of people don’t understand the principles behind it (e.g. the grid system with its rows and columns). So even if you used Bootstrap CSS before, this course will help you to understand things better.

IMHO: I recommend this one as it really helps to better understand one of the most popular UI frameworks right now.

Front-End JavaScript Frameworks: AngularJS

What the second course is for Bootstrap CSS, this one is for AngularJS. And again, if you already know AngularJS there will be nothing new for you. But if you are new to AngularJS, this course strikes the right tempo to give you a good first impression. However, the course is not really up-to-date with AngularJS’ latest version.

IMHO: I recommend it, as this course gives you a good introduction to AngularJS.

Multiplatform Mobile App Development with Web Technologies

This course is built around the Ionic Framework for mobile apps. If you are interested in building mobile apps, this course will teach you one of a thousand possible ways to do it. I think the Ionic Framework is not the worst way to make mobile apps, but the course is still very opinionated and has a very narrow focus.

IMHO: Out of all six courses, this is the one I can recommend the least. Ionic might be good for some use cases, but the example app built in the course doesn’t benefit from it in any way. It’s just the very same app as built before. Making everything responsive would have been a much better approach. This course also relies heavily on installed software like Ionic itself and the Android or iOS simulators. If one of those pieces of software doesn’t run on your device, you are screwed. It took me hours to get the Android simulator to run, before I switched to the iOS simulator, which also took me hours.

Server-side Development with NodeJS

This course is about NodeJS and MongoDB. Although the architecture of the example app of the course is terrible (they make database queries in the REST controllers!), the course gives a nice introduction to server-side JavaScript. Both technologies – NodeJS and MongoDB – are state of the art.

IMHO: I can recommend this one to get started with NodeJS and MongoDB, but also to see some drawbacks of these much-hyped technologies.

Full Stack Web Development Specialization Capstone Project

At the end of the course, everybody must implement a final project. The project should show the learned skills and should – of course – use the technologies taught in the course. So you are forced to get your hands dirty and write some code of your own.

IMHO: This part of the specialization is very interesting, but has some pitfalls, too. It’s important to choose the project idea wisely. The course only takes 8 weeks, of which only 2 are dedicated to actual programming. So whatever you implement, it must be something small.

Special note: To complete the course, you must deploy your project to IBM’s PaaS Bluemix, for which you will get a test account. As I had worked with AWS and some other DevOps technologies before, this wasn’t too hard for me to do. However, Bluemix is a terrible platform. If you are not familiar with PaaS, plan some extra time to get things working. The course will not prepare you for that in any way.

How Coursera works

Before you take a course at Coursera, you should first understand how the platform works: courses on Coursera are made up of online videos, texts and PDFs, exercises and assignments. It’s up to the teacher how the course is laid out.

Every course starts regularly on a specific date and ends on a specific date. This means a course might, for example, start every 2 months on the first of the month and end 4 weeks later. You must (!) take the course during this period of time. It’s just like a physical class you would take at school or university.

Most courses require assignments to complete them. This means there will be some exercise at the end which you must complete and upload a solution for. Most assignments are peer-graded, which means that you must review your classmates’ work and they will review yours.

At the end, you will get a certificate with a lot of buzzwords for this specific course.

IMHO: Coursera is nice, but it’s not the same as a real physical class at university. Especially the peer-graded assignments are a problem: some people tend to criticize the oddest things, while others just give you the points without even looking at your work. It’s completely up to you how seriously you take it.

What I learned

Things I liked to learn

Things I didn’t like after learning them

What I missed

What I missed completely during all 6 courses was unit testing. None of the courses teaches anything about testing, neither in the front end (Jasmine, Protractor) nor in the back end (Mocha, Sinon).


Best regards,

DeployMan (command line tool to deploy Docker images to AWS)



Yesterday, I published a tool called DeployMan on GitHub. DeployMan is a command line tool to deploy Docker images to AWS and was the software prototype for my master thesis. I wrote my thesis at Informatica in Stuttgart-Weilimdorf, so first of all, I want to say thank you to Thomas Kasemir for the opportunity to put this online!


At the time I am writing this post, DeployMan is a pure prototype. It was created for academic research and as a demo for my thesis. It is not ready for production. If you need a solid tool to deploy Docker images (to AWS), have a look at Puppet, CloudFormation (for AWS), Terraform, Vagrant, fig (for Docker) or any other orchestration tool that came up in the last couple of years.

What DeployMan does

DeployMan can create new AWS EC2 instances and deploy a predefined stack of Docker images on them. To do so, DeployMan takes a configuration file called a formation. A formation specifies what the EC2 machine should look like and which Docker images (and which configurations) should be deployed. Docker images can either be deployed from a Docker registry (the public one or a private one) or as tarballs from S3 storage. Together with each image, a configuration folder will be pulled from S3 storage and mounted into the running container.

Here is an example of a formation which deploys an Nginx server with a static HTML page:


DeployMan provides a command line interface to start instances and do some basic monitoring of the deployment process. Here is a screenshot which shows some formations (which can be started) and the output of a started Logstash server:


To keep track of the deployment process in a more pleasant way, DeployMan has a web interface. The web interface shows details about the machines, such as the deployed images and which containers are running. Here is what a Logstash server looks like:


The project


You can find the project on GitHub. I wrote a detailed README which explains how to build and use DeployMan. To test DeployMan, you need an AWS account (there are also free accounts).

The project is made with Java 8, Maven, the AWS Java API, the Docker Java API and a lot of small stuff like Apache Commons. The web interface is based on Spark (for the server), Google’s AngularJS and Twitter’s Bootstrap CSS.

Best regards,

Presentation of my master thesis

Over the last six months, I wrote my master thesis about porting an enterprise OSGi application to a PaaS. Last Monday, the 21st of July 2014, I presented the main results of my thesis to my professor (best greetings to you, Mr. Goik!) and to my colleagues (thanks to all of you!) at Informatica in Stuttgart-Weilimdorf, Germany (where I had written my thesis based on one of their product information management applications, called Informatica PIM).

Here are the slides of my presentation.

While my master thesis also covers topics like OSGi, VMs and JEE application servers, the presentation focuses on my final solution of a deployment process for the cloud. Based on Docker, the complete software stack used for the Informatica PIM server was packaged into separate, self-contained images. Those images were stored in a repository and used to automatically set up cloud instances on Amazon Web Services (AWS).

The presentation gives answers to the following questions:

  • What is cloud computing and what is AWS?
  • What are containers and what is Docker?
  • How can we deploy containers?

To automate the deployment process of Docker images, I implemented my own little tool called DeployMan. It will show up at the end of my slides and I will write about it in a couple of days here. Although there are a lot of tools out there to automate Docker deployments (e.g. fig or Maestro), I wanted to do my own experiments and to create a prototype for my thesis.


Best regards,

Docker Registry Rest API

The Docker Registry

The Docker registry is Docker’s built-in way to share images. It is an open-source project and can be found in the official repository of DotCloud. You can set it up on your private server (maybe in the cloud) and push and pull your images to it. You can also secure it, e.g. with SSL and an NGINX proxy (maybe I will write about this later).

The Rest API

Similar to Docker itself, the registry provides a Rest API to interact with it. Using the Rest API, you can list all images, or search and browse a certain repository. The only prerequisite is that you define a search back-end in the registry’s config.yaml:
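A minimal sketch of the relevant setting; the sqlalchemy back-end is the one the (v1) registry ships with, and your config.yaml will of course contain more than this:

```yaml
# config.yaml of the Docker registry (v1) - only the relevant line
dev:
    search_backend: sqlalchemy
```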

Now you can use the Rest API like this:

List a certain repository
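A sketch with curl, assuming the registry runs on localhost:5000 and the repository is called library/ubuntu:

```bash
# list all tags of the repository "library/ubuntu"
curl -X GET http://localhost:5000/v1/repositories/library/ubuntu/tags
```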


Get info about a certain image
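Again assuming the same registry host; <image-id> is the long ID of the image you are interested in:

```bash
# show the JSON metadata of a single image
curl -X GET "http://localhost:5000/v1/images/<image-id>/json"
```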

List all images

And thanks to bwilcox from StackOverflow, this is how you can list all images:
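I won’t reproduce the exact snippet here, but the idea is to call the search endpoint without a query, which (with a search back-end configured) returns everything the registry knows about:

```bash
# an empty search returns all repositories/images in the registry
curl -X GET http://localhost:5000/v1/search
```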


Best regards,

Cloud vendors with Windows

The cloud is built on Linux – that is my own humble opinion. But is it really? To answer this question for myself, I took a look at a bunch of cloud vendors to see what they have got under the hood. Here is what I found.

But note that the list is neither complete nor representative. I am also comparing two very different things: IaaS and PaaS. While IaaS vendors like AWS provide virtual machines, PaaS vendors like Heroku provide tooling to set up complete environments.

However, the list shows that most of the vendors use Linux as their base system, and the further you go in the PaaS direction, the more Windows vanishes.

Vendor | Windows | Linux | Type | Comment
-------|---------|-------|------|--------
Microsoft Azure | yes | yes | IaaS |
AWS | yes | yes | IaaS | AWS offers a lot of Linux distributions and Windows versions on its IaaS EC2.
AWS Elastic Beanstalk | yes | yes | IaaS |
eNlight Cloud | yes | yes | | CentOS, Red Hat Enterprise Linux, SUSE Linux, Oracle Linux, Ubuntu, Fedora, Debian, Windows Server 2003, Windows Server 2008, Windows 7.
Google App Engine | | | PaaS | Google App Engine has a sandbox and hides the OS.
Google Compute Engine | yes | yes | IaaS | Linux, FreeBSD, Microsoft Windows
Heroku | | yes | PaaS | Ubuntu
Jelastic | | yes | PaaS |
HP Cloud | | yes | IaaS | Based on OpenStack.
OpenShift | | yes | PaaS | Red Hat Enterprise Linux
Engine Yard | | yes | PaaS | Ubuntu, Gentoo
Rackspace | yes | yes | |
Cloud Foundry | | yes | PaaS |

Best regards,

How to know you are inside a Docker container

How do you know that you are living in the Matrix? Well, I do not know, but at least I can tell you how to find out whether you are inside a Docker container or not.

The Docker Matrix

Docker provides virtualization based on Linux Containers (LXC). LXC is a technology to provide operating system virtualization for processes on Linux. This means that processes can be executed in isolation without starting a real and heavy virtual machine. All processes will be executed on the same Linux kernel, but will still have their own namespaces, users and file system.

An important feature of such virtualization is that applications inside a virtual environment do not know that they are not running on real hardware. An application will see the same environment, no matter if it is running on real or virtual resources.


However, there are some tricks. The /proc file system provides an interface to kernel data structures of processes. It is a pseudo file system and most of it is read-only. But every process on Linux will have an entry in this file system (named by its PID):
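For example (illustrative listing, the PIDs will of course differ on your system):

```bash
# every running process has a numbered directory below /proc
$ ls /proc
1  7  24  108  253  ...  cpuinfo  meminfo  uptime  version
```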

In this directory, we find information about the executed program, its command line arguments or working directory. And since the Linux kernel 2.6.24, we also find a file called cgroup:
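A quick look into such a directory (here for PID 1) shows the kind of entries available; the exact list depends on your kernel:

```bash
$ ls /proc/1/
cgroup  cmdline  cwd  environ  exe  fd  root  stat  status  ...
```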

This file contains information about the control group the process belongs to. Normally, it looks something like this:
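On a normal host the control group paths are simply / (illustrative output; the list of subsystems depends on your kernel and distribution):

```bash
$ cat /proc/1/cgroup
9:blkio:/
8:memory:/
7:devices:/
...
2:cpu:/
1:cpuset:/
```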

But since LXC (and therefore Docker) makes use of cgroups, this file looks different inside a container:
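Inside a container the paths carry the ID of the container; depending on the Docker version the prefix is docker or lxc (again an illustrative output, the ID is shortened):

```bash
$ cat /proc/1/cgroup
9:blkio:/docker/4ea9eb307ced...
8:memory:/docker/4ea9eb307ced...
...
2:cpu:/docker/4ea9eb307ced...
1:cpuset:/docker/4ea9eb307ced...
```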

As you can see, some resources (like the CPU) belong to a control group named after the container. We can make this a little bit easier if we use the keyword self instead of the PID. The keyword self always references the folder of the calling process:
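So the lookup no longer depends on a concrete PID:

```bash
# the same file, but without knowing our own PID
$ cat /proc/self/cgroup
```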

And we can wrap this into a function (thanks to Henk Langeveld from StackOverflow):
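The original snippet is not reproduced here; the sketch below follows the same idea and simply checks whether the cgroup file of the current process mentions docker (or lxc, for older Docker versions):

```bash
# returns 0 (success) inside a Docker/LXC container, 1 otherwise
running_in_docker() {
  grep -qE '/(docker|lxc)/' /proc/self/cgroup
}

if running_in_docker; then
  echo "Inside the Matrix!"
else
  echo "Probably running on the real thing."
fi
```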


Best regards,

Layering of Docker images

Docker images are great! They are not only portable application containers, they are also building blocks for application stacks. Using a Docker registry or the public Docker index, you can compose setups just by downloading the right Docker image.

But Docker images are not only building blocks for applications, they also use a kind of “building block” themselves: layers. Every Docker image consists of a set of layers which make up the final image.


Let us consider the following Dockerfile to build a simple Ubuntu image with an Apache installation:
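The original Dockerfile is not shown here, but based on the description it looked roughly like this (the concrete base image tag is an assumption):

```dockerfile
FROM ubuntu:saucy
RUN apt-get update
RUN apt-get install -y apache2
RUN touch /tmp/a.txt
```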

If we build the image by calling docker build -t test/a . we get an image called a, belonging to a repository called test. We can see the history of our image by calling docker history test/a:

The final image a consists of six intermediate images, as we can see. The first three layers belong to the Ubuntu base image and the rest is ours: one layer for every build instruction.

We will see the benefit of this layering if we build a slightly different image. Let’s consider this Dockerfile to build nearly the same image (only the text file in the last instruction has a different name):
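Again only a sketch, identical to the first Dockerfile except for the file name in the last instruction:

```dockerfile
FROM ubuntu:saucy
RUN apt-get update
RUN apt-get install -y apache2
RUN touch /tmp/b.txt
```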

When we build this file, the first thing we will notice is that the build is much faster. Since we already created intermediate images for the first three instructions (namely FROM..., RUN... and RUN...), Docker will reuse those layers for the new image. Only the last layer will be created from scratch. The history of this image will look like this:

As we see, all layers are the same as for image a, except for the topmost one (the most recently created layer), where we touch a different file!


Those layers (or intermediate images, or whatever you want to call them) have some benefits. Once we have built them, Docker will reuse them for new builds. This makes the builds much faster. This is great for continuous integration, where we want to build an image at the end of each successful build (e.g. in Jenkins). But the builds are not only faster, the images are also smaller, since intermediate images are shared between images.

But maybe the best thing is rollbacks: since every image contains all of its build steps, we can easily go back to a previous step if we want to. This can be done by tagging a certain layer. Let’s take a look at image b again:

If we want to make a rollback and remove the last layer (maybe the file should be called c.txt instead of b.txt) we can do so by tagging the layer 9977b78fbad7:
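Assuming we simply want the tag test/b to point to that intermediate layer again, this is a single command (Docker versions of that time needed -f to overwrite an existing tag):

```bash
docker tag -f 9977b78fbad7 test/b
```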

Let’s take a look at the new history:

Our last layer is gone, and with it the text file b.txt!

Best regards,

Docker vs. Heroku


For a couple of weeks now I have been working with Docker as an application container for Amazon’s EC2. Despite my eternal fight with the Docker registry, I am absolutely amazed by Docker and have enjoyed the experience.

But sometimes it is hard to explain what Docker is and what it has to do with all these cloud, PaaS and scalability topics. So I thought a little bit about the similarities between Docker and Heroku – maybe the most popular PaaS provider. But let’s start with a small disclaimer…


Docker and Heroku may have similar concepts (as you will see below), but they are two completely different things: while Docker is an open source software project, Heroku is a commercial service provider. You can download, build and install Docker on your own laptop or participate in its online community. On Heroku, you can create a user account, pay some money (maybe) and get a really great service and hosting experience for your applications and code. So obviously, Docker and Heroku are very different things. But some of their core concepts have at least some similarities.

Docker vs. Heroku

Docker | Heroku
-------|-------
Dockerfile | BuildPack
Image | Slug
Container | Dyno
Index | Add-Ons

Docker and Heroku have a lot of similarities, especially in their core concepts. This makes Docker an interesting option for people who are looking for an alternative to Heroku – maybe on their own infrastructure.

Dockerfile vs. BuildPack

Docker images can be built with a Dockerfile. A Dockerfile is a set of commands, e.g. to add files and folders or to install packages. It defines what the final image should look like. Here is an example of a Dockerfile which installs memcache from the official website:
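The original example is not reproduced here; a minimal sketch of the same idea, simply installing memcached from the distribution packages instead of from source:

```dockerfile
FROM ubuntu
RUN apt-get update
RUN apt-get install -y memcached
EXPOSE 11211
CMD ["memcached", "-u", "daemon", "-v"]
```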

Heroku’s counterpart is the so-called BuildPack. BuildPacks are also a set of scripts which are used to set up the final state of an image. Heroku comes with a couple of default BuildPacks, e.g. for Java, Python or the Play! framework. But you can also write your own. Here’s a snippet of the Heroku BuildPack for Java apps:
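The original snippet is not reproduced here. Every BuildPack has the same structure, though: a bin/detect, a bin/compile and a bin/release script. A heavily simplified sketch of what a bin/compile for a Java app does (the real Heroku BuildPack downloads its own JDK and Maven and does much more):

```bash
#!/usr/bin/env bash
# bin/compile <build-dir> <cache-dir> - simplified sketch, not the real BuildPack
BUILD_DIR=$1
CACHE_DIR=$2

cd "$BUILD_DIR"

# (download and unpack a JDK and Maven into the build directory - omitted here)

# build the application so the resulting slug contains the compiled artifacts
mvn -B -DskipTests clean install
```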

BTW, there are even projects to enable the usage of Heroku’s BuildPacks for Docker images (like this).

Image vs. Slug

When you run a Dockerfile, it creates a Docker image. Such an image contains all data, files, dependencies and settings you need for your application. You can exchange those images and start them right away on any machine with Docker installed.

When you run a build on Heroku, the BuildPack creates a so-called Slug. Those slugs “are compressed and pre-packaged copies of your application”, as Heroku says. Similar to Docker’s images, they contain all dependencies and can be deployed and started in a very short time.

Container vs. Dyno

After starting a Docker image, you have a running container of this image. You can start an image multiple times to get multiple isolated containers of the same application. This enables you to build an image once and easily start multiple instances of it.

Heroku does the very same. After you build your app with your BuildPack, you get a slug which you can run on a Dyno. Such a dyno is “a lightweight container running a single user-specified command” as Heroku describes it.

Heroku even uses LXC for virtualization of their containers (dynos), which is the same technology Docker uses at its core.

Index vs. Add Ons

Docker images can be shared with the community. This is possible by uploading them to the official Docker index. All images on this index can be downloaded and used by everyone. Most of them are documented very well and can be started with a single command. This makes it possible to use a lot of applications as building blocks. Here’s an example of how to run elasticsearch:
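A sketch of such a one-liner; the image name is just one of the Elasticsearch images that were available on the index, use whichever you prefer:

```bash
# pull the image (if needed) and start an Elasticsearch container in the background
docker run -d -p 9200:9200 dockerfile/elasticsearch
```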

A similar concept applies to Heroku’s add-on market. You can use (or buy) different pre-configured add-ons for your application (e.g. for elasticsearch). This makes it possible to build a complex app out of common building blocks – just as Docker does!

So both Docker’s index and Heroku’s add-ons underline a service-oriented way of developing applications and reusing components.



Although the four points mentioned before are the most important concepts of both, Docker and Heroku have one more thing in common: both have a powerful command line interface which allows you to manage containers. E.g. you can run heroku ps to see all your running dynos, or docker ps to see all your running containers, or you can request the logs of a certain container.
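For example (both CLIs offer much more than these few commands):

```bash
docker ps                    # list all running containers
docker logs <container-id>   # show the log of a certain container

heroku ps                    # list the dynos of the current app
heroku logs                  # show the log of the current app
```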


Best regards,

Development speed, the Docker remote API and a pattern of frustration

One of the challenges Docker is facing right now is its own development speed. Since its initial release in January 2013, there have been over 7,000 commits (in one year!) by more than 400 contributors. There are more than 1,800 forks on GitHub and Docker puts out approximately one new release per month. Docker is developing at a very fast pace right now, and this is really great to see!

However, this very high development speed leaves a lot of third-party tools behind. If you develop a tool for Docker, you have to keep up a very high pace. If not, your tool is outdated within a month.

Docker remote API client libraries

A good example of how this development speed affects projects are the remote API client libraries for Docker. Docker offers a JSON API to access Docker in a programmatic way. It enables you, for example, to list all running containers and stop a specific one. All via JSON and HTTP requests.
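For example, listing and stopping containers boils down to two HTTP calls. A sketch, assuming the Docker daemon was started with a TCP socket on port 4243 (the default TCP port at that time) instead of the Unix socket:

```bash
# list all running containers
curl http://localhost:4243/containers/json

# stop a specific container (by ID) and wait at most 5 seconds
curl -X POST "http://localhost:4243/containers/<container-id>/stop?t=5"
```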

To use this JSON API in a convenient way, people have created bindings for their favorite programming languages. As you can see below, there are bindings for JavaScript, Ruby, Java and many more. I have used some of them myself and I am really thankful for the great work their developers have done!

But many of those libraries are outdated at the time I am writing this. To be exact: all of them are outdated! The current remote API version of Docker is v1.11 (see here for more) which none of the remote API libraries supports right now. Many of them don’t even support v1.10 or v1.9.

Here is the list of remote API tools at the time of writing:

Language | Name | Remote API
---------|------|-----------
Python | docker-py | v1.9
Ruby | docker-api | v1.10
JavaScript (NodeJS) | dockerode | v1.10
JavaScript (NodeJS) | | v1.7
JavaScript (Angular) | dockerui (WebUI) | v1.8
Java | docker-java | v1.8
Erlang | erldocker | v1.4
Go | dockerclient | v1.10
PHP | Docker-PHP | v1.9
Scala | reactive-docker | v1.10

How to deal with rapidly evolving APIs

How to deal with rapidly evolving APIs is a difficult question, and IMHO Docker made the right decision. By solely providing a JSON API, Docker chose a modern and universal technique. A JSON API can be used in any language or even in a web browser. JSON (together with a RESTful API) is the state-of-the-art technique to interact with services. Docker even leaves the possibility to fall back to an old API version by adding a version identifier to the request. Well done.
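For example, a client can pin itself to an older API version just by prefixing the path (same assumption about the TCP socket as above):

```bash
# explicitly talk to version 1.10 of the remote API
curl http://localhost:4243/v1.10/containers/json
```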

But the decision to stay “universal” (by solely providing a JSON API) also means not getting specific. Getting specific (which means using Docker in a certain programming language) is left to the developers of third-party tools. These tools are also evolving rapidly right now, no matter whether they are remote API bindings, deployment tools or hosting solutions (like CoreOS). This enriches the Docker ecosystem and makes the project even more interesting.

Bad third party tools will fall back on you

The problem is: even if Docker did a good job (which they did!), outdated or poorly implemented third-party tools will fall back on Docker, too. If you use a third-party library (which you maybe found via the official website) and it works fine, you will be happy with Docker and the third-party library. But if the library doesn’t work next month because you updated Docker and the library doesn’t take care of the API version, you will be frustrated with the tool and with Docker.

Pattern of frustration

This pattern of frustration occurs a lot in software development. Bad libraries cause frustration with the underlying tool itself. Let’s take Java as an example. A lot of people complain that Java is verbose, uses class explosions as a pattern and makes things much more complicated than they should be. The famous AbstractSingletonProxyFactoryBean class of the Spring framework is just such an example (see +Paul Lewis). Another example is reading a file in Java, which used to be an awful pain:
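The original snippet is gone, but it was the classic pre-Java-7 chain described further below; roughly like this (imports and exception handling omitted):

```java
// File -> FileReader -> BufferedReader, read line by line into a StringBuilder
BufferedReader reader = new BufferedReader(new FileReader(new File("file.txt")));
StringBuilder builder = new StringBuilder();
String line;
while ((line = reader.readLine()) != null) {
    builder.append(line).append("\n");
}
reader.close();
String content = builder.toString();
```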

And even the new NIO API which came with Java 7 is not as easy as it could be:
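Probably something along these lines (again, imports and exception handling omitted):

```java
// Java 7 NIO: String -> Path -> byte[] -> String
String content = new String(Files.readAllBytes(Paths.get("file.txt")));
```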

You need to put a String into a Path to pass it into a static method whose output you need to put into a String again. Great idea! But what about something like this:
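A hypothetical one-liner like the following; it does not exist in the JDK in this form, which is exactly the point (libraries such as Apache Commons IO or Guava come pretty close, though):

```java
// hypothetical convenience method - not part of the JDK
String content = Files.readFile("file.txt");
```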

However, it is not the fault of Java, but of a poorly implemented third-party tool. If you need to put a File into a FileReader which you need to put into a BufferedReader to be able to read a file line by line into a StringBuilder, you are using a terrible I/O library! But anyway, you will be frustrated with Java and how verbose it is (and maybe also with the API itself).

This pattern applies to many other things: you are angry with your smartphone because of a poorly coded app. You are angry with Eclipse because it crashes with a newly installed plugin. And so on…

I hope this pattern of frustration will not apply to Docker and that the community will develop a stable ecosystem of tools to provide a solid basis for development and deployment with Docker. A tool like Docker lives through its ecosystem. If the tools are buggy or outdated, people will be frustrated with Docker – and that would be a shame, because Docker is really great!

Best regards,