Using the console on Windows

I’ll tell you a secret: I’m a Windows user. I don’t use OSX, I don’t use Linux, I use Windows. And I’ll tell you something more: I like it 😉

Usually I develop Java or JavaScript applications which run perfectly under Windows, Linux, OSX or whatever else. So developing under Windows is no problem at all. Runtime environments, IDEs, editors – Windows has it all. However, people keep wondering how I can do the simplest tasks:

How do you connect to another server? Do you use PuTTY? – Argh, no, I just type ssh some.server.com and I’m done.

Or:

Do you use your IDE to work with Git? Or do you have SourceTree? – Argh, no, I just type git add . and git commit -m "..." and I’m done.

But I also know that not everybody does it like this. People use the weirdest tools and techniques when working under Windows. A lot of people use that f***ing small Windows CMD, PuTTY with its broken key format or Cygwin to be a little bit more Linux-like. But I don’t like any of these. The Windows CMD is unusable, PuTTY is unnecessary and Cygwin is a monster you don’t need. Here is what I do.

Don’t use the Windows default CMD

The first thing I do is to stop using Windows’ default CMD. Why? It can’t even mark and copy things! However, there’s an easy and open-source alternative: ConEmu. ConEmu has all the simple things you expect: colors, tabs, resizable windows, copy-and-paste and much more. You can get it from GitHub and it even works without installation.


Use Git as a toolbox for Windows

The other thing I do is to use Git as a toolbox for Windows. When people are talking about using the console, they are actually talking about using tools. They talk about SCP, SSH or CURL as if those came with their console – but they don’t! All of those things are just individual programs installed on their machine. They are not related to the command line! So why not install them on Windows?

If you use Git, you already have everything you need in its bin folder (e.g. in C:\Program Files (x86)\Git\bin):

ssh, scp, curl, cat, grep, less, sh, bash, ls, mv, cp, diff, gzip, and much more…


The only thing you need to do is to add Git’s bin folder to your Windows PATH variable. Then you have everything at your fingertips in your console.
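
For example, from a console (a quick sketch; the path is the default install location, and you can just as well edit the PATH variable in the Environment Variables dialog):

    rem append Git's bin folder to the PATH of the current user
    setx PATH "%PATH%;C:\Program Files (x86)\Git\bin"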

[Screenshot: the Windows Environment Variables dialog]

Put my SSH keys into my user folder

The last thing I usually do is to put my SSH keys into my user folder at C:\Users\tug\.ssh, so that SSH can find them.
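
With the keys in place, connecting works without any further setup (a minimal sketch; the server name is made up and id_rsa is the default key name ssh looks for):

    # ssh picks up C:\Users\tug\.ssh\id_rsa automatically
    ssh tug@some.server.com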

Best regards,
Thomas


Resizing Vagrant box disk space

Vagrant is a great tool to provision virtual machines! As I’m a passionate Windows user, Vagrant is my weapon of choice whenever I need to use some Linux-only tools such as Docker. I spin up a new Linux VM, already configured with the things I need, and start working. However, when it comes to resizing a disk, Vagrant is not nice to you…

The problem

Vagrant doesn’t provide any out-of-the-box option to configure or change the disk size. The disk size of a VM depends entirely on the base image used for the VM. There are base images with a 10 GB disk, some with a 20 GB disk and others with a 40 GB disk. There is no Vagrant option to change this – and even worse: most Vagrant boxes use VMDK disks, which cannot be resized!

Resizing (manually) with VirtualBox

As Vagrant doesn’t provide any out-of-the-box functionality, we need to do the resizing “manually”. Of course, we can write a script for this, too, but for now we keep it simple and do it by hand.

  1. First we need to convert the VMDK disk to a VDI disk, which can be resized. We do this with the VBoxManage tool that comes with the VirtualBox installation (see the commands after this list).
  2. Now we can easily resize this VDI disk, e.g. to 50 GB.
  3. The last step is to use the new disk instead of the old one. We can do this by cloning the VDI disk back to the original VMDK disk or with a few clicks in VirtualBox:

    [Screenshot: Oracle VM VirtualBox Manager]
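
The three steps could look like this (a sketch: the disk file names are examples and have to be adjusted to your VM; the new size is given in MB):

    # 1. convert the VMDK disk into a resizable VDI disk
    VBoxManage clonehd "box-disk1.vmdk" "box-disk1.vdi" --format vdi

    # 2. resize the VDI disk to 50 GB (51200 MB)
    VBoxManage modifyhd "box-disk1.vdi" --resize 51200

    # 3. either attach the VDI to the VM in the VirtualBox GUI instead of the
    #    old disk, or clone it back to a VMDK file and attach that one
    VBoxManage clonehd "box-disk1.vdi" "box-disk1-resized.vmdk" --format vmdk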

That’s it! Now start your VM with vagrant up and check the disk space. It’s at 50 GB and we have new free space again!

Best regards,
Thomas


VirtualBox crashes with STATUS_OBJECT_NAME_NOT_FOUND

As I’m a passionate Windows user (sorry…), I often use VirtualBox (with Vagrant) to spin up a Linux box to use Docker or some other “Linux-only” stuff. Usually this works just fine, but today my VirtualBox crashed with a STATUS_OBJECT_NAME_NOT_FOUND error:


One of those mystic errors where everything worked like a charm yesterday at 6 p.m., but when you start your machine today at 9 a.m. it’s broken. Damn.

The solution was (and still is) a small Windows patch you need to install on your machine:

https://support.microsoft.com/en-us/kb/2628582

Install, restart, back at work. I hope this will help you, too.

Best regards,
Thomas

Mount Windows folder to Boot2Docker VM

I just stumbled over a post on Stack Overflow (http://stackoverflow.com/questions/30864466/whats-the-best-way-to-share-files-from-windows-to-boot2docker-vm) asking how to mount a Windows folder into a Boot2Docker VM. Although the steps are a little bit confusing, in the end it is not difficult to do.

Boot2Docker

Boot2Docker is a simple VM to run Docker. The VM runs on VirtualBox, and Boot2Docker is just a tool to provision this VM (very similar to Vagrant, but smaller and customized for using Docker). You simply download and install Boot2Docker and run boot2docker up to start the VM. After the VM is up, you can run boot2docker ssh to log in. Now we can start mounting our Windows folder.
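
The basic workflow looks like this (a quick sketch of the standard Boot2Docker commands):

    boot2docker init    # create the VM (only needed the first time)
    boot2docker up      # start the VM
    boot2docker ssh     # log into the VM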

Mounting the folder

To use one of your Windows folders inside your Boot2Docker VM, you need to mount it. To do so, you first share the Windows folder with the VM:
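
This is done with the VBoxManage tool from the VirtualBox installation while the VM is not running (a sketch: boot2docker-vm is Boot2Docker’s default VM name, my-folder is just an example share name):

    VBoxManage sharedfolder add "boot2docker-vm" --name "my-folder" --hostpath "c:/my/folder/with/code"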

Now you log in to your VM via SSH (with boot2docker ssh) and do the following:

Make a folder inside your VM:
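
For example (the mount point is just an example and is reused below):

    sudo mkdir -p /my-folder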

Mount your stuff from Windows:
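
This uses the vboxsf file system of the VirtualBox Guest Additions, provided they are available in the VM (the share name has to match the one chosen above):

    sudo mount -t vboxsf my-folder /my-folder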

After that, you can access c:/my/folder/with/code inside your Boot2Docker VM:
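
For example:

    ls /my-folder    # shows the content of c:/my/folder/with/code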

Now that your code is present inside your VM, you can use it with Docker, either by mounting it as a volume into a container:
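
A sketch (the ubuntu image and the path inside the container are just placeholders):

    docker run -it -v /my-folder:/my-folder ubuntu /bin/bash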

Or by using it while building your Docker image:
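
Again a sketch, assuming the mounted folder contains a Dockerfile (the image tag is made up):

    cd /my-folder
    docker build -t my-image .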

Best regards,
Thomas

ORA-28001: the password has expired

Today I came across a very annoying exception. After my development setup had been running smoothly for the last six months, my application was getting database errors today. I knew that I hadn’t broken anything, so the problem had to be somewhere else – and it was: ORA-28001: the password has expired

If you install an Oracle database on a Windows system, the default password policy will make all passwords expire after exactly six months! Great. So here’s how to fix that:
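
The following is a sketch of the fix: connect as a DBA with SQL*Plus, reset the expired password and disable the expiration in the default profile (MY_USER and my_new_password are placeholders, adjust them to your setup):

    sqlplus / as sysdba

    -- reset the already expired password
    ALTER USER MY_USER IDENTIFIED BY my_new_password;

    -- stop passwords from expiring in the future
    ALTER PROFILE DEFAULT LIMIT PASSWORD_LIFE_TIME UNLIMITED;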

Best regards,
Thomas

DeployMan (command line tool to deploy Docker images to AWS)

DeployMan


Yesterday, I published a tool called DeployMan on GitHub. DeployMan is a command line tool to deploy Docker images to AWS and was the software prototype for my master thesis. I wrote my thesis at Informatica in Stuttgart-Weilimdorf, so first of all, I want to say thank you to Thomas Kasemir for the opportunity to put this online!

Disclaimer

At the time I am writing this post, DeployMan is a pure prototype. It was created for academic research and as a demo for my thesis. It is not ready for production. If you need a solid tool to deploy Docker images (to AWS), have a look at Puppet, CloudFormation (for AWS), Terraform, Vagrant, fig (for Docker) or any other orchestration tool that has come up in the last couple of years.

What DeployMan does

DeployMan can create new AWS EC2 instances and deploy a predefined stack of Docker images on them. To do so, DeployMan takes a configuration file called a formation. A formation specifies what the EC2 machine should look like and which Docker images (and which configurations) should be deployed. Docker images can be deployed either from a Docker registry (the public one or a private one) or as tarballs from S3 storage. Together with each image, a configuration folder will be pulled from S3 and mounted into the running container.

Here is an example of a formation which deploys an Nginx server with a static HTML page:

Interfaces

DeployMan provides a command line interface to start instances and do some basic monitoring of the deployment process. Here is a screenshot which shows some formations (which can be started) and the output of a started Logstash server:

[Screenshot: Run Logstash Server]

To keep track of the deployment process in a more pleasant way, DeployMan has a web interface. The web interface shows details about the machines, such as the deployed images and which containers are running. Here is what a Logstash server would look like:

[Screenshot: Machine Details]

The project


You can find the project on GitHub at https://github.com/tuhrig/DeployMan. I wrote a detailed README.md which explains how to build and use DeployMan. To test DeployMan, you need an AWS account (there are also free accounts).

The project is made with Java 8, Maven, the AWS Java API, the Docker Java API and a lot of small stuff like Apache Commons. The web interface is based on Spark (for the server), Google’s AngularJS and Twitter’s Bootstrap CSS.

Best regards,
Thomas

Presentation of my master thesis

Over the last six months, I wrote my master thesis about porting an enterprise OSGi application to a PaaS. Last Monday, the 21st of July 2014, I presented the main results of my thesis to my professor (best greetings to you, Mr. Goik!) and to my colleagues (thanks to all of you!) at Informatica in Stuttgart-Weilimdorf, Germany (where I had written my thesis based on one of their product information management applications, called Informatica PIM).

Here are the slides of my presentation.

While my master thesis also covers topics like OSGi, VMs and JEE application servers, the presentation focuses on my final solution: a deployment process for the cloud. Based on Docker, the complete software stack used for the Informatica PIM server was packaged into separate, self-contained images. Those images were stored in a repository and used to automatically set up cloud instances on Amazon Web Services (AWS).

The presentation gives answers to the following questions:

  • What is cloud computing and what is AWS?
  • What are containers and what is Docker?
  • How can we deploy containers?

To automate the deployment process of Docker images, I implemented my own little tool called DeployMan. It shows up at the end of my slides and I will write about it here in a couple of days. Although there are a lot of tools out there to automate Docker deployments (e.g. fig or Maestro), I wanted to do my own experiments and create a prototype for my thesis.

Enjoy!

Best regards,
Thomas

Docker Registry REST API

The Docker Registry

The Docker registry is Docker’s built-in way to share images. It is an open-source project and can be found at https://github.com/dotcloud/docker-registry in the official repository of DotCloud. You can set it up on your private server (maybe in the cloud) and push and pull your images to it. You can also secure it, e.g. with SSL and an NGINX proxy (maybe I will write about this later).

The REST API

Similar to Docker itself, the registry provides a REST API to interact with it. Using the REST API, you can list all images, search, or browse a certain repository. The only prerequisite is that you define a search back-end in the registry’s config.yaml:
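
For the old (v1) registry this is a single line in the configuration file; sqlalchemy is the back-end I know of (treat the exact key and value as an assumption if your registry version differs):

    search_backend: sqlalchemy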

Now you can use the REST API like this:

List a certain repository
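
For example, to list the tags of a repository (my-registry:5000 and foo/bar are placeholders for your registry and a repository in it):

    curl "http://my-registry:5000/v1/repositories/foo/bar/tags"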

Search
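
For example:

    curl "http://my-registry:5000/v1/search?q=bar"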

Get info to a certain image
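
For example (IMAGE_ID is a placeholder for the ID of an image):

    curl "http://my-registry:5000/v1/images/$IMAGE_ID/json"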

List all images

And thanks to bwilcox from StackOverflow, this is how you can list all images:
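
If I remember the answer correctly, the trick boils down to querying the search endpoint without a search term (a sketch, not necessarily the exact command from the answer):

    curl "http://my-registry:5000/v1/search"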


Best regards,
Thomas

Cloud vendors with Windows

The cloud is built on Linux – that is my own humble opinion. But is it really? To answer this question for myself, I took a look at a bunch of cloud vendors to see what they have under the hood. Here is what I found.

But note that the list is neither complete nor representative. I am also comparing two very different things: IaaS and PaaS. While IaaS vendors like AWS provide virtual machines, PaaS vendors like Heroku provide tooling to set up complete environments.

However, the list shows that most of the vendors use Linux as their base system, and the further you go in the PaaS direction, the more Windows vanishes.

Vendor                | Windows | Linux | Type | Comment
Microsoft Azure       | yes     | yes   | IaaS |
AWS                   | yes     | yes   | IaaS | AWS has a lot of Linux distributions and Windows versions on its IaaS EC2.
AWS Elastic Beanstalk | yes     | yes   | IaaS |
eNlight Cloud         | yes     | yes   |      | CentOS, Red Hat Enterprise Linux, SUSE Linux, Oracle Linux, Ubuntu, Fedora, Debian, Windows Server 2003, Windows Server 2008, Windows 7.
Google App Engine     |         |       | PaaS | Google App Engine has a sandbox and hides the OS.
Google Compute Engine | yes     | yes   | IaaS | Linux, FreeBSD, Microsoft Windows.
Heroku                |         | yes   | PaaS | Ubuntu.
Jelastic              |         | yes   | PaaS |
HP Cloud              |         | yes   | IaaS | Based on OpenStack.
OpenShift             |         | yes   | PaaS | Red Hat Enterprise Linux.
Engine Yard           |         | yes   | PaaS | Ubuntu, Gentoo.
Rackspace             | yes     | yes   |      |
Cloud Foundry         |         | yes   | PaaS |

Best regards,
Thomas

How to know you are inside a Docker container

How do you know that you are living in the Matrix? Well, I do not know, but at least I know how to tell whether you are inside a Docker container or not.

The Docker Matrix

Docker provides virtualization based on Linux Containers (LXC). LXC is a technology that provides operating system virtualization for processes on Linux. This means that processes can be executed in isolation without starting a real and heavy virtual machine. All processes will be executed on the same Linux kernel, but will still have their own namespaces, users and file system.

An important feature of such virtualization is that applications inside a virtual environment do not know that they are not running on real hardware. An application will see the same environment, no matter if it is running on real or virtual resources.

/proc

However, there are some tricks. The /proc file system provides an interface to kernel data structures of processes. It is a pseudo file system and most of it is read-only. But every process on Linux will have an entry in this file system (named by its PID):
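
For example:

    # prints one numbered directory per running process (plus files like cpuinfo, meminfo, ...)
    ls /proc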

In this directory, we find information about the executed program, its command line arguments or working directory. And since the Linux kernel 2.6.24, we also find a file called cgroup:
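
For example (1234 is a made-up PID):

    cat /proc/1234/cgroup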

This file contains information about the control group the process belongs to. Normally, it looks something like this:
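
Roughly like this; the exact list of controllers depends on your kernel, but all paths point to the root control group /:

    11:name=systemd:/
    10:hugetlb:/
    ...
    3:memory:/
    2:cpu:/
    1:cpuset:/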

But since LXC (and therefore Docker) makes use of cgroups, this file looks different inside a container:
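
Roughly like this (the long container ID is abbreviated):

    11:name=systemd:/docker/3601f3...
    ...
    3:memory:/docker/3601f3...
    2:cpu:/docker/3601f3...
    1:cpuset:/docker/3601f3...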

As you can see, some resources (like the CPU) belong to a control group with the name of the container. We can make this a little bit easier if we use the keyword self instead of the PID. The keyword self will always reference the folder of the calling process:
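
For example:

    cat /proc/self/cgroup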

And we can wrap this into a function (thanks to Henk Langeveld from StackOverflow):
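
A minimal sketch of such a function (not necessarily the exact one from the Stack Overflow answer):

    # returns 0 (success) if the calling process runs inside a Docker container
    running_in_docker() {
      grep -q docker /proc/self/cgroup
    }

    # usage:
    running_in_docker && echo "Inside Docker" || echo "Not inside Docker"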


Best regards,
Thomas