Around the world with Docker!

DISCLAIMER: Not intended for production use! 


UPDATE 12/10/2015 - I changed the dashes to regular dashes so if you copy and paste commands they will work!

You could probably tell from my last three posts that I have been experimenting with Docker. If you have no clue what I’m talking about then see below! If you haven’t spent the past few months of your life admiring the weave of the cat-5 cabling in the IDF then let’s proceed.

I like it easy. Putting things in Docker containers certainly makes life easy, and it is even easier when you build Docker images from Dockerfiles. I had a dream of a DevOps platform where I would have everything I needed to create SDN applications in one system and be able to deploy an identical platform in minutes: an HP VAN SDN controller, an OpenDaylight controller, a LAMP server and possibly a mininet application to generate some flows.

If you have been following along with the last blog posts you just might have an Ubuntu:Mate platform with the Docker daemon running. Wouldn't it be great to say "docker pull xod442/macfind3" and have a LAMP server that you can start using in 5 minutes? How about "docker pull xod442/van" and have the HP VAN SDN controller at your disposal? Go ahead and do it. They are both waiting for you. I wish it had been that easy for me. I had to go around the world.

I started looking around Dockerhub and found the OpenDaylight SDN controller in a docker container (docker pull raseel/opendaylight_base). The image downloads quickly and is up and running with the command "docker run -d -p 8000:8000 -p 8080:8080 -p 6633:6633 -p 1088:1088 -p 2400:2400 raseel/opendaylight_base /opt/opendaylight/". Point your browser to the docker host at port 8080 and log in with the default credentials of admin/admin. Very easy! I wanted it to be that easy for HP customers to get the HP VAN SDN controller, but I didn't have a clue how to do it.

I started with the installation instructions. A few sudo apt-get installs, a debian file to unpack, a couple of shell scripts to run, keystone users to build. I was in over my head and needed to call in the professionals. My first call for help was someplace near London, England. A good friend and mentor who works for Docker, Dave, told me I would have to learn about something called supervisord. There are lots of fascinating things over on his blog.

In a nutshell, supervisord is like systemd: it stops and starts services and scripts. There were a few late nights learning how to use it, but in the end it is not that difficult. Stay tuned for the blog on supervisord. Next I wanted to learn a little more about the startup process for the controller and which directories things are stored in. I called another pro in the Bay Area who basically wrote the book on SDN. Chuck gave me some awesome information that started me down another path of learning and exploration and led me right into CoreOS!
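To make the supervisord idea concrete, here is a minimal sketch of what a supervisord.conf for a container like this could look like. The program names and command paths are made up for illustration; the real controller and keystone commands would go in their place:

```ini
[supervisord]
; run in the foreground so the container keeps running
nodaemon=true

; hypothetical entry: start the database (path matches the postgres 9.3 in the Dockerfile)
[program:postgres]
command=/usr/lib/postgresql/9.3/bin/postgres -D /var/lib/postgresql/9.3/main
autorestart=true

; hypothetical entry: the keystone service
[program:keystone]
command=/usr/bin/keystone-all
autorestart=true
```

Each [program:x] section is a service that supervisord starts and restarts. The nodaemon=true line is the important bit inside a container: the container exits when its foreground process does, so supervisord has to be that foreground process.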

Hit the brakes….stop everything…..if you don't know about CoreOS then get to the Googler quick! CoreOS is a lightweight operating system that is designed a lot like Chrome OS. It has an A and a B side for booting. While you're up and running on the A side, the B side is updating. A reboot puts you on the B side while the A side updates. BOOM! Mind blown! When CoreOS boots up, IT IS DOCKER READY!!! More in the CoreOS blog later. Another great thing about CoreOS is that it is designed from the ground up to be deployed in clusters and managed by etcd. I know, I had to run out and build one right away. This stuff is exciting!

Back to the SDN controller in the container. Another call to the Bay Area revealed another mastermind, Juliano Vacaro, with R&D in Brazil. This is where I struck pure gold. It turns out that Juliano and his team have built the HP VAN SDN controller in a container. I could most likely have pulled it and my adventure would have been over, but I don't like taking shortcuts and I wanted to learn. Juliano shared with me some examples of Dockerfiles and supervisord.conf. They do things just a bit differently and run the SDN controller separate from the keystone server. I wanted it all in one docker image to make it very easy for customers to pull it and start running without having to link containers together (yes, you can do that).

In the end, it was building the Dockerfile (a script that tells docker how to build an image) that finally did the trick. Here are the contents of the Dockerfile:


# Change 14.04 to 12.04 for a Precise-based image
FROM ubuntu:14.04

MAINTAINER Rick Kauffman <>

RUN apt-get update && apt-get install --no-install-recommends -y \
curl \
iptables \
iputils-arping \
net-tools \
ntp \
openjdk-7-jre-headless \
postgresql \
postgresql-client \
sudo \
supervisor \
software-properties-common \
ubuntu-cloud-keyring

RUN rm -rf /var/lib/apt/lists/*

# Now add Keystone
RUN apt-get install --no-install-recommends -y ubuntu-cloud-keyring \
&& echo 'deb trusty-updates/juno main' >> /etc/apt/sources.list \
&& apt-get update \
&& apt-get install --no-install-recommends -y keystone

RUN rm -rf /var/lib/apt/lists/*

# Run the Keystone setup script
COPY ./ /
RUN ./

RUN echo '* Allowing external access to postgres database' \
&& sed -i -- 's/host all sdn\/32 trust/host all sdn\/32 trust\nhost all sdn\/0 trust/' /etc/postgresql/9.3/main/pg_hba.conf \
&& sed -i -- "s/#listen_addresses = 'localhost'/listen_addresses = '*'/" /etc/postgresql/9.3/main/postgresql.conf

COPY ./hp-sdn-ctl_2.4.6.0627_amd64.deb /home/hp-sdn-ctl.deb
COPY ./supervisord.conf /etc/supervisor/conf.d/supervisord.conf

COPY ./ /
EXPOSE 5000 35357 8443 6633


I needed a Keystone setup script and a startup script (the two COPYed into the image) along with the supervisord.conf file. Put all these files in a directory on a docker server, along with the debian package, and issue docker build -t "xod442/van" . <- The dot at the end of this command will mess you up if you omit it. Docker then reads the Dockerfile and creates the image. You can run the build over and over and it will produce the same exact image.

My trip around the world was fun and exciting (read: too many late nights in the lab) and I must say all the great people who helped me out are absolutely amazing, I cannot thank you enough. One thing for sure is I have an abundant amount of new topics to blog about. Stay tuned!

Now it is no longer necessary to stumble around getting your DevOps platform up and running. Get a docker server and start pulling!

Two commands to LAMP

docker pull xod442/macfind3
docker run -d -p 80:80 xod442/macfind3 /usr/sbin/apache2ctl -D FOREGROUND

URL http://ip_address_of_docker_server

Two commands to get your HP VAN SDN Controller!

docker pull xod442/van
docker run --privileged=true -d -p 8443:8443 -p 6633:6633 xod442/van /etc/supervisor/supervisord.conf
(The above two lines are actually one command)

URL https://ip_address_of_docker_server:8443/sdn/ui


Hit me up if you want to know more! I like to share!

Docker survival kit

WARNING!!!!!! Straight from the Mad Scientist!!

Part 3

Let's finish this up! One of the biggest issues I had learning Docker was that when you use $ sudo docker run -i -t xod442/lamp /bin/bash to get a terminal session running on a docker image, you spawn a new container id. THE CHANGES YOU ARE MAKING DO NOT AFFECT THE ORIGINAL DOCKER IMAGE!!!! They are only relevant to the container id you are working in. Once you are finished with the changes to the container, you will need to commit them to a NEW docker image: $ sudo docker commit 90934ee6cf3f xod442/new_image_name. This is a bit tricky at first but once the light bulb comes on you'll think you're a freaking genius!

Now let's say the docker image we created is a LAMP server. We want to run the LAMP server and have it stay up until we decide to stop it. I found this command works well: $ sudo docker run -d -p 80:80 xod442/macfind /usr/sbin/apache2ctl -D FOREGROUND. In this command we are mapping port 80 on the docker host to port 80 in the container. To test if your LAMP is up, point a browser to http://dockerhost (use the IP address of your docker host).

Another way to verify that our LAMP server is up and running is to look at the docker processes. $ sudo docker ps -a will display all the containers we have ever started and their operational state. In the diagram below you can see that container 90934ee6cf3f is UP and running on port 80 and 5a52ff424b65 exited about an hour ago.


Have you noticed the names? Like cocky_brattain? If you don't specify a name when running or starting a container, docker will make one up. You will notice each one is unique to your host. You can use your own names with $ sudo docker run --name (containerName) -i -t ubuntu /bin/bash. Now when you look at the docker processes, you can easily identify your container from the others.

Finally here is a short list of commands that I use often. Copy them down and make your own docker cheat sheet.

sudo docker run --name (containerName) -i -t ubuntu /bin/bash
-Start a docker container, give it a name, pull ubuntu from dockerhub, load into container and offer the bash prompt.

exit – exits the container

sudo docker ps -a – Shows what containers are active and recently stopped. Here you can find the container ID

sudo docker start (containerId) – Starts the container

sudo docker attach lampster – attaches to the console of the container by name

sudo docker exec -i -t containerid bash – gives you bash on a running container

sudo docker rm $(sudo docker ps -a -q) -Removes all containers from your workspace (Danger Will Robinson!!)

sudo docker rmi $(sudo docker images -q) – Removes all images from work space (Danger Will Robinson!!)

sudo docker login – Allows you to login to dockerhub

sudo docker search (Keyword) – Allows you to search the dockerhub for pre-built container

sudo docker pull (owner/ImageName) – Get container from dockerhub

sudo docker commit (containerId) (owner/ImageName) – Builds a new Image from a container

sudo docker push (owner/ImageName) – Put Images on your dockerhub space

Hopefully this three part blog has stirred up some interest in diving into the world of containerization. It is only a limited look into this technology and I urge you to set up your own docker workstation and explore!

Finally, there is talk from Microsoft about working with Docker and implementing containers in Windows. When this is pervasive, keep in mind that if you build a docker container on a Windows platform, it will not be able to run on top of a Linux docker server. Kind of goes without saying……but there are those of you reading this now who are not so strong with the force!! You know who you are!

Diving Into Containers

WARNING!!!!!! Straight from the Mad Scientist!!

Part 2

No, we're not talking about dumpster diving….but close. I don't know about you, but I am a conceptual thinker. Give me the "headlines" (and Google) and I can usually figure things out. If you're not wired that way, along with the book mentioned in Part 1, you can also find some awesome documentation online.

OK, Part 1 left you with some homework. Did you set up your github and dockerhub accounts?…….OK….I’ll wait……go do it now!!!!

Conceptually, docker has three basic parts: Docker images (plenty to pull from Dockerhub), the Docker client (the docker command) and the docker host (server) running the docker daemon. Most of the interaction is with the host using the client commands. Below you will see a simple model of docker.


At this point you should have a new Ubuntu:Mate workstation and the docker daemon installed. If you use the $ sudo docker images command you should see a few local images. These were pulled down from dockerhub when we tested the docker install in Part 1. Here is an example of my current system.


You will probably notice the xod442; this is my dockerhub account name, followed by a slash and the name of the Docker image. If you want to remove one image you can use the $ sudo docker rmi (image_name) command. If you would like a clean slate and want to delete all of the images, use $ sudo docker rmi $(sudo docker images -q) and say goodbye to all your images. I used this several times in my learning process.

We briefly touched on the command to get us into a new container. We used $ sudo docker run -i -t ubuntu /bin/bash. The /bin/bash tells docker to do something and keep the container running. Without it the container would start and stop very quickly. You can use the $ sudo docker ps -a command to see the status of all containers. Without the -a option the command only shows running containers. In the graphic below I show the commands to start a new container and break down what is happening. I also show how to get out of a container (exit) and how to commit the changes that you make to a container, creating a new docker image (this is the point where the dockerhub account is going to come in handy).


Here is a diagram of the process: using docker run to initialize a new container, adding some extra love to it and committing to a new docker image. They say a picture is worth 1000 words but this one is most likely 385. Remember, concepts/headlines only!!!


I think this is a good breaking point. I urge you to go out to dockerhub and browse some of the pre-built images. No need to reinvent the wheel; there are plenty of toys to play with. One last tip before we sign off: use $ sudo docker search (keyword) to look for specific images you might be interested in. You just might find what you are looking for. Finally, if you want to get something you find on the dockerhub site, use the $ sudo docker pull (user-name/image_name) command to pull it down to your docker host.

Part 3 will be a docker survival kit!

I can hardly contain myself!

WARNING!!!!!! Straight from the Mad Scientist!!

Part 1

Curiosity and need often go hand in hand. When you know nothing about something, it's best to start reading. Here is "The Docker Book" by James Turnbull. A perfect learner's guide.

This blog is an effort to condense this information and help you get past a few Wookiee traps.

OK, first things first. What is Docker and why do you care? Well, I think of Docker as a multiplexer for the Operating System as opposed to VMware’s HyperVisor acting as a multiplexer for the hardware.

Here is a diagram of the basic differences between virtualization and containers. When you develop an application, it has dependencies on certain libraries and binaries (files we don't often think about). If we are developing this on a VM in VMware, the app is dependent on certain files in that particular operating system. So if I ZIP up the APP files and send them to someone on another VM, the APP might not run. The only way to guarantee the APP works correctly is to send the entire virtual machine. Docker builds and manages containers. Every dependent file needed for the APP to run properly is packaged in a very small file called a container. As long as you load the container on a similar docker host, the APP will run perfectly.


Let's get started; we will need a workstation to turn into a Docker platform. I am a self-confessed VirtualBox user. I could talk about why, but it would just be boring and not any fun. So fire up a new image of Ubuntu. Just found this and I have to admit….it's pretty nice. Just take a look!


Install Docker:

Installing Docker is straight forward.
Open a terminal window and at the command prompt enter:
sudo apt-get update
sudo apt-get install

Make sure it installed properly by launching a new container:
sudo docker run -i -t ubuntu /bin/bash

You should now see a new bash prompt: root@c0679a7f6d84:/#
If this is what you see then you are in a new container. Congratulations!

UP NEXT!!!! Working with containers. Do yourself a favor and signup for free accounts on Github and Dockerhub…you’re going to need them!

A box inside a box inside a box?

Starting out in a new job, I find myself needing to know way more about VMware than I do now. Luckily, I have not been living under a rock and I know what VMware is. In a very small nutshell, VMware is a virtualization technology that uses hypervisors to basically multiplex the underlying hardware to many virtual machines. Multiple hypervisors are managed by VMware vSphere (individual hypervisors can be managed by the vSphere Client, more on that later). I'm thinking more like a pistachio nutshell.

I recently acquired a new laptop with 16 GB of RAM and I have gone a little crazy building virtual machines in Oracle VirtualBox, never really having a need for VMware products. Life comes at you fast and you need to learn to adapt or you will no longer be relevant. With a little creative thinking I found a way to build a complete VMware environment with two hypervisors, a vSphere appliance and a couple of real VMs to vMotion back and forth. Big thanks to sysAdmGirl….she rocks!

Here is a picture of the logical lab environment. Keep in mind there are only two physical devices. The laptop and the Synology data store.


First things first, you will need to get a copy of Oracle Virtual Box and shutdown anything that is taking up extra RAM on your system, yes Chris, that means you’ll have to shut down TweetDeck as well!

You will see from the diagram that the three VirtualBox VMs will each have 4GB of RAM, a 10GB hard disk and 2 processors. Follow the links to the ESXi hypervisor (an ISO file), download it, and while you are on VMware's website get the vSphere OVA appliance. Two of the VirtualBox VMs are made using the VirtualBox interface to create new VMs; make sure to set the network interface cards to "bridged" mode. For the third (vSphere) you just need to double-click the OVA file and it will import into VirtualBox.

When they are all installed and running it will look like this.

ALERT!!!!! Pay attention here!!!
When you look at the vSphere appliance it will say to point your browser to https://some_IP_address:5480. When you do, you will see something that looks like this:


You are probably thinking, where do I import the ESXi servers?…That's what I thought too. This screen is for configuring the vSphere appliance with single sign-on and database storage locations. These are not the droids you are looking for. Drop the port 5480 from your URL and you will be presented with the vSphere web client interface.

The VMware vSphere Web Client is a newer interface compared to the VMware vSphere Client (the old school client). The VMware vSphere Client is the same tool used to manage a single ESXi hypervisor as well as vSphere; you can find it on VMware's site as well. Once it's installed, just feed it the IP address of your vSphere appliance (minus the port 5480) and off you go!

Alright, now you should have the three VMs up and running. You will need to create a common data store running NFS. I used my Synology network attached storage device. Find something you can use and figure out how to make it appear as NAS on your lab network. Unfortunately, I don't know what you will use, so you will have to put on your little grey hat and start looking around. Just Bing it on Google. If you need to know how the ESXi servers connect to the NAS storage you can find that information here.

What about the VM’s?

OK, so you have this micro environment and we have to find a desktop image we can deploy on our ESXi servers to vMotion back and forth. I found Damn Small Linux (50MB) fits the bill. Get it and load it to the shared NFS storage and use vSphere to create new VM’s on each hypervisor.

You’ve been a good sport so far and I promise we are almost at the end of this exercise. I did this because I thought “I wonder what would happen if I installed VMware in Oracle Virtual Box?” Would it work?  Is it like mixing matter and anti-matter? You are about to find out.

We need to make some slight modifications to the ESXi hypervisors' network settings, so follow along:
In this diagram we launch the VMware vSphere Client and give the credentials for the vSphere appliance. Somehow mine is set to root/vmware. Then we click on each hypervisor and edit the networking settings.
Drilling down a little deeper, look for the properties and select the Management network (remember, this is for a LAB; in real life you would most likely do something else). Once there, click on the vMotion option to allow vMotion across the Management network.

BOOM! Use the vSphere to “MIGRATE” the DSL VM’s back and forth. Can you say Winner!!!

This is a very brief post about the workings of VMware. I found a ton of cool, free, online training here.

Play Nice, you’re on your way to becoming a VCP!!

I feel a disturbance….but this time it’s a good thing

In case you have been hiding in a wiring closet admiring the weave of the Cat 5/6 cabling the last few months, let me bring you up to date on a big announcement from Hewlett-Packard. SDN.

Here is a link to the page

In 3 days and about 4 hours, HP will officially launch the HP SDN App Store! This is a place where HP and 3rd party applications will be made available for use with the HP VAN SDN Controller. SDN applications can either run internally in the controller (reactive) or externally (proactive). These applications can easily be downloaded to your controller (reactive) or run alongside the controller (proactive).

This is great news because those of us who imagine we are monster DevOps mavens…I did say imagine…can create applications which, once accepted, can be accessed through the App Store. These applications can generate revenue for you.

So if you're a company in need of an SDN solution, you have a place to shop. If you're capable of creating your own application, you have a marketplace to sell your wares.

After all, selling SDN applications on the HP SDN App store is my retirement plan…;-)

Also: Good information over at the SDN community Discussion Boards here.

There is a new love in my life!

What can I say? I was with my long-time favorite Linux distro, Ubuntu, and we were having an argument. I wanted it to have the luscious Cinnamon interface, and it was telling me "I don't have support for that anymore". There was some initial shedding of tears, then I steeled myself and said "It's OK, at least I have MATE". Well, I looked up my old acquaintance MATE and it didn't take long until I remembered why I left in the first place.

I started staying up late at night, hitting the Googler, hoping I would find something new and refreshing. Then it happened: not only did I find something refreshing, but it was also minty! I had found something very exciting: Linux Mint.


I downloaded Linux Mint 17 and was instantly amazed by its good looks. We all know that looks can only go so far. So I took Mint out for a test drive and was completely blown away. The first thing I noticed was that the user interface was Cinnamon!!! WOOOT!

It’s the little things that make all the difference in the world. My scroll wheel on my mouse actually made the content on the screen scroll. What a concept! I was able to quickly search the network and mount my Synology NAS storage device. Lastly, I added my HP OfficeJet PRO8500A printer and it all worked flawlessly!

Just a couple quick commands on the command line….whhhaaaa? The command line windows are translucent? Out of the box? Just too cool……where was I…oh yes…command line…..I was able to get my L.A.M.P. server installed and with a quick "a2enmod cgi" I had CGI script execution working as well.

So, do yourself a favor and dump that old distro for something sleek and beautiful and very, very (user) friendly with a minty fresh taste (I couldn’t resist)!

HIP TIP-O-THE DAY: Head over to to find a boat load of “Free” virtual box VDI’s for your downloading pleasure.

Just don’t tell them I sent you!

Goodbye URLLIB2, I’m not going to miss you!

Hot on the trail of another monster chunk of code writing, I found I was stuck in a trap I made for myself. I was at the end of my understanding of Python, urllib2 and IMC eAPIs. I was trying to HTTP POST a chunk of XML into IMC's Configuration Template library. This was quite perplexing….I tried everything I could and no matter how I changed the programming, I would still get the dreaded 500 Internal Server error….you know what I'm talking about. I feel you cringing right now!

So, after about a billion Google searches I started seeing this stuff called Requests. Developed by a guy named Kenneth Reitz, it is my new favorite plaything.

Take a look at this sample of URLLIB2 code to get the POST working.

import urllib2
from cookielib import CookieJar

# Set up a cookie jar so we can capture the session cookie
cj = CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
resp =
print c_url
print cj._cookies
# get to the cookie
foo = cj._cookies['']['/imc/']['JSESSIONID'].value
foo1 = "JSESSIONID=%s" % (foo)
# Authenticate
authhandler = urllib2.HTTPDigestAuthHandler()
authhandler.add_password("iMC RESTful Web Services", server, user, passw)
opener = urllib2.build_opener(authhandler)
a2 = urllib2.install_opener(opener)

# Now build header to send HTTP POST for controller file
#agent = "Apache-Httpclient/4.1 (java 1.5)"
pagehandle = urllib2.Request(my_url, c_data)  # Adding data forces a POST
pagehandle.add_header('Content-Type', 'application/xml; charset=utf-8')
pagehandle.add_header('User-Agent', 'Apache-Httpclient/4.1 (java 1.5)')
pagehandle.add_header('Cookie', foo1)
c_result = urllib2.urlopen(pagehandle)

If you ask me, it was good for its time but very confusing with the openers and handlers.

Now here is the same code using requests:

import requests

s = requests.session() # This keeps the session open

# Cookie Factory
r = s.get(my_url)
cook = r.headers['set-cookie']
# Strip out the JSESSIONID
x1, x2, x3 = cook.split(';')
# Set up Authentication header info

# POST with requests (Probably don't need all these headers…but they don't hurt)

headers = {'Accept': 'application/xml', 'host': '', 'Content-Type': 'application/xml; charset=utf-8', 'Accept-encoding': 'application/xml', 'Connection': 'Keep-Alive', 'User-Agent': 'Apache-HttpClient/4.1 (java 1.5)', 'Cookie': x1, 'Cookie2': '$Version=1'}

# This sends the controller xml data to the IMC server

r =, data=c_data, auth=auth, headers=headers)

From here I can use r.headers and r.status_code, because everything the remote site sent back is in the variable "r".
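The only fiddly part left in either version is digging the JSESSIONID out of the Set-Cookie header. Here is that step pulled out into a small standalone sketch; the cookie value and header layout below are made up for illustration:

```python
def extract_jsessionid(set_cookie):
    """Return the 'JSESSIONID=...' name/value pair from a Set-Cookie
    header value, or None if it isn't present."""
    # Set-Cookie attributes are separated by semicolons
    for part in set_cookie.split(';'):
        part = part.strip()
        if part.startswith('JSESSIONID='):
            return part
    return None

# A made-up header value of the general shape a Java web app sends back
sample = 'JSESSIONID=abc123def456; Path=/imc/; HttpOnly'
print(extract_jsessionid(sample))  # JSESSIONID=abc123def456
```

This does the same job as the x1, x2, x3 = cook.split(';') line above, just without assuming the JSESSIONID is always the first piece of the header.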

Thank you Mr. Reitz!!!

P.S. In the end it was an XML tag that I had given the wrong name. The correct name was one thing and I had typed another… humbling!