API programming with Python for the HP VAN SDN Controller

Part Two: Navigating the HP VAN SDN API
DISCLAIMER: You can only trust a mad scientist so much!

OK, we have our API programming lab up and running and you can log into the HP VAN SDN Controller from your DevOps station. It is time to start exploring the APIs and what they have to offer.
For the most part, APIs are used in a couple of ways. They are the language of the web, and they are simple to use. There is the HTTP GET, which retrieves information from the web server; POST, which creates; and PUT, which, you guessed it, updates. With these few basic verbs we could create just about any C.R.U.D. (Create, Read, Update, Delete) application we want. But let's not get ahead of ourselves; first we want to find out what APIs are available. To do that we'll navigate to the RSdoc page of the controller. Thanks for thinking of us! The APIs are documented and interactive! Point the browser on the DevOps station to the address in the graphic below.
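If you'd rather poke at those verbs from Python instead of the browser, here is a minimal sketch using only the standard library. The URL and payload below are placeholders, not real controller endpoints; the X-Auth-Token header is where the HP VAN SDN Controller expects the token we'll fetch in a moment:

```python
import json
import urllib.request

def build_request(url, method="GET", payload=None, token=None):
    """Build one HTTP request: GET reads, POST creates, PUT updates, DELETE removes."""
    headers = {"Content-Type": "application/json"}
    if token:
        headers["X-Auth-Token"] = token  # where the controller wants the auth token
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(url, data=data, headers=headers, method=method)

# The C.R.U.D. verbs (placeholder URL, placeholder payloads):
read   = build_request("https://controller:8443/sdn/v2.0/example")
create = build_request("https://controller:8443/sdn/v2.0/example", "POST", {"name": "demo"})
update = build_request("https://controller:8443/sdn/v2.0/example", "PUT", {"name": "demo2"})
```

Pass any of these to `urllib.request.urlopen()` to actually fire them at a server.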
In order to get to all the goodies (APIs) we need to tell the controller that we are a "friendly" by entering the username and password. This part is a bit tricky: you will need to enter some JSON in the "auth" API and get a token. By default the HP VAN SDN Controller uses Keystone as its authentication mechanism. It will give you a token (good for 24 hours) to use in your communications. Click on the "auth" API and then on the green "Post" button. A box will appear asking for the JSON login string: {"login":{"user":"sdn","password":"skyline"}}. Then click the "Try it out" button and look for the token in the server response. Copy the token to the clipboard (double-click on it and copy).
Now look at the top of your screen for an Explore button. Just to the left of it is a box where you will paste the token. It is a long hex number that I usually just memorize. Once it is pasted in the box, click on the Explore button and you should get a response of 200 OK at the top center of the screen.
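If you'd rather grab the token from Python than from the RSdoc page, here's a sketch using only the standard library. The /sdn/v2.0/auth path and the {"record": {"token": ...}} response shape are my reading of the controller's REST documentation, so verify them against your own RSdoc:

```python
import json
import ssl
import urllib.request

def auth_body(user, password):
    """The JSON login string the auth API asks for."""
    return json.dumps({"login": {"user": user, "password": password}}).encode()

def extract_token(body):
    """Pull the token out of the auth response (assumed shape: {"record": {"token": ...}})."""
    return json.loads(body)["record"]["token"]

def get_token(controller_ip, user="sdn", password="skyline"):
    """POST the login JSON to the auth API and return the 24-hour token."""
    ctx = ssl._create_unverified_context()  # lab only: the controller uses a self-signed cert
    req = urllib.request.Request(
        f"https://{controller_ip}:8443/sdn/v2.0/auth",
        data=auth_body(user, password),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, context=ctx) as resp:
        return extract_token(resp.read())
```

Call `get_token("ip_address_of_sdn_controller")` and you have the same token you pasted into the Explore box.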
Before we start looking around you might want to know that switches managed by the HP VAN SDN Controller are called dpids, short for Datapath Identifiers. They look like they impart no knowledge, but they do…just look at this.
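Here's one piece of knowledge a dpid does impart: in OpenFlow the identifier is 64 bits, and by convention the lower 48 bits are the switch's MAC address (the upper 16 are left to the implementer). A tiny helper can tease that out; the colon-separated 8-octet string format is an assumption about how the controller prints dpids:

```python
def dpid_to_mac(dpid):
    """Return the MAC hiding in a dpid: the lower 48 bits (last 6 octets) of the 64-bit ID."""
    octets = dpid.split(":")
    if len(octets) != 8:
        raise ValueError("expected an 8-octet dpid like 00:00:aa:bb:cc:dd:ee:ff")
    return ":".join(octets[2:])  # drop the 2 implementer-defined octets, keep the MAC
```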
So some of the APIs will ask for a specific dpid. Just use the datapaths API without any options to see what the controller knows about dpids.
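As a sketch of what that call looks like from Python: the /sdn/v2.0/net/datapaths path and the {"datapaths": [...]} response shape are assumptions based on the RSdoc, so check them against your controller before relying on them:

```python
import json
import ssl
import urllib.request

def extract_dpids(body):
    """Pick the dpid out of each entry (assumed shape: {"datapaths": [{"dpid": ...}, ...]})."""
    return [dp["dpid"] for dp in json.loads(body).get("datapaths", [])]

def list_datapaths(controller_ip, token):
    """GET the datapaths API with the token in the X-Auth-Token header."""
    ctx = ssl._create_unverified_context()  # lab only: self-signed cert
    req = urllib.request.Request(
        f"https://{controller_ip}:8443/sdn/v2.0/net/datapaths",
        headers={"X-Auth-Token": token})
    with urllib.request.urlopen(req, context=ctx) as resp:
        return extract_dpids(resp.read())
```

Until Mininet is running (below), expect this to come back empty.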
Now we are ready to click on the other APIs and test drive them.
Now for a little bad news…currently the HP VAN SDN Controller is not aware of any network devices (dpids), so your exploration is a bit boring. We need to fire up a test network with a tool called Mininet. To do this, look on the desktop of the DevOps station. You will see a notepad file containing the Mininet start command. Double-click the file and it will open in the default editor. Look for the IP address (should be xx.xx.xx.xx) and change it to match the IP address of the HP VAN SDN Controller. Open up a MATE terminal, paste the command, and hit enter.
You will see Mininet start up and load the hosts and switches. It will finish at a mininet prompt. At the prompt, enter the command pingall to generate some interesting traffic.
Now go back and start looking around for nodes, datapaths and links. You will start seeing some interesting and familiar things like IP addresses, MAC addresses and switch port numbers. Get familiar with these APIs and what they are capable of. We will be writing Python applications to take advantage of them in the next post.
Happy Exploring!

API programming with Python for the HP VAN SDN Controller

Part One: Getting your API lab up and running
DISCLAIMER: You can only trust a mad scientist so much!

If you’re an occasional visitor to this blog then you will know that I have a “thing” for APIs, or Application Program Interfaces. Seems everybody has one these days, so why not learn how to take advantage of them? Before we get started we will need a test lab where we can try out our creations without bringing down the production network, a career limiting move (CLM).
So without spending any money, you can build a virtualized lab to run a couple of virtual machines right on your laptop (I recommend a minimum of 8 GB of RAM). You will need to run two VMs. One will be the HP VAN SDN Controller and the other will be a L.A.M.P. (Linux, Apache, MySQL and PHP) server.
Luckily, I have these two VMs prebuilt and sitting on my FTP server in a huge zip file with lots of other goodies. The following picture shows the FTP location and credentials. Once you get this downloaded (sorry if you’re still using 56k dial-up) we can take the next step to getting the lab up and running.
OK, got the zip file, check….It has been extracted to some folder on your computer, check….you have identified the two “OVA” files we will import into Oracle VirtualBox (“HP VAN SDN Controller 2.3.5.ova” and “My New DevOps Box.ova”), check…..You have Virtualization Technology enabled on your PC…….do it now!!!!…..check…..You have the latest copy of Oracle VirtualBox downloaded and installed……OK, I’ll wait…….you’d think you would have done this by now…OK…check. We are ready to import the OVAs into Oracle VirtualBox (OVB).
Open OVB and from the main menu select File>>Import……check!
Once you have selected the VM to import, click on NEXT and IMPORT…just a few secs to wait here.
Now we have just a few changes to make to the VM settings to get this VM to be a part of our network. We will set the NIC (Network Interface Card) to bridged mode. This is handy if you are running DHCP on the host network. You will do this for both virtual machines. The following picture will guide you through this process.
Now that we have the NIC configured properly we can start up the virtual machines and log in and take a look around. You will repeat the process depicted with both VM’s.
Now for a little bad news…the VMs that you just started are Ubuntu Linux distros, one with a GUI and the other without. The HP VAN SDN Controller will have a user of sdn and a password of skyline, and the LAMP server will have a user of rick with a password of siesta3.
You will need a bare minimum of Linux chops to complete this tutorial, so here they are.
I will leave you here at this spot so you can practice your Linux commands. On the desktop of the LAMP server is an icon for a MATE terminal. Double-click it and a terminal window will appear. Enter the command ifconfig, look at the IP address your LAMP server is assigned, and write it down. On the SDN controller there is no GUI, so from the initial prompt enter ifconfig to see what IP address it was assigned and write it down.
Finally, to check to see if your SDN controller is operational, let’s go to the LAMP server and, from the application menu, slide down to Internet and then over to Chromium Web Browser. Once the browser launches, enter https://ip_address_of_sdn_controller:8443/sdn/ui. It should look like the picture below.
Enter the credentials from above and navigate some of the menus. You are now ready to take the next step…stay tuned.

Around the world with Docker!

DISCLAIMER: Not intended for production use! 


UPDATE 12102015 – I changed the dashes to regular dashes so if you copy and paste commands they will work!

You could probably tell from my last three posts that I have been experimenting with Docker. If you have no clue what I’m talking about then see below! If you haven’t spent the past few months of your life admiring the weave of the cat-5 cabling in the IDF then let’s proceed.

I like it easy. Putting things in Docker containers certainly makes life easy. It is even easier when you make Docker images out of Dockerfiles. I had a dream to have a DevOps platform that I could have everything I needed to create SDN applications all in one system and be able to deploy an identical platform in several minutes. Things like a HP VAN SDN controller, OpenDayLight controller, a LAMP server and possibly a mininet application to generate some flows.

If you have been following along with the last blog posts you just might have an Ubuntu:Mate platform with the Docker daemon running. Wouldn’t it be great to say “docker pull xod442/macfind3” and have a LAMP server that you can start using in 5 minutes? How about “docker pull xod442/van” to have the HP VAN SDN controller at your disposal? Go ahead and do it. They are both waiting for you. I wish it had been that easy for me. I had to go around the world.

I started looking around Dockerhub and found the OpenDayLight SDN controller in a docker container (docker pull raseel/opendaylight_base). The docker image is quickly downloaded and brought up with the command "docker run -d -p 8000:8000 -p 8080:8080 -p 6633:6633 -p 1088:1088 -p 2400:2400 raseel/opendaylight_base /opt/opendaylight/run.sh". Point your browser to the docker host's IP on port 8080 and log in with the default credentials of admin/admin. Very easy! I wanted it to be that easy for HP customers to get the HP VAN SDN controller, but I didn't have a clue how to do it.

I started with the installation instructions. A few sudo apt-get installs, a Debian file to unpack, a couple of shell scripts to run, Keystone users to build. I was in over my head and needed to call in the professionals. My first call for help was someplace near London, England. A good friend and mentor who works for Docker, Dave, told me I would have to learn about something called supervisord. Lots of fascinating things over on his blog at: http://dtucker.co.uk/.

In a nutshell, supervisord is like systemd: it stops and starts services and scripts. There were a few late nights learning how to use it, but in the end it is not that difficult. Stay tuned for the blog on supervisord. Next I wanted to learn a little more about the startup process for the controller and which directories things are stored in. I called another pro in the Bay Area who basically wrote the book on SDN. http://www.amazon.com/Software-Defined-Networks-Comprehensive-Approach/dp/012416675X. Chuck gave me some awesome information, and it started me down another path of learning and exploration that led me right into CoreOS!

Hit the brakes….stop everything…..if you don’t know about CoreOS then get to the googeler quick! CoreOS is a lightweight operating system that is designed a lot like Chrome OS. It has an A and a B side for booting. While you’re up and running on the A side, the B side is updating. A reboot puts you on the B side while the A side updates. BOOM! Mind blown! When CoreOS boots up, IT IS DOCKER READY!!! More in the CoreOS blog later. If you can’t wait then look at this: https://www.youtube.com/watch?v=vy6hWsOuCh8. Another great thing about CoreOS is that it is designed from the ground up to be deployed in clusters and managed by etcd. I know, I had to run out and build one right away. This stuff is exciting!

Back to the SDN controller in the container. Another call to the Bay revealed another mastermind, Juliano Vacaro, with R&D in Brazil. This is where I struck pure gold. It turns out that Juliano and his team have built the HP VAN SDN controller in a container. I could most likely have pulled it and my adventure would have been over, but I don’t like taking shortcuts and I wanted to learn. Juliano shared with me some examples of Dockerfiles and supervisord.conf. They do things just a bit differently and run the SDN controller separate from the Keystone server. I wanted it all in one docker image to make it very easy for customers to pull it and start running without having to link containers together (yes, you can do that).

In the end, it was building the Dockerfile (a script that tells docker how to build an image) that finally did the trick. Here are the contents of the Dockerfile.


# swap 14.04 for 12.04 for a Precise build
FROM ubuntu:14.04

MAINTAINER Rick Kauffman <chewie@hp.com>

RUN apt-get update && apt-get install --no-install-recommends -y \
curl \
iptables \
iputils-arping \
net-tools \
ntp \
openjdk-7-jre-headless \
postgresql \
postgresql-client \
sudo \
supervisor \
software-properties-common \
ubuntu-cloud-keyring

RUN rm -rf /var/lib/apt/lists/*

# Now add Keystone
RUN apt-get install --no-install-recommends -y ubuntu-cloud-keyring \
&& echo 'deb http://ubuntu-cloud.archive.canonical.com/ubuntu trusty-updates/juno main' >>/etc/apt/sources.list \
&& apt-get update \
&& apt-get install --no-install-recommends -y keystone

RUN rm -rf /var/lib/apt/lists/*

# Run the Keystone setup script
COPY ./setup-ks.sh /
RUN ./setup-ks.sh

RUN echo '* Allowing external access to postgres database' \
&& sed -i -- 's/host all sdn\/32 trust/host all sdn\/32 trust\nhost all sdn\/0 trust/' /etc/postgresql/9.3/main/pg_hba.conf \
&& sed -i -- "s/#listen_addresses = 'localhost'/listen_addresses = '*'/" /etc/postgresql/9.3/main/postgresql.conf
COPY ./hp-sdn-ctl_2.4.6.0627_amd64.deb /home/hp-sdn-ctl.deb
COPY ./supervisord.conf /etc/supervisor/conf.d/supervisord.conf

COPY ./run.sh /
EXPOSE 5000 35357 8443 6633
ENTRYPOINT ["/run.sh"]


I needed a run.sh and a setup-ks.sh script along with the supervisord.conf file. Put all these files in a directory on a docker server, along with the Debian package, and issue docker build -t "xod442/van" . <– The dot at the end of this command will mess you up if you omit it. Docker then reads the Dockerfile and creates the image. You can run the Dockerfile over and over and it will produce the same exact image.

My trip around the world was fun and exciting (read: too many late nights in the lab) and I must say all the great people who helped me out are absolutely amazing, I cannot thank you enough. One thing for sure is I have an abundant amount of new topics to blog about. Stay tuned!

Now it is no longer necessary to stumble around getting your DevOps platform up and running. Get a docker server and start pulling!

Two commands to LAMP

docker pull xod442/macfind3
docker run -d -p 80:80 xod442/macfind3 /usr/sbin/apache2ctl -D FOREGROUND

URL http://ip_address_of_docker_server

Two commands to get your HP VAN SDN Controller!

docker pull xod442/van
docker run --privileged=true -d -p 8443:8443 -p 6633:6633 xod442/van /etc/supervisor/supervisord.conf
(The above two lines are actually one command)

URL https://ip_address_of_docker_server:8443/sdn/ui


Hit me up if you want to know more! I like to share!

Docker survival kit

WARNING!!!!!! Straight from the Mad Scientist!!

Part 3

Let’s finish this up! One of the biggest things that tripped me up while learning Docker was that when you use $ sudo docker run -i -t xod442/lamp /bin/bash to get a terminal session running on a docker image, you spawn a new container ID. THE CHANGES YOU ARE MAKING DO NOT AFFECT THE ORIGINAL DOCKER IMAGE!!!! They are only relevant to the container ID you are working in. Once you are finished with the changes to the container, you will need to commit them to a NEW docker image: $ sudo docker commit 90934ee6cf3f xod442/new_image_name. This is a bit tricky at first, but once the light bulb comes on you’ll think you’re a freaking genius!

Now let’s say the docker image we created is a LAMP server. We want to run the LAMP server and have it stay up until we decide to stop it. I found this command works well: $ sudo docker run -d -p 80:80 xod442/macfind /usr/sbin/apache2ctl -D FOREGROUND. In this command we are mapping port 80 on the docker host to port 80 in the container. To test if your LAMP is up, point a browser to http://dockerhost (use the IP address of your docker host).

Another way to verify that our LAMP server is up and running is to look at the docker processes. $ sudo docker ps -a will display all the containers we have ever started and what their operational state is. In the diagram below you can see that container 90934ee6cf3f is UP and running on port 80 and 5a52ff424b65 exited about an hour ago.


Have you noticed the names? Like cocky_brattain? If you don’t specify a name when running or starting a container, docker will make one up. You will notice each one is unique to your host. You can use your own names with $ sudo docker run --name (containerName) -i -t ubuntu /bin/bash. Now when you look at the docker processes, you can easily identify your container from the others.

Finally here is a short list of commands that I use often. Copy them down and make your own docker cheat sheet.

sudo docker run --name (containerName) -i -t ubuntu /bin/bash
– Starts a docker container, gives it a name, pulls ubuntu from dockerhub, loads it into the container and offers the bash prompt.

exit – exits the container

sudo docker ps -a – Shows what containers are active and recently stopped. Here you can find the container ID

sudo docker start (containerId) – Starts the container

sudo docker attach lampster – attaches to the console of the container by name

sudo docker exec -i -t containerid bash – gives you bash on a running container

sudo docker rm $(sudo docker ps -a -q) – Removes all containers from your workspace (Danger Will Robinson!!)

sudo docker rmi $(sudo docker images -q) – Removes all images from work space (Danger Will Robinson!!)

sudo docker login – Allows you to login to dockerhub

sudo docker search (Keyword) – Allows you to search the dockerhub for pre-built container

sudo docker pull (owner/ImageName) – Get container from dockerhub

sudo docker commit (containerId) (owner/ImageName) – Builds a new Image from a container

sudo docker push (owner/ImageName) – Put Images on your dockerhub space

Hopefully this three-part blog has stirred up some interest in diving into the world of containerization. It is by far only a limited look into this technology, and I urge you to set up your own docker workstation and explore!

Finally, there is talk from Microsoft about working with Docker and implementing containers in Windows. When this becomes pervasive, keep in mind that if you build a docker container on a Windows platform, it will not be able to run on top of a Linux docker server. Kind of goes without saying……but there are those of you reading this now who are not so strong with the force!! You know who you are!

Diving Into Containers

WARNING!!!!!! Straight from the Mad Scientist!!

Part 2

No, we’re not talking about dumpster diving….but close. I don’t know about you, but I am a conceptual thinker. Give me the “headlines” (and Google) and I can usually figure things out. If you’re not wired that way, then along with the book mentioned in Part 1, you can also find some awesome documentation over at docker.com.

OK, Part 1 left you with some homework. Did you set up your github and dockerhub accounts?…….OK….I’ll wait……go do it now!!!!

Conceptually, docker has three basic parts: Docker images (plenty to pull off of dockerhub.com), the Docker client (the docker command) and the docker host (server) running the docker daemon. Most of the interaction is with the host using the client commands. Below you will see a simple model of docker.


At this point you should have a new Ubuntu:Mate workstation and the docker daemon installed. If you use the $ sudo docker images command you should see a few local images. These were pulled down from dockerhub when we tested the docker install in Part 1. Here is an example of my current system.


You will probably notice the xod442; this is my dockerhub account name, followed by a slash and the name of the Docker image. If you want to remove one image you can use the $ sudo docker rmi (image_name) command. If you would like a clean slate and want to delete all of the images, you can use $ sudo docker rmi $(sudo docker images -q) and say goodbye to all your images. I used this several times in my learning process.

We briefly touched on the command to get us into a new container. We used $ sudo docker run -i -t ubuntu /bin/bash. The /bin/bash tells docker to do something and keep the container running. Without it the container would start and stop very quickly. You can use the $ sudo docker ps -a command to see the status of all containers. Without the -a option the command only shows running containers. In the graphic below I show the commands to start a new container and break down what is happening. I also show how to get out of a container (exit) and how to commit the changes that you make to a container, creating a new docker image (this is the point where the dockerhub account is going to come in handy).


Here is a diagram of the process. Using the docker run to initialize a new container, adding some extra love to it and committing to a new docker image. They say a picture is worth a 1000 words but this one is most likely 385. Remember concepts/headlines only!!!


I think this is a good breaking point. I urge you to go out to dockerhub and browse some of the pre-built images. No need to reinvent the wheel; there are plenty of toys to play with. One last tip before we sign off: use $ sudo docker search (keyword) to look for specific images you might be interested in. You just might find what you are looking for. Finally, if you want to get something you find on the dockerhub site, use the $ sudo docker pull (user-name/image_name) command to pull it down to your docker host.

Part 3 will be a docker survival kit!

I can hardly contain myself!

WARNING!!!!!! Straight from the Mad Scientist!!

Part 1

Curiosity and need often go hand in hand. When you know nothing about something, it’s best to start reading. Here is “The Docker Book” by James Turnbull. A perfect learner’s guide.

This blog is an effort to condense this information and help you get past a few wookie traps.

OK, first things first. What is Docker and why do you care? Well, I think of Docker as a multiplexer for the operating system, as opposed to VMware’s hypervisor acting as a multiplexer for the hardware.

Here is a diagram of the basic differences between virtualization and containers. When you develop an application, it has dependencies on certain libraries and binaries (files we don’t often think about). If we are developing this on a VM in VMware, the app is dependent on certain files in that particular operating system. So if I zip up the APP files and send them to someone on another VM, the APP might not run. The only way to guarantee the APP works correctly is to send the entire virtual machine. Docker builds and manages containers. Every dependent file needed for the APP to run properly is packaged in a very small file called a container. As long as you load the container on a similar docker host, the APP will run perfectly.


Let’s get started; we will need a workstation to turn into a Docker platform. I am a self-confessed VirtualBox user. I could talk about why, but it would just be boring and not any fun. So fire up a new image of Ubuntu. Just found this and I have to admit….it’s pretty nice. Just take a look!


Install Docker:

Installing Docker is straightforward.
Open a terminal window and at the command prompt enter:
sudo apt-get update
sudo apt-get install docker.io

Make sure it installed properly by launching a new container:
sudo docker run -i -t ubuntu /bin/bash

You should now see a new bash prompt: root@c0679a7f6d84:/#
If this is what you see then you are in a new container. Congratulations!

UP NEXT!!!! Working with containers. Do yourself a favor and signup for free accounts on Github and Dockerhub…you’re going to need them!

A box inside a box inside a box?

Starting out in a new job, I find myself needing to know way more about VMware than I do now. Luckily, I have not been living under a rock and I know what VMware is. In a very small nutshell, VMware is a virtualization technology that uses hypervisors to basically multiplex the underlying hardware to many virtual machines. Multiple hypervisors are managed by VMware vCenter Server (individual hypervisors can be managed by the vSphere Client, more on that later). I’m thinking more like a pistachio nutshell.

I recently acquired a new laptop with 16 GB of RAM and have gone a little crazy building virtual machines in Oracle VirtualBox without really having a need for VMware products. Life comes at you fast, and you need to learn to adapt or you will no longer be relevant. With a little creative thinking I found a way to build a complete VMware environment with two hypervisors, a vSphere appliance and a couple of real VMs to vMotion back and forth. Big thanks to sysAdmGirl….she rocks!

Here is a picture of the logical lab environment. Keep in mind there are only two physical devices. The laptop and the Synology data store.


First things first, you will need to get a copy of Oracle Virtual Box and shutdown anything that is taking up extra RAM on your system, yes Chris, that means you’ll have to shut down TweetDeck as well!

You will see from the diagram that the three Oracle VBs will each have 4 GB of RAM, a 10 GB hard disk and 2 processors. Follow the links to the ESXi hypervisor (an ISO file), download it, and while you are on VMware’s website get the vSphere OVA appliance. Two of the Oracle VirtualBox VMs will be made using the Oracle VirtualBox interface to create new VMs; make sure to set the network interface cards to “bridged” mode. For the third (vSphere) you will just need to double-click the OVA file and it will import into VirtualBox.

When they are all installed and running it will look like this.

ALERT!!!!! Pay attention here!!!
When you look at the vSphere appliance it will say to point your browser to https://some_IP_address:5480. When you do, you will see something that looks like this:


You are probably thinking, where do I import the ESXi servers?…That’s what I thought too. This screen is for configuring the vSphere appliance with single sign-on and database storage locations. These are not the droids you are looking for. Drop the port 5480 from your URL and you will be presented with the vSphere web client interface.

The VMware vSphere Web Client is a newer interface compared to the VMware vSphere Client (the old-school client). The VMware vSphere Client is the same tool used to manage a single ESXi hypervisor as well as vSphere; you can find it on the VMware site as well. Once it’s installed, just feed it the IP address of your vSphere appliance (minus the port 5480) and off you go!

Alright, now you should have the three VM’s up and running. You will need to create a common data store that is running NFS. I used my Synology Network attached storage device. Find something you can use and figure out how to make it appear as NAS on your lab network. Unfortunately, I don’t know what you will use, so you will have to put on your little grey hat and start looking around. Just Bing it on Google. If you need to know how the ESXi servers connect to the NAS storage you can find that information Here.

What about the VM’s?

OK, so you have this micro environment and we have to find a desktop image we can deploy on our ESXi servers to vMotion back and forth. I found Damn Small Linux (50MB) fits the bill. Get it and load it to the shared NFS storage and use vSphere to create new VM’s on each hypervisor.

You’ve been a good sport so far and I promise we are almost at the end of this exercise. I did this because I thought “I wonder what would happen if I installed VMware in Oracle Virtual Box?” Would it work?  Is it like mixing matter and anti-matter? You are about to find out.

We need to make some slight modifications to the ESXi hypervisors’ network settings, so follow along:
In this diagram we launch the VMware vSphere Client and give the credentials for the vSphere appliance. Somehow mine is set to root/vmware. Then we click on each hypervisor and edit the networking settings.
Drilling down a little deeper, look for the properties and select the Management network (remember, this is for a LAB, in real life you would most likely do something else). Once there, click on the vMotion option to allow vMotion across the Management network.

BOOM! Use the vSphere to “MIGRATE” the DSL VM’s back and forth. Can you say Winner!!!

This is a very brief post about the workings of VMware. I found a ton of cool, free, online training Here at VMware.com

Play Nice, you’re on your way to becoming a VCP!!

I feel a disturbance….but this time it’s a good thing

In case you have been hiding in a wiring closet admiring the weave of the Cat 5/6 cabling the last few months, let me bring you up to date on a big announcement from Hewlett-Packard. SDN.

Here is a link to the page

In 3 days and about 4 hours, HP will officially launch the HP SDN App Store! This is a place where HP and third-party applications will be made available for use with the HP VAN SDN Controller. SDN applications can either run internally in the controller (reactive) or externally (proactive), and can easily be downloaded to your controller or run alongside it.

This is great news because those of us who imagine we are monster DevOps mavens…I did say imagine…can create applications which, once accepted, can be accessed through the App Store. These applications can generate revenue for you.

So if you’re a company in need of an SDN solution, you have a place to shop. If you’re capable of creating your own application, you have a marketplace to sell your wares.

After all, selling SDN applications on the HP SDN App store is my retirement plan…;-)

Also: good information over at the SDN community Discussion Boards Here