Beginner’s guide to the Universe and Cloud Vision Portal

Part One: Get that switch in the game!

DISCLAIMER: You can only trust a mad scientist so much! 

Note for all the fans: I have not posted in quite a while. I was doing some soul searching and I cut the soles off of my shoes, learned to play the flute and lived in a tree. It took a while but I finally realized that life was not worth living if I couldn’t blog and look at facebook. So, I’m back!….and the crowd went wild!

OK, so you have heard the latest word on the street that HPE will start reselling Arista Networks. Yes, it’s true. That means we get to start experimenting with new things here in the laboratory. You might be of the mindset that a switch is a switch but Arista switches take it a bit further.

Arista keeps the “state” of the switch in something called SysDB. It’s a database that keeps all the state information for all of the switch’s processes. If the OSPF process hangs, you simply restart it and it loads the last known state from SysDB, which greatly reduces network outages.

Now if you kick it up a notch, all the Arista switches share their state information with Arista Cloud Vision Portal (CVP). CVP uses something called NetDB to keep the state of the entire network! CVP now gives us a single place where we can apply “APIs”, yes, I said it! API….you knew I wouldn’t go a whole blog post without mentioning it! The APIs allow the entire configuration of the network to be manipulated and automated any way you want it. I did a little pimping out of the login screen, this is what it looks like.


Let’s login and start working with CVP. Here is what waits for you upon successful authentication.


As Jeff Probst would say, “First things first!” There are a number of things to click on and look around, but in order to do anything substantial we need to add some switches to CVP. CVP uses a concept of configlets, small portions of a config file that get applied to hierarchical containers (not Docker) and as you go down the tree, additional configlets are combined to make the final configuration. Let’s take a closer look. I will log into a vEOS switch running in Oracle Virtualbox (didn’t think I wouldn’t mention that either???). Here is spine01.


This switch is fully configured. Even so, it was missing some key information. Now pay attention! You are going to miss the most important part. CVP will not be able to manage any switch without the correct credentials. This means the credz that you use to log into CVP must be on the switch as well…….all you ZTP heads don’t blow a cork just yet, we’ll get to that in another blog, stay tuned….so, just for now, we will add the username to the switch, manually.

HOLD THE PHONE!!!!!!……I just added the username and it was the exact same commands as a Cisco switch! Yes, you are correct! You do not need to learn a new CLI; for the most part it is exactly like Cisco.

Configuring the switch

conf t (you don’t actually need the “t”)
username cvpadmin privilege 15 role network-admin secret mypassword
Ctrl-Z (I’m old school)
wr mem (Told you…write to memory)

Now we have the credentials saved to the configuration, and it’s VERY important that these are the same as the ones used to log into CVP.  We also need to tell the switch that it will be managed by……..wait for it………APIs!!

conf (not using the “t” this time because I’m lazy)
management api http-commands
cors allowed-origin all
no shutdown
wr mem
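With eAPI enabled, the switch will answer plain JSON-RPC over HTTPS. Here is a minimal Python sketch of what that looks like — the management IP is a placeholder, and I’m using stdlib urllib just to keep it self-contained (Arista also publishes an official client library, pyeapi, which is the easier road in real life):

```python
import json
import urllib.request

def eapi_payload(cmds, req_id="1"):
    """Build the JSON-RPC 2.0 body that Arista eAPI's runCmds method expects."""
    return {
        "jsonrpc": "2.0",
        "method": "runCmds",
        "params": {"version": 1, "cmds": cmds, "format": "json"},
        "id": req_id,
    }

if __name__ == "__main__":
    # Placeholder management address -- substitute your switch's IP and add
    # HTTP basic auth with the cvpadmin credentials configured above.
    url = "https://192.0.2.10/command-api"
    body = json.dumps(eapi_payload(["show version"])).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    # urllib.request.urlopen(req) would return the JSON-RPC result here.
```

Same idea as the CLI, just wrapped in JSON — which is exactly why CVP can automate the whole network.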

The only thing we are missing is assigning an IP address…..(be quiet, you ZTPheads!) to the management interface.

interface management 1
ip address
ip route
wr mem

Looking good. Consider this the very basic configuration needed by CVP to manage the device. We will build a configlet to ensure that as well.

Adding the switch to CVP

It’s been all fun and games up to this point, but here I got burned pretty badly. Let’s be honest, there was nothing pretty about it. There are a couple of ways to get a running switch into CVP. You can add them one at a time or by using the bulk import method. We’ll look at both options but first we are going to guarantee that all of the devices we import will have a minimum configlet applied.

Log into CVP and look at the main screen (pictured above) and click on the Configlets button. Your list will be empty if you’re just starting out. Look at the top right of the screen and you will see a plus “+” sign with a down arrow….click on it and select configlets.

At the bottom of the page click on Save.

Now we need to apply the configlet to the uppermost container in our tree and I know I will have to have a discussion about containers right about now!



It’s a far flung organization we have over here at Wookieware, so we approached the container tree with a little foresight into what will happen and what could happen. Security is always a concern, so we wanted to have a container just for security ACLs, otherwise known as access control lists, and yes, I do know my ACL from a hole in the ground, but I digress. Here is what our tree looks like.


We created a top level container called Wookieware. After that we have our security container called Sector 5. This could be some geographic designation, but it is just a place where the security folks can add a configlet of an ACL and have it trickle down to the lower levels where the switches will inherit it. Moving down the tree we have our data center container dc1 and after that the different pod containers. Finally, a little further division by switch role: spine or leaf.

It’s important to note that configlets can be deployed in any container. The top level will have MOTD, Triple-A (AAA), and basic routing. The switches will have configlets that pertain to just local configurations like IP addresses of interfaces, autonomous system numbers, loopback IP addresses and hostnames.

The implied idea is that as we move down the tree the switch picks up the configlet in each container (and software image) and by the time we arrive at the switch most of the configuration is complete. So take a little time here and build out what you want your tree to look like.


But what about the switches!

Let’s go back to the main menu and look at the Inventory. At the top left you will see a grid icon. Click on it and it will give you shortcuts to where you want to go.


Choose Inventory and it will take you to your CVP inventory page. From here (your list should be empty if you’re just starting out) look at the top right and click on the plus “+” sign.

We can add switches one at a time…BORING!!…..but let’s have a look. The radio button defaults to Add Device. In the Host Name / IP box we will put the IP address of the switch and navigate down to the container where we want to place our switch. At this point you need to have a long think about what you are doing. If your switch is fully configured then I wouldn’t recommend adding it to the spot where you want it. Perhaps it’s better to have a container for all new switches and import to that first. Then make sure you have the necessary configlets you desire. It’s up to you. Think!

I don’t like to think a lot so for the purposes of this blog I will choose the final container x-spines and click on the add button next to the Host Name /IP address.


Now watch out! Here is where it gets a little funky. I don’t know if this is browser related, but as long as the screen says Connecting, the switch will not be added to the inventory when you select the Save button at the bottom of the screen.


Look at the top right of the screen again and locate the refresh icon. Click it. Click it. Click it. Keep clicking on it until the switch says connected.


The switch is now in the inventory and is inheriting any configlets you have in your container tree. Click on the Save button at the bottom of the screen to return to the inventory home page.


Bulk importing switches

The same result can be obtained by using the Import Device function. From the inventory home screen click on the plus “+” sign at the top right of your screen, and this time we will choose Import Device. If you look just below the Validate button you will see a link to download a sample CSV file. Download it and we will quickly fill it out.


Open up the CSV file in Excel and look at the information we need. Below is a sample of the one I used to populate my system.
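If you are generating that CSV for a pile of switches, a few lines of Python will do it. Fair warning: the column headers below are my own placeholders, not gospel — match them to whatever the sample file you just downloaded from CVP actually contains.

```python
import csv

# Hypothetical column headers -- check them against CVP's sample CSV file.
FIELDS = ["ip_address", "container"]

def write_inventory(path, devices):
    """Write a bulk-import CSV; devices is a list of (ip, container) tuples."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(FIELDS)
        writer.writerows(devices)

write_inventory("bulk_import.csv", [
    ("", "x-spines"),
    ("", "x-spines"),
])
```

Swap in your real management IPs and container names, then feed the file to the Import Device screen.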


The same thing applies to the bulk loader. Keep refreshing your screen until you see all the switches Connected. Then hit the Save button at the bottom of your screen.

Next up…Part Two….working with configlets.

If you’re new to Arista then look at this: Arista Warrior. It’s a bit dated but still an excellent resource.

A Brief diversion

Console 2.0: a limited operator’s interface to IMC.

OK, sorry for the brief diversion from the python blog, but something came up and I wanted to share. We all know the power of APIs; I have been evangelizing this for quite some time now. Here is an example of APIs in action. I have a customer that uses IMC and likes it a lot. The main trouble he is having is that he cannot lock down IMC far enough to keep the noob console operators out of trouble. Things like putting a port in the down state and having it turn out to be an uplink port. So with some python code, HTML5, IMC APIs and containers…can’t forget those, I put together a solution. Originally this was implemented on a WAMP server, so if you have a Windows only policy, this solution will work for you as well (You will need my Win2008R2 OVA file…email me).

I give you Console 2.0…in a docker container. Lately, it seems I’m ending a lot of conversations with …”in a docker container” …….I’m kind of like that guy with a jalapeno on a stick!
This solution allows multiple users to find MAC or IP addresses, up/down front facing ports only, change VLANs on interfaces and add VoIP configuration to switchports. I added a few other handy things to have and finally an option that will blast the VoIP port back to factory defaults.
This is available on my Dockerhub account at:
To get the docker image you will issue this command from a docker server: sudo docker pull xod442/console2
To get it to run:
sudo docker run -d -p 80:80 xod442/console2 /usr/sbin/apache2ctl -D FOREGROUND
Point a browser to the docker server’s IP address on port 80. Away you go!

API programming with python for HP VAN SDN Controller

Part Two: Navigating the HP VAN SDN API
DISCLAIMER: You can only trust a mad scientist so much!

OK, we have our API programming lab up and running. You can log into the HP VAN SDN controller from your DevOps station. It is time to start exploring the API’s and what they have to offer.
For the most part APIs will be used in a couple of ways. It is the language of the web, and APIs are simple to use. There is HTTP GET, which collects information from the web server; POST, which creates; and PUT, which, you guessed it, updates. With these few basic commands we could create just about any C.R.U.D. application we want. Let’s not get ahead of ourselves; first we want to find out what APIs are available. To do that we will navigate to the RSdoc page of the controller. Thanks for thinking of us! The APIs are documented and interactive! Point the browser on the DevOps station to the address in the graphic below.
In order to get to all the goodies (APIs) we need to tell the controller that we are a “friendly” by entering the username and password. This part is a bit tricky: you will need to enter some JSON in the “auth” API and get a token. By default the HP VAN SDN Controller uses keystone as an authentication mechanism. It will give you a token (good for 24 hours) to use in your communications. Click on the “auth” API and then on the green “Post” button. A box will appear asking for the JSON login string: {"login":{"user":"sdn","password":"skyline"}}. Then click the “Try it out” button. Look for the token in the server response. Copy the token to the clipboard (double-click on it and copy).
Now look at the top of your screen for an Explore button. Just to the left of it is a box where you will paste the token. It is a long hex number that I usually just memorize. Once it is pasted in the box, click on the Explore button and you should get a response of 200 OK at the top center of the screen.
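If you would rather script the token dance than click through RSdoc, a Python sketch like this works. I’m assuming the v2.0 auth path and that the token comes back inside a “record” object — verify both against your controller’s RSdoc before trusting me:

```python
import json
import urllib.request

def auth_body(user, password):
    """Build the JSON login string the controller's auth API expects."""
    return json.dumps({"login": {"user": user, "password": password}}).encode()

def extract_token(response_json):
    """Pull the 24-hour keystone token out of the auth response."""
    return response_json["record"]["token"]

if __name__ == "__main__":
    # Placeholder address -- use your HP VAN SDN controller's IP.
    url = "https://192.0.2.1:8443/sdn/v2.0/auth"
    req = urllib.request.Request(
        url,
        data=auth_body("sdn", "skyline"),
        headers={"Content-Type": "application/json"},
    )
    # token = extract_token(json.load(urllib.request.urlopen(req)))
```

No more memorizing long hex numbers.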
Before we start looking around you might want to know that switches managed by the HP VAN SDN controller are called dpids, Datapath Identifiers. They look like they impart no knowledge but they do…just look at this.
So some of the APIs will ask for a specific dpid. Just use the datapaths API without any options to see what the controller knows about dpids.
Now we are ready to click on the other API’s and test drive them.
Now for a little bad news…currently the HP VAN SDN controller is not aware of any network devices (dpids), so your exploration is a bit boring. We need to fire up a test network with a tool called mininet. To do this, look on the desktop of the DevOps station. You will see a notepad file named start mininet. Double click the file and it will open in the default editor. Look for the IP address (should be xx.xx.xx.xx) and change it to match the IP address of the HP VAN SDN controller. Open up a MATE terminal, paste the command and hit enter.
You will see mininet start up and load the hosts and switches. It will finish at a mininet prompt. At the prompt enter the command pingall to generate some interesting traffic.
Now go back and start looking around for nodes, datapaths and links. You will start seeing some interesting and familiar things like IP addresses and MAC addresses and switch port numbers. Get familiar with these API’s and what they are capable of. We will be writing python applications to take advantage of them in the next post.
Happy Exploring!

API programming with python for HP VAN SDN Controller

Part One: Getting your API lab up and running
DISCLAIMER: You can only trust a mad scientist so much!

If you’re an occasional visitor to this blog then you will know that I have a “thing” for APIs, or Application Program Interfaces. Seems everybody has one these days, so why not learn how to take advantage of them. Before we get started we will need a test lab where we can test our creations without bringing down the production network, a career limiting move (CLM).
So without spending any money, you can build a virtualized lab to run a couple of Virtual Machines right on your laptop (I recommend you have a minimum of 8 gigs of RAM). You will need to run two VMs. One will be the HP VAN SDN controller and the other will be a L.A.M.P. (Linux, Apache, MySQL and PHP) server.
Luckily, I have these two VMs prebuilt and sitting on my ftp server in a huge zip file with lots of other goodies. The following picture shows the ftp location and credentials. Once you get this downloaded (sorry if you’re still using 56k dial up) we can take the next step to getting the lab up and running.
OK, got the zip file, check….It has been extracted to some folder on your computer, check….you have identified the two “OVA” files we will import into Oracle Virtual Box (“HP VAN SDN Controller 2.3.5.ova” and “My New DevOps Box.ova”), check…..You have Virtualization Technology enabled on your PC…….do it now!!!!…..check…..You have the latest copy of Oracle Virtual Box downloaded and installed……OK, I’ll wait…….you think you would have done this by now…OK…check. We are ready to import the OVAs into Oracle Virtual Box (OVB).
Open OVB and from the main menu select File>>Import……check!
Once you have selected the VM to import, click on NEXT and IMPORT…just a few secs to wait here.
Now we have just a few changes to make to the VM settings to get this VM to be a part of our network. We will set the NIC (Network Interface Card) to be bridged mode. This is handy if you are running DHCP on the host network. You will do this for both virtual machines. The following picture will guide you through this process.
Now that we have the NIC configured properly we can start up the virtual machines and log in and take a look around. You will repeat the process depicted with both VM’s.
Now for a little bad news…the VMs that you just started are Linux distros of Ubuntu, one with a GUI and the other without. The HP VAN SDN Controller will have a user of sdn and a password of skyline, and the LAMP server will have a user of rick with a password of siesta3.
You will need a bare minimum of linux chops to complete this tutorial so here they are.
I will leave you here at this spot so you can practice your linux commands. On the desktop of the LAMP server is an icon for a MATE terminal. Double-click it and a terminal window will appear. Enter the command ifconfig and look at the IP address your LAMP server is assigned ….write it down. On the SDN controller there is no GUI so from the initial prompt enter ifconfig to see what IP address it was assigned and write it down.
Finally, to check to see if your SDN controller is operational, let’s go to the LAMP server and from the application menu slide down to Internet, then over to Chromium Web Browser. Once the browser launches, enter https://ip_address_of_sdn_controller:8443/sdn/ui. It should look like the picture below.
Enter the credentials from above and navigate some of the menus. You are now ready to take the next step..stay tuned.

Around the world with Docker!

DISCLAIMER: Not intended for production use! 


UPDATE 12102015 – I changed the dashes to regular dashes so if you copy and paste commands they will work!

You could probably tell from my last three posts that I have been experimenting with Docker. If you have no clue what I’m talking about then see below! If you haven’t spent the past few months of your life admiring the weave of the cat-5 cabling in the IDF then let’s proceed.

I like it easy. Putting things in Docker containers certainly makes life easy. It is even easier when you make Docker images out of Dockerfiles. I had a dream to have a DevOps platform that I could have everything I needed to create SDN applications all in one system and be able to deploy an identical platform in several minutes. Things like a HP VAN SDN controller, OpenDayLight controller, a LAMP server and possibly a mininet application to generate some flows.

If you have been following along with the last blog posts you just might have an Ubuntu:Mate platform with the Docker daemon running. Wouldn’t it be great to say “Docker pull xod442/macfind3” and have a LAMP server that you can start using in 5 minutes? How about “Docker pull xod442/van” and have the HP VAN SDN controller at your disposal? Go ahead and do it. They are both waiting for you. I wish it was that easy for me. I had to go around the world.

I started looking around Dockerhub and found the OpenDayLight SDN controller in a docker container (docker pull raseel/opendaylight_base). Quickly the docker image is downloaded and up and running by using the command "docker run -d -p 8000:8000 -p 8080:8080 -p 6633:6633 -p 1088:1088 -p 2400:2400 raseel/opendaylight_base /opt/opendaylight/". Point your browser to the docker host at port 8080 and login with the default credentials of admin/admin. Very easy! I wanted it to be that easy for HP customers to get the HP VAN SDN controller, but I didn’t have a clue on how to do it.

I started with the installation instructions. A few sudo apt-get installs, unpacking a debian file, a couple shell scripts to run, keystone users to build. I was in over my head and needed to call in the professionals. My first call for help was someplace near London, England. A good friend and mentor who works for Docker, Dave, told me I would have to learn about something called supervisord. Lots of fascinating things over on his blog at:

In a nutshell supervisord is like systemd and it stops and starts services and scripts. There were a few late nights learning how to use this. In the end it is not that difficult. Stay tuned for the blog on supervisord. Now I wanted to learn a little more about the startup process for the controller and what directories things are stored in. I called another pro in the Bay area who basically wrote the book on SDN. Chuck gave me some awesome information and it started me down another path of learning and exploration and led me right into CoreOS!

Hit the brakes….stop everything…..if you don’t know about CoreOS then get to the googeler quick! CoreOS is a lightweight operating system that is designed a lot like Chrome OS. It has an A and B side for booting. While you’re up and running on the A side, the B side is updating. A reboot puts you on the B side while the A side updates. BOOM! Mind blown! When CoreOS boots up, IT IS DOCKER READY!!! More in the CoreOS blog later. If you can’t wait then look at this: Another great thing about CoreOS is it is designed from the ground up to be deployed in clusters and managed by etcd. I know, I had to run out and build one right away. This stuff is exciting!

Back to the SDN controller in the container. Another call to the bay revealed another mastermind, Juliano Vacaro, with R&D in Brazil. This is where I struck pure gold. It turns out that Juliano and his team have built the HP VAN SDN controller in a container. I could most likely pull it and my adventure would be over. But I don’t like taking shortcuts and I wanted to learn. Juliano shared with me some examples of Dockerfiles and supervisord.conf. They do things just a bit differently and run the SDN controller separate from the keystone server. I wanted it all in one docker image to make it very easy for customers to pull it and start running without having to link containers together (yes, you can do that).

In the end, it was building the Dockerfile (a script that tells docker how to build an image) that finally did the trick. Here are the contents of the Dockerfile.


# swap 14.04 for 12.04 for a precise implementation
FROM ubuntu:14.04

MAINTAINER Rick Kauffman <>

RUN apt-get update && apt-get install --no-install-recommends -y \
curl \
iptables \
iputils-arping \
net-tools \
ntp \
openjdk-7-jre-headless \
postgresql \
postgresql-client \
sudo \
supervisor \
software-properties-common \
ubuntu-cloud-keyring

RUN rm -rf /var/lib/apt/lists/*

# Now add Keystone
RUN apt-get install --no-install-recommends -y ubuntu-cloud-keyring \
&& echo 'deb trusty-updates/juno main' >>/etc/apt/sources.list \
&& apt-get update \
&& apt-get install --no-install-recommends -y keystone

RUN rm -rf /var/lib/apt/lists/*

# Run the Keystone setup script
COPY ./ /
RUN ./

RUN echo '* Allowing external access to postgres database' \
&& sed -i -- 's/host all sdn\/32 trust/host all sdn\/32 trust\nhost all sdn\/0 trust/' /etc/postgresql/9.3/main/pg_hba.conf \
&& sed -i -- "s/#listen_addresses = 'localhost'/listen_addresses = '*'/" /etc/postgresql/9.3/main/postgresql.conf
COPY ./hp-sdn-ctl_2.4.6.0627_amd64.deb /home/hp-sdn-ctl.deb
COPY ./supervisord.conf /etc/supervisor/conf.d/supervisord.conf

COPY ./ /
EXPOSE 5000 35357 8443 6633


I needed a and a script along with the supervisord.conf file. Put all these files in a directory on a docker server, along with the debian package, and issue docker build -t "xod442/van" . <– The dot at the end of this command will mess you up if you omit it. Then docker reads the Dockerfile and creates the image. You can run the Dockerfile over and over and it will produce the same exact image.

My trip around the world was fun and exciting (read: too many late nights in the lab) and I must say all the great people who helped me out are absolutely amazing, I cannot thank you enough. One thing for sure is I have an abundant amount of new topics to blog about. Stay tuned!

Now it is no longer necessary to stumble around getting your DevOps platform up and running. Get a docker server and start pulling!

Two commands to LAMP

docker pull xod442/macfind3
docker run -d -p 80:80 xod442/macfind3 /usr/sbin/apache2ctl -D FOREGROUND

URL http://ip_address_of_docker_server

Two commands to get your HP VAN SDN Controller!

docker pull xod442/van
docker run --privileged=true -d -p 8443:8443 -p 6633:6633 xod442/van /etc/supervisor/supervisord.conf
(The above two lines are actually one command)

URL https://ip_address_of_docker_server:8443/sdn/ui


Hit me up if you want to know more! I like to share!

Docker survival kit

WARNING!!!!!! Straight from the Mad Scientist!!

Part 3

Let’s finish this up! One of the biggest issues I had learning Docker was that when you use $ sudo docker run -i -t xod442/lamp /bin/bash to get a terminal session running on a docker image, you spawn a new container id. THE CHANGES YOU ARE MAKING DO NOT AFFECT THE ORIGINAL DOCKER IMAGE!!!! They are only relevant to the container id you are working in. Once you are finished with the changes to the container, you will need to commit them to a NEW docker image: $ sudo docker commit 90934ee6cf3f xod442/new_image_name. This is a bit tricky at first but once the light bulb comes on you’ll think you’re a freaking genius!

Now let’s say the docker image we created is a LAMP server. We want to run the LAMP server and have it stay up until we decide to stop it. I found this command works well: $ sudo docker run -d -p 80:80 xod442/macfind /usr/sbin/apache2ctl -D FOREGROUND. In this command we are mapping port 80 on the docker host to port 80 in the container. To test if your LAMP is up, point a browser to http://dockerhost (use the IP address of your docker host).
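Those -d and -p flags map straight onto the Docker SDK for Python, if you ever want to script this instead of typing it. The helper below translates -p HOST:CONTAINER specs into the dict that containers.run() wants; the SDK itself is a separate pip install docker, so I only import it inside the (hypothetical) launch function:

```python
def port_bindings(*specs):
    """Translate '-p HOST:CONTAINER' port specs into the mapping the
    Docker SDK's containers.run(ports=...) parameter expects."""
    bindings = {}
    for spec in specs:
        host, container = spec.split(":")
        bindings["%s/tcp" % container] = int(host)
    return bindings

def run_lamp(image="xod442/macfind", port_spec="80:80"):
    """Launch the LAMP container like the docker run command above.
    Requires `pip install docker` and a reachable docker daemon."""
    import docker  # the Docker SDK for Python -- a separate install
    client = docker.from_env()
    return client.containers.run(
        image,
        "/usr/sbin/apache2ctl -D FOREGROUND",
        detach=True,                       # like -d
        ports=port_bindings(port_spec),    # like -p 80:80
    )
```

Handy once you have a dozen containers to herd instead of one.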

Another way to verify that our LAMP server is up and running is to look at the docker processes. $ sudo docker ps -a will display all the containers we have ever started and what their operational state is. In the diagram below you can see that container 90934ee6cf3f is UP and running on port 80 and 5a52ff424b65 exited about an hour ago.


Have you noticed the names? Like cocky_brattain? If you don’t specify a name when running or starting a container, docker will make one up. You will notice each one is unique to your host. You can use your own names by using $ sudo docker run --name (containerName) -i -t ubuntu /bin/bash. Now when you look at the docker processes, you can easily identify your container from the others.

Finally here is a short list of commands that I use often. Copy them down and make your own docker cheat sheet.

sudo docker run --name (containerName) -i -t ubuntu /bin/bash
- Start a docker container, give it a name, pull ubuntu from dockerhub, load into container and offer the bash prompt.

exit – exits the container

sudo docker ps -a – Shows what containers are active and recently stopped. Here you can find the container ID

sudo docker start (containerId) – Starts the container

sudo docker attach lampster – attaches to the console of the container by name

sudo docker exec -i -t containerid bash – gives you bash on a running container

sudo docker rm $(sudo docker ps -a -q) - Removes all containers from your workspace (Danger Will Robinson!!)

sudo docker rmi $(sudo docker images -q) – Removes all images from work space (Danger Will Robinson!!)

sudo docker login – Allows you to login to dockerhub

sudo docker search (Keyword) – Allows you to search the dockerhub for pre-built container

sudo docker pull (owner/ImageName) – Get container from dockerhub

sudo docker commit (containerId) (owner/ImageName) – Builds a new Image from a container

sudo docker push (owner/ImageName) – Put Images on your dockerhub space

Hopefully this three part blog has stirred up some interest in diving into the world of containerization. It is by far only a limited look into this technology and I urge you to set up your own docker workstation and explore!

Finally, there is talk from Microsoft about working with Docker and implementing containers in Windows. When this is pervasive, keep in mind that if you build a docker container on a Windows platform, it will not be able to run on top of a linux docker server. Kind of goes without saying……but there are those of you reading this now who are not so strong with the force!! You know who you are!

Diving Into Containers

WARNING!!!!!! Straight from the Mad Scientist!!

Part 2

No, we’re not talking about dumpster diving….but close. I don’t know about you, but I am a conceptual thinker. Give me the “headlines” (and google) and I can usually figure things out. If you’re not wired that way, along with the book mentioned in Part 1, you can also find some awesome documentation over at

OK, Part 1 left you with some homework. Did you set up your github and dockerhub accounts?…….OK….I’ll wait……go do it now!!!!

Conceptually, docker has three basic parts: Docker images (plenty to pull off of Dockerhub), the Docker client (the docker command) and the docker host (server) running the docker daemon. Most of the interaction is with the host using the client commands. Below you will see a simple model of docker.


At this point you should have a new Ubuntu:Mate workstation and the docker daemon installed. If you were to use the $ sudo docker images command you should see a few local images. These were pulled down from dockerhub when we tested the docker install in Part 1. Here is an example of my current system.


You will probably notice the xod442; this is my dockerhub account name, followed by a slash and the name of the Docker image. If you want to remove one image you can use the $ sudo docker rmi (image_name) command. If you would like a clean slate, use $ sudo docker rmi $(sudo docker images -q) and say goodbye to all of your images. I used this several times in my learning process.

We briefly touched on the command to get us into a new container. We used $ sudo docker run -i -t ubuntu /bin/bash. The /bin/bash tells docker to do something and keep the container running. Without it the container would start and stop very quickly. You can use the $ sudo docker ps -a command to see the status of all containers. Without the -a option the command only shows running containers. In the graphic below I show the commands to start a new container and break down what is happening. I also show how to get out of a container (exit) and how to commit the changes that you make to a container, creating a new docker image (this is the point where the dockerhub account is going to come in handy).


Here is a diagram of the process: using docker run to initialize a new container, adding some extra love to it and committing to a new docker image. They say a picture is worth a 1000 words but this one is most likely 385. Remember, concepts/headlines only!!!


I think this is a good breaking point. I urge you to go out to dockerhub and browse some of the pre-built images. No need to reinvent the wheel; there are plenty of toys to play with. One last tip before we sign off: use $ sudo docker search (keyword) to look for specific images you might be interested in. You just might find what you are looking for. Finally, if you want to get something you find on the dockerhub site, use the $ sudo docker pull (owner/ImageName) command to pull it down to your docker host.

Part 3 will be a docker survival kit!

I can hardly contain myself!

WARNING!!!!!! Straight from the Mad Scientist!!

Part 1

Curiosity and need often go hand in hand. When you know nothing about something, it’s best to start reading. Here is “The Docker Book” by James Turnbull. A perfect learner’s guide.

This blog is an effort to condense this information and help you get past a few wookie traps.

OK, first things first. What is Docker and why do you care? Well, I think of Docker as a multiplexer for the Operating System as opposed to VMware’s HyperVisor acting as a multiplexer for the hardware.

Here is a diagram of the basic differences between Virtualization and Containers. When you develop an application, it has dependencies on certain libraries and binaries (files we don’t often think about). If we are developing this on a VM in VMware, the app is dependent on certain files in that particular operating system. So if I ZIP up the APP files and send them to someone on another VM, the APP might not run. The only way to guarantee the APP will work correctly is to send the entire Virtual Machine. Docker builds and manages containers. Every dependent file needed for the APP to run properly is packaged in a very small file called a container. As long as you load the container on a similar docker host, the APP will run perfectly.


Let’s get started. We will need a workstation to turn into a Docker platform. I am a self-confessed VirtualBox user. I could talk about why, but it would just be boring and not any fun. So fire up a new image of Ubuntu. Just found this and I have to admit…it's pretty nice. Just take a look!


Install Docker:

Installing Docker is straightforward.
Open a terminal window and at the command prompt enter:
sudo apt-get update
sudo apt-get install docker.io

Make sure it installed properly by launching a new container:
sudo docker run -i -t ubuntu /bin/bash

You should now see a new bash prompt like root@c0679a7f6d84:/#
If this is what you see, then you are in a new container. Congratulations!

UP NEXT!!!! Working with containers. Do yourself a favor and sign up for free accounts on GitHub and Docker Hub…you’re going to need them!

A box inside a box inside a box?

Starting out in a new job and I find myself needing to know way more about VMware than I do now. Luckily, I have not been living under a rock and I know what VMware is. In a very small nutshell, VMware is a virtualization technology that uses hypervisors that basically multiplex the underlying hardware to many virtual machines. Multiple hypervisors are managed through VMware vSphere (an individual hypervisor can be managed directly with the vSphere Client, more on that later). I’m thinking more like a pistachio nutshell.

I recently acquired a new laptop with 16 GB of RAM and I have gone a little crazy with building virtual machines in Oracle VirtualBox and not really having a need for VMware products. Life comes at you fast and you need to learn to adapt or you will no longer be relevant. With a little creative thinking I found a way to build a complete VMware environment with two hypervisors, a vSphere appliance and a couple of real VMs to vMotion back and forth. Big thanks to sysAdmGirl….she rocks!

Here is a picture of the logical lab environment. Keep in mind there are only two physical devices. The laptop and the Synology data store.


First things first, you will need to get a copy of Oracle Virtual Box and shutdown anything that is taking up extra RAM on your system, yes Chris, that means you’ll have to shut down TweetDeck as well!

You will see from the diagram that the three Oracle VBs will each have 4GB of RAM, a 10GB hard disk and 2 processors. Follow the links to the ESXi hypervisor (an ISO file), download it, and while you are on VMware’s website grab the vSphere OVA appliance. Two of the Oracle VirtualBox VMs will be created through the Oracle VirtualBox interface; make sure to set the network interface cards to “bridged” mode. For the third (vSphere) you just need to double-click the OVA file and it will import into VirtualBox.

When they are all installed and running it will look like this.

ALERT!!!!! Pay attention here!!!
When you look at the vSphere appliance it will say to point your browser to https://some_IP_address:5480. When you do, you will see something that looks like this:


You are probably thinking, where do I import the ESXi servers?…That’s what I thought too. This screen is for configuring the vSphere appliance with single sign-on and database storage locations. These are not the droids you are looking for. Drop port 5480 from your URL and you will be presented with the vSphere web client interface.

The VMware vSphere Web Client is a newer interface compared to the VMware vSphere Client (the old school client). The VMware vSphere Client is the same tool used to manage a single ESXi hypervisor as well as vSphere; you can find it on VMware’s site as well. Once it’s installed, just feed it the IP address of your vSphere appliance (minus the port 5480) and off you go!

Alright, now you should have the three VMs up and running. You will need to create a common data store running NFS. I used my Synology network attached storage device. Find something you can use and figure out how to make it appear as NAS on your lab network. Unfortunately, I don’t know what you will use, so you will have to put on your little grey hat and start looking around. Just Bing it on Google. If you need to know how the ESXi servers connect to the NAS storage, you can find that information Here.

What about the VM’s?

OK, so you have this micro environment and we need to find a desktop image we can deploy on our ESXi servers to vMotion back and forth. I found Damn Small Linux (50MB) fits the bill. Get it, load it onto the shared NFS storage, and use vSphere to create new VMs on each hypervisor.

You’ve been a good sport so far and I promise we are almost at the end of this exercise. I did this because I thought “I wonder what would happen if I installed VMware in Oracle Virtual Box?” Would it work?  Is it like mixing matter and anti-matter? You are about to find out.

We need to make some slight modifications to the ESXi hypervisors’ network settings, so follow along:
In this diagram we launch the VMware vSphere Client and give the credentials for the vSphere appliance. Somehow mine is set to root/vmware. Then we click on each hypervisor and edit the networking settings.
Drilling down a little deeper, look for the properties and select the Management network (remember, this is for a LAB; in real life you would most likely do something else). Once there, click the vMotion option to allow vMotion across the Management network.

BOOM! Use vSphere to “MIGRATE” the DSL VMs back and forth. Can you say winner!!!

This is a very brief post about the workings of VMware. I found a ton of cool, free, online training Here.

Play Nice, you’re on your way to becoming a VCP!!