A year in the life of a wookie

DISCLAIMER: I am a self-proclaimed, self-trained software hack; I barely know what I am doing! I’m fairly sure this is how you do it.

I started to write some code and when I looked up, a year had gone by… I’m listening to Strawberry Fields Forever and wondering how a full year has slipped by without a post to the blog. So, in the words of Lennon and McCartney, let me take you down, ‘cause I’m going to.

It’s been quite a journey from network engineer to “developer” (I use the term loosely), and every day I discover it’s a journey with no foreseeable end. I’m learning new things all the time, and the more I know, the more new avenues open up to discover.

I started the year off puzzling over an interesting problem. I was working with Ansible, trying to automate Cumulus switches. It seemed fairly straightforward: all you need is a YAML file with “ALL” the variables for your data center (loopbacks, fabric IPs, router IDs) and a playbook, and voilà, you have yourself a functioning data center.

Not so fast, you ZTPheads. Sure, I no longer had to type the configuration into the switch, but I had to come up with all the variables in the all.yaml file. You’re basically editing a text file with no syntax checking! That’s not ZTP; you’ve just moved the workload over to making the ALL file.

I was working on a process to automatically generate the ALL file and came up with a simple solution: Antigua. This app takes 10 variables and spawns the entire spine/leaf all.yaml file. It then uses Jinja2 templates to generate the full switch configurations and uploads the startup-config files into Arista’s Cloud Vision Portal. After that, the config is applied to the switches in the spine/leaf and everything (eBGP fabric) is up and running in less than a minute.
Want to roll out a new spine/leaf pod in your data center? How about having it up and running in less than a minute? Can you do that? I can.
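I can’t paste all of Antigua here, but the seed-variable expansion is the heart of the trick. Below is a minimal sketch of the idea; the naming, addressing scheme, and AS-number convention are my illustrative assumptions, not Antigua’s actual output. A few seed values fan out into per-switch variables that a Jinja2 template could then consume.

```python
import ipaddress

def antigua_vars(num_spines, num_leaves, loopback_net, asn_base):
    """Fan a few seed values out into per-switch variables,
    roughly the way Antigua builds the all.yaml content."""
    loopbacks = ipaddress.ip_network(loopback_net).hosts()
    switches = {}
    for i in range(1, num_spines + 1):
        switches[f"spine{i:02d}"] = {
            "loopback": str(next(loopbacks)),
            "asn": asn_base,                 # spines share one AS here
        }
    for i in range(1, num_leaves + 1):
        switches[f"leaf{i:02d}"] = {
            "loopback": str(next(loopbacks)),
            "asn": asn_base + i,             # unique AS per leaf for eBGP
        }
    return switches

fabric = antigua_vars(2, 4, "10.0.0.0/24", 65000)
```

Feed a dict like this through a Jinja2 template per switch and you have the makings of the all.yaml file without ever hand-editing it.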

Later I was challenged to have the switches self-configure, so I rewrote Antigua into “ISSAC,” a special version of Antigua that runs directly on Arista switches. Basically, the switch boots and looks in its TFTP directory for a “key” file. If it’s there, it sends it to all its neighbors (via ISSAC’s neighbor database), then learns its position (spine01) and self-configures. The only thing you have to do is TFTP a key file into any one of the switches’ TFTP directories. That’s it! Even I can do that… Geesh!

After I got the mechanics down for Antigua, I started to think about other applications where I could reuse the technology. I started looking at managing VXLAN tunnels, automatically generating “vxlan” configlets and loading them into Arista CVP. I came up with Subway.
Subway uses a Mongo database and keeps track of all the changes to the VNI flood lists. It was a fun thing to write but needs a little more work. With the introduction of EVPN, there’s not much left to do with the control plane. Definitely an application worth coming back to.
Longing for something I could really use on a day-to-day basis, I started looking at what it takes to MLAG a pair of Arista switches. Take a look; it’s like 16 commands per switch.
A couple of them are even reversed. I hope you don’t make a mistake, or you’ll have fun troubleshooting what you did wrong.

I really had the process refined for generating configlets and stuffing them into CVP, so I automated the MLAG process with Jetlag.
Jetlag takes two IP addresses, the seed port of the two ports used for MLAG between the switches, and the port number of the LACP downlink. Very simple application. It generates an MLAG configlet, saves it to CVP, and applies it to the switches.
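Jetlag’s core step is easy to picture. Here is a hedged sketch of the configlet generator; the exact CLI lines Jetlag emits and its parameter names are my guesses, and the output below is representative Arista MLAG config rather than Jetlag’s actual text.

```python
def mlag_configlet(local_ip, peer_ip, seed_port, lacp_port):
    """Build a representative MLAG configlet from Jetlag-style inputs:
    the two peer-link IPs, the seed peer-link port, and the LACP downlink."""
    peerlink = f"Port-Channel{seed_port}"
    lines = [
        "vlan 4094",
        "   trunk group mlagpeer",
        f"interface {peerlink}",
        "   switchport mode trunk",
        "   switchport trunk group mlagpeer",
        "interface Vlan4094",
        f"   ip address {local_ip}/30",
        "mlag configuration",
        "   domain-id mlag01",
        "   local-interface Vlan4094",
        f"   peer-address {peer_ip}",
        f"   peer-link {peerlink}",
        f"interface Port-Channel{lacp_port}",
        f"   mlag {lacp_port}",
    ]
    return "\n".join(lines)
```

The resulting text can then be pushed to CVP (for example, with something like the cvprac library’s add_configlet call) and applied to both switches.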
If you think about it, these three applications could really become a single app. They all basically do the same thing: get user input, make a text file of the config, upload it to CVP, and apply it to the switches in the inventory. After finishing these up and showing them around, Antigua started to get a lot of attention. Still the only app that I know of that can do what it does. Gauntlet thrown!

I moved on to another challenge. A partner wanted to use HPE IMC to monitor his customer’s network. If there was a real-time alarm in IMC, they wanted IMC to open an “incident” in ServiceNow and have two-way communication between the two to track resolution of the incident. I put together an app called “snowBridge”. snowBridge uses IMC’s and ServiceNow’s APIs and sits in the middle running a continuous loop. It runs on the command line with no user interaction. This app got me thinking about how to divide up the tasks into smaller microservices. More on this later.
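snowBridge itself talks to the real IMC and ServiceNow REST endpoints, but the middle-man loop boils down to something like this sketch. The fetch/create callables stand in for the actual API calls, and the field names are illustrative, not IMC’s or ServiceNow’s real schemas.

```python
def sync_once(fetch_alarms, create_incident, seen):
    """One pass of a snowBridge-style loop: any IMC alarm we have not
    seen yet becomes a ServiceNow incident. `fetch_alarms` and
    `create_incident` are stand-ins for the real REST calls; `seen`
    maps alarm ids to the incident numbers already opened."""
    created = []
    for alarm in fetch_alarms():
        if alarm["id"] not in seen:
            seen[alarm["id"]] = create_incident(alarm["desc"])
            created.append(alarm["id"])
    return created
```

Wrap that in a `while True:` with a short sleep and you have the continuous, no-user-interaction loop described above; the two-way piece is a mirror-image pass that copies incident state changes back toward IMC.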

With this effort out of the way, I started working on a request to streamline Antigua. Antigua generates a YAML file that can be used with Arista, Comware, or Ansible, and it will upload to any of the three systems (working on Tower). The new request was to strip away the Flask framework, make it run from the command line, and just generate the config files and store them on the hard drive. Check… easy… done. It’s called AntiguaCLI.

I had another request to update an application that runs in a Docker container and uses the APIs from Big Switch. More of a proof-of-concept app, it allows for logging into a Big Switch Networks BSN controller and looking at a couple of lists. I think I will automate the routing function of BSN. This little project rekindled my love for playing with Docker. The next project would take it to new heights.

So, HPE makes an application called OneView that manages the new Synergy compute platforms. It tracks uplink sets, networks, storage volumes, and server profiles: kind of a single pane of glass for managing these types of things. When someone adds a network or an uplink set, the rest of the network world (outside of OneView) doesn’t have a clue it happened. There is some integration for the State Change Message Bus in IMC: IMC listens to OneView’s State Change Message Bus and learns of these events. But if you’re not using that… surely there must be something else you can use to learn of these changes.

I looked at the hpOneView Python library and decided I could make an application that did just that, and called it “spymongo”. The whole solution (three Docker containers) can be used, or just a portion of it; it all depends on what you want to do.
The first thing is to docker-compose build the spymongo and mongo database containers. Once these two containers are running, the spymongo app automatically listens to the OneView SCMB and learns about all the changes being made. It looks for network changes and saves each document to the MongoDB. You don’t have to use it in this fashion: you could take the app and make it do whatever you need, possibly even without the database.
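To give a feel for what the listener does with each SCMB payload, here is a toy version of the store-it-if-it’s-a-network-change step. The message shape and category names are my assumptions based on OneView’s REST resource categories, and the plain dict stands in for the mongo collection.

```python
import json

# Assumed OneView resource categories that count as "network changes"
NETWORK_CATEGORIES = {"ethernet-networks", "network-sets"}

def handle_scmb_message(body, store):
    """Decide whether an SCMB payload describes a network change and,
    if so, upsert it into the store keyed by resource URI."""
    msg = json.loads(body)
    resource = msg.get("resource", {})
    if resource.get("category") in NETWORK_CATEGORIES:
        store[resource["uri"]] = resource
        return True
    return False
```

In the real app the messages arrive over AMQP from the OneView appliance and `store` is a pymongo collection, but the filtering decision is this simple.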

I use the database because I need just a little more functionality out of the app. With spymongo running, SCMB events are automatically stored in a Mongo DB. I wrote a “front end” for the database in Flask called “spyFront”.
spyFront allows for verification of the database records. It also has a feature that lets me learn every network event that OneView knows about prior to running spymongo. This gives me “parity” between OneView’s database and spymongo’s: the two databases are synced, and spymongo keeps them up to date by listening to the SCMB.
I did explore adding vCenter support to spyFront. Currently, spymongo looks at vCenter and downloads a list of virtual machines and their network connections. With spymongo running, anyone can make an application and mount the MongoDB. They no longer need to deal with SSH keys to have secure communication with OneView. Just use the Mongo database and go.

Throughout the year I have been on my favorite learning site, Udemy, looking at new things to play with. Hey, for $10 it’s the cheapest education you will find. I have taken a Puppet course, a Chef class, Flask/API/Docker training, and a super heavy course in Angular where I learned about the Single Page Application.

Just last week I started looking at what Darth was doing (mind-blowing as usual) over at Kontrolissues, and it looks like a blast! I also started deep-diving Mesosphere’s DC/OS. All of this is available on my private GitHub; it just needs to go through the Open Source Review Board before it becomes available to the public.

I hope it’s not another year until I blog again. I promise to quit being so “lazy” and keep you up to date on the goings-on of a half-crazed nerf herder.

By the way, the lyric “climb in the back with your head in the clouds” kind of has a new meaning for me!
I wonder if anyone will notice this lyric is from “Lucy in the Sky with Diamonds” and not “Strawberry Fields Forever”… HA!

Attack of the ZTP Server

Subtitle: Simplified Arista zero-touch switch deployment

DISCLAIMER: You can only trust a mad scientist so much! 


It seems like I’m going a mile a minute here, but there is a great urgency to get this information out to as many people as possible, as soon as possible. If you are building spine/leaf data centers with Arista switches, how would you like to provide a baseline configuration on a 4-spine, 8-leaf network and never touch one console cable? You can get started just by scanning the barcode on the outside of the box!

ZTP is the nirvana state that all of us who have been doing this for a while dream of (while we’re not dreaming of other things!).  ZTP means you can pull the switch out of the box, plug in a power cable and connect the management port to the network and the switch will configure itself. It’s a neat trick that can save you a lot of keystrokes. There are several different ways of doing this and they are documented on the interwebs. No need to hash that out here. Let’s set up a simple lab and I’ll show you the way we do it here in the laboratory!


Above is a diagram of my lab. I am running 10x Arista vEOS switches in Vagrant on a really big Windows 10 PC. All of them are set to “zerotouch enable” by default. This means if they can’t find a “startup-config” file in flash, they will start broadcasting DHCP requests out all ports.

You don’t need all 10, one or two will do, but I like to show off every now and then.

The ZTP server we will implement will have a DHCP scope enabled, with Option 67 to tell the switch who to “talk” to in order to get its configuration, and possibly software as well.

First things first! You will need to get my repo off of GitHub at this location: https://github.com/xod442/aristaZTP

From a shell prompt on your linux workstation,

sudo git clone https://github.com/xod442/aristaZTP.git

You will see a newly created directory called aristaZTP.


I’m using the Ubuntu MATE distribution. It is probably one of the most “Windows-like” versions of Linux and comes with “Caja,” a file explorer. Snap up a copy here: https://ubuntu-mate.org/download/


In Ubuntu’s command shell, to avoid typing sudo in front of just about every command, you can run “sudo -s”. Once you enter the password, you will be at a “hashtag” prompt… sudo no more!

Once you’re at the “#” prompt, type caja to spawn a very powerful file browser.


Now let me explain the different files. The README.md has some interesting tidbits and should be read at your leisure. There is a tftpboot directory; this is where we keep our CSV variables and the configuration template. There is an interfaces file in my screenshot, but these are not the droids you are looking for… Finally, setup.sh is a shell script that will configure the entire ZTP server from scratch. You just have to change a few things.



You’re looking at the first of three sections in the setup.sh file. Take a close look at what has a # in front of it. In the DHCP section, “#apt-get install isc-dhcp-server” is commented out. If you want to install it (and you do), get rid of the “#” in front of the command and it will change color.

Look a little more at the things you need to change.

Subnet and netmask: you might want to change these to match your own lab addressing. Take caution when working with multiple DHCP servers on the same network. Change the DHCP range and the ‘option tftp-server-name “”;’ to match the eth0 address of your ZTP server. A quick ifconfig should reveal the address you will need.

Next, the option for the boot file: ‘option bootfile-name “boot.py”;’. This is the file that you want the switch to start processing when it boots. It could be the switch’s startup-config, but then you would need static DHCP entries and a boot file for every switch! No such shenanigans here! Our file is named boot.py, which means it’s a Python script: when the switch downloads boot.py, it will start configuring the switch for you.
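Put together, the DHCP stanza the script builds looks roughly like this. The addresses here are placeholders for illustration only; match them to your own lab.

```
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.150;
  option tftp-server-name "192.168.1.10";   # the eth0 address of the ZTP server
  option bootfile-name "boot.py";           # option 67: the file the switch fetches
}
```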

Let’s look at the other sections of the setup.sh file.

There are two more sections in the script: one sets up the TFTP server and the other sets up an FTP server. We don’t really need the FTP server for our demonstration, but what the heck, fire it up anyway!


PAY CLOSE ATTENTION to the directories listed in the TFTP server section. These must point to where you have the tftpboot directory saved. Change them to match your environment.

Once you’ve made the necessary changes to the setup.sh file, run it: # ./setup.sh … Didn’t run? Try changing the permissions of the file, like this: # chmod 775 setup.sh … then try it again.

Now DHCP and TFTP should be up and running, but you will need to chmod the dhcpd.leases file. It’s a pain in the butt and I have not figured out a way to avoid it, but it only needs to be done when you boot up the ZTP server.


OK, just a couple of other things to do for successful booting of our switches. The one thing you absolutely need to know is the MAC address of each switch. This information should be easily obtained without having to console into each switch: barcode labels on the boxes, or perhaps they are listed on your PO?

Once we have the MAC addresses, we can pair them up with some repetitive information and just about anything else you would like to see configured in the “base configuration”. I keep things like the real IP address I want the switch to have and the Cloud Vision Portal credentials. You can maintain this information in an Excel spreadsheet, or you can use “switchdb,” a Flask application I wrote for just such things. You can find switchdb here: https://github.com/xod442/scriptsonly/tree/master/switchdb

Hint, Hint! install switchdb on the ZTP server…easy peazy!

Once we have all of our MAC addresses in the CSV file, you will need to store the CSV file in the tftpboot directory with the name “varMatrix.csv”. It should look something like this.


The last piece of this puzzle is the template.txt file. It is used to create the switch’s configuration. In this file, everywhere we have a dollar sign “$”, it pairs with a variable in the CSV file.
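In Python terms, that “$” pairing is exactly what the standard library’s string.Template does, which is presumably why the template uses dollar signs. A tiny illustration; the placeholder names here are made up, not the real template.txt fields.

```python
from string import Template

# Illustrative only: the real template.txt and varMatrix.csv columns may differ.
template = Template("hostname $hostname\ninterface Management1\n   ip address $mgmt_ip")
row = {"hostname": "spine01", "mgmt_ip": "192.168.1.21/24"}  # one CSV row as a dict
config = template.substitute(row)  # every $name is replaced by row["name"]
```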


You can modify this template file to add anything else that might be a common configuration for all switches. Don’t get too crazy; the next blog post will cover Ansible and we will really start deploying configurations then.

NOTE: Once the switches boot with this template, they will be ready to import into Cloud Vision Portal and further configurations can be deployed there!

The real “magic” behind this ZTP process is the boot.py file. I had to do some interesting things to make this work, but the process is fairly simple. Here’s the process.

The switch boots, gets DHCP and the location of the boot.py file, then downloads and executes it.

boot.py copies the varMatrix.csv file and the template.txt file to the switch’s flash.

boot.py opens both files for reading.

boot.py finds the MAC address of the local switch and compares it to the MAC field in the CSV.

If there is a match, the dollar-sign variables in the template are replaced with the CSV variables.

boot.py saves the result to the switch’s flash drive and the switch reloads.
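The steps above condense into surprisingly little code. Here is a hedged sketch of the matching-and-substitution core; the column names are my assumptions, not the real varMatrix.csv header, and the real boot.py also handles the TFTP copy and the reload.

```python
import csv
import io
from string import Template

def render_startup_config(my_mac, var_csv, template_text):
    """Mimic boot.py's core step: find the CSV row whose MAC matches
    this switch, then fill the $-placeholders in the template with
    that row's fields. Returns None when no row matches."""
    for row in csv.DictReader(io.StringIO(var_csv)):
        if row["mac"].lower() == my_mac.lower():
            return Template(template_text).substitute(row)
    return None  # no match: leave the switch unconfigured
```

On the switch, the result would be written to flash:startup-config before the reload.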

The switches will be ready to be managed after they complete their reload. There are a couple of different directions we can go from here. We can import the switches into CVP; check out this blog post for more information: http://techworldwookie.com/?p=205

Next up: Whipping these switches into shape with Ansible and Docker!

Beginner’s guide to the Universe and Cloud Vision Portal Episode 2

Part Two: Get that switch in the game!
DISCLAIMER: You can only trust a mad scientist so much!
Thanks for hanging in there with me on this long blog post. We are going to go a little deeper into building the configlets for our spine/leaf network using Arista Cloud Vision Portal (CVP). The CVP server ships as an OVA, and with a couple of clicks we can have it up and running on our ESXi host. You’re going to need about 10 GB of RAM to get this going. At a bare minimum I have had it running with 7 GB, but I do not recommend it. The first step is to log in with the cvpadmin user and run the setup script. You will be prompted to change the “root” password; no weak stuff here… I tried.
After we accomplish this we need to run the setup script. I selected “s” for standalone mode. Answer the prompts according to your own environment.
Select “r” and “a” for apply, and watch the boot process. I had a lot of issues with the NIC coming up with the correct address. If this happens to you, quit the script with a “q” and log in as “root”; remember, you just set the password for the root user. Once the network connectivity issues are resolved, run the standalone script once more. It will take about 15 minutes for the CVP server to boot.

Look in /etc/sysconfig/network-scripts/ifcfg-eth0 for any misconfigurations. I had issues with the NTP server as well, so I just used the IP address instead of the DNS name. All good; let’s move on. Point your favorite web browser to the address of the CVP server and you should be greeted with the login screen.
We are going to log in to CVP with the “cvpadmin” user. On the first login attempt you will use cvpadmin as the userid and cvpadmin for the password as well. You will be prompted to change the password for cvpadmin.

CVP Main Menu

If you read part one of this blog post, you should be a little familiar with navigating around CVP. The main menu is your starting point. Let’s verify our containers are just the way we want them using the Network Provisioning page. Arrange them to your liking. We will start with at least one of our switches in the inventory; we’ll use its configuration to build our configlets.

Each container in the tree can host a particular part of the overall switch configuration, a configlet. As we traverse down the tree to the switch we pick up configlets and stitch them together to make up the overall configuration.

In the blog we will create all the configlets for our two spine/two leaf network.
If we take a closer look at the switch we have added to the inventory, we see it is yellow, and if we hover over the image we get a pop-up that tells us the device’s configuration is out of sync. This means that the configuration of the switch does not match what our configlets “think” it should be.
The process starts by right-clicking the device and selecting “Manage Configlet” from the Network Provisioning screen.
On this screen, select “Validate” at the bottom of the page; CVP will pull the configuration from the switch and compare it to what the configuration looks like after cascading through the network provisioning tree.
It seems we have some differences between the two! If you see items marked in red in the running configuration window on the right, it indicates that you do not have a configlet for that part of the configuration and will need to create one.
In the running configuration window, highlight the entire configuration and paste it into an Excel spreadsheet. It’s a little weird, but believe me, it makes it easier to see what’s going on.
OK, at this point we are going to start rearranging some of the configuration statements. We will group them into common items that can be found on all switches and unique items that apply to only one switch: things like router ID, loopback address, etc.
Once you get your configuration statements arranged the way you want them, go back to the CVP screen and let’s add a new configlet. Pull down the plus “+” sign menu and select Configlets.
Give the configlet a title, copy and paste the configuration statements from the Excel spreadsheet into the configlet editor, and save.
Here we have copied out the rest of the unique information for the other switches to help build the configlets for all the devices in our network. Obviously, this won’t scale, but it is a good way to learn what configlets are and keep track of what is going on as you’re just starting out.
Looking good, I think we can move on! Configlets done!

Assigning configlets to containers

We need to assign the common configlets to a top level container. From the network provision screen right-click on your top most container and manage configlets. Select the common configlets and assign them to this container.
Now, right-click on the switch and assign the configlets to the individual switch.
The unique configlets will be on the left, and you will need to place a check mark next to the ones you want to add. Don’t worry if the common configlets show up on the right; they are supposed to. Select Validate at the bottom of the screen.
The main goal of this exercise is to get the red indicators in the running configuration window to no longer appear. You either make another configlet or remove the configuration from the switch.
It is possible to use Reconcile at the bottom of the screen to clean up the remaining red items as well. I don’t really recommend this, but it is up to you.
For me it is difficult to tell if these will be common across all switches or not. I just keep making configlets until all the red is gone, as shown in the following image.


Once we save the validation screen, back in the Network Provisioning screen we will see that something new has appeared: a small letter “T” in a yellow circle tells us that we have tasks that need to be pushed.
From the shortcut menu at the top left corner of the screen, navigate to the Tasks page and have a look.
On the far right of the following image, we see the tasks are pending.
Just above the tasks pending notification there is a small circle icon with an arrowhead inside. Click on this to start deploying the task(s). At this point they are only staged.
If the deployment is successful you can look at the network provision screen and see the color of the switch icon has changed to purple.
Now to finish this up, we will add the other three switches, and with any luck we will automatically configure the switches’ common elements.
Don’t forget to add the unique configlets to the new switches and deploy the new tasks.
We have now completed this blog post on configlets and how to use them to configure our network devices. If we add more switches to other containers, we will have to repeat this process. One of the benefits of doing our configurations in this format is that we can add ACLs to the Sector 5 container without needing to worry too much about what configlets are in the others.
Next up, we will look at writing an API script to log into CVP and back these configlets up! Stay tuned.

Oh, yeah, that would be a nice to have!

A quick Arista Spine/Leaf to play with
Part One: Get off the bench
DISCLAIMER: You can only trust a mad scientist so much!
It came to me while I was thinking about something else: I just showed you how to get switches loaded into Arista’s Cloud Vision Portal, but you might not have any switches to play with.
You have Vagrant installed; if not, Vagrant Up!
You have Oracle VirtualBox installed (OVB)
You have the box file from Arista
Arista vEOS-lab-4.16.6M-virtualbox.box
The finished product should look like this.
If you are familiar with vagrant, have I got a file for you!
Click this to download.
OK, move the Vagrantfile into a directory… just make something up… and change to that dir.
If you’re using Linux, rename the file to Vagrantfile (capital V).
Make sure you add the box file to vagrant:
vagrant box add vEOS4166M vEOS-lab-4.16.6M-virtualbox.box

Then, at the command line just type vagrant up.

Each VM needs 1536 MB of RAM!
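If you want to see where that number lives, a hypothetical minimal Vagrantfile for a single switch looks like this. The box name must match whatever you used with “vagrant box add”; the real file linked above also defines all four switches and their fabric links.

```ruby
# Hypothetical single-switch Vagrantfile for illustration only.
Vagrant.configure("2") do |config|
  config.vm.define "spine01" do |sw|
    sw.vm.box = "vEOS4166M"            # name given to "vagrant box add"
    sw.vm.provider "virtualbox" do |vb|
      vb.memory = 1536                 # each vEOS VM needs 1536 MB
    end
  end
end
```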

The four switch lab will boot up inside Oracle Virtualbox.
Log into the switches with admin/admin

Now you have a lab that’s ready for a BGP configuration!

NOTE!!! You will need to go into the VM settings in Oracle VirtualBox and change the management interface (#1) to Bridged mode.

It will show that it is set to NAT and that won’t work.

Beginner’s guide to the Universe and Cloud Vision Portal

Part One: Get that switch in the game!

DISCLAIMER: You can only trust a mad scientist so much! 

Note for all the fans: I have not posted in quite a while. I was doing some soul searching and I cut the soles off of my shoes, learned to play the flute and lived in a tree. It took a while but I finally realized that life was not worth living if I couldn’t blog and look at facebook. So, I’m back!….and the crowd went wild!

OK, so you have heard the latest word on the street that HPE will start reselling Arista Networks. Yes, it’s true. That means we get to start experimenting with new things here in the laboratory. You might be of the mindset that a switch is a switch but Arista switches take it a bit further.

Arista keeps the “state” of the switch in something called SysDB. It’s a database that keeps all the state information for all the processes. If the OSPF process hangs, you simply restart it and it loads the last known state from SysDB, which greatly reduces network outages.

Now kick it up a notch: all the Arista switches share their state information with Arista Cloud Vision Portal (CVP). CVP uses something called NetDB to keep the state of the entire network! CVP now gives us a single place where we can apply APIs. Yes, I said it! API… you knew I wouldn’t go a whole blog post without mentioning it! The APIs allow the entire configuration of the network to be manipulated and automated any way you want. I did a little pimping out of the login screen; this is what it looks like.


Let’s login and start working with CVP. Here is what waits for you upon successful authentication.


As Jeff Probst would say, “First things first!” There are a number of things to click on and look around at, but in order to do anything substantial we need to add some switches to CVP. CVP uses a concept of configlets: small portions of a config file that get applied to hierarchical containers (not Docker), and as you go down the tree, additional configlets are combined to make the final configuration. Let’s take a closer look. I will log into a vEOS switch running in Oracle VirtualBox (didn’t think I wouldn’t mention that either???). Here is spine01.


This switch is fully configured. Even so, it was missing some key information. Now pay attention, because this is the most important part: CVP will not be able to manage any switch without the correct credentials. This means the credz that you use to log into CVP must be on the switch as well… all you ZTP heads, don’t blow a cork just yet, we’ll get to that in another blog, stay tuned… so, just for now, we will add the username to the switch manually.

HOLD THE PHONE!!! I just added the username and it was the exact same command as on a Cisco switch! Yes, you are correct! You do not need to learn a new CLI; for the most part it is exactly like Cisco.

Configuring the switch

conf t (don’t need the “t”)
username cvpadmin privilege 15 role network-admin secret mypassword
Ctrl-Z (I’m old school)
wri me (Told you… write to memory)

Now we have the credentials saved to the configuration. It’s VERY important that these are the same as the ones used to log into CVP. We also need to tell the switch that it will be managed by… wait for it… APIs!!

conf (not using the “t” this time because I’m lazy)
management api http-commands
cors allowed-origin all
no shut
wri me

The only thing we are missing is assigning an IP address… (be quiet, you ZTPheads!) to the management interface.

interface management 1
ip address
ip route
Write me

Looking good. You can consider this the very basic configuration needed by CVP to manage the device. We will build a configlet to ensure that as well.

Adding the switch to CVP

It’s been all fun and games up to this point, but here I got burned pretty badly. Let’s be honest, there was nothing pretty about it. There are a couple of ways to get a running switch into CVP: you can add them one at a time or use the bulk import method. We’ll look at both options, but first we are going to guarantee that all of the devices we import will have a minimum configlet applied.

Log into CVP, look at the main screen (pictured above), and click on the Configlets button. Your list will be empty if you’re just starting out. Look at the top right of the screen and you will see a plus “+” sign with a down arrow… click on it and select Configlets.

At the bottom of the page click on Save.

Now we need to apply the configlet to the uppermost container in our tree and I know I will have to have a discussion about containers right about now!



It’s a far-flung organization we have over here at wookieware, so we approached the container tree with a little foresight into what will happen and what could happen. Security is always a concern, so we wanted to have a container just for security ACLs, otherwise known as access control lists, and yes, I do know my ACL from a hole in the ground, but I digress. Here is what our tree looks like.


We created a top-level container called Wookieware. After that we have our security container called Sector 5. This could be some geographic designation, but it is just a place where the security folks can add a configlet with an ACL and have it trickle down to the lower levels where the switches will inherit it. Moving down the tree we have our data center container dc1, and after that the different pod containers. Finally, a little further division by switch role: spine or leaf.

It’s important to note that configlets can be deployed in any container. The top level will have MOTD, triple-A, and basic routing. The switches will have configlets that pertain to local configurations only: interface IP addresses, autonomous system numbers, loopback IP addresses, and hostnames.

The implied idea is that as we move down the tree, the switch picks up the configlet in each container (and a software image), and by the time we arrive at the switch, most of the configuration is complete. So take a little time here and build out what you want your tree to look like.
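The tree walk is simple to model. This toy sketch (container names taken from the tree above, configlet names invented for illustration) shows how a switch’s final configlet list is just the concatenation of everything on the path down to it.

```python
# Toy model of CVP's configlet inheritance: each container holds zero or
# more configlets, and a device's config is everything on its path.
tree = {
    "Wookieware": ["motd", "aaa"],          # common, top-level configlets
    "Sector 5":   ["security-acl"],         # security team's ACLs
    "dc1":        [],
    "pod1":       [],
    "spines":     ["bgp-common"],
    "spine01":    ["spine01-unique"],       # device-specific bits
}
path = ["Wookieware", "Sector 5", "dc1", "pod1", "spines", "spine01"]
applied = [c for container in path for c in tree[container]]
```

Walking the path in order also explains why an ACL dropped into Sector 5 automatically lands on every switch below it.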


But what about the switches!

Let’s go back to the main menu and look at the Inventory. At the top left you will see a grid. Click on it and it will give you shortcuts to where you want to go.


Choose Inventory and it will take you to your CVP inventory page. From here (your list should be empty if you’re just starting out) look at the top right and click on the plus “+” sign.

We can add switches one at a time… BORING!!… but let’s have a look. The radio button is on Add Device. In the Host Name / IP box we will put the IP address of the switch and navigate down to the container where we want to place our switch. At this point you need to have a long think about what you are doing. If your switch is fully configured, then I wouldn’t recommend adding it to the spot where you want it. Perhaps it’s better to have a container for all new switches and import to that first. Then make sure you have the necessary configlets you desire. It’s up to you. Think!

I don’t like to think a lot so for the purposes of this blog I will choose the final container x-spines and click on the add button next to the Host Name /IP address.


Now watch out! Here is where it gets a little funky. I don’t know if this is browser-related, but as long as the screen says Connecting, the switch will not be added to the inventory when you select the Save button at the bottom of the screen.


Look at the top right of the screen again and locate the refresh icon. Click it. Click it. Click it. Keep clicking on it until the switch says connected.


The switch is now in the inventory and is inheriting any configlets you have in your container tree. Click on the Save button at the bottom of the screen to return to the inventory home page.


Bulk importing switches

The same result can be obtained by using the Import Device function. From the inventory home screen click on the plus “+” sign at the top right of your screen, and this time choose Import Device. Just below the Validate button you will see a link to download a sample CSV file. Download it and we will quickly fill it out.


Open up the CSV file in Excel and look at the information we need. Below is a sample of the one I used to populate my system.
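If you would rather script the CSV than fuss with Excel, a few lines of Python will do it. Fair warning: the column headers below are my own guesses for illustration; grab the real headers from the sample file CVP lets you download.

```python
# Hypothetical bulk-import CSV builder. The real column headers come from
# the sample file CVP provides, so check yours before using this.
import csv
import io

def build_import_csv(devices):
    """devices: list of (ip_address, container_name) tuples."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["ip_address", "container"])  # assumed headers
    for ip, container in devices:
        writer.writerow([ip, container])
    return buf.getvalue()

print(build_import_csv([("10.0.0.11", "x-spines"), ("10.0.0.21", "x-leafs")]))
```

Save the output to a .csv file and feed it to the Import Device screen.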


The same caveat applies to the bulk loader. Keep refreshing your screen until you see all the switches show Connected, then hit the Save button at the bottom of your screen.

Next up…Part Two….working with configlets.

If you’re new to Arista then look at this: Arista Warrior. It’s a bit dated but still an excellent resource.

A Brief diversion

Console 2.0: a limited operator’s interface to IMC.

OK, sorry for the brief diversion from the python blog, but something came up and I wanted to share. We all know the power of APIs; I have been evangelizing this for quite some time now. Here is an example of APIs in action. I have a customer that uses IMC and likes it a lot. His main trouble is that he cannot lock IMC down far enough to keep the noob console operators out of trouble, things like putting a port in the down state only to have it turn out to be an uplink port. So with some python code, HTML5, the IMC APIs and containers…can’t forget those…I put together a solution. Originally this was implemented on a WAMP server, so if you have a Windows-only policy, this solution will work for you as well (you will need my Win2008R2 OVA file…email me).

I give you Console 2.0…in a docker container. Lately, it seems I’m ending a lot of conversations with …”in a docker container” …….I’m kind of like that guy with a jalapeno on a stick!
This solution allows multiple users to find MAC or IP addresses, bring front-facing ports (only) up or down, change VLANs on interfaces, and add VoIP configuration to switchports. I added a few other handy things to have and finally an option that will blast the VoIP port back to factory defaults.
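For the curious, here is roughly how a tool like Console 2.0 talks to IMC from Python. I am writing this from memory, so treat the endpoint path, server address, and credentials as assumptions and check them against your IMC version’s API documentation before trusting any of it:

```python
# Sketch of an IMC RESTful API call (the kind Console 2.0 makes under the
# hood). IMC uses HTTP digest authentication. The endpoint path is from
# memory and may differ by IMC version; the server and credentials below
# are placeholders.
import json
import urllib.request

IMC = "http://imc.example.com:8080"   # hypothetical IMC server address

def locate_url(base, mac):
    """Build the (assumed) real-time locate URL for a MAC address."""
    return "%s/imcrs/res/access/realtimeLocate?type=2&value=%s" % (base, mac)

def find_mac(mac, user="admin", password="admin"):
    # Set up digest auth, then ask IMC where it last saw this MAC.
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, IMC, user, password)
    opener = urllib.request.build_opener(
        urllib.request.HTTPDigestAuthHandler(mgr))
    req = urllib.request.Request(locate_url(IMC, mac),
                                 headers={"Accept": "application/json"})
    with opener.open(req) as resp:
        return json.load(resp)

# find_mac("00:1c:2e:aa:bb:cc")   # returns IMC's JSON for that MAC
```

Wrap a handful of calls like this in a small web front end and you have an operator console that can only do what you let it do.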
This is available on my Dockerhub account as xod442/console2.
To get the docker image you will issue this command from a docker server: sudo docker pull xod442/console2
To get it to run:
sudo docker run -d -p 80:80 xod442/console2 /usr/sbin/apache2ctl -D FOREGROUND
Point a browser to the docker server’s IP address on port 80. Away you go!

API programming with python for HP VAN SDN Controller

Part Two: Navigating the HP VAN SDN API
DISCLAIMER: You can only trust a mad scientist so much!

OK, we have our API programming lab up and running. You can log into the HP VAN SDN controller from your DevOps station. It is time to start exploring the API’s and what they have to offer.
For the most part APIs will be used in a couple of ways; they are the language of the web, and they are simple to use. HTTP GET collects information from the web server, POST creates, and PUT, you guessed it, updates. With these few basic verbs we could create just about any C.R.U.D. application we want. But let’s not get ahead of ourselves; first we want to find out what APIs are available. To do that we will navigate to the RSDOC page of the controller. Thanks for thinking of us! The APIs are documented and interactive! Point the browser on the DevOps station to the address in the graphic below.
In order to get to all the goodies (APIs) we need to tell the controller that we are a “friendly” by entering the username and password. This part is a bit tricky: you will need to enter some JSON in the “auth” API and get a token. By default the HP VAN SDN Controller uses Keystone as its authentication mechanism. It will give you a token (good for 24 hours) to use in your communications. Click on the “auth” API and then on the green “Post” button. A box will appear asking for the JSON login string: {"login":{"user":"sdn","password":"skyline"}}. Then click the “Try it out” button. Look for the token in the server response and copy it to the clipboard (double-click on it and copy).
Now look at the top of your screen for an Explore button. Just to the left of it is a box where you will paste the token. It is a long hex number that I usually just memorize. Once it is pasted in the box, click on the Explore button and you should get a response of OK 200 at the top center of the screen.
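Once you have done the token dance by hand, you can do the same thing from Python; it is the first step of every app we will write against this controller. Here is a sketch using just the standard library. I am assuming the default port 8443 and the v2.0 auth path, and that the token comes back inside the “record” object; double-check the exact response layout on your RSDOC page:

```python
# Fetch an auth token from the HP VAN SDN controller (assumed: port 8443,
# /sdn/v2.0/auth path, token inside the "record" object of the response).
import json
import ssl
import urllib.request

def build_login(user, password):
    """The same JSON login string you typed into the RSDOC auth box."""
    return {"login": {"user": user, "password": password}}

def get_token(controller_ip, user="sdn", password="skyline"):
    url = "https://%s:8443/sdn/v2.0/auth" % controller_ip
    body = json.dumps(build_login(user, password)).encode()
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "application/json"})
    # The lab controller uses a self-signed cert, so skip verification here.
    ctx = ssl._create_unverified_context()
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)["record"]["token"]

# get_token("192.168.1.50")   # use your controller's IP; token is good for 24 hours
```

Stash the returned token; every other call to the controller wants it in an X-Auth-Token header.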
Before we start looking around you might want to know that switches managed by the HP VAN SDN controller are called dpids, Datapath Identifiers. They look like they impart no knowledge, but they do…just look at this.
Some of the APIs will ask for a specific dpid. Just use the datapaths API without any options to see what the controller knows about dpids.
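Here is that same datapaths call from Python, reusing the token from the auth step. The of/datapaths path and the “datapaths” key in the response are what my controller showed in RSDOC, but verify them against your version:

```python
# List the dpids the controller knows about (assumed: /sdn/v2.0/of/datapaths
# endpoint, token passed in the X-Auth-Token header).
import json
import ssl
import urllib.request

def extract_dpids(body):
    """Pull just the dpid strings out of a datapaths response body."""
    return [dp["dpid"] for dp in body.get("datapaths", [])]

def list_datapaths(controller_ip, token):
    url = "https://%s:8443/sdn/v2.0/of/datapaths" % controller_ip
    req = urllib.request.Request(url, headers={"X-Auth-Token": token})
    ctx = ssl._create_unverified_context()   # self-signed cert on the lab box
    with urllib.request.urlopen(req, context=ctx) as resp:
        return extract_dpids(json.load(resp))

# list_datapaths("192.168.1.50", "paste-your-token-here")
```

Before mininet is running this returns an empty list, which is exactly the “bit boring” state described below.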
Now we are ready to click on the other API’s and test drive them.
Now for a little bad news…currently the HP VAN SDN controller is not aware of any network devices (dpids), so your exploration is a bit boring. We need to fire up a test network with a tool called mininet. To do this, look on the desktop of the DevOps station. You will see a notepad file that starts mininet. Double-click the file and it will open in the default editor. Look for the IP address (should be xx.xx.xx.xx) and change it to match the IP address of the HP VAN SDN controller. Open up a MATE terminal, paste the command, and hit enter.
You will see mininet start up and load the hosts and switches. It will finish at a mininet prompt. At the prompt enter the command pingall to generate some interesting traffic.
Now go back and start looking around for nodes, datapaths and links. You will start seeing some interesting and familiar things like IP addresses and MAC addresses and switch port numbers. Get familiar with these API’s and what they are capable of. We will be writing python applications to take advantage of them in the next post.
Happy Exploring!

API programming with python for HP VAN SDN Controller

Part One: Getting your API lab up and running
DISCLAIMER: You can only trust a mad scientist so much!

If you’re an occasional visitor to this blog then you will know that I have a “thing” for APIs, or Application Programming Interfaces. Seems everybody has one these days, so why not learn how to take advantage of them? Before we get started we will need a test lab where we can try out our creations without bringing down the production network, a career-limiting move (CLM).
So without spending any money, you can build a virtualized lab to run a couple of virtual machines right on your laptop (I recommend a minimum of 8 GB of RAM). You will need to run two VM’s. One will be the HP VAN SDN controller and the other will be a L.A.M.P. (Linux, Apache, MySQL and PHP) server.
Luckily, I have these two VM’s prebuilt and sitting on my ftp server in a huge zip file with lots of other goodies. The following picture shows the ftp location and credentials. Once you get this downloaded (sorry if you’re still using 56k dial-up) we can take the next step to getting the lab up and running.
OK, got the zip file, check….it has been extracted to some folder on your computer, check….you have identified the two “OVA” files we will import into Oracle VirtualBox (“HP VAN SDN Controller 2.3.5.ova” and “My New DevOps Box.ova”), check…..you have Virtualization Technology enabled on your PC…….do it now!!!!…..check…..you have the latest copy of Oracle VirtualBox downloaded and installed……OK, I’ll wait…….you’d think you would have done this by now…OK…check. We are ready to import the OVA’s into Oracle VirtualBox (OVB).
Open OVB and from the main menu select File>>Import……check!
Once you have selected the VM to import, click on NEXT and IMPORT…just a few secs to wait here.
Now we have just a few changes to make to the VM settings to get this VM to be a part of our network. We will set the NIC (Network Interface Card) to be bridged mode. This is handy if you are running DHCP on the host network. You will do this for both virtual machines. The following picture will guide you through this process.
Now that we have the NIC configured properly we can start up the virtual machines and log in and take a look around. You will repeat the process depicted with both VM’s.
Now for a little bad news…the VM’s that you just started are Linux distros of Ubuntu, one with a GUI and the other without. The HP VAN SDN Controller will have a user of sdn and a password of skyline, and the LAMP server will have a user of rick with a password of siesta3.
You will need a bare minimum of linux chops to complete this tutorial so here they are.
I will leave you here at this spot so you can practice your linux commands. On the desktop of the LAMP server is an icon for a MATE terminal. Double-click it and a terminal window will appear. Enter the command ifconfig, look at the IP address your LAMP server was assigned, and write it down. On the SDN controller there is no GUI, so from the initial prompt enter ifconfig to see what IP address it was assigned and write it down.
Finally, to check whether your SDN controller is operational, let’s go to the LAMP server and, from the application menu, slide down to Internet and then over to Chromium Web Browser. Once the browser launches enter https://ip_address_of_sdn_controller:8443/sdn/ui. It should look like the picture below.
Enter the credentials from above and navigate some of the menus. You are now ready to take the next step..stay tuned.

Around the world with Docker!

DISCLAIMER: Not intended for production use! 


UPDATE 12102015 – I changed the dashes to regular dashes so if you copy and paste commands they will work!

You could probably tell from my last three posts that I have been experimenting with Docker. If you have no clue what I’m talking about then see below! If you haven’t spent the past few months of your life admiring the weave of the cat-5 cabling in the IDF then let’s proceed.

I like it easy. Putting things in Docker containers certainly makes life easy. It is even easier when you make Docker images out of Dockerfiles. I had a dream of a DevOps platform with everything I needed to create SDN applications in one system, and of being able to deploy an identical platform in several minutes: things like an HP VAN SDN controller, an OpenDayLight controller, a LAMP server, and possibly a mininet application to generate some flows.

If you have been following along with the last blog posts you just might have an Ubuntu:Mate platform with the Docker daemon running. Wouldn’t it be great to say “docker pull xod442/macfind3” and have a LAMP server that you can start using in 5 minutes? How about “docker pull xod442/van” and have the HP VAN SDN controller at your disposal? Go ahead and do it. They are both waiting for you. I wish it had been that easy for me. I had to go around the world.

I started looking around Dockerhub and found the OpenDayLight SDN controller in a docker container (docker pull raseel/opendaylight_base). The docker image is quickly downloaded and brought up with the command "docker run -d -p 8000:8000 -p 8080:8080 -p 6633:6633 -p 1088:1088 -p 2400:2400 raseel/opendaylight_base /opt/opendaylight/run.sh". Point your browser to the docker host at port 8080 and log in with the default credentials of admin/admin. Very easy! I wanted it to be that easy for HP customers to get the HP VAN SDN controller, but I didn’t have a clue how to do it.

I started with the installation instructions: a few sudo apt-get installs, unpacking a debian file, a couple of shell scripts to run, keystone users to build. I was in over my head and needed to call in the professionals. My first call for help was someplace near London, England. A good friend and mentor who works for Docker, Dave, told me I would have to learn about something called supervisord. Lots of fascinating things over on his blog at: http://dtucker.co.uk/.

In a nutshell, supervisord is like systemd: it stops and starts services and scripts. There were a few late nights learning how to use it, but in the end it is not that difficult. Stay tuned for the blog on supervisord. Next I wanted to learn a little more about the startup process for the controller and which directories things are stored in. I called another pro in the Bay Area who basically wrote the book on SDN: http://www.amazon.com/Software-Defined-Networks-Comprehensive-Approach/dp/012416675X. Chuck gave me some awesome information, and it started me down another path of learning and exploration that led me right into CoreOS!

Hit the brakes….stop everything…..if you don’t know about CoreOS then get to the googler quick! CoreOS is a lightweight operating system that is designed a lot like Chrome OS. It has an A and a B side for booting. While you’re up and running on the A side, the B side is updating. A reboot puts you on the B side while the A side updates. BOOM! Mind blown! When CoreOS boots up, IT IS DOCKER READY!!! More in the CoreOS blog later. If you can’t wait then look at this: https://www.youtube.com/watch?v=vy6hWsOuCh8. Another great thing about CoreOS is that it is designed from the ground up to be deployed in clusters and managed by etcd. I know, I had to run out and build one right away. This stuff is exciting!

Back to the SDN controller in the container. Another call to the bay revealed another mastermind: Juliano Vacaro, with R&D in Brazil. This is where I struck pure gold. It turns out that Juliano and his team had already built the HP VAN SDN controller in a container. I could most likely have pulled it and my adventure would be over, but I don’t like taking shortcuts and I wanted to learn. Juliano shared with me some examples of Dockerfiles and supervisord.conf. They do things just a bit differently and run the SDN controller separately from the keystone server. I wanted it all in one docker image to make it very easy for customers to pull it and start running without having to link containers together (yes, you can do that).

In the end, it was building the Dockerfile (a script that tells docker how to build an image) that finally did the trick. Here are the contents of the Dockerfile.


# Swap 14.04 for 12.04 for a precise implementation
FROM ubuntu:14.04

MAINTAINER Rick Kauffman <chewie@hp.com>

RUN apt-get update && apt-get install --no-install-recommends -y \
curl \
iptables \
iputils-arping \
net-tools \
ntp \
openjdk-7-jre-headless \
postgresql \
postgresql-client \
sudo \
supervisor \
software-properties-common \
ubuntu-cloud-keyring

RUN rm -rf /var/lib/apt/lists/*

# Now add Keystone
RUN apt-get install --no-install-recommends -y ubuntu-cloud-keyring \
&& echo 'deb http://ubuntu-cloud.archive.canonical.com/ubuntu trusty-updates/juno main' >>/etc/apt/sources.list \
&& apt-get update \
&& apt-get install --no-install-recommends -y keystone

RUN rm -rf /var/lib/apt/lists/*

# Run the Keystone setup script
COPY ./setup-ks.sh /
RUN ./setup-ks.sh

RUN echo '* Allowing external access to postgres database' \
&& sed -i -- 's/host all sdn\/32 trust/host all sdn\/32 trust\nhost all sdn\/0 trust/' /etc/postgresql/9.3/main/pg_hba.conf \
&& sed -i -- "s/#listen_addresses = 'localhost'/listen_addresses = '*'/" /etc/postgresql/9.3/main/postgresql.conf
COPY ./hp-sdn-ctl_2.4.6.0627_amd64.deb /home/hp-sdn-ctl.deb
COPY ./supervisord.conf /etc/supervisor/conf.d/supervisord.conf

COPY ./run.sh /
EXPOSE 5000 35357 8443 6633
ENTRYPOINT ["/run.sh"]


I needed a run.sh and a setup-ks.sh script along with the supervisord.conf file. Put all these files in a directory on a docker server, along with the debian package, and issue: docker build -t "xod442/van" . <-- The dot at the end of this command will mess you up if you omit it. Docker then reads the Dockerfile and creates the image. You can run the Dockerfile over and over and it will produce the same exact image.

My trip around the world was fun and exciting (read: too many late nights in the lab) and I must say all the great people who helped me out are absolutely amazing, I cannot thank you enough. One thing for sure is I have an abundant amount of new topics to blog about. Stay tuned!

Now it is no longer necessary to stumble around getting your DevOps platform up and running. Get a docker server and start pulling!

Two commands to LAMP

docker pull xod442/macfind3
docker run -d -p 80:80 xod442/macfind3 /usr/sbin/apache2ctl -D FOREGROUND

URL http://ip_address_of_docker_server

Two commands to get your HP VAN SDN Controller!

docker pull xod442/van
docker run --privileged=true -d -p 8443:8443 -p 6633:6633 xod442/van /etc/supervisor/supervisord.conf
(The above two lines are actually one command)

URL https://ip_address_of_docker_server:8443/sdn/ui


Hit me up if you want to know more! I like to share!

Docker survival kit

WARNING!!!!!! Straight from the Mad Scientist!!

Part 3

Let’s finish this up! One of the biggest issues I had learning Docker was that when you use $ sudo docker run -i -t xod442/lamp /bin/bash to get a terminal session running on a docker image, you spawn a new container id. THE CHANGES YOU ARE MAKING DO NOT AFFECT THE ORIGINAL DOCKER IMAGE!!!! They are only relevant to the container id you are working in. Once you are finished with the changes to the container, you will need to commit them to a NEW docker image: $ sudo docker commit 90934ee6cf3f xod442/new_image_name. This is a bit tricky at first, but once the light bulb comes on you’ll think you’re a freaking genius!

Now let’s say the docker image we created is a LAMP server. We want to run the LAMP server and have it stay up until we decide to stop it. I found this command works well: $ sudo docker run -d -p 80:80 xod442/macfind /usr/sbin/apache2ctl -D FOREGROUND. In this command we are binding port 80 on the docker host to port 80 in the container. To test if your LAMP server is up, point a browser to http://dockerhost (use the IP address of your docker host).

Another way to verify that our LAMP server is up and running is to look at the docker processes. $ sudo docker ps -a will display all the containers we have ever started and their operational state. In the diagram below you can see that container 90934ee6cf3f is Up and running on port 80 and 5a52ff424b65 exited about an hour ago.


Have you noticed those names, like cocky_brattain? If you don’t specify a name when running or starting a container, docker will make one up. You will notice each one is unique to your host. You can use your own names with $ sudo docker run --name (containerName) -i -t ubuntu /bin/bash. Now when you look at the docker processes, you can easily identify your container from the others.

Finally here is a short list of commands that I use often. Copy them down and make your own docker cheat sheet.

sudo docker run --name (containerName) -i -t ubuntu /bin/bash
– Starts a docker container, gives it a name, pulls ubuntu from dockerhub, loads it into a container and offers the bash prompt.

exit – exits the container

sudo docker ps -a – Shows what containers are active and recently stopped. Here you can find the container ID

sudo docker start (containerId) – Starts the container

sudo docker attach lampster – attaches to the console of the container by name

sudo docker exec -i -t containerid bash – gives you bash on a running container

sudo docker rm $(sudo docker ps -a -q) – Removes all containers from your workspace (Danger Will Robinson!!)

sudo docker rmi $(sudo docker images -q) – Removes all images from work space (Danger Will Robinson!!)

sudo docker login – Allows you to login to dockerhub

sudo docker search (Keyword) – Allows you to search the dockerhub for pre-built container

sudo docker pull (owner/ImageName) – Get container from dockerhub

sudo docker commit (containerId) (owner/ImageName) – Builds a new Image from a container

sudo docker push (owner/ImageName) – Put Images on your dockerhub space

Hopefully this three-part blog has stirred up some interest in diving into the world of containerization. It is by far only a limited look into this technology and I urge you to set up your own docker workstation and explore!

Finally, there is talk from Microsoft about working with Docker and implementing containers in Windows. When this is pervasive, keep in mind that if you build a docker container on a Windows platform, it will not be able to run on top of a linux docker server. Kind of goes without saying……but there are those of you reading this now who are not so strong with the force!! You know who you are!