No LAG in Link Aggregation

Sub title: Automating the boring


TL;DR
Click less (a lot less) by auto-generating LACP sets with the HPE Composable Fabric controller API!

Say you have yourself a pair of HPE Composable Fabric modules (switches) and about 48 HPE DL360 servers. Maybe you are setting up a Hadoop lab and you are going to run something like Elasticsearch on the nodes. Well, currently, there is no integration with that workload, so you would have to use the GUI to manually configure each LACP set.
No worries, you seeker of knowledge, you can grab a copy of Sidekick:
Sidekick on HPE github
Choose the Autolag menu item and select the "a-side" and "b-side" switches. Choose the VLAN group, enter how many LACP sets to create, and click submit; Autolag will build the desired number of LAGs on the HPE Composable Fabric Manager.

Senseless acts of automation


TL;DR
Possibilities can become reality…..if you just think of enough stupid things!

Truly being a Mad Scientist these days. I always say that with StackStorm I can automate just about anything. Well, my dream has always been to tweet a VLAN into my data center. Yes, you're probably rubbing your chin saying the cheese has slid off this wookie's cracker.
But not so fast! If you think about what you have to do to make this task happen, in an automated fashion, then maybe it’s just not so dumb after all.
We don't want to have to pick up our phone and tweet every time we want to test our app, do we? So we start with a fun little app to send a tweet for us.
[screenshot: the tweet-sending script]
It's OK, security buffs, I changed the OAuth token… Next we are going to have to make a StackStorm pack. It's just a bunch of folders so we can keep everything together.
[screenshot: the pack directory structure]
Now we need to depend on the twitter sensor, so we install that pack: st2 pack install twitter. In the pack's sensors folder we will find the sensor; it looks like this.
[screenshot: the twitter sensor]
You can see where it wants the OAuth token. So we add that information to the configs directory like this:
[screenshot: the twitter pack config]
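For reference, that config lives in /opt/stackstorm/configs/twitter.yaml. A minimal sketch might look like the following; the exact key names come from the twitter pack's own config.schema.yaml, so treat these as illustrative placeholders:

---
consumer_key: "XXXX"
consumer_secret: "XXXX"
access_token: "XXXX"
access_token_secret: "XXXX"
# the search string the sensor watches for; placeholder value
query:
  - "#tweetavlan"
count: 30
language: "en"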
Now when the sensor kicks out a trigger for a matched tweet, our rule will "fire":
[screenshot: the rule]
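A rule along those lines might look something like this. The pack name "tweetavlan" and the payload field are my own placeholders, and the trigger name is my best guess at what the twitter pack emits, so check the pack's sensor metadata before copying it:

---
name: "tweet_vlan_rule"
pack: "tweetavlan"
description: "Kick off the VLAN workflow when a matching tweet arrives."
enabled: true

trigger:
  type: "twitter.matched_tweet"

action:
  ref: "tweetavlan.tweet_vlan_workflow"
  parameters:
    # the payload field name is an assumption; inspect a trigger-instance to confirm
    tweet: "{{ trigger.tweet }}"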
That rule will call for an st2 action to run. In this case it's a workflow. Here's a look at the YAML file.
[screenshot: the action metadata YAML]
Looking at this file we can see that it has a runner type of Orquesta. That means it's going to run an st2 workflow, like this:
[screenshot: the Orquesta workflow]
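As a rough sketch, a two-task Orquesta workflow of that shape could look like this. Only hpecfm.post_vlan is a real action from the stackstorm-hpe-cfm pack; the format_tweet action, the input names, and the three parameter names passed to post_vlan are my assumptions for illustration:

version: 1.0

description: Parse a tweet and POST the VLAN to the CFM fabric.

input:
  - tweet

tasks:
  format_tweet:
    # hypothetical action that pulls the VLAN number out of the tweet
    action: tweetavlan.format_tweet
    input:
      tweet: <% ctx(tweet) %>
    next:
      - when: <% succeeded() %>
        publish:
          - vlan_id: <% result().result.vlan_id %>
        do: post_vlan

  post_vlan:
    # parameter names here are guesses; the pack defines the real ones
    action: hpecfm.post_vlan
    input:
      vlan_id: <% ctx(vlan_id) %>
      vlan_name: tweeted_vlan
      vlan_description: Added via Twitter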
There are two tasks in the workflow. The first formats the tweet and pulls out the information we want, the VLAN numbers. The second sends three variables over to the stackstorm-hpe-cfm pack and runs an action to POST a VLAN to the CFM fabric.
[screenshot: the format action's YAML]
The YAML file tells the Python script what variables are coming, and the script then matches the tweet against my Twitter handle.
[screenshot: the format action's Python script]
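A hedged sketch of what that parsing action might look like, assuming the trigger hands us the whole tweet as a dictionary with user and text fields, and assuming the tweet reads something like "vlan 100 please":

import re

from st2common.runners.base_action import Action


class FormatTweet(Action):
    def run(self, tweet, screen_name='wookieware'):
        # only act on tweets from my own handle; anything else is ignored
        if tweet.get('user', {}).get('screen_name') != screen_name:
            return (False, 'tweet was not from %s' % screen_name)

        # pull the first number out of something like "vlan 100 please"
        match = re.search(r'vlan\s+(\d+)', tweet.get('text', ''), re.IGNORECASE)
        if not match:
            return (False, 'no vlan number found in tweet')

        return (True, {'vlan_id': int(match.group(1))})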

Finally, three variables are sent to the hpecfm.post_vlan st2 action. So that, my friends, is how you can tweet VLANs into your data center fabric. Absolutely crazy!

I sense a disturbance in the force! How about you Mr. Spock?

Sub title: stackstorm sensors, what little I know about them
_____________________________________________________________________________________
[image: Spock]
TL;DR
Sensors are what make StackStorm "event" based. They can "watch" for specific events to occur, and when they do, they emit a trigger-instance. The triggers are matched to rules, and if there is a match, the rule will kick off an action, or a host of actions in what is called a workflow. A critical piece of "If This Then That" automation.


When I started my journey into StackStorm, I knew that it would take some time to get my head around all the moving parts. Actions, workflows, sensors, rules, and triggers are all different from each other and have an intertwined relationship. In this blog, we have discussed building a StackStorm pack and building actions and workflows. By their nature, actions and workflows are not automation. They need something to put them in motion. That job falls to the rule. If you want to know more about rules, jump down and follow the link at the bottom of this post.

Let's start our sensor discussion with a simple action. We know from earlier posts that an action needs a YAML file that defines the variables used by the action, plus a shell or Python script. There are other runner types, but don't worry about them here.
Below is a simple action that writes a hyperlink to a file called index.html. Here is the YAML file.
[screenshot: the write_html action YAML]

You can see we want to run a Python file when this action is called. The file is write_html.py. Further, we see this Python application takes an argument: in this case 'link', with a type of string.
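For those following along at home, that metadata file boils down to something like this sketch (the description strings are mine):

---
name: write_html
description: Write a hyperlink into an index.html file
runner_type: python-script
entry_point: write_html.py
enabled: true
parameters:
  link:
    type: string
    description: The URL to write into the file
    required: true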
[screenshot: write_html.py]
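And a minimal sketch of what a write_html.py like that could look like; the anchor-tag formatting and the hard-coded path are assumptions:

from st2common.runners.base_action import Action

# where the action drops the generated page; adjust to taste
INDEX_PATH = '/opt/stackstorm/packs/tutorial/etc/index.html'


class WriteHtml(Action):
    def run(self, link):
        # overwrite index.html with a single hyperlink
        with open(INDEX_PATH, 'w') as index_file:
            index_file.write('<a href="http://{0}">{0}</a>\n'.format(link))
        return (True, link)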

Once the YAML and Python files are completed and stored in /opt/stackstorm/packs/default/actions/, we need to tell StackStorm to reload. We issue the command:

st2ctl reload --register-actions

If you want to see if st2 did the right thing with your action….issue:

st2 action list -p default

This will list all the actions in the default action directory.

Now we can run this action from the command line like this:

st2 run default.write_html link='www.wookieware.com'

Closer inspection of the Python file reveals the location where the index.html is written: /opt/stackstorm/packs/tutorial/etc/index.html. If you load that file in a browser, you should see the URL you added when you ran the action.

OK so far, but where is the automation? That is where the sensor comes into focus. I am going to look at the RabbitMQ sensor; it's one I am familiar with. Let's have a look at a very simple RabbitMQ implementation: two Python scripts, one a sender, the other a receiver.
[screenshot: send.py]
This script sets up a connection and a channel. Next, we declare the queue we want to publish to. Finally, we push the message to the channel with basic_publish. Great, run this script about a thousand times and all you are doing is filling up the bus with messages that it cannot deliver. If you have the management tools installed, you can try this:

rabbitmqadmin get queue=hello count=99

This command will show you all the messages waiting to be delivered. They stay in the queue until someone gets them!
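If it helps to see the whole sender in one place, here is the classic pika "hello world" pattern the script follows (the broker address, queue name, and message are assumptions):

import pika

# connect to a local broker and make sure the 'hello' queue exists
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')

# push one message onto the bus; nothing picks it up until a consumer runs
channel.basic_publish(exchange='', routing_key='hello', body='www.wookieware.com')
connection.close()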
Now we need something to listen to our queue and do something about it. We need a “sensor”!
[screenshot: receive.py]
This is where we start learning about the callback. This script starts out the same and sets up a connection and a channel to communicate on. Likewise, it declares the queue it wants to subscribe to. Then we define the callback method. This is where we send variables off to get handled by some action. In this case, whatever gets sent to the callback is printed to the screen with a preceding [x].
It's in basic_consume where this script consumes messages off the bus from the 'hello' queue; that call is what causes each message on the bus to be passed to the callback.
At this point, anything in the StackStorm universe can be done: run an action or a workflow, set triggers. I leave the rest up to your imagination, which I am sure is quite amazing!
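Here is the matching receiver, again following the standard pika tutorial pattern (this uses the pika 1.x argument names; older versions spell basic_consume slightly differently):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')


def callback(ch, method, properties, body):
    # this is the hook where a sensor would hand the payload to StackStorm
    print(' [x] Received %r' % body)


# consume the 'hello' queue and pass every message to the callback
channel.basic_consume(queue='hello', on_message_callback=callback, auto_ack=True)
channel.start_consuming()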


So, just to wrap up the RabbitMQ discussion: if you look at the bus and there are no messages, that might not be a bad thing. Messages were meant to be put on the bus… and taken off! Now that you have a basic understanding of RabbitMQ, how can we apply it to the RabbitMQ StackStorm sensor? Let's have a look.
[screenshot: the RabbitMQ sensor code, part 1]
If you follow that blue link, it will take you to where I got the code for the simple RabbitMQ lab. All the rest is just some obligatory authentication and init functions to get the sensor running in StackStorm. Here the sensor learns what queues it should watch from the information in the config file in the /opt/stackstorm/configs directory. Let's look further:
[screenshot: the RabbitMQ sensor code, part 2]
This should start looking more familiar. We are setting up a connection and a channel, and we declare the queues we are going to subscribe to; we may have more than one! Next is the callback, which, when activated, runs 'dispatch_trigger'. Finally, at the bottom is where we consume the queue, just like in the earlier example. Nothing scary!
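To make that concrete, here is a stripped-down sketch of a sensor in that shape. It is not the actual pack code, just the skeleton: the config keys, queue names, and trigger payload are assumptions, and the real sensor has far more error handling:

import pika

from st2reactor.sensor.base import Sensor


class RabbitMQQueueSensor(Sensor):
    def setup(self):
        # host and queue names come from the pack config in /opt/stackstorm/configs
        self._host = self._config.get('host', 'localhost')
        self._queues = self._config.get('queues', ['hello'])
        self._conn = pika.BlockingConnection(pika.ConnectionParameters(self._host))
        self._channel = self._conn.channel()
        for queue in self._queues:
            self._channel.queue_declare(queue=queue)

    def run(self):
        def callback(ch, method, properties, body):
            # hand the message to StackStorm as a trigger-instance
            self.sensor_service.dispatch(
                trigger='rabbitmq.new_message',
                payload={'queue': method.routing_key, 'body': body.decode('utf-8')})

        for queue in self._queues:
            self._channel.basic_consume(queue=queue, on_message_callback=callback,
                                        auto_ack=True)
        self._channel.start_consuming()

    def cleanup(self):
        self._conn.close()

    def add_trigger(self, trigger):
        pass

    def update_trigger(self, trigger):
        pass

    def remove_trigger(self, trigger):
        pass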
[screenshot: the RabbitMQ sensor code, part 3]
Now, to tie it all up, the trigger itself. This could really be anything; its functionality is described in this section of the sensor's metadata. We give the trigger a name; rabbitmq.new_message works for me. Next we add the parameters we want to pass: in our case, the queue name and the body. For our action above, the body would be the hyperlink to write into the index.html file!
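That trigger definition lives in the sensor's metadata YAML alongside the class name and entry point. A sketch of that file, with a payload matching what we dispatch above (file and class names are placeholders):

---
class_name: "RabbitMQQueueSensor"
entry_point: "rabbitmq_queue_sensor.py"
description: "Sensor that watches RabbitMQ queues for new messages"
trigger_types:
  - name: "new_message"
    description: "A new message arrived on a watched queue"
    payload_schema:
      type: "object"
      properties:
        queue:
          type: "string"
        body:
          type: "string"

StackStorm prefixes the trigger name with the pack name, which is how it becomes rabbitmq.new_message in the rule below.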

Once a new message arrives in our rabbitMQ queue, this rule is activated:


name: "write_url_to_index"
description: "Write the APOD URL to the etc/index.html file."
enabled: true

trigger:
  type: "rabbitmq.new_message"
  parameters: {}

action:
  ref: "tutorial.write_html"
  parameters:
    link: "{{ trigger.body }}"

The trigger is a new message on the RabbitMQ bus. The rule matches that trigger-instance and calls the tutorial.write_html action, so the action runs every time there is a new message on our RabbitMQ queue. Now that's automation!

This is a very high level discussion about a complex subject so I decided to put together a deeper dive into the technology.

So if you want to know ‘way’ more than this humble blog post….check out my stackstorm training, made just for you over on My github
Hey, it’s wellness Friday and it’s past 2 PM!

It’s officially open source, stackstorm-hpe-cfm st2 integration pack for Composable Fabric

Sub title: Stackstorm actions….how do I?
https://github.com/hewlettpackard/stackstorm-hpe-cfm.git

[image]
TL;DR
Since the last time we spoke, we have successfully released the StackStorm pack on HPE's external GitHub. Funny, I thought there would be cake involved. We have had an issue submitted and worked through a couple of pull requests; we're using GitHub to truly collaborate. I just wanted to share something interesting and valuable for you seekers just starting out: actions, and how to authorize them.


StackStorm actions can use API calls to get information from a remote system. The first question I had was: how do these actions authenticate to the host? Have you ever heard the old saying, "A little knowledge is a dangerous thing"? Well, it's true. I had been reading the documentation up to the part where I learned about st2's datastore. The light bulb went off and I thought, "Why not just add the username and password to st2kv.system and pass them as variables to the action (see previous post)?" After running the action manually, everything worked as designed.


So I asked this question on the stackstorm-community Slack channel (highly recommended) and was pointed to how Fortinet, and just about every other vendor, created their own st2 "base" action. It is a Python class that gets called by every action. The idea is to only have to write the authentication once. Have I mentioned I'm learning a lot lately?

Let's have a look at the action I wrote: action.py

[screenshot: action.py]

We start by importing the Python binding for the HPE Composable Fabric Controller, followed by the base action from StackStorm. We create a Python class that will become our own base action. During initialization, we look for a file with configuration parameters, and we use the variables defined in that file to get a token.
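In spirit, the base action looks something like the sketch below. I am paraphrasing from memory here, so the pyhpecfm import path, the client class name, and the config keys should be checked against the real pack rather than copied:

# the pyhpecfm client class name and import path are assumptions; check the library
from pyhpecfm.client import CFMClient

from st2common.runners.base_action import Action


class HpecfmBaseAction(Action):
    def __init__(self, config):
        super(HpecfmBaseAction, self).__init__(config)
        # pull the controller address and credentials from the pack config,
        # then log in once so every child action can reuse self.client
        self.client = CFMClient(self.config['cfm_ip'],
                                self.config['cfm_username'],
                                self.config['cfm_password'])
        self.client.connect()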
Let’s take a look at the YAML files.
yourpackname.yaml.example

[screenshot: yourpackname.yaml.example]

This is just an example for the st2 user. It demonstrates what information the actions require. The config schema is config.schema.yaml and it looks like this:

[screenshot: config.schema.yaml]
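Roughly, a config.schema.yaml of that shape looks like this; the key names below are placeholders, and the real ones are whatever the pack's schema defines:

---
cfm_ip:
  description: "IP address or hostname of the Composable Fabric Manager"
  type: "string"
  required: true
cfm_username:
  description: "CFM username"
  type: "string"
  required: true
cfm_password:
  description: "CFM password"
  type: "string"
  required: true
  secret: true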

I believe this file is the source for the st2 pack config command.

If you were to run st2 pack config mypackname, StackStorm will prompt you for the information. It will ask for the username and then the password, and will display ***** while you type the password. Once it has gathered all the information, you will be asked to save the data. The results of the conversation are stored in another file in the /opt/stackstorm/configs directory.

Now when we run an st2 action like hpecfm.get_switches, we create a class that inherits everything from the action.py script shown above. The one variable we need from the base action class is self.client. This variable carries the token we use to authenticate with the CFM via the Python binding pyhpecfm.
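So a child action ends up being only a few lines. A hedged sketch follows; the pyhpecfm call, the import path for the pack's base action, and the fields I pull out of each switch record are assumptions, with the u_ style names mimicking what ServiceNow expects:

# fabric.get_switches and the switch record fields are assumptions; check pyhpecfm
from pyhpecfm import fabric

# import the pack's own base action (the path depends on how the pack lays it out)
from lib.actions import HpecfmBaseAction


class GetSwitches(HpecfmBaseAction):
    def run(self):
        # self.client was built and authenticated by the base action
        switches = fabric.get_switches(self.client)
        out = [{'u_name': s.get('name'),
                'u_ip': s.get('ip_address'),
                'u_health': s.get('health')} for s in switches]
        return (True, out)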

[screenshot: the get_switches action]

I could just return the list of dictionaries that is assigned to the variable switches, but I am only looking for a few choice bits, and I want to give them special UBAR names like I find in ServiceNow. After creating your first pack, and prior to running it, run st2 pack config mypackname and you should be set to start having the actions do what they were designed for.

So, the mystery is solved. I am not sure I fully understand everything I am doing, but it feels good and the code works. I have to admit, putting code out as open source forced me to be a better developer. I am starting to use the tools the way they were meant to be used. I am collaborating, and it makes you feel you are part of something bigger.
Want a new action added to the pack? Just open an issue on github!

The Art of Packing Part 1

Sub title: Building a StackStorm Pack
DISCLAIMER: You can only trust a mad scientist so much!

[image]
TL;DR
StackStorm uses sensors to monitor just about anything, and when an event happens, a rule is triggered that will launch actions that do things. Integration with the rest of the world happens through "packs". Want to know more? Read on!


Don't get the wrong impression! I'm not talking about packing your bags; this is a post about building StackStorm integration packs! What? Haven't heard of StackStorm? It's one of the coolest automation platforms I have had the "pleasure" of becoming familiar with. Ask not what it does, rather what can't it do. One thing I can't do is educate you in this post; that's not what it's for. You can check them out over here.

A StackStorm integration pack is a predefined set of actions, sensors, rules, Python or shell scripts, and other miscellaneous stuff. A StackStorm pack has a specific structure that looks like this!

[image: StackStorm pack directory structure]

This image is from the StackStorm documentation and if you are a real curious explorer you can look here.

I have just started this journey myself and I wanted to share some knowledge that cost about a thousand google searches.

So the good news is, if you have some existing Python or shell scripts you are using for automating things, we can recycle them into StackStorm. I am working with Python scripts. The first thing to do is pair the script file `myPython.py` with a `myPython.yaml` file. Together, the pair of files represents a StackStorm action. These are stored in the actions directory. You can manually run an action by typing st2 run <pack.action_name> on the command line. So let's take a look at an action.

Actions:

Let's start with the YAML file.

[screenshot: the action YAML]

It's fairly straightforward. Our entry point is a Python script called `get_switches`. The runner type is python-script. After that we describe each variable we will be sending to the Python script at execution.

StackStorm Datastore

So I am using the datastore built into StackStorm. st2 key set ipaddress "10.10.10.10" will save a key:value pair to the datastore. To retrieve it at any time, simply reference it with "{{ st2kv.system.ipaddress }}"; this must be in DOUBLE quotes or you will not have any fun. So, to recap: this YAML file tells StackStorm that there is an action that will run a Python script and pass it three variables, the IP address, username and password. Don't worry, you can encrypt these as well. Start googling!
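Putting those pieces together, the action metadata might look roughly like this; the parameter names are illustrative, and the st2kv references assume you have already run st2 key set for each value:

---
name: get_switches
description: Get the switch inventory from the CFM controller
runner_type: python-script
entry_point: get_switches.py
enabled: true
parameters:
  ipaddress:
    type: string
    default: "{{ st2kv.system.ipaddress }}"
  username:
    type: string
    default: "{{ st2kv.system.username }}"
  password:
    type: string
    default: "{{ st2kv.system.password }}"
    secret: true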

Next we have the Python script.

[screenshot: the action's Python script]

This was a script that I was already using. I added the import of Action and wrapped my script up in a class. In the def you see the three variables that are passed in by the action YAML.
You can also see I am importing pyhpecfm. To make this work, we need to include a requirements.txt file in our pack so StackStorm performs a pip install of the package automatically.
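A hedged sketch of that shape of script, wrapped up as an action. The pyhpecfm class and function names here are assumptions; substitute whatever your original script did:

# the pyhpecfm import paths and call signatures are assumptions; check the library
from pyhpecfm.client import CFMClient
from pyhpecfm import fabric

from st2common.runners.base_action import Action


class GetSwitches(Action):
    def run(self, ipaddress, username, password):
        # authenticate with the three variables handed to us by the action YAML
        client = CFMClient(ipaddress, username, password)
        client.connect()

        # return the raw switch inventory so a workflow can publish it to context
        return (True, fabric.get_switches(client))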

By creating this action, we can now reference it in workflows.

Workflows are YAML files that can run actions. Orquesta, StackStorm's new workflow engine, allows you to do some fairly complex workflows. Want to know more?
Orquesta.

When the workflow runs, it can call our action to get switches. Once it has the Python dictionary back from whatever we are talking to, it can stash it in the "context" and other actions can read from that. It was here my cheese almost slid off my cracker; the "context" was like stashing variables in thin air. Here is an example of an Orquesta workflow.

[screenshot: the Orquesta workflow]

You can see the get_switches action is called. It in turn runs a Python script. The result is stashed in the context by publishing it. Once it has succeeded, it will route to the sendsnow task, where another workflow is called and passed the switch information. That workflow uses the pre-built StackStorm integration pack for ServiceNow, which is freely available on the StackStorm Exchange here. Here is a screen shot of the workflow that calls ServiceNow. I use with-items and iterate through the dictionary.
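As a sketch, the pair of tasks described above might be wired up like this in Orquesta. The ServiceNow action name, its parameters, and the table name are placeholders for whatever the Exchange pack actually provides:

version: 1.0

description: Get the switch inventory and push each switch into ServiceNow.

tasks:
  get_switches:
    action: hpecfm.get_switches
    next:
      - when: <% succeeded() %>
        publish:
          - switches: <% result().result %>
        do: sendsnow

  sendsnow:
    # iterate over every switch dictionary published to the context
    with: <% ctx(switches) %>
    # placeholder ServiceNow action and parameters
    action: servicenow.create_record
    input:
      table: cmdb_ci_switch
      payload: <% item() %>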

[screenshot: the ServiceNow workflow]

I will be moving on to learning about sensors. These are python files that can watch message buses and kick off triggers.
Stay tuned for more in a later blog post!

Death of a travelling salesman….and just about everyone else!

Sub title: Winter is coming
DISCLAIMER: You can only trust a mad scientist so much!

[image]

Interesting times we are living in today, especially in the teknology industry! As you may know, if you read this blog, I am an automation evangelist. I have developed network automation applications for HPE Comware, Arista and Cumulus Linux. I always looked at the future with my “George Jetson” eyes and not my “Soylent Green” eyes. To me the future is bright, push button, and so much simpler because we are moving into a world of amazing automation.

I have been testifying for quite a while that if you are a network engineer, you need to depart from the command line and start picking up some Python chops. You need to learn how to code, be familiar with the DevOps process, and run around your office saying things like CI/CD, Kubernetes, and salt-chef-puppet!
If you’re in a meeting, get a quizzical look on your face and ask “What about ZTP?” Congratulations! You’re now a member of Team Automation.
Now for the bad news…..

When I used to hear the word "automation" I would think of the simplicity it brings to day-to-day drudgery, eliminating the small, insignificant tasks that quite frankly drive me crazy. There is also a dark side to this. Whenever I hear the word automation now, I think "humans don't touch it". Don't get me wrong, someone has to invent the automation, but once that's done… POOF… there goes a task that at one time required a human to touch it. Within the next 3 to 5 years I can imagine automation will be commonplace in customer data centers and large enterprise networks. They will quietly hum along, and finding a network technician will be just about as hard as finding help at your local Home Depot on a Saturday morning.

Beyond that, I see an even greater threat to your console-cable-wielding network engineer: A.I., Artificial Intelligence.
I look at the network automation practice as having five levels. As the levels get higher, we are bound to see AI getting inserted into the process.

[image: the five levels of network automation]

You can clearly see that when we get to level five, we will all be seeking employment selling shoes, or possibly in the construction business. If you throw nanotechnology, robotics, and quantum computing into the mix, we just might have to become robot psychologists (if they can think for themselves, they will have emotional hang-ups just like us)!

Lately, I have been reading a lot about the future as seen through the eyes of Martin Ford. I am surprised to find he has a new read on just this subject called Architects of Intelligence… downloading now… Martin talks at great length about how the influx of automation, robots (yes, I have seen robots flipping burgers) and AI is going to shape our world. Jobs will be greatly impacted: doctors, lawyers, scientists, laborers. I don't think there will be any job function not impacted by the forthcoming teknology.

Is there any good news? Maybe. As far as the levels go, we are still fairly close to the bottom, still slogging through spanning-tree issues and route flapping. Most major enterprise networks have a hodge-podge of network management solutions and very little automation to speak of. So, for the moment, we are relatively safe. This is no time to rest on your laurels, though!

There is an old hockey adage: "Skate to where the puck will be, not where it is." So it amazes me that today people are still chasing the next vendor certification: CCIE, CISSP, VCP. I can tell you, with some confidence, that by level 5 all of that information will be inside the AI. I hope it won't take too long to get that cert; it just may be obsolete by the time you do.

In closing, I would like to emphasize the need to re-invent yourself. There are a few vendors already bringing affordable AI solutions to the market. I know, AI has been around for a while, but we are about to see an explosion in its use. As AI moves from being reactive machines all the way to self-awareness, it will no doubt start replacing humans in many sectors. I for one will most likely be retired by then, living on my good looks and charm, no doubt (starving), but it's you, dear reader, I am worried about.

Do yourself a favor and don't quit your day job, but make sure to stretch yourself. Learn all you can on these advanced subjects, don't be the one who gets surprised by the events about to unfold, and for heaven's sake buy a warm coat… winter is coming!

Why the push for containers and kubernetes?

Applications are king. Everything we do from a data center hardware perspective is in support of delivering the application to the end user and storing the outcome of those user interactions. Upon the initial release of an application, developers are hard at work on the next release. New features, bug fixes and streamlined interfaces drive the need to improve the application. Unfortunately, past development methodologies produced applications with thousands of lines of code, and new versions could only be deployed every three to six months.

Deploying these huge applications in the data center was accomplished by spinning up a new virtual machine and loading the corresponding operating system (and necessary hardware drivers) and the application. With the introduction of CI/CD, Continuous Integration/Continuous Delivery, a DevOps approach to designing and deploying applications meant that those 1000-line applications would need to be broken down into smaller "chunks" with explicit unit testing.

With containerization, the smaller bits of code can be loaded into containers, and that application of 1000 lines can now be deployed as 120 containers. Want 10 load-balanced copies of that application? That would result in 1200 containers. Sounds difficult, but those 1200 containers can be deployed by Kubernetes with a few lines in a deployment.yaml file. Want to scale up from there? A single call to the Kubernetes API results in more resources made available to the application, on the fly, with no service interruption.
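To make that concrete, a minimal deployment.yaml really is only a few lines; the image name and labels here are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 10
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0
          ports:
            - containerPort: 8080

Scaling up later is a one-liner against the same API, kubectl scale deployment myapp --replicas=20, and Kubernetes rolls the extra copies out with no service interruption.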

Now that we have the smaller parts of our application, we can create automated testing and automated deployment. Thanks to the power of DevOps and CI/CD, Amazon pushes new software updates to its website every 11 seconds.
Applications are King, Containerized applications are Kingier.

Change of the season

I would have to say I'm your average mad scientist. I don't have bubbling beakers full of brightly colored fluids or sparks from electrodes zipping around my lab, but I do have a lot of switches, routers, virtual machines and applications currently under development. I am not a professional programmer, and everything I know about application development comes from the Googler and one or two books on the subject du jour from Amazon.

As expected, the winds of change bring changes, and I find myself fascinated by another attraction: Plexxi.

Before we get to what it is, let’s start with why?

OK, time for a little thought experiment. I want you to spin up a new virtual machine, with a new storage target, in your data center… I'll wait… OK! I bet it only took about 5 or 10 minutes for you to do that. What didn't you do? Configure the network. Did your new VM wind up in the right port group? What happens if you need to migrate the VM to another ESXi server? Is the VLAN on the destination server's switchport? No?

Then let's start the stopwatch and see how long it takes the network team to make those changes for you. What if it all just happened when we spun up the VM?

What kind of sorcery is this? It's not sorcery, it's HPE Composable Fabric, formerly known as Plexxi.
[image]

When I talk about this teknology I like to think of a house. A house has a strong foundation, walls that divide rooms from one another, and a roof to keep me dry. Let's start with the foundation.

We start by removing all the layers of network architecture and build a full-mesh layer 2 network, with fewer physical links, in fact, than a spine/leaf design with 4 spines and 8 leafs. We only need the leafs. The forwarding tables of each switch are managed and programmed by the controller. Oh, and one small thing to point out: the full mesh is auto-instantiated!
[image: the foundation, a full-mesh fabric]

Next up, the walls. Think about a convention center. One week they have a huge group in attendance and the next week it's 10 small companies. They need to be able to move the walls and reconfigure the space to meet the needs of the customer. If you have attended any of these events, you'll know what I mean. Composable Fabric is just about the same thing. I start with the full-mesh network and then I am able to define Affinities, giving me the ability to isolate workload traffic. I'm not talking about separate VLANs, I'm talking about real isolation: moving walls! Oh, and yes, this can be done dynamically. Affinities can be deployed and torn back down many, many times without a single word typed on a command line by a human.
[image: the walls, workload isolation with Affinities]

And finally, the roof; in composable terms this is the application integration layer. The application layer allows us to integrate with things like OpenShift/Kubernetes, OpenStack and vCenter (to name a few). We can look for events happening in these systems and modify the network configuration to support changes in these upper-layer cloud management systems. If you vMotion a VM from one ESXi host to another, the network port connected to the destination ESXi host is automatically configured, and you didn't even need to talk to the guy over in the network department. This means that the network is responsive to what the guys and gals are doing in the software-defined compute/storage world… finally!
[image: the roof, the application integration layer]

So, you can build your data center network like my Dad built his, or you can step into the world of self-provisioning, self-healing, software-defined networking.

I ask you, are you a Fred Flintstone or a George Jetson?

Want more?….check it out… Here

A year in the life of a wookie

DISCLAIMER: I am a self-proclaimed, self-trained software hack, I barely know what I am doing! I’m fairly sure this is how you do it.

I started to write some code and when I looked up, a year had gone by… I'm listening to Strawberry Fields Forever and wondering how a full year has slipped by without a post to the blog… So, in the words of Lennon and McCartney, let me take you down, 'cause I'm going to.

It's been quite a journey from being a network engineer to becoming a "developer" (I use the term loosely), and every day I discover it's a journey with no foreseeable end. I am learning new things all the time, and the more I know, the more new avenues open up to discover.

I started the year off puzzling over an interesting problem. I was working with Ansible and trying to automate Cumulus switches. It seemed fairly straightforward: all you need is a YAML file with "ALL" the variables for your data center (loopbacks, fabric IPs, router IDs) and a playbook, and voila, you have yourself a functioning data center.
[screenshot: the all.yaml file]

Not so fast, you ZTPheads. Sure, I no longer had to type the configuration into the switch, but I had to come up with all the variables in the all.yaml file. You're basically editing a text file with no syntax checking! That's not ZTP; you're just moving the workload over to making the ALL file.

I was working on a process to automatically generate the ALL file and came up with a simple solution: Antigua. This app takes 10 variables and spawns the entire spine/leaf network all.yaml file. It then uses Jinja2 templates to generate the full switch configurations and uploads the startup-config files into Arista's CloudVision Portal. After that the config is applied to the switches in the spine/leaf and everything (eBGP fabric) is up and running in less than a minute.
[screenshot: Antigua]
Want to roll out a new spine/leaf pod in your data center? How about having it up and running in less than 1 minute? Can you do that? I can.

Later I was challenged to have the switches simply self-configure, so I rewrote Antigua into "ISSAC", a special version of Antigua that runs directly on Arista switches. Basically, the switch boots and looks in its TFTP directory for a "key" file. If it's there, it sends it to all its neighbors (via ISSAC's neighbor database), then learns its position (spine01) and self-configures. The only thing you have to do is TFTP a key file into any one of the switches' TFTP directories. That's it! Even I can do that… Geesh!

After I got the mechanics down for Antigua, I started to think about other applications where I could reuse the teknology. I started looking at managing VXLAN tunnels by automatically generating "vxlan" configlets and loading them into Arista CVP. I came up with Subway.
[screenshot: Subway]
Subway uses a Mongo database and keeps track of all the changes to the VNI flood lists. It was a fun thing to write but needs a little more work. With the introduction of EVPN, there's not much left to do with the control plane. Definitely an application worth coming back to.
Longing for something I could really use on a day to day basis, I started looking at what it takes to MLAG a pair of Arista switches. Take a look, it’s like 16 commands per switch.
[screenshot: Arista MLAG configuration]
A couple of them are even reversed between the two peers. I hope you don't make a mistake; you'll have fun troubleshooting what you did wrong.

I really had the process refined for generating configlets and stuffing them into CVP, so I automated the MLAG process with Jetlag.
Jetlag basically takes two IP addresses, the seed port of the two ports used for the MLAG peer link between the switches, and the port number of the LACP downlink. Very simple application. It generates an MLAG configlet, saves it to CVP and applies it to the switches.
[screenshot: Jetlag]
If you think about it, these three applications could really become a single app. They all basically do the same thing: get user input, make a text file of the config, upload it to CVP and apply it to the switches in the inventory. After finishing these up and showing them around, Antigua started to get a lot of attention. It is still the only app that I know of that can do what it does. Gauntlet thrown!

I moved on to another challenge. A partner wanted to use HPE IMC to monitor his customer's network. If there was a real-time alarm in IMC, they wanted IMC to open an "incident" in ServiceNow and have two-way communication between the two to track resolution of the incident. I put together an app called "SnowBridge". SnowBridge uses IMC's and ServiceNow's APIs and sits in the middle running a continuous loop. It runs on the command line with no user interaction. This app got me thinking about how to divide up the tasks into smaller micro-services. More on this later.

With this effort out of the way, I started working on a request to streamline Antigua. Antigua will generate a YAML file that can be used with Arista, Comware or Ansible, and it will upload to any of the three systems (Tower support is in the works). The new request was to strip away the Flask framework, make it run from the command line, and just generate the config files and store them on the hard drive. Check… easy… done. It's called AntiguaCLI.

I had another request to update an application that runs in a Docker container and allows for using the APIs from Big Switch. More of a proof-of-concept app, it allows for logging into a Big Switch Networks BSN controller and looking at a couple of lists. I think I will automate the routing function of BSN next. This little project re-kindled my love for playing with Docker. The next project would take it to new heights.

So, HPE makes an application called OneView that manages the new Synergy compute platforms. It tracks uplink sets, networks, storage volumes, and server profiles: kind of a single pane of glass for managing these types of things. When someone adds a network or an uplink set, the rest of the network world (outside of OneView) doesn't have a clue it happened. There is some integration in IMC for the state change message bus: IMC listens to OneView's State Change Message Bus and learns of these events. But if you're not using IMC, surely there must be something else you can use to learn of these changes.

I looked at the hpOneView Python library and decided I could make an application that did just that, and called it "Spymongo". The whole solution (three Docker containers) can be used, or just a portion of it. It all depends on what you want to do.
[screenshot: Spymongo]
The first thing to do is docker-compose build the spymongo and mongo database containers. Once these two containers are running, the spymongo app automatically listens to the OneView SCMB and learns about all the changes being made. It watches for network changes and saves each document to the MongoDB. You don't have to use it in this fashion; you could take the app and make it do whatever you need. You could possibly use it without the database.

I use the database because I need just a little more functionality out of the app. With spymongo running, SCMB events are automatically stored in a Mongo database. I wrote a "front-end" for the database in Flask called "spyFront".
[screenshot: spyFront]
spyFront allows for verification of the database records. It also has a feature that lets me learn about every network event that OneView knew about prior to running spymongo. This gives me "parity" between the OneView database and spymongo's. The two databases are synced, and spymongo keeps them up to date by listening to the SCMB.
I did explore adding vCenter support to spyFront. Currently, spymongo looks at vCenter and downloads a list of virtual machines and their network connections. With spymongo running, anyone can make an application and mount the MongoDB. They no longer need to deal with SSH keys to have secure communication with OneView. Just simply use the Mongo database and go.

Throughout the year I have been on my favorite learning site, Udemy, and I have been looking at new things to play with. Hey, for 10 bucks it's the cheapest education you will find. I have taken a Puppet course, a Chef class, Flask/API/Docker training and a super heavy course in Angular where I learned about the Single Page Application.

Just last week I started looking at what Darth was doing (mind-blowing as usual) over at Kontrolissues, and it looks like a blast! I also started deep-diving Mesosphere's DC/OS. All of this is available on my private GitHub; it just needs to go through the Open Source Review Board before it becomes available to the public.

I hope it's not another year until I blog again. I promise to quit being so "lazy" and keep you up to date on the goings-on of a half-crazed nerf herder.

By the way, the lyric "climb in the back with your head in the clouds" kind of has a new meaning for me!
I wonder if anyone will notice this lyric is from "Lucy in the Sky with Diamonds" and not Strawberry Fields Forever… HA!