Why the push for containers and Kubernetes?

Applications are king. Everything we do from a data center hardware perspective is in support of delivering the application to the end user and storing the outcome of those user interactions. As soon as an application is released, developers are hard at work on the next version. New features, bug fixes and streamlined interfaces drive the need to improve the application. Unfortunately, past development methodologies produced monolithic applications with thousands of lines of code, and new versions could only be deployed every three to six months.

Deploying these huge applications in the data center meant spinning up a new virtual machine and loading the corresponding operating system (and necessary hardware drivers) along with the application. With the introduction of CI/CD (Continuous Integration/Continuous Delivery), a DevOps approach to designing and deploying applications, those monolithic applications needed to be broken down into smaller "chunks," each with explicit unit testing.

With containerization, each of those smaller pieces of code can be loaded into its own container, so that thousand-line application might now be deployed across 120 containers. Want 10 load-balanced copies of that application? That would result in 1,200 containers. It sounds daunting, but Kubernetes can deploy those 1,200 containers from a few lines in a deployment.yaml file. Want to scale up from there? A single command to the Kubernetes API makes more resources available to the application on the fly, with no service interruption.
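As a rough illustration, here is a minimal sketch of what such a deployment.yaml might look like. The application name, container image, and port below are hypothetical placeholders; a real manifest would also define resource requests, health probes, and a Service to do the load balancing.

# Minimal Kubernetes Deployment sketch (name, image, and port are hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical application name
spec:
  replicas: 10                  # the 10 copies described above
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # hypothetical image
          ports:
            - containerPort: 8080

Scaling up on the fly is then a single call to the Kubernetes API, for example kubectl scale deployment my-app --replicas=20, and Kubernetes rolls out the additional containers without interrupting the ones already serving traffic.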

Now that we have the smaller parts of our application, we can build automated testing and automated deployment around them. Thanks to the power of DevOps and CI/CD, Amazon pushes new software updates to its website every 11 seconds.
Applications are king; containerized applications are kingier.
