
Every small business that has faced growth challenges knows the headaches of rapidly scaling up without a good plan in place: things like increased stress on existing systems and processes, unexpected challenges with quality control, and all the extra time required to manage a higher volume of work. Many businesses find they need to reconfigure processes to regain the efficiency they lost in trying to meet higher demand without a framework for doing so.

With engineering teams deploying a growing number of application components to containers in the cloud, it’s no different. The more containers you add, the more effort is required to deploy, monitor, and scale them up and down. That’s where Kubernetes comes in: it’s an extensible container management platform that makes it easier for DevOps teams to orchestrate complex container-based architectures.

Here’s a look at how it works and what it’s designed to do.

When scaling up without a plan backfires

Imagine a classroom with twenty students and one teacher. The teacher has a good routine for creating lesson plans, grading papers, and working with students who need a little extra help. She’s able to keep an eye on students who misbehave from time to time and can maintain communication with all of their parents. Now, say the class gets moved to a larger room and is doubled to forty students.

All of a sudden, the teacher is facing a lot of extra work. It takes twice as long to grade papers, and she has less time for the students who need extra instruction. In class, she’s unable to monitor behavior, and the students’ performance declines. Parents can’t get hold of the teacher to discuss concerns, and test scores go down. All in all, the success of the class begins to suffer.

Now, say the teacher is given a teacher’s assistant. The TA sits in during lessons to keep an eye on bad behavior, grades papers, schedules extra help based on who needs it most, addresses parent concerns, and prioritizes meetings when needed. The result? The school can enroll more students in the class without a decrease in course quality.

Kubernetes lets you scale up without extra work or a decrease in performance.

If you’re using a container system like Docker to deploy your application, you might have wondered how efficient the approach would be if you doubled or even tripled the number of containers. Maybe you’ve deployed your application on a few servers, and it’s working well, but quickly scaling up to meet a spike in traffic by adding more servers would be too difficult to manage on your own.

Some organizations have thousands of containers, and that can be nearly impossible to manage without some sort of framework. This is where the Kubernetes platform really shines.

It’s a solution that can be tailored and extended to meet your exact requirements, with add-on features that make it easier for you to know what’s happening with your containers. You have access to recovery and health monitoring to keep things running when containers fail, and a smooth process for deploying new containers. It’s like a smart intermediary between you and your containers, and it’s designed to handle a few key concerns.

Kubernetes covers the before, during, and after of container deployment.

Maintaining containers in their intended configuration, known as “desired state management,” presents challenges that get harder to handle the more spread out an infrastructure becomes. To run an app in a container, you have to take into account things like networking, scheduling, distribution, and load balancing.

Without getting too far into all of the moving parts of Kubernetes’ container management technology (it does get complex, so a Kubernetes pro is crucial for getting your implementation right), Kubernetes can help with all of the above, managing tasks like scheduling and workload management. The three main areas it supports are:

Deployment: Feed a deployment file to the Kubernetes API specifying what your application’s containers need, including criteria like RAM, CPU, and file storage, all things that define your deployment. Kubernetes will take it from there.
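As a rough sketch, a deployment file is a YAML manifest like the one below. The app name, image, and resource figures here are placeholders, not recommendations:

```yaml
# deployment.yaml: a hypothetical deployment file.
# "web-app" and the image/resource values are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                  # desired state: three copies running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: example/web-app:1.0   # placeholder image
        resources:
          requests:
            memory: "256Mi"    # RAM the container asks for
            cpu: "250m"        # a quarter of a CPU core
          limits:
            memory: "512Mi"
            cpu: "500m"
```

Feeding this to the API (for example, with `kubectl apply -f deployment.yaml`) declares the desired state; Kubernetes then schedules the three replicas and keeps them running.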

Scaling: Rather than scaling up inefficiently by deploying one app container per server when you need more capacity, Kubernetes can make the call for you. It can determine how much space and hardware you need and spin up containers for you on the fly. “Schedulers” identify which resources you need at what time, and prioritize accordingly.
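One concrete form this takes is the HorizontalPodAutoscaler, which adjusts replica counts against a target metric. This sketch assumes a Deployment named `web-app` (a placeholder) and the `autoscaling/v2` API:

```yaml
# Scale web-app between 2 and 10 replicas, aiming to keep
# average CPU utilization around 70%. All values are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

You can also scale by hand when needed (for example, `kubectl scale deployment web-app --replicas=5`), but the autoscaler is what lets Kubernetes make the call for you.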

Monitoring: Once you’ve deployed, maintenance is the next issue Kubernetes handles for you. It can detect when a container goes down, spin up a replacement, and auto-recover. Rolling restarts, automatic health checks, and extensions that let you manage SSL certificates automatically make maintenance a much simpler process.
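Those automatic health checks are configured as probes on the container spec. This fragment of a pod template is a sketch; the endpoint paths, port, and timings are illustrative placeholders:

```yaml
# Probe configuration inside a pod template's container list.
containers:
- name: web-app
  image: example/web-app:1.0     # placeholder image
  livenessProbe:                 # restart the container if this fails
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 15
  readinessProbe:                # withhold traffic until this passes
    httpGet:
      path: /ready
      port: 8080
    periodSeconds: 5
```

The liveness probe drives the auto-recovery described above, while the readiness probe keeps traffic away from a container that isn’t ready to serve yet.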

Some of Kubernetes’ other moving parts include:

  • Labels are like name tags, used to identify containers and pods, and as a bonus, they’re searchable.
  • Pods are the smallest deployable unit in Kubernetes, a runnable unit of work. You can have one container per pod or several tightly coupled containers in the same pod. Kubernetes connects the pod internally and to an external network.
  • Replication controllers duplicate and manage multiple pods.
  • Service discovery allows pods to share services and find one another.
  • Volumes let you store your container’s data and expose it to other pods.
  • Namespaces are grouping mechanisms inside Kubernetes that isolate groups of pods from the rest of the cluster if you want them on their own.
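A few of these pieces fit together in a manifest like the following sketch, where a pod carries a label and a service uses that label to discover and route to it. The names, image, and port numbers are placeholders:

```yaml
# A pod labeled "app: web-app" and a Service that discovers it
# by that label.
apiVersion: v1
kind: Pod
metadata:
  name: web-app-pod
  labels:
    app: web-app          # the searchable "name tag"
spec:
  containers:
  - name: web-app
    image: example/web-app:1.0
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-svc
spec:
  selector:
    app: web-app          # service discovery: match pods by label
  ports:
  - port: 80
    targetPort: 8080
```

Because labels are searchable, a command like `kubectl get pods -l app=web-app` finds the pod the same way the service does.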

Ways to Implement Kubernetes

Kubernetes is an extensible platform that you can build on and add features to. It can be installed a number of ways, depending on how many extras you need from your implementation, what your hardware scenario is (bare metal, an AWS installation, or a virtual machine (VM)-based architecture, etc.), and the skill level of the DevOps professional you have helping manage your installation.

Here are three options for installing Kubernetes. Learn more about these three options and their pros and cons in this comparison article.

1. “Vanilla” Kubernetes.

This is the most basic installation, with all the features of a Kubernetes release, free and open-source. The install itself requires an experienced professional, as installation isn’t automated as it would be with a vendor-managed version (although there are some community-developed automation options). The “vanilla” Kubernetes networking infrastructure can be flexible but also complex to set up, despite the availability of plugins to manage aspects of networking. Most of the features you’ll get with a basic Kubernetes install are going to come from third-party add-ons.

2. A vendor-managed distribution with added features.

Each vendor will add some proprietary features, and certain aspects, like the networking challenges mentioned above, will be abstracted and handled for you. Vendors include Tectonic, Rancher, and Canonical Distribution. Some have a freemium model that includes support should you run into problems.

3. A Kubernetes-based Platform-as-a-Service (PaaS).

Complete platforms give you all the bells and whistles, from testing and staging to production and deployment. PaaS providers include OpenShift and DEIS.

Who to Hire

Implementing a Kubernetes cluster requires a cloud architect with Kubernetes expertise to get you up and running. Without a PaaS or vendor-managed solution, you’ll need in-house expertise to handle aspects such as access control, logging, monitoring, and the application lifecycle.