OpenShift For Mere Mortals: Kubernetes

Later in this blog entry, there are some commands you can try for yourself. You can do this on your own computer using minikube, or you can try the online interactive tutorials powered by Katacoda, which also use minikube.

In last week's blog post, I discussed orchestration and how you can manage a bunch of containers together as one application. This week I am going to take it a step further and start exploring how you manage more than one application - some companies run hundreds or even thousands of containers across their applications. How can you easily manage these heavy workloads and scale them as traffic increases or decreases?

A long time ago (not in a galaxy far far away - but possibly a long plane ride away), a company named Google was actively driving the adoption of containers as a way of speeding up the development and deployment of software applications. They came up with a framework they called Borg (see the Borg entry on Wikipedia if you aren't familiar with it). You can read the white paper on Borg for more details, but essentially it is how Google built an infrastructure of tens of thousands of servers and managed it effectively.

Eventually, some of the developers of Borg wanted to bring it (or something like it) to the Open Source community and let others manage large numbers of applications the same way. What they came up with was Kubernetes. Kubernetes lets you build clusters of servers for running containers and manage those clusters from the command line or a REST client. Master servers manage the cluster state, schedule jobs, and execute API requests, while nodes do the bulk of the work of actually running the containers.
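If you have a cluster handy, you can list its servers with kubectl. On minikube there is just a single server that plays both roles; the output will look roughly like this (age and version will vary):

$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     master    1h        v1.10.0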

Kubernetes is actually a vast topic, so I'm going to talk about it more from the perspective of a user than an administrator. There are a lot of interesting details in how Kubernetes manages the work under the hood, but they are really beyond the scope of what I'm trying to cover in this blog series (for example, although it would be interesting to talk about the API server, the scheduler, the kubelet processes, and so on, it would be a significant detour from the container side of things). The Kubernetes documentation is fairly extensive if you are interested in the low-level details of how the platform works. This is also not a comprehensive guide to everything in Kubernetes - for example, there are many resource types but I'm only going to talk about a few of them - but hopefully it will give you enough to understand what Kubernetes brings to the container world.

Kubernetes Resources

The foundation of Kubernetes is a set of building blocks (sometimes called primitives, although I prefer the term resources) that provide mechanisms for deploying and managing applications on groups of servers called clusters. Kubernetes allows you to apply metadata to resources, such as labels that identify which resources should receive particular kinds of traffic. What you are building is usually a flow that looks like this:

[Figure: the flow of Kubernetes resources]

Now I'm going to talk a little bit about each of the items in this flow.

Ingress

The ingress resource defines rules that allow external requests to reach endpoints within the cluster. You can configure the ingress to route traffic based on URLs, load balance traffic, terminate SSL, and perform other networking-related operations. Typically you point the ingress resource at a service.
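As a taste of what this looks like, here is a minimal ingress definition that routes all traffic for one host to a service (the hostname is made up for illustration, and the service name matches the nginx example later in this post):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: demo.example.com        # hypothetical hostname for this example
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-nginx # the service to route traffic to
          servicePort: 80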

Service

A service is a resource that groups a set of pods together. Typically, the pods grouped by a service are all running the same containers internally, and the service effectively becomes a load balancer that distributes traffic among the group of pods. Services also get their own IP address and a DNS name so that other resources in the cluster can easily do service discovery and route traffic.
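A minimal service definition looks something like this (a sketch only; the label and ports are chosen to match the nginx example later in this post):

apiVersion: v1
kind: Service
metadata:
  name: demo-nginx
spec:
  selector:
    run: demo-nginx   # send traffic to pods carrying this label
  ports:
  - port: 80          # port the service listens on
    targetPort: 80    # port the pods are listening on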

Pod

A pod contains one or more containers that are deployed together onto a server. A pod is assigned a unique IP address within a cluster, so processes running inside the pod can freely use ports without conflict. Pods allow for mounting of volumes, similar to how you would mount volumes into a container. Pods are the unit of scaling within Kubernetes, so you spin up new pods to add additional workers to handle the load. For this reason, it is typically encouraged to have only one container per pod so that containers can scale independently. You can, however, have multiple containers within a pod (a good example is a sidecar container running a Splunk forwarder that collects the logs from the other container and forwards them to a Splunk instance).
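For reference, a bare-bones pod definition looks like this (the names are illustrative, and as you'll see below, you usually let a deployment create pods rather than creating them yourself):

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    run: demo-nginx   # a label that a service selector can match
spec:
  containers:
  - name: nginx
    image: nginx      # image to run in the container
    ports:
    - containerPort: 80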

Replica Sets

A replica set is a special controller that is in charge of ensuring that the specified number of pods are running at all times. This task was originally delegated to replication controllers, but now replica sets have replaced them. Typically replica sets are not created by developers directly but are instead created by deployments.

Deployments

A deployment (sometimes called a deployment controller) is a resource that defines the desired state and how the resources in that desired state should be deployed. Typically a deployment will define a set of containers that should be deployed, the number of replicas to create, what images should go in those containers, and what ports should be exposed. You can update the deployment resource to change the desired state, and it will kickstart the process to change the current state to match.
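Here is a sketch of a deployment definition, trimmed to the essentials (the full YAML that Kubernetes generates for a deployment appears later in this post):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo-nginx
spec:
  replicas: 1                 # desired number of pods
  template:                   # template for the pods to create
    metadata:
      labels:
        run: demo-nginx
    spec:
      containers:
      - name: demo-nginx
        image: nginx          # image to run
        ports:
        - containerPort: 80   # port to expose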

A Simple Kubernetes Application

Let's start with a simple example of an application running in Kubernetes. For these examples, I'm going to assume that you already have Kubernetes installed and running and the kubectl command has been properly initialized. When you use a tool like minikube, all of this happens when you start it.
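If you are starting from scratch, spinning up minikube typically amounts to something like this (the exact flags vary by version and VM driver):

$ minikube start
$ kubectl cluster-info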

Before continuing on, I would suggest running these commands:

$ kubectl completion bash > ~/.kc_completion.sh
$ source ~/.kc_completion.sh

This will enable bash completion for kubectl commands and will save you valuable time. This completion even works for the names of resources like pods, which often have randomly generated names.
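To make the completion stick across shell sessions, you can source the file from your profile, for example:

$ echo 'source ~/.kc_completion.sh' >> ~/.bashrc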

We can run a simple nginx container in Kubernetes like this:


$ kubectl run demo-nginx --image=nginx --port=80
deployment.apps "demo-nginx" created

Now if we look at our resources, we'll see that a few different ones have been created for us:


$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
demo-nginx   1         1         1            1           2m
$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
demo-nginx-66877b9686-72hlv   1/1       Running   0          2m
$ kubectl get replicasets
NAME                    DESIRED   CURRENT   READY     AGE
demo-nginx-66877b9686   1         1         1         2m

Now, our nginx container is running, and it has its own IP address:


$ kubectl describe pod demo-nginx-66877b9686-72hlv | grep IP
IP:             172.18.0.4

We can even send HTTP requests to it with curl:


$ curl -s 172.18.0.4

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

This only works, though, if I'm on the same machine as the pod. The pod is assigned an internal IP that isn't exposed anywhere else, not even to other servers in the cluster (it's hard to demo this inside minikube, but if you happen to have a Kubernetes cluster, try accessing the IP from another server in the cluster). In order to make the pod reachable by resources on other servers in the cluster, I need to create a service to route traffic to it. Fortunately, Kubernetes makes that fairly easy:


$ kubectl expose deploy/demo-nginx
service "demo-nginx" exposed
$ kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
demo-nginx   ClusterIP   10.105.80.127   <none>        80/TCP    16m
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   17m

The IP address for our service is a cluster IP, which means that it is accessible from any server in the cluster. It still isn't accessible from the outside world, but we can modify the expose command to expose our service on a random external port on the cluster:


$ kubectl delete service/demo-nginx
service "demo-nginx" deleted
$ kubectl expose deploy/demo-nginx --type=NodePort --port=80
service "demo-nginx" exposed
$ kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
demo-nginx   NodePort    10.101.248.98   <none>        80:32470/TCP   27s
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        3m

As you can see, port 32470 is now open. We can now make requests through the external IP of the cluster (in the case of minikube, we can use the minikube ip command to get this IP; depending on how minikube is running, the loopback IP of 127.0.0.1 may also work):


$ curl -s $(minikube ip):32470

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

From here, we could easily manage other tasks that we want to perform with our nginx containers. We can easily scale up by adding more pods (see the command below). We can create an ingress to give our endpoint a URL. We can define other services with other pods that can be called from our nginx containers. We just define the resources we need and Kubernetes takes care of the rest. All of this can be done from the command line or by calling the REST APIs, which makes automating this kind of thing quite easy. Cloud providers will typically host a UI that allows you to do a lot of this without knowing about the resource types or even that you are running Kubernetes.
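For example, the scale-up mentioned above would be a one-liner (the replica count here is arbitrary, and I'll leave the deployment at one replica for the rest of this post so the outputs below stay uncluttered):

$ kubectl scale deploy/demo-nginx --replicas=3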

Defining Kubernetes Resources

So far, I've been creating resources by issuing simple kubectl commands, but Kubernetes is extremely configurable. Each resource that I created has a definition that is written in YAML. You can easily see this YAML if you want:


$ kubectl get deployments/demo-nginx -o yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-05-25T20:08:28Z
  generation: 1
  labels:
    run: demo-nginx
  name: demo-nginx
...

You can save this YAML and then easily recreate the resource later if you need to. This is one of the most powerful things about Kubernetes - it is essentially infrastructure as code. You can capture the current state, delete all the resources, and then easily recreate them (that doesn't include any state information held by the processes running inside the pods, of course, but there are ways to externalize state so that it won't be lost). You create a resource from a YAML file by using the create command:


$ kubectl create -f nginx2.yaml
deployment.extensions "demo2-nginx" created

In this case, I created another nginx deployment by altering the original YAML file to replace all instances of demo-nginx with demo2-nginx. Now when I look at my deployments, pods, and replica sets, I'll see that there are two of each:


$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
demo-nginx   1         1         1            1           10m
demo2-nginx  1         1         1            1           1m
$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
demo-nginx-66877b9686-72hlv   1/1       Running   0          10m
demo2-nginx-748557974b-bb72w  1/1       Running   0          1m
$ kubectl get replicasets
NAME                    DESIRED   CURRENT   READY     AGE
demo-nginx-66877b9686   1         1         1         10m
demo2-nginx-748557974b  1         1         1         1m
$ kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
demo-nginx   NodePort    10.101.248.98   <none>        80:32470/TCP   5m
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        3m

The exception is the services, but that is because the original deployment YAML didn't include our service. Technically, the service isn't really part of the pod deployments, but rather a way of routing traffic from within the cluster to the appropriate pods. We could easily, however, create a clone service in the same way.
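For example, the clone service would just be one command, mirroring the earlier expose command (I won't actually run it here, to keep the outputs below uncluttered):

$ kubectl expose deploy/demo2-nginx --type=NodePort --port=80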

I can't go into all of the details of the Kubernetes configurations (again, too broad for this blog post), but fortunately, Kubernetes easily lets you explore these configurations using the explain command. For example, I can use it to see all the resource types available:


$ kubectl explain
You must specify the type of resource to explain. Valid resource types include:

  * all
  * certificatesigningrequests (aka 'csr')
  * clusterrolebindings
  * clusterroles
  * componentstatuses (aka 'cs')
  * configmaps (aka 'cm')
  * controllerrevisions
  * cronjobs
  * customresourcedefinition (aka 'crd')
...

Note that some of the resource types have an alias, such as how configmaps can also be referred to as cm. Anywhere a resource type is expected and that resource type has an alias, you can just use the alias. For example, above I got the deployments, pods, replica sets, and services using their full names, but I can also use their aliases:


$ kubectl get deploy
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
demo-nginx   1         1         1            1           10m
demo2-nginx  1         1         1            1           1m
$ kubectl get po
NAME                          READY     STATUS    RESTARTS   AGE
demo-nginx-66877b9686-72hlv   1/1       Running   0          10m
demo2-nginx-748557974b-bb72w  1/1       Running   0          1m
$ kubectl get rs
NAME                    DESIRED   CURRENT   READY     AGE
demo-nginx-66877b9686   1         1         1         10m
demo2-nginx-748557974b  1         1         1         1m
$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
demo-nginx   NodePort    10.101.248.98   <none>        80:32470/TCP   5m
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        3m

I can also use it to explain one of the resources (let's pick configmaps since I've never used it before):


$ kubectl explain configmaps

KIND:     ConfigMap
VERSION:  v1

DESCRIPTION:
     ConfigMap holds configuration data for pods to consume.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
...

As part of the explanation, it will give you a set of fields. These fields are the top-level nodes in the YAML - if you look back at where I dumped the YAML for the deployment, you will notice the first line is apiVersion, followed by kind, etc. These fields can have child nodes that are indented. You can use the explain command to see these child nodes as well, to any depth. For example, the deployment resource above has a section that looks like this:


...
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: demo-nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
...

If I wanted to understand the rollingUpdate configuration, I would just run:


$ kubectl explain deploy.spec.strategy.rollingUpdate

KIND:     Deployment
VERSION:  extensions/v1beta1

RESOURCE: rollingUpdate <Object>

DESCRIPTION:
     Rolling update config params. Present only if 
     DeploymentStrategyType = RollingUpdate.

     Spec to control the desired behavior of rolling update.

FIELDS:
   maxSurge     <string>
     The maximum number of pods that can be scheduled above the 
     desired number of pods. Value can be an absolute number (ex: 5) 
     or a percentage of desired
...

As you can see, if you don't understand a part of the configuration, you can just ask Kubernetes to explain it. The hardest part might be getting the initial configurations in place, but as you saw above, we can generate the most basic ones with simple commands.
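One trick worth knowing (assuming your version of kubectl supports these flags): combining --dry-run with -o yaml prints the YAML that a command would create without actually creating anything, which gives you a starting file to edit:

$ kubectl run demo-nginx --image=nginx --port=80 --dry-run -o yaml > demo-nginx.yaml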

Conclusion

This week, I talked about Kubernetes and how you can use it to manage multiple container deployments using simple commands. I also talked about the types of Kubernetes resources and how they are defined with YAML configurations. I demonstrated that you can easily save these configurations and recreate resources with a simple create command.

Next week, I will introduce the OpenShift platform, which builds on top of Kubernetes and provides some additional ways of building and managing your container applications. I will also touch upon OpenShift.io, a cloud platform that allows you to code, build, test, and deploy applications all in one easy-to-use interface.
