Using Helm on OpenShift

What is Helm?

There's a lot of buzz these days about Kubernetes, and for good reason. Kubernetes makes it possible for companies of almost any size to run containerized applications in a way that creates a formal separation between the people who manage your physical infrastructure (your DevOps or IT folks) and the people who deploy applications (your developers). It's not a perfectly clean separation, though: developers still have to define proxies, network volumes, CPU limits, and so on. OpenShift tackled this with templates, and the Kubernetes community has adopted a similar framework called Helm. If you are working on OpenShift, you don't have to be left out - it is possible to run Helm on OpenShift and take advantage of the charts the Kubernetes community is developing (although you may have to tweak them slightly).

Installing OpenShift

For the purposes of this article, I'm going to use an existing installation of OpenShift Origin, which I created by running the simple oc cluster up command on Ubuntu (I prefer to run this as root; if you do, you will also need to be root when you log in as the administrator):


$ oc cluster up
Using Docker shared volumes for OpenShift volumes
Using 127.0.0.1 as the server IP
Starting OpenShift using openshift/origin:v3.9.0 ...
OpenShift server started.

The server is accessible via web console at:
    https://127.0.0.1:8443

You are logged in as:
    User:     developer
    Password: <any value>

To login as administrator:
    oc login -u system:admin

Nothing I do in this article should behave differently if you are using the OpenShift Container Platform instead of Origin.

Installing Helm/Tiller

The next set of instructions is based on this blog post from the OpenShift website. It's a little verbose, so I'm going to give you the CliffsNotes version, but feel free to go there if you want more details.

Installing the Helm Client

First, we want to install the Helm client, which you can get from GitHub. In this article, I'll be using version 2.9.1. I only need the client, so I can install it like this:


$ curl -s https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz | tar xz
$ cd linux-amd64
$ ./helm init --client-only
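
As a quick sanity check, the client can report its own version without talking to a server:


$ ./helm version --client
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}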

I need to set an environment variable, HELM_HOME, that points to where Helm keeps its local configuration, which should be the .helm directory under your home directory (helm init creates it for us). A variable set at the command line only lives as long as the shell does, so I add a line to export it in my .bashrc/.zshrc:


export HELM_HOME=/home/osninja/.helm

I also add the location of the helm command to my PATH.
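
For example, one way to do that (assuming the tarball was unpacked into ~/linux-amd64 as above) is to copy the binary to a directory that's already on the PATH:


$ sudo cp ~/linux-amd64/helm /usr/local/bin/helm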

Installing Tiller

Next we create a new project where Tiller, the server side of Helm, will be hosted:


$ oc new-project tiller
Now using project "tiller" on server "https://127.0.0.1:8443".

Helm will need to know the Kubernetes namespace where Tiller is running. I do this by setting another environment variable called TILLER_NAMESPACE in my .bashrc/.zshrc:


export TILLER_NAMESPACE=tiller

We could technically install the Tiller server directly with helm init, but the client doesn't currently set up the service account and rolebinding that OpenShift wants. The nice folks at Red Hat have created an OpenShift template (OpenShift's rough equivalent of a Helm chart) that does everything we need at https://github.com/openshift/origin/raw/master/examples/helm/tiller-template.yaml. I will execute it like this:


$ wget -q https://github.com/openshift/origin/raw/master/examples/helm/tiller-template.yaml
$ oc process -p TILLER_NAMESPACE=$TILLER_NAMESPACE \
    -p HELM_VERSION=v2.9.1 -f tiller-template.yaml > tiller.yaml
$ oc create -f tiller.yaml
serviceaccount "tiller" created
role "tiller" created
rolebinding "tiller" created
deployment "tiller" created

Once the rollout completes, listing all the resources in the tiller namespace shows that it has started a pod:


$ oc get all
NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/tiller   1         1         1            1           55s

NAME                   DESIRED   CURRENT   READY     AGE
rs/tiller-65dfdddd48   1         1         1         55s

NAME                         READY     STATUS    RESTARTS   AGE
po/tiller-65dfdddd48-bpfld   1/1       Running   0          55s

Running the helm version command will now show a version for both the client and the Tiller server:


$ helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

Running a Sample Chart

Now that we have Helm installed, let's install a sample chart and demonstrate how easy it is to spin up an application on OpenShift with it. There is a simple NodeJS application that I'm going to use as an example: https://github.com/openshift/nodejs-ex. First, however, we need to create a new project for our app:


$ oc new-project myapp
Now using project "myapp" on server "https://127.0.0.1:8443".
...

We also need to grant edit access to Tiller's service account so that the server can modify our new project for us:


$ oc policy add-role-to-user edit \
"system:serviceaccount:${TILLER_NAMESPACE}:tiller"
role "edit" added: "system:serviceaccount:tiller:tiller"

Jim Minter forked the official openshift/nodejs-ex project and added a Helm chart, so now we are able to install it like this:


$ helm install https://github.com/jim-minter/nodejs-ex/raw/helm/helm/nodejs-0.1.tgz -n nodejs-ex
NAME:   nodejs-ex
LAST DEPLOYED: Fri Jul 27 01:51:24 2018
NAMESPACE: myapp
STATUS: DEPLOYED

RESOURCES:
==> v1/Route
NAME            AGE
nodejs-example  0s

==> v1/Service
NAME            TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
nodejs-example  ClusterIP  172.30.0.166  <none>       8080/TCP  1s
...

As you can see, the chart has created a number of resources for us, in the same way that picking an OpenShift template would have. If we look at all the resources in the project, we'll see that it ran a build and deployed the resulting image into a pod:


$ oc get all
NAME                               REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfigs/nodejs-example   1          1         1         config,image(nodejs-example:latest)
...
NAME                      TYPE      FROM          STATUS     STARTED         DURATION
builds/nodejs-example-1   Source    Git@e79929f   Complete   2 minutes ago   58s
...
NAME                        READY     STATUS      RESTARTS   AGE
po/nodejs-example-1-796kx   1/1       Running     0          1m
po/nodejs-example-1-build   0/1       Completed   0          2m
...
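
Helm itself also tracks the release; with the TILLER_NAMESPACE variable from earlier still set, you can list it and inspect its status (output omitted here):


$ helm ls
$ helm status nodejs-ex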

The chart also created a route, but the hostname is an autogenerated internal one:


$ oc get route
NAME             HOST/PORT                               PATH      SERVICES         PORT      TERMINATION   WILDCARD
nodejs-example   nodejs-example-myapp.127.0.0.1.nip.io             nodejs-example   <all>                   None

Since I own my own domain, I'm going to edit this route and change the hostname to something more meaningful to me. The domain I'm going to use, myapp.osninja.io, points to the same IP as my OpenShift server: when OpenShift receives a request with that hostname, it matches it to this route, the route directs traffic to the service, and the service directs traffic to my pods. OpenShift lets me edit routes like this:


$ oc edit route

In the editor that appears, the only thing I'm going to change is that hostname:


...
  selfLink: /apis/route.openshift.io/v1/namespaces/myapp/routes/nodejs-example
  uid: 91df5626-913f-11e8-9423-ee150393fa2b
spec:
  host: nodejs-example-myapp.127.0.0.1.nip.io
  to:
    kind: Service
...

I change it to this:


...
  selfLink: /apis/route.openshift.io/v1/namespaces/myapp/routes/nodejs-example
  uid: 91df5626-913f-11e8-9423-ee150393fa2b
spec:
  host: myapp.osninja.io
  to:
    kind: Service
...

I save the file and exit. Now when I point my browser to myapp.osninja.io, I see the NodeJS app that I deployed with Helm.
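
As an aside, if you'd rather avoid the interactive editor entirely, a single oc patch command should make the same change (with your own hostname substituted, of course):


$ oc patch route nodejs-example -p '{"spec":{"host":"myapp.osninja.io"}}'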

Conclusion

I'm only scratching the surface of what we can do with Helm, but the power of using templates is something that OpenShift developers are quite familiar with. By embracing Helm, we make it easier to utilize application definitions that have been developed by the Kubernetes community and bring them into OpenShift. I'll be using this approach in my next article to install the Anchore Open Source Engine on OpenShift.
