This blog entry has some command line and UI examples in it that you can try yourself. If you go to the OpenShift Origin site (Origin is the open source community edition of OpenShift), you will find various ways to run OpenShift directly on your own machine. If you take the oc cluster up route, you can check out my troubleshooting tips if you run into problems. You can also run the examples using the OpenShift Interactive Tutorial, powered by Katacoda.
In this entry, I'll be using OpenShift Origin. The Enterprise OpenShift Container Platform is similar but has additional controls and also lags slightly behind the open source version in terms of features as they are refined by the OpenShift community. Everything I will talk about applies in both environments.
In the previous entry in this series, I started introducing the OpenShift platform by transitioning from Kubernetes on the command line to OpenShift on the command line. Defining your container application infrastructure in OpenShift was fairly similar to how you did it in Kubernetes, and that made for an easy transition.
This week, I'm going to talk about one of the main ways that OpenShift improves on vanilla Kubernetes. You can build and deploy applications with Kubernetes fairly easily, but the folks at Red Hat wanted to make it even easier. They have built a rich UI that lets you assemble applications from templates: you fill in a few details, and all of the Kubernetes resources are created automatically, pre-configured so that they know how to talk to one another.
OpenShift is also more geared towards development of container applications, not just hosting them. There are additional security controls in place, as well as project spaces, so that you can more easily manage multiple applications. This is especially important in a multitenancy environment, where you are hosting many applications on a single cluster; OpenShift makes managing such environments much easier. For the purposes of this entry, I'm going to focus on a single project. In the future I may add another entry that discusses these multitenancy aspects and talks about roles and permissions.
When you log into OpenShift Origin, one thing you can do is to create a project. I'm using oc cluster up for my demo, so it has already created a starter project for me, but as you can see in the top right-hand corner, I can easily create my own project if I want.
A project is the equivalent of a Kubernetes namespace. I didn't talk about namespaces much before because they aren't as necessary to understand when you are first getting to know Kubernetes. By using namespaces, you can divide your resources into logical groups that are related. In the OpenShift world, you can do the same thing, only the UI refers to them as projects.
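You can see this project/namespace equivalence directly from the command line. A quick sketch, assuming you have a running cluster and are logged in with both oc and kubectl (the osdemo name is just the example used later in this entry):

```shell
# OpenShift view and Kubernetes view of the same underlying objects:
oc get projects          # lists projects
kubectl get namespaces   # lists the namespaces backing them

# Creating a project creates a namespace under the covers:
oc new-project osdemo
kubectl get namespace osdemo   # the namespace now exists
```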
When I was working on the command line (using minishift for the demo), I used the default project that was created for me. This time, I want a project of my own, so I'll select the Create Project button.
You need to give your project (the Kubernetes namespace) a name, and the name is subject to the same restrictions as Kubernetes namespace names: it may only contain lower-case letters, numbers, and dashes, and cannot start or end with a dash. This may seem restrictive, but in practice it is rarely an issue unless you are working with lots of projects, and OpenShift lets you specify a Display Name to give the project a more meaningful name in the UI. You can also add a description if you like. I'm calling my project osdemo. After I fill in the details and click Create, I'm presented with multiple ways of building my project: I can browse a catalog of templates, deploy an image, import some YAML, or even pull in resources from other projects. For this week, I'm going to use a template.
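The naming rule above is the standard DNS-label restriction that Kubernetes applies to namespaces. As a rough sketch (the regex here is my own restatement of the rule, not pulled from OpenShift's source), you can sanity-check a candidate name in the shell before trying it in the UI:

```shell
# Validate a candidate project name: lower-case letters, numbers,
# and dashes only; must start and end with a letter or number.
valid_project_name() {
  echo "$1" | grep -Eq '^[a-z0-9]([-a-z0-9]*[a-z0-9])?$'
}

valid_project_name "osdemo" && echo "osdemo: ok"
valid_project_name "-bad-"  || echo "-bad-: rejected (leading/trailing dash)"
valid_project_name "OSDemo" || echo "OSDemo: rejected (upper-case)"
```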
You can see that Origin comes with a number of templates built in. Just for simplicity, let's start with a simple nginx project like I've used in my previous entries.
Selecting a template will open up a wizard that will walk you through the steps of creating an application using that template. The first step describes what the template is for so that you can be sure it is what you are trying to build.
The second step involves filling out some configuration details for the template. For example, if I were picking a MySQL template, I would need to provide the root password, the database name, etc. This way, you get all the configuration out of the way at the start so that things just work when the actual resources are created. In this case, I need to choose the project I'm adding the server to, the version of nginx I want to use, a name for the server (more on this in a second), and finally a git repo where the project source will be located.
You might be wondering why we would need a source repository. Not all templates require a source repository, but this particular template does. Why? So that you can easily define what content will be hosted on the nginx server. For this demo, I'm just going to use the sample repo that the template suggests.
Once I select the Create button, my nginx server is created.
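The UI wizard is doing the same thing you could do yourself with oc new-app. A sketch, assuming the built-in template is named nginx-example and exposes NAME and SOURCE_REPOSITORY_URL parameters (template and parameter names vary, so list them first rather than trusting my guesses):

```shell
# List the parameters this template actually accepts:
oc process --parameters -n openshift nginx-example

# Then instantiate it, overriding the values the UI wizard asked for.
# (The parameter names and repo URL below are assumptions -- check them
# against the output of the command above on your own cluster.)
oc new-app --template=nginx-example \
  -p NAME=nginx-proxy \
  -p SOURCE_REPOSITORY_URL=https://github.com/sclorg/nginx-ex.git
```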
Jumping to the command line for a minute, we can use the oc command to show that a number of Kubernetes resources were created for us as part of this template:
$ oc project osdemo
Now using project "osdemo" on server "https://127.0.0.1:8443".
$ oc get all
NAME                            REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfigs/nginx-proxy   1          1         1         config,image(nginx-proxy:latest)

NAME                       TYPE      FROM         LATEST
buildconfigs/nginx-proxy   Source    Git@master   1

NAME                   TYPE      FROM          STATUS     STARTED         DURATION
builds/nginx-proxy-1   Source    Git@d467694   Complete   2 minutes ago   47s

NAME                       DOCKER REPO                          TAGS      UPDATED
imagestreams/nginx-proxy   172.30.1.1:5000/osdemo/nginx-proxy   latest    About a minute ago

NAME                 HOST/PORT                             PATH      SERVICES      PORT       TERMINATION   WILDCARD
routes/nginx-proxy   nginx-proxy-osdemo.127.0.0.1.nip.io             nginx-proxy   8080-tcp                 None

NAME                     READY     STATUS      RESTARTS   AGE
po/nginx-proxy-1-build   0/1       Completed   0          2m
po/nginx-proxy-1-s84xd   1/1       Running     0          1m

NAME               DESIRED   CURRENT   READY     AGE
rc/nginx-proxy-1   1         1         1         1m

NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
svc/nginx-proxy   ClusterIP   172.30.26.88   <none>        8080/TCP   2m
It created these resources for us:
- deploymentconfigs/nginx-proxy - our deployment configuration that essentially captures the target state.
- buildconfigs/nginx-proxy - a build configuration that specifies details on how to build the image to run in a container.
- builds/nginx-proxy-1 - a build of the image.
- imagestreams/nginx-proxy - a resource that defines where the image is stored (currently in an internal registry running in OpenShift).
- routes/nginx-proxy - a resource that exposes a URL that can be used to route traffic to our server - the route directs traffic to the service svc/nginx-proxy.
- po/nginx-proxy-1-build - a pod that was created to build our image; the build has now finished.
- po/nginx-proxy-1-s84xd - the pod where our server is currently running.
- rc/nginx-proxy-1 - the replication controller that ensures the desired number of pods (one, in this case) is always running.
- svc/nginx-proxy - the service that load balances traffic across the pods - we only have one pod currently, but if we added additional pods, we could distribute the traffic through this service.
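Each of these resources can be inspected individually with oc. A few illustrative queries (the jsonpath expressions are my own; the resource names come from the oc get all output above):

```shell
# Pull out the route's external hostname and the service's cluster IP,
# the two addresses used to reach the server below.
oc get route nginx-proxy -o jsonpath='{.spec.host}'; echo
oc get svc nginx-proxy -o jsonpath='{.spec.clusterIP}'; echo

# Follow the build log if you want to see how the image was built:
oc logs build/nginx-proxy-1
```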
If you recall from when we built this application on the command line, we could easily hit the nginx server through the service IP (on port 8080, where the service is listening), and as we can see, the IP in this case is 172.30.26.88. If I run curl against this IP on any host in the cluster (in this case, just the host where I'm running Origin), I will see the content of the sample application:
$ curl -s 172.30.26.88:8080
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
  <title>Welcome to OpenShift</title>
  <style>
  ...
  </style>
</head>
<body>
  <section class='container'>
    <hgroup>
      <h1>Welcome to your static nginx application on OpenShift</h1>
    </hgroup>
    ...
</body>
</html>
So just like before, we were able to deploy our simple nginx application easily. We can also use the URL provided by the route:
$ curl nginx-proxy-osdemo.127.0.0.1.nip.io
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
  <title>Welcome to OpenShift</title>
  <style>
  ...
You will notice that with the route, I didn't use port 8080, but instead went through the default port of 80. All traffic coming into OpenShift goes through either port 80 or 443. This provides more security, since these are privileged ports and are more strictly controlled than higher-numbered ports like 8080. We can also view the content in a browser at that URL, though making that work correctly involves some technical details that are beyond the scope of this series; it is something I'll most likely cover in a deep dive.
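The route's hostname works without any DNS setup because nip.io is a wildcard DNS service: any name of the form something.<ip>.nip.io resolves to the IP embedded in the name. You can see the embedded address just by parsing the hostname, which this local sketch does (no cluster or network access needed):

```shell
# Extract the IP address embedded in a nip.io-style hostname.
host="nginx-proxy-osdemo.127.0.0.1.nip.io"
ip=$(echo "$host" | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}')
echo "$ip"   # -> 127.0.0.1
```

So when curl asks for nginx-proxy-osdemo.127.0.0.1.nip.io, DNS hands back 127.0.0.1, and the request lands on the OpenShift router running on the local machine.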
This week I started talking about the OpenShift UI and how you can create a project in it. I had planned to continue the series further, but in reality this is a beginner's guide to OpenShift, and it is best to stop here. There is still a lot more ground to cover, but you should already be able to see how easy it is to get started. I'm going to wrap this series up, and soon I'll start some deep dives into different areas of OpenShift to explore what this platform really brings to the table and why Red Hat has been at the forefront of enabling companies to migrate their apps to containers. If you have questions or comments about anything in this series, feel free to tweet at me, email me, or just leave a comment below.