Soup to Nuts: Building a Container Image CI/CD Pipeline with OpenShift and Jenkins

I recently wrote a blog series called OpenShift for Mere Mortals that talks in depth about what containers are, how you build container images, what container orchestration is about, using Kubernetes to define container infrastructure, and lastly how OpenShift can make doing all of this much easier.

This week I'm starting a new series to dive into creating a Jenkins pipeline for building container images for use in OpenShift and how you can use the Anchore Jenkins Plugin to scan images before they are deployed. I'm going to start by talking about building an application in OpenShift and integrating Jenkins to build a CI/CD pipeline.

For the purposes of this post, I'm going to use a standalone Jenkins server and an OpenShift Origin server (the steps I'm taking in this post will work with the OpenShift Container Platform as well). I also have Docker CE installed. I'm running these servers on Ubuntu.

Jenkins Setup

You can download Jenkins to run on a wide variety of platforms including Linux, Mac OS X, and Windows, but you can also run it inside a Docker container or on a Java application server like Tomcat. I use some Ubuntu servers for my research and development, so I installed it using the Debian packages. Once you have the server installed and running, it will be available on port 8080.
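On Ubuntu, the Debian-package install looks roughly like this (this is a sketch using Jenkins' official apt repository; the key URL and repo path may differ for your Jenkins version, so check the download page):

```shell
# Add the Jenkins apt repository key and source list
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
echo "deb https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list

# Install and start the service
sudo apt-get update
sudo apt-get install -y jenkins
sudo systemctl start jenkins   # UI comes up on http://<server>:8080
```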

OpenShift Setup

There are several methods for installing an OpenShift Origin instance. If you are just getting started, the easiest thing to do is download the oc client and run it in a container. Alternatively, you can run OpenShift Origin in a virtual machine using Minishift or you can install a full server on Linux. I prefer just using the container installation of OpenShift Origin because it works great and is easy to get started by just running the oc cluster up command:

$ oc cluster up
Using Docker shared volumes for OpenShift volumes
Using as the server IP
Starting OpenShift using openshift/origin:v3.9.0 ...
OpenShift server started.

The server is accessible via web console at:

You are logged in as:
    User:     developer
    Password: <any value>

To login as administrator:
    oc login -u system:admin

Setting Up A Sample Pipeline Project

With our server up and running, we first log in as specified above. There is a default project defined when you start up Origin with the oc tool. You can use this project or create a new one. For the purposes of this post, I'm going to start fresh with a new project by selecting the Create Project button at the top right:
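The same thing can be done from the command line. I'm assuming the project name pipeline-demo here, which matches the namespace the pipeline script uses later:

```shell
# Log in as the developer user (any password works on cluster up)
oc login -u developer -p anything

# Create a fresh project to work in
oc new-project pipeline-demo --display-name="Pipeline Demo"
```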


There is a Pipeline Build Example app in the catalog that has a Jenkins server and a sample app bundled together:

Chances are, however, that most people will be integrating OpenShift with a shared Jenkins instance that is building your non-container apps as well as your container apps. This is what I'll be doing, so instead I'll just pick the NodeJS + MongoDB standalone app template:

The template has a bunch of fields that you can use to configure the project, but we'll just stick with all the defaults:

Selecting the Create button at the bottom of the Configuration step will kick off creating the app resources within our project. It creates two primary service resources in our project, a nodejs-mongo-persistent service and a mongodb service. These services essentially represent our main application and the database.
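You can confirm the two services from the command line as well; something like the following should list the app and database services (names come from the template defaults):

```shell
# List the services the template created in our project
oc get svc -n pipeline-demo
# Expect to see mongodb and nodejs-mongo-persistent listed
```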

Viewing Our Sample Application

After I select Close on the Results step, Origin will bring us back to our main projects screen. Now if I select the Pipeline Demo project in the top right, I will see an overview of my project, showing that I have two deployments and each deployment has one pod running (the pod being where our containers are running).

Our project template included a default route with a generated URL. You may be wondering what in the world this is, but if you do an nslookup of it on the command line where you ran oc cluster up, you'll see that it resolves to the loopback IP address:


In other words, this URL is a descriptive name that resolves to that loopback address. I'm running this example on a cluster up in the cloud on Digital Ocean, so having a route map to this IP isn't all that useful. However, since I own my own domain, I can easily create a subdomain URL that points to the same IP where my OpenShift cluster is.

The details of setting up this subdomain are out of scope of this post, but most hosting providers make it really easy to add subdomain A records so that you can quickly route traffic this way.

If you are running this directly on your machine, you can still make this work, but you'll have to use a browser plugin such as the Virtual-Hosts-Chrome-Extension (using this, you can set the virtual hostname to be whatever you want, just make sure the route URL name matches).
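On Linux you can also map a hostname locally without a browser plugin; here myapp.example.com is a placeholder for whatever hostname you put in the route:

```shell
# Point a made-up hostname at the loopback address locally
echo "127.0.0.1  myapp.example.com" | sudo tee -a /etc/hosts

# Verify the mapping resolves
getent hosts myapp.example.com
```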

If you have used web servers like Apache httpd or NGINX, you are probably familiar with the concept of a virtual host where one web server can handle requests for several virtual hosts, routing traffic to different locations based on the host specified in the HTTP request. OpenShift routes work in a similar way.

After I have set up my subdomain, when I first point my web browser at it, I will get a response back from OpenShift that tells me that it doesn't have a way to route the request to an active application:

I'm going to remove the existing route and create a new one. I can easily delete the route from the command line (I can do it through the UI as well, but this is a little faster):
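The delete looks like this; I'm assuming the template's default route name, nodejs-mongo-persistent, so check yours with oc get routes first:

```shell
# Find the existing route, then remove it
oc get routes -n pipeline-demo
oc delete route nodejs-mongo-persistent -n pipeline-demo
```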


Now I can create a new route by selecting Routes under the Application menu on the left and then selecting the Create Route button that appears:


The details of the route are pretty straightforward. I only have to enter a few pieces of information:

  • For the Name field, I can choose anything as long as it consists of lower-case letters, numbers, periods, and hyphens (this is a restriction imposed by Kubernetes resource naming). I call it myapp-route.
  • The Hostname is the virtual hostname; in my case, the subdomain I set up above.
  • The Path allows me to create two routes that have the same hostname but direct traffic at two different services (for example, one path could route to service A while another routes to service B). I leave it as /.
  • For Service, I choose the nodejs-mongo-persistent service which is our NodeJS server.
  • Choosing that service will automatically choose the Target Port mapping for us, namely 8080 -> 8080 (you can think of this like how you expose a Docker container port externally that maps to an internal port within the container).

Everything else I leave alone, and then I select Create to build my route. With the route correctly in place, when I point my browser at my new hostname, I'll see the NodeJS app running:
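The same route can be created from the command line with oc expose; the hostname here is a placeholder for your own subdomain:

```shell
# Expose the NodeJS service through a named route on our hostname
oc expose service nodejs-mongo-persistent \
  --name=myapp-route \
  --hostname=myapp.example.com \
  --path=/ \
  -n pipeline-demo
```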

Integrating Jenkins

Now that we have our app installed, running, and visible from the outside world, it is now time to integrate a CI/CD pipeline with our Jenkins server. We are going to do this by switching over to our Jenkins server, installing some plugins, and then configuring a job to do the build and deploy. I'm starting with an empty Jenkins install, so there are no jobs at first:

Before I create the job, I need to install the OpenShift Pipeline plugin. To install plugins, we select Manage Jenkins on the left, and then select the Manage Plugins item from the list:

I select the Available tab and search for OpenShift and select the OpenShift Pipeline plugin:

I select Install without restart and it will download and install the plugin:

Now we are ready to set up our pipeline. We go back to the main Jenkins page and choose New Item on the top left. On the next page, we enter a name (I'm using myapp-pipeline) and select the Pipeline type and select OK at the bottom:

On the next page, we can fill out details about our pipeline. I'm going to use defaults for almost everything. We do need to check the box marked This project is parameterized and enter some parameters:

There are three main parameters we need to enter:

  • KUBERNETES_SERVICE_HOST (String) - The URL of our OpenShift host
  • SKIP_TLS (Boolean) - Checked unless your server is secured with TLS
  • AUTH_TOKEN (Password) - OAuth Token from OpenShift

For me, the values are my cluster's URL, checked, and XXXXXXXXXXXXXXXXXXXXXXXXX (just kidding, I'm not sharing my auth token with you, but you can get yours by running oc whoami -t).

The last thing we need to enter is a pipeline script. For this example, I'm going to use this script:

node ('') {
  stage ('buildInDevelopment') {
    openshiftBuild(namespace: 'pipeline-demo', buildConfig: 'nodejs-mongo-persistent', showBuildLogs: 'true')
  }
  stage ('deployInDevelopment') {
    openshiftDeploy(namespace: 'pipeline-demo', deploymentConfig: 'nodejs-mongo-persistent')
  }
}

This pipeline has two stages:

  • buildInDevelopment - this will build our image using the build config in OpenShift
  • deployInDevelopment - this will deploy our app into a pod.

When this is all entered, we hit Save at the bottom. Our pipeline is ready. It will take us back to a status page for the pipeline. Now we can just select Build with Parameters on the left. This will prompt you for the parameters:

Until you set up a dedicated service account for builds (a topic for another day), I would suggest looking up your token on OpenShift (running oc whoami -t) each time to get the latest token here. Then you just select Build. This will kick off a build. You can select the build (clicking on the build number) and then select Console Output to see the output of the build and deploy:

Once the build is done, the deploy happens, and you will see SUCCESS at the bottom if the build/deploy worked:

When we look in the Builds section in OpenShift and select the latest build for our node app, we'll see that it was triggered by Jenkins:
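You can verify the same thing from the command line; the build name below (nodejs-mongo-persistent-2) is just an example, so substitute whatever your latest build number is:

```shell
# List builds and inspect the latest one's logs
oc get builds -n pipeline-demo
oc logs build/nodejs-mongo-persistent-2 -n pipeline-demo
```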


In this post, I talked about creating a simple OpenShift application and integrating Jenkins to establish a CI/CD pipeline. I'll be expanding on this pipeline in future posts to add security scanning and more.
