Securing an OpenShift Container Image CI/CD Pipeline with Anchore

In a previous post, I talked about building a container image pipeline with OpenShift and Jenkins. Next, I want to secure that pipeline by including some image scanning. I'm going to use the Anchore Engine to do this. For the purposes of this post, I'm going to assume that the setup of OpenShift and Jenkins has already been done according to the post about integrating Jenkins and OpenShift, and I'll be using the same project. I'm running these servers on Ubuntu.

Anchore Engine Setup

The easiest way to run the Anchore engine is to run it inside a Docker container. You can find instructions on doing this on Anchore's GitHub Repo. I'll be using the docker-compose method of running the engine which automatically bundles both the engine and the database for the engine together. I simply create a directory containing two other directories and one file:

  • config - a directory mounted into the engine container containing the config.yaml file which configures the engine
  • db - a directory mounted into the database container where the files for the database are stored
  • docker-compose.yaml - the Docker Compose definition listing the containers to run.
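
The layout can be sketched like this — the directory name aevolume is an assumption matching the compose project name visible in the container names below, and the actual config.yaml and docker-compose.yaml contents come from Anchore's repo:

```shell
# Sketch of the working directory layout; "aevolume" matches the compose
# project name seen in the container names in the output below. The real
# config.yaml and docker-compose.yaml contents come from Anchore's GitHub repo;
# empty placeholders are created here just to show the structure.
mkdir -p aevolume/config aevolume/db
touch aevolume/config/config.yaml aevolume/docker-compose.yaml
ls aevolume
```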

Starting the engine is then as simple as running a single command:

$ docker-compose up
Starting aevolume_anchore-db_1
Starting aevolume_anchore-engine_1
Attaching to aevolume_anchore-db_1, aevolume_anchore-engine_1
anchore-db_1      | LOG:  database system was shut down at 2018-07-24 01:09:25 UTC
anchore-db_1      | LOG:  MultiXact member wraparound protections are now enabled
anchore-db_1      | LOG:  database system is ready to accept connections
anchore-db_1      | LOG:  autovacuum launcher started
anchore-engine_1  | [MainThread] [anchore_engine.configuration.localconfig/validate_config()] [WARN] no webhooks defined in configuration file - notifications will be disabled
anchore-engine_1  | [MainThread] [anchore_manager.cli.service/start()] [INFO] Loading DB routines from module (anchore_engine)

After the engine starts up, it begins updating its database with CVE information so that it can effectively report on known vulnerabilities. This process takes a few minutes, but once the engine quiets down and just logs messages that it is checking the image scanning queue, it is ready to go:

anchore-engine_1  | [service:simplequeue] 2018-07-24 01:17:15+0000 [-] "" - - [24/Jul/2018:01:17:14 +0000] "GET /v1/queues/images_to_analyze/?wait_max_seconds=0&visibility_timeout=0 HTTP/1.1" 200 3 "-" "python-requests/2.17.3"
anchore-engine_1  | [service:simplequeue] 2018-07-24 01:17:16+0000 [-] "" - - [24/Jul/2018:01:17:15 +0000] "GET /v1/queues/images_to_analyze/?wait_max_seconds=0&visibility_timeout=0 HTTP/1.1" 200 3 "-" "python-requests/2.17.3"

Anchore CLI Install

I also install the anchore-cli tool, which lets me interact with the engine from the command line. Installation is easy and the instructions are available here. The only additional step is to set three environment variables that tell the tool how to connect to the engine:

  • ANCHORE_CLI_URL - the URL of the engine's API endpoint
  • ANCHORE_CLI_USER - the user to use to connect to the engine
  • ANCHORE_CLI_PASS - the password to use to connect to the engine
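
These can be exported in the shell before running the tool. The values below are the defaults from Anchore's sample configuration — an assumption; substitute the endpoint and credentials of your own engine:

```shell
# Default endpoint and credentials from Anchore's sample config.yaml;
# replace these with the values your engine actually uses.
export ANCHORE_CLI_URL=http://localhost:8228/v1
export ANCHORE_CLI_USER=admin
export ANCHORE_CLI_PASS=foobar

# With these set, a quick connectivity check once the engine is up is:
#   anchore-cli system status
echo "$ANCHORE_CLI_URL"
```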

Adding the OpenShift Registry

In order for Anchore to scan the images we are building, it will need to access the OpenShift registry where the built images are stored. You can secure and expose the registry by following the instructions in the OpenShift Origin Documentation. Once you have done that, you can use the anchore-cli tool to register the registry with the Anchore engine, providing the user and auth token (oc whoami -t) required to log in to the registry:

$ anchore-cli registry add developer dghx-otMGHt8G4H1TZyLCiBShTvoYhGFOuki_anZ0TA
User: developer
Type: docker_v2
Verify TLS: True
Created: 2018-07-20T15:31:42Z
Updated: 2018-07-20T15:31:42Z
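
For reference, the registry add command also takes the registry host as an argument. A sketch of assembling the pieces — the route name, host, and token below are hypothetical stand-ins; on a live cluster they come from the commented oc commands:

```shell
# On a real cluster these would come from oc (route name is an assumption
# typical of an OpenShift Origin install):
#   TOKEN=$(oc whoami -t)
#   REGISTRY=$(oc get route docker-registry -n default -o jsonpath='{.spec.host}')
# Stub values so this sketch runs anywhere:
TOKEN="dghx-example-token"                      # hypothetical auth token
REGISTRY="docker-registry-default.example.com"  # hypothetical exposed route
echo anchore-cli registry add "$REGISTRY" developer "$TOKEN"
```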

Anchore Plugin Install

The first thing I need to do is install the Anchore Container Image Scanner plugin. To do this, I select Manage Jenkins from the main Jenkins console and then select Manage Plugins:

I select the Available tab and search for anchore to find the plugin:

Jenkins will install the plugin and give us a success message when it is done:

Now we need to configure the Anchore plugin, so we select Manage Jenkins on the left, and then select Configure System:

We scroll down to find the Anchore Plugin Mode. Here you simply enter the URL to your Anchore engine and the username and password:

Once we save this, we need to edit our pipeline script. You can do this by selecting your pipeline and then selecting Configure on the left:

Scrolling to the bottom, we update the pipeline script to be this:

node ('') {
  stage ('buildInDevelopment') {
    openshiftBuild(namespace: 'pipeline-demo', buildConfig: 'nodejs-mongo-persistent', showBuildLogs: 'true')
  }
  stage ('analyze') {
    def imageLine = ''
    writeFile file: 'anchore_images', text: imageLine
    anchore name: 'anchore_images'
  }
  stage ('deployInDevelopment') {
    openshiftDeploy(namespace: 'pipeline-demo', deploymentConfig: 'nodejs-mongo-persistent')
  }
}

As you can see, I create a temporary file called anchore_images that lists the images I want scanned, in this case the image we just built. We pass this file to the Anchore plugin, which pulls and scans each image. Now when we build, we'll see that an extra stage has been added to our pipeline:
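
The anchore_images file itself is plain text with one image name per line. A sketch of what the pipeline's writeFile step produces — the registry host here is a hypothetical stand-in for the real OpenShift registry route:

```shell
# One image per line; the registry host and tag below are illustrative
# stand-ins for the pipeline's actual output image.
cat > anchore_images <<'EOF'
docker-registry-default.example.com/pipeline-demo/nodejs-mongo-persistent:latest
EOF
cat anchore_images
```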

But we'll also see that it failed! Why is that? If we check the console output for the pipeline, the reason becomes obvious when we go to the bottom of the log:

2018-07-20T15:51:44.223 INFO   AnchoreWorker   Jenkins version: 2.121.1
2018-07-20T15:51:44.223 INFO   AnchoreWorker   Anchore Container Image Scanner Plugin version: 1.0.16
2018-07-20T15:51:44.223 INFO   AnchoreWorker   [global] debug: false
2018-07-20T15:51:44.223 INFO   AnchoreWorker   [global] enginemode: anchoreengine
2018-07-20T15:51:44.223 INFO   AnchoreWorker   [build] engineurl:
2018-07-20T15:51:44.225 INFO   AnchoreWorker   Submitting for analysis
2018-07-20T15:51:44.562 INFO   AnchoreWorker   Analysis request accepted, received image digest sha256:ffd25a2fe249c58165c14c5bbf0004f550934bc790a1e15ad1e421fdb28e29e4
2018-07-20T15:51:44.562 INFO   AnchoreWorker   Waiting for analysis of, polling status periodically
2018-07-20T15:55:00.639 INFO   AnchoreWorker   Completed analysis and processed policy evaluation result
2018-07-20T15:55:00.640 INFO   AnchoreWorker   Policy evaluation summary for - stop: 7 (+0 whitelisted), warn: 4 (+0 whitelisted), go: 0 (+0 whitelisted), final: stop
2018-07-20T15:55:00.640 INFO   AnchoreWorker   Anchore Container Image Scanner Plugin step result - FAIL
2018-07-20T15:55:00.643 INFO   AnchoreWorker   Querying vulnerability listing for
Archiving artifacts
2018-07-20T15:55:00.781 WARN   AnchorePlugin   Failing Anchore Container Image Scanner Plugin step due to final result FAIL
2018-07-20T15:55:00.782 INFO   AnchorePlugin   Completed Anchore Container Image Scanner step
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: Failing Anchore Container Image Scanner Plugin step due to final result FAIL
Finished: FAILURE

The build failed because the image violated the policies the engine has in place. This may not be what we want to happen, but that behavior is configurable. At the very least, we want to know what vulnerabilities were found. Fortunately, the build generated some artifacts that tell us exactly that:

You can click on the Anchore Report to get a more detailed report of the scan results. It includes a high-level summary that quickly tells you what actions were taken based on the defined Anchore Policies (I'll talk about Policies in a future post):

It also details the various vulnerabilities that were found, with a traceback to the original source where they are tracked:

The build also produces some JSON files that capture these results in a portable data format. I'm not going to go into the details of the contents of these files, but if you examine the JSON output, you'll see that they detail CVE information about the various vulnerabilities that were found:

   "High Vulnerability found in package - gnupg2 (RHSA-2018:2181 -",

I can configure the engine to not fail my build by changing the pipeline script slightly:

  stage ('analyze') {
    def imageLine = ''
    writeFile file: 'anchore_images', text: imageLine
    anchore bailOnFail: false, name: 'anchore_images'
  }

Here I've added bailOnFail: false. Now when I do my build, Jenkins will show that all the stages succeed:

If you look into the build details, however, you'll see that Anchore still classifies the scan as a failure, and I still can access the scan report even if the Jenkins build succeeds:

Conclusion

In this post, I talked about adding the Anchore Container Image Scan plugin to my container image CI/CD pipeline to add container image scanning. I demonstrated how we can fail the build when vulnerabilities are found, and also how we can still have the build succeed even if vulnerabilities are found. In an upcoming post, I'll talk about running the Anchore engine directly in OpenShift using Helm.
