Installing Anchore Engine on OpenShift using Helm

An article about installing the Anchore Container Security Tools on OpenShift using Helm.

In the article Securing an OpenShift Container Image CI/CD Pipeline with Anchore, I talked about using the Anchore Engine to secure a container CI/CD pipeline in Jenkins that is building and deploying on OpenShift Origin. In the article Using Helm on OpenShift, I talked about using Helm to deploy Kubernetes applications onto OpenShift. In this article, I am going to combine these topics and talk about how you can host the Anchore Engine directly in OpenShift.

I did have to make a small change to my setup for this to work: I had to turn on promiscuous mode for the docker0 interface (ifconfig docker0 promisc). This minor tweak ensured that DNS resolution for items in the cluster worked correctly inside pods. I'm not a network guy, so the details are a little beyond me. Feel free to comment if you can clear up my understanding of Kubernetes networking and why this is necessary.
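
For reference, here is the command I ran on the host (if your distribution no longer ships ifconfig, the iproute2 equivalent should do the same thing):

$ sudo ifconfig docker0 promisc
$ # or, with iproute2:
$ sudo ip link set docker0 promisc on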

In a way, most of the work has already been done, as there is an existing Helm chart. We're going to start with that, but there are a couple of issues we need to take care of. OpenShift has additional security controls that vanilla Kubernetes doesn't have, so the chart that currently exists won't entirely work out of the box. The focus of this article is on making the changes necessary for the Anchore Engine to run.

Setting up the Project

First, like always, we start by creating a project:


$ oc new-project anchore-engine
Now using project "anchore-engine" on server "https://127.0.0.1:8443".
...

There are two more things we need to do before we start installing the engine with Helm. First, we need to give Tiller access to our project so that it can add resources:


$ oc policy add-role-to-user edit \
"system:serviceaccount:${TILLER_NAMESPACE}:tiller"
role "edit" added: "system:serviceaccount:tiller:tiller"

The other is to allow the Anchore images to run as root. Normally OpenShift doesn't let us do this (and we should definitely be careful about when we do), but for now, we're going to relax the policy. Note that this has to be done as the system:admin user in OpenShift (and make sure you are in the anchore-engine project):


$ oc login -u system:admin
Logged into "https://127.0.0.1:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * anchore-engine
    default
...
$ oc adm policy add-scc-to-user anyuid -z default
scc "anyuid" added to: ["system:serviceaccount:anchore-engine:default"]

Updates to the Anchore Helm Chart

I've made some changes to the official Anchore Helm Chart, mostly so that I can run my own PostgreSQL database in OpenShift. If you are interested, the details are captured in another article:

https://blog.osninja.io/anchore-helm-chart-changes/

Installing the PostgreSQL Database

You can technically use Helm to install the PostgreSQL database, but there is a problem with the image it uses: the database won't start correctly because OpenShift runs containers with a random effective user ID rather than the one the image expects. This is a technical detail of how OpenShift runs processes in containers and is really out of scope for this article (there is a workaround, which I'll be addressing in the near future to make this additional step unnecessary). OpenShift, however, has a template that makes installing a PostgreSQL database really easy for us, so we'll use that instead.

Technical Note: Anchore Engine as of this writing requires at least PostgreSQL 9.6, so you need to check whether that version is available in your OpenShift catalog. Since I'm using oc cluster up, I get it out of the box, but if you are using a standalone server, it might not be there. If you find PostgreSQL 9.6 or greater in the product catalog, follow the instructions in the next section; otherwise, follow the instructions in the section after that for installing from an image in Red Hat's registry.
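
A quick way to check from the command line (assuming the default image streams are installed in the openshift namespace) is to list the PostgreSQL image stream tags:

$ oc get imagestreamtags -n openshift | grep postgresql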

Installing from the Catalog

I log into the OpenShift UI, which is available at https://127.0.0.1:8443 (note that it uses a self-signed certificate, so your browser may warn you about it). I select the anchore-engine project I created earlier and click the Browse Catalog button to add the database:

From the catalog, I choose the PostgreSQL template:

I can leave most of the values as they are, but I do set the four fields that were specified in my helm charts:

Finishing the template, I return to the overview to see that my database is up and running and is listening on port 5432:
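
If you prefer the command line, instantiating the template there should be roughly equivalent. The parameter names below come from the standard OpenShift PostgreSQL template; the values are placeholders and need to match whatever is in the chart's values.yaml:

$ oc new-app postgresql-persistent \
    -p DATABASE_SERVICE_NAME=anchore-db \
    -p POSTGRESQL_DATABASE=anchoredb \
    -p POSTGRESQL_USER=anchoreengine \
    -p POSTGRESQL_PASSWORD=mysecretpassword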

Installing from the Red Hat Registry

If you find that you don't have at least PostgreSQL 9.6 in the catalog, you can instead install the database from an image in Red Hat's image repository. Detailed instructions on installing the database are available in the OpenShift Container Platform Documentation. The 9.6 image is tagged as registry.access.redhat.com/rhscl/postgresql-96-rhel7.
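
A minimal sketch of that approach with oc new-app (the environment variable names are the ones documented for the rhscl PostgreSQL images; the values are placeholders that must match the chart's values.yaml):

$ oc new-app registry.access.redhat.com/rhscl/postgresql-96-rhel7 \
    --name=anchore-db \
    -e POSTGRESQL_DATABASE=anchoredb \
    -e POSTGRESQL_USER=anchoreengine \
    -e POSTGRESQL_PASSWORD=mysecretpassword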

Installing Anchore Engine

Now comes the fun part. First I clone the chart from my GitHub repo:


$ git clone https://github.com/openshiftninja/anchore-engine-helm.git
Cloning into 'anchore-engine-helm'...
remote: Counting objects: 20, done.
remote: Compressing objects: 100% (17/17), done.
remote: Total 20 (delta 1), reused 17 (delta 1), pack-reused 0
Unpacking objects: 100% (20/20), done.
Checking connectivity... done.

Now we just tell Helm to do its thing in our project (make sure you are in the right project with oc project anchore-engine):


$ cd anchore-engine-helm
$ oc project anchore-engine
Already on project "anchore-engine" on server "https://127.0.0.1:8443".
$ helm install -f values.yaml .
NAME:   nonplussed-moose
LAST DEPLOYED: Sun Jul 29 19:37:09 2018
NAMESPACE: anchore-engine
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                             TYPE    DATA  AGE
nonplussed-moose-anchore-engine  Opaque  5     1s

==> v1/ConfigMap
NAME                                    DATA  AGE
nonplussed-moose-anchore-engine-core    1     0s
nonplussed-moose-anchore-engine-worker  1     0s

==> v1/Service
NAME                             TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)                                       AGE
nonplussed-moose-anchore-engine  ClusterIP  172.30.49.192               8228/TCP,8338/TCP,8083/TCP,8082/TCP,8087/TCP  0s
...

The Helm chart has created deployments, config maps (which essentially get mounted into the container and hold the configuration for the engine), secrets, and a service, and OpenShift did the rest, creating replica sets and pods. Now when we look at our overview, we'll see that we have three applications (the engine itself runs as one core pod and at least one worker, although you can add additional workers):
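
You can see the same resources from the command line, and scaling the worker is just a matter of bumping the replica count on its deployment (the deployment name below follows the release-name prefix visible in the output above, so substitute your own generated name):

$ oc get pods
$ oc scale deployment nonplussed-moose-anchore-engine-worker --replicas=2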

You might be wondering about the name nonplussed-moose (if you run this yourself, you'll get a different name; Helm generates a new one on each install). This is similar to how Docker gives containers unique names when you start them, and it allows us to run a Helm chart multiple times without causing conflicts with existing resources (in our case, however, we won't be running it more than once).

Uninstalling the Anchore Engine

The part of Helm that I'm finding most useful? It's just as easy to uninstall the Anchore Engine as it was to install it. Normally we shouldn't need to do this, but say we discover a problem with the database we are connecting to, or we want to change the username/password used. This could normally be a nightmare to manage, but with Helm, we can literally wipe the table and start over. All we need is the name of the release, which we can get with the helm list command:


$ helm list
NAME            REVISION        UPDATED                         STATUS          CHART                       NAMESPACE
right-sasquatch 1               Mon Jul 29 20:41:25 2018        DEPLOYED        anchore-engine-0.2.0        anchore-engine

We just run a delete command and specify the name that was generated, in this case, right-sasquatch:


$ helm delete right-sasquatch
release "right-sasquatch" deleted

And just as quickly as the Anchore Engine appeared, it is now gone.

For now, I would recommend emptying the PostgreSQL database or recreating it when doing this, just to prevent any errors due to data that carried over. I can delete it by just running three commands:

$ oc delete all --all
deploymentconfig "anchore-db" deleted
pod "anchore-db-1-n2d7z" deleted
service "anchore-db" deleted
$ oc delete secret anchore-db
secret "anchore-db" deleted
$ oc delete pvc anchore-db
persistentvolumeclaim "anchore-db" deleted

Surprisingly, oc delete all --all doesn't actually delete everything: secrets and persistent volume claims aren't included in the all category, which is why we delete them explicitly. :-/

Exposing the Anchore Engine

At this point, our engine is almost ready to go. It has to initialize the database and download some CVE information from its remote sources. While that is happening, I will work on exposing the engine. There is already a service that was created by the Helm chart, so I simply need to create a route for it. I select the Applications menu and then the Routes item:

On the page that appears, I select Create Route and then fill out the details of the route:

The name can be any valid Kubernetes name, but I just choose to call it anchore-engine. For the hostname, I'm going to use the value anchore.osninja.io. I have my own domain and can set up a subdomain just for Anchore that will route to the same IP as my OpenShift installation. When OpenShift sees that the request was for anchore.osninja.io, it will route it to the engine service (which I have selected in the form), and that service will route it appropriately to the core pod.
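
If you'd rather create the route from the command line, oc expose should produce the same result (substitute the service name from your own release):

$ oc expose service nonplussed-moose-anchore-engine \
    --name=anchore-engine \
    --hostname=anchore.osninja.io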

Using the Anchore CLI, I can validate that the engine is exposed:


$ env | grep -i ANCHORE
ANCHORE_CLI_URL=http://anchore.osninja.io/v1
ANCHORE_CLI_USER=admin
ANCHORE_CLI_PASS=foobar
$ anchore-cli system status
Service analyzer (precise-dragon-anchore-engine-worker-d855c5977-v5wtq, http://172.17.0.7:8084): up
Service simplequeue (precise-dragon-anchore-engine-core-5d78bc95f6-8cphp, http://precise-dragon-anchore-engine:8083): up
...
Engine DB Version: 0.0.7
Engine Code Version: 0.2.3
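
The CVE feed sync mentioned earlier can take a while on the first start; you can keep an eye on it with the feeds subcommand:

$ anchore-cli system feeds list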

I also use the CLI to register the OpenShift image registry, which I had already secured and exposed by following the directions in the OpenShift Origin Documentation. We simply use the registry add command, passing the name of the registry, our OpenShift username (developer), and our OpenShift auth token (oc whoami -t) for logging into the registry:


$ anchore-cli registry add registry.osninja.io developer $(oc whoami -t)
Registry: registry.osninja.io
User: developer
Type: docker_v2
Verify TLS: True
Created: 2018-07-30T04:15:34Z
Updated: 2018-07-30T04:15:34Z
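
You can confirm the registry was registered with the engine:

$ anchore-cli registry list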

Jenkins Integration

In a previous article, I had talked about how to install the Anchore Container Image Scanner plugin. I'm going to continue using that plugin, but I'm going to change it from pointing at my Docker Compose install of the Anchore Engine to the one I just started in OpenShift. To do this, I log into Jenkins, go to Manage Jenkins, and choose Configure System:

The only thing I'm going to change is the engine URL. Note that in this case the engine URL is slightly different from the one we set for the Anchore CLI: we don't include the /v1 path:

Now I can kick off a pipeline build using the job I had previously defined, and when it gets to the analyze step, we'll see that it is now using the engine we have running in OpenShift:


...
2018-07-30T04:23:12.677 INFO   AnchoreWorker   Jenkins version: 2.121.1
2018-07-30T04:23:12.678 INFO   AnchoreWorker   Anchore Container Image Scanner Plugin version: 1.0.16
2018-07-30T04:23:12.678 INFO   AnchoreWorker   [global] debug: false
2018-07-30T04:23:12.678 INFO   AnchoreWorker   [global] enginemode: anchoreengine
2018-07-30T04:23:12.678 INFO   AnchoreWorker   [build] engineurl: http://anchore.osninja.io
2018-07-30T04:23:12.678 INFO   AnchoreWorker   [build] engineuser: admin
2018-07-30T04:23:12.678 INFO   AnchoreWorker   [build] enginepass: ****
2018-07-30T04:23:12.678 INFO   AnchoreWorker   [build] engineverify: false
2018-07-30T04:23:12.678 INFO   AnchoreWorker   [build] name: anchore_images
2018-07-30T04:23:12.678 INFO   AnchoreWorker   [build] engineRetries: 300
2018-07-30T04:23:12.678 INFO   AnchoreWorker   [build] policyBundleId: 
2018-07-30T04:23:12.678 INFO   AnchoreWorker   [build] bailOnFail: false
2018-07-30T04:23:12.678 INFO   AnchoreWorker   [build] bailOnPluginFail: true
2018-07-30T04:23:12.678 INFO   AnchoreWorker   Submitting registry.osninja.io/pipeline-demo/nodejs-mongo-persistent:latest for analysis
2018-07-30T04:23:13.534 INFO   AnchoreWorker   Analysis request accepted, received image digest sha256:0510e1a0319887108a1294c16995bb8071941d3e3fd9277d5737d00d8a66ae7d
2018-07-30T04:23:13.534 INFO   AnchoreWorker   Waiting for analysis of registry.osninja.io/pipeline-demo/nodejs-mongo-persistent:latest, polling status periodically

Conclusion

In this article, I talked about how you can install the Anchore Engine on OpenShift using Helm. I installed the database separately, but it was fairly straightforward, and then I used a chart that had been modified slightly to use the external database. The install took only a few seconds, and I can just as easily remove it with one command. I then talked about exposing the engine and integrating it with my existing Jenkins install. I'll be continuing to expand on the use of Anchore for image scanning in the near future.
