Building Anchore Security Policy Bundles: Policies

In my last blog entry, I gave a high-level description of what security policy bundles are and some of the things you need to think about when writing them. Today I'm going to dive into the actual policies and discuss some real examples of the kinds of things you can do with them.

Creating a Policy Bundle

Policy bundles can be written in JSON and then uploaded to the engine via the command line. Many teams will want to manage the policy bundle this way, because it can then be committed to source control and versioned. This article will not go into the details of writing policies this way, since that subject is already well covered in Anchore's Policy Documentation.
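For reference, the command-line workflow is roughly the following sketch using the anchore-cli tool (the bundle filename and bundle ID here are hypothetical):

```
# Upload the bundle JSON to the engine
anchore-cli policy add acme_bundle.json

# Make it the active bundle (only one bundle is active at a time)
anchore-cli policy activate acme_bundle

# Confirm which bundle is active
anchore-cli policy list
```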

Instead, I find them to be a lot easier to understand and build using the Anchore Enterprise UI. This is a UI component that lets you more easily visualize the state of the engine, including images that are being analyzed or have been previously analyzed, the policy bundle that is being used to evaluate the images, and events that occur. There is also a policy bundle editor built into the UI.

When you first install the engine, there is a default bundle that is already installed. Only one bundle can be used at a time for scanning, but you can have as many bundles as you want.

I create a new bundle and give it a name, say Acme Inc Policy Bundle. There are four main things I can put into this bundle: policies, whitelists, mappings, and whitelisted / blacklisted images. Today I'm going to talk about the policies part.

Policies

There are many different kinds of policies we can write into our bundle, and it would be hard to cover everything we would normally put into one, so instead I'm going to focus on a few use cases that highlight the kinds of policy content you might want to have:

  • Critical vulnerabilities: As a baseline, we don't want images with critical vulnerabilities. We can always whitelist images to let exceptions through, but we'd rather err on the side of caution.
  • Dockerfile users: Ideally, we don't want to run images that use the user root and possibly other similar users. Docker doesn't namespace users by default, so when you run as root inside a container, you really are the root user and are executing system calls as root. We'd like to avoid that as much as possible.
  • Secrets: We don't want to leak secrets through our images. Applications often require credentials to interact with other systems in an automated way. This is understandable, but hardcoding these credentials inside an image means they can be leaked and compromised. Instead, we should mount these credentials at runtime (securing secrets would be a good subject for a future article).

Policies can have one or more rules in them, and using the mappings we can align those rules to specific image registries, repositories, and versions. I'll start with a simple policy called Global Image Policy. This policy includes all the rules that every image must meet to be used in the company.
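In bundle JSON terms, a policy is simply an object with a list of rules. A minimal sketch of the Global Image Policy might look like this (the id and comment are illustrative; the exact fields are described in Anchore's policy documentation, and the rules array gets filled in as rules are added):

```json
{
  "id": "global_image_policy",
  "name": "Global Image Policy",
  "version": "1_0",
  "comment": "Rules every image must meet to be used in the company",
  "rules": []
}
```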

Gates, Triggers and Actions

The policy rules are constructed by choosing gates. There are a large number of different gates we can use, so again, I'm going to focus on a couple of use cases to highlight their capabilities. Given a particular gate, you then choose a trigger that determines how that gate applies. Some triggers take parameters, which are essentially the criteria for the trigger. If the trigger fires, you can then choose an action: stop, warn, or go. This will become clearer as we walk through the examples.

Gate #1: Vulnerabilities

My first example will use the vulnerabilities gate. There are many things I can check with this gate, such as the severity of known vulnerabilities in components of the image, whether the vulnerability feed data on a component has grown stale, or whether there is no known vulnerability information on a component at all. For this example, I want to protect my applications from critical vulnerabilities, and my default policy will be strict: no images with critical vulnerabilities are allowed through (I can whitelist exceptions as needed).

The trigger I will use is the package trigger. This has three required parameters: the package type (all, os, or non-os), the severity comparison (<, >, <=, >=, etc.), and the severity. It has optional parameters as well, but I'm going to ignore those. What's nice is that the rule editor has drop-downs, so picking the values for all of these is easy.

Finally, I just choose the action associated with the rule, in this case STOP. This means that if this trigger fires, the overall image evaluation will be FAIL. A WARN action will issue a warning about the image, but the overall evaluation will be PASS. A GO action will result in a PASS. See the document on policy evaluation for more details.
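Put together, a sketch of this rule as it would appear in the bundle JSON (the rule id is illustrative):

```json
{
  "id": "rule_no_critical_vulns",
  "gate": "vulnerabilities",
  "trigger": "package",
  "action": "STOP",
  "params": [
    { "name": "package_type", "value": "all" },
    { "name": "severity_comparison", "value": ">=" },
    { "name": "severity", "value": "critical" }
  ]
}
```

Using >= rather than = means the rule also catches any severity level above critical, should the feed data ever define one.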

Gate #2: Dockerfile

The second gate I will use is the dockerfile gate. There are a lot of useful checks here, including the effective user, ports that are exposed, instructions that are used, or whether no Dockerfile was provided at all (for example, you may not want to allow images that were created using alternative methods).

The trigger I'll be using for this example is the effective user trigger. This trigger has two parameters: the users, and whether the trigger is a whitelist or a blacklist. I could be fairly stringent and force developers to run only as a small set of effective users (whitelist), but this is likely to be far too restrictive. Instead, I'm more interested in not permitting particular users such as root, daemon, and docker. I comma-separate the users and choose blacklist as the type.

Again, I'll select the action to be STOP because I don't want these images running.
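As a sketch, this rule in bundle JSON would look something like the following (again, the rule id is illustrative):

```json
{
  "id": "rule_no_root_user",
  "gate": "dockerfile",
  "trigger": "effective_user",
  "action": "STOP",
  "params": [
    { "name": "users", "value": "root,daemon,docker" },
    { "name": "type", "value": "blacklist" }
  ]
}
```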

Gate #3: Secret Scans

The third gate I will use is the secret scans gate. This gate has a single trigger, the content regex checks trigger, which can match on content regular expressions, filename regular expressions, or both; its two optional parameters correspond to those two checks. I will use the content regex name parameter with a built-in name called PRIV_KEY. These names refer to regular expressions defined in an analyzer_config.yaml file that is installed with the engine, and you can easily define your own expressions and refer to them by name in this policy. The PRIV_KEY expression is intended to match content found in private SSH keys, something we wouldn't want to leak.

Again, I will make this a STOP rule as I don't want these being used by default.
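A sketch of this rule in bundle JSON (rule id illustrative):

```json
{
  "id": "rule_no_private_keys",
  "gate": "secret_scans",
  "trigger": "content_regex_checks",
  "action": "STOP",
  "params": [
    { "name": "content_regex_name", "value": "PRIV_KEY" }
  ]
}
```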

Policy Review

After I choose the action for each rule, I will see an overview of the policy I have built so far. The policy has not been saved yet and I can still make changes at this point.

I can easily remove rules, edit them, or change the action. I save the rules and my policy is added. I can add more policies focused on different security aspects the same way I did this one. When I talk about mappings, it will become clearer why you would break things up into separate policies.

Conclusion

In this article, I talked about how you can create a policy bundle and a policy within the bundle. I provided a few different examples of rules within a policy, namely vulnerabilities, dockerfile, and secret scans gates. There are a lot more types of rules that are available, and I recommend looking at the policy checks document that goes into greater detail on the various checks available. Next time I will talk about whitelists, mappings, and whitelist/blacklist images.