Building a Bro Cluster in Containers

All credit for getting me started on this goes to the Rapid7 blog entry on building a Bro cluster. I took those instructions and modified them slightly so that the Bro processes run in containers. Note that this is a work in progress with some unresolved security issues, so it is not ready for production use.

About Bro

Bro is a remarkably simple tool for network monitoring. It comes with a whole suite of scripts out of the box for detecting a lot of common vulnerabilities, and there are services, such as Critical Stack, that provide feeds to keep you updated on new ones.

Unlike signature-based tools such as Snort, Bro can also do behavior-based detection by keeping state and tracking chains of events. For example, someone ssh'ing from host to host inside your network won't draw attention from any single event, but by correlating those events you can easily identify that someone is moving laterally through your network.

Bro Clusters

You can easily set up a cluster of Bro instances to handle larger volumes of network traffic. For example, you might want to scan traffic going through your DMZ. The usual recipe calls for installing Bro directly on the servers, but I'm a container guy, so of course I want to do it with containers. Running on a framework like Kubernetes also makes it easy to give the cluster high availability.

This is still a work in progress for me, so there will likely be more in the future as I figure out better ways of doing this, but I'm pretty happy with the progress so far. The biggest outstanding issue is managing the configuration of the cluster: by default, the hosts are listed in the node.cfg file on the cluster manager, and that configuration is pushed out to all the hosts. When you are installing on physical servers, this is fine, but with containers you don't necessarily know which hosts they will end up on. I can constrain where they run, but that costs me some flexibility. I'm working on a solution, so stay tuned. For the purposes of this post, I'm going to hardcode the hosts and run particular node types on specific hosts.

Build the Bro Images

Containerizing Bro is rather simple, so I will just point you at the Bro-Docker GitHub project, which already has Dockerfiles defined for various versions. I literally clone it into my own project space and build it as is.

Using Make to Build the Images

To make things easier, though, since I'm going to be building a number of images, I created a tree of Makefiles so that I can just run one make command to build everything. The structure kind of looks like this:


bro-cluster/
|-- Makefile.inc
|-- Makefile
|-- bro_image
|   |-- Makefile
|   |-- common/
|   |-- ...
|-- cluster_images
    |-- Makefile
    |-- node/
    |   |-- Makefile
    |   |-- Dockerfile
    |   |-- ...
    |-- manager/
        |-- Makefile
        |-- Dockerfile
        |-- ...

Top Level Makefiles

At the top level I have a Makefile.inc file that defines a number of variables:


CD           = cd
ECHO         = echo
DOCKER       = docker
BRO_VER      = 2.5.4
BUILD_ARG    = --build-arg
BV           = BRO_VER=$(BRO_VER)
BBV          = $(BUILD_ARG) $(BV)
BUILD_ARGS   = $(BBV)
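
Since everything keys off BRO_VER, you can in principle build the whole tree for a different Bro release by overriding the variable on the command line (assuming the upstream Bro-Docker project provides a Dockerfile for that version; 2.5.3 here is just an example):

$ make BRO_VER=2.5.3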

My main Makefile defines the subprojects I want to build, along with some targets I can use to build them individually if I want:


include Makefile.inc

BRO_IMAGE       = bro_image
BUILD_BRO       = bro
CLUSTER_IMAGES  = cluster_images
BUILD_CLUSTER   = cluster
RUN_MGR         = run_mgr
RUN_NODE        = run_node

all: build

build: $(BUILD_BRO) $(BUILD_CLUSTER)

clean:
	$(DOCKER) rmi -f bro/manager:$(BRO_VER)
	$(DOCKER) rmi -f bro/node:$(BRO_VER)
	$(DOCKER) rmi -f bro/bro:$(BRO_VER)

$(BUILD_BRO) : $(BRO_IMAGE)
	$(ECHO) building $(BRO_IMAGE)
	$(CD) $(BRO_IMAGE); $(MAKE) $(MFLAGS)

$(BUILD_CLUSTER) : $(CLUSTER_IMAGES)
	$(ECHO) building $(CLUSTER_IMAGES)
	$(CD) $(CLUSTER_IMAGES); $(MAKE) $(MFLAGS)

$(RUN_MGR) :
	$(CD) $(CLUSTER_IMAGES); $(MAKE) $(MFLAGS) $(RUN_MGR)

$(RUN_NODE) :
	$(CD) $(CLUSTER_IMAGES); $(MAKE) $(MFLAGS) $(RUN_NODE)
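
With that in place, day-to-day use from the top of the tree looks roughly like this:

$ make          # build the bro/bro, bro/node, and bro/manager images
$ make clean    # remove all three images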

The Bro Base Image

My bro_image Makefile is pretty simple, just building Bro and tagging it appropriately:


include ../Makefile.inc

all: build

build:
	$(DOCKER) build -t bro/bro:$(BRO_VER) $(BUILD_ARGS) .

As I mentioned above, I'm just using the Dockerfile from the GitHub project. I make one slight modification: instead of hard-coding the version throughout, I pass it in via the --build-arg switch and pick it up inside the Dockerfile with the ARG instruction:


FROM debian:stretch
ARG BRO_VER
...
RUN ln -s /usr/local/bro-${BRO_VER} /bro
...
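
Put together, the docker build command that the bro_image Makefile runs expands to something like this (shown with the version currently set in Makefile.inc):

$ docker build -t bro/bro:2.5.4 --build-arg BRO_VER=2.5.4 .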

The Cluster Images

In my cluster_images Makefile, I define my sub-subprojects, which are the two cluster node types: node and manager. A node container can act as a worker, logger, or proxy, whereas the manager image is specific to the cluster manager. The Makefile is pretty much a conduit from the top-level one to the sub-subprojects:


include ../Makefile.inc

BUILD_MANAGER = build_manager
BUILD_NODE    = build_node
NODE          = node
MANAGER       = manager
RUN_MGR       = run_mgr
RUN_NODE      = run_node

all: build

build: $(BUILD_NODE) $(BUILD_MANAGER) 

$(BUILD_NODE) : $(NODE)
	$(ECHO) building $(NODE)
	$(CD) $(NODE); $(MAKE) $(MFLAGS)

$(BUILD_MANAGER) : $(MANAGER)
	$(ECHO) building $(MANAGER)
	$(CD) $(MANAGER); $(MAKE) $(MFLAGS)

$(RUN_MGR) :
	$(DOCKER) run -it --net=host bro/manager:$(BRO_VER)

$(RUN_NODE) :
	$(DOCKER) run -it --net=host bro/node:$(BRO_VER)

The Cluster Node Image

One thing each node in the cluster is going to need is an sshd process. It isn't strictly required, but it makes configuring and launching the cluster much easier. The Makefile for the node image is pretty simple:


include ../../Makefile.inc

all: build

build:
	$(DOCKER) build -t bro/node:$(BRO_VER) $(BUILD_ARGS) .

The Dockerfile for the node image is pretty straightforward:


ARG BRO_VER
FROM bro/bro:${BRO_VER}

ARG BRO_VER
COPY keys/id_rsa.pub /root/.ssh/authorized_keys2
COPY keys/id_rsa /root/.ssh/id_rsa
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY ssh_config /root/.ssh/config

RUN apt-get update && apt-get install -y openssh-server supervisor rsync net-tools --no-install-recommends && \
    mkdir /run/sshd && \
    sed -i "s/#PermitRootLogin prohibit-password/PermitRootLogin yes/" /etc/ssh/sshd_config && \
    chmod 0600 /root/.ssh/config && chmod 0600 /root/.ssh/id_rsa
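
One thing the build needs that I haven't shown is the keys directory referenced by those COPY lines. Any passphrase-less key pair will do; a minimal sketch of generating one next to the node Dockerfile (only the filenames matter, since they have to match the COPY instructions):

$ mkdir -p keys
$ ssh-keygen -t rsa -b 4096 -N "" -f keys/id_rsa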

You may not have known that you can pick the base image in a Dockerfile dynamically by declaring an ARG before the FROM line (this requires a fairly recent Docker release, 17.05 or newer), but that's what I'm doing in the first two lines of this Dockerfile so that if I change the Bro version, it pulls the matching Bro base image. For now, I'm running everything as root. This is generally frowned upon, and I will eventually modify it to use a bro user.

You'll notice that I copy in some ssh keys from a keys directory - these are used for communication between instances in the cluster. I also copy in a supervisord configuration file; I'm using supervisord to watch the processes running inside the container. For the node image, the conf file just runs the sshd process:


[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D -p 2022

I don't want supervisord running as a daemon - if it daemonized, the container would exit as soon as the process backgrounded itself. The sshd isn't run as a daemon either (-D), because supervisord expects the process it monitors to stay in the foreground as its child, not detach. I listen on port 2022 (-p 2022) because the hosts I'm running the cluster on already have their own sshd; if the cluster used that one, broctl would end up starting the Bro processes on the host rather than inside the container.

The Manager Image

The manager Makefile is similar to the node one:


include ../../Makefile.inc

all: build

build:
	$(DOCKER) build -t bro/manager:$(BRO_VER) $(BUILD_ARGS) .

The Dockerfile is fairly straightforward, though it differs a bit from the node's:


ARG BRO_VER
FROM bro/node:${BRO_VER}

ARG BRO_VER
ENV BROHOME=/usr/local/bro-${BRO_VER}

COPY local.bro ${BROHOME}/share/bro/site/local.bro
COPY node.cfg ${BROHOME}/etc/node.cfg
COPY run_bro.sh ${BROHOME}/bin/run_bro.sh
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf

RUN chmod a+x ${BROHOME}/bin/run_bro.sh
CMD ["/usr/bin/supervisord","-c","/etc/supervisor/supervisord.conf"]

In this case, I'm building on top of the node image. I copy in a few files, make my run_bro.sh script executable, and set the command to run supervisord. The run_bro.sh script just lets me run multiple broctl steps from supervisord:


#!/bin/sh

# Push the current configuration out to all the nodes, sanity-check it,
# and then start the cluster processes.
broctl install
broctl check
broctl start
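
As a side note, once the manager container is up, broctl can also be used to check on the cluster from outside; a hypothetical spot check from the manager's host might look like this:

$ docker exec -it <manager container> /bro/bin/broctl status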

My supervisord.conf file is similar to the node, but adds my run_bro.sh script:


[supervisord]
nodaemon=true

[program:bro]
command=/bro/bin/run_bro.sh

[program:sshd]
command=/usr/sbin/sshd -D -p 2022

I'm going to skip over the local.bro file, which is just the standard Bro configuration specifying which Bro scripts get loaded. The node.cfg file, however, is important: it defines my nodes and their hostnames/IPs:


[logger]
type=logger
host=XXX.XXX.XXX.1

[manager]
type=manager
host=XXX.XXX.XXX.2

[proxy-1]
type=proxy
host=XXX.XXX.XXX.3

[worker-1]
type=worker
host=XXX.XXX.XXX.4
interface=eth0

[worker-2]
type=worker
host=XXX.XXX.XXX.5
interface=eth0

I need one additional file to make ssh connect on a different port: the servers are already running their own sshd, and I want the manager to connect to the sshd inside the node containers. Fortunately, ssh makes this easy with a config file under the .ssh directory (which we copy in via the node Dockerfile):


Host *
        StrictHostKeyChecking no
        UserKnownHostsFile=/dev/null
        Port 2022

I turn off strict host key checking because the node containers may end up on different hosts (and present different host keys) each time they are deployed.
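
If you want to sanity-check the ssh plumbing by hand (purely illustrative; the address is one of the node IPs from node.cfg above), you can exec into the manager container and confirm a node answers on port 2022 without any prompts:

$ docker exec -it <manager container> ssh XXX.XXX.XXX.4 echo ok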

Running the Cluster

Running the cluster turns out to be pretty straightforward. I build all the images by running make, and then start a node container on each host except the one where the manager runs. I've added make targets to keep this simple (obviously I'm doing this without a container orchestrator like Kubernetes for now - I'll write that up in a future entry). On each host except the manager's:


$ make run_node

On the manager host:


$ make run_mgr

Recall that nothing is running on the regular nodes except sshd. The manager sshes out to the other hosts, installs the Bro configuration, and starts up the processes. That's all it really takes to get the cluster running. If you configure a logger node, as I did, the logs gathered by the workers are streamed to the logger host and written there; otherwise, they're written on the manager host. In my case, I see logs under /bro/spool/logger inside the logger container. If I want to preserve these logs, I can mount a volume at that location, as sketched below.
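
A sketch of what that could look like when launching the logger's node container by hand (the host path is my own choice, and the mount goes through the /bro symlink, which you could swap for the full /usr/local/bro-<version> path):

$ docker run -it --net=host -v /data/bro-logs:/bro/spool/logger bro/node:2.5.4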

Conclusion

This has been a pretty fun exercise in taking something that was intended to run directly on servers and moving it into containers. A lot of the work is just setting up some Makefiles to make building the images easy, plus a little bit of Dockerfile magic to put the right files into the images. The rest is handled by Bro itself. Let me know if you have questions or feedback!
