Let's Talk About Security: Reflections on BlackHat

Black Hat USA is over, but I wrote up some thoughts on security that can help teams organize a strategy for defending their most important asset: their data.


Black Hat Newbie

I just left my first Black Hat conference. What an amazing experience! Over 18,000 people came to the conference for a variety of reasons. Some came to check out the latest technology for defending against hackers and cybercriminals. Some were there to network and possibly find job opportunities. Some simply wanted to learn skills in hacking, building firewalls, or using the tools that are key to keeping hackers out.

My day job involves building a platform for machine learning and network monitoring using container technologies, and I came seeking tools that could help me do that faster. Prior to the conference, I had done some independent research and determined that there are at least two tools I will definitely need - a container image scanning tool and a process monitoring tool. The image scanning solution I found is the Anchore image scanning engine. It's open source, easy to set up, and does exactly what we need. For process monitoring (watching processes down to the system call level and monitoring the files, sockets, and other resources those processes access), I found a tool called Falco, published by Sysdig. When I went to DockerCon in San Francisco, I learned that Sysdig offers a solution called Sysdig Secure that combines Falco, the Anchore image scanning engine, and an enterprise UI layer for visualization (details on the 2.0 release here: https://sysdig.com/blog/sysdig-secure-2-0/). Perfect.
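To give a flavor of what Falco does, here is a minimal rule sketch, loosely modeled on the rules Falco ships with. The exact macros and field names vary by version, so treat this as illustrative rather than copy-paste ready:

```yaml
# Alert when an interactive shell starts inside a container - a common
# sign that someone has exec'd into a running workload.
- rule: Terminal shell in container (sketch)
  desc: A shell with a TTY was spawned inside a container
  condition: >
    evt.type = execve and container.id != host and
    proc.name in (bash, sh, zsh) and proc.tty != 0
  output: "Shell in container (user=%user.name container=%container.name cmd=%proc.cmdline)"
  priority: WARNING
```

Rules like this run against the live system call stream, which is what makes Falco useful for catching behavior that image scanning alone can never see.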

I'm always on the lookout for other tools if they can help us avoid reinventing the wheel or give us even more insight into what the bad guys are doing. There's a problem, though. There were hundreds of vendors attending the conference, many of which I had never heard of. A lot of them are smaller companies, but they had impressive offerings that compare well against the larger companies that have dominated the security space for years. The demos were well done, demonstrating methods of detecting bad guys and keeping them out, and using a lot of buzzwords: visibility, machine learning, analytics, scanning, and so on. How do you know which vendor to go with?

The Problem With Business as Usual Security

Let's take a step back and talk about the basics of implementing security solutions. First and foremost, you must have defenses in place for the attacks that happen all the time. Just as you would lock your car or your house, you need baseline security in place so that hackers can't just walk in the front door. They have a suite of tools (and I'm only counting the legitimate ones) for probing your defenses, so you'll need to make sure those defenses are there and kept up to date. Firewalls are a must. Network traffic monitoring is essential. Subscribing to intelligence feeds to stay on top of new malware threats and intrusion techniques is critical. This is not enough, however.

Your defenses are only as strong as their weakest point, and a memorable story I heard during the conference demonstrates this. A casino (it was not disclosed which one) installed a networked thermometer in one of the aquariums inside the casino. Unfortunately, the security of this device was weak, and a hacker was able to compromise it and get onto the casino's network. The hacker then began to move laterally around the network, probing for other weaknesses, until he stumbled upon a priority client database that had been left exposed. Suffice it to say that this was a security nightmare.

There's another problem as well. In 2016, there were 6,447 common IT security vulnerabilities and exposures (CVEs) disclosed. In 2017, that number jumped to 14,712, an average of over 40 per day. Hackers are able to weaponize these vulnerabilities in a few hours to a few days, but software teams on average are not able to deliver fixes for several weeks. This gives hackers a significant window of time for staging attacks on recently disclosed vulnerabilities. What makes this worse is that some companies will not address these vulnerabilities for months, extending the window further. One session I attended was given by Cisco Talos, which works with companies to address vulnerabilities that are found, and some of the companies they contacted simply replied that they have no capacity to address the issues.

Detecting In-Progress Attacks

I attend a monthly cybersecurity briefing, and exploitations are happening all the time, all over the world. These aren't just software attacks - there are physical attacks on hardware as well. A lot of these exploits take advantage of weak security postures, and some companies are not even aware that they are being exploited until after the fact. Despite increased training and awareness, phishing remains the main way hackers gain access to company networks. Users click on links in emails or open attachments that inadvertently allow malware to be installed on their devices, and this weakens even the strongest security defenses.

How can you tell that these exploitations are happening? You are going to need tools for analyzing data, and your server and application logs are a large portion of that data. Splunk is one tool that excels at analyzing log data and giving analysts the ability to quickly search through massive amounts of it. Applying machine learning to that data can also help identify suspicious behaviors that could be indicators of intrusion. A lot of the solutions I saw at Black Hat this year offer machine learning analytics out of the box, and it can be an important tool for finding signs of activity that appear normal but are actually malicious.
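As a concrete (if greatly simplified) illustration of the idea, here is a small Python sketch that flags source IPs whose request volume is a statistical outlier in a batch of access-log lines - a toy stand-in for what log analytics platforms do at scale. The log format and threshold are assumptions for the example:

```python
from collections import Counter
from statistics import mean, stdev

def flag_anomalous_ips(log_lines, threshold_sigma=3.0):
    """Flag source IPs whose request volume is far above the norm.

    log_lines: iterable of strings whose first field is a source IP
    (a simplified stand-in for real access-log parsing).
    """
    counts = Counter(line.split()[0] for line in log_lines if line.strip())
    if len(counts) < 2:
        return []  # not enough data to establish a baseline
    volumes = list(counts.values())
    mu, sigma = mean(volumes), stdev(volumes)
    cutoff = mu + threshold_sigma * sigma
    return [ip for ip, n in counts.items() if n > cutoff]
```

Real systems model far richer features (time of day, user agents, session behavior), but the core move is the same: learn what normal looks like, then surface what deviates from it.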

I have been using a tool called Bro (now known as Zeek) to detect abnormal network traffic events. A lot of malware and intrusion detection tools are signature-based, meaning that they use hashes and similar techniques to find network packets that are indicators of attacks. The problem is that these tools miss normal-looking traffic that may be coming from unauthorized users (from the perspective of a lot of your auditing tools, they appear to be legitimate users). Once hackers have gained access to your network, they may use tools like ssh to connect between machines. These ssh connections aren't suspicious in isolation, but tools like Bro allow you to perform stateful analysis of events and build event chains that can illuminate suspicious behavior.
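To make the event-chain idea concrete, here is a small Python sketch - not a Bro script, just the underlying logic - that links individual ssh connection events into chains when a host that just received a connection turns around and initiates one, a classic lateral-movement pattern. The event format and time window are assumptions for the example:

```python
def find_lateral_chains(events, window=600):
    """Link ssh connections into chains: if host B accepts a connection
    and then initiates one within `window` seconds, extend the chain.

    events: list of (timestamp, src_host, dst_host), sorted by timestamp.
    Returns chains of 3+ hosts - a pattern worth an analyst's attention.
    """
    chains, used = [], set()
    for i, (t1, src1, dst1) in enumerate(events):
        if i in used:
            continue  # already absorbed into an earlier chain
        chain = [src1, dst1]
        last_t, last_dst = t1, dst1
        for j in range(i + 1, len(events)):
            t2, src2, dst2 = events[j]
            if src2 == last_dst and 0 < t2 - last_t <= window and dst2 not in chain:
                chain.append(dst2)
                last_t, last_dst = t2, dst2
                used.add(j)
        if len(chain) >= 3:
            chains.append(chain)
    return chains
```

Each hop on its own looks like routine administration; it's the stateful view across hops that exposes the pattern, which is exactly what Bro's scripting model gives you.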

Acting on Suspicious Behavior

When you do find suspicious behavior, you need to act fast. The action shouldn't have to be triggered manually by someone looking at a dashboard or browsing a report that was emailed to them. Hackers may need only minutes or hours to reach critical information, and you may not have time to spare in responding to attacks. There are many actions that can be taken immediately, in an automated fashion, to get the ball rolling. Information on the attack can be gathered. Notifications can be sent to the appropriate parties through tools like Slack. Incidents can be created containing the collected information for easy analysis. Issues can be automatically assigned and prioritized so that analysts can work a queue rather than guessing which item to work on next.
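As a sketch of what that automation might look like, here is a minimal Python example that turns a raw alert into a prioritized incident and builds a Slack-style notification payload. The alert fields, ID scheme, and severity mapping are invented for illustration, and the payload is shown without actually posting it anywhere:

```python
import json
import time

# Hypothetical severity-to-priority mapping; 1 is worked first.
SEVERITY_PRIORITY = {"critical": 1, "high": 2, "medium": 3, "low": 4}

def open_incident(alert, queue):
    """Turn a raw alert into a prioritized incident plus a notification.

    alert: dict like {"rule": ..., "severity": ..., "host": ...}.
    queue: mutable list acting as the analysts' work queue.
    """
    incident = {
        "id": f"INC-{len(queue) + 1:04d}",
        "opened_at": time.time(),
        "priority": SEVERITY_PRIORITY.get(alert.get("severity", "low"), 4),
        "alert": alert,
        "status": "new",
    }
    # Keep the queue sorted so analysts always pull the highest-priority item.
    queue.append(incident)
    queue.sort(key=lambda inc: inc["priority"])
    # Standard Slack incoming-webhook body is just {"text": ...}.
    notification = json.dumps(
        {"text": f"[{incident['id']}] {alert.get('rule', 'unknown rule')} "
                 f"on {alert.get('host', '?')}"}
    )
    return incident, notification
```

The point isn't this particular schema - it's that enrichment, notification, ticket creation, and prioritization can all happen in the seconds after detection, before a human ever looks at a dashboard.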

You should also have playbooks on hand that walk analysts through the best practices for triaging incidents, direct them to tools for gathering further details, and suggest actions for stopping attacks before they escalate. One company I saw with a really powerful tool for doing just this is Demisto, whose playbook tools have been incorporated into other platforms, including RSA's offering.

Proactive Defenses

So far, the techniques I have talked about have mostly been reactive. They are put in place to deal with security issues that already exist. Unfortunately, there are likely security issues that haven't even been discovered yet, and there may be attackers who are aware of them before the rest of us. To combat this, software developers need to take a more proactive approach and treat security as a first-class citizen in the software development process.

Testing is obviously important, and this includes your normal user testing and integration testing, but you need to include penetration tests and other security tests as well. A toolkit like Metasploit can help by running a suite of typical penetration attacks against your security defenses. It helps if you also have ethical hackers continually testing the boundaries and actively trying to find holes. A lot of companies complement their ethical hackers with hunt teams that actively search for them, ensuring that even authorized hackers can't get into the system unnoticed.

One really powerful tool that a lot of us have at our disposal is SELinux. If you haven't heard of it, you should start reading up on it. It is admittedly a complicated tool and difficult to configure, but if you take the time to set up proper policies, you can make a system very difficult to damage. You have fine-grained control over which processes can access which files, sockets, and other resources, and you can control which users have the ability to access those processes. Even root can be constrained, making privileged user access much less of a concern.
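To give a sense of what a policy looks like, here is a tiny type-enforcement sketch in the style of an SELinux policy module. The type names are invented, and a real loadable module needs more boilerplate, so treat this as illustrative only:

```
# myapp.te - illustrative type-enforcement sketch (hypothetical types)
policy_module(myapp, 1.0.0)

type myapp_t;        # domain for the application's processes
type myapp_data_t;   # type for the files the application owns

# The app may read and write only files labeled with its own data type;
# anything not explicitly allowed is denied by default.
allow myapp_t myapp_data_t:file { open read write getattr };
```

That default-deny stance is what makes the effort worthwhile: a compromised process confined to myapp_t can't touch anything the policy doesn't explicitly grant, no matter what user it runs as.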

Along the same lines are seccomp, which strictly controls which system calls are allowed, and Linux kernel capabilities, which control other aspects of what processes can do. Take advantage of the tools you have at your disposal.

Ideally, we would shift proactive defenses one step further and educate our developers on techniques for writing secure code. Too often we write code that makes assumptions about its input and doesn't check boundary cases effectively. Every time we publish code like this, we open ourselves up to future security incidents. There are a number of static analysis tools available, such as SonarQube, that can flag code that could be exploited. Tools like this should be in place on every software project.
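The kind of boundary checking that gets skipped is easy to illustrate. Here is a short Python sketch of defensively parsing untrusted input instead of trusting the caller; the field meaning and limits are invented for the example:

```python
def parse_transfer_amount(raw):
    """Defensively parse a user-supplied transfer amount in cents.

    Rejects non-numeric input, negatives, zero, and values past a sanity
    bound instead of trusting the caller - the kind of boundary checking
    static analyzers flag when it is missing.
    """
    MAX_CENTS = 10_000_000_00  # arbitrary sanity limit for this sketch
    if not isinstance(raw, str) or not raw.strip().isdigit():
        raise ValueError(f"not a non-negative integer: {raw!r}")
    value = int(raw.strip())
    if value == 0 or value > MAX_CENTS:
        raise ValueError(f"amount out of range: {value}")
    return value
```

Failing loudly at the boundary is the whole idea: malformed input dies here with a clear error rather than flowing deeper into the system where an attacker can exploit the assumption.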

Our Software Reality

We live in a fast-paced world. Customers are demanding solutions from us, and they want them as soon as possible. Gone are the days when we could plan out software releases months or years in advance and spend weeks or months testing the software before delivering it to customers. Agile has become the new way of doing things, and in a lot of ways it is helping teams write better software, but it hasn't necessarily helped them write more secure software.

Security tends to be a lower-priority item, deferred until teams have time to address issues that have already been detected. I don't know if this is going to change any time soon, but with the recent wave of security compromises, I hope companies will begin including strong security in the requirements of every software project.

Conclusion

If you take anything away from all of this, it should be that security needs to be something you are always thinking about as you plan software releases and service your customers. You need to have security defenses in place and tools for your analysts so they can quickly triage and address attacks. All your servers and networks should have monitoring for malware and intrusion. Everyone in your organization should be trained and tested to resist social engineering techniques that can expose your networks to attackers. Software developers should be focused on writing code that is not only functional and performant, but also secure.

There are a lot of vendor solutions out there that can help you do this, and in future posts I'll dive into various security topics and talk about some of the tools we use in order to build a proper security framework.
