the cups blog


Simplifying Network Management with Lockdown

Background: trying to provide rich policy enforcement with simple management

A known problem: firewalls base their rules on port numbers and IP addresses, but port 80 carries so much traffic that it is effectively always open. The rules implicitly trust that layer 3 (IP) and layer 4 (port) map to a user and an application, but that assumption does not hold.

Local context is key for firewalls: what application is really involved? Who is running the app? What files are they using? Where are users trying to go?

Motivation: per the 2007 Computer Crime & Security Survey, the less effective measures are the most commonly deployed. Firewalls: 97% deployment; easy to manage, well understood. End-point security / NAC: 27%; it sits on the machine itself, and deployment decreased from 31% in 2006, perhaps due to lack of ease of use.

Traditional solutions lack fine control; newer solutions lack ease of use and manageability, which reduces correct use. We want both, hence Lockdown.

Lockdown has four components: policy, enforcement, monitoring, and visualization.

Policy: with Lockdown, a policy such as "allow outbound from firefox" would block Skype. Policy is infused with local context and can specify users, files, and applications.

Enforcement: an LSM (Linux Security Module, sitting in the kernel) on each system tracks socket activity and asks whether it is allowed. This leads to better debugging of connectivity issues, since packets no longer just disappear into a black hole. With an iptables setup you can tell when timeouts occur but not why; because Lockdown monitors on the system itself, you get instant feedback when policy denies at the socket level, and you can narrow a problem down to the specific machine.

Monitoring: they monitor many campus machines with a lightweight agent script (a shell script that runs across Linux flavors, with nothing to compile) built on netstat, ps, and lsof. The agent polls at set intervals and diffs the data to see what files are touched and what processes run. It is easy to deploy and has a small footprint.

Analysis and visualization: a viewer shows which user, application, and host are connected to what, so you can see who is using what visually, without going through logs. It can show all users running a specific application at a given time, display how all hosts connect as a web, and track down an outside attack via all the machines a given machine touched. By user, it can tell what applications and hosts were involved. Pulling application paths can reveal, say, that a host has three versions of Firefox, so people can be told to upgrade as needed.
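The polling agent described above (poll netstat, ps, and lsof at set intervals, diff against the previous poll) can be sketched as a small POSIX shell script. This is a minimal illustration, not the actual Lockdown agent; the snapshot directory, file names, and tool flags are assumptions. Each run takes a snapshot, prints the lines that appeared since the previous run, and rotates the snapshots; cron or a wrapper loop supplies the polling interval.

```shell
#!/bin/sh
# Hypothetical sketch of a lightweight monitoring agent in the spirit
# of Lockdown's: snapshot standard tools, diff against the last poll,
# rotate. SNAP_DIR and the flags below are illustrative assumptions.

SNAP_DIR=${SNAP_DIR:-/tmp/lockdown-agent}
mkdir -p "$SNAP_DIR"

# Take the current snapshot; tolerate tools missing on some hosts.
ps -eo pid,user,comm > "$SNAP_DIR/ps.new"   2>/dev/null || true  # running processes
netstat -tn          > "$SNAP_DIR/net.new"  2>/dev/null || true  # open TCP connections
lsof -nP             > "$SNAP_DIR/lsof.new" 2>/dev/null || true  # open files and sockets

# Report lines that appeared since the last poll, then rotate.
for f in ps net lsof; do
    [ -f "$SNAP_DIR/$f.new" ] || continue
    if [ -f "$SNAP_DIR/$f.old" ]; then
        echo "== new $f entries since last poll =="
        diff "$SNAP_DIR/$f.old" "$SNAP_DIR/$f.new" | sed -n 's/^> //p'
    fi
    mv "$SNAP_DIR/$f.new" "$SNAP_DIR/$f.old"
done
```

Because the diff is against plain text output, the same script works across Linux flavors without compilation, which matches the "easy to deploy, small footprint" claim above.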


Using local context is not new, but the method and process are: existing tools are used in new ways, with common and inexpensive components.

Fine control and manageability can be achieved together, and Lockdown simplifies tracking down problems.

It is a layered approach that runs on any system without requiring many changes. Traditional methods are outdated.

Q: Did you evaluate Little Snitch or APCTOM (sp?)? A: No.

Q: Privacy? A: No concerns; these are research or cluster machines, so users know there is the potential to be monitored. We are not monitoring their own personal machines.

Q: IRB approval is one thing, but privacy is still worth paying attention to. A: Yes, agreed.

Q: (Slide 27) What kind of information do you want someone to get out of the visualizations? A: We are trying to provide a view of what's happening on the network.

Q: In some sense this is similar: it appears to be a record of who went where. Can you view who tried to go somewhere but violated the rules? A: We were collecting data from all of the machines, but could not do more than passive monitoring since this is research. We are looking at the data and running it through possible rule sets to understand what would have been denied.

Q: I agree a port number isn't enough, but what about programs running inside interpreters, and misleading executable names? A: We chose this approach because it was easy to do. We can get the arguments passed to Java or Perl, but that requires more processing. There is always the problem of things being faked via a rootkit, and we are not solving that.

Q: This is a visualization tool. We've seen a lot of them over time, and there is too much data to visualize; we need summary reports that make sense of it. How does this scale to a long period of time or lots of machines? As it stands, this is for someone looking for something to do. A: There is a paper this fall at LISA on the viewer; it can report at the summary level, with graphs too, so the functionality is there. Q: Presenting more information is not, I think, the right approach. Admins want less information, but relevant information; they don't want tools to help them find the problems, they want to be presented with the problems. A: But how do they define problems? Q: That's the research problem! Don't get hung up on putting data into a database and then not using it well. Have you thought about how to make this useful to someone who has limited time and does not want to go on a fishing expedition? A: No, we have not. This was useful compared to going through log files, which is what admins were doing before. There is some ongoing work.

Q: there’s an issue in mgmt about what level of competence you want in your admins. You’re allowing less skilled sysadmins without a professional level of understanding. This lets people get at info more easily if they don’t have skill. A technologist looks at this and finds it more confusing.

Q: The diagram sounds useful; do you have an example of how useful it is in practice? A: No, we didn't do any of that. Q: Any feedback from administrators as to how useful it is? A: No, not yet; maybe we'll get some comments in the fall.

Q: There are many toolkits already built; is there any plan to formally show that this is better than the others? Can you conclude this solves your problem without experiments of some kind? We need to know how useful it is. Any plans? A: We would like to, but we have no plans; the competing tools are expensive to buy, and usability evaluation is outside the scope of our research group.