SOUPS: Best Paper Award

Congrats to the best paper award winners!

Philip Inglesant, M. Angela Sasse, David Chadwick, and Lei Lei Shi for the paper Expressions of Expertness: The Virtuous Circle of Natural Language for Access Control Policy Specification.


USM Workshop Closing Remarks

If you presented and would like your slides to be posted, email them or pass them to John Karat or Konstantin Beznosov.

Please provide feedback about the workshop on this website. We are deciding whether to continue this workshop and whether to start making it peer-reviewed.


Design guidelines for IT security management tools

Reviewed papers from HOT Admin and other guidelines papers (~20) from the literature and used them to create a set of 164 guidelines, looking at technical, human, and organizational factors. All 164 guidelines were then put through a card-sorting exercise, which produced a framework in which all the guidelines fit.

Some important parts of the framework and subtopics:

  • Multiple levels of abstraction – Provide each person only with the information and view that they require for their job
  • Rehearsal and Planning – Deployment and configuration of a production system can be expensive and take down a system for a short time
  • Customizable Alerting – Needs to be customizable for different portions of the organization, including thresholds, alarm suppression, and which channel is used.
  • Archiving – Tools need to keep track of critical information.
  • Workflow Support – Integration with different communication methods and sharing of information between different workflows

Hope to build this framework into a more comprehensive tool. Also surveying more papers (~45 currently) to see whether all the current guidelines fit in the model.
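As a concrete illustration of the “Customizable Alerting” guideline, a tool might expose per-department thresholds, suppression lists, and delivery channels. This is a hypothetical sketch; the configuration keys and values are invented, not taken from the paper.

```python
# Hypothetical sketch of the "Customizable Alerting" guideline:
# each organizational unit gets its own thresholds, suppression
# rules, and delivery channel. All names here are invented.

ALERT_CONFIG = {
    "datacenter": {"cpu_threshold": 0.95, "channel": "pager", "suppress": []},
    "helpdesk":   {"cpu_threshold": 0.80, "channel": "email", "suppress": ["disk_warning"]},
}

def route_alert(unit, alert_type, value):
    """Return (deliver?, channel) for an alert, honoring per-unit settings."""
    cfg = ALERT_CONFIG[unit]
    if alert_type in cfg["suppress"]:
        return (False, None)                      # suppressed for this unit
    if alert_type == "cpu" and value < cfg["cpu_threshold"]:
        return (False, None)                      # below this unit's threshold
    return (True, cfg["channel"])
```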

Q: One problem with collaboration via a medium is the security of the medium of the collaboration. Is that considered much in the guidelines?

Q: How you operationalize the guidelines is a very important question. For many organizations, design guidelines are considered pointless, but if they are embedded in tools such as Eclipse, more people will make use of them.

Q: Is there anything in your guidelines which discusses things like isolating portions of a network for security reasons?

Q: I appreciate the effort of extracting design guidelines from the current literature. Once you gather these guidelines, which are intended for tool designers, go ask the designers and see if the guidelines help at all. It’s tools, not rules. Guidelines built into tools are far more effective in getting designers to pay attention.


SOAPS: Standards, Usable Security, and Accessibility: Can we constrain the problem any further?

Mary Ellen Zurko

  • Mary is chair of the W3C’s Web Security Context Working Group – first standards effort in usable security
    • Recommendations for displaying security context information: server identity, security error handling, TLS user trust, robustness of the channel for security information
  • Bringing in accessibility:
    • W3C has an explicit commitment to accessibility in all work.
    • Many of best practices for presenting usable security context presume visual display
    • Current assistive technologies don’t even make the browser security cues available (i.e., screen readers don’t announce the presence of the “padlock”)
    • Some user agents don’t even display the URL for https: cues
    • Have a single place to collect all the security context information that users can go to.

Logotypes in X.509 certificates

  • Visual and/or audio branding information to help with trust decisions
    • recommendations: have assistive technology speak text out loud when the user requests it
    • do _not_ automatically play the logotype or speak text
    • allow configuration of specific voices for security context information

Issues and questions:

  • Is there an accessibility analog to a consistent visual position for easy user reference?
  • What form does or should a non-intrusive notification take in the case where the risk level is not determined?
  • When attention must be paid to security information, do varying voice parameters (pitch, voice choice, rate of speech) work?
  • Is there an audio equivalent of the information flooding attack?
  • Does allowing a configuration that speaks a password open a hole for a vulnerability that is otherwise unacceptable?


SOAPS: Usable Security for Persons with Alzheimer’s Disease

Kirstie Hawkey
Goal: Develop a calendar/reminder system that can be used throughout the phases of cognitive decline, adapt the information to a useful granularity and a usable form, securely store the personal information, yet allow it to be accessible for users with reduced cognitive abilities
Alzheimer’s Disease

  • Most common cause of dementia; progressive decline in cognitive abilities; abilities can fluctuate

Prior requirements gathering: Semi-structured interviews with caregivers/patients.
Device requirements:

  • Must be an authoritative source of information
  • Mobile
  • Afford multimodal interactions, especially speech
  • Maintain a presence

Difficulty: Tech Introduction

  • Mechanical skills/fears: bad previous experiences, need to recover gracefully
  • Willingness of caregivers to provide information

Difficulty: Privacy and Security Concerns:

  • Device will contain quite a bit of sensitive information, device can be easily misplaced.
  • How to authenticate with the cognitively impaired?
  • Speech/audio interaction will likely play a strong role as cognitive abilities decline

Initial thoughts:

  • Biometrics could be problematic
  • Need seamless authentication integrated with the task
  • Can personal vocabulary help interaction abilities, and provide some defense?
  • Can proximity protect against theft/loss? (RFID medical tag, which could trigger heavier authentication, e.g., passwords)
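The proximity idea above could work roughly as follows: an RFID tag in range permits low-friction access, and when the tag is absent the device falls back to heavier authentication. A minimal sketch, assuming a trusted tag ID and a placeholder password check; none of these details come from the talk.

```python
# Hedged sketch of the proximity idea from the talk: an RFID medical
# tag in range permits low-friction access; when the tag is absent,
# the device escalates to heavier authentication (e.g., a password).
# The tag ID and password check are illustrative placeholders.

TRUSTED_TAGS = {"tag-1234"}  # the wearer's RFID medical tag (hypothetical ID)

def authenticate(tags_in_range, password=None, expected_password="placeholder"):
    """Seamless when the trusted tag is nearby; escalate otherwise."""
    if TRUSTED_TAGS & set(tags_in_range):
        return "granted"                 # proximity satisfied, no prompt
    if password is not None and password == expected_password:
        return "granted"                 # fallback: explicit credential
    return "denied"
```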

Q: Does this make the patients more vulnerable?


Access Control Policy Analysis & Visualization Tools for Security Professionals

Kami Vaniea, Qun Ni, Lorrie Faith Cranor, Elisa Bertino

Société Générale: $7.2 billion trading loss in 2008. An employee moved from compliance to trading and his access wasn’t removed; he used his knowledge and access to make large, high-risk trades.
Policy administration is non-trivial. Policies are huge and difficult to work with; a policy can be anything implemented to control access, spanning physical access and file systems, and can run to thousands of rules. CMU’s swipe-card system alone allows/denies access to buildings for thousands of people. Windows uses deny-takes-precedence, while a firewall resolves conflicts by the first rule encountered: you need to understand these differences. Policies are not consistently managed, and access to IT resources tends to be ad hoc.
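The two conflict-resolution strategies mentioned (Windows-style deny-takes-precedence vs. firewall-style first matching rule) can be contrasted in a few lines. A sketch with rules simplified to (subject, decision) pairs:

```python
# Sketch contrasting two conflict-resolution strategies:
# "deny takes precedence" (Windows ACL style) vs. first matching
# rule wins (typical firewall style). Rules are simplified to
# (subject, decision) pairs for illustration.

def deny_takes_precedence(rules, subject):
    matches = [d for s, d in rules if s == subject]
    if "deny" in matches:
        return "deny"              # any matching deny overrides allows
    return "allow" if "allow" in matches else "deny"   # default deny

def first_match(rules, subject):
    for s, d in rules:
        if s == subject:
            return d               # order of rules decides the outcome
    return "deny"                  # default deny

# The same rule set yields different answers under each strategy.
rules = [("alice", "allow"), ("alice", "deny")]
```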
Research on firewalls: analyze and determine all effective policy changes given a prospective new rule. Privacy: EXAM compares policies. Physical access control: the Grey project at CMU.
Topic: how can we use visualizations to take policy analysis information and present it in ways people can use?
Privacy-aware role-based access control (PRBAC): extends RBAC with support for privacy policies by adding a purpose element. Users are assigned to roles; roles get permissions. Purposes are attached via purpose bindings.
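A minimal sketch of the PRBAC idea, with permissions extended by a purpose element. The data layout is our illustration, not the authors’ actual schema:

```python
# Minimal sketch of PRBAC: standard RBAC (users -> roles ->
# permissions) extended with a purpose attached to each permission
# binding. All names are illustrative.

user_roles = {"alice": {"nurse"}}
role_perms = {"nurse": {("read", "patient_record", "treatment")}}  # (action, resource, purpose)

def permitted(user, action, resource, purpose):
    """Allow only if some role grants the action on the resource
    for the stated purpose."""
    for role in user_roles.get(user, ()):
        if (action, resource, purpose) in role_perms.get(role, ()):
            return True
    return False
```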
Example: distributed management with a central admin and 4 department admins. [Visual walkthrough of types of rules] Good central rule: employees can access room 101 from 9am–5pm. A dept admin adds a rule that only people from project A can access. Then you AND the rules to get: project A, from 9am to 5pm. Conflict: the ANDed time windows may never overlap, etc. There are also dominating rules, with one superseding the other, which can be redundancy or an error.
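The never-overlapping time windows in this example can be caught mechanically. A sketch, assuming rules carry simple (start, end) hour windows; the rule format is invented:

```python
# Sketch of the conflict described above: a central rule and a
# department rule are ANDed, and their time windows may never
# overlap. Times are hours on a 24h clock.

def and_windows(rule_a, rule_b):
    """Intersect two (start, end) hour windows; None means the ANDed
    rule can never be satisfied -- a conflict worth flagging."""
    start = max(rule_a[0], rule_b[0])
    end = min(rule_a[1], rule_b[1])
    return (start, end) if start < end else None

central = (9, 17)    # employees may access room 101 from 9am-5pm
dept_ok = (10, 12)   # project A: 10am-12pm -> overlaps, fine
dept_bad = (18, 22)  # project A: 6pm-10pm -> never overlaps, conflict
```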
Detecting conflicts and other policy issues: can use tools but how do we present to an administrator?
Prisimos system, not yet implemented. [Screenshot] Columns of rules; rows of roles, resources, actions, conditions, and obligations. Checked box = this element and everything below it (in a group) is associated. Solid box = some of the elements below are used in the rule, but not all; need to expand. On the right side, recommended changes with dominating and conflicting rules. Clicking these zeroes in on only the relevant parts of the rule set and highlights conflicts.
Conclusion: policy authors need assistance. Tools exist. We need to build policy analysis visualizations which allow policy authors to better understand analysis of their policies.
Q: The actions row is the most interesting part. It could be an open-ended definition. What are the implications of a certain action? Do you include that? A: AFS has 7 or 8 different actions and even the CS undergrads don’t get it. If I make a role called students, what does that mean? Q: I was getting at something else: you had a row of access. As an admin, what happens if this person has access; can you predict those? A: for file systems, well defined. For file access control, it’s RWIL, etc., well defined. Something like a firewall is more complex. That’s a different issue from what this UI is looking at: an interesting research direction, but not one this UI solves.
Q: when you combine rules you get conflicts. Will this UI help me as a user to spot them in combination? A: yes, it lines up the rules next to each other so you can see them side by side and see what’s wrong. It might be good to explain that they’re being ANDed and that’s the error.
Q: how computationally hard is it to detect conflicts? And what about rules that agree but are nonsensical, like granting access to an inner room but not the outer room? A: not my area, but it’s linear and quick. For the second part, you would have to build a domain-specific set of rules. Q: could users add arbitrary constraints on what makes sense together? A: they could, but it would be a problem for the admins. Q: yes, something for them to set. A: that’s why we’re looking at central vs. dept admins.
Q: how configurable is it with complex constraints? One might need to be local to understand them. A: that’s less about analysis than presentation: how do I render the ANDed part in English? Q: what if it’s not time, but who signed an NDA? A: so far I’m staying out of organizational issues; just computer-checkable conditions you can compare. What you’re describing is complex and general; other researchers are probably working on it.
Q: another case where we want to use computers so we have to do things computers can deal with? A: this gets back to the intro speaker; the answer is in having the policies out there and thinking ahead. Q: more generally: human lives are lived in ways that are not measurable and computer-processable in some ways. Is it that companies have restrictions, so people have to constrain their lives to what a computer can be programmed to do? Universities are trying to develop minds, but artists can’t go into computer labs. What are you doing to the human spirit? A: you can build the humans back into the loop in some ways, can put humans back in charge: you want an exception, go talk to them. There are tools that make the marriage between “keep the bad guys out” and “how do I get in?” You cannot protect privacy without blocking access, so human-spirit issues are not just about gaining access.
Q: imagine no central admin. When teams want to know what they can share and not share, have you thought about the first column: some things people might not want to reveal, what happens when you don’t want to publish your rules to other teams? Don’t want to list the resources? A: some portion of the policy needs to go to *someone* for analysis. May be 3rd party, someone trusted, more involved. There are privacy issues and distributed issues, how do you combine these.
Q: how do you know these conflicts exist in real systems and that admins want answers to them? A: doing data collection right now. AFS only controls access to directories, not files, yet other Unix systems have file-based rules. It’s very easy to create conflicts like that, and especially with less skilled people this happens in practice. Another example: DBA access vs. HR data.
Q: companies might have thousands of rules; will a table-based representation scale? A: the table does not, but looking at the conflicts does: you limit the view to a handful of rules. You should not have 200 rules all conflicting at the same time; it’s more like 3–4. Don’t try to scale; pick which part of the policy you care about right now.


SOAPS: Towards a Universally Usable CAPTCHA

Graig Sauer, Harry Hochheiser, Hedi Feng and Jonathan Lazar


  • Types: Character, Image, Anomaly, Recognition, Sound
  • Examples: Gimpy, EZGIMPY, reCAPTCHA

Accessibility Concerns

  • Initially, CAPTCHAs were visual, then added audio to encompass more accessibility options

Study of accessibility/usability of audio reCAPTCHA

  • Potential concerns:
    • User comprehension, cognitive load, interference with screen readers (i.e., sound overlapping with the CAPTCHA), frustration as a result of the CAPTCHA
  • Design:
    • JAWS; external aids: braille note-taker, MS Word
    • test: six attempts (one practice), short demographics survey
  • Demographics (averages, n=6): 14.5 years of computer use, 7.25 hours of daily use, JAWS experience 7 out of 10.
  • Results: avg 2.33 correct attempts, 46% success rate
    • 90% correctness is considered acceptable (from Chellapilla et al.), well above what was observed
    • Schluessler et al. suggest 51s is an acceptable completion time; this study showed 65.54s for correct attempts and 59.56s for failed attempts.
    • Participants using external aids had higher performance on the task.

Question: What is a good measure for “good enough” (vs. the reported 5% beatable that’s taken as the worst)?

  • Are these situational/threat model related questions?

Participant complaints: audio clarity, having to guess answers

Towards an Accessible CAPTCHA:

  • Universal Usability: Products and services that are usable for every citizen. Separation between systems.
  • Human Interaction Proof, Universally Usable (HIPUU)
    • Visual and Audio HIP
    • Challenges: search space, file recognition (checksums, signatures), input type
    • expanded prototype: sound merging, drop down list, free text input
    • Universal Usability: both visual and audio systems deployed concurrently
    • Further development options: expansion of the search space, free text vs. drop-down list, inaudible white noise (to confound checksums and file-length comparisons)
    • Planned studies: usability of the expanded HIPUU, free-text study, online user study
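The “inaudible white noise” option above could be sketched like this: perturb each served clip slightly so that repeated challenges differ in checksum and content while the audible change stays negligible. The toy float samples stand in for real audio; this is our illustration, not the authors’ implementation.

```python
import hashlib
import random

# Sketch of the "inaudible white noise" idea: perturb each served
# audio challenge slightly so bots cannot recognize a repeated clip
# by checksum, while listeners hear no difference. The samples here
# are a toy float array, not real audio.

def serve_challenge(samples, amplitude=1e-4):
    """Return a copy of the clip with tiny random noise added."""
    return [s + random.uniform(-amplitude, amplitude) for s in samples]

def checksum(samples):
    return hashlib.sha256(repr(samples).encode()).hexdigest()

clip = [0.0, 0.5, -0.5, 0.25] * 100
a, b = serve_challenge(clip), serve_challenge(clip)
```

Two servings of the same clip now carry different checksums, while every sample stays within the (inaudibly small) amplitude of the original.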

Q: Are there enough sound options to defeat machine training?

A: White-noise insertion, for instance, could be hard to insert without still being possible for automatic removal.


Some Usability Considerations in Access Control Systems

Elisa Bertino, Seraphin Calo, Hong Chen, Ninghui Li, Tiancheng Li, Jorge Lobo, Ian Molloy, Qihua Wang
The RBAC (role-based access control) model: groupings of users into roles, with permissions assigned to roles rather than directly to each user. It has been around a long time and is fairly standardized.
Managing roles: In midsize enterprise with a few thousand employees, can have hundreds of roles and resources. Large enterprises -> thousands. Roles simplify management and improve security, but there is a high upfront role engineering cost to create these systems.
Two approaches to building systems from scratch: (1) Top-down: people analyze business processes and derive roles. This is how it’s done right now; it takes a lot of human effort. (2) Bottom-up (role mining): still mostly research, not practice. Use data mining to discover roles from existing system configuration data. Its practical value is controversial: how do you mine roles with real-world meanings? Mined roles may not make natural sense; they may conform to a mathematical model instead.
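One of the simplest bottom-up heuristics, grouping users with identical permission sets into candidate roles, can be sketched as follows. Real role-mining algorithms are far more sophisticated; the sketch only illustrates why mined roles can carry no business meaning on their own:

```python
from collections import defaultdict

# Hedged sketch of bottom-up role mining: propose each distinct
# permission set as a candidate role and group the users who hold
# it. The assignments below are invented example data.

def mine_roles(user_perms):
    """Map each distinct permission set to the users who hold it."""
    candidates = defaultdict(set)
    for user, perms in user_perms.items():
        candidates[frozenset(perms)].add(user)
    return dict(candidates)

assignments = {
    "alice": {"read_hr", "write_hr"},
    "bob":   {"read_hr", "write_hr"},
    "carol": {"read_eng"},
}
```

Mining here yields two candidate roles; nothing in the output says the first one "means" HR staff, which is exactly the semantics problem the talk raises.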
Value of meaningful roles: system managers add, remove, and modify on a regular basis: dynamic users, roles, resources. If a role is just #126, it is very hard to know its meaning; you want some semantic meaning. Optimizing a snapshot of an RBAC system is very static, and that’s how many role-mining systems work today.
Top-down has limitations too: it is expensive and time-consuming, companies may not have the expertise for elicitation, and the information may be confidential, so consultants are not just clueless about your business and expensive; you have also had to turn over vital business information to outsiders.
Want to build good RBAC systems. Problem 1: no standard or accepted metrics of a good system. Is this the structure, something about ease of use, how do you capture more than just a snapshot? Problem 2: orgs have trouble designing efficient and easy-to-manage RBAC systems by themselves using top-down. Automatic tools have great commercial value.
Idea: incorporate both approaches. Remember you need to maintain it and update it, RBAC is not static system.
Table: role mining roadmap, to use as much information as possible. See paper. Want to have semantic meaning after an automatic system is built. Use user attributes: name, job title, location, etc and feed that in. Where are resources located, logs of how the system is being used to learn new policies.
Managing evolving systems: need to deal with missing information as well as changes. Information may be incomplete, imprecise, have errors. Need to be able to recover from this and help improve the system. Companies merge, multiple legacy systems, etc.
Dynamic tools: limited research. Don’t know how to do this, just highlighting it’s an important area that is green, no real work here. Suggest a more holistic view of the system, apply learning techniques to generate RBAC system.
IBM has a set of products, ITIM Admin interface, one is web oriented. List of roles, see information about relationship between roles and resources. There’s also a GUI with graphs, not easy to see in the graph though.
Current support tools under evaluation to reduce configuration complexity.
Q: these roles get tangled and confused over time. It would be interesting to hear examples. A: a company of 20,000 people buys a company of 3,000. Two systems; now what?
Q: people in the banking industry were using this back in the 1980s and budgeted it as an infrastructure cost. Economics and business process may be key, e.g. you cannot go live with a new service until RBAC is updated. A: Yes. Companies would like RBAC but don’t have the know-how, and it’s very expensive.
Q: Total cost of ownership important. What about security issues wrt end user provisioning? What are the threats? A: Companies are asking for foolproof systems. Instead, assign some risk to how the system works. Measure how secure the system is and where to improve from 90% to 95%. When is it worth the money to improve?


Simplifying Network Management with Lockdown

Background: trying to provide rich policy enforcement with simple management

Known problem: a firewall has rules based on port numbers and IPs as data, but port 80 is used by so much that it’s always open. There is implicit trust that layer 3 (IP) and layer 4 (port) map to user and application, but that’s not true.

Local context is key for firewalls: what application is really involved? Who is running the app? What files are they using? Where are users trying to go?

Motivation: 2007 Computer Crime & Security Survey: the less effective measures are the most commonly deployed. Firewalls -> 97% deployment: easy to manage, well understood. End-point security / NAC at 27%: sits on the machine. Decreased from 31% in 2006, perhaps due to lack of ease of use.

Traditional solutions lack fine control; new solutions lack ease of use and manageability, which reduces correct use. We want both, hence Lockdown.

Lockdown has a policy component, enforcement, monitoring, and visualization. With Lockdown, policy is “allow outbound from Firefox,” which would block Skype. Policy is infused with local context and can specify users, files, and apps. Used LSM (sits in the Linux kernel) to track socket activity: is this allowed? It sits on each system. This leads to better debugging of connectivity issues; packets don’t just disappear into a black hole any more. With iptables you can track when timeouts occur but not why; in Lockdown, monitoring on the system gives instant feedback if the policy denies at the socket level, and can narrow the issue to the specific machine.

Monitoring runs on many campus machines via a lightweight agent script (a shell script that works across Linux flavors; nothing to compile) using netstat, ps, and lsof. Poll them at set intervals and diff the data to see what files are touched and what processes run. Easy to deploy, small footprint.

Analysis & visualization through a viewer: see user/application/host connected to. See who is using what, visually, without going through logs. Can show all users with a specific app at a given time, etc. Can see how all hosts connect as a web. Can track down an outside attack via all the machines a given machine touched. By user, tell what applications and hosts. Can pull application paths to see that a host has three versions of Firefox and tell people to upgrade as needed.
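The poll-and-diff monitoring described above reduces to taking periodic snapshots and set-differencing them. A sketch with made-up snapshot contents; the real agent would populate these from netstat, ps, and lsof:

```python
# Sketch of poll-and-diff monitoring: take periodic snapshots of
# (pid, process, remote_host) connections and diff consecutive
# snapshots to see what appeared or disappeared between polls.
# Snapshot contents here are invented examples.

def diff_snapshots(previous, current):
    """Return connections that started and ended between polls."""
    return {"started": current - previous, "ended": previous - current}

t0 = {(101, "firefox", "example.org"), (202, "sshd", "10.0.0.5")}
t1 = {(101, "firefox", "example.org"), (303, "skype", "203.0.113.9")}
changes = diff_snapshots(t0, t1)
```

Under a policy like "allow outbound from Firefox," the newly started Skype connection is exactly the kind of event the diff would surface.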


Using local context is not new, but method & process is. Use existing tools in new ways with common and inexpensive solutions.

Fine control and manageability can be achieved. Simplifies tracking down problems.

Layered approach works on any system; you don’t need to change a lot. Traditional methods are outdated.

Q: did you evaluate Little Snitch or APCTOM (sp?)? A: no.

Q: privacy? A: no concerns, research or cluster machines so they know there’s the potential to be monitored. We’re not monitoring their own personal machines.

Q: IRB approval is a fact, privacy is worth paying attention to. A: yeah, yeah.

Q: slide 27, what kind of information do you want someone to get out of the visualizations? A: trying to provide a view of what’s happening on the network.

Q: in some sense similar, this appears to be a record of who went where. Can you view who tried to go where but violated the rules? A: we were collecting data from all of the machines, but could not do more than passive monitoring since this is research. Looking at data and running through possible rule sets to understand what would have been denied.

Q: believe port number isn’t enough, but what about programs inside interpreters and misleading exe names? A: we chose because it was easy to do, and we can get the arguments to Java or Perl, but that requires more processing. Always the problem of things faked via rootkit and we’re not solving that.

Q: this is a visualization tool. We’ve seen a lot of them over time, and there’s too much data to visualize. We need summary reports that make sense of it. How does this scale to long periods of time or lots of machines? This is for someone looking for something to do. A: there is a paper in the Fall at LISA on the viewer; it can report at a summary level and produce graphs too, the functionality is there. Q: presenting more information is not, I think, the right approach. Admins want less info but relevant; they don’t want tools to help them find the problems, they want to be presented with the problems. A: but how do they define problems? Q: that’s the research problem! Don’t get hung up on putting data into a database and not using it well. Have you thought about how to make it useful to someone who has limited time and does not want to go on a fishing expedition? A: no, we have not. This was useful compared to going through log files, which is what admins were doing before. There is some ongoing work.

Q: there’s an issue in mgmt about what level of competence you want in your admins. You’re allowing less skilled sysadmins without a professional level of understanding. This lets people get at info more easily if they don’t have skill. A technologist looks at this and finds it more confusing.

Q: diagram sounds useful, do you have an example of how useful it is in practice? A: no, we didn’t do any of that stuff. Q: any feedback from administrators as to how useful it is? A: no, not yet, maybe we’ll get some comments in the fall.

Q: there are many toolkits already built, is there any plan to formally see that this is better than others? Do you conclude this solves your problem without experiments of some kind? Need to know how useful it is. Any plans? A: we would like to but we don’t have any plans, the tools are expensive to buy and usability is outside the scope of our research group.


SOAPS: Accessibility and Graphical Passwords

Alain Forget, Sonia Chiasson and Robert Biddle

Previous systems:
Click-based graphical passwords

  • PassPoints: security issues, users tend to click on the same points as other users in a given image
  • Cued Click-Points:
    • Click once on each of several images, where you click determines the next image you see.
    • Users still click on the same spots in a given image.
  • Persuasive CCP:
    • Eliminated the hotspots
  • Accessibility of these solutions?
    • Rely on vision and fine motor control
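The Cued Click-Points mechanism, where the click location determines the next image, can be sketched by snapping a click to its tolerance cell and hashing. The grid size and hash mapping are illustrative choices, not the published scheme:

```python
import hashlib

# Sketch of the Cued Click-Points idea: the user's click is snapped
# to a grid cell, and (image id, cell) deterministically selects the
# next image, so the same click path always replays the same image
# sequence. Grid size and hash mapping are illustrative assumptions.

def next_image(image_id, x, y, n_images, cell=19):
    gx, gy = x // cell, y // cell          # tolerance square the click fell in
    digest = hashlib.sha256(f"{image_id}:{gx}:{gy}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_images
```

Any click inside the same tolerance square maps to the same next image, which is what gives the user an implicit cue that the previous click was correct.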

Decouple Content vs Presentation

  • i.e., like how CSS does for web sites.
  • In click-based systems:
    • Presentation: Cue (image)
    • Selection: Response (clicks on a specific area)
  • Generalized model:
    • presentation: any cue, any modality (image, text, sound, haptic, video…)
    • (But shouldn’t provide a predictable response across all users)
    • response: any user input, any modality (clicking, typing, verbal, gesture, mouse movement…)
    • Example:
      • PassSounds: music clip, click at appropriate time.
      • Musicians can synchronize at approximately 250ms
      • early conclusions: ~5 clicks, 30s max, ±0.5s accuracy.
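Verification of a PassSounds response might look like the following; the scheme is inferred from the early conclusions above (~5 clicks, 30s clip, ±0.5s accuracy), not from an actual implementation:

```python
# Sketch of how a PassSounds response might be verified: each click
# time must fall within +/-0.5s of the corresponding stored time.
# The stored times below are invented example values.

def verify(stored_times, click_times, tolerance=0.5):
    return (len(stored_times) == len(click_times) and
            all(abs(s - c) <= tolerance
                for s, c in zip(stored_times, click_times)))

password = [3.0, 8.5, 14.0, 21.5, 27.0]   # seconds into the clip
```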


  • PassPoints:
    • 451×331 image, 5 clicks, 19×19 tolerance ≈ 43-bit password space
    • Minimize hotspots by using several images, providing selection assistance
  • PassSounds:
    • 30s, 1s tolerance. ~17 bits (about a 5-digit PIN)
    • Minimize hotspots by: using several clips, suggesting clicks, identifying other elements of the clip?
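The quoted bit counts can be reproduced with a short calculation. Treating PassPoints clicks as ordered choices of a 19×19 tolerance square, and PassSounds clicks as an unordered choice of 5 of 30 one-second slots, matches the reported ~43 and ~17 bits; the ordered/unordered reading is our assumption to reconcile the numbers.

```python
import math

# Worked check of the password-space figures quoted above.
# PassPoints: 5 ordered clicks, each choosing one 19x19 tolerance
# square in a 451x331 image. PassSounds: 5 of 30 one-second slots,
# order ignored (our assumption).

squares = (451 // 19) * (331 // 19)           # tolerance squares per image
passpoints_bits = math.log2(squares ** 5)     # ~43 bits

passsounds_bits = math.log2(math.comb(30, 5)) # ~17 bits
```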


  • Allow any combination of modalities
  • Caution: cue and response cannot be evaluated in isolation


Q: Will there be a bandwidth concern with using these techniques?

A: Images seem modest; most of these techniques aren’t particularly high bandwidth. Perhaps there’s a compromise.
Q: Have you done longitudinal studies to test recall?
A: No testing over time yet, but testing interference with other passwords.