the cups blog

07-23-08

posted by bp

SOAPS: Usable Security for Persons with Alzheimer’s Disease

Kirstie Hawkey
Goal: Develop a calendar/reminder system that can be used throughout the phases of cognitive decline, adapts information to a useful granularity and a usable form, and securely stores personal information while keeping it accessible to users with reduced cognitive abilities.
Alzheimer’s Disease

  • Most common cause of dementia; progressive decline in cognitive abilities; abilities can fluctuate

Prior requirements gathering: semi-structured interviews with caregivers/patients.
Device requirements:

  • Must be an authoritative source of information
  • Mobile
  • Afford multimodal interactions, especially speech
  • Maintain a presence

Difficulty: Tech Introduction

  • Mechanical skills/fears: bad previous experiences, need to recover gracefully
  • Willingness of caregivers to provide information

Difficulty: Privacy and Security Concerns:

  • Device will contain quite a bit of sensitive information, device can be easily misplaced.
  • How to authenticate with the cognitively impaired?
  • Speech/audio interaction will likely play a strong role as cognitive abilities decline

Initial thoughts:

  • Biometrics could be problematic
  • Need seamless authentication with the task
  • Can personal vocabulary help interaction abilities, and provide some defense?
  • Can proximity protect against theft/loss? (RFID medical tag, which could trigger heavier authentication, i.e., passwords, when it is out of range; see the sketch below)
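
One way to read the proximity idea, as a rough sketch only: the tag-reading and password functions below are hypothetical stand-ins, not anything from the talk.

    # Hypothetical proximity-triggered authentication: if the patient's RFID
    # medical tag is in range, a light check suffices; otherwise the device
    # falls back to heavier authentication (e.g., a password).

    def authenticate(read_tag_id, expected_tag_id, ask_password, check_password):
        if read_tag_id is not None and read_tag_id == expected_tag_id:
            return True                      # tag in range: proximity is enough
        # Tag missing: device may be misplaced or stolen, so escalate.
        return check_password(ask_password())

    # Example with stand-in callbacks:
    granted = authenticate(
        read_tag_id=None,                    # nothing detected by the reader
        expected_tag_id="patient-42",
        ask_password=lambda: "wrong-guess",
        check_password=lambda pw: pw == "example-secret",
    )
    print("access granted" if granted else "access denied")   # access denied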

Questions:
Q: Does this make the patients more vulnerable?

07-23-08

posted by aleecia

Access Control Policy Analysis & Visualization Tools for Security Professionals

Kami Vaniea, Qun Ni, Lorrie Faith Cranor, Elisa Bertino

Societe Generale: $7.2 billion trading loss in 2008. An employee moved from compliance to trading and his old access wasn't removed; he used that knowledge and access to make high-risk, large trades.
Policy administration is non-trivial. Policies are huge and difficult to work with; they may govern both physical access and file systems, and can contain thousands of rules. CMU's swipe-card system alone allows or denies building access for thousands of people. Windows uses deny-takes-precedence, while firewalls resolve conflicts with the first matching rule: administrators need to understand these differences. Policies are not consistently managed, and access to IT resources tends to be ad hoc.
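
A toy contrast of those two conflict-resolution styles; the rule format here is invented for illustration and is not drawn from Windows, any firewall product, or the paper.

    # Each rule is (effect, predicate). The same rule set answers differently
    # under first-match (firewall style) and deny-overrides (Windows style).

    rules = [
        ("allow", lambda req: req["port"] == 80),
        ("deny",  lambda req: req["user"] == "contractor"),
    ]

    def first_match(rules, req):
        for effect, pred in rules:
            if pred(req):
                return effect        # first matching rule wins
        return "deny"                # default deny

    def deny_overrides(rules, req):
        effects = {effect for effect, pred in rules if pred(req)}
        if "deny" in effects:
            return "deny"            # any matching deny wins
        return "allow" if "allow" in effects else "deny"

    req = {"user": "contractor", "port": 80}
    print(first_match(rules, req))       # allow
    print(deny_overrides(rules, req))    # deny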
Prior research on firewalls analyzes and determines all effective policy changes given a prospective new rule. Privacy: EXAM compares policies. Physical access control: the Grey project at CMU.
Topic: how can we use visualizations to take policy analysis information and present it in ways people can use?
Privacy-aware role-based access control (PRBAC): extends RBAC with support for privacy policies by adding a purpose element. Users are assigned to roles, and roles get permissions; purposes are attached to permissions through purpose bindings.
Example: distributed management with a central admin and 4 department admins. [Visual walkthrough of types of rules] A good central rule: employees can access room 101 from 9 am to 5 pm. A department admin adds a rule that only people from project A can access. You AND the rules together to get: project A, 9 am to 5 pm. Conflicts arise when, for example, the two time windows never overlap. There are also dominating rules, where one supersedes the other; that can be redundant or an error. (A toy conflict check is sketched below.)
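
A minimal sketch of that ANDing and conflict check. The representation (plain hour ranges) is invented for the example; real PRBAC rules carry roles, purposes, conditions, and obligations.

    # Time windows as (start_hour, end_hour). ANDing two rules intersects their
    # windows; an empty intersection is the "never overlap" conflict, and
    # containment corresponds to a dominating rule.

    def and_windows(a, b):
        start, end = max(a[0], b[0]), min(a[1], b[1])
        return (start, end), start >= end        # (combined window, conflict?)

    def dominates(a, b):
        return a[0] <= b[0] and a[1] >= b[1]     # a allows everything b allows

    central = (9, 17)     # employees may enter room 101 from 9am to 5pm
    dept_a  = (10, 16)    # project A hours: ANDs cleanly, central dominates
    dept_b  = (18, 22)    # evening-only rule: never overlaps with central

    print(and_windows(central, dept_a))   # ((10, 16), False)
    print(and_windows(central, dept_b))   # ((18, 17), True) -> conflict
    print(dominates(central, dept_a))     # True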
Detecting conflicts and other policy issues: can use tools but how do we present to an administrator?
Prisimos system, not yet implemented. [Screen shot] Columns of rules, rows of roles, resources, actions, conditions, and obligations. Check box = this element and anything below (in a group) is associated. Solid box = some of the ones below are used in the rule but not all, need to expand. Right side, recommended changes with dominating and conflicting rules. Can click these to zero in and only look at relevant parts of the rule set, highlights conflicts.
Conclusion: policy authors need assistance. Tools exist. We need to build policy analysis visualizations which allow policy authors to better understand analysis of their policies.
Q: The Actions row is the most interesting part. It could be an open-ended definition. What are the implications of a certain action? Do you include that? A: AFS has 7 or 8 different actions and even the CS undergrads don't get them. If I make a role called students, what does that mean? Q: I was getting at something else; you had a row of access. As an admin, what happens if this person has access, can you predict that? A: For file systems, it's well defined. For file access control, it's RWIL, etc., well defined. Something like a firewall is more complex. That's a different issue from what this work is looking at. Interesting research direction but not one this UI solves.
Q: combine rules and get conflicts. Will this UI help me as a user to spot them in combination? A: yes, line up the rules next to each other so you can see them side by side and see what’s wrong. Might be good to explain if they’re being and’ed and that’s the error.
Q: How computationally hard is it to detect conflicts? And what about rules that agree but are nonsensical, like granting access to an inner room but not the outer room? A: Not my area, but it's linear and quick. For the second part, you would have to build a domain-specific set of rules. Q: Could users add arbitrary constraints on what makes sense together? A: They could, but it would be a problem for the admins. Q: Yes, something for them to set. A: That's why we're looking at central vs. dept admins.
Q: How configurable is this with complex constraints? You might need local knowledge to understand them. A: Less about analysis than presentation: how do I render the ANDed part in English? Q: What if it's not time, what if it's who signed an NDA? A: So far I'm staying out of organizational issues, just computer-checkable conditions you can compare. What you're describing is complex and general; other researchers are probably working on it.
Q: Another case where we want to use computers, so we have to do things computers can deal with? A: This gets back to the intro speaker; the answer is in having the policies out there and thinking ahead. Q: More generally: human lives are lived in ways that are not measurable and computer-processable in some ways. Is it that companies have restrictions, so people have to constrain their lives to what a computer can be programmed to do? Universities are trying to develop minds, but artists can't go into computer labs. What are you doing to the human spirit? A: You can build the humans back into the loop in some ways, can put humans back in charge: if you want an exception, go talk to them. There are tools that make the marriage between "keep the bad guys out" and "how do I get in?" You cannot protect privacy without blocking access, so human-spirit issues are not just about gaining access.
Q: imagine no central admin. When teams want to know what they can share and not share, have you thought about the first column: some things people might not want to reveal, what happens when you don’t want to publish your rules to other teams? Don’t want to list the resources? A: some portion of the policy needs to go to *someone* for analysis. May be 3rd party, someone trusted, more involved. There are privacy issues and distributed issues, how do you combine these.
Q: how do you know these conflicts exist in real systems and that admins want to have answers to? A: doing data collection right now. AFS only controls access to directories not files. Yet other unix systems have file-based rules. Very easy to create conflicts like that. Especially when you have less skilled people this happens in practice. Another example: DBA access v. HR data.
Q: companies might have thousands of rules, will a table based representation scale? A: the table does not, but looking at the conflicts does: you limit the view to a handful of rules. Should not have 200 rules all conflicting at the same time, more like 3-4. Don’t try to scale, try to pick which part of the policy do I care about right now.

07-23-08

posted by bp

SOAPS: Towards a Universally Usable CAPTCHA

Graig Sauer, Harry Hochheiser, Heidi Feng and Jonathan Lazar

Introduction:

  • Types: Character, Image, Anomaly, Recognition, Sound
  • Examples: Gimpy, EZGIMPY, reCAPTCHA

Accessibility Concerns

  • Initially, CAPTCHAs were visual, then added audio to encompass more accessibility options

Study of accessibility/usability of audio reCAPTCHA

  • Potential concerns:
    • User comprehension, cognitive load, interference with screen readers (ie, overlapping sound with the CAPTCHA), frustrations as a result of the CAPTCHA
  • Design:
    • JAWS; external aids: Braille note taker, MS Word
    • test: six attempts (one practice), short demographics survey
  • Demographics (averages, n=6): 14.5 years of computer use, 7.25 hours of daily use, self-rated JAWS experience of 7 out of 10.
  • Results: avg 2.33 attempts correct (46%)
    • 90% correctness is considered acceptable (from Chellapilla et al.), far above what was observed
    • Schluessler et al. suggest 51 s is an acceptable completion time; this study showed 65.54 s for correct attempts and 59.56 s for failed attempts.
    • Participants using external aids had higher performance on the task.

Question: What is a good measure for "good enough" (vs. the reported 5%-beatable threshold that's taken as the worst case)?

  • Are these situational/threat model related questions?

Participant complaints: audio clarity, having to guess answers

Towards an Accessible CAPTCHA:

  • Universal Usability: Products and services that are usable for every citizen. Separation between systems.
  • Human Interaction Proof, Universally Usable (HIPUU)
    • Visual and Audio HIP
    • Challenges: search space, file recognition (checksums, signatures), input type
    • expanded prototype: sound merging, drop down list, free text input
    • Universal Usability: both visual and audio systems deployed concurrently
    • Further development options: expansion of the search space, free text vs. drop-down list, inaudible white noise to confound checksum and file-length comparisons (see the sketch after this list)
    • Planned studies: usability of the expanded HIPUU, free-text study, online user study
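
A rough illustration of the inaudible-noise idea mentioned above; raw integer samples and a plain hash stand in for real audio handling, which is certainly more involved in HIPUU.

    import hashlib
    import random

    # Mixing a tiny amount of random noise into the same clip changes the byte
    # stream (and so its checksum and length fingerprint) on every serving,
    # frustrating naive file-recognition attacks.

    def with_inaudible_noise(samples, amplitude=1):
        # amplitude is negligible relative to 16-bit audio (-32768..32767)
        return [s + random.randint(-amplitude, amplitude) for s in samples]

    def fingerprint(samples):
        return hashlib.sha256(repr(samples).encode()).hexdigest()[:12]

    clip = [1000, -2000, 1500, 0, 300]               # stand-in for real samples
    print(fingerprint(with_inaudible_noise(clip)))
    print(fingerprint(with_inaudible_noise(clip)))   # different hash, same sound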

Questions:
Q: Are there enough sound options to defeat machine training?

A: White noise, for instance, could be hard to insert in a way that isn't still susceptible to automatic removal.

07-23-08

posted by aleecia

Some Usability Considerations in Access Control Systems

Elisa Bertino, Seraphin Calo, Hong Chen, Ninghui Li, Tiancheng Li, Jorge Lobo, Ian Molloy, Qihua Wang
The RBAC (role-based access control) model: users are grouped into roles, and permissions are assigned to roles rather than directly to each user. It has been around a long time and is fairly standardized. (A minimal sketch follows.)
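
For readers new to the model, a minimal RBAC sketch with toy names; real deployments add sessions, role hierarchies, and constraints.

    # Users are assigned to roles; permissions are granted to roles, never
    # directly to users. An access check walks user -> roles -> permissions.

    role_permissions = {
        "nurse":  {"read_chart"},
        "doctor": {"read_chart", "write_prescription"},
    }
    user_roles = {
        "alice": {"doctor"},
        "bob":   {"nurse"},
    }

    def allowed(user, permission):
        return any(permission in role_permissions.get(role, set())
                   for role in user_roles.get(user, set()))

    print(allowed("alice", "write_prescription"))   # True
    print(allowed("bob", "write_prescription"))     # False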
Managing roles: In midsize enterprise with a few thousand employees, can have hundreds of roles and resources. Large enterprises -> thousands. Roles simplify management and improve security, but there is a high upfront role engineering cost to create these systems.
Two approaches to build systems from scratch: (1) Top-down: people analyze business processes and derive roles. This is how it's done right now, with lots of human effort. (2) Bottom-up (role mining): still mostly research, not practice. Use data mining to discover roles from existing system configuration data. Its practical value is controversial: how do you mine roles with real-world meanings? Mined roles may not make natural sense; they may conform to a mathematical model instead.
Value of meaningful roles: system managers add, remove, and modify on a regular basis: dynamic users, roles, resources. If a role is just "#126", it is very hard to know its meaning: you want some semantic meaning. Optimizing a snapshot of an RBAC system is very static, and that's how many role-mining systems work today.
Top-down has limitations too: expensive, time consuming, companies may not have the expertise for elicitation, and the information may be confidential, so consultants are not just expensive and clueless about your business; you've also had to turn over vital business information to outsiders.
Want to build good RBAC systems. Problem 1: no standard or accepted metrics of a good system. Is this the structure, something about ease of use, how do you capture more than just a snapshot? Problem 2: orgs have trouble designing efficient and easy-to-manage RBAC systems by themselves using top-down. Automatic tools have great commercial value.
Idea: incorporate both approaches. Remember you need to maintain it and update it, RBAC is not static system.
Table: role mining roadmap, to use as much information as possible. See paper. Want to have semantic meaning after an automatic system is built. Use user attributes: name, job title, location, etc and feed that in. Where are resources located, logs of how the system is being used to learn new policies.
Managing evolving systems: need to deal with missing information as well as changes. Information may be incomplete, imprecise, have errors. Need to be able to recover from this and help improve the system. Companies merge, multiple legacy systems, etc.
Dynamic tools: limited research. We don't know how to do this; just highlighting that it's an important green-field area with no real work yet. Suggest a more holistic view of the system, applying learning techniques to generate the RBAC system.
IBM has a set of products, ITIM Admin interface, one is web oriented. List of roles, see information about relationship between roles and resources. There’s also a GUI with graphs, not easy to see in the graph though.
Current support tools under evaluation to reduce configuration complexity.
Q: These roles get tangled and confused over time. Would be interesting to hear examples. A: A company of 20,000 people buys a company of 3,000. Two systems, now what?
Q: people in the banking industry were using this back in the 1980s. Budgeted this as an infrastructure cost? Economics and business process may be key, e.g. cannot go live with a new service until RBAC updated. A: Yes. Companies would like RBAC but don’t have know-how and it’s very expensive.
Q: Total cost of ownership important. What about security issues wrt end user provisioning? What are the threats? A: Companies are asking for foolproof systems. Instead, assign some risk to how the system works. Measure how secure the system is and where to improve from 90% to 95%. When is it worth the money to improve?

07-23-08

posted by aleecia

Simplifying Network Management with Lockdown

Background: trying to provide rich policy enforcement with simple management

Known problem, firewall has rules based on port numbers and IP as data, but port 80 is used by so much that it’s always open. Implicit trust that layer 3 (IP) and 4 (port) map to user and application, but that’s not true.

Local context is key for firewalls: what application is really involved? Who is running the app? What files are they using? Where are users trying to go?

Motivation: 2007 Computer Crime & Security Survey shows less effective measures are the most commonly deployed. Firewalls -> 97% deployment: easy to manage, well understood. End-point security / NAC at 27%: sits on the machine. Decreased from 31% in 2006, perhaps due to lack of ease of use.

Traditional solutions lack fine control; new solutions lack ease of use and manageability, which reduces correct use. Want both, hence Lockdown.

Lockdown has a policy component, enforcement, monitoring, and visualization. With Lockdown, a policy like "allow outbound from Firefox" would block Skype. Policy is infused with local context and can specify users, files, and apps. Enforcement uses LSM (sits in the Linux kernel) to track socket activity and ask: is this allowed? It sits on each system. This leads to better debugging of connectivity issues; packets don't just disappear into a black hole any more. With an IP table you can track when timeouts occur but usually don't know why; in Lockdown, monitoring happens on the system, so there is instant feedback if the policy denies at the socket level, and problems can be narrowed to the specific machine.

Monitoring runs on many campus machines via a lightweight agent script (a shell script portable across Linux flavors, nothing to compile) using netstat, ps, and lsof. It polls at set intervals and diffs the data to see what files are touched and what processes run. Easy to deploy, small footprint. (A rough sketch of such a polling agent follows below.)

Analysis and visualization go through a viewer: see which user/application/host is connected to what, visually, without going through logs. It can show all users running a specific app at a given time, show how all hosts connect as a web, and track down an outside attack via all machines a given machine touched. By user, it can tell what applications and hosts were involved. It can pull application paths to see that a host has three versions of Firefox, so people can be told to upgrade as needed.
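
A minimal sketch of that kind of polling-and-diffing agent, assuming a Unix host with netstat, ps, and lsof on the PATH; the interval, parsing, and reporting are simplified placeholders rather than the Lockdown implementation.

    import subprocess
    import time

    # Poll cheap system tools at a fixed interval and report only what changed,
    # in the spirit of the lightweight agent described above.

    COMMANDS = {
        "connections": ["netstat", "-tn"],
        "processes":   ["ps", "-eo", "user,pid,comm"],
        "open_files":  ["lsof", "-nP", "-i"],
    }

    def snapshot():
        snap = {}
        for name, cmd in COMMANDS.items():
            out = subprocess.run(cmd, capture_output=True, text=True).stdout
            snap[name] = set(out.splitlines())
        return snap

    previous = snapshot()
    while True:
        time.sleep(60)                           # polling interval (placeholder)
        current = snapshot()
        for name in COMMANDS:
            for line in sorted(current[name] - previous[name]):
                print(f"[{name}] new: {line}")   # e.g., a fresh outbound socket
        previous = current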

Conclusions

Using local context is not new, but method & process is. Use existing tools in new ways with common and inexpensive solutions.

Fine control and manageability can be achieved. Simplifies tracking down problems.

Layered approach on any system, don’t need to change a lot. Traditional methods are outdated.

http://netscale.cse.nd.edu/lockdown

Q: did you evaluate little snitch or APCTOM (sp?) A: no.

Q: privacy? A: no concerns, research or cluster machines so they know there’s the potential to be monitored. We’re not monitoring their own personal machines.

Q: IRB approval is a fact, privacy is worth paying attention to. A: yeah, yeah.

Q: slide 27, what kind of information do you want someone to get out of the visualizations? A: trying to provide a view of what’s happening on the network.

Q: in some sense similar, this appears to be a record of who went where. Can you view who tried to go where but violated the rules? A: we were collecting data from all of the machines, but could not do more than passive monitoring since this is research. Looking at data and running through possible rule sets to understand what would have been denied.

Q: believe port number isn’t enough, but what about programs inside interpreters and misleading exe names? A: we chose because it was easy to do, and we can get the arguments to Java or Perl, but that requires more processing. Always the problem of things faked via rootkit and we’re not solving that.

Q: This is a visualization tool. We've seen a lot of them over time; there's too much data to visualize. We need summary reports that make sense of it. How does this scale to long periods of time or lots of machines? This is for someone looking for something to do. A: There is a paper in the fall at LISA on the viewer; it can report at a summary level and with graphs too, the functionality is there. Q: Presenting more information, I don't think, is the right approach. Admins want less but relevant info; they don't want tools to help them find the problems, they want to be presented with the problems. A: But how do they define problems? Q: That's the research problem! Don't get hung up on putting data into a database and not using it well. Have you thought about how to make it useful to someone who has limited time and does not want to go on a fishing expedition? A: No, we have not. This was useful compared to going through log files, which is what admins were doing before. There is some ongoing work.

Q: there’s an issue in mgmt about what level of competence you want in your admins. You’re allowing less skilled sysadmins without a professional level of understanding. This lets people get at info more easily if they don’t have skill. A technologist looks at this and finds it more confusing.

Q: diagram sounds useful, do you have an example of how useful it is in practice? A: no, we didn’t do any of that stuff. Q: any feedback from administrators as to how useful it is? A: no, not yet, maybe we’ll get some comments in the fall.

Q: there are many toolkits already built, is there any plan to formally see that this is better than others? Do you conclude this solves your problem without experiments of some kind? Need to know how useful it is. Any plans? A: we would like to but we don’t have any plans, the tools are expensive to buy and usability is outside the scope of our research group.

07-23-08

posted by bp

SOAPS: Accessibility and Graphical Passwords

Alain Forget, Sonia Chiasson and Robert Biddle

Previous systems:
Click-based graphical passwords

  • PassPoints: security issues, users tend to click on the same points as other users in a given image
  • Cued Click-Points:
    • Click once on each of several images, where you click determines the next image you see.
    • Users still click on the same spots in a given image.
  • Persuasive CCP:
    • Eliminated the hotspots
  • Accessibility of these solutions?
    • Rely on vision and fine motor control

Decouple Content vs Presentation

  • ie, like how CSS does for web sites.
  • In click-based systems:
    • Presentation: Cue (image)
    • Selection: Response (clicks on a specific area)
  • Generalized model:
    • presentation: any cue, any modality (image, text, sound, haptic, video…)
    • (But shouldn’t provide a predictable response across all users)
    • response: any user input, any modality (clicking, typing, verbal, gesture, mouse movement…)
    • Example:
      • PassSounds: music clip, click at appropriate time.
      • Musicians can synchronize to within approximately 250 ms
      • Early conclusions: ~5 clicks, 30 s max, ±0.5 s accuracy.

Security:

  • PassPoints:
    • 451×331, 5 clicks, 19×19 tolerance ~ 43 bits password space
    • Minimize hotspots by using several images, providing selection assistance
  • PassSounds:
    • 30 s, 1 s tolerance: ~17 bits (about a 5-digit PIN); see the bit-estimate sketch after this list
    • Minimize hotspots by: using several clips, suggesting clicks, identifying other elements of the clip?
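
The bit figures above can be reproduced back-of-the-envelope; the exact models in the papers may differ.

    from math import comb, log2

    # PassPoints: roughly (image area / tolerance area) choices per click,
    # five ordered clicks.
    cells = (451 * 331) / (19 * 19)
    passpoints_bits = 5 * log2(cells)        # ~43 bits

    # PassSounds: choose 5 click moments out of 30 one-second slots.
    passsounds_bits = log2(comb(30, 5))      # ~17 bits; a 5-digit PIN is ~16.6

    print(round(passpoints_bits, 1), round(passsounds_bits, 1))   # 43.5 17.1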

Alternatives

  • Allow any combination of modalities
  • Caution: cue and response cannot be evaluated in isolation

Discussion:

Q: Will there be a bandwidth concern with using these techniques?

A: Images seem modest; most of these techniques aren't particularly high bandwidth. Perhaps there's a compromise.
Q: Have you done longitudinal studies to test recall?
A: No testing over time yet, but testing interference with other passwords.

07-23-08

posted by bp

SOAPS: Accessible voice CAPTCHAs for Internet Telephony

Background

  • CAPTCHAs help protect services from automated requests
  • Internet Telephony (VoIP) is becoming popular -> risk of voice spam
  • Could CAPTCHAs be used to prevent VoIP spam?
  • Callers not on a whitelist or blacklist would have to prove they are human before calling a recipient

Research questions:

  • Can CAPTCHAs work as a spam prevention mechanism?
  • Can CAPTCHAs be adapted to telephony:
  • Wide variety of phones and networks
  • How will users react? Are solutions sufficiently usable and accessible?

Scenario:

  • Unknown callers are diverted to a CAPTCHA server; on passing, the call is transferred to the recipient. The recipient can update the access list to prevent future CAPTCHAs (or calls). (A minimal screening sketch follows this list.)
  • Note: John (initiator) doesn’t need any additional hardware or software.
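
A sketch of that screening flow; the function names and the CAPTCHA check below are placeholders, not the study's actual Skype framework.

    # Known-good callers connect directly, known-bad are dropped, and unknown
    # callers are diverted to an audio CAPTCHA before the call is transferred.

    def handle_call(caller, whitelist, blacklist, run_audio_captcha, connect, reject):
        if caller in blacklist:
            return reject(caller)
        if caller in whitelist:
            return connect(caller)
        if run_audio_captcha(caller):        # diverted to the CAPTCHA server
            return connect(caller)           # passed: transfer to the recipient
        return reject(caller)

    # Example with stand-ins:
    handle_call(
        caller="john",
        whitelist={"alice"},
        blacklist=set(),
        run_audio_captcha=lambda c: True,    # pretend John solved the CAPTCHA
        connect=lambda c: print(f"connecting {c}"),
        reject=lambda c: print(f"rejecting {c}"),
    )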

Implementation:

  • Implemented test framework for Skype
  • 5-digit audio CAPTCHA with no distortions (ie, not secure, emphasis on testing user reactions)

Study:

  • 10 participants, varied length of instructions given to users as a subgroup condition (2 lengths)
  • Input methods: laptop keyboard and plug-in “mobile phone”

Results:

  • Most were surprised about the CAPTCHA (not informed ahead of time)
  • Majority of users passed on the first try
  • Users in short-instructions group made more mistakes
  • The overall grade of easiness was high
  • After multiple tasks, users’ “pleasant” grade decreased.

Usability challenges:

  • Becomes boring quickly, but skipping instructions takes extra instructional time.
  • Callers had difficulty comprehending all of the information at once (e.g., * key to submit, # to reset).
  • What works with one device can be unusable on another (e.g., gaps between digits need to be large for mobile phone users).

Design improvements:

  • “Press any number key”, eg, when a certain sound is heard
  • “Press any key n times”
  • Redesign cancel function (or remove it and have user retry on failure)
  • Design instructions carefully and present them in the user's native language. More information may be necessary, despite the desire to limit the length of the interaction.

Next steps:

  • Test with different user groups: older users, non-native speakers of the CAPTCHA language
  • Is the CAPTCHA feasible for spam protection?

Questions:

Q:Was the task digit-at-a-time or all digits at once?
A: Task was digit-at-a-time.

07-23-08

posted by bp

SOAPS: An improved audio CAPTCHA

Jennifer Tam, Jiri Simsa, David Huggins-Daines, Luis von Ahn and Manuel Blum

CAPTCHAs – a test to determine if the user is human.
Some existing audio CAPTCHAs have only a 70% human passing rate, because the additional noise injected into the audio makes discerning the digits difficult.
Additional concern: task time is much greater than with visual CAPTCHAs.

Are current CAPTCHAs secure?

  • Considered insecure if it can be beaten 5% of the time.
  • Because of the limited vocabulary, a trained system can beat them 45% of the time.

Methodology:
Testing targeted at: Google, reCAPTCHA, Digg
Sampled 1000 from each

Algorithm to break them:

  • Segment audio
  • Features: classify as digit/letter, noise, or voice

Dataset:

  • Manually segmented/labelled.
  • Testing used an automatic segmenting algorithm

Feature algorithms:

  • Mel-frequency cepstral coefficients (MFCC)
  • Perceptual linear prediction (PLP)
  • Relative spectral transform with PLP (RASTA-PLP)

Trained with AdaBoost, SVM, k-NN
Algorithm: segment, recognize (features -> labels), repeat until all segments are consumed or a maximum solution size is reached (sketched below).
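
A skeletal version of that pipeline; segmentation, feature extraction, and the classifier are stubbed out, so this shows the shape of the attack rather than the authors' code.

    # Split the clip into candidate segments, extract features for each,
    # classify each segment as a digit/letter or noise, and stop once the
    # expected maximum solution length is reached.

    def break_audio_captcha(clip, segment, features, classifier, max_len=8):
        answer = []
        for seg in segment(clip):                        # e.g., energy-based splits
            label = classifier.predict([features(seg)])[0]
            if label != "noise":
                answer.append(label)
            if len(answer) >= max_len:
                break
        return "".join(answer)

    # In a real attack, `features` would compute MFCC/PLP/RASTA-PLP vectors and
    # `classifier` would be an AdaBoost, SVM, or k-NN model trained on manually
    # labeled segments. Tiny stand-ins keep this sketch runnable:
    class StubClassifier:
        def predict(self, X):
            return ["7" for _ in X]

    print(break_audio_captcha("clip", lambda c: ["s1", "s2"],
                              lambda s: [0.0], StubClassifier()))   # "77"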

  • 66% Google, 45% reCAPTCHA, 71% Digg. These are for exact matches, rates are higher if errors are allowed (at least Google permits 1 error in the response).

How to build a better audio CAPTCHA?

  • Apply reCAPTCHA’s visual approach to audio techniques.
  • Similar to visual reCAPTCHA, have users transcribe audio that failed Automatic Speech Recognition, but use audio that is spoken clearly.

How will it work?

  • Start with phrases with known transcriptions.
  • The user transcribes two adjacent phrases: one with a known transcription and one unknown. (See the verification sketch after this list.)
  • The unknown phrase's transcription is recorded once the user's transcription of the known phrase matches.
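
A sketch of that pairing logic; the normalization and the simple equality check are invented here, whereas the deployed system relies on the statistical acceptance tests mentioned in the Q&A below.

    # The user hears two clips: one with a known transcription and one unknown.
    # If their answer for the known clip matches, they pass the CAPTCHA and
    # their answer for the unknown clip is stored as a candidate transcription.

    def normalize(text):
        cleaned = "".join(c.lower() if c.isalnum() else " " for c in text)
        return cleaned.split()                  # ignore case and punctuation

    def grade(known_truth, answer_known, answer_unknown, candidates):
        if normalize(answer_known) == normalize(known_truth):
            candidates.append(answer_unknown)   # a vote toward the unknown clip
            return True                         # treated as human
        return False

    votes = []
    print(grade("the quick brown fox", "The quick, brown fox!", "hello world", votes))  # True
    print(votes)                                # ['hello world']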

Security Analysis:

  • Speaker independent recognition and open vocabularies are difficult for ASR systems.
  • AM broadcast and MP3 encoding cause coding degradation, which also reduces ASR performance.

Conclusion:

  • Improved accessibility for reCAPTCHA
  • Provide transcriptions for non-transcribed audio

Discussion:
Q: How will the bad guys respond to this new technique?
A: Will be collecting data as it runs, detect weak bits and remove them from the system. Should be possible to stay ahead of the bad guys (by having more complete data). Different radio show sources will provide different background patterns which would need to be segmented in the bad guy’s training data, as well.
Q: Radio shows pushed the development of “widely understandable” accents. Does this make them particularly vulnerable to computer attack?
A: Not clear this will be a problem, as there were various accents encouraged in the shows, among other reasons.
Q: What about language barriers?
A: Eventually include audio sources from other languages, perhaps chosen by location or menu selection.

Q: How do you plan to clean the data for spelling, punctuation, etc.?
A: Currently, ignore case and punctuation (for comparison).
Also planning to deploy a dictionary for cleanup/comparison.
Q: Can users poison the system?
A: There are statistical tests for ultimate acceptance of a given transcription, as well as other techniques.
Q: What about deaf-blind users? What techniques for computer use are available?
A: ASCII output/keyboard input.
Q: Would it be possible to use a haptic device with a waveform?
Q: To deal with dyslexia, perhaps combine the audio with a visual representation?
A: That runs the risk of giving the computer more leverage as well, and there are no transcriptions of the audio to use to generate the visual representation.

07-23-08

posted by aleecia

Talk: Research Directions for Network Intrusion Recovery

Authors: Michael Locasto, Matthew Burnside, Darrell Bethea

The authors realized these areas affect more people than just themselves and would like feedback on these topics and research. Network intrusion is underappreciated on the recovery side, which is seen as boring system administration work.

Focus: usable systems for intrusion response. One benefit is an incident archive. Orgs and people have a disincentive to share incidents (bad PR), so there is no archive for researchers to see which problems to focus on. Network intrusion recovery is a difficult area in which to research, design, and create usable security mechanisms.

Started logging incidents: March 07, Dec 07, March 08. Talking about Dec 07; the rest are in the paper.

The graphics research group in CS got 4 new machines with NVIDIA cards and unofficial drivers. IT staff installed the non-standard drivers and all was good. 12 months later, the machines crash on 12/6/07. Added to the ticketing system; IT staff rebooted them and everything seemed fine, then they crashed again on Monday 12/10. Finals week, so diagnosis didn't start until 12/13. There are two rdist masters, and they start to crash as well on Monday 12/10. Recent kernel upgrade: roll it back. Crash again on the 13th and 14th. On the 17th, they compile the kernel rather than just apply the binary. The make failed because it could not create directories whose names are just numbers. Sounds like a rootkit, which might be intercepting file IDs. This is the first time we think it might be a security issue. Booting from CD shows that common utilities were replaced. Every machine managed with rdist — 200 machines! — has been compromised. And then the staff goes home for the holidays at the end of the week. Plus, on Friday the 21st half the staff is leaving for new jobs. They switched the OS to a different Linux flavor, changed everyone's password, and sent text messages to everyone to go out-of-band.

Lessons learned

There was no recovery agenda. Multiple conflicting points of view. Master's students run much of the show; no one is there long term. Decisions are informal and qualitative: why switch from RedHat? The swaying argument was that the person doing the install was "comfortable with the package management system." Why is that the right factor for security-related decisions? But the RedHat advocates had moved on from the group, so now people wanted to install what they knew. How do you create and update a plan in the face of so much churn? Reviewing once a year isn't going to be enough. How do we do this in a usable and efficient way?

Human memory is pretty bad. People involved in multiple incidents confused what happened when. There wasn't clear record keeping. IDS systems don't work: the rootkit was only noticed because it conflicted with the unofficial video drivers and the machines crashed. In another incident, an NFS mount failed. Even when Snort is turned on, who is going to look at 500 messages a day? The infrastructure is weak, and the human-level issues complicate things even further.

Tension about forensics: do you keep a machine up or take it down?

Staff and the ISP might want to take the machine down; you don't want a reputation for spreading a worm. But you might want to keep it up to figure out what's going wrong and to be able to fix it. Users want to stop the threat to their privacy, but if it's a critical machine — or during finals — it may not be possible to take it down. Eight months later there are still machines vulnerable to the same attacks.

Research directions

  • Not just technical but human problems. First approach, bulleted list of what could be possible. But doesn’t get across interactions. Used Tufte as a starting point for visualizing a “decision surface” to help plan out activities and see where complexities lie.
  • Predict latent vulnerabilities based on what you've already learned.
  • Recording infrastructure with "recovery trees"; figure out how to integrate it with current tasks. Need a system woven into the infrastructure.
  • Technical comparisons of alternatives: NLP on release notes, query bug databases, etc.

Conclusion

Community should focus on creating mechanisms that deal with recovery as a system of both humans & computers

Q&A: recovery trees could help with things like where are the LDAP servers and what happens if they fail. A: Need to know how things work now, and that’s hard with 25 years and no notes. You need a system that can figure out where things are.

Q: interesting when stuff breaks, we giggle when grandma says “my computer doesn’t work it must be a virus.” How many times does stuff break that isn’t security? A: Don’t know, probably most are not security. You dig when you find a symptom, maybe network is slow.

Q: when building a db of incidents, two problems: kind of problems people have may be so different they can’t find anything useful; organizations aren’t highly willing to share. A: People do experience the same sorts of things so there is value to compare notes. Also value for research, especially as people bring new tools into play, can evaluate them. Second part: have to get friendly with sysadmins. They’re willing to share them, you have to talk to IT directly.
Q: Incident response varies by inside or outside threat. Any data on percentages? In your case, was it inside or out? A: We don't know; we suspect outside. We don't have data. Even defining what an insider is gets hard. See the Verizon report: http://securityblog.verizonbusiness.com/2008/06/10/2008-data-breach-investigations-report/

Q: was the driver the threat? A: no, just the canary that showed the rootkit.

07-23-08

posted by harraton

SOAPS: Challenges in Universally Usable Privacy and Security

Presented by Harry Hochheiser.

User diversity (young, old, varying motor skills), technology diversity, and context of use (home or office environment, physical factors, social factors) impact the way that people can interact with systems.

Security and privacy mechanisms require users to “jump through hoops” to prove themselves (or pay attention to things they’d rather disregard).

  • Additional information (security indicators)
  • Additional tasks (email encryption)
  • Harder tasks (passwords, CAPTCHA)

These mechanisms raise accessibility barriers.

Anti-phishing tools

  • These tools depend on site content and cues available in the browser, elements that may be inaccessible in screen-reading software.
  • Features of the tools may be hard for seniors or the visually impaired to understand.

Passwords

  • Remembering passwords and managing multiple accounts may be difficult for people with cognitive or physical disabilities

CAPTCHAS

  • Visual CAPTCHAS and Audio CAPTCHAS may be difficult for people who are visually impaired, or in loud environments.

Several tools exist to check for accessibility, but the tools all give different results from each other. There is a lack of really good tools to help developers check. The tools themselves are not enough; screen-reading software should also be used, or the developer should check by turning off the screen and not using a mouse to see whether they can still navigate.

Possible approaches for universally usable privacy and security:

  • User diversity
    – Providing alternative forms of content (cons: may curb effectiveness, incur high development and maintenance costs)
    – Development of a single system that is accessible by diversified populations.
  • Gaps in user knowledge
    – Development of easily understandable vocabulary and icons
    – Transparent system actions
    – Better training
  • Technology diversity
    – Consideration for small displays
    – Consideration for small input devices

(Audience: Universally Usable designs benefit _everyone_. )

Running user studies: try diverse user groups to find out how and why people fall for phishing attacks.