SOUPS 2005

July 6-8, 2005
Pittsburgh, PA

Symposium On Usable Privacy and Security

PROGRAM

All events will be held in the Carnegie Mellon Collaborative Innovation Center (CIC) distance learning facility (1st floor), unless otherwise noted. Lunches will be held in the Singleton Room in Roberts Hall. The poster session will be held in the Newell-Simon Hall Atrium.

Wednesday, July 6

12:30 - 5 pm: Registration

1:30 - 5 pm: Tutorials

  • User Interface Design, Prototyping, and Evaluation (held in CIC seminar room)
  • Introduction to Computer Security and Privacy

Thursday, July 7

8 - 9 am: Breakfast and registration

9 am - 10:30 am: Opening session

10:30 - 11 am: Break

11 am - 12:30 pm: Refereed paper session: Usable Security, Chair: Mary Ellen Zurko (IBM Software Group)

12:30 - 1:30 pm: Lunch (Singleton Room, Roberts Hall)

1:30 - 3:30 pm: Refereed paper session: Usable Privacy, Chair: John Karat (IBM T. J. Watson Research Center)

3:45 - 6 pm: Poster session and reception (Newell-Simon Hall Atrium)

7 - 9:45 pm: Dinner at the Church Brew Works

Trolleys to dinner departing at 6:15 and 6:45 pm; trolleys back to the hotel departing at 9:15 and 9:45 pm

Friday, July 8

8 - 9 am: Breakfast and registration

9 - 10:15 am: Panel - Usability of Security Administration vs. Usability of End-user Security

10:15 - 10:30 am: Break

10:30 am - noon: Refereed paper session: Visualizing Security, Chair: Diana Smetters (Palo Alto Research Center)

Noon - 1 pm: Lunch (Singleton Room, Roberts Hall)

1 - 2:15 pm: Panel - When User Studies Attack: Evaluating Security By Intentionally Attacking Users

2:30 - 3:50 pm: Discussion sessions

  • Usability and Acceptance of Biometrics (conference room 2101, second floor)
  • Valuation and Context (conference room 2201, second floor)
  • When User Studies Attack: Evaluating Security By Intentionally Attacking Users (distance learning facility, first floor)
  • Usable Interfaces for Anonymous Communication (seminar room, first floor)

4 - 5 pm: Closing session

TUTORIALS

User Interface Design, Prototyping, and Evaluation

Instructor: Jason I. Hong, Carnegie Mellon University

This tutorial will cover the key concepts and techniques in user interface design, prototyping, and evaluation. This course is meant as an introduction for computer security researchers, practitioners, and students who have little or no experience in this area. Topics covered will include conducting field studies, conceptual models and metaphors, low-fidelity prototyping, user testing, and heuristic evaluation.

[tutorial notes]

Bio: Jason I. Hong is an assistant professor in the Human-Computer Interaction Institute, School of Computer Science, at Carnegie Mellon University. His current research interests include ubiquitous computing, privacy, rapid prototyping tools, and multimodal interaction. Jason received his PhD from the Computer Science department at the University of California, Berkeley.

Introduction to Computer Security and Privacy

Instructor: Simson Garfinkel, MIT

This tutorial provides a primer on security and privacy for those with a background in usability. It will cover the fundamentals of computer security and privacy: What is a security policy? What is a privacy policy? Who writes these policies and what do they include? How is security implemented in a modern computer system? What are the roles and limitations of encryption? Special attention will be paid to issues surrounding log files, data sanitization, public key cryptography, and work to date that has attempted to align security and usability.

[1st hour tutorial notes]
[2nd hour tutorial notes]
[3rd hour tutorial notes]

Bio: Simson Garfinkel has worked and published in the fields of security and privacy for the past 18 years. His current research interests include computer forensics and the alignment of security and usability. Simson hopes to receive his PhD from the Massachusetts Institute of Technology on June 3rd.

INVITED TALK

My Dad's Computer, Microsoft, and the Future of Internet Security

Bill Cheswick, Lumeta

My Dad runs a standard Windows XP computer on recent hardware. Despite our frequent efforts, this machine is often infected with a great deal of malware. This is not my Dad's fault. Millions of people are routinely running dangerous software, and often don't understand the downside of a badly infected computer. In February 2001 Bill Gates committed Microsoft to improving the security of their software. There are indications that Microsoft is trying very hard to improve their security, and the recently released Service Pack 2 is a good step towards cleaning their Augean stables. How far have they gone, and what are the prospects for my Dad's computing environment and for improved security of corporate and government intranets? And how do Linux, Unix, and Macintoshes fit into all this?

Bio: Ches has been out and about in the Internet security field since the late 1980s. He is known for his early work on firewalls and proxies, and for the book he co-authored with Steve Bellovin and, more recently, Avi Rubin. In the summer of 2000, Ches helped spin off the Internet cartography work he did at Bell Labs with Hal Burch into a startup, Lumeta Corporation, which explores the extent and perimeter hosts of corporate and government intranets.

POSTERS

THURSDAY EVENING DINNER

Our Thursday evening dinner will be held at the Church Brew Works, a unique Pittsburgh restaurant housed in a restored church.

Attendees will have a choice of three dinner entrees. Please indicate your selection on the conference registration form.

  • Pasta Primavera - tossed with an array of vegetables such as spinach, carrots, zucchini, and peppers, lightly seasoned with olive oil, garlic, herbs, and Parmesan cheese.
  • Pine Nut Crusted Halibut - lightly seasoned and pan seared, served over exotic mushroom risotto with steamed asparagus, and topped with a basil emulsion.
  • Black Pepper Glazed Pork Chop - grilled center cut served with exotic mushroom risotto, steamed asparagus, and a balsamic reduction.

PANELS

Usability of Security Administration vs. Usability of End-user Security

Usable security, which has recently received increasing attention, is implicitly all about the end user who employs a computer system to accomplish business or personal goals unrelated to security. However, there is another aspect to usable security. Security administrators face a problem that is an order of magnitude more difficult: administering large-scale, complex enterprise systems, where an error could cost a fortune.

Is the notion of usable security the same for end users and security administrators? What are the differences in background, training, goals, constraints, and tools between administrators and end users? How do these differences affect the usability, and the perceived usability, of protection mechanisms and other security tools? Can approaches to improving security usability for end users be applied directly to the domain of security administration, and vice versa? In some modern systems, where users are largely responsible for administering their own security, where is the boundary between end users and administrators? Can it be defined precisely, or is it blurred?

Panelists:
Konstantin Beznosov, University of British Columbia (moderator)
Mary Ellen Zurko, IBM
Steve Chan, Lawrence Berkeley National Laboratory and School of Information Management and Systems at UC Berkeley
Greg Conti, United States Military Academy

[slides]

When User Studies Attack: Evaluating Security By Intentionally Attacking Users

Researchers and practitioners increasingly agree that security software can and should be evaluated by user studies. Unlike other software, however, security systems have an important scenario that is very hard to test: when the user is under attack by an intelligent, determined adversary. A variety of attack studies have appeared recently, ranging from password security ("give me your password for a candy bar") to secure email to phishing.

This panel will discuss the issues and problems involved in intentionally attacking users. How can users be motivated to defend their security in a study without putting undue emphasis on security? How should attack studies be designed so that they are faithful to the fact that in the real world, security is almost never the user's primary goal? What kinds of secrets should be attacked, and how far should the attack go in penetrating the information or exposing it? How can we balance ethical concerns for protecting subjects with the need to make realistic attacks that reveal just how secure they are?

Panelists:
Robert Miller, MIT (moderator) [slides]
Simson Garfinkel, MIT [slides]
Filippo Menczer, Indiana University [slides]
Robert Kraut, Carnegie Mellon University [slides]

DISCUSSION SESSIONS

Usability and Acceptance of Biometrics

Moderator: Andrew S. Patrick, NRC Canada

Early research on attitudes towards biometrics suggested that the public had serious concerns about privacy and misuse. People often associated biometric systems with law enforcement activities (e.g., fingerprinting) and worried that their biometric data could be lost, stolen, or misused in some way. They were also concerned that government authorities might use the biometric information in ways they did not approve of (e.g., by linking databases).

More recently, however, reports are coming in that the public is accepting, and perhaps demanding, biometric security systems. People seem to be quite willing to accept biometrics in some situations (e.g., "pay by touch," border and immigration control). The factor that most likely explains this discrepancy is context: the identity, place, time, and activity associated with using the biometric.

This discussion session will examine contextual factors to get a better understanding of when and how biometrics will be accepted. Opportunities for collaborative research in this area will be discussed, including a proposal for a series of international surveys of people's knowledge of, and attitudes towards, biometric security systems.

Valuation and Context

Moderators: Kimberly Perzel and Seth Proctor, Sun Microsystems

Humans factor value into the security tradeoffs and risk analyses that underlie their decisions. The values people recognize may include time, time to replace, cost, privacy, trust-building, and many other things. For example, when purchasing something from the web that they want quickly, users may provide information they would otherwise keep private. This discussion will focus on methods for capturing these value tradeoffs and for incorporating them into the security model. Without notions of value, it is harder to completely model human mechanisms like reputation and trust.

[slides]

When User Studies Attack: Evaluating Security By Intentionally Attacking Users

Moderators: Robert Miller, Simson Garfinkel, and Min Wu, MIT

Researchers and designers of security systems are increasingly coming to the understanding that neither security nor usability can be added in at the end of a design process, and that the usability of security software can and should be evaluated using user studies. Most user studies test routine tasks like logging in, sending encrypted mail, or recalling passwords, but security systems have another scenario that is vitally important to test: when the user is under attack by an intelligent, determined adversary. These kinds of studies are essential for measuring usability and security, but they are much harder to design and conduct.

This discussion session will consider the issues and problems surrounding studies that intentionally attack users. Questions to be considered include:

  • How can users be motivated to defend their security in a study without putting undue emphasis on security?
  • How should attack studies be designed so that they are faithful to the fact that in the real world, security is almost never the user's primary goal?
  • What kinds of secrets should be attacked, and how far should the attack go in penetrating the information or exposing it?
  • How long should the user be given to learn a security system before the first attack?
  • Should subsequent attacks be similar to the first, allowing the user to learn the attacker's pattern, or should they escalate, becoming more urgent or more threatening until the user finally cracks?
  • How can we balance ethical concerns for protecting subjects with the need to make realistic attacks that reveal just how secure they are?

Usable Interfaces for Anonymous Communication

Moderator: Roger Dingledine, The Free Haven Project

We propose to examine and discuss approaches to building an effective and usable interface for Tor, a decentralized network of computers on the Internet that increases privacy in Web browsing, instant messaging, and other applications. We estimate there are some 20,000 Tor users currently, routing their traffic through about 150 volunteer Tor servers on five continents. However, Tor's current user interface approach -- running as a daemon in the background -- does a poor job of communicating network status and security levels to the user. The Tor project, affiliated with the Electronic Frontier Foundation, is running a UI contest to develop a vision of how Tor can work in a user's everyday anonymous browsing experience. Some of the challenges we'll discuss include how to make alerts and error conditions visible on screen; how to let the user configure Tor to use certain paths or avoid certain paths; how to learn about the current state of a Tor connection, including which servers it uses; and how to find out whether (and which) applications are using Tor safely.

[slides]

 

SOUPS is sponsored by Carnegie Mellon CyLab.