Don Norman: When Security Gets in the Way

An excerpt from an essay Don Norman wrote after attending SOUPS and the NAS workshops this summer…

I recently attended two conferences on Usability, Security, and Privacy. The first, SOUPS (Symposium on Usable Privacy and Security), was held on the Google campus in Mountain View, California, the second at the National Academies building in Washington, DC. Google is a semi-restricted campus. People can freely wander about the campus, but most buildings are locked and openable only with the proper badge. Security guards were visibly present: polite and helpful, but always watching. Our meetings were held in a public auditorium that did not require authorization for entrance. But the room was in a secure building, and the toilets were within the secure space. How did the world’s security experts handle the situation? With a brick. The side door of the auditorium that led to the secure part of the building and the toilets was propped open with a brick. So much for key access, badges, and security guards.


How Users Use Access Control

What is access control? It's a specification of policy: who can do what to whom.

Systems that use named groups allow for a level of indirection. Users don't need to know the exact contents of a group, just its properties.

Access control is hard to use! People avoid it and try to get around it. XP's interface is complex. But first we need to understand: what controls do users really want? Users like to share data using email.

We want to know how much control users actually want to have. We looked at the access control lists for servers that had been run for several years. We wanted to know what sorts of things users were willing to expend effort to set.

Looked at group memberships in two types of systems: an administrator-managed system (Unix, Windows) and a user-managed system (mailing lists). Also looked at access control lists shared on DocuShare. DocuShare is a content management system with discretionary access control. All files have an owner.

What access control work do people do?

When users make their own groups there are a lot more groups. Users are also involved in quite a few mailing lists. Only 13.4% of users owned groups. Admin-created groups were clearly organized by intent. User-created groups showed an ineffective translation between intent and effect, with many misspellings.

5.2% of objects had their ACL explicitly modified; the remainder were inherited. Users were more likely to change ACLs on folders than on files. People more often changed who had access than the type of access they had. Many of the documents they saw were public. This could be intentional, as this is a web interface for sharing.

Users only occasionally set access controls; they primarily relied on inheritance. When they do assign controls, the controls are surprisingly complex.

Implications include simplifying the access control system. Can we remove the deny? Simplify the inheritance model for changes. Limit the types of permissions that can be granted. Simple tools could help users a lot.
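To make the suggested simplifications concrete, here is a minimal sketch (all names hypothetical, not from the paper) of an allow-only access control model with named groups and folder inheritance, the combination the implications point toward: no deny entries, and objects inherit their ACL from the parent folder unless one is set explicitly.

```python
# Hypothetical sketch of a simplified, allow-only ACL model with
# group indirection and folder inheritance. Not the DocuShare model;
# an illustration of the simplifications suggested above.

class ACL:
    def __init__(self, entries=None):
        # entries maps a user or group name to a set of permissions,
        # e.g. {"staff": {"read"}, "alice": {"read", "write"}}
        self.entries = entries or {}

class Node:
    def __init__(self, name, parent=None, acl=None):
        self.name = name
        self.parent = parent
        self.acl = acl  # None means "inherit from parent"

    def effective_acl(self):
        # Walk up until an explicit ACL is found: the common case in the
        # study, where only ~5% of objects had an explicitly set ACL.
        node = self
        while node is not None:
            if node.acl is not None:
                return node.acl
            node = node.parent
        return ACL()  # no ACL anywhere: no access

def allowed(node, user, groups, permission):
    # Allow-only check: with no "deny" entries, entry order never matters.
    acl = node.effective_acl()
    for principal in [user, *groups]:
        if permission in acl.entries.get(principal, set()):
            return True
    return False

root = Node("projects", acl=ACL({"staff": {"read"}}))
doc = Node("report.txt", parent=root)             # inherits from "projects"
print(allowed(doc, "alice", ["staff"], "read"))   # True, via group + inheritance
print(allowed(doc, "alice", ["staff"], "write"))  # False
```

Because there is no deny, the check is a simple union of what the user and their groups are granted, which is one reason removing deny makes the model easier for users to predict.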

Audience Questions

  • Since sharing through email is so common, why not let users share through email and have the system handle the complexities for them? You could do that.
  • The striking logical difference between email and centralized file sharing is transitivity and centralized vs. decentralized policy. Which is more error prone? Error proneness is not a matter of being centralized or decentralized. Email is a gifting model.
  • Could the problem be not that users don't understand the access-control model, but that the developers don't understand their own model? The developers aren't the ones doing the stupid things; it's the users. The developers of this system do understand it. Users just throw more permissions at the problem until the person gets access.


Paper presented by Rob Reeder.

Looked at the “secret” questions used by the top four webmail providers. The problems with secret questions are that 1) some random person could guess it, 2) your significant other could guess it, and 3) you could forget it. So why not just use your email account for verification? What happens when you can’t get back into your email account because you forgot your password or you no longer have the account?

Our vision for the future is to use many different backup authentication options. There are several different ways you could authenticate yourself, including SMS. But then what combination of mechanisms is necessary to reset a password?

How will users understand what they will need to do to get back into their account if they get locked out? Describe it in terms of an exam. Give the user a list of pieces of evidence they can provide, with the point value of each piece shown. The user needs to get a certain number of points to get back into their account. This seemed to require too much math, so we also created an interface that divided evidence into weak, medium, and strong.
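The Exam idea described above can be sketched in a few lines. The evidence types, point values, and threshold below are illustrative placeholders, not the values from the paper; the weak/medium/strong cut-offs are likewise hypothetical.

```python
# Hypothetical sketch of the "Exam" backup-authentication model:
# each piece of evidence is worth a stated number of points, and the
# account unlocks once the total reaches a threshold. All values are
# illustrative only, not taken from the paper.

EVIDENCE_POINTS = {
    "secret_question": 25,
    "sms_code": 50,
    "backup_email": 50,
    "trusted_friend": 25,
}
THRESHOLD = 75  # points needed to get back into the account

def can_recover(evidence_provided):
    score = sum(EVIDENCE_POINTS.get(e, 0) for e in evidence_provided)
    return score >= THRESHOLD

def strength_label(evidence):
    # The simplified weak/medium/strong interface from the paper,
    # with hypothetical cut-offs replacing explicit arithmetic.
    points = EVIDENCE_POINTS.get(evidence, 0)
    if points >= 50:
        return "strong"
    if points >= 25:
        return "medium"
    return "weak"

print(can_recover(["secret_question"]))              # False: 25 < 75
print(can_recover(["sms_code", "secret_question"]))  # True: 75 >= 75
print(strength_label("sms_code"))                    # strong
```

The point of the interface is that a user can answer "could Jane Doe get back in with just X?" by reading the table, without the provider's policy being a black box.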

Ran a lab study of 18 participants aged 30–48. Showed the users the current LiveID interface, a shortened Exam interface, and the full Exam interface. Participants were asked whether “Jane Doe” could get back into her account if she provided a certain piece of evidence.

Users had trouble determining what would be needed to get back into their account with the LiveID interface. Users were much better at determining what would be needed using the Exam interface. Users were also able to accurately determine what was required given the long Exam interface, with little difference from the short Exam interface. The Exam interfaces also performed better than the Evidence interface, where users see only the types of evidence needed.

Audience Questions:

  • Why does it matter if people do not understand what LiveID is doing? In this study we are measuring comprehension. The reason it matters is that if someone misunderstands what they need to do, they are going to feel betrayed.
  • Do you think anyone looks at that screen before they lose their password? They would look at it when they set up their account. Many users may be happy with the defaults, but we think power users want to know this.
  • One open question is what would have happened on the LiveID screen if you had had a sentence saying “you only need one of these methods.” You could do that, but it doesn’t scale to the six or ten things you might need to do, or to situations where multiple authentications could be used.
  • Do you have any evidence that when users can configure what they use to get back in, they do a better job than the default? We didn’t test this; we think it is an interesting question. Two key things: 1) how do you prod people to make things more secure at setup, when they have the least invested in the account? 2) how do we help them make things more secure?
  • You assume that users actually want to use this flexibility; did you ask your participants if they thought they would find it useful? We didn’t ask the users. People love things until they actually have to use them and figure out they are broken. We don’t really have the data, because users can’t make an informed choice at the moment.


A “Nutrition Label” for Privacy

Presented by Patrick Gage Kelley

Privacy policies are difficult to read.  We examined the warning science and labeling literature (nutrition, energy) to guide our work in designing a new privacy label.  The FTC commissioned a study to design a label for financial privacy.

First iteration: Text-based label with category boxes, a list view.

Second iteration: Grid-based visualization to allow users to find intersections of information. Simplified symbols from 11 to 5 and added color. Worked to convey “choice” to readers.

Conducted 5 focus groups (7–11 participants each) to learn how people understood elements of the label, and to compare labels by examining how people choose between two companies with different elements highlighted in the label. Asked questions to determine if users could find information using the labels.

Conducted a laboratory study (n = 30) to compare the label to natural language policies.

Results: The label matched the performance of natural language policies, or surpassed it in accuracy for several elements. The time to find information was significantly lower for the label compared to the natural language policy. Label likeability significantly beat the natural language policy. The label also beat the natural language policy on ability to compare.

Additional work:
Another focus group targeted an older population. The older population understood the concepts of opt-in and opt-out, which younger people have a harder time understanding.

Next steps:
Large online study, having people compare the label to natural language policies.

Implementing the label in


Ubiquitous Systems and the Family: Thoughts about the Networked Home

Paper presented by Linda Little.

In this research the authors tried to look at the data very broadly. Linda told us that she intends to focus heavily on the methodology which she thinks will be very helpful to this audience.

Each of us carries around many different devices in our daily lives. If someone else starts using your device, do you want it to still make decisions using your personal settings?

An important part of the family unit is how they interact with each other. If we design and create products for families we need to understand that not all families are functional. Each family works differently and has different needs and boundaries.

If we think about the vision of the future, how do we portray it? We recruited people from different backgrounds and asked them about four scenarios we had developed. We had professional actors act out the scenarios. The intention was that the people seeing the scenarios would engage in serious discussion of them. The scenarios were related to everyday tasks, including voting and shopping. We drew participants from all parts of the population and allocated them into groups based on technical background, dividing groups by technical ability and then by gender. This was done because technical males tend to dominate discussions, and we wanted to hear from everyone, even older, non-technical females. There were 325 participants, 180 of them male.

The networked home of the future is supposed to respond to the wants and needs of the people in it. The people need to be able to set preferences. There are trust and privacy issues in the future home. We discussed these with participants.

Linda showed an example scenario video for shopping. The futuristic shopping cart helps the woman know what she has at home, what she wants to buy, upcoming birthdays, and where to find things in the store. At first the participants said “wow, I want one of these!” Then, once the discussion got started, participants began to worry about things like the complexity of the device and who would control it. The major themes were: 1) Is it usable? 2) Who controls it? 3) Who sees it? 4) Who benefits? 5) Who takes responsibility?

Audience Questions

  • The videos are great, can you upload them to YouTube? Yes.
  • Did participants talk about what was currently in the home? Yes. They mentioned how supermarkets store data now. They also mentioned that they don’t go into their son’s home.
  • How did you run the focus groups; what did you use as prompting questions? We had 24 focus groups, though some were not well attended. We used a very open methodology and tried not to introduce bias. We did bring the conversation back if it wandered too far.
  • Putting together these videos was expensive; how much of an advantage were they? We found that videos were an easier way to elicit information. People related to them better than to a written script.
  • Do you think taking a purely qualitative approach is the best one to use? We already know that users think in terms of user, data, and purpose. After we got data from the focus groups, we put it into a questionnaire.
  • Did all the focus groups get all the scenarios? Yes.
  • Did you design the scenarios to cause discussion on specific question areas such as privacy or security? The scenarios were designed to encourage discussion on trust and privacy. The groups did discuss other information.


Challenges in Supporting End-User Privacy and Security Management with Social Navigation


The author presented two social navigation systems intended to assist users with privacy and security decisions by showing them the solutions others used. He then discussed the various issues that arose from using these systems.

Audience questions:

  • You had a small number of users, so there were a very small number of experts? Other work I have done suggests that experts are not well appreciated. People trust the community at large more, assuming that its motives are more pure. If experts are to be used in the system, they may need to state their credentials to explain why they should be considered experts.
  • How did you define your experts? We leveraged previous work that said that experts have both breadth and depth. We selected users who had lots of experience with these systems and had both breadth and depth.
  • Could other users see who had said that something had helped them? This raises the issue of anonymous vs. identified information. Users tend to latch on to information they agree with and ignore things they disagree with. For example, people latch on to movie critics they tend to agree with.


School of Phish

Presented by Ponnurangam Kumaraguru (PK) from the CUPS lab at CMU.

Phishing attacks work: in 2005, 73 million adults received more than 50 phishing attacks. There are many different strategies for dealing with phishing attacks: 1) eliminate the threat, 2) warn users about the threat, 3) educate users about phishing attacks. The speaker’s focus is on educating users.

The problem is that users are hard to train. There are many existing training materials, but they really could be better. In prior work the CUPS lab presented PhishGuru, a web comic that trains users how not to fall for phish. What is novel about PK’s work on PhishGuru is that it makes use of a “teachable moment,” a time when the user is ready to learn about phishing. PK found that a user’s teachable moment occurs right after they have fallen for a phishing attack.

In this study PK is evaluating retention after several weeks. He also evaluated retention in relation to the number of training messages users were given. Subjects were solicited from CMU faculty, students, and staff. Those who signed up for the study were sent simulated phishing emails and legitimate emails. All help desks at the university were notified about the study so they would not proactively block the emails or send a bulk notification.

Users were split into three groups: 1) control, no training material, 2) one training message, 3) two training messages. Users were trained by sending them a fake phishing email which took them to a fake login page. If they provided information, the user was shown the training material. Theoretically, users who don’t click on links in phishing emails don’t need to be trained not to click on phishing emails.

PhishGuru broadly produced a 50% reduction in users who clicked on phishing links. Participants who saw the comic were significantly less likely to enter login information on a phishing page. This was true even after 28 days. Those who received two training emails were even less likely to click on the phishing links. Additionally, users did not change their behavior towards legitimate emails.

Students were the most vulnerable demographic among students, faculty, and staff.

Audience questions:

  • How do you think the results of your study generalize outside of CMU? Our population is definitely tech savvy. People who are between 18 and 25 are most vulnerable, even after training. People outside of CMU could benefit from this work. I hope it is generalizable, but it may only generalize to tech-savvy users.
  • Did you ask users if they had fallen for other phishing attacks? There are lots of people who clicked on the link but did not give information. We don’t know if they fell for other attacks.
  • Most people who clicked on the link entered a password? Yes, this is consistent with all the other studies I have done. If a user clicks on a link, they are likely to enter the information. We train users not to click on links, because if they do they will likely give their information.
  • The legitimate email lacked images and looked like plain text; it could have been an image and could have hidden the URL? In this study we found that if you create a good spear-phishing email, people will fall for it.
  • Doesn’t that say that training users not to fall for phishing is not a good way to deal with phishing? Training alone is not going to solve the problem. Training is the last step in the lifecycle of solving the problem.
  • If you tell the security community that training works, do you give them a way out so they don’t need to implement good security?
  • In the US most banks will cover all the expenses if someone falls for phish, so why would any user take time to go through this training? There are other repercussions beyond just the monetary issues.
  • Users may spend more time on the training than they save in money?


More blog posts at

There are more SOUPS 2009 blog posts available at


Thinking Evil Tutorial Part 2

The Think Evil tutorial (slides) talks about how attackers and defenders react to each other.


When security people want to measure the network

The speaker’s group built a system called Netalyzr, which tests “your Internet connection for signs of trouble.” The application tests for many different things to determine if there is anything sitting between the user and the Internet. The tests are specifically intended to push boundaries and send back inconsistent responses, to gather information on what is sitting on the connection. Some of the things tested for are:

  • Tests for connectivity of different protocols
  • Deliberately violates the protocols in an attempt to cause a malfunction
  • Tests for caches by pulling a changing image twice
  • Tests for lack of connectivity to specific sites to determine if Malware has changed anything
  • Tests for connectivity to Windows Update

The list of websites that Netalyzr checks connectivity to was generated by a set of security researchers “thinking evil.” Sites like IM chat clients and search pages may be proxied to capture passwords. The tool has turned up some interesting things. For example, one ISP redirects Google’s web page to a proxy.

Netalyzr needs some usability work in how it explains some of the results to normal users. Things such as buffering in conjunction with BitTorrent and Skype can result in latencies that can confuse end users.

Security in my Everyday life

The speaker spent this section talking about the complex set of financial protocols he uses for his everyday life.

(Blogger note: Check out the Personal Data Privacy blog for tips on how to do security for normal people.)


Someone please fix passwords! I don’t like remembering them. I don’t like RSA keys. I love SSH, but typing the password into it is dangerous, because if someone has compromised the server they now have my password. As a result I always use public key authentication. I also use agent key forwarding, even though I know it is horribly insecure for similar reasons.

The speaker stores his passwords in his wallet, because his wallet is almost never stolen and he is not too concerned about losing it. An audience member also comments that the passwords are probably the least valuable thing in your wallet if it is lost.

Credit Cards

The speaker is not too concerned about credit cards, and he uses them for most of his purchases. He is not concerned because he is not the one who takes the damage.

An audience member commented that in Europe the laws are different, and the burden and risk are on the user. In the UK there is a law that gives you protection if you use a credit card online, but if chip and PIN is used then the consumer is on the line: if a charge goes through the chip and PIN network, the burden is on the user to prove it was fraudulent. However, chip and PIN cards also have a magnetic stripe for use when no chip and PIN system is available. An audience member said they had a chip and PIN card cloned and used in another country through the magnetic stripe, and in that case they were not considered liable. The speaker commented that if he were forced to use such a card, where the damages and responsibility fall on the user, he would either 1) always pay cash or 2) put it in the microwave.

Debit Cards

The speaker is very concerned about debit cards and is very selective about where he uses one. He also always checks ATMs for any sign of tampering. This is because even though he may not be liable eventually, his money is at stake initially, which is a strong motivator.

Online Banking

The speaker doesn’t do online banking. All his bills are paid by mail, because even though mail is not overly secure, attacking it is an O(n) attack that requires physical access to each letter. He sometimes pays by phone with a credit card.

Audience Discussion

There is general audience disagreement over whether the use of checks is more usable than using a credit card. The speaker argues that his use of checks is the result of a cost-benefit analysis of the security risks and implementation costs; he is deliberately sacrificing usability in this case to gain security.

We are here because security is difficult and because it is not usable. The speaker would like to do banking online, but he needs a secure channel where he can personally verify every single transaction, because there is always a non-zero chance that the host is compromised, especially on public terminals. He wants a system that approves a transaction only when the user expressly pushes a button to approve it. An audience member comments that this is very difficult from a usability standpoint because you have to install software on the user’s machine.


Think Evil Tutorial Part 1

The Think Evil tutorial (slides) talks about how attackers and defenders react to each other.


As a first example we looked at casino cheating. Casinos have an interesting problem because 1) money is involved, 2) there is no hope of negotiating with the attackers, and 3) determining the difference between a good and a bad player is hard.

Card counting works and puts the odds in the player’s favor, but it also makes the pattern of play more regular. This can be detected by watching a player’s pattern over time. Anti-virus does something similar: it recognizes the patterns of known viruses, allowing it to block bad things. Similarly, host-based IDS recognizes good things and allows them. However, to do either you need to be able to differentiate “bad” from “good”.
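To see where the regularity comes from, here is a toy illustration using the standard Hi-Lo counting scheme (a well-known scheme, not something presented in the tutorial): the counter raises bets when the running count is favorable, and it is that count-following betting pattern, not the counting itself, that surveillance can detect.

```python
# Illustration (standard Hi-Lo scheme, not from the talk) of why card
# counting creates a detectable pattern: the counter's bet size tracks
# the running count, which a casino can correlate with the cards dealt.

def hi_lo_value(rank):
    if rank in ("2", "3", "4", "5", "6"):
        return +1   # low cards leaving the shoe help the player
    if rank in ("10", "J", "Q", "K", "A"):
        return -1   # high cards leaving the shoe hurt the player
    return 0        # 7, 8, 9 are neutral

def running_count(cards_seen):
    return sum(hi_lo_value(c) for c in cards_seen)

def bet(count, base=10):
    # Toy betting policy: raise the bet only when the count is favorable.
    # This count-following pattern is exactly what surveillance looks for.
    return base * max(1, count)

seen = ["2", "5", "K", "6", "3", "9"]
print(running_count(seen))       # 3
print(bet(running_count(seen)))  # 30
```

Reshuffling more often and adding decks, discussed next, work by keeping the running count too dilute to act on.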

Casinos have several defenses to even the odds back out. Two examples are reshuffling more often and using more decks, both of which make it harder for card counters to get good enough odds. Windows XP used to be very open, until someone wrote the Blaster worm; then Microsoft released Service Pack 2, which turned all services off by default.

Casinos also sometimes just do nothing; many card counters are not good enough to bother about. In fact, card counters who are bad at card counting are a good thing, since they think they can win, which is exactly what casinos love. Security sometimes takes a similar position: if the cost of defending against something is more than the value of the thing being defended, it is not worth it.

The MIT card-counting ring made the observation that casinos look for individual players, not groups, so they did card counting in groups. This works well because it attacks the pattern-matching strategy. Mimicry attacks are where the attacker makes their behavior look like known good behavior. The attacker can also use evasion: where the defender is looking for known bad behavior, the attacker makes their behavior look different from the known bad. The goal of defense is complete coverage of all bad behavior. This is why anti-virus companies are shifting towards exploit identification rather than signature identification: it is more general. MIT also made use of the fact that their attack was novel; it takes time for a security program to adapt to a new type of attack.

Roulette has an attack called “pastposting,” where you change your bet after the ball has already landed. An anti-pastposting roulette wheel was invented that raises an alarm if the bets are changed. To beat the system, players can mimic drunken players and continuously trigger the alarm until the dealer turns it off. Attackers can use malicious false positives to cause defenders to turn off alarms or start ignoring them. Reactions have a cost; the attacker may simply want to cost the defenders time, money, or annoyance.

Even worse, the dealer could be corrupt. If the attackers are friends with the dealer, the dealer can do many things to make the players more “lucky.” Insider attacks are a security nightmare, because the insider must be trusted and has insider knowledge of the system. Insiders are also people, with all sorts of human weaknesses; there was a study where researchers traded candy for passwords (note: those passwords were never verified). Casino cameras are there not just to watch the customers but also to watch the dealers.

Some casinos are experimenting with RFID tags in the chips. This lets them track the chips around the casino and identify players who are winning or losing.

You can win at roulette because it is not a truly random process; Thorp also commented on this. If bets are allowed after the ball is thrown, then you can use the phase and velocity of the ball and the wheel to predict where the ball will land. This works 40% of the time. Someone else created a cell phone app that did this. In response, the casinos had this made illegal. Changing the attacker’s cost-benefit analysis can also be used as a defense.
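The prediction described above can be sketched with elementary kinematics. Everything below is illustrative: the constant-deceleration model, the numbers, and the sign handling are hypothetical simplifications, not the physics of any real wheel or of Thorp's work.

```python
import math

# Toy illustration of roulette prediction: given the ball's angular
# velocity and deceleration measured after the throw, estimate which
# pocket it reaches when it slows to its drop speed. All numbers and
# the constant-deceleration model are illustrative only.

def predict_pocket(ball_omega, ball_decel, drop_omega,
                   wheel_omega, pockets=37):
    # Time until the ball decelerates to the speed at which it falls
    # off the rim: omega(t) = ball_omega - ball_decel * t
    t = (ball_omega - drop_omega) / ball_decel
    # Angle covered by the ball (constant deceleration) and the wheel
    # (assumed constant speed) in that time, in radians.
    ball_angle = ball_omega * t - 0.5 * ball_decel * t * t
    wheel_angle = wheel_omega * t
    # The ball's position relative to the wheel picks the pocket (ball
    # and wheel spin in opposite directions; signs folded in here).
    relative = (ball_angle + wheel_angle) % (2 * math.pi)
    return int(relative / (2 * math.pi) * pockets)

pocket = predict_pocket(ball_omega=12.0, ball_decel=0.8,
                        drop_omega=4.0, wheel_omega=1.0)
print(0 <= pocket < 37)  # True: always maps into one of the 37 pockets
```

Even a crude model like this only has to beat the 1-in-37 house baseline by a modest margin to be profitable, which is why the reported 40% hit rate is so devastating.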


People are self-interested and typically act in their own self-interest, if they understand it. Each attacker has their own self-interest, and those interests can be very different.

You should always model an adversary as someone who is creative and innovative. Don’t underestimate your opponent. Security researchers get into a rat hole on tactics too early: security experts spend too much time securing the door, and don’t consider that the attacker wants something in the room, is uninterested in attacking the door, and may just break a window.