Why Johnny Can’t Opt Out: A Usability Evaluation of Tools to Limit Online Behavioral Advertising

We recently published a CyLab tech report on our user study of privacy tools. The Wall Street Journal also published an article about the study.

Technical Report: CMU-CyLab-11-017

Title:  Why Johnny Can’t Opt Out: A Usability Evaluation of Tools to Limit Online Behavioral Advertising

Authors:        Pedro G. Leon, Blase Ur, Rebecca Balebako, Lorrie Faith Cranor, Richard Shay, and Yang Wang

Publication Date:       October 31, 2011


We present results of a 45-participant laboratory study investigating the usability of tools to limit online behavioral advertising (OBA). We tested nine tools, including tools that block access to advertising websites, tools that set cookies indicating a user’s preference to opt out of OBA, and privacy tools that are built directly into web browsers. We interviewed participants about OBA, observed their behavior as they installed and used a privacy tool, and recorded their perceptions and attitudes about that tool. We found serious usability flaws in all nine tools we examined. The online opt-out tools were challenging for users to understand and configure. Users tended to be unfamiliar with most advertising companies, and therefore were unable to make meaningful choices. Users liked the fact that the browsers we tested had built-in Do Not Track features, but were wary of whether advertising companies would respect this preference. Users struggled to install and configure blocking lists to make effective use of blocking tools. They often erroneously concluded that the tool they were using was blocking OBA when they had not properly configured it to do so.


I Know Where You Live: Analyzing Privacy Protection in Public Databases

New CUPS tech report. A shorter version of this paper will be presented at the CCS WPES workshop later this month.

Technical Report: CMU-CyLab-11-015

Title: I Know Where You Live: Analyzing Privacy Protection in Public Databases
Authors: Manya Sleeper, Divya Sharma, and Lorrie Faith Cranor
Publication Date: October 3, 2011


Policymakers struggle to determine the proper tradeoffs between data accessibility and data-subject privacy as public records move online. For example, Allegheny County, Pennsylvania recently eliminated the ability to search the county property assessment database using property owners’ names. We conducted a user study to determine whether this strategy provides effective privacy protection against a non-expert adversary. We found that removing search by name provides some increased privacy protection, because some users were unable to use other means to determine the address of an individual. However, this privacy protection is limited, and interface usability problems presented a comparable barrier. Our analysis suggests that if policymakers use removal of search by name as a privacy mechanism they should attempt to mitigate usability issues that can hinder legitimate use of public records databases.

Full Report: CMU-CyLab-11-015


Do Not Track List in the News

Professor Lorrie Cranor comments on the FTC’s proposed ‘Do Not Track List.’

View video.


Architecture Is Policy: The Legal and Social Impact of Technical Design Decisions

Today several members of the Electronic Frontier Foundation (EFF) Board visited CMU and participated in a panel discussion on the effect technical design decisions can have on our society. Below is the abstract for the panel and a summary of the discussions, points, and comments.

Watch the entire panel on YouTube.


Technology design can maximize or decimate our basic rights to free speech, privacy, property ownership, and creative thought.  Board members of the Electronic Frontier Foundation (EFF) discuss some good and bad design decisions through the years and the societal impact of those decisions.

The panel opened with comments from each of the panelists.

Cindy Cohn
EFF Legal Director, Moderator

Cindy pointed to the discussions EFF is having with Google about privacy issues concerning Google Books. Public libraries have dealt with privacy issues for a long time; specifically, they are concerned with the government’s ability to obtain information on what content people read. Libraries take measures to completely remove information about which books are being checked out and by whom. New technologies like Google Books need to consider privacy issues now, while the system is being implemented, not just when problems come up.

Dave Farber
Distinguished Career Professor of Computer Science and Public Policy,
School of Computer Science, Carnegie Mellon University

The design of a system greatly constrains the kind of security and privacy that can be built on top of it. Conversely, security and privacy requirements can greatly affect the design of a system if they are considered early enough in the design process. An example is the Multics system. The designers wanted an environment that protected privacy and security, but no hardware at the time could support it. They worked out the security and privacy issues before writing the code, and eventually the technology came together to produce an environment that could support them. Planning how to build security and privacy into a system before coding it yields a system capable of supporting them, instead of forcing you to work with a system that is inherently insecure.

In the early days of the internet many decisions were made without regard to policy or their impact on privacy and security. What would have happened if the creators of the internet had considered these policy issues?

Ed Felten

Professor of Computer Science and Public Affairs and Director, Center for Information Technology Policy, Princeton University

Ed talked about SSL certificates as an example of a technology whose developers were thinking about security from the beginning but still got it wrong. How do you know the webpage you are looking at really comes from who you think it does? The answer is SSL certificates. When your web browser contacts a secure (https) website, the website gives your browser a certificate. How does your browser know that the certificate is valid? It verifies the certificate using a Certificate Authority’s (CA) public key. How do you know the CA’s key is valid? Ideally, you would have a single Certificate Authority that everyone trusts to verify the entire transaction. Unfortunately, there is no such Certificate Authority. Instead, your web browser ships with a default list of roughly 7,200 trusted CAs that it believes to be legitimate. Any of these CAs can sign a certificate and your browser will simply trust it. Moreover, any one of these CAs can delegate its “god-like abilities” to any other authority it chooses. For example, the person in charge of the Chinese government CA can give anyone the ability to mount a man-in-the-middle attack on any secure communication. There is no doubt that people are doing creative things with this today. But fixing it is not easy.
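The trust model Ed describes is visible directly in software: a client loads a list of trusted root CAs (typically from the operating system), and a server certificate chaining to any one of them is accepted. A minimal sketch in Python, using the standard library’s `ssl` module as a stand-in for a browser:

```python
import ssl

# Load the platform's default trust store, as a browser does at startup.
context = ssl.create_default_context()

# The "default list" of trusted root CAs Ed mentions. Any one of these
# roots can vouch for any website on the internet.
roots = context.get_ca_certs()
print(f"{len(roots)} trusted root CAs loaded")

# A certificate presented by a server is accepted if it chains to ANY
# of these roots -- there is no per-site restriction on which CA may sign.
```

The exact count depends on the operating system’s certificate bundle; the point is that every entry in that list is equally and fully trusted.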

Lorrie Cranor
Associate Professor of Computer Science and Engineering and Public Policy, and Director of the CyLab Usable Privacy and Security Laboratory (CUPS), Carnegie Mellon University

Lorrie started out with a discussion of P3P, which she helped develop. About 10 years ago, the goal was to define a standard way for companies to communicate their privacy policies using an XML-based format. One envisioned use of P3P was that companies would post their privacy policies and browsers would interpret them and automatically negotiate terms. People also expected that P3P would allow browsers to display privacy policies to end users in a standard way. The only major browser to implement P3P was Internet Explorer (IE), and it used P3P only as a cookie-blocking mechanism. Basically, if your site set third-party cookies and had no P3P policy or a bad P3P policy, IE would block those cookies. So all the companies that delivered third-party advertisements quickly adopted “good” privacy policies as defined by Microsoft. As a result, Microsoft set the standard for “good” privacy policies, for third-party advertisers, across the internet.
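In practice, the P3P policy IE evaluated for cookie blocking was a “compact policy”: a set of short tokens sent in an HTTP response header alongside the cookie. An illustrative example (the token selection here is made up, not taken from any real site):

```
P3P: CP="NOI DSP COR OUR IND NAV"
```

Each token summarizes one aspect of the site’s full XML policy, such as the purposes for which data is collected and who receives it; IE compared these tokens against the user’s privacy setting when deciding whether to accept a third-party cookie.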

The CUPS Lab at CMU is trying to better understand the effects privacy policies can have on consumers’ behavior. To this end, we have built a site called PrivacyFinder, which allows people to see Google or Yahoo! search results annotated with privacy policy information. Our studies have shown that people will purchase from websites that cost a little bit more if those sites have good privacy policies. Depending on how you set up the architecture of a system, you can affect how people react to it.

John Buckman
EFF Board Chair, Serial Entrepreneur

John chose to talk about three different technologies he has worked on that had interesting security and privacy issues.

Lyris was an email discussion server. Its developers wanted to support the use of Lyris by a group of Chinese dissidents. To support this population, the developers created an anonymous message feature, which was made available to all Lyris users. The Chinese dissidents didn’t use the feature, but schools did. Teachers used it so that students could talk to each other without fear of retribution; for example, the feature allowed students in creative writing classes to freely discuss controversial writing.

Bookmooch is a book-swapping website. Privacy was an issue when the site was first designed. All the books you want to read and all the books you have given away are visible to everyone, but accounts are anonymous. The site uses the difference between privacy and anonymity to give people a strong trust network while still separating their online actions from their identity. The biggest user demographic is mothers, who appreciate both the detailed online accounts and the privacy, through anonymity, that the site gives them.

Magnatune is an internet record label. What should the rights be of the people who come and listen to the music? There is normally a contract when you open a CD or listen to music online, but if you violate the contract there are really no repercussions. Magnatune takes the position that if you have chosen to spend money on songs, you are probably an honest person. So there is a Creative Commons license on the music. Subscribers are asked not to share the music they purchase on BitTorrent; sharing Magnatune music with friends, on the other hand, is a good thing, since Magnatune would rather you shared the music locally and got people interested in the site. This policy seems to be working: none of the paid albums show up on BitTorrent.

Open for Questions

Question: How do we fix the issue with Certificate Authorities? (Directed at Ed)

Ed pointed out that people need to think about institutions and trust. You need to give someone the divine authority to approve Certificate Authorities. You also need more things done in the open instead of behind closed doors: if a Certificate Authority chooses to delegate all its rights to another authority, that delegation should be forced into the open. You need to think about how the technology works together with organizations, and about how the public can meaningfully understand the technology.

Lorrie added that even if you can address the underlying trust model issues, most user interfaces that surface certificate problems are very poor. Even if we made the warnings very trustworthy, the user just swats them away. What the user mostly cares about is whether anything has changed. For example, if I am going to eBay and suddenly eBay is being verified by someone else, that is a problem.

Ed added that the problem is that designers’ default solution is to ask the user or add another user preference. Neither is a real solution.

Question: Music used to work on an economy of scarcity. Now you can easily distribute music for nearly no cost. This is true for many electronic resources.

John pointed out that by suing the pirate economy, the RIAA has made the user interfaces of file-sharing technologies quite bad. Meanwhile, LastFM and Pandora have nice user experiences. An obvious way to combat music piracy in an era of electronic distribution is to come up with things that encourage legitimate usage. An older example is mobile phone bills: for a while, mobile phone bills would arrive scarily high, causing people to stop using mobile phones. The mobile phone industry fixed that, and now it is doing better. Music labels were working by suing offenders; now websites like Hulu are doing well.

Cindy added that an alternative way to think about the problem is as a licensing problem. The traditional licensing mechanism used by music distributors is to control all the copies of a product. Another solution is voluntary collective licensing. Many models already exist, and we should consider those existing models before creating new ones. Controlling all copies is not the only way.

Question: A current issue is that there are platforms, such as the iPhone, where anyone can write apps. What kind of privacy and security risks could this bring about?

John pointed out that the landscape for privacy and security on these platforms varies widely. Apple tracks everything, while other platforms have different privacy and security controls.

Dave added that the answer depends on what these apps run on and what the platform enables or doesn’t enable. The problem is an issue of how constrained an app is. Apple has this issue with apps that do things like read the phone number of the device they run on. If the underpinnings were built better, you could assume that apps were well behaved, but unfortunately this is not the case.

Ed added that mobile devices raise additional issues: they are (1) always on, (2) easily lost or stolen, and (3) aware of where they are. All of these add extra privacy and security concerns.

Question: Multics was designed with security from the beginning and is no longer used. The Internet was not designed with security in mind and everyone uses it. Can we take a lesson from this?

Dave pointed out that Multics was successful when and where it was used. It was a success, but the architecture of computing changed. The Internet is insecure and a success, but we are sitting on the edge of a cliff. What would it take to crash DNS? It is worrisome that we build more and more of our infrastructure on this thing, and no one is sure how to secure it.

Question:  What are other strategies for having social interaction where anonymity may not be an option?

John commented that he actually likes Facebook’s model. To see anything you need an account, and to see information about anyone you need them to friend you.

Cindy pointed out that Facebook started out with a trust-based model: it was closed to a single university, and in order to see someone’s information you had to be friended by them. Many of EFF’s concerns with Facebook stem from its shift away from a trust-based model toward a more Twitter-like open model.

Question: Are you doing anything with education? Are you working on any privacy and security curriculum?

Cindy started out by commenting that several of the board members are university professors, but creating educational curricula isn’t something EFF is working on much; EFF isn’t well positioned to do that. Instead, EFF is more interested in getting students involved, because today’s students are going to be the ones building these sorts of tools.

Question: I haven’t heard about de-anonymization yet today. Anonymity is good, but what if the data becomes no longer anonymous? This would be scary for sites like Bookmooch that are based on anonymity.

Cindy pointed out that EFF has been critical of Google over this. EFF has a site called Panopticlick, which shows visitors their “browser fingerprint”: how unique their browser configuration is. In a closed community there are different concerns; in some communities there isn’t an issue, because members just don’t want names immediately attached to their online behavior. De-anonymization breaks a lot of things.
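The fingerprinting idea behind Panopticlick can be sketched in a few lines: combine attributes the browser reveals to every site it visits and hash them. This is a toy illustration under assumed attribute names and values, not EFF’s actual method:

```python
import hashlib

# Hypothetical attributes a browser reveals on every visit.
attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080x24",
    "timezone": "UTC-5",
    "fonts": "Arial;Helvetica;Comic Sans MS",
}

# Canonicalize and hash: visitors with identical configurations collide,
# but even a handful of attributes makes most configurations unique.
canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
fingerprint = hashlib.sha256(canonical.encode()).hexdigest()
print(fingerprint[:16])
```

No name or account is involved, which is why such fingerprints matter for de-anonymization: the same configuration seen on two sites links the visits.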

Lorrie commented on how important it is to be realistic when claiming that things are anonymous. For example, Bookmooch users may have trouble identifying each other, but someone with the whole database might be able to do much more.

John pointed out that Bookmooch is very transparent; this is obvious from the first setup. John struggles with this on Magnatune: at one point someone crawled the site and came up with a figure for how much each artist was making from music sales, which made musicians very unhappy.

Dave spent a year in Washington, DC working with policymakers. According to him, policymakers have minimal technological knowledge, yet despite how little they know about technology, they are going to make the laws that decide the future of technology policy. The policy they establish impacts us continuously. Technologically skilled people would do themselves and the nation a service by spending a couple of years in Washington educating them.

Question: The earlier question about education was about certification. The real issue is a lack of educational materials, and EFF does have some leverage there. The real problem is educating everyone else, not geeks like us. I would pick post-secondary education as the primary target.

Cindy pointed out that EFF did develop a curriculum for copyright law. California created a requirement that K-12 students receive education about copyright. Music industry groups and other organizations with interests in protecting copyrighted works immediately created curricula that were somewhat biased and ignored important topics such as fair use. EFF created a counter-curriculum. EFF isn’t quite sure how widely its curriculum is used, but there is a network of teachers across the nation that seems to use it.

One of the most frequent complaints from kids is that they call and say, “my mom read my Facebook page,” and we tell them, “that is because you friended her.”


Welcome new CUPS students

Welcome to our new CUPS lab PhD students Rich Shay and Saranga Komanduri!


Technology transfer of successful usable security research into product

About 15 SOUPS attendees attended this discussion session (thanks to all of you!). While we spent plenty of time on the challenges of technology transfer, I’m recording the useful practices and forward-looking ideas on the topic to help inspire others. I apologize for not citing names and organizations; feel free to self-declare the parts that were “yours”! Also, feel free to add anything I missed.

  • Proof of usability enables tech transfer. Some people who transfer research ideas into deployment use personas, use-case-based modeling, and lots of usability testing. Researchers who do some of this themselves and publish the results may have an easier time making the transition.
  • Make the “real problems” known so academics can use them in their own work and in evaluating the work of others for funding and publication: usability, scalability (e.g., 3 million enrollments), performance, deployment (e.g., client-side install). A list, taxonomy, or other framework might be helpful.
  • Product timelines are short. Techniques are needed to evaluate usable security ideas in a product context within the constraints of a product cycle timeline.
  • Intellectual property status is a big concern. Researchers interested in getting uptake on their ideas are encouraged to be very clear about their IP status, particularly if they are giving an idea away with no encumbrances.
  • Best practices can be easy to tech transfer. For example, guidance on how to make security or privacy mechanisms or artifacts usable.
  • A heterogeneous user population should be designed for: for example, a wide age range (13–97) and low-income users (public terminal access).
  • Analysts can help make recommendations for technology transfer. Perhaps there should be more opportunities for outreach or discussions between the research community and analysts.
  • Results from experience of use can be harder to publish (since the conditions are not controlled) but are very useful for justifying or motivating technology transfer, and can catch small problems with a big impact. One example is Google’s approach of trying out different designs and measuring their use.
  • Real-world data is hard for researchers to get. More work on finding ways to share data sets safely would help, as would canonical data sets based on real-world attributes.
  • A forum for new ideas, even half-baked ones, can be incredibly useful to people looking for ideas to pull into their products or deployments. One example is the New Security Paradigms Workshop.


Some photos from Mez of SOUPS 2008

Mostly social stuff


Try The Preference-Based Authentication Demo

Dear all,

It was nice to present our poster at SOUPS. You are welcome to try our demo of preference-based authentication. Any comments would be appreciated.

For more details, see



PCI Regulation Discussion Summary

PCI DSS is the Payment Card Industry Data Security Standard, a collaborative effort to establish a common set of security standards for entities that process, store, or transmit payment card data. It applies to all merchants that “store, process, or transmit cardholder data” and to all payment channels, including brick-and-mortar, mail, telephone, and e-commerce.

PCI Standards

  • Install and maintain a firewall configuration to protect card holder data
  • Do not use vendor-supplied defaults for system passwords and other security parameters
  • Protect stored cardholder data
  • Encrypt transmission of cardholder data across open, public networks
  • Use and regularly update anti-virus software
  • Develop and maintain secure systems and applications
  • Restrict access to cardholder data by business need-to-know
  • Assign a unique ID to each person with computer access
  • Restrict physical access to cardholder data
  • Track and monitor all access to network resources and cardholder data
  • Regularly test security systems and processes
  • Maintain a policy that addresses information security

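As a concrete illustration of the “protect stored cardholder data” requirement: PCI DSS permits displaying at most the first six and last four digits of a card number, so applications typically render a masked form rather than the full number. A minimal sketch (the function name and the keep-last-four choice are illustrative, not mandated by the standard):

```python
def mask_pan(pan: str) -> str:
    """Mask a primary account number (PAN) for display, keeping only
    the last four digits -- a common, conservative reading of the
    PCI DSS masking rule."""
    digits = pan.replace(" ", "").replace("-", "")
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # → ************1111
```

Masking for display is distinct from the encryption requirement for storage: a masked value is safe to show on a receipt, while any stored full PAN must still be rendered unreadable (for example, by strong encryption or truncation).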
PCI Winners & Losers
The winners will be Visa, MasterCard, and the other card brands; consulting and security firms; and possibly (though this has not been determined) consumers. The merchants certainly lose.

PCI Compliance
Air France is currently undergoing a multi-million-dollar effort to comply with PCI. It is attempting to reduce the number of applications that use credit cards, document its processing requirements, and implement encryption and PCI-compliant storage in its network.

Some of the questions raised involved liability issues, for example who should be assigned liability when fraud happens. It is also unclear how outsourcing will affect security and compliance with PCI.


More SOUPS blogs at

You will find more SOUPS blog entries at