the cups blog


posted by kami

Two new CUPS studies on online behavioral advertising

I am pleased to announce that the Carnegie Mellon CUPS lab has published two new technical reports on user studies related to online behavioral advertising.


Title: Smart, Useful, Scary, Creepy: Perceptions of Online Behavioral Advertising
Authors: Blase Ur, Pedro G. Leon, Lorrie Faith Cranor, Richard Shay and Yang Wang
Publication Date: April 2, 2012


We report results of 48 semi-structured interviews about online behavioral advertising (OBA). We investigate non-technical users’ attitudes about OBA, then explain these attitudes by delving into users’ understanding of its practice. Participants were surprised that their browsing history is currently used to tailor advertisements. They were unable to determine accurately what information is collected during OBA, assuming that advertisers collect more information than they actually do. Participants also misunderstood the role of advertising networks, basing their opinions of an advertising company on that company’s non-advertising activities. Furthermore, participants were unfamiliar with advertising industry icons intended to notify them when ads are behaviorally targeted, often believing that these icons were intended for advertisers, not for users. While many participants felt tailored advertising could benefit them, existing notice and choice mechanisms are not effectively reaching users. Our results suggest new directions both for providing users with effective notice about OBA and for the design of usable privacy tools that help consumers express their preferences about online behavioral advertising.


Title: What Do Online Behavioral Advertising Disclosures Communicate to Users?
Authors: Pedro Giovanni Leon, Justin Cranshaw, Lorrie Faith Cranor, Jim Graves, Manoj Hastak, Blase Ur and Guzi Xu
Publication Date: April 2, 2012


Online Behavioral Advertising (OBA) is the practice of tailoring ads based on an individual’s online activities. We conducted a 1,505-participant online study to investigate Internet users’ perceptions of OBA disclosures while performing an online task. We tested icons, accompanying taglines, and landing pages intended to inform users about OBA and provide opt-out options; these were based on prior research or drawn from those currently in use. The icons, taglines, and landing pages fell short both in terms of notifying participants about OBA and clearly informing participants about their choices. Half of the participants remembered the ads they saw but only 12% correctly remembered the disclosure taglines attached to ads. The majority of participants mistakenly believed that ads would pop up if they clicked on disclosure icons and taglines, and more participants incorrectly thought that clicking the disclosures would let them purchase their own advertisements than correctly understood that they could then opt out of OBA. “Ad-Choices,” the tagline most commonly used by online advertisers, was particularly ineffective at communicating notice and choice. 45% of participants who saw “AdChoices” believed that it was intended to sell advertising space, while only 27% believed it was an avenue to stop tailored ads. A majority of participants mistakenly believed that opting out would stop all online tracking, not just tailored ads. We discuss challenges in crafting disclosures, and we provide suggestions for improvement.


posted by kami

Why Johnny Can’t Opt Out: A Usability Evaluation of Tools to Limit Online Behavioral Advertising

We recently published a CyLab tech report on our user study of privacy tools. The Wall Street Journal also published an article about the study.

Technical Report: CMU-CyLab-11-017

Title: Why Johnny Can’t Opt Out: A Usability Evaluation of Tools to Limit Online Behavioral Advertising
Authors: Pedro G. Leon, Blase Ur, Rebecca Balebako, Lorrie Faith Cranor, Richard Shay, and Yang Wang
Publication Date: October 31, 2011


We present results of a 45-participant laboratory study investigating the usability of tools to limit online behavioral advertising (OBA). We tested nine tools, including tools that block access to advertising websites, tools that set cookies indicating a user’s preference to opt out of OBA, and privacy tools that are built directly into web browsers. We interviewed participants about OBA, observed their behavior as they installed and used a privacy tool, and recorded their perceptions and attitudes about that tool. We found serious usability flaws in all nine tools we examined. The online opt-out tools were challenging for users to understand and configure. Users tend to be unfamiliar with most advertising companies, and therefore are unable to make meaningful choices. Users liked the fact that the browsers we tested had built-in Do Not Track features, but were wary of whether advertising companies would respect this preference. Users struggled to install and configure blocking lists to make effective use of blocking tools. They often erroneously concluded the tool they were using was blocking OBA when they had not properly configured it to do so.


posted by kami

I Know Where You Live: Analyzing Privacy Protection in Public Databases

New CUPS tech report. A shorter version of this paper will be presented at the WPES workshop at CCS later this month.

Technical Report: CMU-CyLab-11-015

Title: I Know Where You Live: Analyzing Privacy Protection in Public Databases
Authors: Manya Sleeper, Divya Sharma, and Lorrie Faith Cranor
Publication Date: October 3, 2011


Policymakers struggle to determine the proper tradeoffs between data accessibility and data-subject privacy as public records move online. For example, Allegheny County, Pennsylvania recently eliminated the ability to search the county property assessment database using property owners’ names. We conducted a user study to determine whether this strategy provides effective privacy protection against a non-expert adversary. We found that removing search by name provides some increased privacy protection, because some users were unable to use other means to determine the address of an individual. However, this privacy protection is limited, and interface usability problems presented a comparable barrier. Our analysis suggests that if policymakers use removal of search by name as a privacy mechanism they should attempt to mitigate usability issues that can hinder legitimate use of public records databases.

Full Report: CMU-CyLab-11-015


posted by Yang

Indirect Content Privacy Surveys: Measuring Privacy Without Asking About It (paper 15)

Alex Braunstein, Google
Laura Granka, Google
Jessica Staddon, Google

This paper presents an interesting way of measuring people’s privacy concerns indirectly. Why can’t we just ask people about their privacy concerns directly? The authors present results of three surveys demonstrating that even subtle changes in question wording (e.g., including words such as “sensitive” and “worry”) can cause large divergences in responses.

So, is there a way to get at people’s privacy concerns without asking about privacy directly? The authors propose an interesting approach. They focus on how private people consider different types of their information (email, documents, etc.) to be. They came up with three attributes that indirectly measure the sensitivity of these different types of personal information:

  • important to you
  • important to others
  • infrequently shared

They designed questions to measure each of these attributes and computed a privacy score for each data type by averaging the answers to the questions measuring the three attributes. They then compared the results from this indirect privacy instrument with those from direct instruments.

It’s important to note that they didn’t compare the actual privacy scores but rather the relative rankings of the data types (e.g., email is more sensitive than documents). Their results showed that the indirect instrument preserved the same ranking that the direct instrument yielded. This suggests that the indirect survey approach may be a feasible way to assess people’s privacy concerns. They also noted that this indirect approach could be applied to other privacy-related contexts.
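The scoring step above is simple enough to sketch. The data types, ratings, and 1–5 scale below are invented for illustration; they are not the paper’s data:

```python
# Hypothetical 1-5 ratings for each data type on the three indirect
# attributes from the paper. All numbers here are invented examples.
indirect_ratings = {
    "email":     {"important_to_you": 4.6, "important_to_others": 3.9, "infrequently_shared": 4.2},
    "documents": {"important_to_you": 4.1, "important_to_others": 3.2, "infrequently_shared": 3.8},
    "photos":    {"important_to_you": 3.5, "important_to_others": 2.8, "infrequently_shared": 2.9},
}

def privacy_score(ratings):
    """Average the three attribute ratings into one indirect privacy score."""
    return sum(ratings.values()) / len(ratings)

scores = {dtype: privacy_score(r) for dtype, r in indirect_ratings.items()}

# The paper compares rankings, not raw scores, so we order the data types
# by their indirect score and check the order against a direct instrument.
indirect_ranking = sorted(scores, key=scores.get, reverse=True)
direct_ranking = ["email", "documents", "photos"]  # hypothetical direct results
```

With these made-up numbers the indirect ranking (email, then documents, then photos) matches the hypothetical direct ranking, which is the kind of agreement the authors report.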

Read the paper at:



posted by Michelle

Home is safer than the cloud! Privacy concerns for consumer cloud storage (Paper 13)

Iulia Ion, ETH Zurich
Niharika Sachdeva, IIIT-Delhi
Ponnurangam Kumaraguru, IIIT-Delhi
Srdjan Capkun, ETH Zurich

Cloud storage seems to promise access to your data from anywhere, security and backups managed for you, and other wonderful features. But there are some catches:

  • Can the cloud provider view and modify my data? Can they sell it?
  • Who is liable in case data is lost?
  • Is the content in the cloud really secure from hackers, government agents, etc?

Prior studies have looked at enterprise concerns about cloud storage, but not those of end users; in addition, many privacy studies focus only on the U.S.

In this paper, the authors chose to examine the attitudes toward cloud storage of end users in Switzerland and in India. They conducted 36 semi-structured interviews in each country, asking about current practices, privacy perceptions, and rights and guarantees related to cloud storage. Based on the interview results, the authors formulated a 20-minute online survey containing multiple-choice and Likert questions on the same topics, with about 400 participants.

Current data storage practices and attitudes:

  • More than 80% keep local backups of data they store on the internet
  • About 80% also “try not to” store sensitive data online; Swiss participants were less comfortable than Indian participants with storing sensitive information online
  • A majority feel that if their data is hacked it’s their own fault for keeping the data on the internet in the first place

Attitudes toward privacy:

  • No data is safe; anything can be hacked
  • But I’m not very interesting so no one would bother
  • Swiss are less accepting of government monitoring and surveillance than Indians are.

Consumer misperceptions:

  • Don’t realize that the webmail provider can delete/disable your account at any time
  • Don’t realize that the webmail provider can examine your attachments
  • Don’t know what their rights are if data is lost


The authors’ recommendations:

  • Provide stronger security mechanisms in the cloud
  • Improve presentation of privacy policies
  • Consumer protection rules, agencies for cloud storage
  • Future work: investigate awareness of international laws

Read the full paper at


posted by kami

Privacy: Is There An App For That? (Paper 12)

Jennifer King, University of California, Berkeley
Airi Lampinen, Helsinki Institute for Information Technology HIIT
Alex Smolen, University of California, Berkeley

What do Facebook users understand about Applications on Facebook Platform?

The authors wrote an app on Facebook to conduct a survey, then seeded it from two Facebook accounts.

Survey results

  • 98% had heard of apps
  • 65% claimed to have added 10 or fewer apps
  • 77% understood that apps were created by both Facebook and third parties
  • 48% were uncertain whether Facebook reviews apps
  • 28% had never read the “Allow Access” notice, where the permissions apps use are shown
  • 58% disagreed with the statement “I only add apps from people/companies I’ve heard of”
  • When asked what parts of their Facebook account the survey app could access, only one participant answered correctly

Using this data, the authors tried to determine what information predicted certain traits.

Adverse Events: The authors asked questions about adverse events on Facebook, such as having someone post something negative about you.

Interpersonal Privacy Attitudes: Older participants were more concerned with interpersonal privacy.

The most knowledgeable participants appeared to use apps the same way as other users.

Read the full paper at:


posted by patrick

ROAuth: Recommendation Based Open Authorization (Paper 11)

Mohamed Shehab, University of North Carolina at Charlotte
Said Marouf, University of North Carolina at Charlotte
Christopher Hudel, University of North Carolina at Charlotte

This paper proposes a collaborative filtering model that utilizes community decisions to help users make informed decisions about third party applications that request access to their private information at installation time.

The authors developed a browser-based extension to intercept the default OAuth 2.0 request flow and to provide users with an easy and usable interface to configure their privacy settings for applications. This extension includes a multi-criteria recommendation system that uses collaborative filtering to incorporate the decisions of the community and previous decisions made by an individual user to provide users with recommendations on permissions requested by applications.

Their evaluation shows that the recommender system correctly predicts the user’s decision with about 90% accuracy. A recommendation value of 45% or higher indicates that the system recommends granting the requested permission; a value below 45% means the system recommends denying it.
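The decision rule is easy to sketch. The blend of community and personal history below, including the weights and function names, is an invented illustration, not ROAuth’s actual model; only the 45% threshold comes from the paper:

```python
def recommendation_value(community_grant_rate, user_grant_rate,
                         w_community=0.6, w_user=0.4):
    """Blend the community's grant rate for a permission with the user's
    own history of granting similar requests (weights are invented)."""
    return w_community * community_grant_rate + w_user * user_grant_rate

def recommend(value, threshold=0.45):
    """Apply the paper's reported decision rule: >= 45% recommends granting."""
    return "grant" if value >= threshold else "deny"

# Example: 70% of the community granted this permission, and the user
# granted similar requests 30% of the time.
value = recommendation_value(0.70, 0.30)   # 0.6*0.70 + 0.4*0.30 = 0.54
decision = recommend(value)                # "grant", since 0.54 >= 0.45
```

The appeal of a single threshold is that the extension can turn a continuous community signal into a simple grant/deny suggestion at installation time.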

A user study was conducted to show the effectiveness of the proposed browser extension; one group was provided with privacy recommendations generated by the recommendation system while the other users were not shown any recommendations. The results show that users who were not presented with the recommendation were more likely to grant permissions to applications compared to those who were provided with recommendations.

Read the full paper at:


posted by kami

“I regretted the minute I pressed share”: A Qualitative Study of Regrets on Facebook (Paper 10)

Yang Wang, Carnegie Mellon University
Gregory Norcie, Carnegie Mellon University
Saranga Komanduri, Carnegie Mellon University
Pedro Giovanni Leon, Carnegie Mellon University
Lorrie Faith Cranor, Carnegie Mellon University
Alessandro Acquisti, Carnegie Mellon University

This study looked at the negative experiences people are having on Facebook. In particular, the authors asked people whether they had ever regretted what they posted on Facebook and why.


The research questions:

  • What do users regret posting?
  • Why do users make these posts?
  • What are the consequences?

The authors surveyed 321 Facebook users on Mechanical Turk but didn’t get much data. They then did semi-structured interviews with 19 Facebook users but collected only a few regrets. A diary study also yielded very few regrets. Finally, they ran a revised online survey.

What did people regret?

  • Things about other people
  • Relationships
  • Controversial topics
  • Negative content
  • Personal information and work

Why post regrettable things?

  • “It’s cool”, “It’s funny”
  • “I didn’t think”
  • “Hot” states – angry, frustrated, excited, drunk, etc
  • Unintended audience – “I didn’t know he can see it”
  • Accidents – “I didn’t know I posted”


Q1: Do you think Google+ does any better than Facebook in terms of your design ideas?

A1: Not any better on awareness. They make some progress on avoiding unintended audiences; Google+ does pop up a message before you re-share something. As for making people think, I have seen some of this in Gmail. I haven’t seen prediction of regrets.

Q2: Facebook reminds you of people you talk with frequently, but not those you don’t talk with. It is against Facebook’s model to encourage small friend lists.

A2: It is in the interest of these social network operators to think about privacy. I think especially with Google Circle entering the picture.

Read the full paper at:


posted by Michelle

Heuristics for Evaluating IT Security Management Tools (Paper 7)

Pooya Jaferian, University of British Columbia
Kirstie Hawkey, Dalhousie University
Andreas Sotirakopoulos, University of British Columbia
Maria Velez-Rojas, CA Technologies
Konstantin Beznosov, University of British Columbia

This paper arose from a struggle to evaluate the usability of IT Security Management (ITSM) tools. Recruiting actual IT managers for lab or field studies proved difficult, so the authors chose to use the “discount” usability evaluation technique of asking experts armed with heuristics to evaluate the tools.

For this process to work, you need good heuristics. Building on guidelines from a prior paper as well as HCI activity theory, the authors developed seven heuristics:

  • Visibility of activity status
  • History of actions and changes on artifacts
  • Flexible representation of information
  • Rules and constraints
  • Planning and dividing work between users
  • Capturing, sharing, and discovery of knowledge
  • Verification of knowledge

To evaluate the heuristics, the authors set up a between-subjects study in which experts evaluated a tool using either the new ITSM heuristics or Nielsen’s existing, non-domain-specific heuristics. The authors then evaluated how successfully participants in each condition identified major and minor problems in the target tool.

Major results include:

  • More high-severity problems were found using the new ITSM heuristics than with Nielsen’s heuristics.
  • The participants, all of whom had used Nielsen’s heuristics before, rated the ITSM heuristics as easy to learn, as easy to apply, and as effective as Nielsen’s.
  • In general, comprehensively evaluating complex ITSM tools may require more evaluators than for simpler interfaces, to ensure full coverage.
  • The ITSM and Nielsen’s heuristics are complementary and should be used together for maximum effectiveness.

Read the full paper at


posted by patrick

Shoulder Surfing Defence for Recall-based Graphical Passwords (Paper 6)

Nur Haryani Zakaria, Newcastle University, UK
David Griffiths, Newcastle University, UK
Sacha Brostoff, University College London, UK
Jeff Yan, Newcastle University, UK

The presenter was Haryani Zakaria of Newcastle University. She began with an introduction to the graphical password system they used, called “Draw-A-Secret,” in which a user draws a pattern on a screen. The authors were concerned about shoulder surfing attacks on this scheme and considered three defense techniques in this paper. Decoy strokes are false strokes drawn automatically by the system to confuse the attacker. Disappearing strokes vanish as soon as the stylus is lifted. The line snaking defense also makes lines disappear, but the disappearance occurs while the user is still drawing, without waiting for the stylus to be raised. The authors studied both the effectiveness and the usability of these techniques.

User Study 1: effectiveness. The participants played the role of attackers, while an experimenter acted as the victim. After an introduction and a demonstration, participants observed the victim entering a password, with different defense techniques depending on condition. The results indicate that both the control group and the decoy-stroke group succeeded in about three-quarters of their attacks, versus under half for the disappearing-stroke and line-snaking techniques.

User Study 2: usability. The authors dropped the less successful decoy technique and performed a usability study on the remaining two, with 30 participants assigned across the conditions. They measured login time and login error rate. Line snaking required longer login times and more login attempts than disappearing strokes, and more users preferred the disappearing-stroke technique. Participants felt more confident when their lines remained visible until completion, letting them know each line was drawn correctly. Thus, the disappearing-stroke technique appears to offer comparably good protection while being more usable than line snaking.

Read the full paper at: