the cups blog

07-15-09

posted by kami
1 Comment

Think Evil Tutorial Part 1

The Think Evil tutorial (slides) talks about how attackers and defenders react to each other.

Intro/Casinos

As a first example we looked at casino cheating. Casinos have an interesting problem because 1) money is involved, 2) there is no hope of negotiating with the attackers, and 3) telling the difference between a good and a bad player is hard.

Card counting works and puts the odds in the player’s favor, but it also makes the pattern of play more regular. This can be detected by watching a player’s pattern over time. Anti-virus software does something similar: it recognizes the patterns of known viruses, allowing it to block bad things. Similarly, host-based IDS recognizes good things and allows them. However, to do this you need to be able to differentiate “bad” from “good”.
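The blocklist/allowlist distinction can be sketched in a few lines of Python. This is a toy illustration, not real detection logic; the signatures and action names are made-up placeholders:

```python
# Hypothetical "known bad" signatures (anti-virus style blocklist)
KNOWN_BAD = {"format C:", "deltree /y"}

# Hypothetical "known good" actions (host-based IDS style allowlist)
KNOWN_GOOD = {"open", "read", "write", "close"}

def blocklist_detect(action: str) -> bool:
    """Anti-virus style: flag only actions matching a known-bad signature."""
    return any(sig in action for sig in KNOWN_BAD)

def allowlist_detect(action: str) -> bool:
    """Host-IDS style: flag anything that is not on the known-good list."""
    return action not in KNOWN_GOOD

print(blocklist_detect("format C:"))    # matches a signature -> flagged
print(allowlist_detect("spawn_shell"))  # not on the allowlist -> flagged
```

Note the asymmetry the tutorial points at: a blocklist misses novel bad behavior, while an allowlist flags anything novel, good or bad.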

Casinos have several defenses to even the odds back out. Two examples are reshuffling more often and using more decks, both of which make it harder for card counters to get good enough odds. Windows XP used to be very open until someone wrote the Blaster worm. Then Microsoft released Service Pack 2, which turned all services off by default.

Casinos also sometimes just do nothing; many card counters are not good enough to bother about. In fact, card counters who are bad at card counting are a good thing, since they think they can win, which is exactly what casinos love. Security sometimes takes a similar view: if defending against something costs more than the thing being defended, it is not worth it.

The MIT card-counting ring made the observation that casinos look for individual players, not groups, so they did their card counting in groups. This works well because it attacks the pattern-matching strategy. Mimicry attacks are where the attacker makes their behavior look like known good behavior. The attacker can also use evasion: where the defender is looking for known bad behavior, the attacker makes their behavior look different from the known bad. The goal of defense is to have complete coverage of all bad behavior. This is why anti-virus companies are shifting toward exploit identification rather than signature identification, because it is more general. MIT also made use of the fact that their attack was novel; it takes time for a security program to adapt to a new type of attack.

Roulette has an attack called “pastposting,” where you change your bet after the ball has already landed. An anti-pastposting roulette wheel was invented to prevent this by raising an alarm if the bets are changed. To beat the system, players can mimic drunken players and continuously trigger the alarm until the dealer turns it off. Attackers can use malicious false positives to cause defenders to turn off alarms or start ignoring them. Reactions have a cost; the attacker may simply want to cost the defenders time, money, or annoyance.

Even worse, the dealer could be corrupt. If the attackers are friends with the dealer, the dealer can do many things to make the players more “lucky.” Insider attacks are a security nightmare because the insider must be trusted and has insider knowledge of the system. Insiders are also people, who have all sorts of human weaknesses. There was a study where researchers traded candy for passwords (note: those passwords were never verified). Casinos have cameras not just to watch customers but also to watch the dealers.

Some casinos are experimenting with RFID tags in the chips. This lets them track the chips around the casino and identify players that are winning or losing.

You can win at roulette because it is not a random process; Thorp commented on this as well. If bets are allowed after the ball is thrown, you can use the phase and velocity of the ball and the wheel to predict where the ball will land. This works about 40% of the time. Someone else also created a cell phone app that did this. In response, the casinos made this illegal. Changing the attacker’s cost-benefit analysis can also be used as a defense.
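A toy sketch of this kind of prediction: treat the ball as decelerating at a constant rate, estimate when it slows enough to fall, and compare its angle against where the wheel will be at that moment. All the numbers here (deceleration, drop speed) are made-up assumptions, not measured casino physics:

```python
import math

def predict_pocket(ball_angle, ball_speed, wheel_angle, wheel_speed,
                   decel=0.5, drop_speed=3.0, pockets=38):
    """Predict the pocket index where the ball lands.

    Angles in radians, speeds in rad/s, decel in rad/s^2 (all hypothetical).
    """
    # Time until the ball slows to the speed at which it drops off the rim
    t = (ball_speed - drop_speed) / decel
    # Ball angle under constant deceleration; wheel speed assumed constant
    ball_final = ball_angle + ball_speed * t - 0.5 * decel * t**2
    wheel_final = wheel_angle + wheel_speed * t
    # Position of the ball relative to the wheel, mapped onto a pocket
    relative = (ball_final - wheel_final) % (2 * math.pi)
    return int(relative / (2 * math.pi) * pockets)

# Ball at angle 0 moving at 10 rad/s; wheel spinning the other way at 1 rad/s
print(predict_pocket(0.0, 10.0, 0.0, -1.0))
```

Real predictors only need to beat the house edge on average, which is why even a rough model that is right well under half the time can be profitable.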

People

People typically act in their own self-interest, if they understand what that interest is. Each attacker has their own self-interest, and those interests can be very different.

You should always model an adversary as someone who is creative and innovative; don’t underestimate your opponent. Security researchers get into a rat hole on tactics too early. Security experts spend too much time securing the door and don’t consider that the attacker wants something in the room, is uninterested in attacking the door, and may just break a window.

07-15-09

posted by patrick
Comments Off on Welcome to SOUPS 2009!

Welcome to SOUPS 2009!

Welcome to a new year and a new gathering of SOUPS, the Symposium on Usable Privacy and Security. This year, our fifth, we have relocated from the sunny hills of Pittsburgh, Pennsylvania to the sunny Google campus in Mountain View, California.

For the next three days you will have an opportunity to hear from people doing cutting-edge research, as well as from industry about their first-hand experiences with usable privacy and security challenges. Our two tutorials for the day are both already underway: Designing and Evaluating Usable Security and Privacy Technology and Think Evil™. We will have more in-depth reports on those here soon.

If you are interested in posting to this blog, just click the Register link above and your account will shortly be upgraded so you can post your notes and thoughts from SOUPS 2009. Tag your SOUPS photos with soups09 and we will pull those in (but remember: no pictures anywhere inside any Google buildings). Finally, our tweets are already streaming in; just use the #soups hashtag to join the SOUPS discussion on Twitter.

07-27-08

posted by Mez
1 Comment

Some photos from Mez of SOUPS 2008

Mostly social stuff

http://www.flickr.com/photos/8391807@N05/sets/72157600254522816/

07-26-08

posted by lyang
Comments Off on Try The Preference-Based Authentication Demo

Try The Preference-Based Authentication Demo

Dear all,

We are pleased to present our poster at SOUPS. You are welcome to try our demo on preference-based authentication; any comments would be appreciated.

http://blue-moon-authentication.com/

For more details, see http://I-forgot-my-password.com

Liu

07-25-08

posted by patrick
Comments Off on PCI Regulation Discussion Summary

PCI Regulation Discussion Summary

PCI DSS is the Payment Card Industry Data Security Standard, a collaborative effort to achieve a common set of security standards for entities that process, store, or transmit payment card data. It applies to all merchants that “store, process, or transmit cardholder data,” across all payment channels including brick-and-mortar, mail, telephone, and e-commerce.

PCI Standards

  • Install and maintain a firewall configuration to protect card holder data
  • Do not use vendor-supplied defaults for system passwords and other security parameters
  • Protect stored cardholder data
  • Encrypt transmission of cardholder data across open, public networks
  • Use and regularly update anti-virus software
  • Develop and maintain secure systems and applications
  • Restrict access to cardholder data by business need-to-know
  • Assign a unique ID to each person with computer access
  • Restrict physical access to cardholder data
  • Track and monitor all access to network resources and cardholder data
  • Regularly test security systems and processes
  • Maintain a policy that addresses information security

PCI Winners & Losers
The winners will be Visa, MasterCard, and the other card brands; consulting and security firms; and possibly (though this has not been determined) consumers. The merchants certainly lose.

PCI Compliance
Air France is currently undergoing a multi-million-dollar effort to comply with PCI. It is attempting to reduce the number of applications that use credit cards, to document processing requirements, and to implement encryption and PCI storage in the network.

Some questions raised involve liability issues, for example whom to assign liability to when fraud happens. It is also unclear how outsourcing will affect security and compliance with PCI.

07-25-08

posted by lorrie
1 Comment

More SOUPS blogs at usablesecurity.com

You will find more SOUPS blog entries at usablesecurity.com.

07-25-08

posted by jerry
3 Comments

Analyzing Websites for User-Visible Security Design Flaws

Study

-chose not to examine bugs or browser flaws

-Analyzed 214 websites (mostly banks)

Demo:

-Login on insecure pages

-Contact information on insecure pages

Should this be a concern?

-exploits would not be straightforward, but attackers are becoming more organized

Use of Third-Party Sites

-break in chain of trust

Demo:

-transition to third party site

Policies on User Ids and Passwords

-inadequate or unclear policies for user ids and passwords

Ambiguity in Policies

-emailing security sensitive information

Results

-significant number of sites have login design flaws (47%)

Limitations of Study

-may have failed to completely retrieve all relevant pages

-Only looked at financial institutions in the US

-used heuristics for automated analysis

Usability Lessons for Websites

-stay on the same host name

-if not, keep to the same domain

-else, make a “proper introduction”

-use SSL throughout the site
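The lessons above can be sketched as a simple link check. This is an illustration with hypothetical URLs, using a naive last-two-labels notion of “domain” (which is wrong for suffixes like .co.uk):

```python
from urllib.parse import urlparse

def link_concerns(page_url: str, link_url: str) -> list:
    """Flag user-visible concerns when a page links somewhere else."""
    page, link = urlparse(page_url), urlparse(link_url)
    concerns = []
    if link.scheme != "https":
        concerns.append("not SSL")          # lesson: use SSL throughout
    if link.hostname != page.hostname:
        concerns.append("different host")   # lesson: stay on the same host
        # Naive registrable-domain check: last two dot-separated labels
        page_dom = ".".join(page.hostname.split(".")[-2:])
        link_dom = ".".join(link.hostname.split(".")[-2:])
        if link_dom != page_dom:
            # lesson: a cross-domain hand-off needs a "proper introduction"
            concerns.append("different domain")
    return concerns

print(link_concerns("https://www.example-bank.com/login",
                    "http://billpay.example-processor.com/"))
```

A bank page handing users off like this exhibits all three flaws the study flags, which is roughly what the authors' automated heuristics looked for.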

07-25-08

posted by ponguru
Comments Off on Discussion notes – Metrics for Characterizing Research Participants’ Technical Knowledge

Discussion notes – Metrics for Characterizing Research Participants’ Technical Knowledge

Summary from the discussion Metrics for Characterizing Research Participants’ Technical Knowledge:

– Background on some studies and the criteria they used
– Participants agreed that there needs to be a metric, but it is not clear whether there can be a one-size-fits-all one
– Conduct a large study among different types of users, then decide what types of questions can be used for a specific study
– Suggestion to look at users’ behavior to classify them as technical or novice (e.g., use of shortcut keys)
– Some questions that we agreed on, which we may use in future studies:
– Are you technical or non-technical?
– Why do you think you are technical or non-technical?
– What is your educational background?

07-24-08

posted by srhim
Comments Off on Use Your Illusion: Secure Authentication Usable Anywhere

Use Your Illusion: Secure Authentication Usable Anywhere

Eiji Hayashi

Nicolas Christin

Rachna Dhamija

Adrian Perrig

Graphical Authentication

  • Passfaces – Faces are used as the graphical portfolio
  • PassPoints – Use “a sequence of clicks” on an image as a shared secret
  • DAS (Draw-A-Secret) – Users draw their secret on a grid
  • Déjà Vu – Users recognize previously chosen images

Graphical Portfolio

  • If the user chooses the portfolio, it is easy to remember
  • If it is random, users have difficulty remembering the pictures

Use your Illusion

  1. Allow users to take/choose picture by themselves
  2. Distort pictures
  3. Assign the distorted pictures as graphical portfolio

Requirement for Distortion

  • One-way
  • Discarding precise shapes and colors
  • Preserving rough shapes and colors

Oil Paintings are used

Distortion level

  • If high, difficult to guess, but difficult to memorize
  • If low, easy to memorize, but easy to guess

Low-Fidelity Test – Show the most distorted image, then ask the user to guess what it is. If the user does not know, continue showing less distorted versions.

Also ask the user at which point (i.e., at which distortion level) he can no longer recognize that the image is, say, a dog.

Prototype

  • Implemented on Nokia’s cell-phone
  • Also on the web

1st Usability Test

  • 45 participants were divided into 3 groups
  • Self-selected, non-distorted – mean was around 20 sec
  • Self-selected, distorted – 20 sec
  • Imposed, highly distorted – 70 sec

Process of Memorization

  • Participants assign meanings to distorted images
  • Assigning meanings helps memorization

2nd test

  • 54 participants were divided into 3 groups
  • self-selected, non-distorted
  • self-selected, distorted
  • imposed, distorted

Future Work

  • Detailed usability testing
  • Long-term testing
  • Find an optimal distortion level
  • Investigate a metric for evaluating distortion level

Assigning meaning helps memorization

07-24-08

posted by bp
Comments Off on Securing Passfaces for Description

Securing Passfaces for Description

Paul Dunphy, James Nicholson and Patrick Olivier

Study 1:

  • 18 participants (9m, 9f), 45 faces (27f, 18m)
  • Recorded descriptions of 15 faces each
  • Results: females gave longer descriptions, using more words to describe the faces

Study 2:

  • 56 participants (31m, 25f)
  • Within-subject with conditions:
    • Random decoys
    • Visually similar decoys (used a separate set of participants to group similar matches)
    • Descriptively similar decoys
  • Task: participants choose the 5 correct passfaces from the descriptions to log in.
  • Results:
    • Average score in random condition best
    • 9% of logins were successful (7 in random, 5 in visual, 1 in verbal).

Discussion

  • Decoy grouping effective
  • Overall login success low
  • Is there an impact on memorability/shoulder surfing?
  • What about related graphical schemes?