the cups blog

07-21-11

posted by patrick

Using Data Type Based Security Alert Dialogs to Raise Online Security Awareness (Paper 2)

Max-Emanuel Maurer, University of Munich
Alexander De Luca, University of Munich
Sylvia Kempe, University of Munich

Passive indicators are not the best approach because users don’t notice them, and users quickly become habituated to clicking through active dialogs that block an entire website. Maurer et al. came up with a different approach, a semi-blocking dialog, with three versions as shown in the image below. The dialog is positioned near the data entry box and appears as the user types in that box. The warning shows the type of data being entered (as image and text) plus an additional information box that shows the domain and whether or not the traffic is encrypted.

[Image: three examples of the warning dialogs]
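
To make the mechanism concrete, here is a minimal sketch in Python of the kind of data-type detection such a dialog relies on. The regex patterns, type names, and warning wording are illustrative assumptions, not the authors’ implementation.

    import re

    # Illustrative patterns for recognizing sensitive data types as the user
    # types. These regexes and categories are assumptions for this sketch.
    DATA_TYPE_PATTERNS = {
        "email address": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
        "credit card number": re.compile(r"^\d{13,19}$"),
        "phone number": re.compile(r"^\+?\d{7,15}$"),
    }

    def classify_input(text):
        """Return the detected data type for the current field contents, if any."""
        stripped = text.replace(" ", "").replace("-", "")
        for data_type, pattern in DATA_TYPE_PATTERNS.items():
            if pattern.match(stripped):
                return data_type
        return None

    def build_warning(text, domain, encrypted):
        """Compose the semi-blocking warning shown next to the entry field."""
        data_type = classify_input(text)
        if data_type is None:
            return None
        transport = "encrypted" if encrypted else "NOT encrypted"
        return f"You are entering a {data_type} on {domain}. This connection is {transport}."

    # Example: typing a card number into a form on an unencrypted page.
    print(build_warning("4111 1111 1111 1111", "shop.example.com", False))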

A first trial evaluation allowed them to gather initial feedback and update the design through a design exercise. They then ran a field study with 14 participants over 7 days; participants generally liked the tool, and the number of warnings shown decreased over time. In general, they report that semi-blocking dialogs are beneficial, though users will not find the additional information unless it is shown by default (the collapsed information box required expansion).

From the questions, we learned that the tool does suspend most AJAX submissions by creating an additional text field, though there are of course spoofing attacks that could be attempted. A longer study with the tool needs to be run to see whether users become habituated to it, whether they understand its benefits, and whether they understand why and when it appears.

Read the full paper at: http://cups.cs.cmu.edu/soups/2011/proceedings/a2_Maurer.pdf

07-21-11

posted by patrick

A Brick Wall, a Locked Door, and a Bandit: Promoting A Physical Security Metaphor For Firewall Warnings (Paper 1)

Fahimeh Raja, University of British Columbia
Kirstie Hawkey, Dalhousie University
Steven Hsu, University of British Columbia
Kai-Le Clement Wang, University of British Columbia
Konstantin Beznosov, University of British Columbia

“A Brick Wall” aims to design firewall warnings that will accurately communicate risk to users.

The authors designed graphical warnings using a physical security mental model: a person trying to gain access to a secured door in a brick wall surrounding the user’s computer room. The user is presented with a security dialog with a color-coded title bar, a short text description of the reason for the warning, a security cartoon illustrating the risk, and a series of actions to take (allow, deny, etc.) depicted by padlocks being opened or remaining locked.

The security cartoon varies based on the severity of the warning (see the sketch after the list):

  • The most severe warning, for known-malicious access, features a red title bar and depicts a robber approaching the door carrying a knife and a bag labeled “data.”
  • The moderate warning, for unknown access, features a yellow title bar and depicts a grey human silhouette approaching the door.
  • The mildest warning, for identified-safe access, features a green title bar and depicts a friendly figure approaching the door.
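
A minimal sketch of that severity-to-presentation mapping, in Python. The severity labels and the wording are assumptions for illustration; the actual dialogs are graphical.

    # Hypothetical severity levels mapped to the warning elements described
    # above; the labels and phrasing are assumptions for this sketch.
    WARNING_STYLES = {
        "known_malicious": ("red", "robber with a knife and a bag labeled 'data'"),
        "unknown": ("yellow", "grey human silhouette"),
        "known_safe": ("green", "friendly figure"),
    }

    def render_warning(severity, reason, actions):
        """Assemble the dialog pieces: title bar, reason, cartoon, actions."""
        color, cartoon = WARNING_STYLES[severity]
        lines = [
            f"[{color.upper()} TITLE BAR] Firewall warning",
            f"Reason: {reason}",
            f"Cartoon: {cartoon} approaching the door",
            "Actions (padlocks opening or staying locked):",
        ]
        lines += [f"  - {action}" for action in actions]
        return "\n".join(lines)

    print(render_warning("unknown",
                         "An unrecognized program wants network access.",
                         ["Allow once", "Allow always", "Deny"]))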

A study was conducted to compare the effectiveness of the graphical warnings with the text warnings from the Comodo Personal Firewall in conveying the risk associated with a given warning. The graphical warnings increased subjects’ understanding of the protection offered by the firewall over the text-only warnings and increased subjects’ assessment of risk. Two-thirds of subjects preferred the graphical warnings. Preference for the textual warnings correlated strongly with greater technical capability, and those subjects held that opinion for interesting reasons (text looks more professional; graphics look childish).

Read the full paper at http://cups.cs.cmu.edu/soups/2011/proceedings/a1_Raja.pdf

07-20-11

posted by kami

VizSec 2011: Cyber-security analytics

Ankit Singh, Alex Endert, Lauren Bradel, Christopher Andrews, Chris North and Robert Kincaid, “Using Large Displays for Live Visual History of Cyber-security Analytic Process”

The authors worked with eight professional cyber analysts a couple of times a week for about three months, and also observed the analysts analyzing a known data set.

Watched analysts use:

  • Multiple data sources
  • Multiple tools/windows
  • Extensive Excel usage

Noticed heavy use of versioning in the analysis. The analysts had difficulty re-creating their steps based on all the versions of documents they were creating.

The authors considered four improvements based on their observations:

  • Make use of the resolution and size of the monitors – Give the users more resolution
  • De-aggregation of data
  • Case Management – They did lots of task switching which cost time and memory load.
  • Process History – the ability to visualize and go back to prior states.

The authors created an Excel add-on. The add-on provides a “Fork” option where the user can split off a new version associated with a new subtask. Users can also attach comments.

Propagating vs. Forking

If a user makes a change to a historical version, should that change propagate to later versions or should it branch? If propagation is used, how do we indicate to users what will change?
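
A toy version tree makes the trade-off concrete. This is a conceptual sketch, not the authors’ Excel add-on.

    # A toy version tree illustrating forking versus propagating.
    class Version:
        def __init__(self, data, parent=None, comment=""):
            self.data = dict(data)  # snapshot of the spreadsheet state
            self.parent = parent
            self.children = []
            self.comment = comment
            if parent is not None:
                parent.children.append(self)

        def fork(self, comment=""):
            """Branch off a new version for a new subtask; siblings are untouched."""
            return Version(self.data, parent=self, comment=comment)

        def propagate(self, key, value):
            """Apply a change here and push it into every descendant version."""
            self.data[key] = value
            for child in self.children:
                child.propagate(key, value)

    root = Version({"filter": "port 80"}, comment="initial triage")
    subtask = root.fork(comment="investigate suspicious host")

    # Propagation rewrites downstream history; a fork would have left it alone.
    root.propagate("filter", "port 443")
    print(subtask.data["filter"])  # now "port 443" because the change propagated

The open design question is exactly the last line: should the subtask’s state have changed, and if so, how does the interface warn the analyst?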

07-20-11

posted by kami

VizSec2011: Malicious Activity on the Internet

Francesco Roveta, Luca Di Mario, Federico Maggi, Giorgio Caviglia, Stefano Zanero and Paolo Ciuccarelli, “BURN: Baring Unknown Rogue Networks”

The goal of this work is to expose malicious hosts.

The FIRE system focuses on the top four internet threats:

  • Malware
  • Botnets
  • Phishing
  • Spam

The authors focus on Autonomous Systems (ASes) because targeting individual IPs is challenging.

The authors take data from Anubis, PhishTank, and Spamhaus and feed it into FIRE to quantify the amount of malicious activity that an AS is involved in. As an outcome of this project, many “shady” ISPs were reported to law enforcement, and some ISPs were notified and took action.
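
In spirit, the aggregation looks something like the sketch below. The feed contents and the prefix-to-AS mapping are made up for illustration.

    # Sketch of counting blacklisted IPs per Autonomous System across feeds.
    from collections import Counter

    ASN_BY_PREFIX = {"198.51.100": 64500, "203.0.113": 64501}  # made-up mapping

    def ip_to_asn(ip):
        """A real system would consult BGP routing data; this is a stand-in."""
        return ASN_BY_PREFIX[ip.rsplit(".", 1)[0]]

    def score_autonomous_systems(feeds):
        """Count how many blacklisted IPs fall into each AS, across all feeds."""
        scores = Counter()
        for bad_ips in feeds.values():
            for ip in bad_ips:
                scores[ip_to_asn(ip)] += 1
        return scores

    feeds = {
        "malware": ["198.51.100.7", "198.51.100.9"],  # e.g., from Anubis
        "phishing": ["203.0.113.5"],                  # e.g., from PhishTank
        "spam": ["198.51.100.7"],                     # e.g., from Spamhaus
    }
    print(score_autonomous_systems(feeds).most_common())  # AS 64500 leads with 3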

Exploring the data in FIRE is challenging, so the authors created BURN (Baring Unknown Rogue Networks) to visualize the data.

BURN is targeted towards both researchers and end users.

BURN provides a Global view and an AS view. The Global view uses carefully designed visualizations to show information worldwide, including the size and ongoing state of ASes. If the analyst is interested in a particular AS, they can switch to a detailed view, which offers several different graphs for exploring different features of the data.

BURN is currently in private beta.

07-20-11

posted by patrick

Survey and classification of potential security UX conventions

Rob Reeder
Senior Security Program Manager
Microsoft

Our story is that we have been tasked with making our security advice and requirements more specific. For example, we get questions like:

  • What icon should I use?
  • How big should the icon be?
  • Can you give me a generic sentence to insert?

So, we will assume these conventions are beneficial; the next step is to create them.

In our search, we discovered ANSI Z535.4-2007, a standard for more general safety and product warnings, and will be interleaving that work throughout the talk today.

What are the properties of a good (security) convention?

  • Intuitive to users
  • Consistently applied
    • Doesn’t interfere with other uses of the technique
    • e.g., bold font is already used for other things
  • Studied & tested
  • Resistant to spoofing (this is obviously very difficult)
  • Easy to implement
  • Easy to localize
    • any word that needs to be changed has to undergo a localization process, being translated into dozens of different languages
    • could make translation tables to solve this
  • Easy to enforce usage across company/industry
    • easy to tell when it is being used correctly or incorrectly
  • Portable to different devices
  • Accessible
  • [bonus] Already in use!

Final Thoughts: There are many challenges to getting good conventions, including competitive advantage within industry, spoofing of elements, and gaining widespread use, but the ANSI standard, with its clear and useful guidelines, gives me hope.

07-20-11

posted by kami

VizSec 2011: Malware Images

Lakshmanan Nataraj, Karthikeyan Shanmugavadivel, Gregoire Jacob and B.S Manjunath, “Malware Images: Visualization and Automatic Classification”

The authors render the bytes of malware files as small images. These visualizations let you get a high-level sense of a file and what its different components are.

Variants within a malware family look visually similar to one another while looking dissimilar to other malware families.

Once malware is converted to an image representation, you can use that to characterize the malware. The authors used texture features that are normally used to identify different landscapes or other images, with k-nearest neighbors for classification, using a Euclidean distance measure to determine how similar images are.

They took 2,000 malware samples comprising eight malware families, converted them to images, and used image-texture-based features. The authors were able to get around 98% classification accuracy.
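
A minimal sketch of the pipeline in Python. The paper uses GIST texture features; a crude downsampled thumbnail stands in here, and the random-byte “samples” are placeholders for real malware files.

    # Byte-to-image conversion plus k-NN classification, as a sketch.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def bytes_to_image(data, width=256):
        """Interpret raw file bytes as rows of 8-bit grayscale pixels."""
        pixels = np.frombuffer(data, dtype=np.uint8)
        height = len(pixels) // width
        return pixels[: height * width].reshape(height, width)

    def feature_vector(data, size=64):
        """Downsample to a fixed-size thumbnail so all samples are comparable."""
        image = bytes_to_image(data)
        rows = np.linspace(0, image.shape[0] - 1, size).astype(int)
        cols = np.linspace(0, image.shape[1] - 1, size).astype(int)
        return image[np.ix_(rows, cols)].astype(float).ravel()

    # Placeholder corpus: (file bytes, family label) pairs.
    rng = np.random.default_rng(0)
    corpus = [(rng.integers(0, 256, 100_000, dtype=np.uint8).tobytes(),
               f"family_{i % 3}") for i in range(9)]

    X = np.array([feature_vector(data) for data, _ in corpus])
    y = [family for _, family in corpus]

    # k-nearest neighbors with the default Euclidean distance metric.
    classifier = KNeighborsClassifier(n_neighbors=3).fit(X, y)
    print(classifier.predict([feature_vector(corpus[0][0])]))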

What about packing?

Images after packing look completely different from the unpacked executable.

Common wisdom says that everything packed by the same packer should look the same and not like the original. The authors tried packing each malware sample with each of three packers. Even after packing, the authors were able to identify family groups with high accuracy.

They used 25k malware samples from the Anubis and VX Heavens datasets, labeled them using Microsoft Security Essentials, and kept the top 100 families. They still got high accuracy.

They tried 64k malware samples spanning 531 families and still got high accuracy.

The biggest advantage of image-based malware analysis is speed: it takes only about 50 ms. It also requires neither execution nor disassembly.

A limitation of this work is that it is data driven: it does not handle zero-day attacks well, and the characterization groups malware based on images, not on actual functionality.

Questions

Q1: How do forensic malware analysts see themselves using this work? What about the low-accuracy cases?

A1: Low accuracy could be countered by using more AV labeling.

Q2: What you are doing is visualizing signatures. Will this work on polymorphic malware? Is this different enough from existing software, since it only classifies known malware rather than separating it from good code?

A2: We did try adding a set of non-executable default Windows files as an extra “family” and were able to tell the difference between this “family” and the others.


07-20-11

posted by patrick

SOUPS 2011 Begins!

SOUPS 2011 has gotten underway this morning with a workshop, a symposium, and a tutorial.

The Workshop on Usable Security Indicator Conventions began with a round of lightning talks to bring together the history of standardization efforts as well as current interests and thoughts on security indicators.

The 8th International Symposium on Visualization for Cyber Security (VizSec2011) also takes place today, beginning with a keynote by Lorrie Faith Cranor, followed by a series of six technical papers.

And finally, there are two tutorials today: the morning tutorial, presented by Simson Garfinkel, on Working with Computer Forensics Data, and the afternoon tutorial, presented by Sonia Chiasson and Robert Biddle, on Experiment Design and Quantitative Methods for Usable Security Research.

Anyone with notes should feel free to post them here, tag pictures with soups11, or tweet with the hashtag #soups11. And we will be continuing to post to this space.

12-03-10

posted by kami

Do Not Track List in the News

Professor Lorrie Cranor comments on the FTC’s proposed ‘Do Not Track List.’

View video.

03-08-10

posted by kami

Architecture Is Policy: The Legal and Social Impact of Technical Design Decisions

Today several members of the Electronic Frontier Foundation (EFF) Board visited CMU and participated in a panel discussion on the effect technical design decisions can have on our society. Below is the abstract for the panel and a summary of the discussions, points, and comments.

Watch the entire panel on YouTube.

Abstract:

Technology design can maximize or decimate our basic rights to free speech, privacy, property ownership, and creative thought.  Board members of the Electronic Frontier Foundation (EFF) discuss some good and bad design decisions through the years and the societal impact of those decisions.

The panel opened with comments from each of the panelists.

Cindy Cohn
EFF Legal Director, Moderator

Cindy pointed to the discussions EFF is having with Google about privacy issues concerning Google Books. Public libraries have dealt with privacy issues for a long time; specifically, they are concerned with the government’s ability to get information on what content people read, and they take measures to completely remove information about which books are being checked out and by whom. New technologies like Google Books need to consider privacy issues now, while they are implementing the system, not just when problems come up.

Dave Farber
Distinguished Career Professor of Computer Science and Public Policy,
School of Computer Science, Carnegie Mellon University

The design of systems greatly constrains the type of security and privacy that can be built on them. However, security and privacy requirements can also greatly affect the design of a system if considered early enough in the design process. An example of this is the Multics system. The designers wanted an environment that protected privacy and security, but no hardware could support it. They worked out the security and privacy issues first, and eventually the code and the technology came together to produce an environment that could support them. Planning how to put security and privacy into a system before coding it creates a system capable of supporting security and privacy, instead of trying to retrofit a system that is inherently insecure.

In the early days of the internet many decisions were made without regard to policy or their impact on privacy and security. What would have happened if the creators of the internet had considered these policy issues?

Ed Felten

Professor of Computer Science and Public Affairs and Director, Center for Information Technology Policy, Princeton University

Ed talked about SSL certificates as an example of a technology where the developers were thinking about security from the beginning but still messed up. How do you know the webpage you are looking at really comes from who you think it does? The answer is SSL certificates. When your web browser contacts a secure (https) website, the website gives your browser a certificate. How does your web browser know that the certificate is valid? It verifies the certificate using a Certificate Authority’s (CA) public key. How do you know the CA’s key is valid? You would need a single Certificate Authority that everyone trusts to verify the entire transaction. Unfortunately, there is no such Certificate Authority. Instead, your web browser has a default list of 7200 trusted CAs that it believes to be legitimate. Any of these CAs can sign a certificate and your browser will simply trust it. Also, any one of these 7200 can delegate their “god-like abilities” to any other authority they choose. For example, the person in charge of the Chinese government CA can give anyone the ability to mount a man-in-the-middle attack on any secure communication. There is no doubt that people are doing creative things with this today. But fixing it is not easy.
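
A toy model of the trust chain Ed describes may help. Real validation checks X.509 cryptographic signatures; this sketch just matches names, and all the certificates here are made up.

    # Toy model of browser certificate-chain trust. Real validation verifies
    # cryptographic signatures; here a name match stands in for "signed by".
    from dataclasses import dataclass

    @dataclass
    class Certificate:
        subject: str
        issuer: str           # who vouched for this certificate
        can_sign_certs: bool  # True for CAs, including delegated intermediates

    def chain_is_trusted(chain, trusted_roots):
        """Walk from the site certificate up to a root the browser trusts."""
        for cert, issuer_cert in zip(chain, chain[1:]):
            if cert.issuer != issuer_cert.subject or not issuer_cert.can_sign_certs:
                return False
        return chain[-1].subject in trusted_roots

    site = Certificate("www.example.com", "Delegated Intermediate CA", False)
    intermediate = Certificate("Delegated Intermediate CA", "Root CA 1", True)
    root = Certificate("Root CA 1", "Root CA 1", True)

    # Accepted: any trusted root (or anyone a root has delegated to) can vouch
    # for any site, which is exactly the weakness discussed above.
    print(chain_is_trusted([site, intermediate, root], {"Root CA 1", "Root CA 2"}))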

Lorrie Cranor
Associate Professor of Computer Science and Engineering and Public Policy, and Director of the CyLab Usable Privacy and Security Laboratory (CUPS), Carnegie Mellon University

Lorrie started out with a discussion of P3P, which she helped develop. About 10 years ago, the goal was to define a standard way for companies to communicate their privacy policies using an XML-based format. The concept was that companies would post their privacy policies in machine-readable form and browsers would interpret them and automatically negotiate terms. People thought that P3P would also allow browsers to display privacy policies to end users in a standard way. The only major browser to implement P3P was Internet Explorer (IE), and it used P3P only as a cookie-blocking mechanism. Basically, if your site set third-party cookies and had no P3P policy or a bad P3P policy, IE would block your cookies. So all the companies that delivered third-party advertisements quickly adopted “good” privacy policies as defined by Microsoft. As a result, Microsoft set the standard for “good” privacy policies, for third-party advertisers, across the internet.
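
The cookie-blocking behavior can be sketched as follows. The P3P compact-policy header format (P3P: CP="...") is real, but the token set treated as “unsatisfactory” below is an illustrative assumption, not Microsoft’s actual evaluation rules.

    # Sketch of IE's use of P3P compact policies for cookie blocking.
    UNSATISFACTORY_TOKENS = {"SAM", "OTR"}  # hypothetical "bad practice" tokens

    def block_third_party_cookie(is_third_party, p3p_header):
        """Return True if the browser should reject the cookie."""
        if not is_third_party:
            return False  # first-party cookies pass through
        if p3p_header is None:
            return True   # no compact policy at all: block
        tokens = set(p3p_header.strip().strip('"').split())
        return bool(tokens & UNSATISFACTORY_TOKENS)

    # A third-party ad server with no P3P header gets its cookie blocked;
    # one declaring an acceptable compact policy does not.
    print(block_third_party_cookie(True, None))                 # True
    print(block_third_party_cookie(True, '"NOI DSP COR NID"'))  # False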

The CUPS Lab at CMU is trying to better understand the effects privacy policies can have on consumers’ behavior. To this end, we have built a site called PrivacyFinder, which lets people see Google or Yahoo! search results annotated with privacy policy information. Our studies have shown that people will purchase from websites that cost a little bit more if those sites have good privacy policies. Depending on how you set up the architecture of a system, you can affect how people react to it.

John Buckman
EFF Board Chair, Serial Entrepreneur

John chose to talk about three different technologies he has worked on that had interesting security and privacy issues.

Lyris was an email discussion server. The developers wanted to support the use of Lyris by a group of Chinese dissidents. To support this population, the developers created an anonymous-message feature, which was made available to all Lyris users. The Chinese dissidents didn’t use the feature, but schools did: teachers used it so that students could talk to each other without fear of retribution, and it allowed students in creative writing classes to freely discuss controversial writing.

Bookmooch is a book-swapping website. Privacy was an issue when the site was first designed: all the books you want to read, and all the books you have given away, are visible to everyone, but accounts are anonymous. The site uses the difference between privacy and anonymity to give people a strong trust network while still separating their online actions from their identity. The biggest user demographic is mothers, who appreciate both the detailed online accounts and the privacy, through anonymity, that the site gives them.

Magnatune is an internet record label. What should the rights be of the people who come and listen to the music? There is normally a contract when you open a CD or listen to music online, but if you violate the contract there are really no repercussions. Magnatune takes the position that if you have chosen to spend money on songs, you are probably an honest person, so there is a Creative Commons license on the music. Subscribers are asked not to share the music they purchase on BitTorrent. If subscribers want to share Magnatune music with friends, that is a good thing: Magnatune would rather you shared the music locally and got people interested in the site. This policy seems to be working; none of the paid albums turn up on BitTorrent.

Open for Questions

Question: How do we fix the issue with Certificate Authorities? (Directed at Ed)

Ed pointed out that people need to think about institutions and trust. You need to give someone the divine authority to approve Certificate Authorities, and you need more things done in the open instead of behind closed doors. If a Certificate Authority chooses to delegate all of its rights to another authority, that delegation should be forced into the open. You need to think about how the technology works together with organizations, and about how the public can meaningfully understand the technology.

Lorrie added that even if you can address the underlying trust-model issues, most user interfaces that show certificate issues are very poor. Even if we made them very trustworthy, the user just swats them away. What the user mostly cares about is whether anything has changed: for example, if I am going to eBay and suddenly eBay is being verified by someone else, that is a problem.

Ed added that the problem is that designers’ default solution is to ask the user or add another user preference. Neither is a real solution.

Question: Music used to work on an economy of scarcity. Now you can easily distribute music for nearly no cost. This is true for many electronic resources.

John pointed out that by suing the pirate economy, the RIAA has made the user interfaces for file-sharing technologies quite bad, while LastFM and Pandora have nice user experiences. An obvious way to combat music piracy is to come up with offerings that encourage legitimate usage. An older example is mobile phone bills: for a while, mobile phone bills would arrive scarily high, causing people to stop using mobile phones. The mobile phone industry fixed that and is now doing better. Music labels were working by suing offenders; now websites like Hulu are doing well.

Cindy added that an alternative way to think about the problem is as a licensing problem. The traditional licensing mechanism used by music distributors is to control all the copies of your product. Another solution is voluntary collective licensing. Many models already exist, and we need to consider existing models before we create new ones; controlling all copies is not the only way.

Question: A current issue is that there are platforms, such as the iPhone, where anyone can write apps. What kind of privacy and security risks could this bring about?

John pointed out that the landscape for privacy and security on these platforms covers a wide range: Apple tracks everything, while other platforms have different privacy and security controls.

Dave added that the answer depends on what these apps run on and what they enable or don’t enable. The problem is an issue of how constrained an app is. Apple has this issue with apps that do things like read the number of the phone they run on. If the underpinnings were built better, you could assume that apps were well behaved, but unfortunately this is not the case.

Ed added that mobile devices cause more issues: they are 1) always on, 2) easily lost or stolen, and 3) aware of where they are. All of these add extra privacy and security concerns.

Question: Multics was designed with security from the beginning and is no longer used. The Internet was not designed with security in mind and everyone uses it. Can we take a lesson from this?

Dave pointed out that Multics was successful when and where it was used. It was a success, but the architecture changed. The Internet is insecure and a success, but we are sitting on the edge of a cliff. What would it take to crash DNS? It is worrisome that we build more and more of our infrastructure on this thing and no one is sure how to secure it.

Question:  What are other strategies for having social interaction where anonymity may not be an option?

John commented that he actually likes Facebook’s model: to see anything you need an account, and to see information about anyone you need them to friend you.

Cindy pointed out that Facebook started out with a trust-based model: it was closed to a single university, and in order to see information you had to be friended by someone. Many of EFF’s concerns with Facebook are that it is shifting away from a trust-based model to a more Twitter-like open model.

Question: Are you doing anything with education? Are you working on any privacy and security curriculum?

Cindy started out by commenting that several of the board members are university professors, but creating educational curricula isn’t something EFF is working on much; EFF isn’t well positioned to do that. Instead, EFF is more interested in getting students involved, because today’s students are going to be the ones building these sorts of tools.

Question: I haven’t heard about de-anonymization yet today. Anonymity is good but what if the data becomes no longer anonymous? This would be scary in places like Bookmooch that are based on anonymity.

Cindy pointed out that EFF has been critical of Google over this. EFF has a site called Panopticlick, which shows visitors their “browser fingerprint,” or how unique their browser configuration is. In a closed community there are different concerns; in some communities there isn’t an issue, because members just don’t want names immediately attached to their online behavior. De-anonymization breaks a lot of things.

Lorrie commented on how important it is to be realistic when saying that things are anonymous. For example, Bookmooch users may have trouble identifying each other, but someone with the whole database might be able to do more.

John pointed out that Bookmooch is very transparent; this is obvious from the first setup. John struggles with this at Magnatune: at one point someone crawled the site and derived how much each artist was making from music sales, which made musicians very unhappy.

Dave spent a year in Washington DC working with policy makers. According to him, policy makers have minimal technological knowledge, but despite how little they know about technology, they are going to make the laws that decide the future of technology policy. The policy they establish impacts us continuously. Technologically skilled people would do themselves and the nation a service by spending a couple of years in Washington DC educating them.

Question: The earlier question about education was about certification; the real issue is a lack of educational materials, and EFF does have some leverage in that. The real problem is educating everyone else, not geeks like us. I would pick post-secondary education as the primary target.

Cindy pointed out that EFF did develop a curriculum for copyright law. California created a requirement that K-12 students be educated about copyright. Music and other organizations with interests in protecting their copyrighted works immediately created curricula that were somewhat biased and ignored important topics such as fair use, so EFF created a counter-curriculum. EFF isn’t quite sure how much the curriculum is used, but there is a network of teachers across the nation that seems to use it.

One of the most frequent complaints from kids is that they call and say “my mom read my Facebook page,” and we tell them “that is because you friended her.”

08-24-09

posted by kami

Welcome new CUPS students

Welcome to our new CUPS lab PhD students Rich Shay and Saranga Komanduri!