
07-20-11
VizSec2011: Malicious Activity on the Internet
Francesco Roveta, Luca Di Mario, Federico Maggi, Giorgio Caviglia, Stefano Zanero and Paolo Ciuccarelli, “BURN: Baring Unknown Rogue Networks”
The goal of this work is to expose malicious hosts.
The FIRE system focuses on the top four internet threats:
- Malware
- Botnets
- Phishing
- Spam
The authors focus on Autonomous Systems (AS) because targeting individual IPs is challenging.
The authors use data from Anubis, PhishTank, and SpamHaus and feed it into FIRE to quantify the amount of malicious activity that an AS is involved in. As a result of this project, many “shady” ISPs were reported to law enforcement, and some ISPs were notified and took action.
Exploring the data in FIRE is challenging. To solve this issue we created BURN (Baring Unknown Rogue Networks) to visualize the data.
BURN is targeted towards both researchers and end users.
BURN provides a Global view and an AS view. The Global view uses a range of well-thought-out graphical visualizations to show information worldwide, such as the size and current state of ASs. If the analyst is interested in a particular AS, they can open a detailed view, which offers several additional graphs showing different features of the data.
BURN is currently in private beta.
07-20-11
Survey and classification of potential security UX conventions
Rob Reeder
Senior Security Program Manager
Microsoft
Our story: we have been tasked with making our security advice and requirements more specific. For example, we get questions like:
- What icon should I use?
- How big should the icon be?
- Can you give me a generic sentence to insert?
So, we will assume these conventions are beneficial, and our next steps are to create these conventions.
In our search we discovered ANSI Z535.4-2007, a standard for more general safety and product warnings, and will be referring to it throughout the talk today.
What are the properties of a good (security) convention?
- Intuitive to users
- and/or easily taught
- e.g., larger text gets more attention, red frequently means danger
- Keith Lang’s talk on spiky buttons for dangerous actions
- Consistently applied
- Doesn’t interfere with other uses of technique
- e.g., bold font is used for other things
- Studied & tested
- Resistant to spoofing (this is obviously very difficult)
- Easy to implement
- Easy to localize
- any word that needs to be changed has to undergo a localization process, being translated into dozens of different languages
- could make translation tables to solve this
- Easy to enforce usage across company/industry
- easy to tell when it is being used correctly or incorrectly
- Portable to different devices
- Accessible
- [bonus] Already in use!
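The "translation tables" idea from the localization bullet above could look something like the following sketch: convention strings are keyed by an identifier, so changing a warning means updating one table entry per locale rather than re-reviewing free-form prose. All strings, keys, and locale codes here are made-up illustrations, not anything from the talk.

```python
# Hypothetical translation table for standardized security warning strings.
# Keys and locales are invented for illustration only.
WARNING_STRINGS = {
    "en-US": {"cert_invalid": "This site's security certificate is not trusted."},
    "de-DE": {"cert_invalid": "Dem Sicherheitszertifikat dieser Website wird nicht vertraut."},
}

def warning_text(key: str, locale: str, fallback: str = "en-US") -> str:
    """Look up a standardized warning string, falling back to English
    when the requested locale or key is missing."""
    table = WARNING_STRINGS.get(locale, WARNING_STRINGS[fallback])
    return table.get(key, WARNING_STRINGS[fallback][key])

print(warning_text("cert_invalid", "de-DE"))  # German entry
print(warning_text("cert_invalid", "fr-FR"))  # falls back to en-US
```

A table like this also makes the "easy to enforce" property more tractable: tooling can check that every product pulls warnings from the shared table instead of hard-coding its own wording.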
Final Thoughts: There are many challenges to establishing good conventions, including competitive advantage within industry, spoofing of elements, and gaining widespread adoption, but the ANSI standard, with its clear and useful guidelines, gives me hope.
07-20-11
VizSec 2011: Malware Images
Lakshmanan Nataraj, Karthikeyan Shanmugavadivel, Gregoire Jacob and B.S Manjunath, “Malware Images: Visualization and Automatic Classification”
The authors visualize the bytes of malware files to produce small visualizations. These visualizations let you get a high level sense of a file and what the different components are.
If you look at malware across variants they look visually similar while looking dissimilar to other malware families.
Once malware is converted to an image representation, that image can be used to characterize the malware. The authors used texture features of the kind normally used to distinguish landscapes and other natural images, and classified samples with k-nearest neighbors, using Euclidean distance to measure how similar two images are.
The authors took 2,000 malware samples from eight families, converted them to images, and extracted image-texture features. With this approach they were able to get around 98% classification accuracy.
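The pipeline described above might be sketched roughly as follows. This is a simplified stand-in, not the authors' implementation: it uses a plain byte-intensity histogram in place of the GIST texture descriptors from the paper, synthetic byte strings in place of real malware, and an arbitrary k=3.

```python
import numpy as np

def bytes_to_image(data: bytes, width: int = 64) -> np.ndarray:
    """Interpret a file's raw bytes as a fixed-width grayscale image."""
    arr = np.frombuffer(data, dtype=np.uint8)
    rows = len(arr) // width
    return arr[: rows * width].reshape(rows, width)

def texture_features(img: np.ndarray) -> np.ndarray:
    """Crude stand-in for the paper's GIST features:
    a normalized intensity histogram of the image."""
    hist, _ = np.histogram(img, bins=32, range=(0, 256))
    return hist / hist.sum()

def knn_classify(query, features, labels, k: int = 3) -> str:
    """k-nearest-neighbor majority vote using Euclidean distance,
    the distance measure used in the paper."""
    dists = np.linalg.norm(features - query, axis=1)
    nearest = [labels[i] for i in np.argsort(dists)[:k]]
    return max(set(nearest), key=nearest.count)

# Synthetic demo: two "families" with disjoint byte-value ranges.
rng = np.random.default_rng(0)
fam_a = [rng.integers(0, 128, 4096, dtype=np.uint8).tobytes() for _ in range(5)]
fam_b = [rng.integers(128, 256, 4096, dtype=np.uint8).tobytes() for _ in range(5)]
labels = ["A"] * 5 + ["B"] * 5
feats = np.array([texture_features(bytes_to_image(s)) for s in fam_a + fam_b])

query = texture_features(
    bytes_to_image(rng.integers(0, 128, 4096, dtype=np.uint8).tobytes()))
print(knn_classify(query, feats, labels))  # prints "A": query matches family A's byte range
```

The appeal of the approach is visible even in this toy version: feature extraction and classification are a few array operations, with no execution or disassembly of the sample.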
What about packing?
Images after packing look completely different from the unpacked executable.
Common wisdom says that everything packed by the same packer should look the same and not like the original. The authors tried packing each malware with each of three packers. Even after packing the authors were able to identify family groups with high accuracy.
They then used 25k malware samples from the Anubis and VX Heavens datasets, labeled them using Microsoft Security Essentials, and kept the top 100 families. Accuracy remained high.
Tried 64k malware with 531 families and still got high accuracy.
The biggest advantage of image-based malware analysis is speed: classifying a sample takes only about 50 ms. It also doesn’t require execution or disassembly.
One limitation of this work is that it is data-driven, so it doesn’t handle zero-day attacks well. Also, the characterization groups samples based on image appearance, not on actual functionality.
Questions
Q1: How do forensic malware analysts see using this work? What about the low-accuracy cases?
A1: Low accuracy could be countered by using more AV labeling.
Q2: What you are doing is visualizing signatures. Will this work on polymorphic malware? Is this different enough from existing software, since it is only classifying known malware not separating it from good code.
A2: We did try adding in a set of non-executable default Windows files as an extra “family” and were able to tell the difference between this “family” and the others.
07-20-11
SOUPS 2011 Begins!
SOUPS 2011 has gotten underway this morning with a workshop, a symposium, and two tutorials.
The Workshop on Usable Security Indicator Conventions began with a round of lightning talks to bring together the history of standardization efforts as well as current interests and thoughts on security indicators.
The 8th International Symposium on Visualization for Cyber Security (VizSec2011) also takes place today, beginning with a keynote by Lorrie Faith Cranor, followed by a series of six technical papers.
And finally, there are two tutorials today: the morning tutorial, presented by Simson Garfinkel, on Working with Computer Forensics Data, and the afternoon tutorial, presented by Sonia Chiasson and Robert Biddle, on Experiment Design and Quantitative Methods for Usable Security Research.
Anyone with notes should feel free to post them here, tag pictures with soups11, or tweet with the hashtag #soups11. And we will be continuing to post to this space.