Friday, August 12, 2011

USENIX: Applied Cryptography, Refereed Papers

Differential Privacy Under Fire
Andreas Haeberlen, Benjamin C. Pierce, and Arjun Narayan, University of Pennsylvania.

There is a lot of sensitive data out there that needs protecting. For example, Netflix knows what movies you watch. Users rate movies on Netflix so that Netflix can make recommendations, but they don't necessarily want to share that information with the rest of the world. Simply replacing people's real names with pseudonyms is not enough: if people know enough about you, they can still identify you in the available data and learn even more about you.

Even with protections, attackers can mount timing attacks: just from how long the system takes to reply to a query, they can infer that particular data must be in there.

So, how can we avoid leaking information via query completion time? Their suggestion is to make timing predictable - regardless of how long the query takes, always return after a constant time. That may mean padding with a delay, or aborting part of the query and returning an error.

Aborting the query could actually change the result, but the researchers say that's okay, because the default values will be set to what was expected if the lookup had completed (in this case 1, for true).
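
To make the pad-or-abort idea concrete, here's a little Python sketch of my own - not the authors' Fuzz implementation - with a made-up one-second deadline and the default value from their example:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

DEADLINE = 1.0   # fixed response time in seconds (made up for this sketch)
DEFAULT = 1      # pre-agreed default returned if the query is aborted

_pool = ThreadPoolExecutor(max_workers=4)

def answer_with_constant_time(query_fn, *args):
    """Always reply after DEADLINE seconds: pad fast queries with a
    delay, abort slow ones and return the pre-agreed default, so the
    completion time reveals nothing about the data."""
    start = time.monotonic()
    future = _pool.submit(query_fn, *args)
    try:
        result = future.result(timeout=DEADLINE)
    except FutureTimeout:
        future.cancel()        # abort, best effort; the answer becomes the default
        result = DEFAULT
    remaining = DEADLINE - (time.monotonic() - start)
    if remaining > 0:
        time.sleep(remaining)  # pad so the total time is constant
    return result
```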

Their proposed system, Fuzz, pads the time this way, which sounds like it will solve the timing attack, but may make your transactions unacceptably slow, in my opinion.

The audio and video of this presentation are now online.

Outsourcing the Decryption of ABE Ciphertexts
Matthew Green and Susan Hohenberger, Johns Hopkins University; Brent Waters, University of Texas at Austin. Presented by Matt Green.

The researchers have been working on protecting medical records. By using cryptographic control on the records, you can encrypt the record for all valid participants, but that is not very flexible - what if you add, or remove, relevant people?

Attribute-based encryption (ABE) is a little more general. For example, you can encrypt data that can be read by "Cardiologist at Johns Hopkins", so if your cardiologist changes, your new doctor can still access your medical record.

The main problem is that the more complex the policy, the larger the ciphertext grows, as does the decryption time. For example, doing a decrypt on a smartphone could take up to 30 seconds - too long for practical use, particularly if you were a doctor who had to do these decrypts all day long.

The naive approach is to leverage the cloud to assist with the decryption, but then you really need to trust your cloud - there are just too many vectors for attack.

Their approach is to have *two* keys - a transform key (TK) and a secret key (SK). The transform key, which can live in the cloud, can't fully decrypt the ciphertext by itself. The cloud partially decrypts the data, and the SK on the phone completes the job.
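
The actual construction is pairing-based ABE, which I won't attempt here, but the key-splitting flavor is easy to show with toy ElGamal: the cloud gets one share of the key and produces a partial decryption that is useless on its own, and the device finishes it with its share. (My own sketch with insecure toy parameters; in the real scheme the cloud's half does the expensive pairing work and the device's step is a single cheap operation, whereas in this toy both halves cost the same.)

```python
import random

# Toy parameters - far too small and structured to be secure.
p = 2**61 - 1                    # a Mersenne prime
g = 3

x = random.randrange(2, p - 1)   # the full decryption key
h = pow(g, x, p)                 # public key

# Split the key: the cloud holds x1 (the "transform key" of this toy),
# the device keeps only x2.
x1 = random.randrange(1, p - 1)
x2 = (x - x1) % (p - 1)

def encrypt(m):
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (m * pow(h, r, p)) % p

def cloud_transform(c1):
    # Heavy lifting in the cloud; x1 alone cannot recover the message.
    return pow(c1, x1, p)

def device_finish(c1, c2, partial):
    s = (partial * pow(c1, x2, p)) % p   # s = c1^x
    return (c2 * pow(s, -1, p)) % p

c1, c2 = encrypt(42)
assert device_finish(c1, c2, cloud_transform(c1)) == 42
```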

The researchers found that by doing this transform, which allows external assist, the decrypt time on their iPhone went from 28 seconds to under 2 seconds.

This same research can be applied to smartcards, which are very slow little chips.

The audio and video of this presentation are now online.

Faster Secure Two-Party Computation Using Garbled Circuits
Yan Huang and David Evans, University of Virginia; Jonathan Katz, University of Maryland; Lior Malka, Intel. Presented by Yan Huang.

The researchers are trying to implement a system for secure two-party computation using garbled circuits that is much more scalable and significantly faster than prior work.

This is based on prior work by Andrew Yao from the 1980s. While the theory of garbled circuits has been around for a long time, prior implementations were too slow to be used in practice. The researchers pipelined the generation and evaluation of the garbled circuits, so the two phases run concurrently and the whole circuit never has to be held in memory, which greatly speeds up processing.
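
To see what a garbled circuit even is, here's a toy single AND gate in Python - my own illustration, not the authors' framework. The generator picks two random labels per wire and encrypts the output labels so that the evaluator, holding exactly one label per input wire, can open exactly one row; streaming such gates to the evaluator one at a time is the essence of the pipelining.

```python
import os, hashlib, random

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def H(ka, kb, gid):
    # 24 bytes of key stream: 16 for the output label, 8 for a zero tag
    return hashlib.sha256(ka + kb + gid.to_bytes(4, "big")).digest()[:24]

def garble_and_gate(gid, a, b):
    """a and b map bit -> random 16-byte wire label. Returns the output
    wire's labels plus four shuffled rows; each row decrypts correctly
    under exactly one pair of input labels."""
    c = {0: os.urandom(16), 1: os.urandom(16)}
    rows = [xor(H(a[x], b[y], gid), c[x & y] + b"\x00" * 8)
            for x in (0, 1) for y in (0, 1)]
    random.shuffle(rows)
    return c, rows

def eval_gate(gid, la, lb, rows):
    """The evaluator holds one label per input wire and learns exactly
    one output label - the zero tag marks the row it can open."""
    for row in rows:
        t = xor(H(la, lb, gid), row)
        if t[16:] == b"\x00" * 8:
            return t[:16]
    raise ValueError("no row decrypted")

a = {0: os.urandom(16), 1: os.urandom(16)}
b = {0: os.urandom(16), 1: os.urandom(16)}
c, rows = garble_and_gate(0, a, b)
# (Evaluating twice is only for demonstration; in the protocol the
# evaluator only ever sees a single label per wire.)
assert eval_gate(0, a[1], b[1], rows) == c[1]   # 1 AND 1 = 1
assert eval_gate(0, a[1], b[0], rows) == c[0]   # 1 AND 0 = 0
```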

Their framework doesn't require expert knowledge of cryptography, but users will need to understand the basics of boolean circuits. You can learn more and try out their Android app at their website, mightbeevil.com.

The audio and video of this presentation are now online.

This article is syndicated from Thoughts on security, beer, theater and biking!

USENIX: Pico: No More Passwords!

Frank Stajano, from the University of Cambridge, talked about the growing password problem. Many years ago, when we all had only one or two passwords to remember, memorizing one or two simple 8-character passwords was very easy to do.

Nowadays, we each have 20-30 (or more?) accounts, all with different password policies, and we just can't memorize them all - and the things we come up with that we believe have high entropy are actually very easily cracked, as illustrated by this recent xkcd.

The little shortcuts we take, like reusing our "good" passwords, mean that once a password is compromised on one site (through no fault of the user), the attacker has access to many more sites. This was demonstrated recently with the Sony password leaks.

Because we forget passwords, all websites have a method for recovering your password - which can be attacked.

Stajano says that passwords are both unusable and insecure, so why on earth are we still using them?

Perhaps we can start over? Let's get rid of passwords! That's where Pico comes in. The goals of Pico are:
  • no more passwords, passphrases or PINs
  • scalable to thousands of vendors
  • no less secure than passwords (trivial)
  • usability benefits
  • security benefits
He wants to make sure we stay away from trying to make the user remember things, so that eliminates things like remembering pictures, shapes, etc.

Other requirements for Pico are that it must be scalable, secure, loss-resistant, theft-resistant, and work for everyone from anywhere, with no searching, no typing, and continuous authentication.

Pico would have a camera, display, pairing button and main button, as well as a radio to communicate. The device could look like a smartphone, a keyfob, a watch, etc., but it is a dedicated device. It shouldn't be an app on a multipurpose device, like an actual smartphone, as it would then be opened up to too many forms of attack.

The camera would use a visual code to know what it is trying to authenticate to. The radio would be used to communicate with the computer over an encrypted channel. The main button is used to authenticate, and the pairing button is used to initialize an authentication pairing. Obviously, this type of system would not just be an extension of existing systems, but would require hardware extensions.

Pico would initialize by scanning the application's visual code, getting the application's full key via radio, checking it against the visual code, and storing it. Pico then responds with an ephemeral public key and challenges the application to prove ownership of its secret key. Once those challenges are passed, Pico generates its own keypair for that application and shares the long-term public key with the application. The application stores that key and will recognize your Pico the next time you try to connect.

While you're connected to the application, your Pico would be continually talking to the application, via the radio interface.

Of course, simply having the Pico cannot be enough - otherwise someone could take your Pico and impersonate you. This is where the concept of "picosiblings" comes into play. Picosiblings would be things like a watch, belt, ring, cellphone, etc (things you often have with you), and the device would only work with those things nearby. [VAF: Personally, I'd hate to think I wouldn't be able to get money out of the ATM simply because I'd forgotten to wear my Java ring that day].

If you lose your Pico, you'd need to use some of your picosiblings to regenerate it - so don't lose all of your picosiblings as well! It seems that you want to have enough picosiblings, but not too many. I'm not sure how you determine that correct level :)

Pico access can't be tortured out of you, as it can't be unlocked by anything that you know (there's no PIN or password).

"Optimization is the process of taking something that works and replacing it with something that almost works, but costs less." - Roger Needham

With that in mind, Stajano notes that if he actually wants people to adopt this, he would likely need to think of a smart phone client.

There were a lot of interesting ideas in this talk, but the thought of carrying around yet another device is not appealing, and the burden of replacement and function (with all the picosiblings) makes this seem untenable to me - but, if it gets people thinking, then it's definitely a step in the right direction!

The audio and video of this presentation are now online.

This article is syndicated from Thoughts on security, beer, theater and biking!

USENIX: Dealing with Malware and Bots, Refereed Papers

Detecting Malware Domains at the Upper DNS Hierarchy
Manos Antonakakis, Damballa Inc. and Georgia Institute of Technology; Roberto Perdisci, University of Georgia; Wenke Lee, Georgia Institute of Technology; Nikolaos Vasiloglou II, Damballa Inc.; David Dagon, Georgia Institute of Technology. Presented by Manos Antonakakis.

The motivation is that IP-based blocking techniques cannot keep up with the number of IP addresses that the C&C domains use, and there is a time gap between the day malware is released and the day the security community analyzes it. Their new tool, Kopis, analyzes large volumes of DNS messages at authoritative name servers (AuthNS) or TLD [top-level domain] servers to detect malware-related domain names.

Kopis asks the question: who is looking up what and where is it pointing?

The research focused on "interesting domain names" - those with the most lookup requester diversity, and those whose resolvers sit in networks that have historically been compromised.
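
As a toy illustration of the "who is looking up what" idea, here's a sketch of mine (not Kopis itself) of one such feature - requester diversity - using /16 prefixes as a crude stand-in for the network/ASN lookups a real system would do:

```python
from collections import defaultdict

def requester_diversity(dns_log):
    """dns_log: iterable of (requester_ip, queried_domain) pairs as seen
    at an authoritative or TLD server. Returns, per domain, how many
    distinct networks are asking about it - malware domains tend to be
    looked up from far more networks than ordinary ones."""
    nets = defaultdict(set)
    for ip, domain in dns_log:
        prefix = ".".join(ip.split(".")[:2])   # crude /16 bucket
        nets[domain].add(prefix)
    return {d: len(p) for d, p in nets.items()}

log = [("10.1.2.3", "evil.example"), ("10.9.8.7", "evil.example"),
       ("172.16.0.5", "evil.example"), ("10.1.2.3", "benign.example")]
print(requester_diversity(log))   # evil.example shows higher diversity
```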

The researchers also looked at the rise of IMDDOS. The first big infection happened in China, and it took 15-20 days before the US and Europe were infected.

Kopis can be used to detect phishing campaigns by identifying malware-related domains, before a related hash for the attack is identified. You can protect your network before it's infected.

The audio and video of this presentation are now online.

BOTMAGNIFIER: Locating Spambots on the Internet
Gianluca Stringhini, University of California, Santa Barbara; Thorsten Holz, Ruhr-University Bochum; Brett Stone-Gross, Christopher Kruegel, and Giovanni Vigna, University of California, Santa Barbara. Presented by Gianluca Stringhini.

Spam is getting sneakier and sneakier, coming up with subjects and senders that seem relevant to you, which gets it through filters and gets you to open the mail. It's hard to track spambots, as IP addresses of infected machines change frequently and new members can be recruited quickly.

They've been able to find other members of a botnet by assuming that all members will behave in a similar fashion (i.e., similar sending frequency and targets). Additionally, they used a spam trap to populate seed pools (sets of IP addresses that participated in a specific spam campaign) and logs from a Spamhaus mirror to find known spammers.

In order to get this right and avoid false positives, they need at least 1,000 IP addresses in a seed pool. They came up with a neat equation for the threshold that decides whether a host really belongs to a campaign, and attempted to label which spam was coming from which botnets.
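
The actual equation is in the paper; here's a rough sketch of mine showing only the flavor of the magnification step, with a made-up threshold:

```python
def magnify(seed_targets, candidate_targets, threshold=0.8):
    """seed_targets: set of destinations spammed by the known bots of a
    campaign; candidate_targets: IP -> set of destinations that IP was
    seen spamming. Flag a candidate as a likely member of the same
    botnet if most of its destinations overlap the seed pool's. The
    threshold here is made up; the paper derives it from the data."""
    flagged = set()
    for ip, targets in candidate_targets.items():
        if targets and len(targets & seed_targets) / len(targets) >= threshold:
            flagged.add(ip)
    return flagged
```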

When they ran their software between September 28, 2010 and February 5, 2011, they tracked 2,031,110 bot IP addresses! The hope is that this software can help to improve existing blacklists.

The audio and video of this presentation are now online.

JACKSTRAWS: Picking Command and Control Connections from Bot Traffic

Gregoire Jacob, University of California, Santa Barbara; Ralf Hund, Ruhr-University Bochum; Christopher Kruegel, University of California, Santa Barbara; Thorsten Holz, Ruhr-University Bochum. Presented by Gregoire Jacob.

Current detection techniques fall into two categories: host-based and network-based. In order to automate detection, you need to be able to examine clean command and control (C&C) traces, but this can be hard, as they are often encrypted.

Jackstraws uses a combination of network traces and host-based activity and applies machine learning to identify and generalize C&C-related host activity. It achieves the latter by mining significant activities and identifying similar activity types.

All of this data is fed into Jackstraws so it can generate templates for matching other botnets. With lots of interesting graphs, they can now distinguish C&C traffic from noise.

The audio and video of this presentation are now online.

This article is syndicated from Thoughts on security, beer, theater and biking!

USENIX: The (Decentralized) SSL Observatory

Peter Eckersley, Senior Staff Technologist for the Electronic Frontier Foundation, and Jesse Burns, Founding Partner, iSEC Partners, started with the well-known crypto stipulation that your encryption is only as good as your trust anchor. Knowing that, they wanted to see how secure the X.509 certificates in the wild are, so they started scanning port 443 on IPv4 servers the world over to collect certificates.

They have created an Observatory browser extension for Firefox that collects the certificate chain, destination domain, approximate timestamp, and optionally the ASN and server IP. Installing it helps the researchers gather more information, and can also help you identify whether you've got a bogus certificate in your browser.

Certificate Authorities have a hard job (verifying server identities) with strange incentives (they get paid for each certificate they issue). In 2009 there were three major vulnerabilities due to CA mistakes and in 2010, EFF discovered some evidence that governments were compelling CAs to put in back doors for them. On top of all that, there are a lot of certificate authorities out there. All of these things were daunting to the researchers as they started their project.

The technology this is all based on, X.509, was designed in the 1980s, before TLS/SSL or even HTTP! In their research, they discovered 10,320 kinds of X.509 certificates in the wild; of those, only about 1,300 were *valid* (according to SSL).

They found 16.2 million IPs were listening on port 443, and 11 million responded to their SSL handshake.

Typical browsers trust about 1500 CAs. Can that really be a good thing?

These CAs are located in about 52 different countries. They found many certificates that are valid but don't actually identify anyone in particular: localhost, exchange, Mail and private IP addresses [RFC 1918]. What's the point of having a CA verify your identity, if you aren't really providing an actual identity?

They tried to use their browsers to check certificate validity, but had a hard time, because Firefox and IE cache intermediate CAs. This means that some certificates are considered valid only sometimes (depending on where you've been before with your browser). Clearly, that shouldn't be - a certificate should either be valid, or not.

Even when problems are found and the CAs are aware, revocation of problematic certificates is difficult or impossible, as many browsers and other software don't check revocation data. They found nearly 2 million revocations, 4 dated in the future and 2 from the 1970s (before this technology existed).

They found a few subordinate CAs that claim to be from the country "ww" (which doesn't exist), with organization "global" and a bunch of other bogus information - certificates that were irrevocable, with CPS links pointing to dead websites.

So, what can we do? Consensus measurement, more vigilant auditing, DNSSEC + DANE, or certificate pinning via HTTPS headers.

Consensus measurement is where you look for sites to all agree that a certificate is valid, but false warnings can happen when sites swap certificates for testing purposes or for other unknown reasons. Users are already "trained" to ignore warnings if they get too many false positives, so this approach would be problematic.

Certificate pinning relies on the idea that whoever used to be domain.com should stay domain.com, which works great if it is implemented correctly. The right way to do this is to create a private CA just for the domain and use it in parallel to PKIX. Done correctly, this can protect you against compromise and malice, though users would still be vulnerable on first connection.
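
The pin check itself is simple; here's a rough Python sketch of mine (the pin set is a placeholder; a fuller implementation would pin the SPKI of any certificate in the chain, as the HTTPS-header proposal mentioned above does):

```python
import hashlib, socket, ssl
from base64 import b64encode

# Hypothetical pre-loaded pin set for a domain (the value below is just
# a placeholder: the SHA-256 of the empty string).
PINS = {"example.com": {"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU="}}

def pin_of(der_bytes):
    return b64encode(hashlib.sha256(der_bytes).digest()).decode()

def check_pin(host, port=443):
    """Connect, then require the presented leaf certificate to match a
    stored pin for this host."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return pin_of(der) in PINS.get(host, set())
```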

This article is syndicated from Thoughts on security, beer, theater and biking!

Thursday, August 11, 2011

USENIX: Deport on Arrival: Adventures in Technology, Politics, and Power

J. Alex Halderman, Assistant Professor, Computer Science and Engineering at the University of Michigan, started the talk out with a look back at his family history. Apparently his great-grandfather was the illegitimate child of a noble and an artist, and his grandmother was a spy. As a grad student, back in 2003, he was working on DRM technologies (at the time made to protect CDs - remember those?).

These early copy protections could be easily overridden with felt-tip markers or by pressing the shift key while inserting a disk. Halderman wrote about this online, and was quickly threatened with DMCA lawsuits by the company that had created the DRM technology (their slogan was "Light years beyond encryption").

The next round of DRM technology would install software onto your computer to prevent you from copying CDs - in the form of a rootkit that munged your registry. Not only was that software doing things that weren't disclosed, but it also introduced privilege-escalation bugs, and if you did uninstall the software, it would leave a remotely exploitable vulnerability on your desktop.

"Most people, I think, don't even know what a rootkit is, so why should they care about it?" - Thomas Hess, Sony.

By publishing these issues, Halderman caused Sony to recall millions of CDs over the holiday season and brought government oversight into the industry. To the best of his knowledge, attempts at putting DRM software onto CDs have been dropped by the industry. [VAF: though I have seen copy protection recently on CDs I've purchased, at least labeled as such.]

Since then, Halderman has been focusing on voting machines, all the way back to the old machines with the big pull levers. In that time, most of the requirements around "robustness" had to do with machines working in hot or cold weather and not losing data if they were dropped.

After the 2000 election debacle, many electronic voting machines were rushed to market without adequate testing and without third-party security review. Diebold accidentally put its code up online, and people found many mistakes quite quickly. Diebold claimed the software was out of date and threatened to sue many of the people who had found issues.

In 2008, Halderman and two other researchers finally got their hands on an actual Diebold Accuvote machine, which he acquired from a man in Times Square wearing a trench coat in an alley.... really.

Realizing how litigious Diebold was, the researchers performed their experiments on the machine in a room (missing from the building blueprints) in the basement of their building.

They were able to discover a method to set the percentage of votes they wanted one candidate to get at the end of the voting period, all the while, the paper tape was printing the correct numbers for those voting.

Another method of attack could be done with just 30 seconds of access to the machine with a memory card that would overwrite the voting machine's memory.

Finally, they were able to come up with a voting machine virus that would self-propagate to every voting machine.

Despite these findings, these machines are still used statewide in at least Maryland.

Diebold argued that the box had security in the form of a lock, but the researchers found you could pick the lock with a lock pick set in 10-15 seconds, a little longer with a paper clip. But, why bother? All boxes had the same key, and that same key was also used on minibars and jukeboxes - readily available for purchase on the Internet.

Debra Bowen, Secretary of State in California, took this research to heart, began a full audit of all of California's voting machines, and demanded that e-voting machine manufacturers provide source code for analysis. The California review found that it wasn't just Diebold that had issues, but all manufacturers of electronic voting machines.

Halderman and other researchers were able to obtain voting machines for next to nothing at various government surplus sales. In one case, they thought: why bother doing this again? We know the box will be insecure. So, instead, they got the voting machine to boot Linux, start X, and run a Pac-Man emulator.... :-)

As states can't seem to find enough bugs in physical electronic voting machines, places like Washington, D.C. wanted to try Internet voting last year. Luckily for Halderman and his grad students, D.C. put the system online a few weeks in advance of voting to allow people to attempt to attack the system.

The students discovered the router passwords were "cisco123" and that there was a publicly accessible webcam in the server rooms. By watching the server rooms for a few days, they knew the schedule of the admin (shown in the talk picking his nose) and when security went home. So, they could launch their attack after 5PM.

They were able to put in false ballots *and* get the system to send them copies of other people's votes. The ballots were encrypted on the server, but the temporary copies of the ballots were not...

Halderman and his researchers did not let the D.C. folks know that they were actively attacking, but wanted to see how long it would take them to notice. They modified the "Thank You for Voting" page to play Michigan's fight song after every vote. It took two days for anyone to discover this, and only because another tester complained to the authorities that he didn't like the new music they'd put on the page - it was annoying.

That still may not have been enough to stop them from deploying. It was also discovered that one of their internal testers wanted to make sure the system wouldn't crash if someone uploaded a very large PDF file, so he uploaded the biggest file he could find... which happened to be the real voter credentials for the election. So, the e-voting was called off... for last year. Wonder what 2011 will hold?

Halderman broke from election talk to tell us about his recent adventures in airports, including filming TSA agents (who don't like to be filmed patting people down, because they feel their privacy is being violated) and wandering around parts of airports that were meant to be secured, but weren't (doors unlocked and security guards were asleep).

Halderman and another researcher went to India to study their electronic voting machines, which previously had not been evaluated by independent researchers. They were able to get their hands on some actual voting systems, and did find that the software was hard-coded into the hardware during manufacturing. So, they attacked the LED display that shows how many votes each candidate got, building a lookalike board with chips hidden under the LEDs and a Bluetooth transmitter, so you could remotely stack the votes.

The person in India, Hari, who had helped them get access to the voting machine was taken into custody by police a short time later. Fortunately, all ended well for Hari, but it must have been a terrible time while he was in custody. This, of course, led to Halderman being denied future access to India, which he discovered the next time he traveled there.

This was a very entertaining talk, done mostly with pictures, yet it was still very easy to follow. A delight! Once this talk is posted online, definitely check it out!

The audio and video of this presentation are now online.

This article is syndicated from Thoughts on security, beer, theater and biking!

USENIX: Privacy in the Age of Augmented Reality

Presented by Alessandro Acquisti, Associate Professor of Information Technology and Public Policy at Heinz College, Carnegie Mellon University.

Acquisti asks what are the trade-offs associated with protecting and sharing personal information? How, rationally, do we calculate the risk and benefits?

You can look at it from an economics point of view. Acquisti starts with an example from a paper called Guns, Privacy and Crime, analyzing what happened when the state of Tennessee released the names and zip codes of all people with handgun carry permits. The NRA was outraged, as were privacy experts, saying this information would put these people at more risk of crime - newspapers believed the opposite. Acquisti and his colleagues studied this and found a direct relationship: crime went *up* in areas with low gun ownership. Obviously, the criminals knew the risk to themselves was lower in those neighborhoods. I'm sure that's not what the state of Tennessee was going for.

The conundrum here, of course, is that different people value their privacy at different levels. He asks us to consider: "willingness to accept (WTA) money to give away information" vs. "willingness to pay (WTP) money to protect information." In theory, they should be the same, but in practice people show a much higher WTA than WTP.

Acquisti and his colleagues did an experiment at a local shopping mall where they gave survey participants gift cards as a reward. One group received a $10 gift card whose purchases would not be traced; the other group was given a $12 card whose transactions would be tracked and linked to their names. Both groups were then given the option to swap.

So, while both groups were actually being given the same choice, it was psychologically framed differently. People who were originally given the $12 card very rarely wanted to give up the extra $2 to get their privacy back, while those who started with the $10 card wanted to keep it. If you have less privacy, you value privacy less. McNealy's famous quote, "You have zero privacy anyway. Get over it," came up.

Another area they were curious about: is the Internet really the end of forgetting? That is, memories fade, but Facebook doesn't. I've said this over & over again to teenagers: "The Internet is forever." What the researchers wanted to see was whether people would discount information if it was old. Their hypothesis was that bad information would be discounted more slowly than good information. For example, if you last received an award 10 years ago, people may say, "Yeah, but what have they done lately," compared to being caught drunk driving, for which you may never be forgiven.

The researchers did three experiments: the dictator game (with real money), the company experiment (judging a real company, but no real money involved), and the wallet experiment (where subjects read about someone doing something either good or bad with a wallet and then judge him).

In the wallet experiment, even though all of this information was fresh in the subjects' minds, they found that saying Mr. A did something positive with a found wallet 5 years ago (returning the cash) did not impact people's feelings about Mr. A, whereas if he had done it recently, they had a more positive view of him. But if he did something negative (like keeping the cash), it didn't matter whether it happened last year or 5 years ago - people did not like this Mr. A.

The lesson here is to be careful about letting negative information about yourself get onto the Internet, as people will not forgive your past indiscretions. The speaker gave specific examples of the Facebook meme where young women post pictures of themselves when they are out-of-control drunk and passed out or worse. Even as they grow up and mature, they will not be forgiven for those past indiscretions.

And, with computer facial tagging getting better and better, even untagging yourself won't prevent you from being recognized.

The researchers studied public Facebook profile pictures along with their IDs and compared them to publicly known pictures of those people to see whether people use their real picture - about 85% of the images were accurate. This could be further leveraged to see if people are using their own real pictures on dating sites :)

What this means, is that even if you change your name, you still won't be able to escape your face (well, not without significant cost and potentially negative consequences).

The better and faster that facial recognition software gets, the less privacy we will have in public. Someone you just met could look you up by your face and learn all sorts of information about you. Scary!

The audio and video of this presentation are now online.

This article is syndicated from Thoughts on security, beer, theater and biking!

USENIX: Securing Search, Refereed Papers

Measuring and Analyzing Search-Redirection Attacks in the Illicit Online Prescription Drug Trade

Nektarios Leontiadis, Carnegie Mellon University; Tyler Moore, Harvard University; Nicolas Christin, Carnegie Mellon University. Presented by Nektarios Leontiadis.

The researchers chose to focus on illegal sales of prescription drugs, as it's the most dangerous form of online crime - if someone takes the wrong dosage of a drug, or gets a counterfeit drug, they can die.

This type of spam takes advantage of the trust people have in someone's blog or other social site by exploiting search results. The search results in a browser show what look like valid links, but they will redirect you to an online pharmacy - the researchers call these infected links.

The researchers collected a lot of search results by querying various drug-related topics (from "cialis with no prescription" to "ambien overdose"), and the results included many infected servers (like umass.edu) as well as the online pharmacies themselves.
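
Detecting an infected link mechanically boils down to following the redirect chain; here's a rough sketch of mine (the keyword list is made up, and the Referer header matters because these attacks often redirect only visitors who appear to arrive from a search engine):

```python
import requests
from urllib.parse import urlparse

# Made-up list of terms suggesting an illicit pharmacy landing page.
PHARMA_HINTS = ("pharm", "pill", "rx", "cialis", "viagra")

def is_infected_link(url):
    """Follow the redirect chain from a search result and flag the link
    if we land on a different domain that looks like a pharmacy."""
    headers = {"Referer": "https://www.google.com/"}
    resp = requests.get(url, headers=headers, timeout=10,
                        allow_redirects=True)
    start = urlparse(url).hostname or ""
    final = urlparse(resp.url).hostname or ""
    crossed = final != start            # naive cross-domain test
    return crossed and any(h in final for h in PHARMA_HINTS)
```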

These infected sites and illegitimate blog comments are crowding out legitimate online health resources. .edu domains and high-ranking sites are particularly at risk, and infections seem to last longer on .edu sites.

The problem is that these sites are actually getting a very high conversion rate (i.e., the share of click-throughs where people actually make purchases).

The researchers see three possible solutions: fixing the prominent infected sites (not too hard, as there are only a handful), teaching search engines to recognize these attacks, and trying to stop the illegitimate redirection.

The audio and video of this presentation are now online.

deSEO: Combating Search-Result Poisoning

John P. John, University of Washington; Fang Yu and Yinglian Xie, MSR Silicon Valley; Arvind Krishnamurthy, University of Washington; Martín Abadi, MSR Silicon Valley. Presented by John P. John.

John showed us the malware pipeline: find vulnerable servers -> compromise webservers and host malicious content -> spread malicious links via email, IM, and search results -> bad stuff happens.

Their research focused on the spread of the malicious links. Nearly 40% of popular searches contain at least one malicious link in the top results. Instead of getting the content you want, you get "scareware" that tells you your PC is infected and you need to install software to fix it. As if that's not bad enough, give it a few weeks or months and it will ask you to pay $50 to keep "protecting" your PC, even though it is actually malware.

Sites running osCommerce are particularly at risk, as it is a popular piece of shopping-cart software with many well-known unpatched vulnerabilities.

The script is obfuscated, but what it basically does is generate a page for a keyword, pulling text from Google and images from Bing, so it ends up with something that looks legitimate and gets you to click through. The keywords come from the top trending searches on Bing and Google.

The malicious sites can sufficiently cloak their behaviour using redirects and JavaScript, so they can hide themselves from automatic detection by search engines.

Their tool looks for sites that suddenly host a new type of content or large quantities of new content, clusters similar domains, and compares the new pages on one site to another's.
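
A toy version of that first heuristic - flagging a site whose set of URLs suddenly balloons - might look like this sketch of mine (thresholds made up):

```python
def suspicious_growth(snapshots, factor=10, min_new=100):
    """Flag a domain whose URL set suddenly balloons between crawls.
    snapshots maps domain -> list of sets of URLs seen on successive
    crawls; factor and min_new are made-up thresholds."""
    flagged = []
    for domain, history in snapshots.items():
        for before, after in zip(history, history[1:]):
            new = after - before
            if len(new) >= min_new and len(new) >= factor * max(len(before), 1):
                flagged.append(domain)
                break
    return flagged
```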

The audio and video of this presentation are now online.

This article is syndicated from Thoughts on security, beer, theater and biking!

USENIX: I'm From the Government and I'm Here to Help: Perspectives From a Privacy Tech Wonk

Tara Whalen, IT Research Analyst from the Office of the Privacy Commissioner of Canada, was a last-minute fill-in.

Ronald Reagan: "The nine most terrifying words in the English language are 'I'm from the government and I'm here to help.'", and while Whalen is from the government, she hopes that we aren't terrified of her. :-)

As the US government doesn't have an Office of Privacy, Whalen gave us an overview of her Canadian agency. The office was established in 1983 with the passing of the Canadian Privacy Act. Their mandate is to oversee compliance with the 1983 Privacy Act and the 2000 PIPEDA Act, which means they watch over both the corporate world and individual citizens. They help review new policies and guide parliament.

In addition to those more standard government functions, they also have a technology analysis branch, where they do investigations, audits, privacy impact assessments, and research. This division supports a lot of research, even including a game for Canadian children to teach them about privacy.

Whalen went into detail on a couple of case studies. The first was their investigation of Facebook, where a group of law students had reviewed Facebook's policies against Canada's PIPEDA and Privacy Acts. The result was a 24-point complaint to Whalen's office, which triggered an in-depth investigation.

The investigation was very detailed and involved using things like packet sniffers to see what actually happens with data on the wire. After a year, the Canadian government had an official complaint to give to Facebook requesting eight items to be corrected, six of which were relatively easy changes to the language on the site - for example, disambiguating between account deactivation and account deletion.

Some of the roadblocks her team hit were Facebook redoing all of its privacy settings and adding many new features in December 2009, as well as all of the third-party apps that hook into the system. New complaints have come in, so the investigation is still ongoing and Whalen could not comment further.

The next case study she presented was the Google WiFi complaint, which was initiated by a privacy investigator in Germany. Basically, while Google was driving around collecting pictures for its Street View service, it was also collecting information on WiFi networks. Google's initial response was that no data payload was being collected, which made the privacy experts very happy... until they found out that wasn't a true statement. Google had actually, accidentally, collected over 600 GB of payload data from unprotected WiFi networks around the world.

Google of course apologized and quickly discontinued the practice.

Google did hand over the data collected in Canada (18 GB) to the government, which was then faced with a bit of a conundrum. Google had not looked at nor utilized the data, so the privacy group didn't want to deep-dive into potentially very personal information and expose things that were still private. They did a cursory examination, looking at some of the personal information only to verify that it was indeed collected, and presented aggregate information in their report. They did find whole emails, even though Google had stressed it had only picked up fragments - obviously, the data collected depended on what a user was doing at the exact moment the Street View car drove past their house.

Google did take the complaints very seriously, adding privacy training for engineers and appointing an internal privacy officer.

Another area the privacy office looks at is location privacy. The case shown here was about a German citizen who sued Deutsche Telekom to get his own location data, and then shared it with the world. Quite a shock how much information his cell phone carrier had on him!

Then there was the recent case where Apple was collecting location information from iPhones and 3G iPads, even if location services were disabled on the device. This information wasn't just stored on the device; it was also transferred to any computer you synced with and transmitted to Apple. This was well covered in the media, particularly because the maps were so visually interesting.

It wasn't just Apple: Android and Microsoft devices did this as well, though to varying degrees.

In Canada, there is a lot of legislation being proposed to help protect privacy and better define when data can be held and accessed by law enforcement.

It is good to know that, at least in Canada, there is someone in the government that cares deeply about protecting citizen privacy.

The audio and video of this presentation are now online.

This article is syndicated from Thoughts on security, beer, theater and biking!

Wednesday, August 10, 2011

USENIX: Forensic Analysis, Refereed Papers

Forensic Triage for Mobile Phones with DECODE

Written by Robert J Walls, Erik Learned-Miller and Brian Neil Levine, University of Massachusetts Amherst, presented by Robert Walls.

Forensic triage attempts to acquire evidence quickly and accurately from a crime scene. DECODE works on mobile phones and can extract information from the raw data on the phone, without specific knowledge of the phone's file system or operating system.

Phones are the focus of this research as they are everywhere and essentially record our lives, and likely contain evidence. Even without direct evidence, they can be used to find motivation and establish a time line.

Directly browsing the phone only gets law enforcement the information that hasn't been deleted, and the act of browsing could itself modify the data. Many commercial tools currently available are very expensive and focus only on the most common phones.

DECODE looks at the raw storage (bytes of data with unknown format), which helps retrieve "deleted" data, metadata, and timestamps. It does this using block hash filtering and inference.

Inference relies on most phones having data listed together, like name, time, and phone number.
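
Roughly, block hash filtering might look like the following sketch of mine (the block size is a guess), after which inference hunts for records in whatever survives:

```python
import hashlib

BLOCK = 512   # block size is a guess; the real tool would tune this

def interesting_blocks(image, known_hashes):
    """Hash fixed-size blocks of the raw phone image and discard any
    block that also appears in images of factory-fresh phones
    (known_hashes). What survives is likely user data; inference then
    hunts for records (name, timestamp, phone number) in it."""
    survivors = []
    for off in range(0, len(image) - BLOCK + 1, BLOCK):
        block = image[off:off + BLOCK]
        if hashlib.sha1(block).digest() not in known_hashes:
            survivors.append((off, block))
    return survivors
```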

This work can be applied to phones that have not been previously seen - making it much more extensible in this ever changing market.

The audio and video of this presentation are now online.

mCarve: Carving Attribute Dump Sets

Written by Ton van Deursen, Sjouke Mauw, and Sasa Radomirovic, University of Luxembourg. Presented by Sjouke Mauw.

These researchers used beer, card readers, and time to look at hacking their public transport cards. Unfortunately, they were not able to use existing forensic carving tools, so they had more work to do. The researchers knew when the cards were purchased, how much money was left on them, and when they were last used - as they were their own cards. This gave them some "known text" to search for, i.e., attributes of the card.

Just knowing that "plain text" was not enough; in some cases the plain text was too simple and would appear multiple times on the card - for example, knowing that the card had been used 4 times. But, combining that data with other attributes, they were able to narrow down the different components of the card.
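
The carving idea is easy to sketch: for each attribute whose value you know, and each plausible encoding, keep only the offsets where that encoding appears in every dump. (My own toy version, not the mCarve tool itself.)

```python
def candidate_offsets(dumps, encode):
    """dumps: list of (raw_bytes, known_attribute_value) pairs, one per
    card dump; encode: one guessed encoding of the attribute (you would
    try several). Returns the offsets where the encoded value appears
    in every dump - if that encoding is right, the attribute lives at
    one of them."""
    common = None
    for raw, value in dumps:
        needle = encode(value)
        hits = {i for i in range(len(raw) - len(needle) + 1)
                if raw[i:i + len(needle)] == needle}
        common = hits if common is None else common & hits
    return common or set()

# e.g. try a one-byte counter: candidate_offsets(dumps, lambda v: bytes([v]))
```
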
The audio and video of this presentation are now online.

ShellOS: Enabling Fast Detection and Forensic Analysis of Code Injection Attacks

Written by Kevin Z. Snow, Srinivas Krishnan, and Fabian Monrose, University of North Carolina at Chapel Hill; Niels Provos, Google.

Exploit kits are making it easier and easier to deploy attacks. The speaker started out with a real world example of an email that looked very much like the standard email you get from a Xerox copy-scanner, except that the attachment contained shell code that could be used to attack the system.

One way of detecting this is dynamic code analysis: partly executing the code in a sandbox environment to detect malicious behavior. Emulation-based approaches are slow, though, and can be easily detected by the malicious code.

This is where ShellOS comes into play. Execution runs uninterrupted, at native speed. If any fault occurs, it is trapped and skipped. It does this in real-time, which makes it more stealthy.

Their next experiments looked at how effective they were in practice at detecting shellcode. On a 100 Mbps line, they could process packets in real time without risking dropped packets, running on one CPU.

Their most important test came in trying to detect PDF code-injection attacks. This is where Niels Provos came into play, handing over documents that had been flagged by Google's large-scale web malware detection system, compared against past USENIX Security conference PDFs (the assumption being that those would be exploit-free).

While examining these documents, they found almost all of them were attempting to get a shell, which is exactly what ShellOS was created to detect. They were able to detect the code in the malicious documents and didn't get any false positives in the presumed-innocent set from USENIX.

This seems like a very cool project and I'd be very interested to see where this ends up going.
The audio and video of this presentation are now online.

This post is syndicated from Thoughts on security, beer, theater and biking!

USENIX: Analysis of Deployed Systems

Why (Special Agent) Johnny (Still) Can't Encrypt: A Security Analysis of the APCO Project 25 Two-Way Radio System

The paper was written by Sandy Clark, Travis Goodspeed, Perry Metzger, Zachary Wasserman, Kevin Xu and Matt Blaze, presented by Matt Blaze.

APCO Project 25 (P25) is a standard for digital two-way radio used by law enforcement in the US and worldwide. The radios work over a narrowband channel at 9600 baud, where the sender makes all decisions, everything is multicast, and there is no concept of an "ACK".

The standard does allow for optional security, like encryption (AES, DES, etc.), configured in a manual process - though radios can be rekeyed live (while in use). What is interesting about these security options is that they are not explicitly defined in the standard, which leaves it up to vendors to come up with ways to configure things like encryption. So far, the paper's authors haven't found any devices that use authentication.

Looking at attacks: you can use something similar to "ping" to map where all of the P25 radios in an area are - basically giving away the locations of security personnel, which can help attackers find weak spots.

There are also very easy ways to jam these devices using consumer hardware, like the GirlTech IM-ME (an "instant messenger" toy that can be purchased for $15). Jammers can even be configured to jam selective traffic - for example, blocking only encrypted traffic, a good way to get users to think something is wrong with their crypto mode so they'll disable it.

While you can rekey on the fly, it does require everyone to already have a key to begin with. The P25s rely on centralized keying, so if just one radio shows up without the key, everyone needs to talk in the clear. So, why bother with cryptanalysis when you can just look for cleartext [USENIX Security '95]?

The researchers recommend doing away with the per-radio encryption switch altogether and instead encrypting entire channels, as well as decreasing the frequency of rekeying - frequent rekeying is actually creating security problems by pushing people to talk in the clear.

The audio and video of this presentation are now online.

Dark Clouds on the Horizon: Using Cloud Storage as Attack Vector and Online Slack Space

The paper was written by Martin Mulazzani, Sebastian Schrittwieser, Manuel Leithner, Markus Huber and Edgar Weippl from SBA Research.

There are many places where you can now store data in the "cloud", some using simple models like FTP or more complex, like delta detection. Most sites are now trying to use deduplication, which will help save on storage space.

The researchers looked at Dropbox, which uses Amazon's Simple Storage Service (S3), SHA-256-based deduplication, and AES encryption. Their first attack takes advantage of hash manipulation: with just the hash value of a file, they could gain unauthorized access to it - undetectable by either the victim or Dropbox.
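
Here's a toy model of why client-side deduplication gives that away - not Dropbox's actual protocol, just the shape of the flaw:

```python
import hashlib

class ToyDedupStore:
    """Toy model of client-side deduplication: if the server already
    knows a hash, the client never uploads the bytes, so anyone who
    merely *knows* a file's hash can claim the file and download it."""
    def __init__(self):
        self.blobs = {}        # sha256 hex digest -> content
        self.accounts = {}     # user -> {filename: digest}

    def put(self, user, name, content=None, digest=None):
        digest = digest or hashlib.sha256(content).hexdigest()
        if digest not in self.blobs:
            if content is None:
                raise ValueError("upload required")
            self.blobs[digest] = content   # only the first copy is uploaded
        self.accounts.setdefault(user, {})[name] = digest
        return digest

    def get(self, user, name):
        return self.blobs[self.accounts[user][name]]

store = ToyDedupStore()
h = store.put("alice", "secret.pdf", b"top secret bytes")
store.put("mallory", "stolen.pdf", digest=h)   # knows only the hash...
assert store.get("mallory", "stolen.pdf") == b"top secret bytes"
```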

The second attack they analyzed was the "stolen host ID attack": Dropbox uses a host ID to link a particular host with an account, so once someone steals your host ID, they can impersonate you. This attack can be easily detected, and Dropbox is now preventing it.

If you know someone else's host ID, you can also store your data in their Dropbox - it won't count against your storage quota, and as long as you have the address, you can continue to retrieve your data.

The audio and video of this presentation are now online.

Comprehensive Experimental Analyses of Automotive Attack Surfaces


Written by Stephen Checkoway, Damon McCoy, Brian Kantor, Danny Anderson, Hovav Shacham, Stefan Savage, Karl Koscher, Alexei Czeskis, Franziska Roesner and Tadayoshi Kohno.

Cars are no longer mere mechanical devices; they are controlled by dozens of electronic control units (ECUs) running millions of lines of code. In general, this makes the car safer, but it is a problem if an attacker is able to take control of the car.

Many of these ECUs can even be reprogrammed while the car is being driven! All it would take is for one of these devices to be infected for the infection to spread to the rest of the vehicle. This kind of control could allow an attacker to disable the brakes, the lights, or even the engine!

The researchers described three major types of attack surface. First, indirect physical attacks work over a physical interface, without direct access to the device itself. Second, short-range wireless attacks target things like tire pressure sensors, remote keyless entry, WiFi access points, and vehicle-to-vehicle communications. The third type, long-range wireless attacks, takes advantage of things like HD Radio or the systems used for roadside assistance.

Every vector of attack the team worked on led to some type of system shut down.

In the indirect physical attack, the team looked at the media player, which uses ISO 9660 (apparently pretty common). They were able to come up with a WMA file that would play fine on a computer but would reprogram the car's radio.

Their short-range wireless attack used Bluetooth to take advantage of a strcpy() bug, completely undetectable by the user. They were also able to exploit a buffer overflow in the car's telematics unit - basically, you can call a car and fill it with malicious code.

In fact, they could play their malicious "song" from an MP3 player into a phone that had called the car's unique cell number, and the attack code was loaded by the car.

They actually managed to install an IRC client onto the telematics unit, used it to get a shell there, and got the car to send broadcast packets to attack other cars.

You can easily use this technology to steal a car - use GPS to locate the car, use their device to unlock the car, bypass security tools and start the engine. They showed a video where they did this - drove a car away with no key!

The same researchers took advantage of these same technologies to remotely eavesdrop on people in their car - from 1,500 miles away!

These telematics units contained things like ftp, telnet, nc, vi... on a UNIX like real-time operating system. Not quite secure out of the box...

How did we get here? Basically, nobody had been attacking cars, so there was no reason to protect them. But this is improving - SAE, USCAR, and the US DOT are working on it. Too little, too late? Let's hope not!

The speaker ended the talk with a picture of a hacked odometer. Great talk!

The audio and video of this presentation are now online.

This post is syndicated from Thoughts on security, beer, theater and biking!

USENIX: Three Cyber War Fallacies, Dave Aitel, Invited Talk

Dave Aitel, CEO of Immunity, Inc., started out with a picture of "The truth shall make you free" - a quote from a wall in the offices of the CIA - which helped Aitel launch his talk on cyber security, cyber war, and how irony pervades this industry.

According to Aitel, there are three fallacies of cyber war:
  1. Cyberwar is asymmetric
  2. Cyberwar is non-kinetic - as in it's in the virtual world, no "real" victims.
  3. Cyberwar is not attributable
Aitel notes that there won't always be explosions or "instant death" when it comes to cyber war, but there can still be great consequences - including loss of utilities, as more things come online.

He warns Californians about the security implications of PG&E's SmartGrid, where a not-so-smart chip that will be very easy to compromise controls when you can have AC, etc. That type of attack, along with the recently discussed exploits on automobiles, puts people's lives in jeopardy every day.

For example, many took Stuxnet as a one-off trojan horse that is now totally under control - what the community at large doesn't seem to see is that it was a demonstration that this (or something like it) can be used to target any factory or any utility at any time.

The problem with a lot of these trojans and worms is that once your corporate network is infected, it is virtually impossible to completely rid your network of the hackers. Think about it: if it took you six months to a year to discover the intruder, then you have to assume they are everywhere, and you are unlikely to ever get them out entirely.

Aitel then started on his point about how cyber war is NOT asymmetric by giving many counterexamples. Unfortunately, he spoke very fast and the slideware moved quickly (and was overcrowded and filled with tiny words), so I had a hard time following that point...

Automated computer security commonly involves things like vulnerability scanners, static analysis, and web application scanners - he says they just don't work: too slow, too tedious, and they still require manual analysis. Aitel believes his team can find more bugs by just looking at the code rather than relying on these tools. Personally, I think that would be great if we were all perfect, but I've definitely seen static analyzers find things humans missed, both while writing and while reviewing code.

Aitel has a very strong opinion on "script kiddies" - he believes the term belittles what is really a challenging career, which he compares to being a nuclear scientist. I'm sure he's being a bit tongue in cheek, but, as someone with a science degree, I can say for certain that a nuclear scientist would most definitely be a lot more skilled than someone who runs someone else's attack scripts. Sure, there may be some learning curve to running these, but... it's not nuclear science.

Aitel then went on to quote CERN about how SSL based VPNs are all broken, due to fundamental flaws in the architectures, but did not go into details. I'm happy that I'm reading my mail over an IPsec connection ;-)

One thing that has changed over the years is that the attacking community is now mature, organized, and highly motivated. After realizing that DefCon this year had reached 19 years of age... and that I started attending back at DefCon 2, I can only imagine how accurate that statement is. [And at DefCon this year, there was a children's track....]

Regulation can't help here - it's too slow. Aitel argues that until the "traditional bearded men that work on security get into government," it isn't going to get better. I guess Professor Spafford fits that mold, but I'm not sure how Susan Landau does... guess she'd better work on her beard.

Overall, this talk was fun and entertaining, but seemed to be an agenda for why you couldn't possibly secure your own network or code on your own, and you need to hire his security team.

I think the important takeaways are that you cannot rely on static analyzers alone to make sure your code and network are secure, that policies and tools need to be regularly re-reviewed, and that you must keep ahead of the attackers.

The audio and video of the talk are now online.

This post is syndicated from Thoughts on security, beer, theater and bikes!

USENIX: Charles Stross Opening Keynote

Charles Stross, two-time Hugo Award-winning science fiction writer, greeted us with a talk on Network Security in the Medium Term (2061-2561). He quipped that by setting his predictions so far out, we'll be unlikely to be able to prove him wrong - and if we can, then he'll be happy just to be alive.

His talk started out with the obvious: we won't have to worry about network security if, for example, we have a global system panic - so he's going to assume that doesn't happen. The other issues around seeing his predictions come to fruition depend on medicine meaningfully extending our lives, having stable political systems, and managing society's increasing complexity.

In the future, we won't be able to ignore emerging countries like China and India, and, well, the whole of Africa, all of which will change how we manage security and interact together.

Stross told a story of a theoretical time traveler from the 1960s and how huge a technology hurdle he'd have to overcome - now imagine how much technology change we can expect in the next 50 years! Children nowadays will never know the experience of being lost: always connected, always with GPS at hand. [Of course, such an assumption presumes a child growing up in a home of means - don't forget the aforementioned Africa!]

Stross took us through a myriad of potential futures. Will we, as humans, have a lifelog like we have in modern cars? Will we have to fight super viruses, bacteria, and cancer? Will genome-sequencing computers be able to help protect us? But will we want to share our own DNA in order to aid those computers? Give up our privacy in order to always have an alibi? Give our insurance companies access to vehicle hard drives so they can detect how people really drive? It seems unlikely.

I've personally submitted my DNA into two systems: Kaiser Foundation's Genome Study and 23andMe. I did this both to advance science and to give myself valuable information about my gene profile, though my neighbor, a DNA forensic scientist, believes I was insane to share my DNA with anyone. Perhaps I should've consulted her before I spit in that cup.

Lots of interesting scenarios to think about - we'll definitely have to be more careful with our privacy as more information is put online and into storage.

The audio and video of the talk are now online.

This article is syndicated from Thoughts on Security, beer, theater and biking!

Sunday, August 7, 2011

Peaches en Regalia

My husband and I had the pleasure to see Peaches en Regalia from Wily West Productions at the Stage Werx Theater in San Francisco last weekend.

This is a new work, being presented for the first time as two acts. It's a sweet play revolving around the ever changing life of the title character, Peaches, who has recently taken a job at a restaurant where they serve... wait for it... Peaches en Regalia.

Sarah Moser, as Peaches, takes the stage by storm with her opening monologue, which describes college, her internship at a financial institution, and why she decided to take a job at a diner. Moser is energetic, and her monologue comes to life as the other actors help her reenact scenes from her recent past.

The hilarity continues when Philip Goleman arrives at the diner as Norman and gives us a riotous monologue on bathroom etiquette and working on his flirting skills.

Of course, the two come together. The fast paced and delightful show moves along as Nicole Hammersla (Joanne) and Cooper Carlson (Syd) fill in the picture. Joanne has a nervous tic (picking at her sweaters) and Syd is a soft-hearted republican.

This was a wonderful production that will keep you entertained throughout! Even with a short 10-minute intermission, the show ran only about 90 minutes.
Definitely worth a trip up to San Francisco!

This post is syndicated from Thoughts on security, beer, theater and biking.

Thursday, August 4, 2011

Oracle Solaris Security BoF at USENIX Security

I'm excited to announce that Oracle Solaris security developers will be presenting on our recent work, talking about cloud security, and doing a Q&A panel at a BoF at USENIX Security next Thursday night.

Date: Thursday, August 11, 2011
Time: 7:30-8:30 PM
Location: The Westin St. Francis, 335 Powell Street, San Francisco, CA
Room: Elizabethan A

If you're attending USENIX Security, please come by. There will be giveaways for excellent questions, and beer. Does it get any better than that?