Posts in Category "Privacy & Security"

Recent news, steady progress

Funny news day — lots of little things popping, some drawing much more attention than others, hard to get perspective. There’s a common theme among them, however. Even though there’s lots of growth in new types of environments, there’s a lot of work in bridging them, too.

One example is how browsers are starting to expose Flash Player local storage… from the FAQ:

“Integration with browser privacy controls for managing local storage — Users now have a simpler way to clear local storage from the browser settings interface, similar to how users clear their browser cookies today. Flash Player 10.3 integrates control of local storage with the browser’s privacy settings in Mozilla Firefox 4, Microsoft Internet Explorer 8 and higher, and future releases of Apple Safari and Google Chrome.”

Letting webpages store more-than-cookie-sized data is a good idea, as recent HTML5 local-storage work shows. But as cross-site tracking and personality databases become more worrisome, it makes sense to expose integrated local-storage to public control however people wish to control it. The good news is that different parties can (and do) work together to bring this about. Progress.
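To make the storage model concrete, here’s a minimal sketch of per-site local storage with a single browser-level “clear” hook… this is a hypothetical model, not any browser’s or Flash Player’s real interface, but it illustrates the integration point: one user action flushes every site’s data, cookie-sized or larger.

```javascript
// Hypothetical model of per-origin local storage with one integrated
// "clear private data" entry point (not a real browser API).
class OriginStorage {
  constructor() {
    this.stores = new Map(); // origin -> Map of key/value pairs
  }
  setItem(origin, key, value) {
    if (!this.stores.has(origin)) this.stores.set(origin, new Map());
    this.stores.get(origin).set(key, value);
  }
  getItem(origin, key) {
    const s = this.stores.get(origin);
    return s && s.has(key) ? s.get(key) : null;
  }
  clearAll() {
    // The browser-level hook: one click clears every origin's data,
    // whether it came from cookies, HTML5 storage, or a plugin.
    this.stores.clear();
  }
}
```

The point of the integration work is that plugin storage hangs off the same `clearAll`-style hook as browser cookies, instead of needing its own separate settings panel.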

Another example is the Wallaby project… not as dramatic as Techmeme may paint it, but it’s still useful to be able to bring basic SWF assets into a different delivery environment. Fragmentation is natural during fast evolution, but connecting such silos is natural too. Progress.

A subtler example is from the Dreamweaver team this week, about the differences in touch events across different WebKit-based browsers. Touchscreens and scrolling forms, or preventing doubled events when there’s also a trackball controller… natural for fragmentation to occur, and natural to bridge that fragmentation too.

More obvious is the work that Adobe’s Digital Publishing group has been doing… working with major publishers to bridge across all the various islands of new devices rapidly appearing. This will soon help smaller publishers too.

Screaming headlines may clash and obscure significance, but the real pattern underlying the news is easier to see: we’re rapidly gaining a wide variety of connected digital screens, and the big work is in helping anyone to write to them. There’s daily, incremental progress towards that goal… connecting those silos, bridging those islands.

Adobe stance on local storage

Noticed that WIRED has coverage of a legal challenge to various websites which use Local Shared Objects in Adobe Flash Player to complement browser cookies in identifying return visitors… the story got picked up by Slashdot and Techmeme.

I don’t know details of the individual websites or the particular concern, but I do know that Adobe has expressed its position on this… see the February “Adobe condemns cookie respawning in comments to FTC” and “My Interview with Adobe Chief Privacy Officer”. Adobe is also working with the major browser vendors to integrate with their recent “private browsing” modes.

(For me personally, the bigger issue is any such storage and identification done by third-parties across websites… the WIRED article’s webpage itself requests assets from nearly a dozen third-party domains: “web beacons” which notify a service when you visit a page. Details of local storage or IP tracking only seem to matter once such third-party notification systems are in place.)

Private Browsing

Good news… Adobe Flash Player 10.1 can tie into the “Private Browsing Mode” of some browsers, meaning that any local storage is flushed at the end of each session.

I haven’t worked with it across browsers myself yet, but it already tests successfully within Microsoft Internet Explorer 8.0, Mozilla Firefox 3.5, and Google Chrome 1.0. Apple Safari is also reported to be supporting it in the future. (I’m not sure about Opera’s Private Mode, especially in light of their quotes about plugins, but I hope it will also work in Opera.)

This won’t matter for most of us, but is good protection on shared screens (libraries, hotels, etc)… if we enter a password on a public computer, this could clean out all traces in both the browser’s cache and Flash’s Local Storage.

The Flash Player’s local storage options have existed since Player 6 in 2002, possibly earlier. They’re necessary for storing application state across sessions, and also for synching with servers. Back then browsers rarely offered interfaces for managing their own local storage, so Player offered a mini-UI on a context-click, and a larger Settings interface on an adobe.com webpage to expose your local data store. Now that browsers are offering more mature privacy controls — and ones with which plugins can connect! — it’s great to see many control aspects united in a common interface.

This may cause complications for content developers, however. While “Private Mode” makes most sense for public screens, their alternate name of “Porn Mode” implies use on family machines too. If someone returns to a game after entering Private Mode, then the game may not recognize their previous levels of accomplishment. Even within the same Private Mode session, browsers vary in how they handle inter-page communication, so it may be difficult to retain app-state across HTML refreshes. The last half of the Developer Connection article has more info on potential new support costs.

If you’re working in this area of local storage, then you might also want to check into “3rd-Party Cookies, DOM Storage and Privacy”, a recent survey of how different browsers deal with different third-party storage requests in different user-modes.

Cooperation on “Private Mode” settings is a positive step, but there’s still more work to do… as Adobe’s Brad Arkin recently explained: “For a long time we have been trying to work with the web browser vendors for them to open up the API, so that when the user clicks ‘clear browser cookies,’ this will also clear the Flash Player Local Shared Objects. But the browsers don’t expose those APIs today. That’s something that we’ve been working with the browser vendors, because if they can open up that API ability then we can hook into that as Flash Player, so that when the user clicks ‘clear’ it will clear LSOs as well as the browser cookies. Our goal is to make it as easy and as intuitive as possible for the users to manage Local Shared Objects. There’s a lot of study going on right now around the user interface and the integration at the browser level of how we can best support that.”

(btw, I do not agree with pundits’ claims that “privacy is really, really dead”… most of these arguments are of the form “privacy is not absolute; therefore it does not exist”. Just as with protection from crime or disease, the goal is to minimize your exposure to risk. In this case it’s prudent to minimize the ability of web trackers to create proprietary databases which are then potentially vulnerable to internal or external breaches. Just because we likely cannot resist a highly determined privacy attack does not imply that we should fail to protect against all other privacy risks.)

Inside Adobe Security

Dennis Fisher and Ryan Naraine recently conducted a great audio interview at Threatpost with Adobe’s Brad Arkin. They talk about what actually happens when a new way is found to do evil things with deployed software. Ryan also produced a full transcript of the audio, and I’ve extracted some of the main ideas below.

(Editing notes: I rearranged different segments into general topics, and did some pretty heavy editing to turn conversational speech into written speech. If in doubt, please refer to Brad’s original words in the audio interview at Threatpost.)

Thanks to Dennis and Ryan for conducting this interview and for publishing the transcript!

The Dec 15 incident

“We received several reports from different partners about a potential new Reader vulnerability, all within a couple of minutes of each other on Monday afternoon. We opened these files and did some quick triage work to verify that it was an attack that worked against the latest fully updated versions of Reader and Acrobat. Then, based on the number of contacts that we’d received, we figured this was something that people were seeing and that it was likely to get some coverage in the media soon.

“We posted to our PSIRT blog as soon as that information came in, just saying that this appeared to be real, we were looking into it, and that we’d have more information later. From there, the team worked here in North America, and then in our remote offices overseas, on the problem through the night and we got the advisory out on Tuesday. That advisory contained the CVE numbers, some details about what some users can do in order to mitigate the problem before a patch is available. We then updated the advisory later on Tuesday with the ship schedule.

“Our focus, historically, had been on getting a copy of a reproducible bug, and then we shifted all of our focus into remediating the bug, and then communicated mitigations and that sort of thing. Maybe a year ago we would have largely turned our backs on the details about new exploit techniques or anything fancier, such as escalation of attack levels, because from our perspective it’s the same bug, and we just want to get that bug fixed. And that’s where most of our attention was.

“Some of what we’ve changed over the past 12 months is that we’re really putting a lot more energy into understanding how attacks evolve and scale out. For instance, the very first attack using this vulnerability might have been against some high-value individual or company, but then it goes from very targeted attacks to more widespread ones. That’s useful for us in how we communicate to the users and the kinds of mitigations that we can design at a more strategic level.

“The PSIRT team handles the security details of figuring out ‘Is it a bug? If it is, what’s the impact?’ They’ve got the special skills for how to handle malicious samples and that sort of thing. And then the major product teams also have incident response personnel that work directly as part of the product teams.”

JavaScript’s new line-item veto

“There’s a new feature that Reader added in October called a JavaScript blacklist. It allows users to define any specific JavaScript API as a blacklist item, which will then not be executed.

“In this case, the vulnerability is in a JavaScript API called ‘docmedia.newplayer.’ By putting that term into the blacklist, any PDF document which calls docmedia.newplayer will be denied. It will deny valid calls as well as malicious calls. This is something individual users can do, and it can also be done by administrators for managed desktop environments, using group policy objects to roll out the change as a registry key.

“What the JavaScript blacklist does is basically no-op any call to that API. Now there are some JavaScript functions that are used very often, such as verifying date formats or form submission, and if you were to blacklist one of those items, then the document you’re working on wouldn’t work correctly. But docmedia.newplayer is used a lot less often, so for most users their experience and workflows will not be affected.

“The JavaScript blacklist function is the most powerful mitigation. It completely protects users against the attack, and at the same time it will cause the least disruption for legitimate uses of the program.”
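The blacklist mechanism Arkin describes can be sketched in ordinary JavaScript… the table of APIs below is an invented stand-in, not Reader’s actual internals, but it shows the idea: a blacklisted name resolves to a no-op, so malicious and legitimate calls alike are silently denied while everything else keeps working.

```javascript
// Sketch of a per-name API blacklist (illustrative only — not Reader's
// real implementation). Blacklisted entries become no-ops.
function applyBlacklist(apiTable, blacklist) {
  const guarded = {};
  for (const name of Object.keys(apiTable)) {
    guarded[name] = blacklist.has(name)
      ? () => undefined          // the call is denied, valid or malicious
      : apiTable[name];          // everything else passes through untouched
  }
  return guarded;
}

// Invented stand-ins for the JavaScript APIs named in the interview.
const api = {
  'docmedia.newplayer': () => 'player started',
  'util.printd': () => 'date formatted',
};

const safeApi = applyBlacklist(api, new Set(['docmedia.newplayer']));
```

With that in place, `safeApi['docmedia.newplayer']()` returns nothing at all, while the commonly-used date-formatting call is unaffected — which is why the interview calls this the lowest-disruption mitigation.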

Disabling JavaScript, safer operating systems

“Something that’s a lot more disruptive, but also completely mitigates the current attack, is disabling JavaScript altogether. So if the blacklist function is not acceptable for some reason, then disabling JavaScript is an alternative that users can also deploy.

“But JavaScript is really an integral part of how people do form submissions. Anytime you’re working with a PDF where you’re entering information, JavaScript is used to do things like verify that the date you entered is the right format. If you’re entering a phone number for a certain country it’ll verify that you’ve got the right number of digits… when you click ‘submit’ on the form it’ll go to the right place. All of this has JavaScript behind the scenes making it work, and it’s difficult to remove without causing problems.

“And in our testing, if you have Windows’ Data Execution Prevention (DEP) enabled, what happens is an attack that otherwise would have worked instead triggers a crash, which does not appear to be exploitable. Now, there are always clever things that people might be able to figure out, but DEP seems to offer another level of protection at the operating system and hardware level, which is separate from the configuration changes.”

Deployment concerns

“Once we get a fix into the code-base, then we need to roll that fix into the actual update that a user can install and deploy on their machine out in the real world. That process is not security-specific — it’s a question of, ‘How do you take a change in the code and turn that into an installer?’ For us, if you make a one-line change in the code, then you need to roll that into all the different versions and flavors of Reader and Acrobat that are getting deployed on all the different platforms that we support. When you’re dealing with software that gets deployed onto hundreds of millions of machines, the threshold is a lot different.

“Something like 29 different binaries get produced from that process… you have Adobe Reader for Windows Vista, Windows 7, Windows XP, Windows Server, all these different flavors, and then Mac, different Linux flavors. Out of those 29 binaries, then you multiply that across 80 languages, or however many are supported depending on the platform and the software versions. Then, you need to make sure that in the process of producing all of those different flavors of the actual installer, that we didn’t introduce any new problems, and that we didn’t introduce anything that might break something else down the line. In all of this, it’s not a matter of just ‘hitting the compile button’ and then an hour later you’re done. Once we get a build that looks like it addresses all our needs, then we need to run it through all the paces to make sure this will deploy correctly, that it will only change what we’re looking at, and not change configuration settings or anything else that people care about.

“That is the process that takes a long time. We can get a malicious sample, triage it, figure out in the code where the bug is, and then get a fix into that code-base within 12 to 24 hours. Then we test to make sure that the code change we made didn’t break anything else, and we also look around the code to see if there’s anything else that makes us uncomfortable that we should tighten-up. All of that gets done pretty quickly.

“But then it’s the build process from there on out, getting everything tested to make sure it works well. Because if we were to roll out a patch that had some tiny little bug that had an impact on less than one percent of users, then you’d still be talking about millions of machines worldwide.”
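The numbers above multiply quickly… a back-of-envelope check, using the figures from the interview (everything else here is illustrative):

```javascript
// Rough build fan-out from the interview's figures: 29 binaries
// across roughly 80 languages, before any QA passes.
const binaries = 29;  // platform/product flavors
const locales = 80;   // "or however many are supported"
const artifacts = binaries * locales; // distinct installers to build and verify
```

That’s 2,320 installers from a one-line code change, each of which needs to deploy correctly without disturbing anyone’s configuration — which is why the build-and-test phase, not the fix itself, takes the time.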

On effective guidance

“Whenever a report comes in, the first thing that we evaluate is its priority. In this case credible sources reported it to us, and during our initial research it seemed to be a real issue. Based on the fact that we received three reports, we figured this was something that more than a few people are seeing. And so we posted immediately saying, ‘From our perspective, this is real. We’re looking into it.’ At that point, that was really the only information we had.

“Now, an example of where things can go wrong in the beginning is that these attacks are very heavily obfuscated, because they’re trying to avoid all of the different anti-virus and anti-malware solutions that you’ll have at the gateway and the desktop. The three different organizations that reported the vulnerability to us, each of them had done a little bit of diagnosis themselves, and each of them had come up with the idea that the bug was in a JavaScript API called util.printd. And when we first looked at it, that’s what we thought, too, because it’s the last call in the attack, and so we figured that was triggering the crash that would then set up the exploit.

“But if we had gone public with ‘We think it’s in JavaScript, and we think it’s this API. Here’s a link to the blacklist functionality, figure it out yourself, we’re going to start testing and doing more research,’ that would have been wrong. After we did some more research, we found out that it wasn’t actually in util.printd at all — it was in a totally different area of the JavaScript API, in docmedia.newplayer.

“In the early hours of doing the triage work we’re uncovering a lot of information, but not all of it is accurate, and it’s certainly not complete. If a third-party security provider gives guidance that says, ‘Hey, it looks like util.printd is the problem; here’s what you can do,’ and if it turns out they’re wrong, they can just say, ‘Oops, sorry.’

“For Adobe, if we roll out information, people assign a certain confidence level to that, and then they’ll go off and take action on it. If an administrator for a managed desktop environment with half a million machines had rolled-out this JavaScript blacklist for the wrong API, they would be pretty irritated with us. And if you multiply that across hundreds of millions of machines, across the entire universe of people that use our software, then that’s why we need a very high confidence criteria before we’ll publish any information.

“And another example about JavaScript is that pretty much all exploits use JavaScript in order to set-up the heap with the malicious executable content before they trigger the crash. The crash itself might or might not be through a JavaScript API. So the very first time you examine an exploit, it might look like it has something to do with JavaScript, but that isn’t always true. Even if you disable JavaScript it might not protect you, because there might be other ways to prepare the heap in order to get this external content onto the stack. These are the kinds of behind-the-scenes questions that we have to answer before we can publish information.

“In this case, because we knew it was going to go public, we wanted people to know that we were aware of it, that we were working on it, and that it looked real. In other cases, if someone reports something to us and it’s a situation where it looks like it was a targeted attack, there’s a good chance this one organization was the only entity in the world who’s seen this malicious PDF, then we might take a day or two in order to make sure that we really understand it before we do a full advisory in our first publication with all the details, all the information, all the platforms.

“We’ve got two needles that we measure very carefully. One of them is real-world attack activity. The other one is the fear level — the perception of real world attack activity, whether or not it’s real. Our goal is to drive both of those down into a healthy range where as few people as possible are being attacked, and where people have a good understanding of where the real threats are. There’s a lot of communication involved with that, and then a lot of technical things that we can do as well.

“We’re starting out from a negative in that there’s already a vulnerability. So our goal is to do what we can do to protect as many users as quickly as possible, and then just work as hard as we can to get that done. That’s the calculus that we went through coming up with this, and it’s definitely a hard decision. We put a lot of effort into trying to do the best job we could.”

Outreach

“We get a lot of e-mails every day that come into Adobe’s Product Security Incident Response Team via PSIRT-at-adobe.com, which is the way that most things get reported to us. There are different levels of priority in how we handle things.

“Those reports have a wide variety of quality. Some of the time they’ll say that it’s a bug in Reader, and they send us a Word doc. We also get a lot of contacts from anonymous people that are a one-time set-up, using Gmail accounts or something. There are reports like that, which don’t mean as much to us.

“But whenever something comes in advertised as being an exploit in the wild, then it gets our full attention. The contacts that we got on Monday were from partners that we had established relationships with, very high credibility, and within a few minutes we were able to verify that it appeared to be an exploitable crash against the fully patched version. So we pulled the alarm and everyone moved into our top flight response for these types of things.

“The PSIRT team here at Adobe not only looks after the incoming reports and getting patches out and that sort of thing, but they’re also hooked into the kind of places where you would pick-up the chatter about what’s going on — all the different mailing lists and forums and things like that. The PSIRT team is more tapped into what we see as trends that are happening out there in the real world attacks.”

Moving to background updates

“When we sat down and designed the new beta pilot for Reader, we thought very carefully about how we can protect the most users. Particularly at the consumer or individual level, these are the folks who don’t have managed environments where someone does it for them. For this new updater that we shipped in October, the January release will be the first time that we use auto-updating for the beta users. We’ll learn from that, and then if all goes well, we’ll roll it out as the new updater for all users in the next release.

“What it will do is offer people an automatic download and installation of updates with no user interaction option. Or for people who want more control, it can notify and give them the choice to install, or they can turn it off completely.

“We want to be able to support people who have a well managed environment and who have good reason for why they don’t wish to immediately install an update. But most of the people who need to be protected don’t wish to be bothered by it, so that’s why we’ve got that automatic background update.

“This is something that we’re hoping to make the norm, where everybody just has it set up that way. But we also need to support people who don’t wish to be interrupted when they’re working inside a document, who desire more control. It’s not just at a technology level, but also at the human level, the human interface layer. If people are clicking “no thanks” when the update’s offered today, we need to give them a way to get the update installed without bothering them, without disrupting what they’re doing.”

“Flash Cookies”

“The terminology we use is ‘Flash Player Local Shared Objects’, because they behave quite differently from browser cookies. There are many great uses for local storage, such as improving network performance, queueing stuff up immediately rather than having to wait for network latency.

“It’s actually not any harder managing LSOs through Flash Player, if you measure the number of clicks required. It’s just less familiar to users, to people who know how to go to their browser’s File menu and click on ‘Clear Browser Cookies.’ But doing those same clicks for Flash Player is something that people aren’t as familiar with.

“For a long time we have been trying to work with the web browser vendors for them to open up the API, so that when the user clicks ‘clear browser cookies,’ this will also clear the Flash Player Local Shared Objects. But the browsers don’t expose those APIs today. That’s something that we’ve been working with the browser vendors, because if they can open up that API ability then we can hook into that as Flash Player, so that when the user clicks ‘clear’ it will clear LSOs as well as the browser cookies.

“Our goal is to make it as easy and as intuitive as possible for the users to manage Local Shared Objects. There’s a lot of study going on right now around the user interface and the integration at the browser level of how we can best support that.”

On being the big target

“When you’re looking at it from the attacker’s perspective, the audience reach is a key attraction. Adobe Reader and Flash Player are installed on a lot more machines than Windows is. That massive installed base paints a big bullseye, and that’s not something which is going to change. Reader and Player are ubiquitous software, and the responsibility is on us to do the things we can do to help protect our users.”

A commenting concern

The article is titled “Should Adobe Auto-Update Flash and PDF Reader?”, and I was about to point out to the writer that Adobe Flash Player and Adobe Reader have offered configurable auto-update for years.

But then I noticed the site said “You need to Login or Register to comment,” which implies realworld identification.

And then I noticed that this particular webpage also requested third-party content from a dozen other domains, few of which I recognized. Third-party content on a webpage can set a cookie or log an IP address, subsequently recognizing the surfer across varied webpage domains.

If the site happened to pass a commenter’s realworld identity along to any of these third-parties, then the commenters could be known by realworld name as they subsequently visit other pages, on other domains, which happen to host the same third-party content.

So I’ve got a dilemma — the reporter and site may be legit and may respond well to better info, but they’re recycling old content without original research and are notifying a list of a dozen domains upon each visit. I’m already using an ad-blocker to avoid many of those unexpected third-party requests, and have already invested many years in trying to help commercial commentators get their facts right. Is it worth signing up for an account, and hoping that a comment makes it out of moderation, and that the comment actually makes a difference, if the website already notifies third-party trackers when I arrive, and then wants my realworld details too?

Can we take such sites at face value?

Beat back the hacks

Seems like Techmeme is now listing that Business Week article “Can Adobe Beat Back the Hackers?”, which I mentioned on Twitter yesterday.

Here’s the hot line: “Vulnerabilities in such widely used software can cause myriad problems. More than a dozen sites, including those of The New York Times, USA Today, and Nature, have been infected with fake ads that exploit Adobe software.”

The latter phrase should read “exploit older, un-updated Adobe software.” Attackers will use the newest vulnerabilities in hopes of increasing their catch — no surprise. This article contains the worry, but not the general advice readers need: keep your Internet software current.

The more-interesting part of this exploit is mentioned only in passing… trusted websites cannot always assure the third-party content they serve. The Web, as we know it today, is infected… more last week and last June… even trustworthy sites are not sure what they’re serving you.

(There’s also a line later on: “Historically, Adobe hasn’t had to contend with attacks, so it hasn’t been focused on potential weaknesses.” The Internet Archive has pages from the Macromedia Security Zone dating back to 2002.)

Summary: Yes, criminals are trying to exploit you. But to reduce the risk, keep your Internet software current. And consider using browser software (such as an ad-blocker) to monitor third-party content which may be attached to a trusted site.

Newsgroups considered harmful?

Gavin O. Gorman of Symantec offers readable research into the Trojan.Grups trojan, in which zombie computers receive updated commands by parsing instructions found in newsgroup postings. Here’s the gist:


When successfully logged in, the Trojan requests a page from a private newsgroup, escape2sun. The page contains commands for the Trojan to carry out. The command consists of an index number, a command line to execute, and optionally, a file to download. Responses are uploaded as posts to the newsgroup using the index number as a subject. The post and page contents are encrypted using the RC4 stream cipher and then base64 encoded. The attacker can thus issue confidential commands and read responses.

This is a handy layer of indirection for a zombie master, because public message boards are harder to blacklist than known-compromised servers. But this public command-and-control method also allows security researchers to study message content, replies, and overall volume levels — ironically, the zombie masters are publicly “opening up the source” of their network’s communications.
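For the curious, the encode/decode cycle Symantec describes — RC4, then base64 — is easy to model. This is an analysis sketch with an invented key and message, not anything recovered from the actual trojan; RC4 appears here only because that’s what the malware used, not because it’s a cipher anyone should adopt.

```javascript
// Analysis sketch of the command channel: RC4-encrypt, then base64.
// Key and message are invented examples.
function rc4(keyBytes, dataBytes) {
  // Key-scheduling algorithm
  const S = Array.from({ length: 256 }, (_, i) => i);
  let j = 0;
  for (let i = 0; i < 256; i++) {
    j = (j + S[i] + keyBytes[i % keyBytes.length]) & 0xff;
    [S[i], S[j]] = [S[j], S[i]];
  }
  // Keystream generation, XORed against the data
  const out = new Uint8Array(dataBytes.length);
  let i = 0;
  j = 0;
  for (let k = 0; k < dataBytes.length; k++) {
    i = (i + 1) & 0xff;
    j = (j + S[i]) & 0xff;
    [S[i], S[j]] = [S[j], S[i]];
    out[k] = dataBytes[k] ^ S[(S[i] + S[j]) & 0xff];
  }
  return out;
}

function encodeCommand(key, text) {
  const cipher = rc4(Buffer.from(key), Buffer.from(text, 'utf8'));
  return Buffer.from(cipher).toString('base64'); // what gets posted
}

function decodeCommand(key, b64) {
  const cipher = Buffer.from(b64, 'base64');
  return Buffer.from(rc4(Buffer.from(key), cipher)).toString('utf8');
}
```

Because RC4 is symmetric, the same function encrypts and decrypts… a researcher who recovers the key from one captured sample can read the entire public command stream, which is exactly the “opening up the source” irony noted above.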

In this particular case, debug strings and low posting volumes indicate preliminary testing — but if this turns out to be a useful attack, it seems like it could be adopted fairly quickly.

So, should newsgroups be considered harmful? I don’t see how they could be, considering their proven history of improving global communication. But this article shows that even innocuous network technology is vulnerable to being parasitized by those who don’t yet deal honestly with each other.

When a shadow network is operating on citizens’ machines without their knowledge, and when public communication methods are used to transmit exploitative commands, how should our networks evolve in response? What’s the next step?

An infected Web

The Internet is “the network of all networks”. It is open to all, and this has brought many benefits. But that doesn’t mean our own computers and networks should be open to all. We individuals need to discriminate.

Dan Goodin at The Register has been covering the story of legit websites serving malware. Sites you trust may be bad. Sometimes the attackers gain control through a server exploit, sometimes through password cracking, sometimes through keystroke loggers.

The site owners rarely know they’re distributing malware to their audience. This exploit injects obfuscated JavaScript at the bottom of the site’s front page, redirecting visitors to various pages which attempt to force a download via old browser/plugin exploits.

Keeping your own software up-to-date, private and secure is necessary… the websites you’ve trusted may no longer be trustworthy. There is no “little network” of trust in the Web world — a browser will visit any site, and new hacks can demolish trust zones. (That’s why I’ll trust a separate AIR client more than I will an HTML5 uberbrowser.)

And in such a “network of all networks”, other people getting infected is bad for the rest of us — more noise, more confusion, less clarity.

Surfing the Web is like walking a strange city, particularly one with a high crime rate. The open-to-all nature requires us to be aware, and avoid unsafe situations.

Some sites we trust may be infected. We need to keep Web software up-to-date, and encourage others to do so.

You are the product

Striking study of web beacons and other devices on popular websites… New York Times summarizes:

“Google showed up as the most conspicuous tracker on third-party sites. Google Analytics, a free product that allows online publishers to gather statistics about visitors to their sites, was used on 81 of the top 100 online sites. Cookies from the advertising company DoubleClick, which is owned by Google, were present on 70 of those sites. When combining trackers from those two services, Google had a presence on 92 of the top 100 sites. Others weren’t far behind. Cookies from Atlas, Microsoft’s DoubleClick rival, appeared on 60 sites, and trackers from two other analytics companies, Quantcast and Omniture, showed up on 54 sites… What is striking in the Berkeley students’ report is that in a sample of nearly 400,000 Web domains, Google’s presence remained high, at 88 percent, while those of other companies declined sharply… ‘Our data shows that even if you are not going to Google, if you are browsing the Web they are collecting data about you.’”

Using a cookie-blocker is not enough… any bit of third-party content on a page sends an HTTP request from your IP address to such a central service. Over a surfing session a variety of such requests build up a profile of the surfer at that IP address. This can then be compared with similar session profiles within that general IP block from other days. And, of course, if you sign into a Google service then your name is associated with your IP address.
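The mechanism is worth making concrete: even with cookies blocked, every embedded third-party resource sends a request carrying the visitor's IP address and the embedding page (via the Referer header), and those requests accumulate into a profile. A toy sketch of that aggregation — the IPs, domains, and structure are hypothetical illustration, not any real tracker's logic:

```python
# Toy illustration of cookieless profiling: each third-party resource a page
# embeds triggers an HTTP request that reveals the visitor's IP (the source
# address) and the page being read (the Referer header). No cookie required.
from collections import defaultdict

# Each tuple: (visitor IP, page visited, third-party resource the page loads)
# All values are made-up examples.
observed_requests = [
    ("203.0.113.7", "news-site.example", "analytics.example/beacon.js"),
    ("203.0.113.7", "health-site.example", "analytics.example/beacon.js"),
    ("203.0.113.7", "shopping-site.example", "ads.example/pixel.gif"),
]

# The central service only has to group what it passively receives.
profiles = defaultdict(set)
for ip, page, third_party in observed_requests:
    profiles[ip].add(page)

# One IP's browsing history now spans unrelated sites the visitor never
# knowingly shared with anyone.
print(dict(profiles))
```

Signing into any service run by the same company then attaches a name to that IP-keyed profile, which is the point the Berkeley students' data underscores.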

An ad-blocker is a necessary defense. Just as a Flash-blocker protects you from poor choices by site owners, an ad-blocker prevents websites from advertising your arrival to such central repositories of information.

Is Google actually tracking and analyzing the data they collect? No one knows. They’ve been closed and non-transparent about their privacy practices ever since the initial controversies over their perpetual cookie. Their longtime “special advisor” is a polarizing former Vice President of the US who spearheaded the V-chip effort and was involved in ECHELON and CARNIVORE. The lack of a response to reasonable questions may itself be an answer.

The business model is to sell your exquisitely-qualified attention to advertisers. You are the product. The “open web” is used as a massive profiling tool. You are the product. The process is opaque, closed, proprietary. You are the product.

Many people initially dismiss their own personal privacy — “privacy is an illusion” and all that. Many also think they would never be mugged, and so flash wallets or iPhones on subways and deserted streets. Habits can change very quickly, once your own personal experience changes.

When you visit most sites, Google Knows. That’s too much power to place in such an opaque organization. To the degree you do not minimize your own exposure to such data collection, you are the product.

CNET clickjacking comment

I went through the registration process for CNET, and after creating the account it said my username was already in use. So instead of asking a clarifying question at the original article, I have to make a separate blogpost here, and hope the reporter sees it….

Elinor Mills at CNET today mentioned Flash and webcams during a clickjacking article. I’ll snip out the relevant passages: “In a demo at CNET offices on Thursday, Grossman showed how someone could launch a clickjacking attack using Flash to spy on someone by getting them to turn on their computer Web cam without knowing it… In the Web cam demo, the iFrame created contains a Flash pop-up window that asks the user to grant permission to have the Web cam turned on. When the victim clicks the link, the Web cam is turned on and secretly begins recording everything the user does in front of the computer… In the Web cam scenario, the best defense is probably to put a post-it note or other item over the Web cam lens and to disable the microphone in the software, he said. Flash Player 10 provides some protection by preventing anything from obscuring the security permissions dialogue box, he said… More details are in a white paper on the technique, written by Grossman and Robert Hansen of SecTheory and published in September 2008.”

Key question: Were you using the current Adobe Flash Player, or the version current at the time of last year’s whitepaper?

If someone has a new way to make various browsers obscure Player’s permissions dialog, then we need to know about it. But from the description above, with Player version undescribed, I can’t determine whether there’s a new issue here.

Background: What is “clickjacking”?

(a) It’s a failure in website security where a malevolent third-party has either hacked in their own code, or persuaded a site to use third-party code through social services or advertising — basically a trusted website hosting untrustworthy content. It’s a flaw in website integrity.

(b) It’s a failure in browser security where third-party code can hide what the reader is clicking on — where What You See Is NOT What You Click. The browser vendors each seem to say their offering fixes at least some of the methods to defeat click integrity while competitors’ do not, which makes me wonder whether any browser has truly addressed this failure.

(c) Flash isn’t involved directly in this “What You See Is NOT What You Click” problem. It’s used as a poster child of what can happen when infected sites can take advantage of browser failures.
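For site owners worried about being the visible layer in such an overlay, one widely-discussed defense (not mentioned in the CNET piece) is the X-Frame-Options response header, which asks browsers to refuse to render the page inside another site’s iframe. A minimal sketch using Python’s standard-library HTTP server — the handler name and port are hypothetical, for illustration only:

```python
# Sketch of the anti-framing defense: a site sends "X-Frame-Options: DENY"
# so cooperating browsers will not display the page inside an <iframe>,
# defeating the invisible-overlay trick described above.
from http.server import BaseHTTPRequestHandler, HTTPServer

class NoFramingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>This page refuses to be framed.</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # DENY: never render in a frame. SAMEORIGIN would permit the site
        # to frame its own pages while still blocking third parties.
        self.send_header("X-Frame-Options", "DENY")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve locally (blocks forever):
# HTTPServer(("127.0.0.1", 8080), NoFramingHandler).serve_forever()
```

This protects the framed site’s visitors, but only in browsers that honor the header — it does nothing about the underlying click-integrity failure in browsers that ignore it.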

Summary: There’s a new article, but it is not clear whether there’s a new issue.