Archive for January, 2010

Beggars Banquet

John Nack takes on some of the rougher speech out there about Flash. I’ve printed it out, read it half-a-dozen times already. I think you’ll find it enjoyable too. Go read. :)

Two quick points, one about it, one beyond it:

It’s a well-written essay. He has both conversational tone and corporate sensitivity… either is hard. The first screenful of text gives the high-level overview of all the rest. Subjects are blocked out into separate areas. It’s personal, not pompous. The essay is refreshingly accessible.

“If you can’t explain it simply, you don’t understand it well enough.” – Albert Einstein

Beyond the text, the Player team is doing something phenomenal… hidden in plain sight, but under the popular radar. This past year-plus they’ve been working under the Open Screen Project mandate. The amount of industry-wide cooperation in this is unprecedented… 19 of the Top 20 handset makers, Internet television, other devices. Flash Player Engineering has been uniting these devices, smoothing over the edges — engineering predictable advanced capability despite hardware changes, API changes, schedule changes. No other engineering team has the depth of cross-device knowledge and experience that they have earned. So many moving pieces in this project, and yet the Player team is making it happen. And they’re doing so in an environment of multi-partner cooperative chaos, rather than in a single-vendor controlled stack. Profound ramifications.

“Poets are the unacknowledged legislators of the world.” – Percy Bysshe Shelley

More later, but right now, please go read John’s “Sympathy for the Devil”.

Gordon, formats, runtimes

No big news here… just tying together observations on different types of file formats, and how they interact with different types of local runtime engines.

Last week Tobias Schneider released a project named Gordon, a JavaScript library which parses a SWF and renders all SWF1 tags [demos]. It’s in the spirit of the previous “SVG in SWF” work from Helen Triolo, Claus Wahlers, Brad Neuberg and others — rendering one file format in a runtime designed for a different file format. Geeky, and admirable.

I liked seeing Gordon arrive, because it proves that SWF, SVG and such are just file formats, interconvertible with varying degrees of fidelity — of the same species. Neither is a black box, both have been publicly specified for over a decade, and different types of code can produce, consume, and work with them. Anybody can use either, do whatever they want with ‘em. SWF and SVG are both useful and flexible file formats.
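Gordon’s core trick, reading SWF bytes with JavaScript, is less exotic than it sounds, because the header layout has been public for years. Here’s a minimal sketch (my own illustration, not Gordon’s actual code) of parsing the fixed 8-byte SWF header:

```javascript
// Minimal sketch of reading the public SWF header fields in plain JavaScript.
// Per the published SWF spec: bytes 0-2 are the signature ("FWS" = uncompressed,
// "CWS" = zlib-compressed), byte 3 is the format version, and bytes 4-7 are the
// total file length as a little-endian uint32.
function parseSwfHeader(bytes) {
  const signature = String.fromCharCode(bytes[0], bytes[1], bytes[2]);
  if (signature !== "FWS" && signature !== "CWS") {
    throw new Error("Not a SWF file");
  }
  return {
    compressed: signature === "CWS",
    version: bytes[3],
    fileLength: bytes[4] | (bytes[5] << 8) | (bytes[6] << 16) | (bytes[7] << 24),
  };
}

// Example: the first 8 bytes of an uncompressed SWF1 file, 256 bytes long.
const header = parseSwfHeader(
  new Uint8Array([0x46, 0x57, 0x53, 1, 0x00, 0x01, 0x00, 0x00])
);
// → { compressed: false, version: 1, fileLength: 256 }
```

After the header, the rest of a SWF is a sequence of typed tags — which is exactly the part a renderer like Gordon walks through and translates into drawing calls.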

But some of the reaction was a little strange — I’d rather not embarrass people by linking to articles, but (edited) titles like “Gordon Lets You Run Flash On Your iPhone” and “Open Source JavaScript to Replace Flash?” give a flavor. Much of the initial commentary did not seem to distinguish between a file format as a set of instructions, and a runtime as the clientside engine which executes those instructions.

Using instructions in unexpected ways is fun. Examples include Microsoft Gestalt (writing Ruby or Python in an HTML page), Flash Pro (the announced ability to export ActionScript as iPhone-native), and much of the Flash 3D work (taking the Collada modeling file as a datasource). On the flip side, XML is a file format which was expressly designed for universal use.

But each of these file formats is then processed by an engine of some type, often a clientside runtime. It’s the interaction of format and runtime that determines the actual feature set, the actual performance, the actual deployment cost.

Here’s an example of the confusion: “While the open source Gordon is available to all, it still doesn’t solve one of Flash’s biggest problems. These SWF files still hog the CPU. One demo, a simple vector graphic of a tiger, throws my desktop browser up to around 100% CPU usage.”

Do you see the problem? The author assumes that his performance is controlled by the content — the instructions — rather than the way those instructions are processed in his environment.

Another example: “This frees Flash from requiring a plug-in. But it’s also a proof of concept that shows that you can do all the cool, proprietary-plug-in stuff using just plain HTML.” Even if this JavaScript library could acceptably handle all the SWF10 tags (instead of all SWF1 tags), it would still be a JavaScript library, downloaded afresh each time. Sending your runtime code as file format is not as efficient as using a native local runtime engine.

Perhaps the most concerning theme: “Any developer wishing to get their Flash files to work on iPhone just needs to ensure they include an HTML wrapper and through the power of Gordon it should ‘just work’.” Here there seems to be an assumption that all runtimes are interchangeable, and that certain file formats are more magic than others.

When a Gordon project “runs on an iPhone”, it’s actually a SWF being parsed by JavaScript instructions which are executed by Apple’s Safari runtime engine. Although this SWF file format is being used, it’s more properly a Safari app than a Flash app by that point.

It’s fun to use file formats in unexpected ways. But different runtime engines are tuned for different jobs, optimized for different types of instructions. Flash’s textfields can display HTML, but you wouldn’t use that to replace your local dedicated HTML-eating runtime engine. Different runtimes are engineered to take best advantage of different file formats.

I like the Gordon project, because it shows there’s nothing mysterious about SWF. Just like HTML, the file format definitions have been public for over a decade. They’re of similar nature, similar usability.

There are differences — SWF is binary rather than text, and that’s a meaningful difference. Another is that HTML is about hypertext, while SWF has been designed from the start for rich-media interactivity. Or you could argue that Adobe has provided saner governance over SWF than the WHATWG has over “HTML5”. The biggest difference between SWF and HTML is likely in the predictability of the runtime engine.

There are differences between HTML and SWF, but even a JavaScript engine can understand simple SWF files… nothing mysterious or alien about it.

[Update: Changed “some SWF1 tags” to “all SWF1 tags”… my apologies to Tobias, I had forgotten that the FutureSplash Animator version had only a dozen logic elements!]

Private Browsing

Good news… Adobe Flash Player 10.1 can tie into the “Private Browsing Mode” of some browsers, meaning that any local storage is flushed at the end of each session.

I haven’t worked with it across browsers myself yet, but it already tests successfully within Microsoft Internet Explorer 8.0, Mozilla Firefox 3.5, and Google Chrome 1.0. Apple Safari is also reported to support it in the future. (I’m not sure about Opera’s Private Mode, especially in light of their quotes about plugins, but I hope it will also work in Opera.)

This won’t matter for most of us, but is good protection on shared screens (libraries, hotels, etc)… if we enter a password on a public computer, this could clean out all traces in both the browser’s cache and Flash’s Local Storage.

The Flash Player’s local storage options have existed since Player 6 in 2002, possibly earlier. They’re necessary for storing application state across sessions, and also for synching with servers. Back then browsers rarely offered interfaces for managing their own local storage, so Player offered a mini-UI on a context-click, and a larger Settings interface on an adobe.com webpage to expose your local data store. Now that browsers are offering more mature privacy controls — and ones with which plugins can connect! — it’s great to see many control aspects united in a common interface.

This may cause complications for content developers, however. While “Private Mode” makes most sense for public screens, its alternate name of “Porn Mode” implies use on family machines too. If someone returns to a game after entering Private Mode, then the game may not recognize their previous levels of accomplishment. Even within the same Private Mode session, browsers vary in how they handle inter-page communication, so it may be difficult to retain app-state across HTML refreshes. The last half of the Developer Connection article has more info on potential new support costs.
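For developers handling this, the defensive pattern is simple: treat persisted state as optional. A sketch in JavaScript (the function and key names here are my own, illustrative only), written against a generic key/value store so the same shape covers browser storage or a wrapper over a Flash Local Shared Object:

```javascript
// Illustrative sketch: content that saves progress locally must tolerate that
// storage being flushed, e.g. at the end of a Private Browsing session.
// "storage" is any object with the getItem/setItem shape, such as localStorage.
function loadProgress(storage) {
  const saved = storage.getItem("gameProgress");
  if (saved === null) {
    // Nothing persisted: a first visit, or a prior private session was
    // flushed. Fall back to defaults instead of failing.
    return { level: 1, score: 0 };
  }
  return JSON.parse(saved);
}

function saveProgress(storage, progress) {
  storage.setItem("gameProgress", JSON.stringify(progress));
}

// Tiny in-memory store standing in for localStorage in this example.
const store = {
  data: {},
  getItem(key) { return key in this.data ? this.data[key] : null; },
  setItem(key, value) { this.data[key] = String(value); },
};

saveProgress(store, { level: 7, score: 1200 });
const restored = loadProgress(store); // state survives a normal session
store.data = {};                      // simulate a private-session flush
const fallback = loadProgress(store); // graceful defaults, no crash
```

The point isn’t the particular store — it’s that content which assumes its saved state will always be there will behave badly for Private Mode visitors.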

If you’re working in this area of local storage, then you might also want to check into “3rd-Party Cookies, DOM Storage and Privacy”, a recent survey of how different browsers deal with different third-party storage requests in different user-modes.

Cooperation on “Private Mode” settings is a positive step, but there’s still more work to do… as Adobe’s Brad Arkin recently explained: “For a long time we have been trying to work with the web browser vendors for them to open up the API, so that when the user clicks ‘clear browser cookies,’ this will also clear the Flash Player Local Shared Objects. But the browsers don’t expose those APIs today. That’s something that we’ve been working with the browser vendors, because if they can open up that API ability then we can hook into that as Flash Player, so that when the user clicks ‘clear’ it will clear LSOs as well as the browser cookies. Our goal is to make it as easy and as intuitive as possible for the users to manage Local Shared Objects. There’s a lot of study going on right now around the user interface and the integration at the browser level of how we can best support that.”

(btw, I do not agree with pundits’ speech that “privacy is really, really dead”… most of these arguments are of the form “privacy is not absolute; therefore it does not exist”. Just as with protection from crime or disease, the goal is to minimize your exposure to risk. In this case it’s prudent to minimize the ability of web trackers to build proprietary databases which are then potentially vulnerable to internal or external breaches. Just because we likely cannot resist a highly determined privacy attack does not imply that we should fail to protect against all other privacy risks.)

CES 2010 thoughts

This week’s Consumer Electronics Show should deliver on some early guidance given last year, about the home screen finally becoming an interactive communications device. I don’t know what the announcements will be, but here are some tips to put them in context.

Main theme: As phones and televisions become computers, nearly all manufacturers are optimizing for SWF as an interface layer.

  • This is only the very first generation. The widespread adoption by manufacturers signals a good future, but it will take us all a few generations to really understand multi-device interface design.
  • The early announcements may not make much mention of Flash. That’s normal — they’re announcing their new device, not a universal runtime. For the rest of us the big news is common cross-device capability, but most of the press material should be about device differences.
  • The early shipments will likely have differences in what’s available when — many, many schedules are being cross-plotted to each other, across an exceptionally large range of companies. But a key requirement in Open Screen Project is over-the-air updating. Player fragmentation should be relatively low.
  • I don’t know what the business opportunities will be, what types of stores and financial arrangements will come to pass. Apple’s App Store did a good thing by cutting developers a check. We’ll learn more of different types of contracts over the coming year.
  • Some devices may use Flash mainly in-the-browser, while others use it as a native interface layer, or as a user-application layer, or perhaps even as a video overlay layer. Particularly in this very first generation, different manufacturers may make different choices.
  • Most of the “small screen” news should hit next month, at Mobile World Congress. One of the difficulties here is that today’s “World Wide Web” has been designed for workstation screens. Some sites do try to degrade-to-mobile, while others make a special mobile site, but webdesign-for-devices has in general been a moving target. Adobe has been doing outreach to many key websites to improve the user experience, but The Web as a whole may be a little rocky at first.
  • Many of these devices will have HTML renderers too. Brands and versions — and therefore capabilities — will vary. Flash will offer more advanced capabilities, more predictably, more widely.
  • There will be a very strong tendency in popular conversation to port today’s workstation use-cases to new devices. But your TV probably doesn’t need an email program, nor your car a WWW browser. We have to figure out how people can use the entire Internet, most appropriately, when they’re using the new device. We humans tend to see new things in terms of the past. Follow your instinct, not the crowd.
  • My own instinct is that the big home screen will take off when it adds a social layer atop viewing, when it’s used as a two-way communication device with distant people you already know. Early social networks like Twitter, Facebook, even Digg give only a hint of how we’ll naturally interact with our TVs. Think outside the box.

Summary: We’re in a transition year. Very exciting time, very promising, but 摸着石头过河 — we must cross the river by feeling the stones with our feet, it’s hard to predict the exact path beforehand. The other side sure does look nice, though…. ;-)

Inside Adobe Security

Dennis Fisher and Ryan Naraine recently conducted a great audio interview at Threatpost with Adobe’s Brad Arkin. They talk about what actually happens when a new way is found to do evil things with deployed software. Ryan also produced a full transcript of the audio, and I’ve extracted some of the main ideas below.

(Editing notes: I rearranged different segments into general topics, and did some pretty heavy editing to turn conversational speech into written speech. If in doubt, please refer to Brad’s original words in the audio interview at Threatpost.)

Thanks to Dennis and Ryan for conducting this interview and for publishing the transcript!

The Dec 15 incident

“We received several reports from different partners about a potential new Reader vulnerability, all within a couple of minutes of each other on Monday afternoon. We opened these files and did some quick triage work to verify that it was an attack that worked against the latest fully updated versions of Reader and Acrobat. Then, based on the number of contacts that we’d received, we figured this was something that people were seeing and that it was likely to get some coverage in the media soon.

“We posted to our PSIRT blog as soon as that information came in, just saying that this appeared to be real, we were looking into it, and that we’d have more information later. From there, the team worked here in North America, and then in our remote offices overseas, on the problem through the night and we got the advisory out on Tuesday. That advisory contained the CVE numbers, some details about what some users can do in order to mitigate the problem before a patch is available. We then updated the advisory later on Tuesday with the ship schedule.

“Our focus, historically, had been on getting a copy of a reproducible bug, and then we shifted all of our focus into remediating the bug, and then communicated mitigations and that sort of thing. Maybe a year ago we would have largely turned our backs on the details about new exploit techniques or anything fancier, such as escalation of attack levels, because from our perspective it’s the same bug, and we just want to get that bug fixed. And that’s where most of our attention was.

“Some of what we’ve changed over the past 12 months is that we’re really putting a lot more energy into understanding how attacks evolve and scale out. For instance, the very first attack using this vulnerability might have been against some high value individual or company, but then it goes from very targeted attacks to more widespread. That’s useful for us in how we communicate to the users and the kinds of mitigations that we can design at a more strategic level.

“The PSIRT team handles the security details of figuring out ‘Is it a bug? If it is, what’s the impact?’ They’ve got the special skills for how to handle malicious samples and that sort of thing. And then the major product teams also have incident response personnel that work directly as part of the product teams.”

JavaScript’s new line-item veto

“There’s a new feature that Reader added in October called a JavaScript blacklist. It allows users to define any specific JavaScript API as a blacklist item, which will then not be executed.

“In this case, the vulnerability is in a JavaScript API called ‘docmedia.newplayer.’ By putting that term into the blacklist, any PDF document which calls docmedia.newplayer will be denied. It will deny valid calls as well as malicious calls. This is something individual users can do, and it can also be done by administrators for managed desktop environments, using group policy objects to roll out the change as a registry key.

“What the JavaScript blacklist does is basically no-op any call to that API. Now there are some JavaScript functions that are used very often, such as verifying date formats or form submission, and if you were to blacklist one of those items, then the document you’re working on wouldn’t work correctly. But docmedia.newplayer is used a lot less often, so for most users their experience and workflows will not be affected.

“The JavaScript blacklist function is the most powerful mitigation. It completely protects users against the attack, and at the same time it will cause the least disruption for legitimate uses of the program.”
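To illustrate the mechanism Brad describes — this is my own generic sketch, not Adobe’s implementation, and the `hostApi` object and its method names are hypothetical — a blacklist can simply replace each listed API with a no-op before any document script gets to run:

```javascript
// Illustrative sketch only (not Adobe's code). The idea behind a
// "JavaScript API blacklist": calls to a listed API are swapped for a no-op,
// so documents invoking it -- maliciously or legitimately -- get nothing back.
function applyBlacklist(api, blacklist) {
  for (const name of blacklist) {
    if (typeof api[name] === "function") {
      api[name] = function () {
        // Denied: the original implementation is never reached.
        return undefined;
      };
    }
  }
  return api;
}

// Hypothetical host object standing in for a document-scripting API.
const hostApi = {
  newPlayer: function () { return "media player started"; },
  printd: function () { return "formatted date"; },
};

applyBlacklist(hostApi, ["newPlayer"]);
// hostApi.newPlayer() now returns undefined; hostApi.printd() still works.
```

This also shows why it’s such a surgical mitigation: only the listed call is neutered, so everyday functions like date formatting keep working.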

Disabling JavaScript, safer operating systems

“Something that’s a lot more disruptive, but also completely mitigates the current attack, is disabling JavaScript altogether. So if the blacklist function is not acceptable for some reason, then disabling JavaScript is an alternative that users can also deploy.

“But JavaScript is really an integral part of how people do form submissions. Anytime you’re working with a PDF where you’re entering information, JavaScript is used to do things like verify that the date you entered is the right format. If you’re entering a phone number for a certain country it’ll verify that you’ve got the right number of digits… when you click ‘submit’ on the form it’ll go to the right place. All of this has JavaScript behind the scenes making it work, and it’s difficult to remove without causing problems.

“And in our testing, if you have Windows’ Data Execution Prevention (DEP) enabled, what happens is an attack that otherwise would have worked instead triggers a crash, which does not appear to be exploitable. Now, there are always clever things that people might be able to figure out, but DEP seems to offer another level of protection at the operating system and hardware level, which is separate from the configuration changes.”

Deployment concerns

“Once we get a fix into the code-base, then we need to roll that fix into the actual update that a user can install and deploy on their machine out in the real world. That process is not security-specific — it’s a question of, ‘How do you take a change in the code and turn that into an installer?’ For us, if you make a one-line change in the code, then you need to roll that into all the different versions and flavors of Reader and Acrobat that are getting deployed on all the different platforms that we support. When you’re dealing with software that gets deployed onto hundreds of millions of machines, the threshold is a lot different.

“Something like 29 different binaries get produced from that process… you have Adobe Reader for Windows Vista, Windows 7, Windows XP, Windows Server, all these different flavors, and then Mac, different Linux flavors. Out of those 29 binaries, then you multiply that across 80 languages, or however many are supported depending on the platform and the software versions. Then, you need to make sure that in the process of producing all of those different flavors of the actual installer, that we didn’t introduce any new problems, and that we didn’t introduce anything that might break something else down the line. In all of this, it’s not a matter of just ‘hitting the compile button’ and then an hour later you’re done. Once we get a build that looks like it addresses all our needs, then we need to run it through all the paces to make sure this will deploy correctly, that it will only change what we’re looking at, and not change configuration settings or anything else that people care about.

“That is the process that takes a long time. We can get a malicious sample, triage it, figure out in the code where the bug is, and then get a fix into that code-base within 12 to 24 hours. Then we test to make sure that the code change we made didn’t break anything else, and we also look around the code to see if there’s anything else that makes us uncomfortable that we should tighten-up. All of that gets done pretty quickly.

“But then it’s the build process from there on out, getting everything tested to make sure it works well. Because if we were to roll out a patch that had some tiny little bug that had an impact on less than one percent of users, then you’d still be talking about millions of machines worldwide.”

On effective guidance

“Whenever a report comes in, the first thing that we evaluate is its priority. In this case credible sources reported it to us, and during our initial research it seemed to be a real issue. Based on the fact that we received three reports, we figured this was something that more than a few people are seeing. And so we posted immediately saying, ‘From our perspective, this is real. We’re looking into it.’ At that point, that was really the only information we had.

“Now, an example of where things can go wrong in the beginning is that these attacks are very heavily obfuscated, because they’re trying to avoid all of the different anti-virus and anti-malware solutions that you’ll have at the gateway and the desktop. The three different organizations that reported the vulnerability to us, each of them had done a little bit of diagnosis themselves, and each of them had come up with the idea that the bug was in a JavaScript API called util.printd. And when we first looked at it, that’s what we thought, too, because it’s the last call in the attack, and so we figured that was triggering the crash that would then set up the exploit.

“But if we had gone public with ‘We think it’s in JavaScript, and we think it’s this API. Here’s a link to the blacklist functionality, figure it out yourself, we’re going to start testing and doing more research,’ that would have been wrong. After we did some more research, we found out that it wasn’t actually in util.printd at all — it was in a totally different area of the JavaScript API, in docmedia.newplayer.

“In the early hours of doing the triage work we’re uncovering a lot of information, but not all of it is accurate, and it’s certainly not complete. If a third-party security provider gives guidance that says, ‘Hey, it looks like util.printd is the problem; here’s what you can do,’ and if it turns out they’re wrong, they can just say, ‘Oops, sorry.’

“For Adobe, if we roll out information, people assign a certain confidence level to that, and then they’ll go off and take action on it. If an administrator for a managed desktop environment with half a million machines had rolled out this JavaScript blacklist for the wrong API, they would be pretty irritated with us. And if you multiply that across hundreds of millions of machines, across the entire universe of people that use our software, then that’s why we need a very high confidence threshold before we’ll publish any information.

“And another example about JavaScript is that pretty much all exploits use JavaScript in order to set up the heap with the malicious executable content before they trigger the crash. The crash itself might or might not be through a JavaScript API. So the very first time you examine an exploit, it might look like it has something to do with JavaScript, but that isn’t always true. Even if you disable JavaScript it might not protect you, because there might be other ways to prepare the heap in order to get this external content into memory. These are the kinds of behind-the-scenes questions that we have to answer before we can publish information.

“In this case, because we knew it was going to go public, we wanted people to know that we were aware of it, that we were working on it, and that it looked real. In other cases, if someone reports something to us and it looks like it was a targeted attack (there’s a good chance that one organization was the only entity in the world who’s seen this malicious PDF), then we might take a day or two in order to make sure that we really understand it before we do a full advisory in our first publication with all the details, all the information, all the platforms.

“We’ve got two needles that we measure very carefully. One of them is real-world attack activity. The other one is the fear level — the perception of real world attack activity, whether or not it’s real. Our goal is to drive both of those down into a healthy range where as few people as possible are being attacked, and where people have a good understanding of where the real threats are. There’s a lot of communication involved with that, and then a lot of technical things that we can do as well.

“We’re starting out from a negative in that there’s already a vulnerability. So our goal is to do what we can do to protect as many users as quickly as possible, and then just work as hard as we can to get that done. That’s the calculus that we went through coming up with this, and it’s definitely a hard decision. We put a lot of effort into trying to do the best job we could.”

Outreach

“We get a lot of e-mails every day that come into Adobe’s Product Security Incident Response Team via PSIRT-at-adobe.com, which is the way that most things get reported to us. There are different levels of priority in how we handle things.

“Those reports have a wide variety of quality. Some of the time they’ll say that it’s a bug in Reader, and they send us a Word doc. We also get a lot of contacts from anonymous people that are a one-time set-up, using Gmail accounts or something. There are reports like that, which don’t mean as much to us.

“But whenever something comes in advertised as being an exploit in the wild, then it gets our full attention. The contacts that we got on Monday were from partners that we had established relationships with, very high credibility, and within a few minutes we were able to verify that it appeared to be an exploitable crash against the fully patched version. So we pulled the alarm and everyone moved into our top flight response for these types of things.

“The PSIRT team here at Adobe not only looks after the incoming reports and getting patches out and that sort of thing, but they’re also hooked into the kind of places where you would pick up the chatter about what’s going on — all the different mailing lists and forums and things like that. The PSIRT team is more tapped into what we see as trends that are happening out there in the real world attacks.”

Moving to background updates

“When we sat down and designed the new beta pilot for Reader, we thought very carefully about how we can protect the most users. Particularly at the consumer or individual level, these are the folks who don’t have managed environments where someone does it for them. For this new updater that we shipped in October, the January release will be the first time that we use auto-updating for the beta users. We’ll learn from that, and then if all goes well, we’ll roll it out as the new updater for all users in the next release.

“What it will do is offer people automatic download and installation of updates, with no user interaction. Or for people who want more control, it can notify them and give them the choice to install, or they can turn it off completely.

“We want to be able to support people who have a well managed environment and who have good reason for why they don’t wish to immediately install an update. But most of the people who need to be protected don’t wish to be bothered by it, so that’s why we’ve got that automatic background update.

“This is something that we’re hoping to make the norm, where everybody just has it set up that way. But we also need to support people who don’t wish to be interrupted when they’re working inside a document, who desire more control. It’s not just at a technology level, but also at the human level, the human interface layer. If people are clicking “no thanks” when the update’s offered today, we need to give them a way to get the update installed without bothering them, without disrupting what they’re doing.”

“Flash Cookies”

“The terminology we use is ‘Flash Player Local Shared Objects’, because they behave quite differently from browser cookies. There are many great uses for local storage, such as improving network performance, queueing stuff up immediately rather than having to wait for network latency.

“It’s actually not any harder managing LSOs through Flash Player, if you measure the number of clicks required. It’s just less familiar to users, to people who know how to go to their browser’s File menu and click on ‘Clear Browser Cookies.’ But doing those same clicks for Flash Player is something that people aren’t as familiar with.

“For a long time we have been trying to work with the web browser vendors for them to open up the API, so that when the user clicks ‘clear browser cookies,’ this will also clear the Flash Player Local Shared Objects. But the browsers don’t expose those APIs today. That’s something that we’ve been working with the browser vendors, because if they can open up that API ability then we can hook into that as Flash Player, so that when the user clicks ‘clear’ it will clear LSOs as well as the browser cookies.

“Our goal is to make it as easy and as intuitive as possible for the users to manage Local Shared Objects. There’s a lot of study going on right now around the user interface and the integration at the browser level of how we can best support that.”

On being the big target

“When you’re looking at it from the attacker’s perspective, the audience reach is a key attraction. Adobe Reader and Flash Player are installed on a lot more machines than Windows is. That massive installed base paints a big bullseye, and that’s not something which is going to change. Reader and Player are ubiquitous software, and the responsibility is on us to do the things we can do to help protect our users.”