Inside Adobe Security

Dennis Fisher and Ryan Naraine recently conducted a great audio interview at Threatpost with Adobe’s Brad Arkin. They talk about what actually happens when a new way is found to do evil things with deployed software. Ryan also produced a full transcript of the audio, and I’ve extracted some of the main ideas below.

(Editing notes: I rearranged different segments into general topics, and did some pretty heavy editing to turn conversational speech into written speech. If in doubt, please refer to Brad’s original words in the audio interview at Threatpost.)

Thanks to Dennis and Ryan for conducting this interview and for publishing the transcript!

The Dec 15 incident

“We received several reports from different partners about a potential new Reader vulnerability, all within a couple of minutes of each other on Monday afternoon. We opened the sample files and did some quick triage work to verify that it was an attack that worked against the latest fully updated versions of Reader and Acrobat. Then, based on the number of contacts that we’d received, we figured this was something that people were seeing and that it was likely to get some coverage in the media soon.

“We posted to our PSIRT blog as soon as that information came in, just saying that this appeared to be real, that we were looking into it, and that we’d have more information later. From there, the team worked on the problem through the night, here in North America and then in our remote offices overseas, and we got the advisory out on Tuesday. That advisory contained the CVE numbers and details about what users can do to mitigate the problem before a patch is available. We then updated the advisory later on Tuesday with the ship schedule.

“Historically, our focus had been on getting a copy of a reproducible bug, then shifting all of our attention to remediating that bug, and then communicating mitigations and that sort of thing. Maybe a year ago we would have largely turned our backs on the details about new exploit techniques or anything fancier, such as escalation of attack levels, because from our perspective it’s the same bug, and we just wanted to get that bug fixed. And that’s where most of our attention was.

“Some of what we’ve changed over the past 12 months is that we’re really putting a lot more energy into understanding how attacks evolve and scale out. For instance, the very first attack using this vulnerability might have been against some high-value individual or company, but then it goes from a very targeted attack to something more widespread. That’s useful for us in how we communicate to users and in the kinds of mitigations that we can design at a more strategic level.

“The PSIRT team handles the security details of figuring out ‘Is it a bug? If it is, what’s the impact?’ They’ve got the special skills for how to handle malicious samples and that sort of thing. And then the major product teams also have incident response personnel that work directly as part of the product teams.”

JavaScript’s new line-item veto

“There’s a new feature that Reader added in October called a JavaScript blacklist. It allows users to define any specific JavaScript API as a blacklist item, which will then not be executed.

“In this case, the vulnerability is in a JavaScript API called ‘docmedia.newplayer.’ By putting that term into the blacklist, any PDF document which calls docmedia.newplayer will be denied. It will deny valid calls as well as malicious calls. This is something individual users can do, and it can also be done by administrators for managed desktop environments, using group policy objects to roll out the change as a registry key.
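(Editing note: on Windows, the blacklist entry is rolled out as a registry value, which is what the Group Policy approach mentioned above pushes to managed desktops. The sketch below is for Adobe Reader 9.x and is only illustrative; the exact key path, version segment, and spelling of the API keyword, commonly written “DocMedia.newPlayer”, should be taken from Adobe’s advisory for your specific product and release.

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Adobe\Acrobat Reader\9.0\FeatureLockDown\cJavaScriptPerms]
    "tBlackList"="DocMedia.newPlayer"

Multiple APIs can be listed in the same value, and Acrobat uses a parallel key under its own product name.)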

“What the JavaScript blacklist does is basically no-op any call to that API. Now there are some JavaScript functions that are used very often, such as verifying date formats or form submission, and if you were to blacklist one of those items, then the document you’re working on wouldn’t work correctly. But docmedia.newplayer is used a lot less often, so for most users their experience and workflows will not be affected.

“The JavaScript blacklist function is the most powerful mitigation. It completely protects users against the attack, and at the same time it will cause the least disruption for legitimate uses of the program.”

Disabling JavaScript, safer operating systems

“Something that’s a lot more disruptive, but also completely mitigates the current attack, is disabling JavaScript altogether. So if the blacklist function is not acceptable for some reason, then disabling JavaScript is an alternative that users can also deploy.
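(Editing note: in Reader 9 on Windows, JavaScript can be turned off either from the preferences dialog or, for a scripted rollout, through the per-user preference that backs it. The registry name below is my reading of Reader’s per-user settings and may differ by version, so verify it against your own installation before deploying it widely.

    Edit > Preferences > JavaScript > uncheck “Enable Acrobat JavaScript”

    [HKEY_CURRENT_USER\Software\Adobe\Acrobat Reader\9.0\JSPrefs]
    "bEnableJS"=dword:00000000

Both routes flip the same setting; the registry form is what an administrator would script across many machines.)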

“But JavaScript is really an integral part of how people do form submissions. Anytime you’re working with a PDF where you’re entering information, JavaScript is used to do things like verify that the date you entered is the right format. If you’re entering a phone number for a certain country it’ll verify that you’ve got the right number of digits… when you click ‘submit’ on the form it’ll go to the right place. All of this has JavaScript behind the scenes making it work, and it’s difficult to remove without causing problems.
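(Editing note: to make the form point concrete, here is a minimal sketch of the kind of field-level validation script a PDF form might carry, written against the standard Acrobat JavaScript API; real forms often rely on Acrobat’s built-in format and submit scripts, which do the same kind of work behind the scenes.

    // Custom validation script on a date field (Acrobat JavaScript)
    var d = util.scand("mm/dd/yyyy", event.value);   // parse the user's entry
    if (d == null) {
        app.alert("Please enter the date as mm/dd/yyyy.");
        event.rc = false;   // reject the entered value
    }

Disable JavaScript and checks like this one silently stop running, which is why Adobe treats it as the more disruptive mitigation.)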

“And in our testing, if you have Windows’ Data Execution Prevention (DEP) enabled, what happens is an attack that otherwise would have worked instead triggers a crash, which does not appear to be exploitable. Now, there are always clever things that people might be able to figure out, but DEP seems to offer another level of protection at the operating system and hardware level, which is separate from the configuration changes.”
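(Editing note: DEP is an operating-system setting rather than a Reader setting. A rough sketch for the Windows versions current at the time: on Vista and Windows 7 the policy can be switched from an elevated command prompt, and on XP it lives in System Properties; both require a reboot, and hardware-enforced DEP also depends on the CPU’s NX/XD support.

    bcdedit /set nx OptOut

    Control Panel > System > Advanced > Performance Settings > Data Execution Prevention

“OptOut” enables DEP for all programs except an explicit exception list; “AlwaysOn” is stricter still.)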

Deployment concerns

“Once we get a fix into the code-base, then we need to roll that fix into the actual update that a user can install and deploy on their machine out in the real world. That process is not security-specific — it’s a question of, ‘How do you take a change in the code and turn that into an installer?’ For us, if you make a one-line change in the code, then you need to roll that into all the different versions and flavors of Reader and Acrobat that are getting deployed on all the different platforms that we support. When you’re dealing with software that gets deployed onto hundreds of millions of machines, the threshold is a lot different.

“Something like 29 different binaries get produced from that process… you have Adobe Reader for Windows Vista, Windows 7, Windows XP, Windows Server, all these different flavors, and then Mac and different Linux flavors. Then you multiply those 29 binaries across 80 languages, or however many are supported depending on the platform and the software versions. Then you need to make sure that in the process of producing all of those different flavors of the actual installer we didn’t introduce any new problems, and that we didn’t introduce anything that might break something else down the line. In all of this, it’s not a matter of just ‘hitting the compile button’ and an hour later you’re done. Once we get a build that looks like it addresses all our needs, we need to run it through all the paces to make sure it will deploy correctly, that it will only change what we’re looking at, and not change configuration settings or anything else that people care about.

“That is the process that takes a long time. We can get a malicious sample, triage it, figure out in the code where the bug is, and then get a fix into that code-base within 12 to 24 hours. Then we test to make sure that the code change we made didn’t break anything else, and we also look around the code to see if there’s anything else that makes us uncomfortable that we should tighten up. All of that gets done pretty quickly.

“But then it’s the build process from there on out, getting everything tested to make sure it works well. Because if we were to roll out a patch that had some tiny little bug that had an impact on less than one percent of users, then you’d still be talking about millions of machines worldwide.”

On effective guidance

“Whenever a report comes in, the first thing that we evaluate is its priority. In this case, credible sources reported it to us, and during our initial research it seemed to be a real issue. Based on the fact that we received three reports, we figured this was something that more than a few people were seeing. And so we posted immediately saying, ‘From our perspective, this is real. We’re looking into it.’ At that point, that was really the only information we had.

“Now, an example of where things can go wrong in the beginning is that these attacks are very heavily obfuscated, because they’re trying to avoid all of the different anti-virus and anti-malware solutions that you’ll have at the gateway and the desktop. The three different organizations that reported the vulnerability to us had each done a little bit of diagnosis themselves, and each had come up with the idea that the bug was in a JavaScript API called util.printd. And when we first looked at it, that’s what we thought, too, because it’s the last call in the attack, and so we figured that was what triggered the crash that would then set up the exploit.

“But if we had gone public with ‘We think it’s in JavaScript, and we think it’s this API. Here’s a link to the blacklist functionality, figure it out yourself, we’re going to start testing and doing more research,’ that would have been wrong. After we did some more research, we found out that it wasn’t actually in util.printd at all; it was in a totally different area of the JavaScript API, in docmedia.newplayer.

“In the early hours of doing the triage work we’re uncovering a lot of information, but not all of it is accurate, and it’s certainly not complete. If a third-party security provider gives guidance that says, ‘Hey, it looks like util.printd is the problem; here’s what you can do,’ and it turns out they’re wrong, they can just say, ‘Oops, sorry.’

“For Adobe, if we roll out information, people assign a certain confidence level to that, and then they’ll go off and take action on it. If an administrator for a managed desktop environment with half a million machines had rolled out this JavaScript blacklist for the wrong API, they would be pretty irritated with us. Multiply that across hundreds of millions of machines, across the entire universe of people who use our software, and you can see why we need to meet a very high confidence threshold before we’ll publish any information.

“And another example about JavaScript is that pretty much all exploits use JavaScript to set up the heap with the malicious executable content before they trigger the crash. The crash itself might or might not be through a JavaScript API. So the very first time you examine an exploit, it might look like it has something to do with JavaScript, but that isn’t always true. Even disabling JavaScript might not protect you, because there might be other ways to prepare the heap and get that malicious content into memory. These are the kinds of behind-the-scenes questions that we have to answer before we can publish information.

“In this case, because we knew it was going to go public, we wanted people to know that we were aware of it, that we were working on it, and that it looked real. In other cases, if someone reports something to us that looks like a targeted attack, where there’s a good chance that this one organization is the only entity in the world who’s seen the malicious PDF, then we might take a day or two to make sure that we really understand it before our first publication is a full advisory with all the details, all the information, all the platforms.

“We’ve got two needles that we measure very carefully. One of them is real-world attack activity. The other one is the fear level — the perception of real world attack activity, whether or not it’s real. Our goal is to drive both of those down into a healthy range where as few people as possible are being attacked, and where people have a good understanding of where the real threats are. There’s a lot of communication involved with that, and then a lot of technical things that we can do as well.

“We’re starting out from a negative in that there’s already a vulnerability. So our goal is to do what we can do to protect as many users as quickly as possible, and then just work as hard as we can to get that done. That’s the calculus that we went through coming up with this, and it’s definitely a hard decision. We put a lot of effort into trying to do the best job we could.”

Outreach

“We get a lot of e-mails every day that come into Adobe’s Product Security Incident Response Team via PSIRT-at-adobe.com, which is the way that most things get reported to us. There are different levels of priority in how we handle things.

“Those reports vary widely in quality. Some of the time they’ll say that it’s a bug in Reader, and then they send us a Word doc. We also get a lot of contacts from anonymous people using one-time Gmail accounts or something like that. Reports like that don’t mean as much to us.

“But whenever something comes in advertised as being an exploit in the wild, it gets our full attention. The contacts that we got on Monday came from partners with established relationships and very high credibility, and within a few minutes we were able to verify that it appeared to be an exploitable crash against the fully patched version. So we pulled the alarm and everyone moved into our top-flight response for these types of things.

“The PSIRT team here at Adobe not only looks after the incoming reports and getting patches out and that sort of thing, but they’re also hooked into the kinds of places where you would pick up the chatter about what’s going on: all the different mailing lists and forums and things like that. The PSIRT team is the one most tapped into the trends we’re seeing in real-world attacks.”

Moving to background updates

“When we sat down and designed the new beta pilot for Reader, we thought very carefully about how we could protect the most users. Particularly at the consumer or individual level, these are the folks who don’t have managed environments where someone does it for them. We shipped this new updater in October, and the January release will be the first time that we use auto-updating for the beta users. We’ll learn from that, and then if all goes well, we’ll roll it out as the new updater for all users in the next release.

“What it will do is offer people automatic download and installation of updates with no user interaction. Or, for people who want more control, it can notify them and give them the choice of whether to install, or they can turn it off completely.

“We want to be able to support people who have a well managed environment and who have good reason for why they don’t wish to immediately install an update. But most of the people who need to be protected don’t wish to be bothered by it, so that’s why we’ve got that automatic background update.

“This is something that we’re hoping to make the norm, where everybody just has it set up that way. But we also need to support people who don’t wish to be interrupted when they’re working inside a document, who desire more control. It’s not just at a technology level, but also at the human level, the human interface layer. If people are clicking ‘no thanks’ when the update’s offered today, we need to give them a way to get the update installed without bothering them, without disrupting what they’re doing.”

“Flash Cookies”

“The terminology we use is ‘Flash Player Local Shared Objects’, because they behave quite differently from browser cookies. There are many great uses for local storage, such as improving network performance by having content ready immediately rather than waiting on network latency.

“It’s actually not any harder to manage LSOs through Flash Player, if you measure the number of clicks required. It’s just less familiar: people know how to go to their browser’s File menu and click ‘Clear Browser Cookies,’ but doing the equivalent clicks for Flash Player is something they aren’t as used to.
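(Editing note: for readers who want to look at this themselves, Flash Player’s storage controls live in the online Settings Manager, which includes a Website Storage Settings panel, and in the Settings dialog reached by right-clicking any Flash content. The underlying data is stored as small .sol files per user; the usual locations, which can vary by OS and Player version, are roughly:

    Windows:  %APPDATA%\Macromedia\Flash Player\#SharedObjects\
    Mac OS X: ~/Library/Preferences/Macromedia/Flash Player/#SharedObjects/
    Linux:    ~/.macromedia/Flash_Player/#SharedObjects/

Deleting the contents of that folder clears the stored LSOs for every site.)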

“For a long time we have been trying to work with the web browser vendors to open up that API, so that when the user clicks ‘clear browser cookies,’ it will also clear the Flash Player Local Shared Objects. But the browsers don’t expose those APIs today. That’s something we continue to work on with the browser vendors, because if they open up that capability, then Flash Player can hook into it, so that when the user clicks ‘clear,’ it clears the LSOs as well as the browser cookies.

“Our goal is to make it as easy and as intuitive as possible for the users to manage Local Shared Objects. There’s a lot of study going on right now around the user interface and the integration at the browser level of how we can best support that.”

On being the big target

“When you’re looking at it from the attacker’s perspective, the audience reach is a key attraction. Adobe Reader and Flash Player are installed on a lot more machines than Windows is. That massive installed base paints a big bullseye, and that’s not something which is going to change. Reader and Player are ubiquitous software, and the responsibility is on us to do the things we can do to help protect our users.”