Archive for December, 2009

Background on Reader Update Ship Schedule

We posted an update to Security Advisory APSA09-07 that reflects the target ship date of January 12, 2010 for the update to remediate vulnerability CVE-2009-4324. I thought folks might be interested in some of the analysis that went into developing the schedule for the fix, so let me share some of the details in this post.
We evaluated two different options for patching this vulnerability:

  1. Stop everything else and start work immediately on an out-of-cycle security update to resolve this vulnerability with a one-off fix. We made major investments as part of our security initiative earlier this year that allow us to deliver patches more quickly. We estimated that delivering an out-of-cycle update would require somewhere between two and three weeks. Unfortunately, this option would also negatively impact the timing of the next quarterly security update for Adobe Reader and Acrobat scheduled for January 12, 2010.
  2. Roll the fix for vulnerability CVE-2009-4324 into the code branch for the scheduled January 12, 2010 release. The team determined that by putting additional resources over the holidays towards the engineering and testing work required to ship a high confidence fix for this issue with low risk of introducing any new problems, they could deliver the fix as part of the quarterly update on January 12, 2010.

Two important considerations contributed to our decision to select the second option:

  • JavaScript Blacklist mitigation – This new feature, introduced with the October quarterly update in Adobe Reader and Acrobat versions 9.2 and 8.1.7, allows individuals as well as administrators of large, enterprise-managed desktop environments to easily disable access to individual JavaScript APIs. More details on the JavaScript Blacklist mitigation are available here, and a minimal configuration sketch follows this list. The feature design and our testing for this specific vulnerability indicate the JavaScript Blacklist is an effective mitigation against the threat without breaking other workflows that rely on JavaScript or other JavaScript APIs.
  • Customer schedules – The next quarterly security update for Adobe Reader and Acrobat, scheduled for release on January 12, 2010, will address a number of security vulnerabilities that were responsibly disclosed to Adobe. We are eager to get fixes for these issues out to our users on schedule, and many organizations are already preparing for the January 12, 2010 update. Delaying the regularly scheduled quarterly release to accommodate an out-of-cycle update would be a significant negative for them. Additionally, an informal poll we conducted indicated that most of the organizations we talked with favored the second option because it better aligns with their schedules.
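For administrators of managed Windows desktops, the following is a minimal sketch of how a JavaScript API could be added to the blacklist programmatically. The registry key path, value name, and API name shown are assumptions for illustration only; consult the blacklist documentation referenced above for the authoritative settings.

    # Minimal sketch (assumptions): add a JavaScript API name to the Reader
    # JavaScript Blacklist via the Windows registry. The key path, value name,
    # and API name below are illustrative and should be verified against the
    # official blacklist documentation. Requires administrative rights.
    import winreg

    KEY_PATH = r"SOFTWARE\Policies\Adobe\Acrobat Reader\9.0\FeatureLockDown\cJavaScriptPerms"
    API_TO_BLOCK = "DocMedia.newPlayer"  # illustrative API name

    key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH)
    try:
        current, _ = winreg.QueryValueEx(key, "tBlackList")
    except FileNotFoundError:
        current = ""
    entries = [e for e in current.split("|") if e]
    if API_TO_BLOCK not in entries:
        entries.append(API_TO_BLOCK)
    winreg.SetValueEx(key, "tBlackList", 0, winreg.REG_SZ, "|".join(entries))
    winreg.CloseKey(key)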

This is just a brief description of some of the points we considered in our analysis. Ultimately, the decision came down to what we could do to best mitigate threats to our customers, a critical priority to everyone at Adobe – and one we take very seriously.

Conquering Information Risk Management

Managing information risk is a complex business these days, especially when you look at (1) the range of information you need to protect, (2) the breadth of risks you need to mitigate, and (3) the management policies and tools available to today’s IT security professionals to protect that information. However:

“A well-realized information risk management strategy has other benefits [beyond security]: enhanced business agility, competitiveness, efficiency and cost savings.”

In other words, you can’t do without it!

The problem? According to Deloitte’s annual Global Security and Privacy Survey, on average only half of the companies surveyed had formal security policies or strategies. Not a great foundation on which to build risk management!

I recently wrote an article in Security Products magazine that confronts these challenges head-on and provides some tips on navigating the “mind-boggling” task of information risk management.

Read the article here.

How to properly redact PDF files

Redaction was in the news again today with two large organizations publishing documents that weren’t properly redacted. So we’d like to remind everyone that removing sensitive information from an electronic document is easy…

Continue reading…

Fuzzing Reader – Lessons Learned

Kyle Randolph here. Today we have a guest post by Pat Wibbeler, who led a fuzzing tiger team as part of the Adobe Reader and Acrobat Security Initiative.
As a Software Engineering Manager on Adobe Reader, I take it personally every time an exploit is found in our code. I’ve always taken pride in my work, and I’m particularly proud to work with many brilliant engineers on a product installed on hundreds of millions of desktops.
This ubiquity makes Adobe Reader an appealing target for attack. We’re going to great lengths to protect our users through the Adobe Reader and Acrobat Security Initiative. One major component of this ongoing initiative is application fuzzing. I led a task force in fuzzing Reader when the security initiative began. We have always had fuzzing efforts within the Reader team, but this project scaled our efforts in key areas. I thought I’d share some of our experiences.
First – what is “fuzzing”? One of the most common sources of application vulnerabilities is improper input validation. When developers fail to validate input data completely, an attacker may be able to craft application input that allows the attacker to gain control of the application or system. PDF files are the most common type of input consumed by Adobe Reader, and developers and testers intuitively test PDF parsing with valid cases – can Reader properly render a PDF generated by Adobe Acrobat or another PDF generator? Fuzzing, by contrast, automatically feeds invalid and unexpected PDF data to the application, probing for cases where the format is poorly validated. For more information, see the Wikipedia entry on fuzzing.
There are several existing fuzzers, and they all work basically like this:
[Figure: high-level PDF fuzzer workflow]
Valid data is fed into a fuzzer. The fuzzer mutates the data, creating invalid data, and feeds it to the program, monitoring for a crash or fault. In our case, the fuzzer mutates a PDF file and launches Adobe Reader, monitoring and logging any crashes or faults. Crashes are then analyzed and prioritized according to whether or not they are likely to be exploitable.
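To make the loop concrete, here is a deliberately simplified sketch of that mutate, launch, and monitor cycle in Python. The Reader path, the timeout, and the exit-code crash heuristic are placeholder assumptions; a real fuzzer such as Peach uses much richer mutation strategies and attaches a proper fault monitor rather than inspecting exit codes.

    # Simplified sketch of the mutate -> launch -> monitor loop described above.
    # The Reader path, timeout, and exit-code crash heuristic are placeholder
    # assumptions; real fuzzers attach a debugger/monitor and triage faults.
    import random, shutil, subprocess

    READER = r"C:\Program Files\Adobe\Reader 9.0\Reader\AcroRd32.exe"  # assumed install path
    SEED_PDF = "seed.pdf"

    def mutate(data: bytes, flips: int = 16) -> bytes:
        buf = bytearray(data)
        for _ in range(flips):
            i = random.randrange(len(buf))
            buf[i] ^= 1 << random.randrange(8)  # flip one random bit
        return bytes(buf)

    seed = open(SEED_PDF, "rb").read()
    for i in range(1000):
        case = f"case_{i}.pdf"
        with open(case, "wb") as f:
            f.write(mutate(seed))
        try:
            proc = subprocess.run([READER, case], timeout=10)
            crashed = proc.returncode not in (0, 1)  # crude fault heuristic
        except subprocess.TimeoutExpired:
            crashed = False  # treated as a hang, not a crash
        if crashed:
            shutil.copy(case, f"crash_{i}.pdf")  # keep the repro for triage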
When we began, we set the following general plan: first, select a fuzzer; then, select areas to target; next, put the fuzzer into action by developing tools and building models; and finally, run the models on newly developed infrastructure.
[Figure: overview of the fuzzing process]
Fuzzers have several important features; the most important for us are:

  • Data Mutation Strategies – How does the fuzzer manipulate input data?
  • Extension Mechanisms – Can the user extend the fuzzer to provide richer mutations or data definition?
  • Monitoring/Logging – How does the fuzzer monitor and sort faults?

The primary fuzzing engine we use is Michael Eddington’s Peach Fuzzer, a tool also used by other teams at Adobe. Peach provides:

  • A rich set of data mutators
  • Several mechanisms for extension and modification (including extending the engine itself since it’s open source)
  • Excellent Monitoring/Logging, using the !exploitable (bang exploitable) tool from Microsoft.

Select Targets
It’s critical to identify and prioritize specific targets during fuzzing. It’s extremely tempting to say there is only one target – “PDF.” If you look at the PDF specification, it quickly becomes clear how rich, complex and extensive the format is. It contains compressed streams, rich cross-referential structures, multimedia data, Flash content, annotations, forms, and more. Fuzzing PDF is as complex as the file format itself.
We set the following goal:
Think like an attacker: target the areas of PDF that an attacker is likely to target.
Since the Peach Fuzzer is a flexible, robust tool, we’re able to systematically focus on different areas of the PDF file format, covering the most likely targets first.
Build Models
Peach performs fuzzing mutations based on a model built by the developer. A model is an XML description of the data being attacked. Peach calls these “Pit” files. These models can describe the data in detail or simply treat the data as a blob. Peach mutates blob data by flipping bits and sliding specific data values through the blob one byte at a time. We first built simple models that treat streams as blobs. We referred to these as “dumb” or “naïve” models. These simple models were effective, and for me as a Reader developer, humbling.
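To illustrate what those blob strategies do, here is a small Python sketch of two of them: flipping random bits and sliding an “interesting” value through the data one offset at a time. The particular values are assumptions for illustration, not Peach’s actual list.

    # Sketch of two blob mutation strategies: random bit flips and sliding
    # "interesting" boundary values through the data one byte at a time.
    # The value list is illustrative, not Peach's.
    import random

    INTERESTING = [b"\x00\x00\x00\x00", b"\xff\xff\xff\xff",
                   b"\x7f\xff\xff\xff", b"\x80\x00\x00\x00"]

    def flip_bits(blob: bytes, count: int = 8) -> bytes:
        buf = bytearray(blob)
        for _ in range(count):
            i = random.randrange(len(buf))
            buf[i] ^= 1 << random.randrange(8)
        return bytes(buf)

    def slide_value(blob: bytes, value: bytes):
        """Yield one mutant per offset, with `value` overwriting the bytes there."""
        for offset in range(len(blob) - len(value) + 1):
            buf = bytearray(blob)
            buf[offset:offset + len(value)] = value
            yield bytes(buf)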
We followed each simple model with a “smart” model. The Peach Pit format can describe relationships and offsets between parts of the fuzzed data. For instance, a 4-byte field may describe the length or offset of a stream that follows later. Peach intentionally supplies values for that 4-byte field at or near common range limits, and truncates or lengthens the stream data, in an attempt to overrun buffers whose sizes may be naively allocated from the length value read from the input. These models are also quite effective.
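The length-field trick is easy to picture with a toy example. The sketch below emits PDF stream objects whose declared /Length disagrees with the actual data, using a handful of boundary values; the object numbers, value list, and framing are illustrative assumptions, not what Peach generates internally.

    # Toy illustration of the "smart" length-field trick: emit a PDF stream
    # object whose declared /Length disagrees with the actual data. Object
    # numbers and the boundary-value list are illustrative assumptions.
    BOUNDARY_LENGTHS = [0, 1, 0x7FFF, 0xFFFF, 0x7FFFFFFF, 0xFFFFFFFF]

    def stream_object(declared_length: int, data: bytes) -> bytes:
        return (b"5 0 obj\n<< /Length " + str(declared_length).encode() +
                b" >>\nstream\n" + data + b"\nendstream\nendobj\n")

    def length_mismatch_cases(data: bytes):
        for n in BOUNDARY_LENGTHS:
            yield stream_object(n, data)                      # declared length lies
        yield stream_object(len(data), data + b"A" * 4096)    # data longer than declared
        yield stream_object(len(data) + 4096, data)           # data shorter than declared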
Peach Pit models can describe data in two ways:

  • Generational Models – Generational models produce data only from the description of the data. For instance, the description might indicate a fixed-width bit-field of flags followed by a stream with properties indicated by those flags. A generational fuzzer might simply create a random bit-field and a series of random bytes.
  • Mutational Models – Mutational models consume template data based on the description of that data. A varied data set can be fed to these fuzzers, and the data is mutated according to the model. Mutational fuzzers have the distinct advantage that you can create a completely new fuzzing run simply by feeding the model new input data. For instance, the same JPEG mutational model might consume any existing JPEG file, mutating it differently based on properties unique to that file (e.g. color depth, or whether or not the image is interlaced).

We use both generational and mutational models in our testing, broadening our coverage as much as possible.
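As a toy contrast between the two styles (not Peach Pit syntax), imagine a record made of a flags byte, a 2-byte length, and a stream: a generational model fabricates the whole record from its description, while a mutational model parses a real sample and perturbs it. The record structure here is a made-up example for illustration.

    # Toy contrast of generational vs. mutational models for a made-up
    # "flags byte + 2-byte length + stream" record (not Peach Pit syntax).
    import os, random

    def generate_record() -> bytes:
        flags = bytes([random.randrange(256)])             # fabricated flag byte
        stream = os.urandom(random.randrange(1, 256))      # fabricated payload
        return flags + len(stream).to_bytes(2, "big") + stream

    def mutate_record(sample: bytes) -> bytes:
        flags, length = sample[0], int.from_bytes(sample[1:3], "big")
        stream = bytearray(sample[3:3 + length])
        if stream:
            stream[random.randrange(len(stream))] ^= 0xFF  # perturb the real payload
        return bytes([flags ^ 0x01]) + len(stream).to_bytes(2, "big") + bytes(stream)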
Develop Tools
We found it extremely helpful to build tooling to assist in model development and results processing. We built several new tools:

  • Crash Analyzer/Processor – Peach “buckets” (categorizes) crashes based on the signature of the call stack. It’s hard to overstate the value of this bucketing, since fuzzing produces many duplicate bugs. We developed a script that iterates through the bucket folders, creating an XML description of all of the results, and we built a GUI Crash Analyzer (using Adobe AIR) that allows for visual sorting and analysis of the Peach buckets. (A minimal bucketing sketch follows this list.)
  • PIT Generator – We also developed a GUI tool that we could use to quickly select streams within an existing PDF and create a generational “naïve” model based on that data. This allowed us to quickly build simple models that could run while we spent time building richer smart models.
  • Enveloper – PDF is a format that contains many other formats. We found it somewhat difficult to build mutational models that ignored large portions of an input PDF. We also wanted to be able to find sample input data without having to manually wrap it in a PDF file before fuzzing. We built an “enveloping” extension to Peach that allowed us to build pit files that described only our targeted stream (e.g. JBIG2). The enveloper then wrapped the stream in a valid PDF structure before Peach launched Adobe Reader.
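Here is the minimal bucketing sketch referred to above: hash the top few symbolized frames of each crash log and group duplicates by that signature. The log format, regular expression, and frame count are assumptions; Peach and !exploitable use their own bucketing logic.

    # Minimal callstack-signature bucketing sketch: hash the top few frames of
    # each crash log and group duplicates. The log format, regex, and frame
    # count are assumptions; Peach/!exploitable bucket differently.
    import hashlib, re
    from collections import defaultdict

    FRAME_RE = re.compile(r"^\d+\s+(\S+![\w:]+)", re.MULTILINE)  # e.g. "00 module!Function"

    def bucket_id(crash_log: str, top_frames: int = 3) -> str:
        frames = FRAME_RE.findall(crash_log)[:top_frames]
        return hashlib.md5("|".join(frames).encode()).hexdigest()[:8]

    def bucket(crash_logs):
        """Map {case_name: crash_log_text} -> {bucket_id: [case_names]}."""
        buckets = defaultdict(list)
        for name, log in crash_logs.items():
            buckets[bucket_id(log)].append(name)
        return dict(buckets)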

Develop Infrastructure
Unfortunately, models take an extremely long time to run because they have to launch and close an application for each file we generate, and many models generate from tens of thousands to hundreds of thousands of mutations. To speed up the run time, we leveraged the Acrobat/Reader test automation team’s fabulous grid infrastructure to provision virtual machines, deploy Reader, and launch a test. Additionally, Peach has a parallelization feature that allows different parts of a model to be run in different processes. Our automation team worked with the model development team to enhance their automation grid, which automatically partitions a Peach run and deploys it across dozens of virtual machines. At peak, we could run hundreds of thousands of iterations per day for a given model, drastically improving throughput.
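The partitioning idea behind those grid runs can be sketched simply: split a model’s total iteration count into contiguous ranges, one per worker machine. The worker count and the dispatch mechanism below are assumptions; the actual grid and Peach’s parallel mode handle scheduling, deployment, and result collection.

    # Sketch of partitioning a fuzzing run across worker VMs: split the total
    # iteration count into contiguous, inclusive ranges. Dispatch to the grid
    # (not shown) is handled by the automation infrastructure.
    def partition(total_iterations: int, workers: int):
        base, extra = divmod(total_iterations, workers)
        start = 1
        for w in range(workers):
            count = base + (1 if w < extra else 0)
            yield (start, start + count - 1)  # inclusive range for worker w
            start += count

    # Example: 250,000 iterations across 40 VMs -> (1, 6250), (6251, 12500), ...
    for rank, (lo, hi) in enumerate(partition(250_000, 40)):
        print(f"worker {rank}: iterations {lo}-{hi}")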
Run the Models!
It seems obvious, but the most important part of the process is to actually run the models. Developers love to build things, and it’s easy to tweak forever, but those developers who ran the models early and often found the most bugs. We also found that it was easy to make a small mistake in modeling that invalidated the entire model. For example, there are several areas of PDF that Adobe Reader will reject early if they are malformed. In this case, the targeted code might never be executed! It’s critical to debug and troubleshoot models carefully so that precious iteration time is not wasted.
Document Process/Results
This almost goes without saying, but it’s important to keep close tabs on your process and your results. As we brought more people into the process, we found our project wiki to be invaluable. It’s now being used by other groups within Adobe as a learning resource to refine their own fuzzing efforts.
Use Results to Improve the SPLC
Each unique crasher or new vulnerability is an opportunity to drive improvements and optimizations into the rest of the Secure Product Lifecycle (SPLC). The code-level flaw might highlight a gap in security training, provide an opportunity for some high-ROI targeted code reviews in the surrounding code, or suggest a new candidate for the banned API list. Fuzzing is best done not in a vacuum but as an integrated part of a broader security process.
Parting Advice
My last piece of advice:
Approach fuzzing your own product with humility.
Humility allows you to start simply. It also allows you to question a model that isn’t producing results. We often found that when a model wasn’t producing, it was because of a mistake in the model or because the tool was misconfigured rather than because the code was good. Humility also allows you to choose targets in the unbiased way an attacker will choose them. Finally, humility allows you to recognize that security vulnerabilities can sneak in under even the most watchful eye. Fuzzing is one way we help catch bugs that have survived all the other security processes. Application security should be an ongoing effort on any product, just as it is with Adobe Reader.