Inside Adobe Reader Protected Mode – Part 1 – Design

Introduction

Kyle Randolph here, along with the security team for the Acrobat family of products, including Adobe Reader. This is the first post in a multi-part series about the new sandboxing technology used in the Adobe Reader Protected Mode feature that was announced back in July. We will take a technical tour of the sandbox architecture and look at how its different components operate and communicate in ways that will help contain malicious code execution. 

What is sandboxing?

A sandbox is a security mechanism used to run an application in a confined execution environment in which certain functions (such as installing or deleting files, or modifying system information) are prohibited. In Adobe Reader, “sandboxing” (also known as “Protected Mode”) adds an additional layer of defense by containing any malicious code carried inside a PDF file within the Adobe Reader sandbox and preventing it from executing with elevated privileges on the user’s system.

The Adobe Reader sandbox leverages the operating system’s security controls to constrain process execution according to the principle of least privilege. Thus, processes that could be subject to an attacker’s control run with limited capabilities and must perform actions such as accessing files through a separate, trusted process. This design has three primary effects:

  • All PDF processing, such as PDF and image parsing, JavaScript execution, font rendering, and 3D rendering, happens in the sandbox.
  • Processes that need to perform some action outside the sandbox boundary must do so through a trusted proxy called a “broker process.”
  • The sandbox establishes a distinction between two security principals: the user principal, which is the context in which the user’s logon session runs, and the PDF principal, which is the isolated process that parses and renders the PDF. This distinction is enforced by a trust boundary at the process level between the sandbox process and the rest of the user’s logon session and the operating system.

The goal of this design is to process all potentially malicious data in the restricted context of the PDF principal rather than in the context of the fully privileged user principal. Inter-process communication (IPC) is used whenever the sandbox needs an action performed by the broker as the user principal instead of the PDF principal, such as calling an OS API or obtaining write access to a file.
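To make the trust boundary concrete, here is a minimal sketch of the broker side of one such request, assuming a named-pipe channel and a hypothetical policy check (IsPathAllowedByPolicy); the actual Reader IPC mechanism and message format are different and considerably more elaborate.

```cpp
// Minimal sketch of the broker pattern, not Adobe Reader's actual IPC code.
// Assumes a named-pipe channel; IsPathAllowedByPolicy is a hypothetical stub.
#include <windows.h>
#include <string>

struct WriteRequest {                  // request sent by the sandboxed PDF principal
    wchar_t path[MAX_PATH];            // file the sandbox wants written
    DWORD   length;                    // number of payload bytes that follow
};

// Hypothetical policy: only allow writes beneath a per-user scratch directory.
static bool IsPathAllowedByPolicy(const std::wstring& path) {
    return path.rfind(L"C:\\Users\\Example\\AppData\\LocalLow\\", 0) == 0;
}

// Broker side: everything arriving from the sandbox is treated as untrusted.
bool HandleWriteRequest(HANDLE pipe) {
    WriteRequest req = {};
    DWORD read = 0;
    if (!ReadFile(pipe, &req, sizeof(req), &read, nullptr) || read != sizeof(req))
        return false;                            // malformed request: reject
    req.path[MAX_PATH - 1] = L'\0';              // force NUL termination
    if (!IsPathAllowedByPolicy(req.path))        // validate in the trusted user principal
        return false;
    // ... read at most req.length payload bytes (bounded) and write them to req.path ...
    return true;
}
```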

Sandboxing is relatively new for most enterprise applications because it is difficult to implement in mature software (i.e., software with millions of lines of code) that is already deployed across a virtually limitless number of environments. A few recently shipped products that demonstrate sandboxing in practice include Microsoft Office 2007 MOICE, Google Chrome, and Office 2010 Protected View. The challenge is to enable sandboxing while keeping user workflows functional and without turning off features users depend on. The ultimate goal is to proactively provide a high level of protection that supplements the ongoing work of finding and fixing individual bugs.

Design Principles

The sandbox was designed with several security best practices in mind:

  • Leverage the existing operating system security architecture: The Adobe Reader sandbox relies on Windows operating system security features such as restricted tokens, job objects and low integrity levels (a minimal sketch of these primitives follows this list).
  • Leverage existing implementations: The Adobe Reader sandbox builds on the Google Chrome sandbox and also took Microsoft Office 2010 Protected Mode into consideration.
  • Adhere to the principle of least privilege: Every process (executable code) can only access the resources necessary to perform its legitimate purpose.
  • Consider all sandbox data untrusted: Assume all data communicated out of the sandbox is potentially malicious until it has been validated. 
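To make the operating-system primitives named above concrete, here is a minimal sketch of a restricted token, a low integrity level, and a job object wrapped around a sandboxed child process. This is illustrative only; error handling is omitted, the child executable path is hypothetical, and it is not Adobe Reader’s actual broker code.

```cpp
// Illustrative use of restricted tokens, low integrity, and job objects.
// Not Adobe Reader's startup code; the child path below is hypothetical
// and error handling is omitted for brevity.
#include <windows.h>
#include <sddl.h>
#pragma comment(lib, "advapi32.lib")

void LaunchSandboxedChild() {
    // 1. Derive a restricted token from the current (user principal) token.
    HANDLE token = nullptr, restricted = nullptr;
    OpenProcessToken(GetCurrentProcess(),
                     TOKEN_DUPLICATE | TOKEN_ASSIGN_PRIMARY | TOKEN_QUERY |
                     TOKEN_ADJUST_DEFAULT, &token);
    CreateRestrictedToken(token, DISABLE_MAX_PRIVILEGE,
                          0, nullptr, 0, nullptr, 0, nullptr, &restricted);

    // 2. Drop the token to low integrity (integrity SID S-1-16-4096).
    PSID lowIntegritySid = nullptr;
    ConvertStringSidToSidW(L"S-1-16-4096", &lowIntegritySid);
    TOKEN_MANDATORY_LABEL label = {};
    label.Label.Attributes = SE_GROUP_INTEGRITY;
    label.Label.Sid = lowIntegritySid;
    SetTokenInformation(restricted, TokenIntegrityLevel, &label,
                        sizeof(label) + GetLengthSid(lowIntegritySid));

    // 3. Create a job object that limits what the child process can do.
    HANDLE job = CreateJobObjectW(nullptr, nullptr);
    JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits = {};
    limits.BasicLimitInformation.LimitFlags =
        JOB_OBJECT_LIMIT_ACTIVE_PROCESS | JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
    limits.BasicLimitInformation.ActiveProcessLimit = 1;   // no extra children
    SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                            &limits, sizeof(limits));

    // 4. Start the sandboxed renderer suspended, place it in the job, then run it.
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    wchar_t cmd[] = L"C:\\Example\\SandboxedRenderer.exe";  // hypothetical path
    CreateProcessAsUserW(restricted, nullptr, cmd, nullptr, nullptr, FALSE,
                         CREATE_SUSPENDED, nullptr, nullptr, &si, &pi);
    AssignProcessToJobObject(job, pi.hProcess);
    ResumeThread(pi.hThread);
}
```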

 
Mitigations Provided by the Reader Sandbox

For the sake of this discussion, let’s assume that an attacker knows about, and is able to exploit, an unpatched vulnerability in the Adobe Reader renderer, say a buffer overflow in the Adobe Reader JavaScript APIs or an integer overflow in the font-handling components. It is then fairly trivial for the attacker to lure victims into opening the weaponized PDF file, whether by delivering it as an e-mail attachment, deploying it via a spam e-mail or advertisement, or hosting it on a website the attacker controls. Historically, just double-clicking and rendering such a PDF file could lead to total compromise of the user’s machine.

In-scope goals: Adobe Reader’s sandboxing architecture primarily focuses on preventing the attacker from doing two things:

  1. Installing malware on the user’s machine
  2. Monitoring the user’s keystrokes when the user interacts with another program

If the attacker succeeds in doing either of these things, he or she can cause serious damage. For example, once the attacker is able to install malicious software on the user’s computer, that software has write/update/delete access to the file system and registry and might include a botnet client that receives commands over the network and engages in coordinated attacks. If the attacker is instead able to monitor the user’s keystrokes, he or she can attempt to steal confidential and sensitive information, such as passwords and credit card numbers.

Simply put, the Adobe Reader sandbox, much like the Google Chrome sandbox, does not allow the attacker to install persistent malware or tamper with the user’s file system, and it thwarts the attacker from taking control of the user’s machine. This ties back to the design principle of least privilege: an exploit may run within the application, but it cannot do anything malicious to the user’s machine because its access is blocked by the highly restrictive sandbox environment. The net effect is to greatly reduce the impact of a successful exploit against Adobe Reader.

Limitations

The sandbox’s reliance on the operating system means that it is potentially subject to the operating system’s flaws. Like the Google Chrome sandbox, the Adobe Reader Protected Mode sandbox leverages the Windows security model and the operating system security it provides. This intrinsic dependency means the sandbox cannot protect against weaknesses or bugs in the operating system itself. However, it can limit the severity of such flaws when code executes inside the sandbox, since the sandbox blocks many common attack vectors.

Our first version of sandboxing is not designed to protect against:

  • Unauthorized read access to the file system or registry. We plan to address this in a future release.
  • Network access. We are investigating ways to restrict network access in the future.
  • Reading and writing to the clipboard.
  • Insecure operating system configuration.

The last ingredient for a Windows sandbox, according to the Practical Windows Sandboxing recipe, is to use a separate desktop for rendering the user interface (UI). We chose not to use a separate desktop due to the complexity of changing how Adobe Reader renders its UI. Instead, we enumerated the attack vectors that sharing a desktop leads to: shatter attacks and SetWindowsHookEx DLL injection attacks. These attack vectors were mitigated through alternative means, using low integrity and limits in the sandbox job object, which will be discussed in more detail in our next post.
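As an illustration of one such alternative mitigation, the job-object UI restriction sketched below denies the sandboxed process access to USER handles (for example, window handles) belonging to processes outside the job, which is the primitive that blocks classic shatter-style window-message attacks. The exact flag combination Adobe Reader uses is not described in this post, so treat the set below as an assumption.

```cpp
// Illustrative job-object UI restriction; the exact flag set used by
// Adobe Reader is not specified here, so this combination is an assumption.
#include <windows.h>

void RestrictSandboxUi(HANDLE job) {
    JOBOBJECT_BASIC_UI_RESTRICTIONS ui = {};
    // Deny access to USER handles (e.g., HWNDs) owned by processes outside
    // the job, which blocks window-message "shatter" attacks, plus a couple
    // of other common UI restrictions.
    ui.UIRestrictionsClass = JOB_OBJECT_UILIMIT_HANDLES |
                             JOB_OBJECT_UILIMIT_SYSTEMPARAMETERS |
                             JOB_OBJECT_UILIMIT_GLOBALATOMS;
    SetInformationJobObject(job, JobObjectBasicUIRestrictions, &ui, sizeof(ui));
}
```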

Conclusion

This concludes the overview of the Adobe Reader Protected Mode sandbox architecture and limitations. In future posts, we will explore the sandbox process and the broker process in more detail, as well as their inter-process communication (IPC) mechanisms. Finally, we will comment on the security testing we use to validate the security of the Adobe Reader sandbox.

-Liz McQuarrie, Ashutosh Mehra, Suchit Mishra, Kyle Randolph, and Ben Rogers

A Few Words on the January 2010 Security Update for Adobe Reader and Acrobat

Kyle Randolph here. I work closely with the Adobe Reader and Acrobat engineering team as we continue to work hard on the security initiative first announced back in May 2009. Today, the team announced new security improvements in Adobe Reader and Acrobat 9.3 and 8.2. This is the third quarterly security update for Adobe Reader and Acrobat and we are starting to roll out to users the configuration options and features that we began designing last summer to mitigate the evolving security threats we were seeing. Let me explain the security geek coolness factor of the improvements in this release as well as the improvements in the October quarterly security update.
New Adobe Reader Updater / Acrobat Updater
We introduced the new updater in the October Adobe Reader and Acrobat 9.2 and 8.1.7 update as beta technology, and today we are testing it with a real-world security update. (Since we are still conducting the pilot, only users participating in the beta program will receive today’s update via the new updater.) The new updater improves the user experience and helps users stay up to date with the new option of receiving security updates automatically, via background updates, which have been shown to improve patch adoption. Some customers, such as corporate IT administrators, need to know and manage which updates are installed and when. But many customers, particularly consumers and individuals who don’t have the luxury of a managed desktop environment, just want the most secure and up-to-date version and don’t want to be interrupted when it is time to install an update. By allowing customers to select an update process that runs automatically in the background, we can help protect more users from attacks against known, patched vulnerabilities.
JavaScript Blacklist Framework
Over the past two years, a significant number of external vulnerabilities found in Adobe Reader and Acrobat have been in JavaScript. The Adobe Reader and Acrobat engineering team has been busy creating new ways to help protect against this attack vector. The new Adobe Reader and Acrobat JavaScript Blacklist Framework, which was added with the October update, is great for security because it provides a method to disable a specific vulnerable API instead of disabling JavaScript completely. This allows Adobe Reader to be configured in a way that is not vulnerable if a 0-day vulnerability that exploits a JavaScript API is identified. Better still, the new blacklist is stored in the registry and can be configured centrally in enterprise environments using Group Policy Objects (GPO) to prevent end users from overriding it. As an example, the recent vulnerability CVE-2009-4324 could be mitigated by blocking the DocMedia.newPlayer API.
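To illustrate the general shape of such a registry-backed blacklist check, here is a hypothetical sketch; the key path, value name, and delimiter below are illustrative only and are not Adobe Reader’s actual configuration locations (see the KB article linked below for those details).

```cpp
// Hypothetical sketch of a registry-backed JavaScript API blacklist check.
// The key path, value name, and list format are illustrative, not Adobe
// Reader's real settings; enterprises would push such a value via GPO.
#include <windows.h>
#include <sstream>
#include <string>

bool IsJavaScriptApiBlacklisted(const std::wstring& apiName) {
    HKEY key = nullptr;
    if (RegOpenKeyExW(HKEY_LOCAL_MACHINE,
                      L"SOFTWARE\\Policies\\ExampleVendor\\ExampleViewer\\JSBlacklist",
                      0, KEY_QUERY_VALUE, &key) != ERROR_SUCCESS)
        return false;                                    // no policy configured
    wchar_t buffer[4096] = {};
    DWORD size = sizeof(buffer) - sizeof(wchar_t);
    LONG rc = RegQueryValueExW(key, L"BlackListedAPIs", nullptr, nullptr,
                               reinterpret_cast<BYTE*>(buffer), &size);
    RegCloseKey(key);
    if (rc != ERROR_SUCCESS)
        return false;
    // Assume a pipe-delimited list, e.g. L"DocMedia.newPlayer|Collab.getIcon".
    std::wstringstream list(buffer);
    std::wstring entry;
    while (std::getline(list, entry, L'|'))
        if (entry == apiName)
            return true;                                 // API is blocked by policy
    return false;
}
```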
For more info on the JavaScript Blacklist Framework, check out http://kb2.adobe.com/cps/504/cpsid_50431.html.
Yellow Message Bar
The Yellow Message Bar was added in the October security update for Adobe Reader and Acrobat (9.2 and 8.1.7), but it is cool enough to mention here. It makes the user experience much more pleasant when a dangerous API is selectively blocked by the JavaScript Blacklist Framework or by the Enhanced Security configuration. Previously, users would get a modal dialog box asking whether they would like to re-enable some unsafe behavior, as shown in the screenshot below:
[Screenshot: modal JavaScript warning dialog]
Now the Yellow Message Bar appears at the top of the document as shown below:
[Screenshot: the Yellow Message Bar at the top of the document]
Because the Yellow Message Bar stays out of the way, users who don’t need the disabled functionality can interact with the PDF without being exposed to that feature’s security risk. The choices are also more granular: the Yellow Message Bar asks whether to trust a document once or always, as opposed to turning the entire feature back on for all documents. These changes should reduce the frequency and impact of accidental click-throughs and keep users from getting into the habit of clicking through warnings without reading them, a habit that leaves them open to social engineering and phishing attacks. This same type of change in security notification has been adopted in other vendors’ desktop products, such as Microsoft Office, as a security best practice. The Yellow Message Bar will appear when an action is blocked by Enhanced Security in Adobe Reader or Acrobat or by the JavaScript Blacklist Framework.
For more info on the Yellow Message Bar, see http://kb2.adobe.com/cps/504/cpsid_50432.html.
Multimedia (Legacy) off by Default
Another effective technique for reducing security risk for our customers is to reduce the attack surface of the product. Legacy multimedia is a set of rarely used features with a broad attack surface, so the Multimedia (Legacy) features are no longer trusted by default. Users who open PDFs that contain legacy multimedia will see a Yellow Message Bar at the top of the document.
Conclusion
This January update for Adobe Reader and Acrobat builds on the work in the October release, continuing to increase security protection for our customers with each quarterly security release in addition to fixing externally reported vulnerabilities. We’re excited to evaluate the results of the pilot of the new Adobe Reader Updater with its automatic mode for background updates. The Yellow Message Bar notifications provide an improved user interface to help protect users, and the JavaScript Blacklist Framework gives us more fine-grained control over any future JavaScript API vulnerabilities. Finally, disabling legacy multimedia by default protects users against any potential security vulnerabilities identified in these rarely used features.

Fuzzing Reader – Lessons Learned

Kyle Randolph here. Today we have a guest post by Pat Wibbeler, who led a fuzzing tiger team as part of the Adobe Reader and Acrobat Security Initiative.
As a Software Engineering Manager on Adobe Reader, I take it personally every time an exploit is found in our code. I’ve always taken pride in my work, and I’m particularly proud to work with many brilliant engineers on a product installed on hundreds of millions of desktops.
This ubiquity makes Adobe Reader an appealing target for attack. We’re going to great lengths to protect our users through the Adobe Reader and Acrobat Security Initiative. One major component of this ongoing initiative is application fuzzing. I led a task force in fuzzing Reader when the security initiative began. We have always had fuzzing efforts within the Reader team, but this project scaled our efforts in key areas. I thought I’d share some of our experiences.
First, what is “fuzzing”? One of the most common sources of application vulnerabilities is improper input validation. When developers fail to validate input data completely, an attacker may be able to craft application input that allows the attacker to gain control of the application or system. PDF files are the most common type of input consumed by Adobe Reader. Developers and testers intuitively test PDF file format parsing with valid cases: can Reader properly render a PDF generated by Adobe Acrobat or another PDF generator? Fuzzing automatically provides invalid and unexpected PDF data to an application, probing for cases where the PDF format may be poorly validated. For more information on fuzzing, see the Wikipedia entry on the topic.
There are several existing fuzzers, and all fuzzers work basically like this:
[Diagram: high-level PDF fuzzer workflow]
Valid data is fed into a fuzzer. The fuzzer mutates the data, creating invalid data, and feeds it to the program, monitoring for a crash or fault. In our case, the fuzzer mutates a PDF file and launches Adobe Reader, monitoring and logging any crashes or faults. Crashes are then analyzed and prioritized according to whether or not they are likely to be exploitable.
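The loop just described can be sketched in a few dozen lines. The sketch below shows only the outer mutate-launch-monitor cycle, not Peach itself (which adds smart models, fault bucketing via !exploitable, and much more); the target path and seed file name are hypothetical, and a real harness would attach a debugger rather than rely on exit codes.

```cpp
// Minimal sketch of a mutate-launch-monitor fuzzing loop (not Peach itself).
// The target path and seed file are hypothetical placeholders.
#include <windows.h>
#include <cstdio>
#include <fstream>
#include <random>
#include <string>
#include <vector>

static std::vector<char> ReadAll(const char* path) {
    std::ifstream in(path, std::ios::binary);
    return { std::istreambuf_iterator<char>(in), std::istreambuf_iterator<char>() };
}

int main() {
    const std::wstring target = L"C:\\Target\\Viewer.exe";   // hypothetical viewer
    std::vector<char> seed = ReadAll("seed.pdf");             // hypothetical seed PDF
    if (seed.empty()) return 1;
    std::mt19937 rng(42);

    for (int i = 0; i < 10000; ++i) {
        // Dumb mutation: flip a handful of random bits in a copy of the seed.
        std::vector<char> fuzzed = seed;
        for (int b = 0; b < 8; ++b)
            fuzzed[rng() % fuzzed.size()] ^= static_cast<char>(1 << (rng() % 8));
        std::ofstream("fuzz.pdf", std::ios::binary).write(fuzzed.data(), fuzzed.size());

        // Launch the target on the mutated file and watch what happens.
        std::wstring cmd = L"\"" + target + L"\" fuzz.pdf";
        STARTUPINFOW si = { sizeof(si) };
        PROCESS_INFORMATION pi = {};
        if (!CreateProcessW(nullptr, &cmd[0], nullptr, nullptr, FALSE,
                            0, nullptr, nullptr, &si, &pi))
            continue;
        if (WaitForSingleObject(pi.hProcess, 5000) == WAIT_TIMEOUT)
            TerminateProcess(pi.hProcess, 0);            // hang: kill and move on
        DWORD code = 0;
        GetExitCodeProcess(pi.hProcess, &code);
        if (code >= 0xC0000000)                          // NTSTATUS error range: crash
            printf("iteration %d crashed with 0x%08lX\n", i, code);
        CloseHandle(pi.hProcess);
        CloseHandle(pi.hThread);
    }
    return 0;
}
```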
When we began, we set the following general plan: First, select a fuzzer, then select areas to target. Next, put the fuzzer into action by developing tools, and then building and running models in a newly developed infrastructure.
[Diagram: fuzzing process (select a fuzzer, select targets, develop tools, build models, run models)]
Fuzzers have several important features, but the most important for us are:

  • Data Mutation Strategies – How does the fuzzer manipulate input data?
  • Extension Mechanisms – Can the user extend the fuzzer to provide richer mutations or data definition?
  • Monitoring/Logging – How does the fuzzer monitor and sort faults?

The primary fuzzing engine we use is Michael Eddington’s Peach Fuzzer, a tool also used by other teams at Adobe. Peach provides:

  • A rich set of data mutators
  • Several mechanisms for extension and modification (including extending the engine itself since it’s open source)
  • Excellent Monitoring/Logging, using the !exploitable (bang exploitable) tool from Microsoft.

Select Targets
It’s critical to identify and prioritize specific targets during fuzzing. It’s extremely tempting to say there is only one target: “PDF.” If you look at the PDF specification, it quickly becomes clear how rich, complex and extensive the format is. It contains compressed streams, rich cross-referential structures, multimedia data, Flash content, annotations, forms, and more. Fuzzing PDF is as complex as the file format itself.
We set the following goal:
Think like an attacker – target the areas of PDF that an attacker is likely to target
Since the Peach Fuzzer is a flexible, robust tool, we’re able to systematically focus on different areas of the PDF file format, covering the most likely targets first.
Build Models
Peach performs fuzzing mutations based on a model built by the developer. A model is an XML description of the data being attacked. Peach calls these “Pit” files. These models can describe the data in detail or simply treat the data as a blob. Peach mutates blob data by flipping bits and sliding specific data values through the blob one byte at a time. We first built simple models that treat streams as blobs. We referred to these as “dumb” or “naïve” models. These simple models were effective, and for me as a Reader developer, humbling.
We followed each simple model with a “smart” model. The Peach Pit format can describe relationships and offsets between parts of the fuzzed data. For instance, a 4-byte field may describe the length or offset of a stream that follows later. Peach will intentionally provide values for this 4-byte field that are at or near common range limits and truncate or lengthen the stream data in an attempt to overrun buffer lengths that may be naively set using the data input from the stream. These models are also quite effective.
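A minimal sketch of this kind of smart mutation appears below: boundary values are slid into a 4-byte length field, and the stream it describes is optionally truncated so the declared length no longer matches reality. The layout and offsets are hypothetical, and a Peach Pit would express the same relationship declaratively rather than in code.

```cpp
// Sketch of a "smart" length-field mutation: lie about a 4-byte length and
// truncate the stream it describes. Offsets and layout are hypothetical.
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

std::vector<std::vector<uint8_t>> MutateLengthField(
        const std::vector<uint8_t>& sample,
        size_t lengthFieldOffset,       // where the 4-byte little-endian length lives
        size_t streamOffset) {          // where the described stream begins
    const uint32_t boundaries[] = { 0, 1, 0x7FFF, 0x8000, 0xFFFF,
                                    0x7FFFFFFF, 0x80000000u, 0xFFFFFFFFu };
    std::vector<std::vector<uint8_t>> cases;
    for (uint32_t value : boundaries) {
        // Case 1: keep the stream, but declare a bogus length.
        std::vector<uint8_t> fuzzed = sample;
        std::memcpy(&fuzzed[lengthFieldOffset], &value, sizeof(value));
        cases.push_back(fuzzed);

        // Case 2: also truncate the stream so the declared length overruns it.
        std::vector<uint8_t> truncated(
            sample.begin(),
            sample.begin() + std::min(sample.size(), streamOffset + 16));
        std::memcpy(&truncated[lengthFieldOffset], &value, sizeof(value));
        cases.push_back(truncated);
    }
    return cases;
}
```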
Peach Pit models can describe data in two ways:

  • Generational Models – Generational models produce data only from the description of the data. For instance, the description might indicate a fixed-width bit-field of flags followed by a stream with properties indicated by those flags. A generational fuzzer might simply create a random bit-field and a series of random bytes.
  • Mutational Models – Mutational models consume template data based on the description of that data. A varied data set can be fed to these fuzzers, and the data type will be mutated. Mutational fuzzers have the distinct advantage that you can create a completely new fuzzing run simply by feeding the mutational model new input data. For instance, the same JPEG mutational model might consume any existing jpg, mutating it differently based on properties unique to that jpg (e.g. color depth, or whether or not the image is interlaced).

We use both generational and mutational models in our testing, broadening our coverage as much as possible.
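The contrast can be sketched as follows, using a hypothetical format consisting of a 2-byte flag field followed by a length-prefixed stream: the generational function builds test cases purely from that description, while the mutational function perturbs an existing, valid sample.

```cpp
// Contrasting sketch of generational vs. mutational test-case production.
// The 2-byte-flags-plus-length-prefixed-stream format is hypothetical.
#include <cstdint>
#include <random>
#include <vector>

// Generational: build a case purely from the data description.
std::vector<uint8_t> GenerateCase(std::mt19937& rng) {
    std::vector<uint8_t> data;
    data.push_back(rng() & 0xFF);              // random flag bits (byte 1)
    data.push_back(rng() & 0xFF);              // random flag bits (byte 2)
    uint16_t len = rng() & 0x3FF;              // random stream length
    data.push_back(len & 0xFF);                // length, little-endian
    data.push_back(len >> 8);
    for (uint16_t i = 0; i < len; ++i)
        data.push_back(rng() & 0xFF);          // random stream body
    return data;
}

// Mutational: start from an existing, valid sample and perturb it.
std::vector<uint8_t> MutateCase(const std::vector<uint8_t>& seed, std::mt19937& rng) {
    std::vector<uint8_t> data = seed;
    if (!data.empty())
        data[rng() % data.size()] ^= static_cast<uint8_t>(1 << (rng() % 8));
    return data;
}
```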
Develop Tools
We found it extremely helpful to build tooling to assist in model development and results processing. We built several new tools:

  • Crash Analyzer/Processor – Peach “buckets” (categorizes) crashes based on the signature of the call stack; a minimal sketch of this bucketing idea follows the list. It’s hard to overstate the value of this bucketing, since fuzzing produces many duplicate bugs. We developed a script that iterates through the bucket folders, creating an XML description of all of the results. We built a GUI Crash Analyzer (using Adobe AIR) that allows for visual sorting and analysis of the Peach buckets.
  • PIT Generator – We also developed a GUI tool that we could use to quickly select streams within an existing PDF and create a generational “naïve” model based on that data. This allowed us to quickly build simple models that could run while we spent time building richer smart models.
  • Enveloper – PDF is a format that contains many other formats. We found it somewhat difficult to build mutational models that ignored large portions of an input PDF. We also wanted to be able to find sample input data without having to manually wrap it in a PDF file before fuzzing. We built an “enveloping” extension to Peach that allowed us to build pit files that described only our targeted stream (e.g. JBIG2). The enveloper then wrapped the stream in a valid PDF structure before Peach launched Adobe Reader.
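The bucketing idea mentioned in the first bullet boils down to deriving a key from the top of the crashing call stack so that duplicate crashes collapse into one bucket. Here is a minimal sketch under the assumption that symbolized frames have already been extracted from each crash dump; the module and function names are hypothetical, and Peach’s real bucketing (built on !exploitable) uses richer signatures.

```cpp
// Minimal sketch of callstack-signature bucketing. Frame extraction is not
// shown; the frames below are hypothetical, and real bucketing (e.g., via
// !exploitable) uses richer signatures than a plain hash of the top frames.
#include <cstdio>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Bucket key: hash of the top few frames, so duplicate crashes land together.
std::size_t BucketKey(const std::vector<std::string>& frames, std::size_t depth = 5) {
    std::string key;
    for (std::size_t i = 0; i < frames.size() && i < depth; ++i)
        key += frames[i] + "|";
    return std::hash<std::string>{}(key);
}

int main() {
    // Hypothetical crash reports: top frames pulled from three crash dumps.
    std::vector<std::vector<std::string>> crashes = {
        { "parser.dll!ParseGlyph", "parser.dll!LoadFont", "viewer.exe!Render" },
        { "parser.dll!ParseGlyph", "parser.dll!LoadFont", "viewer.exe!Render" },
        { "script.dll!CallMethod", "viewer.exe!RunScript" },
    };
    std::map<std::size_t, int> buckets;
    for (const auto& frames : crashes)
        ++buckets[BucketKey(frames)];
    for (const auto& bucket : buckets)
        printf("bucket %zx: %d crash(es)\n", bucket.first, bucket.second);
    return 0;
}
```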

Develop Infrastructure
Unfortunately, models take an extremely long time to run because they have to launch and close an application for each file generated, and many models produce from tens of thousands to hundreds of thousands of mutations. To speed up the runs, we leveraged the Acrobat/Reader test automation team’s fabulous grid infrastructure to provision virtual machines, deploy Reader, and launch a test. Additionally, Peach has a parallelization feature that allows different parts of a model to be run in different processes. Our automation team worked with the model development team to enhance their automation grid, which now automatically partitions a Peach run and deploys it across dozens of virtual machines. At peak, we could run hundreds of thousands of iterations per day for a given model, drastically improving throughput.
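The partitioning itself amounts to giving each worker every Nth iteration of the model. The sketch below shows that idea in isolation; Peach exposes an equivalent parallel-run option, and the real grid also handles VM provisioning, deployment, and result collection, none of which is shown here.

```cpp
// Minimal sketch of partitioning a fuzzing run across workers: worker k
// owns every Nth iteration. Real grid tooling (provisioning, deployment,
// result collection) is omitted.
#include <cstdio>

void RunPartition(int workerIndex, int workerCount, int totalIterations) {
    for (int i = workerIndex; i < totalIterations; i += workerCount) {
        // ... run iteration i of the model on this worker ...
        printf("worker %d handles iteration %d\n", workerIndex, i);
    }
}

int main() {
    // Example: split 100 iterations across 4 workers (worker 0 shown here).
    RunPartition(0, 4, 100);
    return 0;
}
```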
Run the Models!
It seems obvious, but the most important part of the process is to actually run the models. Developers love to build things, and it’s easy to tweak forever, but those developers who ran the models early and often found the most bugs. We also found that it was easy to make a small mistake in modeling that invalidated the entire model. For example, there are several areas of PDF that Adobe Reader will reject early if they are malformed. In this case, the targeted code might never be executed! It’s critical to debug and troubleshoot models carefully so that precious iteration time is not wasted.
Document Process/Results
This almost goes without saying, but it’s important to keep close tabs on your process and your results. As we brought more people into the process, we found our project wiki to be invaluable. It’s now being used by other groups within Adobe as a learning resource to refine their own fuzzing efforts.
Use Results to Improve the SPLC
Each unique crasher or new vulnerability is an opportunity to drive improvements and optimization into the rest of the Secure Product Lifecycle (SPLC). The code level flaw might highlight a gap in security training, provide an opportunity for some high ROI target code reviews in the surrounding code, or suggest a new candidate for the banned API list. Fuzzing is best done not in a vacuum but as an integrated part of a broader security process.
Parting Advice
My last piece of advice:
Approach fuzzing your own product with humility.
Humility allows you to start simply. It also allows you to question a model that isn’t producing results. We often found that when a model wasn’t producing, it was because of a mistake in the model or because the tool was misconfigured rather than because the code was good. Humility also allows you to choose targets in the unbiased way an attacker will choose them. Finally, humility allows you to recognize that security vulnerabilities can sneak in under even the most watchful eye. Fuzzing is one way we help catch bugs that have survived all the other security processes. Application security should be an ongoing effort on any product, just as it is with Adobe Reader.