Inappropriate Use of Adobe Code Signing Certificate

We recently received two malicious utilities that appeared to be digitally signed using a valid Adobe code signing certificate. The discovery of these utilities was isolated to a single source. As soon as we verified the signatures, we immediately decommissioned the existing Adobe code signing infrastructure and initiated a forensics investigation to determine how these signatures were created. We have identified a compromised build server with access to the Adobe code signing infrastructure. We are proceeding with plans to revoke the certificate and publish updates for existing Adobe software signed using the impacted certificate. This only affects the Adobe software signed with the impacted certificate that runs on the Windows platform and three Adobe AIR applications* that run on both Windows and Macintosh. The revocation does not impact any other Adobe software for Macintosh or other platforms.

Sophisticated threat actors use malicious utilities like the signed samples during highly targeted attacks for privilege escalation and lateral movement within an environment following an initial machine compromise. As a result, we believe the vast majority of users are not at risk. We have shared the samples via the Microsoft Active Protections Program (MAPP) so that security vendors can detect and block the malicious utilities.

Customers should not notice anything out of the ordinary during the certificate revocation process.  Details about what to expect and a utility to help determine what steps, if any, a user can take are available on the support page on Adobe.com.

Sample Details

The first malicious utility we received is pwdump7 v7.1. This utility extracts password hashes from the Windows OS and is sometimes distributed as a single file that statically links the OpenSSL library libeay32.dll; the sample we received, however, consisted of two separate and individually signed files. We believe the second malicious utility, myGeeksmail.dll, is a malicious ISAPI filter. Unlike the first utility, we are not aware of any publicly available versions of this ISAPI filter. More details describing the impacted certificate and the malicious utilities, including MD5 hash values for the files, are included in the Adobe security advisory.

In addition to working with your security vendors to ensure you have the latest updates containing protections against these utilities, system administrators for managed Windows desktop environments can create a Software Restriction Policy (SRP) via Group Policy that disallows execution of the malicious utilities by blocking them on the basis of their individual file hashes.
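For administrators who prefer to double-check suspect files against the MD5 values published in the security advisory before adding SRP hash rules, a short script along the following lines can help. This is only a minimal sketch; the hash values below are placeholders that should be replaced with the values from the advisory.

import hashlib
import sys

def md5_of_file(path):
    # Return the MD5 hex digest of a file, reading it in chunks
    digest = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == '__main__':
    # Placeholder values; substitute the MD5 hashes listed in the Adobe advisory
    known_bad = set([
        'replace-with-md5-from-advisory-1',
        'replace-with-md5-from-advisory-2',
    ])
    for path in sys.argv[1:]:
        h = md5_of_file(path)
        status = 'MATCHES an advisory hash' if h in known_bad else 'no match'
        print('%s  %s  (%s)' % (h, path, status))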

Our internal testing indicates that moving the impacted Adobe certificate to the Windows Untrusted Certificate Store does not block threat actors from executing the malicious utilities on a victim machine. However, this configuration does have a negative impact on the user experience and execution of valid Adobe software signed with the impacted certificate. Adobe does not recommend using the Untrusted Certificate Store in this situation.

Adobe Code Signing Infrastructure

The private keys associated with the Adobe code signing certificates were stored in Hardware Security Modules (HSMs) kept in physically secure facilities. All entities authorized to request digital signatures were provisioned according to an established procedure that verified the identity of the entity and verified that the release engineering environment met the relevant assurance criteria. All code signing requests were submitted via mutually authenticated Transport Layer Security (TLS) connections to the code signing service and were performed only if the requesting entity came from the originally provisioned IP address.

Within minutes of the initial triage of the first sample, we decommissioned our signing infrastructure and began a clean-room implementation of an interim signing service for re-signing components that were signed with the impacted key after July 10, 2012 and for continuing code signing for regularly scheduled releases. The interim signing solution includes an offline human verification to ensure that all files scheduled for signature are valid Adobe software. We are in the process of designing and deploying a new, permanent signing solution.

Compromised Build Server

We have identified a compromised build server that required access to the code signing service as part of the build process. Although the machine’s configuration did not meet Adobe corporate standards for a build server, this was not caught during the normal provisioning process. We are investigating why our code signing access provisioning process failed to identify these deficiencies in this case. The compromised build server did not have rights to any public key infrastructure (PKI) functions other than the ability to make code signing requests to the code signing service.

Our forensic investigation is ongoing. To date we have identified malware on the build server and the likely mechanism used to first gain access to the build server. We also have forensic evidence linking the build server to the signing of the malicious utilities. We can confirm that the private key required for generating valid digital signatures was not extracted from the HSM. We believe the threat actors established a foothold on a different Adobe machine and then leveraged standard advanced persistent threat (APT) tactics to gain access to the build server and request signatures for the malicious utilities from the code signing service via the standard protocol used for valid Adobe software.

The build server used a dedicated account to access source code required for the build. This account had access to only one product. The build server had no access to Adobe source code for any other products and specifically did not have access to any of Adobe’s ubiquitous desktop runtimes such as Flash Player, Adobe Reader, Shockwave Player, or Adobe AIR. We have reviewed every commit made to the source repository the machine did have access to and confirmed that no source code changes or code insertions were made by the build server account. There is no evidence to date that any source code was stolen.

Next Steps

The revocation of the impacted certificate for all code signed after July 10, 2012 is planned for 1:15 pm PDT (GMT -7:00) on Thursday October 4, 2012. To determine what this means for current installations and what corrective steps (if any) are necessary, please refer to the support page on Adobe.com. The certificate revocation itself will be included in the certificate revocation list (CRL) published by VeriSign; no end user or administrator action is required to receive the updated CRL.

Through this process we learned a great deal about current issues with code signing and the impact of the inappropriate use of a code signing certificate. We plan to share our lessons learned as well as foster a conversation within the industry about the best way to protect users and minimize the impact on users in cases where the revocation of a certificate becomes necessary (as in this example). Please stay tuned for more details in the coming weeks.

* Adobe Muse and Adobe Story AIR applications as well as Acrobat.com desktop services

Collaboration for Better Software Security

At Adobe we have found that building working relationships between developers and vulnerability researchers is to the benefit of everyone–including, and especially, the general public. We will be speaking this week on this topic at the SOURCE Seattle 2012 conference. In our talk we’ll share case studies of successful developer-researcher collaboration by examining examples of security incidents including bug reports, zero-day attacks, and incident response.

If you’re going to be at SOURCE Seattle please drop by our talk: “Why Developers and Vulnerability Researchers Should Collaborate” at 12:10pm on Thursday, September 13. We’re eager to share what we have learned from our developer-researcher collaboration. And, of course, we especially look forward to catching up in hallway conversations!

Cheers,

Karthik Raman, Security Researcher, ASSET
David Rees, Lead Developer, Acrobat 3D

Adobe’s Support of “International Technology Upgrade Week”

Earlier today, Skype together with Norton by Symantec and TomTom kicked off “International Technology Upgrade Week,” a global initiative to encourage consumers to regularly download and install software updates. Keeping software up-to-date is probably the single most important advice we can give to users—consumers and businesses alike. For details on this consumer-focused initiative, we invite you to read the Adobe Reader blog post supporting it.

Join Skype, Norton by Symantec, TomTom and Adobe this week, and take the time to make sure your software is—and stays—up-to-date. For consumers outside of managed environments, choose automatic updates, if your software offers this option; or if it doesn’t, install updates when you first receive the update notification.

Inside Flash Player Protected Mode for Firefox

Today, we launched Flash Player Protected Mode for Firefox on Windows. Our Protected Mode implementation allows Flash Player to run as a low integrity process with several additional restrictions that prohibit the runtime from accessing sensitive resources. This approach is based on David LeBlanc’s Practical Windows Sandbox design and builds upon what Adobe created for the Adobe Reader X sandbox. By running the Flash Player as a restricted process, Adobe is making it more difficult for an attacker to turn a simple bug into a working exploit. This blog post will provide an overview of the technical implementation of the design.

When you first navigate to a page with Flash (SWF) content, you will notice that there are now three processes for Flash Player. This may seem odd at first, but there is a good reason for this approach. One of the goals for the design was to minimize the number of changes that were necessary in Firefox to support the sandbox. By keeping Flash Player and Firefox loosely coupled, both organizations can make changes to their respective code without the complexity of coordinating releases.

The first process under the Firefox instance is called “plugin-container.exe.” Firefox has run plugins in this separate process for quite some time, and we did not want to re-architect that implementation. With this design, the plugin container itself is only a thin shim that allows us to proxy NPAPI requests to the browser. We also use this process as our launching point for creating the broker process. Forking the broker as a separate process allows us to be independent of the browser and gives us the freedom to restrict the broker process in the future. From the broker process, we will launch the fully sandboxed process. The sandboxed process has significant restrictions applied to it. It is within the sandbox process that the Flash Player engine consumes and renders Web content.

The restrictions we apply to this sandboxed process come from the Windows OS. Windows Vista and Windows 7 provide the tools necessary to properly sandbox a process. For the Adobe Reader and Acrobat sandbox implementation introduced in 2010, Adobe spent significant engineering effort trying to approximate those same controls on Windows XP. Today, with Windows 8 just around the corner and Windows XP usage rapidly decreasing, it did not make sense for the Flash Player team to make that same engineering investment for Windows XP. Therefore, we’ve focused on making Protected Mode for Firefox available on Windows Vista and later.

For those operating systems, we take advantage of three major classes of controls:

The first control is that we run the sandboxed process at low integrity. By default, processes started by the user are executed at medium integrity. Running the process at a low integrity level prevents it from writing to most areas of the user’s local profile and the registry, which require a medium integrity level to access. This also allows us to take advantage of User Interface Privilege Isolation (UIPI), which prevents low integrity processes from sending window messages to higher integrity processes.
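As a quick way to see integrity levels in practice (an illustrative check, not part of the Flash Player implementation): on Windows Vista and later, the whoami /groups command reports the mandatory integrity label of the calling process, and it can be wrapped from Python.

import subprocess

def integrity_label():
    # Return the mandatory integrity label reported by 'whoami /groups'
    # (requires Windows Vista or later and Python 2.7+ for check_output)
    output = subprocess.check_output(['whoami', '/groups'])
    for line in output.decode('mbcs', 'replace').splitlines():
        if 'Mandatory Level' in line:
            return line.strip()
    return 'unknown'

if __name__ == '__main__':
    # A normal desktop process reports the medium level; a process inside a
    # sandbox such as the one described above would report the low level instead
    print(integrity_label())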

The second class of controls applied to the sandboxed process restricts the capabilities of the access token. A process will inherit a list of available Security Identifiers (SIDs) from the user’s security profile. These SIDs represent the different OS groups to which the user belongs. The access token contains this list of SIDs along with a set of controls for those SIDs. The Windows OS will compare the SIDs in the access token to the group permissions of the target object (e.g. a file) to determine whether access is allowed. The Windows OS allows us to define how the process SIDs are used in that comparison.

In general, a sandboxed process will need to be able to access resources directly owned by the user. However, in most cases it is unlikely that the sandbox will need the extended set of resources available to the user via group permissions. As a concrete example, your company may have a contact list on a network share that is available to everyone within your “Employees” group. It isn’t your file but you can access it because you are in the “Employees” group for your company. The Flash Player sandbox process doesn’t need to be able to directly access that file.

We identified that our sandbox process will need to access OS resources using the following SIDs: BUILTIN\Users, Everyone, the User’s Logon SID, and NTAUTHORITY\INTERACTIVE. For any other SIDs that are inherited from the user, we set the deny-only attribute to prohibit the process from accessing the resource based solely on that SID. To continue the example of the contact list on the file share, the sandboxed process would not be able to access the contact list because the file is not owned by you and the deny-only attribute on the “Employees” group SID would prevent access using your group permission. Process privileges are also limited to only the SeChangeNotifyPrivilege, which is required for the process to be notified of file system changes and for certain APIs to work correctly. The graphic below shows the permissions applied to the sandboxed process.

The third class of controls applied to the sandboxed process is job restrictions. As one example, we can prevent the sandboxed process from launching other processes by setting Active Processes to 1. We can also limit the sandbox’s ability to communicate with other processes by restricting access to USER Handles and Administrator Access. The USER Handles restriction complements UIPI by preventing the process from accessing user handles created by processes not associated with our job. Finally, we can limit the sandbox’s ability to interfere with the OS by limiting access to System Parameters, Display Settings, Exit Windows and Desktop.
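To make the job-object controls more concrete, the sketch below creates a job with an active-process limit of one and the UI restrictions mentioned above, then places the current process inside it. This is only an illustration of the underlying Windows primitives (it assumes the third-party pywin32 package), not the Flash Player code itself.

import win32api
import win32job

# Create an anonymous job object
job = win32job.CreateJobObject(None, "")

# Limit the job to a single active process so no children can be spawned
limits = win32job.QueryInformationJobObject(job, win32job.JobObjectExtendedLimitInformation)
limits['BasicLimitInformation']['LimitFlags'] |= win32job.JOB_OBJECT_LIMIT_ACTIVE_PROCESS
limits['BasicLimitInformation']['ActiveProcessLimit'] = 1
win32job.SetInformationJobObject(job, win32job.JobObjectExtendedLimitInformation, limits)

# Restrict access to USER handles, system parameters, display settings,
# ExitWindows and desktop switching, in the spirit of the controls described above
ui_flags = (win32job.JOB_OBJECT_UILIMIT_HANDLES |
            win32job.JOB_OBJECT_UILIMIT_SYSTEMPARAMETERS |
            win32job.JOB_OBJECT_UILIMIT_DISPLAYSETTINGS |
            win32job.JOB_OBJECT_UILIMIT_EXITWINDOWS |
            win32job.JOB_OBJECT_UILIMIT_DESKTOP)
win32job.SetInformationJobObject(job, win32job.JobObjectBasicUIRestrictions,
                                 {'UIRestrictionsClass': ui_flags})

# Place the current process inside the job; on Windows 7 this can fail if the
# interpreter is already running inside another job
win32job.AssignProcessToJobObject(job, win32api.GetCurrentProcess())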

More information on job limits, privilege restrictions and UIPI can be found in Part 2 of Inside Adobe Reader Protected Mode.

Once you get past OS-provided controls, the next layer of defense is Flash Player broker controls.

The OS broker process runs at medium integrity and acts as a gatekeeper between the untrusted sandbox process and the operating system. The sandbox process must ask the OS broker process for access to sensitive resources that it may legitimately need. Some examples of resources that are managed by the broker include file system access, camera access, print access and clipboard access. For each resource request, the broker consults policies that define what can and cannot be accessed. For instance, the sandbox process can request file system access through the broker. However, the policy within the broker will limit access to the file system so that the sandbox can only write to a predetermined, specific set of file system paths. This prevents the sandbox from writing to arbitrary locations on the file system. As another example, the sandbox cannot launch applications directly. If Flash Player needs to launch the native control panel, the Flash Player engine must forward the request to the broker process. The broker will then handle the details of safely launching the native control panel. Access to other OS resources such as the camera is similarly controlled by the broker. This architecture ensures that the sandboxed process cannot directly access most parts of the operating system without that access first being verified by the broker.
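The policy checks inside a broker are conceptually simple: each request is compared against an allow-list before it is honored. The sketch below illustrates the idea for file writes; it is a generic example with made-up paths, not Adobe’s policy engine.

import os

# Hypothetical allow-list of directories the sandboxed process may write to
ALLOWED_WRITE_DIRS = [
    r"C:\Users\example\AppData\LocalLow\ExamplePlugin",
    r"C:\Users\example\AppData\Local\Temp\ExamplePluginCache",
]

def is_write_allowed(requested_path):
    # Resolve '..' sequences and symlinks before comparing against the policy
    real = os.path.realpath(requested_path)
    for allowed in ALLOWED_WRITE_DIRS:
        allowed_real = os.path.realpath(allowed)
        if real == allowed_real or real.startswith(allowed_real + os.sep):
            return True
    return False

# The broker would reject anything that falls outside the policy:
print(is_write_allowed(r"C:\Users\example\AppData\LocalLow\ExamplePlugin\settings.sol"))  # True
print(is_write_allowed(r"C:\Windows\System32\drivers\etc\hosts"))                         # False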

Overall, the Flash Player sandboxing effort has been a journey of incremental improvements, with each step bringing end-users a more secure environment. We started by supporting Protected Mode within Internet Explorer, which enabled Flash Player to run as a low integrity process with limited write capabilities. From there, we worked with Google on building the Chrome sandbox, which converted Flash Player to a more robust broker implementation. This release of Flash Player Protected Mode for Firefox on Windows takes the Chrome implementation one step further by running Flash Player with job limits on the process. With Flash Player Protected Mode being based on the same technology as Adobe Reader X, we are confident that this implementation will be a significant barrier and help prevent exploits via Flash Player for Firefox users. Going forward, we plan to continue to build on this infrastructure with more sandbox projects, such as our upcoming release of Flash Player for Chrome Pepper. As we combine these efforts with others, such as the background updater, we are making it increasingly difficult for Flash Player to be targeted for malicious purposes.

Peleus Uhley, Platform Security Strategist, ASSET
Rajesh Gwalani, Security Engineering Manager, Flash Runtime

Flash Player 11.3 delivers additional security capabilities for Mac and Firefox users

Today’s release of Flash Player 11.3 brings three important security improvements:

  • Flash Player Protected Mode (“sandboxing”) is now available for Firefox users on Windows.
  • For Mac users, this release will include the background updater for Mac OS X.
  • This release and all future Flash Player releases for Mac OS X will be signed with an Apple Developer ID, so that Flash Player can work with the new Gatekeeper technology for Mac OS X Mountain Lion (10.8).

Flash Player 11.3 brings the first production release of Flash Player Protected Mode for Firefox on Windows, which we first announced in February. This sandboxing technology is based on the same approach that is used within the Adobe Reader X Protected Mode sandbox. Flash Player Protected Mode for Firefox is another step in our efforts to raise the cost for attackers seeking to leverage a Flash Player bug in a working exploit that harms end-users. This approach has been very successful in protecting Adobe Reader X users, and we hope Flash Player Protected Mode will provide the same level of protection for Firefox users. For those interested in a more technical description of the sandbox, please see the blog post titled Inside Flash Player Protected Mode for Firefox authored by ASSET and the Flash Player team.

The background updater being delivered for Mac OS X uses the same design as the Flash Player updater on Windows. If the user chooses to accept background updates, then the Mac Launch Daemon will launch the background updater every hour to check for updates until it receives a response from the Adobe server. If the server responds that no update is available, the system will begin checking again 24 hours later. If a background update is available, the background updater can download and install the update without interrupting the end-user’s session with a prompt.
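The cadence described above is easy to picture as a small polling loop. The sketch below only illustrates the schedule (hourly retries until the server answers, then a 24-hour quiet period); the URL and the response handling are placeholders rather than the actual updater protocol.

import time
import urllib2

UPDATE_CHECK_URL = "https://updates.example.com/flashplayer/check"  # placeholder URL

def check_for_update():
    # Returns True/False on a successful response, or None if the server was unreachable
    try:
        response = urllib2.urlopen(UPDATE_CHECK_URL, timeout=30).read()
    except Exception:
        return None
    return response.strip() != "none"    # placeholder response format

def updater_loop():
    while True:
        result = check_for_update()
        if result is None:
            time.sleep(60 * 60)          # no response from the server; try again in an hour
            continue
        if result:
            print("update available; download and install it silently here")
        time.sleep(24 * 60 * 60)         # next scheduled check in 24 hours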

With Mac OS X Mountain Lion (10.8), Apple introduced a feature called “Gatekeeper,” which can help end-users distinguish trusted applications from potentially dangerous applications. Gatekeeper checks a developer’s unique Apple Developer ID to verify that an application is not known malware and that it hasn’t been tampered with. Starting with Flash Player 11.3, Adobe has started signing releases for Mac OS X using an Apple Developer ID certificate. Therefore, if the Gatekeeper setting is set to “Mac App Store and identified developers,” end-users will be able to install Flash Player without being blocked by Gatekeeper. If Gatekeeper blocks the installation of Flash Player with this setting, the end-user may have been subject to a phishing attack. As a reminder, Flash Player should only be downloaded from the www.adobe.com website.
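For readers who want to see how Gatekeeper will treat a downloaded application before opening it, OS X 10.8 ships a command line tool called spctl. A small wrapper might look like the following sketch (the application path is a placeholder):

import subprocess

def gatekeeper_assessment(path):
    # Ask Gatekeeper (OS X 10.8 and later) whether it would allow the bundle at 'path' to run
    proc = subprocess.Popen(["spctl", "--assess", "--verbose", path],
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    output = proc.communicate()[0]
    return proc.returncode == 0, output

if __name__ == "__main__":
    ok, details = gatekeeper_assessment("/Applications/Example.app")  # placeholder path
    print("accepted by Gatekeeper" if ok else "rejected by Gatekeeper")
    print(details)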

ColdFusion 10 Provides Powerful New Security Tools

Today marks the release of ColdFusion 10. This release redefines many aspects of the ColdFusion security model and incorporates the principles of the Adobe Secure Product Lifecycle (SPLC). With this release, we worked to improve three major areas: patch adoption, the default configuration, and the tools developers need to create secure ColdFusion applications.

One of the most common reasons for a successful attack against a ColdFusion server is that it doesn’t have the latest security updates. In all fairness, this is not completely the administrator’s fault. Updating a ColdFusion server can be difficult due to the number of manual steps involved, and it is easy to miss a security update announcement. ColdFusion 10 makes both of these steps easier. The ColdFusion 10 administration interface now incorporates a simple “Check For Updates” button. Alternatively, the server can be configured to automatically check for updates and send an email to the administrator once one becomes available. Finally, administrators can apply a patch with a single button click in the same interface. These features make updating the server much more straightforward.

The second major area of improvement focused on making it easier for administrators to securely deploy ColdFusion 10. One of the most attractive characteristics of ColdFusion is that it has always been a simple development environment. Therefore, there were several features that favored making the early phases of development easier by leaving the complicated aspects disabled by default. The cost of this choice was that once developers were ready to deploy to production, they had to review a 35-page lockdown guide to enable and/or configure those more complicated features appropriately. With today’s release, we offer the option of starting the server in a secure-by-default configuration. This greatly simplifies the process of making a server production-ready with a secure configuration.

The last area of improvement focused on providing developers with more tools for creating secure ColdFusion applications. One example is the integrated OWASP ESAPI support in the platform. We originally started to include ESAPI in ColdFusion 9 just for our internal needs of addressing cross-site scripting (XSS). Once developers noticed the library in the update, they quickly published several blog posts on how to unofficially start using it in their ColdFusion code. Today’s release formally exposes several aspects of ESAPI through ColdFusion APIs to help developers avoid cross-site scripting vulnerabilities.

We also improved the session management capabilities in ColdFusion–another aspect of making it easier for developers to create ColdFusion applications. We have improved APIs to make it easier to set the HttpOnly and Secure flags on cookies. Session rotation has been improved through new SessionRotate and SessionInvalidate APIs.  To combat cross-site request forgery (CSRF) with active sessions, the ColdFusion team added an API for generating unique tokens for form requests. The team also added support for protecting against clickjacking attacks on active users by adding support for the X-FRAME-OPTIONS header.
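The CSRF-token pattern itself is not ColdFusion-specific: the server binds an unguessable value to the session and refuses any state-changing request that does not echo it back. A language-neutral sketch of the idea in Python follows; it is purely illustrative and is not the ColdFusion API.

import hashlib
import hmac
import os

SERVER_SECRET = os.urandom(32)   # in practice, a persistent per-application secret

def csrf_token_for(session_id):
    # Derive a per-session token; embed it in each form as a hidden field
    return hmac.new(SERVER_SECRET, session_id.encode('utf-8'), hashlib.sha256).hexdigest()

def is_valid_csrf_token(session_id, submitted_token):
    # Reject the request unless the submitted token matches the session's token
    # (hmac.compare_digest requires Python 2.7.7 or later)
    expected = csrf_token_for(session_id)
    return hmac.compare_digest(expected, str(submitted_token))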

ColdFusion 10 is a significant advancement in helping ColdFusion customers improve their secure product lifecycle processes. It is even easier to create secure content, deploy the content on a secure server and manage the server updates once it is deployed. This is only an introduction to the major security enhancements in ColdFusion 10. For more information on all the new security APIs for developers, please see the ColdFusion documentation on Security Enhancements in ColdFusion 10.  ColdFusion administrators should review the Administering Security and the Server Update sections for a complete list of server improvements.

Working Together on Keeping Our Mutual Customers Up-to-Date

No doubt, staying up-to-date on the latest security patches is critical in today’s threat environment. In addition to the many security initiatives we engage in as a vendor to help keep our products and our users safe, the single most important advice we can give to users is to always stay up-to-date. The vast majority of users who ever encountered a security problem using Adobe products were attacked via a known vulnerability that was patched in more recent versions of the software. This is why we’ve invested so much in the Adobe Reader/Acrobat update mechanism introduced in 2010, and more recently in the Flash Player background updater delivered in March of this year and used for the first time with last week’s Flash Player security update. Both update mechanisms give Windows users the option to install updates automatically, without user interaction. A Mac version of the Flash Player background updater is currently in beta and will be available very soon—stay tuned.

In the meantime, we welcome today’s initiative by Apple to encourage Mac users to stay up-to-date: With the Apple Safari 5.1.7 update released today, Apple is disabling older versions of Flash Player (specifically Flash Player 10.1.102.64 and earlier) and directing users to the Flash Player Download Center, from where they can install the latest, most secure version of Flash Player. For more information, visit http://support.apple.com/kb/HT5271.

Remember: The single most important thing we can do to protect ourselves from the bad guys is to stay up-to-date. A thank you to the security team at Apple for working with us to help protect our mutual customers!

A Basic Distributed Fuzzing Framework for FOE

Last week, CERT released a Python-based file format fuzzer for Windows called the Failure Observation Engine (FOE). It is a Windows port of their Linux-based fuzzer, the Basic Fuzzing Framework (BFF). CERT provided Adobe with an advance copy of FOE for internal testing, and we have found it to be very useful. One of the key features of FOE is its simplicity. The configuration file is very straightforward, which makes it easy to introduce to new teams. We have also used the “copy” mode of FOE to help automate triaging large sets of external reports. It is a great tool to have for dumb fuzzing. For this blog, I am going to discuss a simple Python wrapper I created during my initial testing of the tool which helped to coordinate running FOE across multiple machines. This approach allows you to pull seed files from a centralized location. You can also view the status of all of the fuzzing runs and their results from the same location. If you are not interested in writing a distributed fuzzing framework, then you might want to stop reading because the rest of this blog is all about code. :-)

The goal of this distributed fuzzing framework design was to create something simple and lightweight, since I was experimenting with a new tool. I set a personal limit of keeping the project to around 1,000 lines of code in order to scope my time investment. That said, I also wanted to build something that I could easily scale later in the event that I liked it enough to invest more time. For the client-side code, I used Python since that was already required for FOE. On the server side, I had a Linux/Apache/MySQL/Perl (LAMP) server. Knowing that everyone has their own preference for server-side authoring, I am only going to describe the server-side architecture rather than providing the Perl source. Nothing in the server-side code is so complicated that a Web developer couldn’t figure out how to do an implementation in the language of their choice from this description. While I designed this for testing the FOE fuzzer, only one file in the entire system is FOE-specific, which makes the infrastructure reusable for other fuzzers. The current name of the main script is “dffiac.py” because I thought of this project as a “Distributed Fuzzing Framework in a Can.”

For this design, all of the tracking logic is consolidated on the centralized server. The Python script will issue requests for data using simple GETs and POSTs over HTTP. The server will respond to the requests with basic XML. The fuzzing seed files are hosted on the server in a public web server directory from which they can be downloaded. Identified crashes will be uploaded to the server and placed in a public web server directory. Both the client-side and server-side code are agnostic with regard to the format of the seed files and the targeted application. Therefore, this should be relatively easy to set up in any infrastructure.

 

The database design

In this design, the MySQL server coordinates the runs across all the different machines. You first need a table containing all the files that you want to fuzz. At a bare minimum, it needs a unique primary key (fid), the name of the file and its location on the web server. I currently have a database of more than 60,000 SWF files that are sub-categorized based on type so that I can focus fuzzing on specific types of SWF files. However, name and location will get you started with fuzzing.

 

seed_files

Field Type Description
fid Integer (primary key, autoincrement) The unique File ID for this entry
name VARCHAR The filename
location VARCHAR The relative web directory for the file (e.g. “/fuzzing/files/”)
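To get files into seed_files in the first place, a short one-off script is enough. The sketch below assumes the MySQLdb module and placeholder connection details, and walks a local directory that is also exposed by the web server.

import os
import MySQLdb  # third-party MySQL client for Python

SEED_DIR = "/var/www/html/fuzzing/files/"   # local path that the web server exposes
WEB_LOCATION = "/fuzzing/files/"            # matching relative web directory

conn = MySQLdb.connect(host="localhost", user="fuzzer",
                       passwd="replace-me", db="dffiac")
cur = conn.cursor()
for name in os.listdir(SEED_DIR):
    if name.lower().endswith(".swf"):
        cur.execute("insert into seed_files (name, location) values (%s, %s)",
                    (name, WEB_LOCATION))
conn.commit()
conn.close()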

 

The next thing that you will need is a table to track all of the fuzzing runs. A “run” is defined as one or more servers testing with the same FOE configuration file against a defined set of seed files. There are multiple ways in which you can define the selected seed files for the run. For instance, you may want to use FOE against multiple types of applications; for that scenario, you might have a different seed_files table for each file type. To support the need for different seed_files tables, the design of run_records requires that you provide the “table_name” that will be used for this run. Once a seed_files table is selected, it may be necessary to further restrict the run to a subset of files within that table. Therefore, the design requires that you provide a “type” parameter which denotes the method for selecting files from the seed_files table. The value of type can include values such as “all”, “range” or any other sub-category you want to define. As an example, this particular run may be a “range” type that starts at start_fid and stops at end_fid.

 

run_records

Field Type Description
rid Integer (primary key, autoincrement) The unique ID for this run
name VARCHAR The human readable name for the run
description VARCHAR A description for the run (e.g. config or mutation used, # of iterations, etc.)
type VARCHAR Values can include (all, range, etc)
table_name VARCHAR The name of the seed_files table that will be used for testing
start_fid Integer The first fid from seed_files to be fuzzed in this run
end_fid Integer The last fid from seed_files to be fuzzed in this run
current_fid Integer This tracks the next fid to be tested during the run

 

For every run, you will have multiple servers running FOE. For each server instance, it will be necessary to track the server name, when it started, the current status of the server, and when it last provided an update. The status will include values such as “running” and “complete.”  You can infer whether a machine has died based on whether it has been too long since the timestamp for the last_update field was modified.

 

server_instances

Field Type Description
siid Integer (primary key, autoincrement) The unique server instance ID
server_name VARCHAR The name of the server (e.g. hostname + IP address)
status VARCHAR Is it running or has it completed?
start_time timestamp When did this instance start?
last_update timestamp When was the last request from this instance?
rid Integer What run_record is this instance associated with?
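Because last_update is refreshed every time an instance reports a result, spotting dead machines is a single query. For example (again a sketch with placeholder connection details):

import MySQLdb

conn = MySQLdb.connect(host="localhost", user="fuzzer",
                       passwd="replace-me", db="dffiac")
cur = conn.cursor()
# Flag any instance still marked 'running' that has not checked in for an hour
cur.execute("""select siid, server_name, last_update
               from server_instances
               where status = 'running'
                 and last_update < now() - interval 1 hour""")
for siid, server_name, last_update in cur.fetchall():
    print("instance %s on %s looks dead (last update %s)" % (siid, server_name, last_update))
conn.close()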

 

Lastly, you will need a table to record the results. The script will record the server_instance ID (siid) where the crash was found in case there are issues with reproducing the crash. This will allow QA to retest on the original machine where the crash occurred. It is also necessary to track which run was able to identify the crash. The rid is not recorded because it can already be extrapolated from the siid; according to database normalization rules, redundant information should not be stored in tables. In this design, the script will record a result in fuzz_records regardless of whether a crash was identified. This allows you to track which files have been tested against which FOE configurations. If a crash is identified, the web server directory where the crash result was stored is also recorded.

 

fuzz_records

Field Type Description
frid Integer (primary key, autoincrement) The unique fuzz record ID
fid Integer The seed_files ID for this entry
siid Integer The server instance ID for this entry
crash Boolean Whether a crash was recorded during this test
location VARCHAR Where the crash result was stored (e.g. /results/run_id/)

 

The config file

You will start the Python script by providing a simple configuration file on the command line: “python dffiac.py dffiac.cfg”. The configuration file is in the same format as the FOE configuration file and contains the following:

 

dffiac.cfg

[foeoptions]
python_location=C:\Python26\python.exe
foe_location=C:\FOE\foe.py
config_location=C:\FOE\configs\my_foe_config.cfg

 

[runoptions]
run_id=1
web_server=http://my.internal.server.com
upload_cgi=/fuzzers/crash_uploader.cgi
action_cgi=/fuzzers/action_handler.cgi

 

[logoptions]
log_dir=C:\dffiac\logs\

 

The foeoptions section tells the script where to find the Python executable, the main FOE script, and the FOE config file you will use for this run. The runoptions section provides the run id (rid) the database is using to track this run along with the location of the web server, the path to the action_handler.cgi and the path to the CGI that will handle the file uploads. The logoptions section allows you to specify where the script will log local information regarding the run. The logs directory needs to exist prior to starting the script. The config_location and run_id are likely the only two elements that will change from run to run.

 

The transaction flow

For this next section, we will review the transactions between the dffiac.py script and the web server. The web server will read in the GET parameters, execute the relevant SQL query and return the results as XML. All but one request is handled by the action_handler defined in the dffiac.cfg config file. The upload of the crash results is handled by the upload_cgi defined in the dffiac.cfg config file.

Once dffiac.py has started and been initialized by the config file, the script will begin sending requests to the server. An “action” parameter informs the action_handler CGI which query to perform. The server will always respond to the Python script with the relevant information for the request in a simple XML format.

 

The first HTTP request from the Python code will be to gather all the information regarding the run_id provided in the config file:

GET /fuzzers/action_handler.cgi?action=getRunInfo&rid=1

 

The web server will then perform this SQL query with the rid that was provided:

select type, start_fid, end_fid from run_records where rid = ?

 

The results from the query will be used to return the following XML (assuming the run is defined as the range of fids from 1-25):

<xml>
  <run_type>range</run_type>
  <start_fid>1</start_fid>
  <end_fid>25</end_fid>
</xml>

 

Now that dffiac.py has the information for the run, it will then inform the web server that the run is starting:

GET /fuzzers/action_handler.cgi?action=recordServerStart&rid=1&serverName=server1

 

This HTTP request will result in the following SQL query:

insert into server_instances (server_name,status,start_time,rid) values (?,'running',NOW(),?)

 

The insert_id from this query (siid) becomes the unique identifier for this instance and is returned for use in later queries:

<xml>
  <siid>1</siid>
</xml>

 

Now that this instance has officially registered to contribute to this run, the Python script will begin requesting individual files to test:

GET /fuzzers/action_handler.cgi?action=getNextFid&rid=1&run_type=range

 

The corresponding SQL query will vary depending on how you have defined your run. For this example, we will assume that this is a basic run that will incrementally walk through the file IDs in the seed_files table. To accomplish this, we create an SQL variable called “value” and assign it the current_fid. By recording the value of the current fid and incrementing the “value” in a single statement, we can avoid a race condition when multiple servers are running.

update run_records set current_fid = current_fid + 1 where rid = ? and @value := current_fid;

 

At this point, “@value” is set to 1, which is the fid the Python script will test, and the current_fid in the database table has been incremented to 2. The web server can then fetch “@value” with the following SQL command:

select @value;

 

Since the process of asking for the next fid will automatically increment the value of current_fid, the value of current_fid will eventually exceed the value of the end_fid in the database table. While it may seem weird, it doesn’t hurt the process. This can be allowed to occur or you can add a little more server-side logic to have the server return -1 as the current_fid to stop the run when end_fid is reached.

 

The “select @value” result will be returned to the Python script as the current_fid available for testing:

<xml>
  <current_fid>1</current_fid>
</xml>

 

The Python script will then compare the current_fid with the end_fid that it received earlier to determine whether to stop testing.

 

Once we have the fid of the file that we will test, we can then fetch the information for that specific file:

GET /fuzzers/action_handler.cgi?action=getFileInfo&rid=1&fid=1

 

Using the rid, the web server can query the run_records table to find the table_name that contains the seed files.

select table_name from run_records where rid = ?

 

Assuming the result of that query is saved in the variable “$table_name”, the web server can construct the query to retrieve the file name and the directory location that correspond to the file id:

"select name, location from " . $table_name . " where fid = ?"

 

The web server will return the file name and location with the following XML:

<xml>
  <name>seed.txt</name>
  <location>/fuzzers/files/</location>
</xml>

 

Now that the location of the seed file is known, it can be downloaded by dffiac.py and saved in the FOE seeds directory. The FOE fuzzer is then started, and dffiac.py waits for FOE to finish testing that seed file. Once FOE testing has completed, the result will need to be recorded by sending the fid and a boolean value indicating whether a crash was identified with that test:

GET /fuzzers/action_handler.cgi?action=recordResult&siid=1&fid=1&crash=1

 

This will result in the following query:

insert into fuzz_records (siid,fid,crash) values (?,?,?)

 

The web server will also record that it has received an update from this fuzzing server instance in the server_instances table to let us know that it is still alive and processing:

update server_instances set last_update = NOW() where siid = ?

 

The result is recorded regardless of success or failure so that you can track which files have been successfully tested with which configs. You could infer this from the run_records, but if a machine dies, a file might be skipped. The server-side code will take the insert_id from the fuzz_records statement (frid) and return the following XML:

<xml>
  <frid>1</frid>
</xml>

 

If there was a crash, the Python script will zip up the crash directory, base64 encode the file and POST it to the upload_cgi identified in the dffiac configuration file. The script will leave the zip file on the fuzzing server if an error is detected during the upload. Along with the zip file, it will send the rid and frid. The rid is used to store files in a web server directory unique to that run. The frid is sent so that the action_handler can update the fuzz_records entry with the location of the uploaded crash file (e.g. “/results/1/zip_file_name.zip”) in the following SQL query:

update fuzz_records set location = ? where frid = ?

 

A successful upload will result in the following XML:

<xml>
  <success>1</success>
</xml>

 

A failed upload can return the description of the error to the client with the following XML:

<xml>
  <error>Replace me with the actual error description</error>
</xml>

 

The dffiac.py script will then continue retrieving new files and testing them with FOE until the end_fid is reached. Then the final call to the web server will record that this fuzzing server instance has completed its run and has stopped:

GET /fuzzers/action_handler.cgi?action=recordRunComplete&siid=1

 

The web server will record the completion with the following SQL query:

update server_instances set status='complete', last_update=NOW() where siid = ?

 

The web server will respond to this last request with the following XML:

<xml>
  <success>1</success>
</xml>

 

The last XML response is currently ignored by the Python script but a more robust implementation could double-check for errors.

 

The Python code

The logic for the distributed fuzzing framework is split into one main file (dffiac.py) and three libraries contained in a /libs directory. We’ll start with the libraries. The code below is the library that contains the utilities for creating the zip file of the crash result.

 

ZipUtil.py (30 lines)

import zipfile
import os

 

class ZipUtil:

  #Create a zip file and add everything in path_ref
  def createZipFile(self, path_ref, filename):
    zip_file = zipfile.ZipFile(filename, 'w')

    #Check to see if path_ref is a file or folder
    if os.path.isfile(path_ref):
      zip_file.write(path_ref)
    else:
      self.addFolder(zip_file, path_ref)

    zip_file.close()

  #Recursively add folder contents to the zip file
  def addFolder(self, zip_file, folder):
    for file in os.listdir(folder):

      #Get path of child element
      child_path = os.path.join(folder, file)

      #Check to see if the child is a file or folder
      if os.path.isfile(child_path):
        zip_file.write(child_path)
      elif os.path.isdir(child_path):
        self.addFolder(zip_file, child_path)

 

The second library will base64 encode the zip file prior to uploading it to the web server via a POST method.  On the server side, you will need to base64 decode the file before writing it to disk.

 

PostHandler.py (77 lines)

import mimetools
import mimetypes
import urllib
import urllib2
import base64

 

class PostHandler(object):

 

  def __init__(self,webServer,uploadCGI):
    self.web_server = webServer
    self.upload_cgi = uploadCGI
    self.form_vars = []
    self.file_attachments = []
    self.mime_boundary = mimetools.choose_boundary()
    return

 

  #Add a form field to the request
  def add_form_vars(self, name, value):
    self.form_vars.append((name, value))
    return

 

  #Get the mimetype for the attachment
  def get_mimetype(self,filename):
    mimetype = mimetypes.guess_type(filename)[0] or 'application/octet-stream'
    return(mimetype)

  #Add a base64 encoded file attachment
  def append_file(self, var_name, filename, file_ref, mimetype=None):
    raw = file_ref.read()
    body = base64.standard_b64encode(raw)
    if mimetype is None:
      mimetype = self.get_mimetype(filename)
    self.file_attachments.append((var_name, filename, mimetype, body))

  #Get the body of the request as a string
  def get_request_body(self):
    lines = []
    section_boundary = '--' + self.mime_boundary

 

    # Add the form fields
    for (name, value) in self.form_vars:
      lines.append(section_boundary)
      lines.append('Content-Disposition: form-data; name="%s"' % name)
      lines.append('')
      lines.append(value)

 

    # Add the files to upload
    for var_name, filename, content_type, data in self.file_attachments:
      lines.append(section_boundary)
      lines.append('Content-Disposition: file; name="%s"; filename="%s"' % 
        (var_name, filename))
      lines.append('Content-Type: %s' % content_type)
      lines.append('Content-Transfer-Encoding: Base64')
      lines.append('')
      lines.append(data)

 

    #Add the final boundary
    lines.append('--' + self.mime_boundary + '--')
    lines.append('')

 

    #Combine the list into one long string
    CRLF = '\r\n'
    return CRLF.join(lines)
  #Send the final request
  def send_request(self):
    request = urllib2.Request(self.web_server + self.upload_cgi)
    content_type = 'multipart/form-data; boundary=%s' % self.mime_boundary
    request.add_header('Content-type',content_type)

 

    form_data = self.get_request_body()
    request.add_header('Content-length',len(form_data))
    request.add_data(form_data)

 

    result = urllib2.urlopen(request).read()
    return result
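For completeness, the receiving end only has to pull the form fields out of the POST, base64 decode the attachment and write it under a per-run directory. My upload CGI is written in Perl, but a rough Python equivalent would look like the sketch below (the results directory is a placeholder, and the field names match the PostHandler code above).

#!/usr/bin/env python
import base64
import cgi
import os

RESULTS_ROOT = "/var/www/html/results/"   # placeholder; must be writable by the web server

form = cgi.FieldStorage()
frid = form.getfirst("frid")
rid = form.getfirst("rid")
upload = form["fname"]                    # the base64-encoded zip attachment

# A real implementation should validate rid and frid before using them
run_dir = os.path.join(RESULTS_ROOT, rid)
if not os.path.isdir(run_dir):
    os.makedirs(run_dir)

out_path = os.path.join(run_dir, os.path.basename(upload.filename))
out = open(out_path, "wb")
out.write(base64.b64decode(upload.value))
out.close()

# Here the script would also update fuzz_records.location for this frid
# and then return the success or error XML shown later in the post
print("Content-Type: text/xml\n")
print("<xml><success>1</success></xml>")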

 

 

The third library handles the communication between the client and server. It will generate the GET requests and parse the XML responses.

 

ActionHandler.py (94 lines)

import urllib
import urllib2
from xml.dom.minidom import parseString

 

class ActionHandler:

 

  #Initialize with the information from the config file
  def __init__(self,options,localLog):
    self.webServer = options['runoptions']['web_server']
    self.uploadCGI = options['runoptions']['upload_cgi']
    self.actionCGI = options['runoptions']['action_cgi']
    localLog.write("Configured web server\n")

 

  #Parse the XML for the requested text value
  def getText(self,nodelist):
    rc = []
    for node in nodelist:
      if node.nodeType == node.TEXT_NODE:
        rc.append(node.data)
    return ''.join(rc)

 

  #Make a web request to the server with the provided GET parameters
  def retrieveInfo(self,values):
    url = self.webServer + self.actionCGI
    data = urllib.urlencode(values)
    response = urllib2.urlopen(url + '?' + data)
    xml = response.read()
    response.close()
    return(xml)

 

  #Get the information for the rid provided in the config file
  def getRunInfo(self,rid):
    values = {'action':'getRunInfo',
      'rid': rid}
    xml = self.retrieveInfo(values)
    dom = parseString(xml)
    run_type = self.getText(dom.getElementsByTagName("run_type")[0].childNodes)
    start_fid = self.getText(dom.getElementsByTagName("start_fid")[0].childNodes)
    end_fid = self.getText(dom.getElementsByTagName("end_fid")[0].childNodes)
    return (run_type,start_fid,end_fid)

 

  #Record that this server instance is starting a run
  def recordServerStart(self,rid,serverName):
    values = {'action':'recordServerStart',
      'rid': rid,
      'serverName':serverName}
    xml = self.retrieveInfo(values)
    dom = parseString(xml)
    lastrowid = self.getText(dom.getElementsByTagName("siid")[0].childNodes)
    return (lastrowid)

 

  #Record that the server is now complete with its tests
  def recordRunComplete(self,siid):
    values = {'action':'recordRunComplete',
      'siid': siid}
    xml = self.retrieveInfo(values)

 

  #Get the fid for the next file to be fuzzed
  def getNextFid(self,rid,fid,run_type):
    values = {'action':'getNextFid',
      'fid':fid,
      'run_type':run_type,
      'rid':rid}
    xml = self.retrieveInfo(values)
    dom = parseString(xml)
    current_id = self.getText(dom.getElementsByTagName("current_fid")[0].childNodes)
    return current_id

 

  #Get the file name and location for the selected fid
  def getFileInfo(self,rid,fInfo):
    values = {'action':'getFileInfo',
      'rid':rid,
      'fid':fInfo.fid}
    xml = self.retrieveInfo(values)
    dom = parseString(xml)
    fInfo.name = self.getText(dom.getElementsByTagName("name")[0].childNodes)
    fInfo.location = self.getText(dom.getElementsByTagName("location")[0].childNodes)

 

  #Record the result from the fuzzing test
  def recordResult(self,siid,fid,result):
    values = {'action':'recordResult',
      'siid':siid,
      'fid':fid,
      'crash':result}
    xml = self.retrieveInfo(values)
    dom = parseString(xml)
    frid = self.getText(dom.getElementsByTagName("frid")[0].childNodes)
    return frid

 

Finally, we get to the main file which is responsible for reading the config file and driving the fuzzing run. This is the only file that is specific to the FOE fuzzer.

 

dffiac.py (177 lines)

import os
import shutil
import socket
import subprocess
import sys
import urllib2
import ConfigParser
import time

 

sys.path.append("libs")

 

from ZipUtil import ZipUtil
from PostHandler import PostHandler
from ActionHandler import ActionHandler

 

#This will track the fid, name and location of the file
class FileInfo:
  pass

 

#Convert the options in the config file to lists
def parse_options(config):
  options = {}
  for section in config.sections():
    options[section] = {}
    for (option, value) in config.items(section):
      options[section][option] = value
  return options

#Create a local text file for logging
def openLog(options):
  localLogDir = options['logoptions']['log_dir']
  runName = options['runoptions']['run_id']
  timestamp = int(time.time())
  localLog = open(localLogDir + runName + '_' + str(timestamp) + '.txt', 'w')
  localLog.write("Starting run: " + runName + " at " + str(timestamp) + "\n")
  return localLog

 

#Close the local text file log
def closeLog(localLog):
  localLog.write("COMPLETE\n")
  localLog.close()

 

#Download the next file to be fuzzed
def getNextFile(fInfo, options, foe_options, localLog):
  u = urllib2.urlopen(options['runoptions']['web_server'] + fInfo.location + fInfo.name)
  localFile = open(foe_options['runoptions']['seedsdir'] + "\\" + fInfo.name, 'wb')
  localFile.write(u.read())
  localFile.close()
  localLog.write('Created file: ' + foe_options['runoptions']['seedsdir'] + "\\" + fInfo.name + '\n')

 

#Store the results in a zip file
def createZip(outputDir,filename):
  zipTool = ZipUtil()
  zipTool.createZipFile(outputDir, filename)
  zipFile = open(filename,'rb')
  return zipFile

 

#Post the zip file to the server
def postZip(options,frid,rid,filename,zipFile):
  form = PostHandler(options['runoptions']['web_server'], options['runoptions']['upload_cgi'])
  form.add_form_vars('frid',frid)
  form.add_form_vars('rid',rid)
  form.append_file('fname',filename,zipFile)
  result = form.send_request()
  return result

 

if __name__ == "__main__":
  if (len(sys.argv) < 2):
    print "usage: %s <runconfig.cfg>" % sys.argv[0]
    exit(1)

 

  #Read the dffiac config file
  configFile = sys.argv[1]
  if not os.path.exists(configFile):
    print "config file doesn't exist: %s" % configFile
    exit(1)
  config = ConfigParser.SafeConfigParser()
  config.read(configFile)

 

  #Read the foe config file
  options = parse_options(config)
  config2 = ConfigParser.SafeConfigParser()
  config2.read (options['foeoptions']['config_location'])
  foe_options = parse_options(config2)

 

  #Set up logging
  localLog = openLog(options)

 

  #Configure the web server
  aHandler = ActionHandler(options, localLog)

 

  #Get the information for this run
  rid = options['runoptions']['run_id']
  (run_type,start_fid,end_fid) = aHandler.getRunInfo(rid)

 

  #Record server start
  hostName = socket.gethostname()
  hostIP = socket.gethostbyname(hostName)
  serverName = hostName + "_" + hostIP
  siid = aHandler.recordServerStart(rid, serverName)
  localLog.write("Starting as server instance: " + siid + "\n")

 

  #Get the first file to be processed
  fInfo = FileInfo()
  fInfo.fid = aHandler.getNextFid(rid,start_fid,run_type)
  localLog.flush()

 

  #loop until done
  while (int(fInfo.fid) <= int(end_fid)):
    #Get the location information for the current file
    aHandler.getFileInfo(rid,fInfo)

 

    #Download and store the file
    getNextFile(fInfo,options,foe_options,localLog)

    outputDir = foe_options['runoptions']['outputdir'] + "\\" + foe_options['runoptions']['runid']

 

    #Run fuzzer
    exitCode = subprocess.call(options['foeoptions']['python_location'] + " " + options['foeoptions']['foe_location'] + " " + options['foeoptions']['config_location'], shell=True)

 

    #Check for completion of a successful run
    if exitCode != 0:
      localLog.write("Error running foe on fid " + fInfo.fid + "\n")
    else:
      dirList = os.listdir(outputDir)

 

      #Detect whether bugs were found
      if len(dirList) > 2:

 

        #Record the result in fuzz_records
        frid = aHandler.recordResult(siid,fInfo.fid,1)
        localLog.write("Recording frid: " + frid + "\n")

 

        #Store the results in a zip file
        filename = frid + "-" + fInfo.name + ".zip"
        file_path = os.path.join(os.getcwd(), filename)
        zipFile = createZip(outputDir,file_path)

 

        #Post the zip file back to the server
        result = postZip(options,frid,rid,filename,zipFile)
        zipFile.close()

 

        #Make sure the file got there OK
        if result.find("error") == -1:
          localLog.write("Results successfully uploaded.\n")
          os.remove(file_path)
        else:
          localLog.write("There was an error in the upload: " + result + "\n")

 

        localLog.write("Found bugs with " + fInfo.fid + "\n")
      else:
        #Record no bugs found in the directory
        aHandler.recordResult(siid,fInfo.fid,0)
        localLog.write("No bugs found with " + fInfo.fid + "\n")

 

    #The if len(dirlist) check on the results is complete
    #Erase files so that FOE starts clean on the next run
    os.remove(foe_options['runoptions']['seedsdir'] + "\\" + fInfo.name)
    shutil.rmtree(outputDir)
    localLog.flush()

 

    #Get the next FID
    fInfo.fid = aHandler.getNextFid(rid,fInfo.fid,run_type)

 

  #The while loop is complete
  #Record this run instance as being complete
  aHandler.recordRunComplete(siid)

 

  #Close the local file log
  closeLog(localLog)

 

This blog is only meant to describe how you can stand up a basic distributed fuzzing framework based on FOE fairly quickly in approximately 1,000 lines of code. The client-side code turned out to be 378 lines, my server-side action_handler CGI was 150 lines and the upload CGI was 72 lines of Perl. That is enough to get the script to run based on information from a database. With the remaining 400 lines, I created a CGI to display the status of my runs and a CGI to generate a run. You will also want to write a script to mirror the dffiac.cfg and FOE configuration file across machines. Over time, I expect that you would make this design more robust for your particular infrastructure and needs. You can also expand this infrastructure for your other fuzzers with some modifications to the main file. What I provide here is just enough to help you get started performing distributed fuzzing with a small amount of coding and the FOE fuzzer.

 

Permission for this blog entry is granted as CCplus, http://www.adobe.com/communities/guidelines/ccplus/commercialcode_plus_permission.html

 

Straight from the Source: SOURCE Boston

Karthik here from Adobe PSIRT. My colleague from the Adobe Acrobat team, Manish Pali, and I will be speaking next week at the SOURCE Boston conference. In our talk, we’ll cover some of the processes behind incident response at Adobe, including our security community outreach via the Microsoft Active Protections Program (MAPP), and automation strategies and solutions from the trenches for new and known vulnerability reports.

Demo alert! Manish is going to demo one of his tools for incident-triage automation—we’re hoping this and other aspects of the talk will benefit our friends on other incident response teams.

Please swing by our talk, if you’ll be at SOURCE Boston. We look forward to catching up in hallway conversations.

See you in Boston,

Karthik

Background on Security Bulletin APSB12-08

Today we released Security Bulletin APSB12-08 along with corresponding updates for Adobe Reader and Acrobat. We’d like to highlight a few changes we are making with today’s releases.

Rendering Flash (SWF) Content in Adobe Reader and Acrobat 9.5.1

First off, starting with the Adobe Reader and Acrobat 9.5.1 updates, Adobe Reader and Acrobat 9.x on Windows and Macintosh will use the Adobe Flash Player plugin version installed on the user’s system (rather than the Authplay component that ships with Adobe Reader and Acrobat) to render any Flash (SWF) content contained in PDF files. We added an Application Programming Interface (API) to both Adobe Reader/Acrobat and Flash Player to allow Adobe Reader/Acrobat to communicate directly with a Netscape Plugin Application Programming Interface (NPAPI) version of Flash Player installed on the user’s system. From a security perspective, this means that Adobe Reader/Acrobat 9.x users will no longer have to update Adobe Reader/Acrobat each time we make available an update for Flash Player. This will be particularly beneficial to customers in managed environments because fewer updates help reduce the overhead for IT administration.

If Adobe Reader or Acrobat 9.5.1 is installed on a system that does not have the NPAPI version of Flash Player installed and the user opens a PDF file that includes Flash (SWF) content, a dialog will prompt the user to download and install the latest Flash Player. (Browsers such as Firefox, Opera and Safari use the NPAPI version of Flash Player as opposed to the ActiveX version of Flash Player used by Internet Explorer. Chrome uses a bundled version of Flash Player, even if there is an NPAPI version of Flash Player installed on the system.)

We are currently working on integrating the same API into Adobe Reader and Acrobat X, and will follow up with another blog post once this functionality is available in version X.

Rendering 3D Content in PDF Files

We also changed the default behavior in Adobe Reader and Acrobat 9.5.1 to disable the rendering of 3D content. Since the majority of consumers do not typically open PDF files that include 3D content, and 3D content in untrusted documents has been a previous vector of attack, we have disabled this functionality by default starting with version 9.5.1. Users have the option to enable 3D content, but a Yellow Message Bar will flag potentially harmful documents in the event that untrusted documents attempt to render 3D content. IT administrators in managed environments will also have the option of turning this behavior off for trusted documents.

More information on the two changes to content rendering described above is available in the Adobe Reader and Acrobat 9.5.1 release notes.

Further Alignment of the Adobe Reader/Acrobat Update Cycle with Microsoft’s Model

In June 2009, we shipped our first quarterly security update for Adobe Reader and Acrobat. Since then, we have come a long way in putting mitigations into place that make Adobe Reader and Acrobat a less attractive attack target. Sandboxing Adobe Reader and Acrobat X, in particular, has led to better-than-expected results. Attackers have indicated through their target selection thus far that the extra effort required to attack version X is not currently worth it. Additionally, we have seen a lower volume of vulnerability reports overall against Adobe Reader and Adobe Acrobat. Given the shift in the threat landscape and the lower volume of vulnerability reports, we have revisited the decision to follow a strict quarterly release cycle.

After three years of shipping a security update once a quarter and announcing the date of the next update the same day we ship the current update, we are making a change. We are shifting to a model that more closely aligns with the familiar “Microsoft Patch Tuesday” cadence. We will continue to publish a prenotification three business days before we release a security update to Adobe Reader and Acrobat. We will continue to publish security updates on the second Tuesday of the month. We will continue to be flexible and respond “out of cycle” to urgent needs such as a zero-day attack. What we are discontinuing is the quarterly cadence and the pre-announcement of the next scheduled release date in the security bulletin for the previous release. We will publish updates to Adobe Reader and Acrobat as needed throughout the year to best address customer requirements and keep all of our users safe.

A Note on the Update Priority Ratings in APSB12-08

Finally, in today’s Security Bulletin, we rated Adobe Reader and Acrobat 9.5.1 for Windows as a “Priority 1” update, while Adobe Reader and Acrobat X (10.1.2) was rated a “Priority 2” update. This was an interesting decision, and we thought we would provide some background information: Although there are no exploits in the wild targeting any of the vulnerabilities addressed in Adobe Reader 9.5.1, Adobe Reader 9.x continues to be a target for attackers, so, for users who cannot update to Adobe Reader X, we feel that urgently updating Adobe Reader 9.x remains a must to stay ahead of potential attacks.

Since the release of Adobe Reader X, Protected Mode mitigations (or the Protected View mitigations in Adobe Acrobat X version 10.1 and later) continue to be the best way to block potentially malicious behavior in PDF files. Therefore, a “Priority 2” designation is appropriate for the Adobe Reader X and Acrobat X 10.1.2 updates. Adobe Reader and Acrobat for Macintosh and Linux have not historically been a target of attacks, and therefore are also assigned a “Priority 2.”