Posts in Category "Security"

Adobe @ DefCon 2017

A few weeks ago we joined fellow members of the Adobe security team at DefCon 2017. The conference has grown in attendance over the years as security has become mainstream. We were looking forward to the great line-up of briefings, villages, and capture the flag (CTF) contests – DefCon never disappoints.

Here are some of the briefings this year that we found interesting and valuable to our work here at Adobe.

“A New Era of SSRF – Exploiting URL Parser in Trending Programming Languages” by Orange Tsai

The best part of this presentation was that it was hands-on rather than theoretical – something we look forward to in a DefCon talk. It covered zero-day vulnerabilities in the URL parsers and requesters of widely used languages such as Java, Python, Ruby, and JavaScript, which was especially relevant since Adobe is a multilingual shop, and it also covered mitigation strategies. Orange Tsai followed the talk with an interesting demo in which he chained four different vulnerabilities – SSRF, CRLF injection, an unsafe marshal in memcached, and a Ruby gem – to achieve remote code execution (RCE) on GitHub Enterprise. He called the combined technique “Protocol Smuggling,” and it earned him a $12,500 bounty from GitHub.
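A flavor of the parser-confusion bugs discussed (a contrived illustration in Python, not one of the actual zero-days from the talk): a URL validator and a requester can disagree about which host a URL points to.

```python
from urllib.parse import urlparse

# A crafted URL of the kind shown in the talk: different components can
# "win" depending on how a parser treats spaces, '@', and '#'.
url = "http://1.1.1.1 &@2.2.2.2# @3.3.3.3/"

# Python's urlparse takes everything after the last '@' in the authority
# as the host, so an allowlist check here would see 2.2.2.2 ...
assert urlparse(url).hostname == "2.2.2.2"

# ... while an HTTP library with a more lenient parser may connect to
# 1.1.1.1 instead, bypassing the SSRF allowlist. The mitigation is to
# validate and fetch with the SAME parser and reject suspicious URLs.
```
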

“First SHA-1 collision” presented by Elie Bursztein from Google

This was one of the presentations attendees were most looking forward to – there was a significant wait just to get in. It was super helpful because the team demoed how an attacker could forge PDF documents that have the same SHA-1 hash yet different content. We really appreciated the effort Google’s anti-abuse team put into the research. The attack is based on cryptanalysis and is estimated to be about 100,000 times faster than a brute-force search. For the tech community, these findings emphasize the need to reduce SHA-1 usage; Google has advocated deprecating SHA-1 for many years, particularly for signing TLS certificates. The team also briefly discussed safer hashing algorithms such as SHA-256 and bcrypt, and spent some time on the future of hash security.
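The colliding PDFs themselves are far too large to reproduce here, but the property at stake is easy to state in code (plain hashlib; the inputs below are our own and do not collide):

```python
import hashlib

doc_a = b"Contract: pay $100"
doc_b = b"Contract: pay $999"

# For these two inputs every hash differs, as expected ...
assert hashlib.sha1(doc_a).hexdigest() != hashlib.sha1(doc_b).hexdigest()
assert hashlib.sha256(doc_a).hexdigest() != hashlib.sha256(doc_b).hexdigest()

# ... but the research produced two REAL PDF files whose SHA-1 digests
# are identical, so a SHA-1 signature on one is equally valid for the
# other. No such collision is known for SHA-256, which is one reason to
# migrate signatures and certificates away from SHA-1.
```
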

“Friday the 13th: JSON Attacks!” by Alvaro Muñoz and Oleksandr Mirosh

The briefing kicked off with examples of deserialization attacks and an explanation of how 2016 came to be known as the year of the “Java Deserialization Apocalypse.” The talk focused on JSON libraries that allow arbitrary code execution upon deserialization of untrusted data, followed by a walkthrough of deserialization vulnerabilities in some of the most common Java and .NET libraries. The speakers emphasized that the serialization format is irrelevant to these attacks – it could be binary data, text such as XML or JSON, or even a custom format – and that no serializer can be trusted with untrusted data. They provided guidance on detecting whether a serializer could be attacked, and ended with mitigation advice for avoiding configurations that leave serialization libraries exposed. This briefing was particularly valuable because it helped us better understand JSON attacks, how to discover vulnerable deserialization library configurations, and how to mitigate known issues.
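The core point – the deserializer, not the format, is what makes untrusted data dangerous – can be illustrated in Python (our own toy example; the talk’s gadgets were in Java and .NET libraries): `json.loads` only ever builds plain data, while `pickle` will happily run code embedded in the stream.

```python
import json
import pickle

# A "gadget": unpickling an object of this class invokes its __reduce__,
# which tells pickle to call an arbitrary callable with arguments.
class Gadget:
    def __reduce__(self):
        # Benign payload for demonstration; an attacker would choose
        # something like os.system instead of print.
        return (print, ("code ran during deserialization!",))

payload = pickle.dumps(Gadget())
pickle.loads(payload)   # prints the message: code execution on load

# json, by contrast, can only yield dicts/lists/strings/numbers/bools:
data = json.loads('{"user": "alice", "admin": true}')
assert data == {"user": "alice", "admin": True}
```
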

“CableTap: Wirelessly Tapping Your Home Network” by Chris Grayson, Logan Lamb, and Marc Newlin

At this briefing, the presenters discussed 26 critical vulnerabilities they discovered in some of the major ISP-provided network devices. They also showcased some cool attack chains enabling someone to take complete control over these devices and their network.

Hacking Village

One of the other major highlights of DefCon 25 was the Voting Machine Village. For the first time ever, US voting machines were brought into the Hacking Village, and many vulnerabilities were found in these machines over the course of the conference – it was reported that the machines were hacked in under two hours. The Recon Village also never fails to deliver the best in social engineering exploits, reminding us of the importance of security training and education. Additionally, the demo labs were thought-provoking, and we found a lot of tools to potentially add to our toolkits. A couple of the cool ones were Android Tamer by Anant Srivastava, which focuses on Android security, and EAPHammer by Gabriel Ryan, a toolkit for targeted evil twin attacks against WPA2-Enterprise networks.

Overall these industry events provide a great opportunity for our own security researchers to mingle with and learn from the broader security community. They help keep our knowledge and skills up-to-date. They also provide invaluable tools to help us better mitigate threats and continue to evolve our Adobe SPLC (Secure Product Lifecycle) process.

Lakshmi Sudheer & Narayan Gowraj
Security Researchers

Leveraging Security Headers for Better Web App Security

Modern browsers support quite a few HTTP headers that provide an additional layer in any defense-in-depth strategy. If present in an HTTP response, these headers enable compatible browsers to enforce certain security properties. In early 2016, we undertook an Adobe-wide project to implement security headers.

Implementing a security control in a medium- to large-scale organization, consisting of many teams and projects, presents its own unique challenges: Should we go after select headers or all of them? Which headers do we select? How do we encourage adoption, and how do we verify it? Here are a few interesting observations from our quest to implement security headers:

Browser vs. Non-browser clients

Compatible browsers enforce the security policies contained in security headers, while incompatible browsers or non-browser clients simply ignore them. However, additional considerations may be unique to your setup. For example, Adobe has a range of clients: full browser-based, headless browser-based (e.g., Chromium Embedded Framework), and desktop applications. Implementing the security property required by the HTTP Strict Transport Security (HSTS) header (i.e., all traffic is sent over TLS) for all such clients requires a combination of an enabling HSTS header and 302 redirects from HTTP to HTTPS. This is less secure, because incompatible headless browser-based clients and desktop applications ignore an HSTS header sent by the server, and if the landing page URL is provided over HTTP, those clients will continue to send requests over HTTP unless the redirect is in place. Updating all such clients to use HTTPS landing URLs, which would require updating clients installed on customers’ machines, is a thorny problem. The key is to understand the unique aspects of your applications and customer base to determine how each header can be implemented.
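As an illustration, a server-side setup along these lines combines the redirect with the header (a hypothetical nginx sketch – the domain and max-age value are ours, not a prescription):

```nginx
# Redirect all plain-HTTP traffic to HTTPS (catches clients that
# ignore or never saw the HSTS header).
server {
    listen 80;
    server_name example.com;
    return 302 https://example.com$request_uri;
}

# The HTTPS server sends the HSTS header for compliant browsers.
server {
    listen 443 ssl;
    server_name example.com;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    # ... ssl_certificate, ssl_certificate_key, locations ...
}
```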

Effort ROI

Adding some security headers requires less effort than others. In our assessment, the order of increasing difficulty and required investment is: X-XSS-Protection, X-Content-Type-Options, X-Frame-Options, HSTS, and Content Security Policy (CSP). For the first two, we have not found any reason not to use them, and so far we have not seen any team need extra work when turning them on. Enabling an X-Frame-Options header may require some additional consideration – for example, are there legitimate reasons to allow framing at all, or from the same origin? CSP is by far the most comprehensive header, subsuming all the others, and depending on your goals it may require the most effort to deploy. For example, if cross-site scripting prevention is the goal, it might require refactoring code to separate out inline JavaScript.

We decided to adopt a phased implementation. For Phase 1: X-XSS-Protection, X-Content-Type-Options, X-Frame-Options. For Phase 2: HSTS, CSP (we haven’t yet considered the Public Key Pinning Extension for HTTP header). Dividing massive efforts into phases offers two main benefits:

  1. ability to make initial phases lightweight to build momentum with product teams – later phases can focus on more complex tasks
  2. the opportunity to iron out quirks in processes and automation.
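As a concrete reference for Phase 1, a response might carry headers along these lines (illustrative, commonly used values – not Adobe-specific guidance):

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
```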

Application stack vs. Operations stack

Who should we ask to enable these headers? Should it be the developers writing application code or the operations engineers managing servers? There are pros and cons to each option (e.g., specifying many security headers in server configurations such as nginx, IIS, and Apache is programming-language agnostic and doesn’t require changes to the code). Introducing such headers could even be made part of hardened deployment templates (i.e., Chef recipes). However, some headers may require a deeper understanding of the application landscape, such as which sites, if any, should be allowed to frame it. We provided both options to teams, and the table below lists some useful resources that helped inform our decisions.

Server-based defenses: nginx, IIS, Apache
Programming language-based defenses: Ruby on Rails

Effecting the change

We leveraged our existing Secure Product Lifecycle engagement process with product teams to effect this change. We use a security backlog as part of this process to capture pending security work for a team. Adding the security header work to this backlog ensured that it would be completed as part of our regular processes.

Checking Compliance

Having provided the necessary phased guidance, the last piece of the puzzle was an automated way to verify correct implementation of the headers. Manually checking their status would be not only effort-intensive but also repetitive and ineffective. Using publicly available scanners is not always an option, due to accessibility restrictions on stage/dev environments or to our specific phased implementation guidance. As noted here, Adobe has undertaken several company-wide security initiatives; a major one is the Security Automation Framework (SAF), which allows us to create assertions in various programming languages and run them periodically with reporting. First, we organically compiled a list of endpoints for Adobe web properties discovered through other initiatives. Then the Phase 1 and Phase 2 header checks were implemented as SAF assertions that run weekly across the sites on this list. These automated scans have been instrumental in getting a bird’s-eye view of adoption progress and give us the information needed to start a dialogue with product teams.
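To illustrate the idea (not the actual SAF code – the function name and header list here are our own simplification), a header assertion might look like:

```python
# Simplified sketch of a security-header assertion: given the response
# headers of an endpoint, report which expected headers are missing.
PHASE1_HEADERS = [
    "X-XSS-Protection",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def audit_headers(response_headers):
    """Return the list of Phase 1 headers absent from a response."""
    present = {name.lower() for name in response_headers}
    return [h for h in PHASE1_HEADERS if h.lower() not in present]

# Example: a response missing X-Frame-Options fails the assertion.
headers = {
    "Content-Type": "text/html",
    "X-XSS-Protection": "1; mode=block",
    "X-Content-Type-Options": "nosniff",
}
assert audit_headers(headers) == ["X-Frame-Options"]
```

In a real scan, the headers dict would come from an HTTP request to each endpoint on the list, and a non-empty result would flag the site for follow-up.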

Security headers provide a useful layer in any defense-in-depth strategy, and most are relatively easy to implement. The size of an organization and its number of products can introduce challenges that are not specific to security headers. Within any organization, it’s critical to plan security initiatives so they form a malleable ecosystem that eases the implementation of future initiatives. It does indeed take a village to implement security controls in an organization as big as Adobe.

Prithvi Bisht
Senior Security Researcher

Help Protect Your Devices: Update Your Software

When you receive a notification from your computer that it’s time to update your software, do you immediately accept it, or do you delay the update because you’re in the middle of something? If you’re like 64 percent of American computer-owning adults, you recognize how critical software updates are and update your software immediately. Another 30 percent update their software depending on what the update is for. That’s 94 percent who recognize software updates and at least consider taking action when prompted.

We asked a nationally representative sample of ~2,000 computer-owning adults in the United States about their behaviors and knowledge when it comes to cybersecurity. Interestingly, attitudes toward updating software have changed for the better in the last five years. It seems consumers are more likely to update their software immediately, indicating that updates are becoming easier to install and that computer-owning adults are better informed about how and why updating software is so important for protecting their identity and devices. While a majority promptly update the software on their computers, 83 percent are equally or more diligent about updating their smartphones. No matter what type of device you own – computer, tablet, or smartphone – it’s critical to keep all your software up to date, as soon as an update is available.

Here are some additional insights from the survey on current practices regarding software updates and also some tips and reminders on why you should be updating your software – no matter the device – regularly.

Keep Your Software Up to Date (It’s Critical)

Across the industry, we continue to see how attackers are finding holes and exploiting software that is not up-to-date. In fact, attackers may target vulnerabilities for months – or even years – long after patches have been made available. Keeping your software up-to-date is a critical part of protecting your devices, online identity and information. The good news is that according to our survey results, 78 percent of consumers recognize the importance of keeping software up-to-date. Among those who typically update their software, 68 percent indicate that both security and crash control are top reasons for updating.

No matter the reason, keeping your software up to date should become part of your regular routine:

  • Select automatic updates. When possible, select automatic updates for your software – that way, your devices will update automatically without adding another item to your to-do list.
  • Select notification reminders. If you prefer to know exactly what updates are being installed, you can set notifications to remind you to update the software yourself. Our survey results show that 1 in 3 people update on the first notification; interestingly, adults of the Baby Boomer generation are most likely to update their software after one prompt while those tech-savvy Millennials are more likely to need 3 to 5 notifications to update software. For all those not updating on the first prompt, we suggest selecting automatic software updates when possible.

Legitimate Software Updates

While a majority of our survey respondents noted that they frequently update their software, there was a very small group that indicated the reason for not updating their software is because they don’t trust that the update is legitimate. If you share this same concern, here are a few tips and reminders to help ensure you are downloading legitimate software:

  • Set automatic software updates. To help ensure that you are downloading legitimate software, when possible select for your software to be automatically updated. One less thing to do on your end that keeps your computer in check!
  • Check for the software update directly on the company website. When updates or patches to software are available, companies typically have updates on their website. If you’re unsure about a notification, double check on the software company’s website.
  • Be wary of notifications via email. Some companies may send notifications of software updates via email. Be cautious with these, as attackers often use fake email messages that may contain viruses that appear to be software updates. If you’re unsure about the software updates you receive as an email, check the company’s website to download the latest patches. And don’t fall victim to phishing ploys! See our blog post on tips for recognizing phishing emails.

Staying One Step Ahead

The technology industry is constantly moving forward, and the task of updating software should continue to progress and be made as simple as possible for users – especially since the majority of exploits appear to target software installations that are not up-to-date on the latest security updates. Adobe strongly recommends that users install security updates as soon as they are available – or, better yet, select the option for automatic updates, which installs updates in the background without requiring further user action.

Dave Lenoe
Director, Adobe Secure Software Engineering Team (ASSET)

Survey Infographic (PDF download)

About the Survey (PDF download)

OWASP, IR, ML, and Internal Bug Bounties

A few weeks ago, I traveled to the OWASP Summit just outside of London. The OWASP Summit is not a conference; it is a remote offsite event for OWASP leaders and the community to brainstorm on how to improve OWASP. There was a series of one-hour sessions on how to tackle different topics and what OWASP could offer to make the world better. This is a summary, from my personal perspective, of some of the more interesting sessions I was involved in. These views are not in any way official OWASP views, nor are they intended to represent the views of everyone who participated in the respective working groups.

One session that I attended dealt with incident response (IR). Oftentimes, incident response guidelines are written for the response team: they cover how to do forensics, analyze logs, and other response tasks handled by the core IR team. However, there is an opportunity to create incident response guidelines targeting the security champions and developers that the IR team interacts with during an incident. In addition, OWASP can relate how this information ties back into its existing secure development best practices. As an example, one of the first questions from an IR team is how customer data flows through the application, which allows the team to quickly determine what might have been exposed. In theory, a threat model should contain this type of diagram and could be an immediate starting point for the discussion. After the IR event, it is also good for the security champion to hold a post mortem review of the threat model to determine how the process could have better identified the exploited risk. Many of the secure development best practices recommended in the security development lifecycle support both proactive and reactive security efforts. A security champion should also know how to contact the IR team and regularly sync with them on current response policies. These types of recommendations from OWASP could help companies ensure that the security champions on development teams are prepared to assist the IR team during a potential incident. Many of the ideas from our discussion were captured in this outcomes page from the session:

Another interesting discussion from the summit dealt with internal bug bounties. This is an area where Devesh Bhatt and I were able to provide input based on our experiences at Adobe. Devesh has participated in our internal bounty programs as well as several external bounty programs. Internal bug bounties have been a hot topic in the security community. On the one hand, many companies have employees who participate in public bounties, and it would be great to focus those skills on internal projects; employees who participate in public bug bounties often include general developers outside of the security team who aren’t directly tied into internal security testing efforts. On the other hand, you want to avoid creating conflicting incentives within the company. If this topic interests you, Adobe’s Pieter Ockers will be discussing the internal bug bounty program he created in detail at the O’Reilly Security conference in New York this October:

Lastly, there was a session on machine learning. This has been a recent research area of mine, since it is the natural next step for applying the data collected from our security automation work. Adobe also applies machine learning in projects like Sensei. Even though the session was on Friday, there was a large turnout of OWASP leaders. We discussed whether there were ways to share machine learning training data sets and methodologies for generating them using common tools. One observation was that many people are still very new to the topic, so I decided to start by drafting a machine learning resources page for OWASP. The goal of the page isn’t to copy existing introductory content onto an OWASP page, where it would quickly become dated, nor to drown the reader with a link to every machine learning reference ever created. Instead, it focuses on a small selection of references useful for getting someone started with the basic concepts; the reader can then find their own content that goes deeper into the areas that interest them. For instance, Coursera provides an 11-week Stanford course on machine learning, but that would overwhelm someone seeking only a high-level overview. The first draft of my straw man proposal for the OWASP page can be found here: As OWASP creates more machine learning content, this page could eventually become a useful appendix. The page is only a proposal and is not an official OWASP project at this stage. Additional ideas from the workshop discussion can be found on the outcomes page:

OWASP is a community-driven group where all are invited to participate. If these topics interest you, feel free to join an OWASP mailing list and start contributing ideas. There were several other great sessions at the summit, and you can find a list of outcomes from each session here:

Peleus Uhley
Principal Scientist

Lessons Learned from Improving Transport Layer Security (TLS) at Adobe

Transport Layer Security (TLS) is the foundation of security on the internet. As our team evolved from a primarily consultative role to solving problems for the entire company, we chose TLS as one of the areas to improve. The goal of this blog post is to share the lessons we learned from this project.

TLS primer

TLS is a commonly used protocol to secure communications between two entities. If a client is talking to a server over TLS, it expects the following:

  1. Confidentiality – The data between the client and the server is encrypted and a network eavesdropper should not be able to decipher the communication.
  2. Integrity – The data between the client and the server should not be modifiable by a network attacker.
  3. Authentication – In the most common case, the identity of the server is authenticated by the client during the establishment of the connection via certificates. You can also have 2-way authentication, but that is not commonly used.
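On the client side, these expectations are usually delegated to a TLS library. As a quick illustration (Python's standard ssl module; the host and port in the commented usage are placeholders), the default client context enforces the authentication expectation out of the box:

```python
import ssl

# create_default_context() returns a client context with secure defaults:
# certificate verification and hostname checking are both enabled, which
# covers the "authentication" expectation above. Confidentiality and
# integrity come from the cipher suite negotiated during the handshake.
ctx = ssl.create_default_context()

assert ctx.check_hostname is True            # server identity is verified
assert ctx.verify_mode == ssl.CERT_REQUIRED  # a valid certificate is mandatory

# To connect, you would wrap a socket, e.g.:
# with socket.create_connection(("example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#         print(tls.version(), tls.cipher())
```
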

Lessons learned

Here are the main lessons we learned:

Have a clearly defined scope

Instead of trying to boil the ocean, we decided to focus on around 100 domains belonging to our Creative Cloud, Document Cloud and Experience Cloud solutions. This helped us focus on these services first versus being drowned by the thousands of other Adobe domains.

Have clearly defined goals

TLS is a complicated protocol, and the definition of a “good” TLS configuration keeps changing over time. We wanted simple, easy-to-test, pass/fail criteria for all requirements on the endpoints in scope. We ended up choosing the following:

SSL Labs grade

SSL Labs does a great job of testing a TLS configuration and boiling it down to a grade. Grade ‘A’ was viewed as a pass and anything else was considered a fail. There might be some endpoints that had valid reasons to support certain ciphers that resulted in a lower grade. I will talk about that later in this post.

Apple Transport Security

Apple has a minimum bar for TLS configuration that all endpoints must meet if iOS apps are to connect to them. We reviewed these criteria and deemed all the requirements sensible, so we decided to make them a requirement for all endpoints, regardless of whether an endpoint is accessed from an iOS app. We found a few corner cases where a configuration would get an SSL Labs grade A yet fail ATS (and vice versa), which we resolved on a case-by-case basis.

HTTP Strict Transport Security

HSTS (HTTP Strict Transport Security) is an HTTP response header that tells compliant clients to always use HTTPS when connecting to a website. It addresses the problem of the initial request being made over plain HTTP when a user types in a site name without specifying the protocol, and it helps prevent the hijacking of connections. When a compliant client receives this header, it uses only HTTPS for connections to the website for the max-age value set by the header; the max-age countdown is reset every time the client receives the header again. You can read the details in RFC 6797.
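For example, a header of the form `Strict-Transport-Security: max-age=31536000; includeSubDomains` pins compliant clients to HTTPS for a year. A minimal sketch of how a client might read the directives (our own helper, not taken from any particular library):

```python
def parse_hsts(header_value):
    """Parse an HSTS header value into a dict of directives.
    Directive names are case-insensitive per RFC 6797."""
    directives = {}
    for part in header_value.split(";"):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        # Valueless directives (e.g. includeSubDomains) map to True.
        directives[name.strip().lower()] = value.strip() or True
    return directives

policy = parse_hsts("max-age=31536000; includeSubDomains")
assert int(policy["max-age"]) == 31536000   # remember HTTPS for one year
assert policy["includesubdomains"] is True  # policy covers subdomains too
```
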

Complete automation of testing workflow

We wanted minimal ongoing human cost for these tests. This project allowed us to utilize our Security Automation Framework: once the scans are set up and scheduled, they run daily and the results are delivered to us via email, Slack, etc. After the initial push to get all endpoints passing all tests, it was very easy to catch any drift when we saw a failed test. Here is what these results look like in the SAF UI:

The Devil is in the Details

From a high level, improving TLS configurations seems fairly straightforward. However, it gets more complicated in the details. I want to talk a little about how we went about removing ciphers that were hampering our SSL Labs grade.

To understand the issues, you have to know a little about the TLS handshake. During the handshake, the client and the server decide which cipher to use for the connection. The client sends the list of ciphers it supports in the ClientHello message, and if server-side preference is enabled, the cipher listed highest in the server’s preference order that is also supported by the client is picked. In our case, the cipher causing the grade degradation was listed fairly high, so when we looked at the ciphers used for connections, it appeared in a significant percentage of the traffic. We didn’t want to simply remove it, because of the risk of dropping support for some customers without notification. Therefore, we initially moved it to the bottom of the supported cipher list, which reduced the percentage of traffic using that cipher to a very small value. We were then able to identify that a partner integration was responsible for all the remaining traffic on this cipher, and we reached out to that partner so they could make appropriate changes before we disabled it. If you found this interesting, you might want to consider working for us on these projects.
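The knobs involved can be seen in, for example, Python's ssl module (a sketch; the OpenSSL cipher string below is illustrative, not our production list):

```python
import ssl

# A server-side context: enable server cipher-order preference so the
# server's ranking, not the client's, decides which suite is chosen.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.options |= ssl.OP_CIPHER_SERVER_PREFERENCE

# Restrict the TLS 1.2-and-below suites to ECDHE key exchange with
# AES-GCM. Demoting a cipher to the end of this string (instead of
# deleting it) is the "move to the bottom before removing" step above.
ctx.set_ciphers("ECDHE+AESGCM:!aNULL:!eNULL")

names = [c["name"] for c in ctx.get_ciphers()]
assert names, "cipher string matched no suites"
```
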

Future work

In the future, we want to expand the scope of this project. We also want to expand the requirements for services that have achieved the requirements described in this post. One of the near-term goals is to get some of our domains added to the HSTS preload list. Another goal is to do more thorough monitoring of certificate transparency logs for better alerting for new certificates issued for Adobe domains. We have also been experimenting with HPKP. However, as with all new technologies, there are issues we must tackle to continue to ensure the best balance of security and experience for our customers.

Gurpartap Sandhu
Security Researcher

Getting Secrets Out of Source Code

Secrets are valuable information targeted by attackers to gain access to your systems and data. Secrets can be encryption keys, passwords, private keys, AWS secrets, OAuth tokens, JWT tokens, Slack tokens, API secrets, and so on. Unfortunately, secrets are sometimes hardcoded or stored alongside source code by developers. Even though the source code may be kept securely in source control management (SCM) tools, that is not a suitable place to store secrets. For instance, it is not practical to tightly restrict access to source code repositories, since engineering teams collaborate to write and review code. Any secrets in source code will also be copied to clones or forks of your repository, making them hard to track and remove. If a secret is ever committed to code stored in SCM tools, you should consider it potentially compromised. There are other risks, too: source code could be accidentally exposed on public networks through a simple misconfiguration, or released software could be reverse-engineered by attackers. For all of these reasons, you should make sure secrets are never stored in source code and SCM tools.

Security teams should take a holistic approach to tackle this problem. First and foremost, educate developers to not hardcode or store secrets in source code. Next, look for secrets while doing security code reviews. If you are using static analysis tools, then consider writing custom rules to automate this process. You could also have automated tests that look for secrets and will fail the code audit if they are found. Lastly, evaluate existing source code and enumerate secrets that are already hardcoded or stored along with source code and migrate them to password management vaults.

However, finding all of the secrets potentially hiding in source code could be challenging depending on the size of the organization and number of code repositories. There are, fortunately, a few tools available to help find secrets in source code and SCM tools. Gitrob is an open source tool that aids organizations in identifying sensitive information lingering in Github repositories. Gitrob iterates over all the organization’s repositories to find files that might contain sensitive information using a range of known patterns. Gitrob can be an efficient way to more quickly identify files which are known to contain sensitive information (e.g. private key file *.key).

Gitrob can, however, generate thousands of findings, which can lead to a number of false positives or negatives. I recommend complementing Gitrob with other tools such as truffleHog, developed by Dylan Ayrey, or git-secrets from AWS Labs. These tools do deeper searches of code and may help you cut down on some of the false reports.

Our team chose to complement Gitrob with custom Python scripts that look at file content. The scripts identify secrets based on regular expression patterns and entropy. The patterns were created from the secrets found through Gitrob and our understanding of the structure of the secrets in our code. For example, to find an AWS access key ID and secret, I used a regular expression suggested by Amazon in one of their blog posts:

Pattern for access key IDs: (?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9])
Pattern for secret access keys: 
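Put together, a scanner along these lines (our own simplified sketch; the entropy threshold is an assumption we tuned, not a published value) combines the access-key-ID regex above with a Shannon entropy check to filter out low-entropy false positives:

```python
import math
import re

# Access key ID pattern quoted above from the Amazon blog post.
ACCESS_KEY_ID = re.compile(r"(?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9])")

def shannon_entropy(s):
    """Bits of entropy per character; random-looking secrets score high."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def find_candidate_secrets(text, threshold=3.0):
    """Flag strings that match the key-ID pattern AND look high-entropy.
    The 3.0-bit threshold is our assumption; tune it to your code base."""
    return [m for m in ACCESS_KEY_ID.findall(text)
            if shannon_entropy(m) > threshold]

line = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'  # Amazon's doc example
assert find_candidate_secrets(line) == ["AKIAIOSFODNN7EXAMPLE"]
assert find_candidate_secrets("AAAAAAAAAAAAAAAAAAAA") == []  # low entropy
```
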

To scale, you can share these tools and guidance with product teams and have them run the scans and triage the findings. You should also create clear guidelines for product teams on how to store secrets and move them securely into established secret management tools (e.g., password vaults) for your production environment. The basic principle is to never store passwords or secrets in clear text and to make sure they are encrypted at rest and in transit until they reach the production environment. Secrets that were stored insecurely must be invalidated or rotated. Password management tools may also provide features such as audit logs, access control, and secret rotation, which can further help keep your production environment secure.

Given how valuable secrets are – and how much harm they can cause your organization if they were to unwittingly get out – security teams must proactively tackle this problem. No team wants to be the one responsible for leaking secrets through their source code.

Karthik Thotta Ganesh
Security Researcher

Developing an Amazon Web Services (AWS) Security Standard

Adobe has an established footprint on Amazon Web Services (AWS).  It started in 2008 with Managed Services, and expanded greatly with the launch of Creative Cloud in 2012 and the migration of Business Catalyst to AWS in 2013. In this time, we found challenges in keeping up with AWS security review needs.  In order to increase scalability, it was clear we needed a defined set of minimum AWS security requirements and tooling automation for auditing AWS environments against it.  This might sound simple, but like many things, the devil was in the details. It took focused effort to ensure the result was a success.  So how did we get here?  Let’s start from the top.

First, the optimal output format needed to be decided upon.  Adobe consists of multiple Business Units (BUs), and there are many teams within those BUs.  We needed security requirements that could be broadly applied across the company as well as to acquisitions: requirements that applied not only to existing and new services across BUs but would also be future-proof. Given these constraints, creating a formal standard for our teams to follow was the best choice.

Second, we needed to build a community of stakeholders in the project. For projects with broad impact such as this, it’s best to have equally broad stakeholder engagement.  I made sure we had multiple key representatives from all the BUs (leads, architects, & engineers) and that various security roles were represented (application security, operational security, incident response, and our security operations center).  This led to many strong opinions about direction. Thus, it was important to be an active communication facilitator for all teams to ensure their needs were met.

Third, we reviewed other efforts in the security industry to see what information we could learn.  There are many AWS security-related whitepapers from various members of the security community.  There have been multiple security-focused AWS re:Invent presentations over the years.  There’s also AWS’s Trusted Advisor and Config Rules, plus open source AWS security assessment tools like Security Monkey from Netflix and Scout2 from NCC Group.  These are all good places to glean information from.

While all of these varied information sources are fine and dandy, is their security guidance relevant to Adobe?  Does it address Adobe’s highest risk areas in AWS?  Uncritically following public guidance could result in the existence of a standard for the sake of having a standard – not one that delivered benefits for Adobe.

A combination of security community input, internally and externally documented best practices, and a search for patterns and possible areas of improvement was used to define an initial scope for the standard.  At the time the requirements were being drafted, AWS had over 30 services. It was unreasonable (and unnecessary) to create security guidance covering all of them.  The initial scope for the draft minimum security requirements was AWS account management, Identity & Access Management (IAM), and compute (Amazon Elastic Compute Cloud (EC2) and Virtual Private Cloud (VPC)).

We worked with AWS stakeholders within Adobe through monthly one-hour meetings to reach agreement on the minimum-bar security requirements for AWS, which were to be applied to all of Adobe’s AWS accounts (dev, stage, prod, testing, QA, R&D, personal projects, etc.).  We knew we’d want a higher security bar for environments that handle more sensitive classes of data or are customer-facing. We held a two-day AWS security summit purely focused on defining these higher-bar security requirements, to ensure all stakeholders had their voices heard and to avoid any contention as the standard was finalized.

As a result of the summit, the teams were able to define higher security requirements that covered account management/IAM and compute (spanning architecture, fleet management, data handling, and even requirements beyond EC2/VPC including expansion into AWS-managed services such as S3, DynamoDB, SQS, etc.).

I then worked with Adobe’s Information Systems Security Policies & Standards team to publish an Adobe-wide standard.  I transformed the technical requirements into an appropriate standard, which was then submitted to Adobe’s broader standards teams to review.  After this review, it was ready for formal approval.

The necessary teams agreed to the standard and it was officially published internally in August 2016.  I then created documentation to help teams use the AWS CLI to audit for and correct issues from the minimum bar requirements. We also communicated the availability of the standard and began assisting teams towards meeting compliance with it.
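As an illustration of the kind of check that audit documentation can cover, the sketch below flags console-enabled IAM users without MFA by parsing the CSV report that `aws iam get-credential-report` returns. The function and sample data are hypothetical, though the column names (`user`, `password_enabled`, `mfa_active`) follow the IAM credential report format:

```python
import csv
import io

def users_missing_mfa(report_csv):
    """Return console-enabled IAM users with MFA disabled, given the
    decoded CSV body of an IAM credential report."""
    reader = csv.DictReader(io.StringIO(report_csv))
    return [row["user"] for row in reader
            if row["password_enabled"] == "true" and row["mfa_active"] == "false"]

# Abbreviated sample report (the real report has many more columns,
# which DictReader simply ignores here).
SAMPLE = """user,password_enabled,mfa_active
alice,true,true
bob,true,false
svc-deploy,false,false
"""
```

Running `users_missing_mfa(SAMPLE)` on the sample report returns `['bob']`: the one user who can log in to the console but has no MFA device, while service accounts with no password are left alone.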

Overall the standard has been well received by teams.  They understand the value of the standard and its requirements in helping Adobe ensure better security across our AWS deployments.  We have also developed timelines with various teams to help them achieve compliance with the standard. And, since our AWS Security Standard was released we have seen noted scalability improvements and fewer reported security issues.  This effort continues to help us in delivering the security and reliability our customers expect from our products and services.

Cynthia Spiess
Web Security Researcher

Evolving an Application Security Team

A centralized application security team, similar to ours here at Adobe, can be the key to driving the security vision of the company. It helps implement the Secure Product Lifecycle (SPLC) and provides security expertise within the organization.  To stay current and have impact, a security team also needs to be in a mode of continuous evolution and learning. At the inception of such a team, its impact is usually localized to the applications that the team reviews.  As the team matures, it can start to influence the security posture of the whole organization. I lead the team of security researchers at Adobe. Our team’s charter is to provide security expertise to all application teams in the company.  We have seen our team mature over time, and as we look back, we would like to share the phases of evolution that we have gone through along the way.

Stage 1:  Dig Deeper

In the first stage, the team is in the phase of forming and acquires the required security skills through hiring and organic learning. The individuals on the team bring varied security expertise, experience, and a desired skillset to the team. During this stage, the team looks for applicability of security knowledge to the applications that the engineering teams develop.  The security team starts this by doing deep dives into the application architecture and understanding why the products are being created in the first place. Here the team understands the organization’s application portfolio, observes common design patterns, and then starts to build the bigger picture on how applications come together as a solution.   Sharing learnings within the team is key to accelerating to the next stage.

By reviewing applications individually, the security team starts to understand the “elephants in the room” better and is also able to prioritize based on risk profile. A security team will primarily witness this stage during inception. But, it could witness it again if it undergoes major course changes, takes on new areas such as an acquisition, or must take on a new technical direction.

Stage 2: Research

In the second stage, the security team is already able to perform security reviews for most applications, or at least a thematically related group of them, with relative ease.  The security team may then start to observe gaps in their security knowhow due to things such as changes in broader industry or company engineering practices or adoption of new technology stacks.

During this phase, the security team starts to invest time in researching any necessary security tradeoffs and relative security strength of newer technologies being explored or adopted by application engineering teams. This research and its practical application within the organization has the benefit of helping to establish security experts on a wider range of topics within the team.

This stage helps security teams stay ahead of the curve, grow security subject matter expertise, update any training materials, and give more meaningful security advice to other engineering teams. For example, Adobe’s application security team was initially skilled in desktop security best practices. It evolved its skillset as the company launched products centered around cloud and mobile platforms. This newly acquired skillset required further specialization when the company started exploring more “bleeding edge” cloud technologies such as containerization for building micro-services.

Stage 3: Security Impact

As security teams become efficient in reviewing solutions and can keep up with technological advances, they can then start looking at homogeneous security improvements across their entire organization.  This has the potential of a much broader impact on the organization. Therefore, this requires the security team to be appropriately scalable to match possible increasing demands upon it.

If a security team member wants to make this broader impact, the first step is identification of a problem that can be applied to a larger set of applications.  In other words, you must ask members of a security team to pick and own a particularly interesting security problem and try to solve it across a larger section of the company.

Within Adobe, we were able to identify a handful of key projects that fit the above criteria for our security team to tackle. Some examples include:

  1. Defining the Amazon Web Services (AWS) minimum security bar for the entire company
  2. Implementing appropriate transport layer security (TLS) configurations on Adobe domains
  3. Validating that product teams did not store secrets or credentials in their source code
  4. Enforcing use of browser-supported security headers (e.g. X-XSS-Protection, X-Frame-Options, etc.) to help protect web applications.

The scope of these solutions varied from just business critical applications to the entire company.

Some guidelines that we set within our own team to achieve this were as follows:

  1. The problem statement, like an elevator pitch, should be simple and easily understandable by all levels within the engineering teams – including senior management.
  2. The security researcher was free to define the scope and choose how the problem could be solved.
  3. The improvements made by engineering teams should be measurable in a repeatable way. This would allow for easier identification of any security regressions.
  4. Existing channels for reporting security backlog items to engineering teams must be utilized versus spinning up new processes.
  5. Though automation is generally viewed as a key to scalability for these types of solutions, the team also had flexibility to adopt any method deemed most appropriate. For example, a researcher could front-load code analysis and only provide precise security flaws uncovered to the application engineering team.  Similarly, a researcher could establish required “minimum bars” for application engineering teams helping to set new company security standards. The onus is then placed on the application engineering teams to achieve compliance against the new or updated standards.

For projects that required running tests repeatedly, we leveraged our Security Automation Framework to automate tasks such as validation. For others, clear standards were established for application engineering teams; once the security team reached a defined level of confidence about compliance with those standards, automated validation could be introduced.
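A repeatable validation task like the security-header check above can be quite small. The sketch below is a hypothetical example, not our actual framework code; the headers it requires are common recommendations, and the function simply reports which ones are absent from an HTTP response's header map:

```python
# Common security response headers and the protection each provides.
REQUIRED_HEADERS = {
    "x-frame-options": "clickjacking protection",
    "x-content-type-options": "MIME-sniffing protection",
    "strict-transport-security": "enforce HTTPS",
    "content-security-policy": "restrict script/content sources",
}

def missing_security_headers(response_headers):
    """Return the required security headers absent from a response,
    given a mapping of header names to values (names case-insensitive)."""
    present = {name.lower() for name in response_headers}
    return sorted(h for h in REQUIRED_HEADERS if h not in present)
```

Because the check is a pure function of the response headers, it can be run on every build or scan and its results compared over time, which is what makes regressions easy to spot.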

Pivoting Around an Application versus a Problem

When an application undergoes a penetration test, threat modeling, or a tool-based scan, the team must first address critical issues before resolving lower-priority ones. Such an effort probes an application from many directions, attempting to extract all known security issues.  In this case, the focus is on the application, and its security issues are not known when the engagement starts.  Once problems are found, the application team owns fixing them.

On the other hand, if you choose to tackle one of the broader security problems for the organization, you test against a range of applications, mitigate it as quickly as possible for those applications, and make a goal to eventually eradicate the issue entirely from the organization.  Today, teams are often forced into reactively resolving such big problems as critical issues – often due to broader industry security vulnerabilities that affect multiple companies all at once.  Heartbleed and other similar named vulnerabilities are good examples of this.  The Adobe security team attempts to resolve as many known issues as possible proactively in an attempt to help lessen the organizational disruption when big industry-wide issues come along. This approach is our recipe for having a positive security impact across the organization.

It is worth noting that security teams will move in and out of the above stages and the stages will tend to repeat themselves over time.  For example, a new acquisition or a new platform might require deeper dives to understand.  Similarly, new technology trends will require investment in research.  Going after specific, broad security problems complements the deep dives and helps improve the security posture for the entire company.

We have found it very useful to have members of the security team take ownership of these “really big” security trends we see and help improve results across the company around it. These efforts are ongoing and we will share more insights in future blog posts.

Mohit Kalra
Sr. Manager, Secure Software Engineering

Pwn2Own 2017

The ZDI Pwn2Own contest celebrated its tenth anniversary this year. Working for Adobe over the past ten years, I have seen a lot of changes in the contest as both an observer and as a vendor triaging the reports.

People often focus on the high-level aspects of who got pwned, how many times, in how many seconds, etc. However, very little of the hyped information is relevant to the actual state of security for the targeted application. There are a lot of factors beyond relative difficulty that determine whether a team chooses to target a platform.  These can include weighing how to maximize points, the time to prepare, personal skill sets, difficulty in customizing for the target environment, and overall team strategy. ZDI has to publish extremely detailed specs for the targeted environment because even minor device driver differences can kill some exploit chains.

For instance, some of the products that were new additions this year were harder for the teams to add to their target list in time for the competition. On the other hand, it was unsurprising that Tencent and Qihoo 360 both competed against Flash Player, since they regularly contribute responsible disclosures to Adobe’s Flash Player and Reader teams. In fact, Tencent was listed in our January Flash Player bulletin, and we credited two Flash Player CVEs to the Qihoo 360 Vulcan Team in our March Flash Player security bulletin that went out the day before the contest. On the Reader side, Tencent team members were responsible for 19 CVEs in the January release. Therefore, both teams regularly contribute to Adobe’s product security regardless of Pwn2Own.

The vendors don’t make it easy for the competitors. Since the contest occurs after Patch Tuesday, there is always the chance that their bugs will collide with a vendor’s patch release. For instance, Chrome released 36 patches in March, VMware had two security updates in March, and Flash Player had seven patches in our March release. In addition, new mitigations sometimes coincide with the contest. Last year, Flash Player switched to Microsoft’s Low Fragmentation Heap and started zeroing memory on free in the release prior to the contest. As a result, one of the Pwn2Own teams from that year did not have time to adjust their attack. This year, Flash Player added more mitigations around heap and metadata isolation in the Patch Tuesday release.

Adobe doesn’t target the mitigations for the contest specifically. These occur as part of our Adobe Secure Product Lifecycle (SPLC) process that continually adds new mitigations. For instance, between Pwn2Own last year and Pwn2Own this year, we added zeroing memory on allocation, running Flash Player in a separate process on Edge, blocking Win32k system calls and font access in Chrome, adding memory protections based on the Microsoft MemProtect concept, and several similar mitigations that are too detailed to include in a simple list. Similarly, Mozilla has been working on instrumenting sandboxing for their browser over the last year. Therefore, this is a contest that does not get any easier as time goes on. If you want to try and sweep all the Pwn2Own categories, then you need a team to do it.

In fact, a lot has changed since 2008 when Flash Player was first hacked in a Pwn2Own contest. The list of mitigations that Flash Player currently has in place includes compiler flags such as GS, SEH, DEP, ASLR and CFG. We have added sandboxing techniques such as job controls, low integrity processes, and app containers for Firefox, Chrome, Safari, and Edge. There have been memory defenses added that include constant folding, constant blinding, random NOP insertion, heap isolation, object length checks, replacing custom heaps, and implementing MemProtect. In addition, the code has gone through rounds of Google fuzzing, Project Zero reviews, and countless contributions from the security community to help eliminate bugs in addition to our internal efforts.

While advanced teams such as Qihoo 360 and Tencent can keep up, that security hardening has hindered others who target Flash Player. For instance, ThreatPost recently wrote an article noting that Trustwave measured a 300% drop in exploit kit activity. While exploit kit activity can be influenced by several factors including law enforcement and market ROI, the CVEs noted in exploit kits are for older versions of Flash Player. As we add mitigations, they not only need new bugs but also new mitigation bypass strategies in order to keep their code working. In addition, ThreatPost noted a recent Qualys report measuring a 40% increase in Flash Player patch adoption which helps to limit the impact of those older bugs. Zero days also have been pushed towards targeting a more limited set of environments.

All of that said, nobody is resting on their laurels. Advanced persistent threats (APTs) will select technology targets based on their mission. If your software is deployed in the environment an APT is targeting, then they will do what is necessary to accomplish their mission. Similarly, in Pwn2Own, we see teams go to extraordinary lengths to accomplish their goals. For instance, Chaitin Security Research Lab chained together six different bugs in order to exploit Safari on MacOS.  Therefore, seeing these creative weaponization techniques inspires us to think of new ways that we can further improve our defenses against determined malicious attackers.

The ZDI team has done a tremendous job improving Pwn2Own each year and adding interesting new levels of gamification. It is amazing to watch how different teams rise to the occasion. Due to the increased difficulty added by vendors each year, even winning a single category becomes a bigger deal with each new year. Thanks to everyone who contributed to making Pwn2Own 2017 a great success.

Peleus Uhley
Principal Scientist

Adobe @ CanSecWest 2017

It was another great year for the Adobe security team at CanSecWest 2017 in beautiful Vancouver. CanSecWest 2017 was an eclectic mix of federal employees, independent researchers, and representatives of industry, brought together in one space to hear about the latest exploits and defense strategies. As a first-time attendee, I was impressed not just by the depth and breadth of the talks, but also by the incredibly inclusive community of security professionals that makes up the CanSec family. Adobe sponsors many conferences throughout the year, but the intimate feel of CanSecWest is unique.

As the industry shifts towards a more cloud-centric playbook, hot topics such as virtualization exploits became a highlight of the conference.  Several presenters addressed the growing concern of virtualization security, including the Marvel team, who gave an excellent presentation demonstrating the Hearthstone UAF and OOB vulnerabilities used to exploit RPC calls in VMware.  Additionally, the Qihoo 360 Gear team continued their theme from last year on qemu exploitation, demonstrating attacks that ranged from leveraging trusted input from vulnerable third-party drivers to attacking shared libraries within qemu itself.

IoT also continued to be a hot topic of conversation, with several talks describing both ends of the exploitation spectrum: the limited-scale but potentially catastrophic effect of attacking automobile safety systems, and the wide-scale DoS-style attacks of a multitude of insecure devices banding together to form zombie armies. Jun Li, from the Unicorn team of Qihoo 360, gave an informative talk on exploiting the CAN bus in modern automobiles to compromise critical safety systems. On the other end of the attack spectrum, Yuhao Song of GeekPwn Lab (KEEN) and Huiming Liu of Tencent’s Xuanwu Lab presented on how mobilizing millions of IoT devices can cause wide-scale devastation across core internet services.

There were many talks on how the strategy for vulnerability prevention is changing from attempting to correct individual pieces of vulnerable code to implementing class-excluding mitigations that make 0-day exploitation more time-consuming and costly. In a rare moment of agreement between attackers and defenders, both David Weston of Microsoft and Peng Qiu and Shefang Zhong of Qihoo 360 touted the improvements in the Windows 10 architecture, such as Control Flow Guard, Code Integrity Guard, and Arbitrary Code Guard, which prevent entire classes of exploits. As with previous class-busting preventions like ASLR, wide-scale adoption of these new technologies will be a challenge as we continue to chase a multitude of third-party binaries and try to ensure continuing compatibility with legacy software. As David Weston reiterated in his talk, even these improvements are not a panacea for security, and there is still much work to be done by the industry to ensure a workable blend of security and usability.

Finally, my personal favorite talk was given by Chuanda Ding from Tencent, who presented a detailed analysis of the state of shared libraries in systems. In a world of modular software, we are quickly becoming joined to each other in an intricate web of shared libraries that may not be fully understood by either the defenders or the consumers. Chuanda Ding cited Heartbleed as a benchmark example of what happens when a critical software bug is discovered in a widely used common library. As defenders and creators of software, this is often one of the most complex issues we deal with. As we move to a more interwoven software landscape and software offerings increase, it becomes harder to identify where shared third-party code exists, at what versions, and how to effectively patch it all when a vulnerability arises. I cannot overstate how much I loved his last chart on shared libraries; you should check it out, along with the rest of the great talks, on the CanSecWest SlideShare.  Also be sure to catch our next blog post on the results of the Pwn2Own contest.

Tracie Martin
Security Technical Program Manager