Posts in Category "Security"

Lessons Learned from Improving Transport Layer Security (TLS) at Adobe

Transport Layer Security (TLS) is the foundation of security on the internet. As our team evolved from a primarily consultative role to solving problems for the entire company, we chose TLS as one of the areas to improve. The goal of this blog post is to share the lessons we learned from this project.

TLS primer

TLS is a commonly used protocol to secure communications between two entities. If a client is talking to a server over TLS, it expects the following:

  1. Confidentiality – The data between the client and the server is encrypted and a network eavesdropper should not be able to decipher the communication.
  2. Integrity – The data between the client and the server should not be modifiable by a network attacker.
  3. Authentication – In the most common case, the identity of the server is authenticated by the client during the establishment of the connection via certificates. You can also have 2-way authentication, but that is not commonly used.

Lessons learned

Here are the main lessons we learned:

Have a clearly defined scope

Instead of trying to boil the ocean, we decided to focus on around 100 domains belonging to our Creative Cloud, Document Cloud and Experience Cloud solutions. This helped us focus on these services first versus being drowned by the thousands of other Adobe domains.

Have clearly defined goals

TLS is a complicated protocol and the definition of a “good” TLS configuration keeps changing over time. We wanted simple, easy-to-test, pass/fail criteria for all requirements on the endpoints in scope. We ended up choosing the following:

SSL Labs grade

SSL Labs does a great job of testing a TLS configuration and boiling it down to a grade. Grade ‘A’ was viewed as a pass and anything else was considered a fail. There might be some endpoints that had valid reasons to support certain ciphers that resulted in a lower grade. I will talk about that later in this post.
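This pass/fail check can be scripted against the public SSL Labs API. The sketch below is illustrative rather than our actual tooling; it assumes the documented v3 `analyze` endpoint and reduces a finished report to a grade-based pass/fail rule:

```python
import json
import time
import urllib.request

API = "https://api.ssllabs.com/api/v3/analyze"

def extract_grades(report):
    """Map each scanned endpoint (IP address) to its letter grade."""
    return {ep["ipAddress"]: ep.get("grade", "?") for ep in report.get("endpoints", [])}

def passes(report):
    """Our pass/fail rule: every endpoint must earn some form of 'A'."""
    grades = extract_grades(report).values()
    return bool(grades) and all(g.startswith("A") for g in grades)

def scan(host, poll_seconds=30):
    """Start an SSL Labs scan for `host` and poll until it completes."""
    url = f"{API}?host={host}&all=done"
    while True:
        with urllib.request.urlopen(url) as resp:
            report = json.load(resp)
        if report.get("status") in ("READY", "ERROR"):
            return report
        time.sleep(poll_seconds)
```

A daily scheduler can then call `scan()` for each in-scope domain and alert on any report where `passes()` returns False.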

Apple Transport Security

Apple has a minimum bar for TLS configuration that all endpoints must meet if iOS apps are to connect to them. We reviewed these criteria and deemed all the requirements sensible. We decided to make it a requirement for all endpoints, regardless of whether an endpoint was being accessed from an iOS app or not. We found a few corner cases where a configuration would get SSL Labs grade A and fail ATS (and vice versa), which we resolved on a case-by-case basis.

HTTP Strict Transport Security

HSTS (HTTP Strict Transport Security) is an HTTP response header that informs compliant clients to always use HTTPS to connect to a website. It helps solve the problem of the initial request being made over plain HTTP when a user types in the site without specifying the protocol, and helps prevent the hijacking of connections. When a compliant client receives this header, it only uses HTTPS to make connections to that website for the max-age value set by the header. The max-age count is reset every time the client receives the header. You can read the details about HSTS in RFC 6797.
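For example, a deployment might send `Strict-Transport-Security: max-age=31536000; includeSubDomains`. A small parser for the directives (a hedged sketch, not part of any Adobe tooling) shows how an automated test could validate the policy a site returns:

```python
def parse_hsts(header_value):
    """Parse a Strict-Transport-Security header value into its directives."""
    policy = {"max_age": None, "include_subdomains": False, "preload": False}
    for directive in header_value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
        elif directive == "preload":
            policy["preload"] = True
    return policy

# A one-year policy that also covers subdomains:
policy = parse_hsts("max-age=31536000; includeSubDomains")
# policy["max_age"] == 31536000 and policy["include_subdomains"] is True
```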

Complete automation of testing workflow

We wanted minimal ongoing human cost for these tests. This project allowed us to utilize our Security Automation Framework (SAF). Once the scans are set up and scheduled, they keep running daily and the results are passed on to us via email/Slack/etc. After the initial push to get all the endpoints to pass all the tests, it was very easy to catch any drift when we saw a failed test. Here is what these results look like in the SAF UI:

The Devil is in the Details

From a high level, it seems fairly straightforward to improve TLS configurations. However, it is a little more complicated when you get into the details. I wanted to talk a little bit about how we went about removing ciphers that were hampering the SSL Labs grade.

To understand the issues, you have to know a little bit about the TLS handshake. During the handshake, the client and the server decide on which cipher to use for the connection. The client sends the list of ciphers it supports in the client “hello” message of the handshake. If server-side preference is enabled, the cipher that is listed highest in the server preference and also supported by the client is picked. In our case, the cipher that was causing the grade degradation was listed fairly high on the list. As a result, when we looked at the ciphers used for connections, this cipher was used in a significant percentage of the traffic. We didn’t want to just remove it because of the potential risk of dropping support for some customers without any notification. Therefore, we initially moved it to the bottom of the supported cipher list. This reduced the percentage of traffic using that cipher to a very small value. We were then able to identify that a partner integration was responsible for all the traffic using this cipher. We reached out to that partner and asked them to make appropriate changes before we disabled the cipher. If you found this interesting, you might want to consider working for us on these projects.
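The demotion step itself is simple to express. The sketch below is illustrative only (the cipher names are OpenSSL-style examples; we have not named the actual cipher involved):

```python
def demote_cipher(preference, cipher):
    """Move `cipher` to the bottom of a server-side preference list without
    dropping it, so legacy clients keep working while we measure who
    still negotiates it."""
    if cipher not in preference:
        return list(preference)
    return [c for c in preference if c != cipher] + [cipher]

# Illustrative server preference list (OpenSSL-style names):
prefs = [
    "ECDHE-RSA-AES128-GCM-SHA256",
    "DES-CBC3-SHA",                 # the grade-hurting cipher, listed high
    "ECDHE-RSA-AES256-GCM-SHA384",
]
prefs = demote_cipher(prefs, "DES-CBC3-SHA")  # now listed last
```

With the cipher demoted rather than removed, traffic logs reveal which clients still negotiate it before support is finally dropped.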

Future work

In the future, we want to expand the scope of this project. We also want to expand the requirements for services that have achieved the requirements described in this post. One of the near-term goals is to get some of our domains added to the HSTS preload list. Another goal is to do more thorough monitoring of certificate transparency logs for better alerting for new certificates issued for Adobe domains. We have also been experimenting with HPKP. However, as with all new technologies, there are issues we must tackle to continue to ensure the best balance of security and experience for our customers.

Gurpartap Sandhu
Security Researcher

Getting Secrets Out of Source Code

Secrets are valuable information targeted by attackers to gain access to your systems and data. Secrets can be encryption keys, passwords, private keys, AWS secrets, OAuth tokens, JWT tokens, Slack tokens, API secrets, and so on. Unfortunately, secrets are sometimes hardcoded or stored along with source code by developers. Even though the source code may be kept securely in source control management (SCM) tools, that is not a suitable place to store secrets. For instance, it is not practical to restrict access to source code repositories, as engineering teams collaborate to write and review code. Any secrets in source code will also be copied to clones or forks of your repository, making them hard to track and remove. If a secret is ever committed to code stored in SCM tools, then you should consider it potentially compromised. There are other risks in storing secrets in source code: source code could be accidentally exposed on public networks due to a simple misconfiguration, or released software could be reverse engineered by attackers. For all of these reasons, you should make sure secrets are never stored in source code and SCM tools.

Security teams should take a holistic approach to tackle this problem. First and foremost, educate developers to not hardcode or store secrets in source code. Next, look for secrets while doing security code reviews. If you are using static analysis tools, then consider writing custom rules to automate this process. You could also have automated tests that look for secrets and will fail the code audit if they are found. Lastly, evaluate existing source code and enumerate secrets that are already hardcoded or stored along with source code and migrate them to password management vaults.

However, finding all of the secrets potentially hiding in source code can be challenging depending on the size of the organization and the number of code repositories. There are, fortunately, a few tools available to help find secrets in source code and SCM tools. Gitrob is an open source tool that aids organizations in identifying sensitive information lingering in GitHub repositories. Gitrob iterates over all of an organization’s repositories to find files that might contain sensitive information using a range of known patterns. Gitrob can be an efficient way to more quickly identify files which are known to contain sensitive information (e.g. private key files such as *.key).
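A filename-based first pass of the kind Gitrob performs can be sketched in a few lines; the pattern list below is an illustrative subset, not Gitrob’s actual rule set:

```python
import fnmatch

# Illustrative subset of filename patterns that tend to hold secrets:
SENSITIVE_PATTERNS = [
    "*.key",          # private key material
    "*.pem",
    "*.p12",
    "id_rsa",         # SSH private keys
    ".env",           # environment files often holding credentials
    "*credentials*",
]

def flag_sensitive(paths):
    """Return the paths whose basename matches a known sensitive pattern."""
    flagged = []
    for path in paths:
        name = path.rsplit("/", 1)[-1].lower()
        if any(fnmatch.fnmatch(name, pattern) for pattern in SENSITIVE_PATTERNS):
            flagged.append(path)
    return flagged
```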

Gitrob can, however, generate thousands of findings, which can lead to a number of false positives or false negatives. I recommend complementing Gitrob with other tools such as ‘truffleHog’ developed by Dylan Ayrey or ‘git-secrets’ from AWS Labs. These tools are able to do deeper searches of code and may help you cut down on some of the false reports.

Our team chose to complement Gitrob with custom Python scripts that looked into the file content. The scripts identified secrets based on regular expression patterns and entropy. The patterns were created based on the secrets found through Gitrob and our understanding of the structure of the secrets in our code. For example, to find an AWS access key ID and secret, I used regular expressions suggested by Amazon in one of their blog posts:

Pattern for access key IDs: (?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9])
Pattern for secret access keys: 
(?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=])
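A stripped-down version of such a script, combining those two patterns with a Shannon entropy filter (the entropy threshold here is illustrative, not the one we used):

```python
import math
import re

ACCESS_KEY_RE = re.compile(r"(?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9])")
SECRET_KEY_RE = re.compile(r"(?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=])")

def shannon_entropy(s):
    """Bits of entropy per character; random secrets score higher than words."""
    if not s:
        return 0.0
    return -sum((s.count(c) / len(s)) * math.log2(s.count(c) / len(s)) for c in set(s))

def find_aws_secrets(text, min_entropy=3.5):
    """Return candidate AWS credentials found in `text`. Access key IDs are
    structured enough to report on the regex alone; the 40-character secrets
    are additionally filtered by entropy to cut down false positives."""
    hits = list(ACCESS_KEY_RE.findall(text))
    hits += [m for m in SECRET_KEY_RE.findall(text) if shannon_entropy(m) >= min_entropy]
    return hits
```

Running a check like this over every file (and, ideally, every revision in history) approximates what truffleHog does at scale.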

In order to scale, you can share these tools and guidance with product teams and have them run the tools and triage the findings. You should also create clear guidelines for product teams on how to store secrets and move them securely to established secret management tools (e.g. password vaults) for your production environment. The basic principle here is to not store passwords or secrets in clear text and to make sure they are encrypted at rest and in transit until they reach the production environment. Secrets stored insecurely must be invalidated or rotated. Password management tools may also provide features such as audit logs, access control, and secret rotation, which can further help keep your production environment secure.

Given how valuable secrets are – and how much harm they can cause your organization if they were to unwittingly get out – security teams must proactively tackle this problem. No team wants to be the one responsible for leaking secrets through their source code.

Karthik Thotta Ganesh
Security Researcher

Developing an Amazon Web Services (AWS) Security Standard

Adobe has an established footprint on Amazon Web Services (AWS). It started in 2008 with Managed Services, and expanded greatly with the launch of Creative Cloud in 2012 and the migration of Business Catalyst to AWS in 2013. In that time, we found challenges in keeping up with AWS security review needs. In order to increase scalability, it was clear we needed a defined set of minimum AWS security requirements and tooling automation for auditing AWS environments against them. This might sound simple, but like many things, the devil was in the details. It took focused effort to ensure the result was a success. So how did we get here? Let’s start from the top.

First, the optimal output format needed to be decided upon. Adobe consists of multiple Business Units (BUs) and there are many teams within those BUs. We needed security requirements that could be broadly applied across the company as well as to acquisitions, so we needed requirements that could not only be applied to existing and new services across BUs but also be future-proof. Given these constraints, creating a formal standard for our teams to follow was the best choice.

Second, we needed to build a community of stakeholders in the project. For projects with broad impact such as this, it’s best to have equally broad stakeholder engagement. I made sure we had multiple key representatives from all the BUs (leads, architects, & engineers) and that various security roles were represented (application security, operational security, incident response, and our security operations center). This led to many strong opinions about direction, so it was important to be an active communication facilitator to ensure all teams’ needs were met.

Third, we reviewed other efforts in the security industry to see what information we could learn.  There are many AWS security-related whitepapers from various members of the security community.  There have been multiple security-focused AWS re:Invent presentations over the years.  There’s also AWS’s Trusted Advisor and Config Rules, plus open source AWS security assessment tools like Security Monkey from Netflix and Scout2 from NCC Group.  These are all good places to glean information from.

While all of these varied information sources are fine and dandy, is their security guidance relevant to Adobe? Does it address Adobe’s highest-risk areas in AWS? Uncritically following public guidance could result in a standard that exists for the sake of having a standard – not one that delivers benefits for Adobe.

A combination of security community input, internally and externally documented best practices, and a review of patterns and possible areas of improvement was used to define the initial scope of the standard. At the time the requirements were being drafted, AWS had over 30 services. It was unreasonable (and unnecessary) to create security guidance covering all of them. The initial scope for the draft minimum security requirements was AWS account management, Identity & Access Management (IAM), and Compute (Amazon Elastic Compute Cloud (EC2) and Virtual Private Cloud (VPC)).

We worked with AWS stakeholders within Adobe through monthly one-hour meetings to get agreement on the minimum-bar security requirements for AWS, which would apply to all of Adobe’s AWS accounts (dev, stage, prod, testing, QA, R&D, personal projects, etc.). We knew we’d want a higher security bar for environments that handle more sensitive classes of data or are customer facing. We held a two-day AWS security summit purely focused on defining these higher-bar security requirements, to ensure all stakeholders had their voices heard and to avoid any contention as the standard was finalized.

As a result of the summit, the teams were able to define higher security requirements that covered account management/IAM and compute (spanning architecture, fleet management, data handling, and even requirements beyond EC2/VPC including expansion into AWS-managed services such as S3, DynamoDB, SQS, etc.).

I then worked with Adobe’s Information Systems Security Policies & Standards team to publish an Adobe-wide standard. I transformed the technical requirements into an appropriate standard, which was then submitted to Adobe’s broader standards teams for review. After this review, it was ready for formal approval.

The necessary teams agreed to the standard and it was officially published internally in August 2016. I then created documentation to help teams use the AWS CLI to audit for and correct issues against the minimum-bar requirements. We also communicated the availability of the standard and began assisting teams toward compliance with it.
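To illustrate the kind of check that documentation covered, here is a sketch that evaluates an IAM account password policy against a hypothetical minimum bar. The thresholds are examples, not Adobe’s actual standard; the field names match the real `GetAccountPasswordPolicy` response:

```python
# Hypothetical minimum-bar rules (illustrative values, not Adobe's standard):
MIN_BAR = {
    "MinimumPasswordLength": 12,
    "RequireSymbols": True,
    "RequireNumbers": True,
    "RequireUppercaseCharacters": True,
    "RequireLowercaseCharacters": True,
}

def audit_password_policy(policy):
    """Return human-readable findings for an IAM password policy dict;
    an empty list means the policy meets the bar."""
    findings = []
    for key, required in MIN_BAR.items():
        actual = policy.get(key)
        if isinstance(required, bool):
            if actual is not True:
                findings.append(f"{key} should be enabled")
        elif actual is None or actual < required:
            findings.append(f"{key} should be at least {required} (found {actual})")
    return findings

# In practice the input would come from
# `aws iam get-account-password-policy` (the "PasswordPolicy" object).
```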

Overall, the standard has been well received by teams. They understand the value of the standard and its requirements in helping Adobe ensure better security across our AWS deployments. We have also developed timelines with various teams to help them achieve compliance with the standard. And, since our AWS Security Standard was released, we have seen notable scalability improvements and fewer reported security issues. This effort continues to help us deliver the security and reliability our customers expect from our products and services.

Cynthia Spiess
Web Security Researcher

Evolving an Application Security Team

A centralized application security team, similar to ours here at Adobe, can be the key to driving the security vision of the company. It helps implement the Secure Product Lifecycle (SPLC) and provides security expertise within the organization. To stay current and have impact, a security team also needs to be in a mode of continuous evolution and learning. At the inception of such a team, its impact is usually localized to the applications that the team reviews. As the team matures, it can start to influence the security posture of the whole organization. I lead the team of security researchers at Adobe. Our team’s charter is to provide security expertise to all application teams in the company. At Adobe, we have seen our team mature over time. As we look back, we would like to share the various phases of evolution that we have gone through along the way.

Stage 1:  Dig Deeper

In the first stage, the team is in the phase of forming and acquires the required security skills through hiring and organic learning. The individuals on the team bring varied security expertise, experience, and a desired skillset to the team. During this stage, the team looks for applicability of security knowledge to the applications that the engineering teams develop.  The security team starts this by doing deep dives into the application architecture and understanding why the products are being created in the first place. Here the team understands the organization’s application portfolio, observes common design patterns, and then starts to build the bigger picture on how applications come together as a solution.   Sharing learnings within the team is key to accelerating to the next stage.

By reviewing applications individually, the security team starts to understand the “elephants in the room” better and is also able to prioritize based on risk profile. A security team will primarily witness this stage during inception. But, it could witness it again if it undergoes major course changes, takes on new areas such as an acquisition, or must take on a new technical direction.

Stage 2: Research

In the second stage, the security team is already able to perform security reviews for most applications, or at least a thematically related group of them, with relative ease.  The security team may then start to observe gaps in their security knowhow due to things such as changes in broader industry or company engineering practices or adoption of new technology stacks.

During this phase, the security team starts to invest time in researching any necessary security tradeoffs and relative security strength of newer technologies being explored or adopted by application engineering teams. This research and its practical application within the organization has the benefit of helping to establish security experts on a wider range of topics within the team.

This stage helps security teams stay ahead of the curve, grow security subject matter expertise, update any training materials, and give more meaningful security advice to other engineering teams. For example, Adobe’s application security team was initially skilled in desktop security best practices. It evolved its skillset as the company launched products centered around cloud and mobile platforms. This newly acquired skillset required further specialization when the company started exploring more “bleeding edge” cloud technologies such as containerization for building micro-services.

Stage 3: Security Impact

As security teams become efficient in reviewing solutions and can keep up with technological advances, they can then start looking at homogeneous security improvements across their entire organization.  This has the potential of a much broader impact on the organization. Therefore, this requires the security team to be appropriately scalable to match possible increasing demands upon it.

If a security team member wants to make this broader impact, the first step is identification of a problem that can be applied to a larger set of applications.  In other words, you must ask members of a security team to pick and own a particularly interesting security problem and try to solve it across a larger section of the company.

Within Adobe, we were able to identify a handful of key projects that fit the above criteria for our security team to tackle. Some examples include:

  1. Defining the Amazon Web Services (AWS) minimum security bar for the entire company
  2. Implementing appropriate transport layer security (TLS) configurations on Adobe domains
  3. Validating that product teams did not store secrets or credentials in their source code
  4. Requiring use of browser-supported security headers (e.g. X-XSS-Protection, X-Frame-Options, etc.) to help protect web applications.

The scope of these solutions varied from just business critical applications to the entire company.

Some guidelines that we set within our own team to achieve this were as follows:

  1. The problem statement, like an elevator pitch, should be simple and easily understandable by all levels within the engineering teams – including senior management.
  2. The security researcher was free to define the scope and choose how the problem could be solved.
  3. The improvements made by engineering teams should be measurable in a repeatable way. This would allow for easier identification of any security regressions.
  4. Existing channels for reporting security backlog items to engineering teams must be utilized versus spinning up new processes.
  5. Though automation is generally viewed as a key to scalability for these types of solutions, the team also had flexibility to adopt any method deemed most appropriate. For example, a researcher could front-load code analysis and only provide precise security flaws uncovered to the application engineering team.  Similarly, a researcher could establish required “minimum bars” for application engineering teams helping to set new company security standards. The onus is then placed on the application engineering teams to achieve compliance against the new or updated standards.

For projects that required running tests repeatedly, we leveraged our Security Automation Framework. This helped automate tasks such as validation. For others, clear standards were established for application engineering teams. Once a defined confidence level about compliance with those standards is reached within the security team, automated validation can be introduced.
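As an example of how small such a repeatable check can be, here is a sketch of a security-header test like the browser-header item mentioned earlier (the required set is illustrative, not a company standard):

```python
# Illustrative required set; these are standard HTTP response header names:
REQUIRED_HEADERS = {
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "X-XSS-Protection",
}

def missing_security_headers(headers):
    """Given a dict of response headers, return the required header names
    that are absent. Header-name comparison is case-insensitive, per HTTP."""
    present = {name.lower() for name in headers}
    return {name for name in REQUIRED_HEADERS if name.lower() not in present}
```

A scheduled job can fetch each in-scope endpoint and report any non-empty result as a regression.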

Pivoting Around an Application versus a Problem

When applications undergo a penetration test, threat modeling, or a tool-based scan, teams must first address critical issues before resolving lower-priority issues. Such an effort probes an application from many directions, attempting to extract all known security issues. In this case, the focus is on the application, and its security issues are not known when the engagement starts. Once problems are found, the application team owns fixing them.

On the other hand, if you choose to tackle one of the broader security problems for the organization, you test against a range of applications, mitigate it as quickly as possible for those applications, and make a goal to eventually eradicate the issue entirely from the organization.  Today, teams are often forced into reactively resolving such big problems as critical issues – often due to broader industry security vulnerabilities that affect multiple companies all at once.  Heartbleed and other similar named vulnerabilities are good examples of this.  The Adobe security team attempts to resolve as many known issues as possible proactively in an attempt to help lessen the organizational disruption when big industry-wide issues come along. This approach is our recipe for having a positive security impact across the organization.

It is worth noting that security teams will move in and out of the above stages and the stages will tend to repeat themselves over time.  For example, a new acquisition or a new platform might require deeper dives to understand.  Similarly, new technology trends will require investment in research.  Going after specific, broad security problems complements the deep dives and helps improve the security posture for the entire company.

We have found it very useful to have members of the security team take ownership of these “really big” security trends we see and help improve results across the company around them. These efforts are ongoing and we will share more insights in future blog posts.

Mohit Kalra
Sr. Manager, Secure Software Engineering

Pwn2Own 2017

The ZDI Pwn2Own contest celebrated its tenth anniversary this year. Working for Adobe over the past ten years, I have seen a lot of changes in the contest as both an observer and as a vendor triaging the reports.

People often focus on the high-level aspects of who got pwned, how many times, in how many seconds, etc. However, very little of the hyped information is relevant to the actual state of security for the targeted application. There are a lot of factors beyond relative difficulty that determine whether a team chooses to target a platform. These can include weighing how to maximize points, the time to prepare, personal skill sets, difficulty in customizing for the target environment, and overall team strategy. ZDI has to publish extremely detailed specs for the targeted environment because even minor device driver differences can kill some exploit chains.

For instance, some of the products that were new additions this year were harder for the teams to add to their target list in time for the competition. On the other hand, it was unsurprising that Tencent and Qihoo 360 both competed against Flash Player, since they regularly contribute responsible disclosures to Adobe’s Flash Player and Reader teams. In fact, Tencent was listed in our January Flash Player bulletin, and we credited two Flash Player CVEs to the Qihoo 360 Vulcan Team in our March Flash Player security bulletin that went out the day before the contest. On the Reader side, Tencent team members were responsible for 19 CVEs in the January release. Therefore, both teams regularly contribute to Adobe’s product security regardless of Pwn2Own.

The vendors don’t make it easy for the competitors. Since the contest occurs after Patch Tuesday, there is always the chance that competitors’ bugs will collide with a vendor’s patch release. For instance, Chrome released 36 patches in March, VMware had two security updates in March, and Flash Player shipped seven patches in our March release. In addition, new mitigations sometimes coincide with the contest. Last year, Flash Player switched to Microsoft’s Low Fragmentation Heap and started zeroing memory on free in the release prior to the contest. As a result, one of the Pwn2Own teams from that year did not have time to adjust their attack. This year, Flash Player added more mitigations around heap and metadata isolation in the Patch Tuesday release.

Adobe doesn’t target the mitigations for the contest specifically. These occur as part of our Adobe Secure Product Lifecycle (SPLC) process that continually adds new mitigations. For instance, between Pwn2Own last year and Pwn2Own this year, we added zeroing memory on allocation, running Flash Player in a separate process on Edge, blocking Win32k system calls and font access in Chrome, adding memory protections based on the Microsoft MemProtect concept, and several similar mitigations that are too detailed to include in a simple list. Similarly, Mozilla has been working on instrumenting sandboxing for their browser over the last year. Therefore, this is a contest that does not get any easier as time goes on. If you want to try and sweep all the Pwn2Own categories, then you need a team to do it.

In fact, a lot has changed since 2008 when Flash Player was first hacked in a Pwn2Own contest. The list of mitigations that Flash Player currently has in place includes compiler flags such as GS, SEH, DEP, ASLR and CFG. We have added sandboxing techniques such as job controls, low integrity processes, and app containers for Firefox, Chrome, Safari, and Edge. There have been memory defenses added that include constant folding, constant blinding, random NOP insertion, heap isolation, object length checks, replacing custom heaps, and implementing MemProtect. In addition, the code has gone through rounds of Google fuzzing, Project Zero reviews, and countless contributions from the security community to help eliminate bugs in addition to our internal efforts.

While advanced teams such as Qihoo 360 and Tencent can keep up, that security hardening has hindered others who target Flash Player. For instance, ThreatPost recently wrote an article noting that Trustwave measured a 300% drop in exploit kit activity. While exploit kit activity can be influenced by several factors including law enforcement and market ROI, the CVEs noted in exploit kits are for older versions of Flash Player. As we add mitigations, they not only need new bugs but also new mitigation bypass strategies in order to keep their code working. In addition, ThreatPost noted a recent Qualys report measuring a 40% increase in Flash Player patch adoption which helps to limit the impact of those older bugs. Zero days also have been pushed towards targeting a more limited set of environments.

All of that said, nobody is resting on their laurels. Advanced persistent threats (APTs) will select technology targets based on their mission. If your software is deployed in the environment an APT is targeting, then they will do what is necessary to accomplish their mission. Similarly, in Pwn2Own, we see teams go to extraordinary lengths to accomplish their goals. For instance, Chaitin Security Research Lab chained together six different bugs in order to exploit Safari on MacOS.  Therefore, seeing these creative weaponization techniques inspires us to think of new ways that we can further improve our defenses against determined malicious attackers.

The ZDI team has done a tremendous job improving Pwn2Own each year and adding interesting new levels of gamification. It is amazing to watch how different teams rise to the occasion. Due to the increased difficulty added by vendors each year, even winning a single category becomes a bigger deal with each new year. Thanks to everyone who contributed to making Pwn2Own 2017 a great success.

Peleus Uhley
Principal Scientist

Adobe @ CanSecWest 2017

It was another great year for the Adobe security team at CanSecWest 2017 in beautiful Vancouver. CanSecWest 2017 was an eclectic mix of federal employees, independent researchers, and representatives of industry, brought together in one space to hear about the latest exploits and defense strategies. As a first-time attendee, I was impressed not just by the depth and breadth of the talks, but also by the incredibly inclusive community of security professionals that makes up the CanSecWest family. Adobe sponsors many conferences throughout the year, but the intimate feel of CanSecWest is unique.

As the industry shifts towards a more cloud-centric playbook, hot topics such as virtualization exploits became a highlight of the conference. Several presenters addressed the growing concern of virtualization security, including the Marvel team, who gave an excellent presentation demonstrating the use of the Hearthstone UAF and OOB vulnerabilities to exploit RPC calls in VMware. Additionally, the Qihoo 360 Gear Team continued their theme from last year on qemu exploitation, demonstrating attacks that ranged from leveraging trusted input from vulnerable third-party drivers to attacking shared libraries within qemu itself.

IoT also continued to be a hot topic of conversation, with several talks describing both ends of the exploitation spectrum: the limited-scale but potentially catastrophic effect of attacking automobile safety systems, and the wide-scale DoS-style attacks of a multitude of insecure devices banding together to form zombie armies. Jun Li, from the Unicorn Team of Qihoo 360, gave an informative talk on exploiting the CAN bus in modern automobiles to compromise critical safety systems. On the other end of the attack spectrum, Yuhao Song of GeekPwn Lab & KEEN, together with Huiming Liu of GeekPwn Lab & Tencent Xuanwu Lab, presented on how mobilizing millions of IoT devices can cause wide-scale devastation across core internet services.

There were many talks on how the strategy for vulnerability prevention is changing from attempting to correct individual pieces of vulnerable code to implementing class-excluding mitigations that make 0-day exploitation more time-consuming and costly. In a rare moment of agreement between attackers and defenders, both David Weston from Microsoft and Peng Qiu and Shefang Zhong of Qihoo 360 touted the improvements in the Windows 10 architecture, such as Control Flow Guard, Code Integrity Guard, and Arbitrary Code Guard, that prevent entire classes of exploits. As with previous class-busting preventions like ASLR, wide-scale adoption of these new technologies will be a challenge, as we continue to chase a multitude of third-party binaries while trying to ensure compatibility with legacy software. As David Weston reiterated in his talk, even these improvements are not a panacea for security, and there is still much work to be done across the industry to ensure a workable blend of security and usability.

Finally, my personal favorite presentation was given by Chuanda Ding from Tencent, who offered a detailed analysis of the state of shared libraries in systems. In a world of modular software, we are quickly becoming joined to each other in an intricate web of shared libraries that may not be fully understood either by defenders or by consumers. Chuanda Ding cited Heartbleed as a benchmark example of what happens when a critical software bug is discovered in a widely used common library. As defenders and creators of software, this is often one of the most complex issues we deal with. As we move to a more interwoven software landscape and software offerings increase, it becomes harder to identify where shared third-party code exists, at what versions, and how to effectively patch it all when a vulnerability arises. I cannot overstate how much I loved his last chart on shared libraries; you should check it, and the rest of the great talks, out on the CanSecWest SlideShare. Also be sure to catch our next blog post on the results of the Pwn2Own contest.

Tracie Martin
Security Technical Program Manager

Critical Vulnerability Uncovered in JSON Encryption

Executive Summary

If you are using go-jose, node-jose, jose2go, Nimbus JOSE+JWT, or jose4j with ECDH-ES, please update to the latest version. The issue is an Invalid Curve Attack against RFC 7516, aka JSON Web Encryption (JWE). It can allow an attacker to recover the secret key of a party using JWE with Key Agreement with Elliptic Curve Diffie-Hellman Ephemeral Static (ECDH-ES): the sender of malicious tokens can extract the receiver's private key.

Premise

In this blog post I assume that you are already knowledgeable about elliptic curves and their use in cryptography. If not, Nick Sullivan's A (Relatively Easy To Understand) Primer on Elliptic Curve Cryptography and Andrea Corbellini's series Elliptic Curve Cryptography: finite fields and discrete logarithms are great starting points. If you then want to climb further up the elliptic learning curve, including the related attacks, you might also want to visit https://safecurves.cr.yp.to/. DJB and Tanja's talk at 31c3 also comes with an explanation of this very attack (see minute 43), and Juraj Somorovsky et al.'s research can come in handy as well.

Note that this research was started and inspired by Quan Nguyen from Google and then refined by Antonio Sanso from Adobe.

Introduction

JSON Web Token (JWT) is a JSON-based open standard (RFC 7519) defined in the OAuth specification family used for creating access tokens. The JavaScript Object Signing and Encryption (JOSE) IETF working group was then formed to formalize a set of signing and encryption methods for JWT, which led to the release of RFC 7515, aka JSON Web Signature (JWS), and RFC 7516, aka JSON Web Encryption (JWE). In this post we are going to focus on JWE.

A typical JWE is a dot-separated string that contains five parts:

  • The JWE Protected Header
  • The JWE Encrypted Key
  • The JWE Initialization Vector
  • The JWE Ciphertext
  • The JWE Authentication Tag

An example of a JWE taken from the specification would look like:

eyJhbGciOiJSU0EtT0FFUCIsImVuYyI6IkEyNTZHQ00ifQ.
OKOawDo13gRp2ojaHV7LFpZcgV7T6DVZKTyKOMTYUmKoTCVJRgckCL9kiMT03JGe
ipsEdY3mx_etLbbWSrFr05kLzcSr4qKAq7YN7e9jwQRb23nfa6c9d-StnImGyFDb
Sv04uVuxIp5Zms1gNxKKK2Da14B8S4rzVRltdYwam_lDp5XnZAYpQdb76FdIKLaV
mqgfwX7XWRxv2322i-vDxRfqNzo_tETKzpVLzfiwQyeyPGLBIO56YJ7eObdv0je8
1860ppamavo35UgoRdbYaBcoh9QcfylQr66oc6vFWXRcZ_ZT2LawVCWTIy3brGPi
6UklfCpIMfIjf7iGdXKHzg.
48V1_ALb6US04U3b.
5eym8TW_c8SuK0ltJ3rpYIzOeDQz7TALvtu6UG9oMo4vpzs9tX_EFShS8iB7j6ji
SdiwkIr3ajwQzaBtQD_A.
XFBoMYUZodetZdvTiFvSkQ

This JWE employs RSA-OAEP for key encryption and A256GCM for content encryption, as declared in its protected header.
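To make that five-part structure concrete, here is a minimal Python sketch of splitting and decoding a compact JWE. The protected header below is the real one from the spec example; the other four segments are toy placeholders just to illustrate the format:

```python
import base64
import json

def b64url_decode(segment: str) -> bytes:
    # JOSE uses unpadded base64url; restore padding before decoding
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def split_jwe(token: str):
    # A compact JWE is exactly five base64url segments joined by dots
    header_b64, enc_key, iv, ciphertext, tag = token.split(".")
    return (json.loads(b64url_decode(header_b64)),
            b64url_decode(enc_key),
            b64url_decode(iv),
            b64url_decode(ciphertext),
            b64url_decode(tag))

# Real protected header from the spec example; toy placeholder segments after it
jwe = "eyJhbGciOiJSU0EtT0FFUCIsImVuYyI6IkEyNTZHQ00ifQ.QUFB.QkJC.Q0ND.RERE"
header, *_ = split_jwe(jwe)
print(header)  # {'alg': 'RSA-OAEP', 'enc': 'A256GCM'}
```

Note that the header is the only part an implementation parses before any cryptographic verification takes place, which is exactly why attacker-controlled header contents matter so much later in this post.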

This is only one of the many possibilities JWE provides. A separate specification, RFC 7518 aka JSON Web Algorithms (JWA), lists the available algorithms that can be used. The one we are discussing today is Key Agreement with Elliptic Curve Diffie-Hellman Ephemeral Static (ECDH-ES). This algorithm allows deriving an ephemeral shared secret (this blog post from Neil Madden shows a concrete example of how to do ephemeral key agreement).

In this case the JWE Protected Header also lists the elliptic curve used for the key agreement:

Once the shared secret is calculated the key agreement result can be used in one of two ways:

  1. Directly as the Content Encryption Key (CEK) for the “enc” algorithm, in the Direct Key Agreement mode, or
  2. As a symmetric key used to wrap the CEK with the A128KW, A192KW, or A256KW algorithms, in the Key Agreement with Key Wrapping mode.

This is out of scope for this post, but as for the other algorithms, the JOSE Cookbook contains examples of using ECDH-ES in combination with AES-GCM or AES-CBC plus HMAC.

Observation

As highlighted by Quan during his talk at RWC 2017:

Decryption/Signature verification input is always under attacker’s control

As we will see throughout this post, this simple observation will be enough to recover the receiver's private key. But first we need to dig a bit into elliptic curve bits and pieces.

Elliptic Curves

An elliptic curve is the set of solutions defined by an equation of the form:

y^2 = x^3 + ax + b

Equations of this type are called Weierstrass equations. An elliptic curve would look like:

y^2 = x^3 + 4x + 20

 

In order to apply the theory of elliptic curves to cryptography we need to look at elliptic curves whose points have coordinates in a finite field F_q. The same curve will then look like the following over the finite field of size 191:

y^2 = x^3 + 4x + 20 over Finite Field of size 191

For JWE the elliptic curves in scope are the ones defined in Suite B and (only recently) DJB's curve.

Among those, the curve that has so far seen the widest usage is the famous P-256 (defined in Suite B).

Time to open Sage. Let’s define P-256:
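For readers following along without Sage at hand, the same definition can be sketched in plain Python; the hex values below are the standard published P-256 domain parameters:

```python
# NIST P-256 domain parameters (FIPS 186-4)
p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
a = p - 3   # the a coefficient is -3 mod p
b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b
# n is the (prime) order of the group of points
n = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551

print(n.bit_length())  # 256 -- a ~2^256 search space for the private key
```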

The order of the curve is a really huge number, hence there isn't much an attacker can do with this curve (if the software implements ECDH correctly) to guess the private key used in the agreement. This brings us to the next section.

The Attack

The attack described here is really the classical Invalid Curve Attack. It is simple and powerful, and takes advantage of the fact that the Weierstrass formulas for scalar multiplication do not involve the coefficient b of the curve equation:

y^2 = x^3 + ax + b

The original’s P-256 equation is:

As we mentioned above, the order of this curve is really big, so we now need to find a more convenient curve for the attacker. Easy peasy with Sage:

As you can see from the image above, we just found a nicer curve (from the attacker's point of view) whose order has many small factors. We then found a point P on the curve that has a really small order (2447 in this example).
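The b-independence at the heart of the attack can be illustrated in miniature with plain Python, reusing the toy p = 191, a = 4 field from the picture earlier in this post (a sketch for intuition only, not an attack curve against P-256):

```python
p, a = 191, 4  # same field and a-coefficient as the toy curve pictured above

def ec_add(P, Q):
    # Affine point addition/doubling on y^2 = x^3 + a*x + b over F_p.
    # Note that b never appears below: the formulas work identically on
    # every curve sharing p and a, which is exactly what the attack abuses.
    if P is None: return Q      # None plays the role of the point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x3 - x1) - y1) % p)

def scalar_mult(k, P):
    # Standard double-and-add; again, b is never consulted
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

# Q = (2, 10) is NOT on the intended curve y^2 = x^3 + 4x + 20: it lies on
# y^2 = x^3 + 4x + 84 instead. scalar_mult processes it without complaint.
Q = (2, 10)
order, R = 1, Q
while R is not None:        # walk Q, 2Q, 3Q, ... until the point at infinity
    R = ec_add(R, Q)
    order += 1
print(order)  # the order of the subgroup the attacker gets to work in
```

An attacker uses Sage to hunt for (b, Q) pairs where this order is tiny; the victim's scalar-multiplication code, oblivious to b, then leaks the private key modulo that small order.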

Now we can build malicious JWEs (see the Demo Time section below) and extract the value of the secret key modulo 2447 with complexity O(2447).

A crucial part of the attack is to have the victim repeat his own contribution to the resulting shared key. In other words, the victim's private key should be the same for each key agreement. Conveniently enough, this is how Key Agreement with Elliptic Curve Diffie-Hellman Ephemeral Static (ECDH-ES) works: ES stands for Ephemeral-Static, where Static is the contribution of the victim.

At this stage we can repeat these operations (find a new curve, craft malicious JWEs, recover the secret key modulo the small order) many, many times, collecting information about the secret key modulo many, many small orders.

And finally Chinese Remainder Theorem for the win.
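The final recombination step can be sketched in a few lines of Python; the moduli below are made-up small primes standing in for the orders recovered in each round:

```python
from functools import reduce

def crt(residues, moduli):
    # Recombine d = r_i (mod m_i) for pairwise-coprime moduli m_i
    M = reduce(lambda x, y: x * y, moduli)
    d = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        d += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m) is Mi's inverse mod m
    return d % M

# Toy values: pretend three attack rounds recovered the secret
# modulo the small prime orders 2447, 2731 and 3181
secret = 1234567890
moduli = [2447, 2731, 3181]
residues = [secret % m for m in moduli]

recovered = crt(residues, moduli)
print(recovered)  # 1234567890 -- the product of the moduli exceeds the secret
```

Once the product of the collected moduli exceeds the group order, the Chinese Remainder Theorem pins down the private key uniquely.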

At the end of the day, the issue here is that the specification, and consequently all the libraries I checked, missed validating that the received public key (contained in the JWE Protected Header) is on the curve. You can see in the Vulnerable Libraries section below how the various libraries fixed the issue.
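The missing validation itself is essentially a one-line membership test. A sketch in Python of the check a library should perform on a received ephemeral public key before doing any scalar multiplication:

```python
# NIST P-256 parameters (FIPS 186-4)
p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
a = p - 3
b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

def is_on_curve(x: int, y: int) -> bool:
    # Reject any received public key whose coordinates do not satisfy
    # y^2 = x^3 + ax + b (mod p); note this check DOES involve b
    return 0 <= x < p and 0 <= y < p and (y * y - (x**3 + a * x + b)) % p == 0

# The P-256 base point passes the check...
Gx = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296
Gy = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5
print(is_on_curve(Gx, Gy))      # True
# ...while a tampered point (which lies on a curve with a different b) fails
print(is_on_curve(Gx, Gy + 1))  # False
```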

Again you can find details of the attack in the original paper.

Demo Time

You can view the demo at an external site.

 

Explanation

In order to show how the attack would work in practice, I set up a live demo on Heroku. At https://obscure-everglades-31759.herokuapp.com/ a Node.js server app is up and running that will act as the victim in this case. The assumption is this: in order to communicate with this web application you need to encrypt a token using Key Agreement with Elliptic Curve Diffie-Hellman Ephemeral Static (ECDH-ES). The static public key from the server needed for the key agreement is at https://obscure-everglades-31759.herokuapp.com/ecdh-es-public.json:

An application that wants to POST data to this server first needs to do a key agreement using the server's public key above, and then encrypt the payload with the derived shared key using the JWE format. Once the JWE is in place it can be posted to https://obscure-everglades-31759.herokuapp.com/secret. The web app will respond with status 200 if all went well (namely, if it can decrypt the payload content) and with status 400 if for some reason the received token is missing or invalid. This acts as an oracle for any potential attacker, in the way shown in The Attack section above.

I set up an attacker application at https://afternoon-fortress-81941.herokuapp.com/.

You can visit it, click the 'Recover Key' button, and observe how the attacker is able to recover the secret key from the server piece by piece. Note that this is only a demo application, so the recovered secret key is really small in order to reduce the waiting time. In practice the secret key would be significantly larger (hence it would take a bit longer to recover).

In case you experience problems with the live demo, or simply want to see the code under the hood, you can find the demo code on GitHub:

Vulnerable Libraries

Here is a list of libraries that were vulnerable to this particular attack:

Some of the libraries were implemented in a programming language that already protects against this attack by checking that the result of the scalar multiplication is on the curve:

*Latest version of Node.js appears to be immune to this attack. It was still possible to be vulnerable when using browsers without web crypto support.

**Affected was the default Java SUN JCA provider that comes with Java prior to version 1.8.0_51. Later Java versions and the BouncyCastle JCA provider do not seem to be affected.

Improving the JWE standard

I reported this issue to the JOSE working group via a mail to the appropriate mailing list. We all seem to agree that an erratum listing the problem would at least be welcome. This post is a direct attempt to raise awareness about this specific problem.

Acknowledgement

The author would like to thank the maintainers of go-jose, node-jose, jose2go, Nimbus JOSE+JWT, and jose4j for their responsiveness in fixing the issue, Francesco Mari for helping out with the development of the demo application, and Tommaso Teofili and Simone Tripodi for troubleshooting. Finally, as mentioned above, I would like to thank Quan Nguyen from Google; indeed, this research would not have been possible without his initial work.

That’s all folks. For more crypto goodies, follow me on Twitter.

Antonio Sanso
Sr. Software Engineer – Digital Marketing

The Adobe Team Reigns Again at the Winja CTF Competition

Nishtha Behal from our corporate security team in Noida, India, was the winner of the recent Winja Capture the Flag (CTF) competition hosted at the NullCon Goa security conference. The Winja CTF this year comprised a set of simulated hacking challenges relating to web security. The winning prize was a scholarship from The SANS Institute for security training courses. The competition saw great participation, with almost 60 women coming together to challenge their knowledge of the security domain. The contest is organized as a series of rounds of increasing difficulty, beginning with teams of two or three women solving the challenges. The first round consisted of multiple-choice questions aimed at testing the participants' knowledge of different areas of web application security. The second round consisted of six problems, each a mini web application where the participant's task was to identify the single most vulnerable snippet of code and name the vulnerability that could be exploited. The final challenges pitted the members of the winning teams against each other to determine the individual winner. We would like to congratulate Nishtha on this well-deserved win! This marks the second year in a row that some of our participating Adobe team members have won this competition.

Adobe is a proud, ongoing supporter of events and activities encouraging women to pursue careers in cybersecurity. We are also sponsoring the upcoming Women in Cybersecurity conference March 31st to April 1st in Tucson, Arizona. Members of our security team will be at the conference. If you are attending, please take the time to meet and network with them. We also work with and sponsor many other important programs to encourage more women to enter the technology field, including Girls Who Code and the Executive Women's Forum.

David Lenoe
Director, Product Security

Saying Goodbye to a Leader

We learned last Thursday of the passing of Howard Schmidt. I knew this day was coming due to his long illness, but the sense of loss upon hearing the news isn’t any less. While others have written more detailed accounts of his accomplishments, I would like to add some personal recollections.

I first met Howard at the RSA Conference during my first role at Adobe as director for Product Security. After that first hallway chat I had many more opportunities to spend time with Howard and learn from watching him work, particularly during our time together on the SAFECode board.

I always marveled at his energy, confidence, and consistency in front of a crowd — not only his ability to knock out one good speech, but the fact that I never saw him turn in a bad one. Despite his enthusiasm, Howard had a clear eye on the challenges, but never gave in to security nihilism.

Howard loved to tell stories, and he had an inexhaustible supply of them – from his time working as an undercover cop in Arizona when he once posed as a biker — to his time working at the White House (driving his Harley to work there, naturally), and beyond. But he also loved to hear stories from others. As a result, he had a massive network of friends he could tap into in order to get things done. As such, he was a real facilitator and leader, and always eager to help.

I will remember Howard as an incredibly accomplished man who could get along with just about anyone, and I will miss having him in my life. The outpouring of warm memories the last couple of days shows that, not surprisingly, I am far from alone.

Brad Arkin
Chief Security Officer

Building Better Security Takes a Village

Hacker Village was introduced at Adobe Tech Summit in 2015. It was designed to provide hands-on, interactive learning about common security attacks that could target Adobe systems and services, and to illustrate why certain security vulnerabilities create risk for Adobe. More traditional training techniques can sometimes fall short when trying to communicate the impact that a significant vulnerability can have on an organization. Hacker Village provides real-world examples for our teams by showing how hackers might successfully attack a system, using the same techniques those attackers often use. In 2015 it consisted of six booths, each focused on a specific type of common industry attack (cross-site scripting, SQL injection, etc.) or another security-related topic. The concept was to encourage our engineers to challenge themselves by "thinking like a hacker" and attempting various known exploits in web applications, cryptography, and more.

The first iteration of Hacker Village was a success. Most of the participants completed multiple labs, with many visiting all six booths. The feedback was positive and the practical knowledge gained was helpful for all of our engineering teams across the country.

2017 brought the return of Hacker Village to Tech Summit. We wanted to build on the success of the first Hacker Village by bringing back revised versions of the popular booths. 2017 saw new iterations of systems hacking using Metasploit, password cracking with John the Ripper, and more advanced web application vulnerability exploitation. This year we also introduced some exciting new booths. Visitors could attempt to bypass firewalls to gain network access, or try to spy on network traffic with a man-in-the-middle attack. The hardware hacking booth challenged participants to take over a computer via USB port exploits like a USB "Rubber Ducky." Elsewhere, participants could deploy their own honeypot with a Raspberry Pi at the honeypot booth or attempt hacks of connected smart devices in the Internet of Things booth.

Since we did not have enough room in the first iteration for everyone from our engineering teams who was interested, we made sure to increase the available space to give a broader group of engineers access to the Village. We increased the number of booths from six to eight and more than doubled the number of lab stations. With the increased number of stations, participation nearly doubled as well. The feedback was once again very positive, with the only complaint being that everyone wanted more time to try out new ideas.

We are currently considering a “travelling” Hacker Village as well – a more portable version that can be set up at additional Adobe office locations and at times in between our regular Tech Summits. The Hacker Village is just one of the many programs we have at Adobe for building a better security culture.

Taylor Lobb
Manager, Security and Privacy