Posts in Category "Security"

ReproNow: Triage Assistant

Bug bounty programs (i.e., crowdsourced security) can bring a lot of benefits. Organizations are able to leverage talent from all over the world, while bug hunters can get compensated for submitting bugs and improve their personal reputation within the security community. While all of this is amazing, as security engineers we still have to endure the most painful part of this process – triaging.

The Problem

The amount of information required to understand and reproduce a bug in today’s complex ecosystems is one of the biggest challenges faced by security engineers. Providing so many pieces of information – user flows, videos, and requests – is a challenge for bug bounty hunters. This is what we wanted to solve with ReproNow.

What is ReproNow?

ReproNow is an open source browser extension that helps bug bounty hunters and engineers triage quicker and better. The tool captures your screen and the underlying network data and presents them as a video. It also provides a “Previewer” – an interactive UI to view the screen capture and the corresponding network requests in context, including the headers and body of any selected request and the corresponding response headers. Everything happens on the client side: as paranoid security engineers, the last thing we want to do is trust another server to store the vulnerabilities of various organizations.

This tool uses 3 main components:

  1. Screen Capture – achieved using the chrome.screenCapture API (which uses the getUserMedia API)
  2. Network Capture – achieved using the webRequest API
  3. Export Screen + Network as MKV – achieved by storing the network information in the attachment section of the MKV using ffmpeg.js/ts-ebml on the client

For more technical details on how this works, refer to this blogpost.
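As a rough illustration of how the captured pieces fit together, here is a hypothetical sketch (the event shape and function names are our assumptions, not ReproNow’s actual data model) of aligning captured network events with the recording timeline so each request can be shown in context:

```javascript
// Hypothetical sketch: align captured network events with the recording
// timeline so a previewer can show each request at the moment it fired.
function alignEvents(recordingStartMs, events) {
  return events
    // drop events that happened before the recording began
    .filter((e) => e.timeStampMs >= recordingStartMs)
    .map((e) => ({
      url: e.url,
      method: e.method,
      // offset (in seconds) into the video where this request fired
      videoOffsetSec: (e.timeStampMs - recordingStartMs) / 1000,
    }))
    .sort((a, b) => a.videoOffsetSec - b.videoOffsetSec);
}
```

A previewer could then seek the video to `videoOffsetSec` when a request is selected.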

ReproNow Previewer


Tool Features:

  • Capture screen and network data
  • Option to copy any request as cURL or raw
  • Videos stored locally
  • Previewer with a clean UI that puts all information on one screen
  • Single file containing both screen and network data
  • Extension history shows previously recorded videos
  • Host the Previewer locally or just go to https://www.repro-now.com/previewer/
  • Lots of options to customize what you capture
  • Everything on the client side
  • Open source – customize and change as you like
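The “copy as cURL” idea can be sketched with a small helper; the request object shape here is an assumption for illustration, not ReproNow’s actual implementation:

```javascript
// Hypothetical sketch of "copy as cURL": turn a captured request
// object into an equivalent curl command string.
function toCurl({ method, url, headers = {}, body }) {
  const parts = [`curl -X ${method}`];
  // one -H flag per captured request header
  for (const [name, value] of Object.entries(headers)) {
    parts.push(`-H '${name}: ${value}'`);
  }
  // include the request body, if any, via --data
  if (body) parts.push(`--data '${body}'`);
  parts.push(`'${url}'`);
  return parts.join(" ");
}
```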

Demo

Go to https://www.repro-now.com/previewer/ and load the demo video.

Get It Now

ReproNow is available as an extension on the Chrome Web Store. The project repository is available on GitHub.

 

Lakshmi Sudheer
Security Researcher

It’s National Cyber Security Awareness Month: Learn How Adobe Security Participates and How You Can Get Involved

With email phishing scams, ransomware and more, the internet can sometimes feel like a dangerous place. But the good news is that we all can greatly reduce our exposure to cyber threats by being aware and following a few simple tips, like how to create stronger passwords and how to back up your data. Adobe participates in National Cyber Security Awareness Month (NCSAM) by promoting cyber-safety best practices to its employees. It’s important to raise awareness for this topic as it helps to better protect our customers and our employees while elevating our security posture at Adobe.

We host a number of internal, security-focused events. Security best practices are posted in all our office elevators, and our internal intranet offers tips on protecting against phishing attacks and creating stronger passwords, and explains why it’s important to keep software up to date. At Adobe, there’s new security-related information or activities to look for every week of October.

During NCSAM, Adobe’s security and safety experts will discuss Internet safety with Adobe employees. Those who attend the sessions will learn how to be safer online, understand social engineering tactics and discover ways to utilize privacy controls in social media platforms. The Adobe Secure Software Engineering Team (ASSET) is sponsoring an internal bug bounty, in which employees compete for prizes by finding and reporting security vulnerabilities in an internal application.

In addition to our annual internal bug bounty and capture-the-flag competitions, our Adobe Secure Software Engineering Team (ASSET) in India will host a series of internal events in Noida and Bangalore to promote security awareness, all leading up to a multi-event Tech Talk. This year’s Tech Talk is open to all Adobe employees and will feature presentations on content security, intelligent browsers, protecting against third-party vulnerabilities and more.

At Adobe, we like to celebrate NCSAM and sharpen our security skills internally. What kind of activities do you participate in during the month to celebrate cyber security awareness? Tweet @AdobeSecurity with #CyberAware to join the discussion.

Also, last year Adobe conducted a survey and posted an infographic on cyber security awareness statistics; check it out!

http://blogs.adobe.com/security/files/2017/07/Adobe-NCSAM-Infographic_11.17.162.pdf

For more information and how to get involved:  https://staysafeonline.org/ncsam/get-involved/

 

Julia Knecht
Manager, Security & Privacy Architecture
Digital Marketing

Adobe @ DefCon 2017

A few weeks ago we joined fellow members of the Adobe security team at DefCon 2017. Conference attendance has grown over the years as security has become mainstream. We were looking forward to the great lineup of briefings, villages, and capture the flag (CTF) contests – DefCon never disappoints.

Here are some of the briefings this year that we found interesting and valuable to our work here at Adobe.

“A New Era of SSRF – Exploiting URL Parser in Trending Programming Languages” by Orange Tsai

The best part of this presentation was that it was very hands-on and less theoretical – something we look forward to in a DefCon presentation. It discussed zero-day vulnerabilities in the URL parsers and requesters of widely used languages like Java, Python, Ruby, and JavaScript, which was really helpful since Adobe is a multilingual shop, and it covered mitigation strategies as well. Orange Tsai, the presenter, followed the talk with an interesting demo in which he chained four different vulnerabilities – SSRF, CRLF injection, an unsafe marshal in memcached, and a Ruby gem – to achieve remote code execution (RCE) on GitHub Enterprise. He called the combined technique “Protocol Smuggling,” and it earned him a $12,500 bounty from GitHub.

“First SHA-1 collision” presented by Elie Bursztein from Google

This was one of the presentations attendees most looked forward to – there was a significant wait just to get in. It was super helpful because the presenters demoed how an attacker could forge PDF documents that have the same hash yet different content. We really appreciated the effort Google’s anti-abuse team put into this research, which was based on cryptanalysis – considered to be 100,000 times more effective than a brute-force attack. For the tech community, these findings emphasize the need to reduce SHA-1 usage. Google has advocated the deprecation of SHA-1 for many years, particularly for signing TLS certificates. The team also briefly discussed safer hashing algorithms, such as SHA-256 and bcrypt, and spent some time on the future of hash security.

“Friday the 13th: JSON Attacks!” by Alvaro Muñoz and Oleksandr Mirosh

The briefing kicked off with examples of deserialization attacks and an explanation of how 2016 came to be known as the year of the “Java Deserialization Apocalypse.” The talk focused on JSON libraries that allow arbitrary code execution upon deserialization of untrusted data, followed by a walkthrough of deserialization vulnerabilities in some of the most common Java and .NET libraries. The speakers emphasized that the format used for serialization is irrelevant to deserialization attacks – it could be binary, text such as XML or JSON, or even a custom format – and that serializers cannot be trusted with untrusted data. The talk provided guidance on detecting whether a serializer could be attacked, and ended with mitigation advice to help avoid configurations that could leave serialization libraries vulnerable. This briefing was particularly valuable because it helped us better understand JSON attacks, how to discover vulnerable deserialization library configurations, and how to mitigate known issues.

“CableTap: Wirelessly Tapping Your Home Network” by Chris Grayson, Logan Lamb, and Marc Newlin

At this briefing, the presenters discussed 26 critical vulnerabilities they discovered in some major ISP-provided network devices. They also showcased some cool attack chains enabling someone to take complete control over these devices and their network.

Hacking Village

One of the other major highlights of DefCon 25 was the Voting Machine Village. For the first time ever, US voting machines were brought into the Hacking Village, and many vulnerabilities were found in them over the course of DefCon – it was reported that the machines were hacked in under two hours. The Recon Village also never fails to deliver the best of social engineering exploits; it reminds us of the importance of security training and education. Additionally, the demo labs were thought-provoking, and we found a lot of tools to potentially add to our toolkits. A couple of the cool ones included Android Tamer by Anant Shrivastava, which focuses on Android security, and EAPHammer by Gabriel Ryan, a toolkit for targeted evil twin attacks on WPA2-Enterprise networks.

Overall these industry events provide a great opportunity for our own security researchers to mingle with and learn from the broader security community. They help keep our knowledge and skills up-to-date. They also provide invaluable tools to help us better mitigate threats and continue to evolve our Adobe SPLC (Secure Product Lifecycle) process.

Lakshmi Sudheer & Narayan Gowraj
Security Researchers

Leveraging Security Headers for Better Web App Security

Modern browsers support quite a few HTTP headers that provide an additional layer in any defense-in-depth strategy. If present in an HTTP response, these headers enable compatible browsers to enforce certain security properties. In early 2016, we undertook an Adobe-wide project to implement security headers.

Implementing a security control in a medium- to large-scale organization, consisting of many teams and projects, presents its own unique challenges. For example: should we go after select headers or all of them? Which headers do we select? How do we encourage adoption? How do we verify? Here are a few interesting observations from our quest to implement security headers:

Browser vs. Non-browser clients

Compatible browsers enforce the security policies contained in security headers, while incompatible browsers and non-browser clients simply ignore them. However, your setup may require additional considerations. For example, Adobe has a range of clients: full browser-based, headless browser-based (e.g., Chromium Embedded Framework), and desktop applications. Implementing the security property required by an HTTP Strict Transport Security (HSTS) header – that all traffic is sent over TLS – for all such clients requires a combination of an HSTS header and 302 redirects from HTTP to HTTPS, which is not as secure. The reason is that incompatible headless browser-based clients and desktop applications ignore an HSTS header sent by the server; if the landing page URL is provided over HTTP, those clients will continue to send requests over HTTP unless the 302 redirect approach is used. Updating all such clients to use HTTPS landing URLs is a thorny problem, since it would require updating clients installed on customers’ machines. The key is to understand the unique aspects of your applications and customer base to determine how each header can be implemented.

Effort ROI

Adding some security headers requires less effort than others. In our assessment, we found an increasing order of difficulty and required implementation investment: X-XSS-Protection, X-Content-Type-Options, X-Frame-Options, HSTS, and Content Security Policy (CSP). For the first two, we have not found any reason not to use them and, so far, no team has needed to do extra work when turning them on. Enabling an X-Frame-Options header may require additional consideration – for example, are there legitimate reasons to allow framing at all, or from the same origin? CSP is by far the most powerful header, subsuming all the others, and depending on your goals it may require the most effort to employ. For example, if cross-site scripting prevention is the goal, it might require refactoring code to move inline JavaScript into separate files.

We decided to adopt a phased implementation. For Phase 1: X-XSS-Protection, X-Content-Type-Options, X-Frame-Options. For Phase 2: HSTS, CSP (we haven’t yet considered the Public Key Pinning Extension for HTTP header). Dividing massive efforts into phases offers two main benefits:

  1. The ability to make initial phases lightweight to build momentum with product teams – later phases can then focus on more complex tasks.
  2. The opportunity to iron out quirks in processes and automation.
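As an illustration (not Adobe’s actual configuration), the Phase 1 headers with commonly used values could be expressed as a simple helper that a server framework merges into every response:

```javascript
// Illustrative sketch: the Phase 1 response headers with commonly
// used values. The exact values an organization picks (e.g. DENY vs.
// SAMEORIGIN for framing) depend on its applications.
function phaseOneHeaders() {
  return {
    "X-XSS-Protection": "1; mode=block",      // enable the browser XSS filter
    "X-Content-Type-Options": "nosniff",      // disable MIME-type sniffing
    "X-Frame-Options": "SAMEORIGIN",          // allow framing from same origin only
  };
}
```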

Application stack vs. Operations stack

Who should we ask to enable these headers? Should it be the developers writing application code or the operations engineers managing servers? There are pros and cons to each option (e.g., specifying many security headers in server configurations such as nginx, IIS, and Apache is programming-language agnostic and doesn’t require changes to the code). Introducing such headers could even be made part of hardened deployment templates (e.g., Chef recipes). However, some headers may require a deeper understanding of the application landscape, such as which sites, if any, should be allowed to frame it. We provided both options to teams, and the table below lists some useful links that helped inform our decisions.

Server-based (nginx, IIS, Apache): https://blog.g3rt.nl/nginx-add_header-pitfall.html, https://scotthelme.co.uk/hardening-your-http-response-headers

Programming language-based defenses:
  • Ruby on Rails – https://github.com/twitter/secureheaders
  • JavaScript – https://github.com/helmetjs/helmet, https://github.com/seanmonstar/hood, https://github.com/nlf/blankie, https://github.com/rwjblue/ember-cli-content-security-policy
  • Java – https://spring.io/blog/2013/08/23/spring-security-3-2-0-rc1-highlights-security-headers
  • ASP.NET – https://github.com/NWebsec/NWebsec/wiki
  • Python – https://github.com/mozilla/django-csp, https://github.com/jsocol/commonware, https://github.com/sdelements/django-security
  • Go – https://github.com/kr/secureheader
  • Elixir – https://github.com/anotherhale/secure_headers

Effecting the change

We leveraged our existing Secure Product Lifecycle engagement process with product teams to effect this change. As part of this process, we use a security backlog to capture pending security work for a team. Adding security header work to this backlog ensured that the implementation would be completed as part of our regular processes.

Checking Compliance

Having provided the necessary phased guidance, the last piece of the puzzle was to develop an automated way to verify correct implementation of security headers. Manually checking the status of these headers would be not only effort-intensive but also repetitive and ineffective. Using publicly available scanners (such as https://securityheaders.io) is not always an option due to accessibility restrictions on stage/dev environments or our specific phased implementation guidance. As noted here, Adobe has undertaken several company-wide security initiatives; a major one is the Security Automation Framework (SAF). SAF supports creating assertions in various programming languages and running them periodically with reporting. First, we organically compiled a list of endpoints for Adobe web properties discovered through other initiatives. Then we implemented the Phase 1 and Phase 2 header checks as SAF assertions that run weekly across the sites on this list. These automated scans have been instrumental in getting a bird’s-eye view of adoption progress and provide the information needed to start a dialogue with product teams.
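A phase-aware header check in the spirit of a SAF assertion might look like the following sketch; the real framework and its API are internal, so the names and structure here are assumptions:

```javascript
// Sketch of a phase-aware security-header check. Given the response
// headers observed for an endpoint, report which required headers
// for the given phase are missing.
const PHASES = {
  1: ["x-xss-protection", "x-content-type-options", "x-frame-options"],
  2: ["strict-transport-security", "content-security-policy"],
};

function missingHeaders(responseHeaders, phase) {
  // header names are case-insensitive, so normalize before comparing
  const present = new Set(
    Object.keys(responseHeaders).map((h) => h.toLowerCase())
  );
  return PHASES[phase].filter((h) => !present.has(h));
}
```

An assertion would then pass when `missingHeaders(...)` returns an empty list.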

Security headers provide a useful layer in any defense-in-depth strategy, and most are relatively easy to implement. The size of an organization and the number of its products can introduce challenges that are not necessarily specific to security headers. Within any organization, it’s critical to plan security initiatives so they form a malleable ecosystem that eases the implementation of future initiatives. It does indeed take a village to implement security controls in an organization as big as Adobe.

Prithvi Bisht
Senior Security Researcher

Help Protect Your Devices: Update Your Software

When you receive a notification from your computer that it’s time to update your software, do you immediately accept it, or do you delay the update because you’re in the middle of something? If you’re like 64 percent of American computer-owning adults, you recognize how critical software updates are and update your software immediately. Another 30 percent update their software depending on what the update is for. That’s 94 percent who recognize software updates and at least consider taking action when prompted.

We asked a nationally representative sample of ~2,000 computer-owning adults in the United States about their behaviors and knowledge when it comes to cybersecurity. Interestingly, attitudes toward updating software have changed for the better in the last five years. Consumers seem more likely to update their software immediately, indicating that updates are becoming easier to install and that computer-owning adults are better informed about how and why updating software is so important for protecting their identity and devices. While a majority update the software on their computer promptly, 83 percent are equally or more diligent in updating their smartphones than their computers. No matter what type of device you own – computer, tablet or smartphone – it’s critical to keep all your software up to date, as soon as the update is available.

Here are some additional insights from the survey on current practices regarding software updates and also some tips and reminders on why you should be updating your software – no matter the device – regularly.

Keep Your Software Up to Date (It’s Critical)

Across the industry, we continue to see how attackers are finding holes and exploiting software that is not up-to-date. In fact, attackers may target vulnerabilities for months – or even years – long after patches have been made available. Keeping your software up-to-date is a critical part of protecting your devices, online identity and information. The good news is that according to our survey results, 78 percent of consumers recognize the importance of keeping software up-to-date. Among those who typically update their software, 68 percent indicate that both security and crash control are top reasons for updating.

No matter the reason, keeping your software up to date should become a part of your regular routine:

  • Select automatic updates. When possible, select automatic updates for your software – that way, your devices will update without adding another item to your to-do list.
  • Select notification reminders. If you prefer to know exactly what updates are being installed, you can set notifications to remind you to update the software yourself. Our survey results show that 1 in 3 people update on the first notification; interestingly, adults of the Baby Boomer generation are most likely to update their software after one prompt while those tech-savvy Millennials are more likely to need 3 to 5 notifications to update software. For all those not updating on the first prompt, we suggest selecting automatic software updates when possible.

Legitimate Software Updates

While a majority of our survey respondents noted that they frequently update their software, there was a very small group that indicated the reason for not updating their software is because they don’t trust that the update is legitimate. If you share this same concern, here are a few tips and reminders to help ensure you are downloading legitimate software:

  • Set automatic software updates. To help ensure that you are downloading legitimate software, when possible select for your software to be automatically updated. One less thing to do on your end that keeps your computer in check!
  • Check for the software update directly on the company website. When updates or patches to software are available, companies typically have updates on their website. If you’re unsure about a notification, double check on the software company’s website.
  • Be wary of notifications via email. Some companies may send notifications of software updates via email. Be cautious with these, as attackers often use fake email messages that may contain viruses that appear to be software updates. If you’re unsure about the software updates you receive as an email, check the company’s website to download the latest patches. And don’t fall victim to phishing ploys! See our blog post on tips for recognizing phishing emails.

Staying One Step Ahead

The technology industry is consistently moving forward, and the task of updating software should continue to progress and be made as simple as possible for users – especially since the majority of exploits appear to target software installations that are not up to date on the latest security updates. Adobe strongly recommends that users install security updates as soon as they are available. Or better yet, select the option to allow automatic updates, which installs them in the background without requiring further user action.

Dave Lenoe
Director, Adobe Secure Software Engineering Team (ASSET)

Survey Infographic (PDF download)

About the Survey (PDF download)

OWASP, IR, ML, and Internal Bug Bounties

A few weeks ago, I traveled to the OWASP Summit, located just outside of London. The OWASP Summit is not a conference; it is a remote offsite event for OWASP leaders and the community to brainstorm about how to improve OWASP. There was a series of one-hour sessions on how to tackle different topics and what OWASP could offer to make the world better. This is a summary, from my personal perspective, of some of the more interesting sessions I was involved in. These views are not in any way official OWASP views, nor are they intended to represent the views of everyone who participated in the respective working groups.

One session that I attended dealt with incident response (IR). Often, incident response guidelines are written for the response team: they cover how to do forensics, analyze logs, and handle other response tasks performed by the core IR team. However, there is an opportunity to create incident response guidelines targeting the security champions and developers the IR team interacts with during an incident. In addition, OWASP can relate how this information ties back into its existing secure development best practices. As an example, one of the first questions from an IR team is how customer data flows through the application, which allows the team to quickly determine what might have been exposed. In theory, a threat model should contain this type of diagram and could be an immediate starting point for this discussion. After the IR event, it is also good for the security champion to hold a post-mortem review of the threat model to determine how the process could have better identified the exploited risk. Many of the secure development best practices recommended in the security development lifecycle support both proactive and reactive security efforts. A security champion should also know how to contact the IR team and regularly sync with them on current response policies. These types of recommendations from OWASP could help companies ensure that a security champion on the development team is prepared to assist the IR team during a potential incident. Many of the ideas from our discussion were captured in the outcomes page from the session: https://owaspsummit.org/Outcomes/Playbooks/Incident-Response-Playbook.html.

Another interesting discussion from the summit dealt with internal bug bounties. This is an area where Devesh Bhatt and I were able to provide input based on our experiences at Adobe; Devesh has participated in our internal bounty programs as well as several external ones. Internal bug bounties have been a hot topic in the security community. On the one hand, many companies have employees who participate in public bounties, and it would be great to focus those skills on internal projects. Employees who participate in public bug bounties often include general developers outside the security team who aren’t directly tied into internal security testing efforts. On the other hand, you want to avoid creating conflicting incentives within the company. If this topic interests you, Adobe’s Pieter Ockers will discuss the internal bug bounty program he created in detail at the O’Reilly Security conference in New York this October: https://conferences.oreilly.com/security/sec-ny/public/schedule/detail/62443.

Lastly, there was a session on machine learning. This has been a recent research area of mine, since it is the natural next step for applying the data collected from our security automation work; Adobe also applies machine learning in projects like Sensei. Even though the session was on Friday, there was a large turnout of OWASP leaders. We discussed whether there were ways to share machine learning training data sets and methodologies for generating them using common tools. One observation was that many people are still very new to machine learning, so I decided to start by drafting a machine learning resources page for OWASP. The goal of the page isn’t to copy existing introductory content onto an OWASP page where it could quickly become dated, nor to drown the reader with a link to every machine learning reference ever created. Instead, it focuses on a small selection of references useful for getting started with the basic concepts; readers can then find their own content that goes deeper into the areas that interest them. For instance, Coursera provides an 11-week Stanford course on machine learning, but that would overwhelm someone seeking only a high-level overview. The first draft of my straw man proposal for the OWASP page can be found here: https://www.owasp.org/index.php/OWASP_Machine_Learning_Resources. As OWASP creates more machine learning content, this page could eventually become a useful appendix. It is only a proposal and not an official OWASP project at this stage. Additional ideas from the workshop discussion can be found on the outcomes page: https://owaspsummit.org/Outcomes/machine-learning-and-security/machine-learning-and-security.html.

OWASP is a community driven group where all are invited to participate. If these topics interest you, then feel free to join an OWASP mailing list and start to provide more ideas. There were several other great sessions from the summit and you can find a list of outcomes from each session here: https://owaspsummit.org/Outcomes/.

Peleus Uhley
Principal Scientist

Lessons Learned from Improving Transport Layer Security (TLS) at Adobe

Transport Layer Security (TLS) is the foundation of security on the internet. As our team evolved from a primarily consultative role to solving problems for the entire company, we chose TLS as one of the areas to improve. The goal of this blog post is to share the lessons we learned from this project.

TLS primer

TLS is a commonly used protocol to secure communications between two entities. If a client is talking to a server over TLS, it expects the following:

  1. Confidentiality – The data between the client and the server is encrypted and a network eavesdropper should not be able to decipher the communication.
  2. Integrity – The data between the client and the server should not be modifiable by a network attacker.
  3. Authentication – In the most common case, the identity of the server is authenticated by the client during the establishment of the connection via certificates. You can also have 2-way authentication, but that is not commonly used.

Lessons learned

Here are the main lessons we learned:

Have a clearly defined scope

Instead of trying to boil the ocean, we decided to focus on around 100 domains belonging to our Creative Cloud, Document Cloud and Experience Cloud solutions. This let us address these services first rather than drowning in the thousands of other Adobe domains.

Have clearly defined goals

TLS is a complicated protocol and the definition of a “good” TLS configuration keeps changing over time. We wanted a simple, easy to test, pass/fail criteria for all requirements on the endpoints in scope. We ended up choosing the following:

SSL Labs grade

SSL Labs does a great job of testing a TLS configuration and boiling it down to a grade. Grade ‘A’ was viewed as a pass and anything else as a fail. Some endpoints had valid reasons to support certain ciphers that resulted in a lower grade; I will talk about that later in this post.

App Transport Security

Apple has a minimum bar for TLS configuration that all endpoints must meet if iOS apps are to connect to them. We reviewed these criteria and deemed all the requirements sensible, so we decided to make them a requirement for all endpoints, regardless of whether an endpoint is accessed from an iOS app. We found a few corner cases where a configuration would get an SSL Labs grade A yet fail ATS (and vice versa), which we resolved on a case-by-case basis.

HTTP Strict Transport Security

HSTS (HTTP Strict Transport Security) is an HTTP response header that tells compliant clients to always use HTTPS when connecting to a website. It addresses the problem of the initial request being made over plain HTTP when a user types in the site address without specifying the protocol, and it helps prevent connection hijacking. When a compliant client receives this header, it uses only HTTPS to connect to the website for the max-age value set by the header; the max-age countdown resets every time the client receives the header. You can read the details about HSTS in RFC 6797.
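A minimal sketch of parsing an HSTS header value into its policy fields, loosely following the directive syntax in RFC 6797:

```javascript
// Minimal sketch: parse a Strict-Transport-Security header value into
// its policy fields (max-age and includeSubDomains), loosely following
// the directive syntax in RFC 6797. Directive names are case-insensitive.
function parseHsts(headerValue) {
  const policy = { maxAge: 0, includeSubDomains: false };
  for (const raw of headerValue.split(";")) {
    const directive = raw.trim().toLowerCase();
    if (directive.startsWith("max-age=")) {
      policy.maxAge = parseInt(directive.slice("max-age=".length), 10);
    } else if (directive === "includesubdomains") {
      policy.includeSubDomains = true;
    }
  }
  return policy;
}
```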

Complete automation of testing workflow

We wanted these tests to have minimal ongoing human cost, and this project allowed us to utilize our Security Automation Framework. Once the scans are set up and scheduled, they run daily and the results are delivered to us via email, Slack, etc. After the initial push to get all the endpoints passing all the tests, it was very easy to catch any drift when we saw a failed test. The results are surfaced in the SAF UI.

The Devil is in the Details

From a high level, it seems fairly straightforward to improve TLS configurations. However, it gets more complicated in the details. I want to talk a little about how we went about removing ciphers that were hampering the SSL Labs grade.

To understand the issues, you have to know a little bit about the TLS handshake. During the handshake, the client and the server decide which cipher to use for the connection. The client sends the list of ciphers it supports in the ClientHello message of the handshake. If server-side preference is enabled, the server picks the cipher that is listed highest in its own preference order and is also supported by the client. In our case, the cipher that was causing the grade degradation was listed fairly high on the list. As a result, when we looked at the ciphers used for connections, this cipher was used in a significant percentage of the traffic. We didn’t want to simply remove it because of the potential risk of dropping support for some customers without any notification. Therefore, we initially moved it to the bottom of the supported cipher list. This reduced the percentage of traffic using that cipher to a very small value. We were then able to identify that a partner integration was responsible for all of the remaining traffic using this cipher. We reached out to that partner and notified them to make the appropriate changes before we disabled the cipher. If you found this interesting, you might want to consider working for us on these projects.
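The server-preference selection described above can be sketched as follows (the cipher names and lists are illustrative, not Adobe's actual configuration):

```python
def select_cipher(server_preference, client_ciphers, server_pref_enabled=True):
    """Pick the cipher for a TLS connection.

    With server-side preference enabled, the first cipher in the server's
    ordered list that the client also offers wins; otherwise the client's
    ordering decides.
    """
    ordered = server_preference if server_pref_enabled else client_ciphers
    common = set(server_preference) & set(client_ciphers)
    for cipher in ordered:
        if cipher in common:
            return cipher
    return None  # no cipher in common: the handshake fails

# Moving a weak cipher to the bottom of the server list means it is only
# ever chosen by clients that support nothing stronger.
server = ["ECDHE-RSA-AES128-GCM-SHA256", "AES128-SHA", "DES-CBC3-SHA"]
modern_client = ["AES128-SHA", "ECDHE-RSA-AES128-GCM-SHA256"]
legacy_client = ["DES-CBC3-SHA"]

print(select_cipher(server, modern_client))  # server's top shared preference
print(select_cipher(server, legacy_client))  # weak cipher, last resort only
```

This is why demoting the problematic cipher, rather than removing it outright, shrank its share of traffic to only those clients with no better option.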

Future work

In the future, we want to expand the scope of this project and raise the bar for services that have already met the requirements described in this post. One of the near-term goals is to get some of our domains added to the HSTS preload list. Another goal is more thorough monitoring of certificate transparency logs for better alerting on new certificates issued for Adobe domains. We have also been experimenting with HPKP. However, as with all new technologies, there are issues we must tackle to continue to ensure the best balance of security and experience for our customers.

Gurpartap Sandhu
Security Researcher

Getting Secrets Out of Source Code

Secrets are valuable information targeted by attackers to get access to your systems and data. Secrets can be encryption keys, passwords, private keys, AWS secrets, OAuth tokens, JWT tokens, Slack tokens, API secrets, and so on. Unfortunately, secrets are sometimes hardcoded or stored along with source code by developers. Even though the source code may be kept securely in software control management (SCM) tools, that is not a suitable place to store secrets. For instance, it is not possible to restrict access to source code repositories, as engineering teams collaborate to write and review code. Any secrets in source code will also be copied to clones or forks of your repository, making them hard to track and remove. If a secret is ever committed in code stored in SCM tools, then you should consider it potentially compromised. There are other risks in storing secrets in source code. Source code could be accidentally exposed on public networks due to simple misconfiguration, or released software could be reverse engineered by attackers. For all of these reasons, you should make sure secrets are never stored in source code and SCM tools.

Security teams should take a holistic approach to tackle this problem. First and foremost, educate developers to not hardcode or store secrets in source code. Next, look for secrets while doing security code reviews. If you are using static analysis tools, then consider writing custom rules to automate this process. You could also have automated tests that look for secrets and will fail the code audit if they are found. Lastly, evaluate existing source code and enumerate secrets that are already hardcoded or stored along with source code and migrate them to password management vaults.

However, finding all of the secrets potentially hiding in source code could be challenging depending on the size of the organization and number of code repositories. There are, fortunately, a few tools available to help find secrets in source code and SCM tools. Gitrob is an open source tool that aids organizations in identifying sensitive information lingering in Github repositories. Gitrob iterates over all the organization’s repositories to find files that might contain sensitive information using a range of known patterns. Gitrob can be an efficient way to more quickly identify files which are known to contain sensitive information (e.g. private key file *.key).
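A Gitrob-style filename sweep can be sketched in a few lines; the pattern list below is an illustrative subset, not Gitrob's actual signature database:

```python
import fnmatch

# Illustrative subset of filename patterns such a scan might flag.
SENSITIVE_FILE_PATTERNS = ["*.key", "*.pem", "id_rsa", "*.keychain", "*_history"]

def flag_sensitive_files(paths):
    """Return repository paths whose file name matches a known sensitive pattern."""
    flagged = []
    for path in paths:
        name = path.rsplit("/", 1)[-1]
        if any(fnmatch.fnmatch(name, pat) for pat in SENSITIVE_FILE_PATTERNS):
            flagged.append(path)
    return flagged

repo = ["src/app.py", "config/server.key", "docs/readme.md", "home/.bash_history"]
print(flag_sensitive_files(repo))
```

Filename matching is cheap enough to run across every repository in an organization, which is why it makes a good first pass before deeper content inspection.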

Gitrob can, however, generate thousands of findings, which can lead to a number of false positives or false negatives. I recommend complementing Gitrob with other tools such as ‘truffleHog’ developed by Dylan Ayrey or ‘git-secrets’ from AWS Labs. These tools are able to do deeper searches of code and may help you cut down on some of the false reports.

Our team chose to complement Gitrob with custom python scripts that looked into the file content. The scripts identified secrets based on regular expression patterns and entropy. The patterns were created from the secrets found through Gitrob and an understanding of the structure of the secrets in our code. For example, to find an AWS access key ID and secret access key, I used regular expressions suggested by Amazon in one of their blog posts:

Pattern for access key IDs: (?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9])
Pattern for secret access keys: (?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=])
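A simplified sketch of such a scanner, combining the AWS patterns above with a Shannon-entropy check, might look like this (the entropy threshold is illustrative; the test strings are Amazon's published dummy credentials, not real keys):

```python
import math
import re

# Patterns from Amazon's blog post, as quoted above.
AWS_ACCESS_KEY_ID = re.compile(r"(?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9])")
AWS_SECRET_KEY = re.compile(r"(?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=])")

def shannon_entropy(s):
    """Bits of entropy per character; random-looking secrets score high."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def find_candidate_secrets(text, entropy_threshold=3.5):
    """Return substrings that match a secret pattern AND look random.

    The entropy check filters out pattern matches that are really just
    ordinary words, cutting down on false positives.
    """
    candidates = []
    for pattern in (AWS_ACCESS_KEY_ID, AWS_SECRET_KEY):
        for match in pattern.findall(text):
            if shannon_entropy(match) >= entropy_threshold:
                candidates.append(match)
    return candidates
```

For example, `find_candidate_secrets` flags Amazon's dummy key ID `AKIAIOSFODNN7EXAMPLE`, but a 20-character English phrase in all caps matches the regex and is discarded by the entropy filter.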

In order to scale, you can share these tools and guidance with product teams so they can run them and triage the findings themselves. You should also create clear guidelines for product teams on how to store secrets and move them securely to established secret management tools (e.g. password vaults) for your production environment. The basic principle is to never store passwords or secrets in clear text and to make sure they are encrypted at rest and in transit until they reach the production environment. Secrets that were stored insecurely must be invalidated or rotated. Password management tools might also provide features such as audit logs, access control, and secret rotation, which can further help keep your production environment secure.

Given how valuable secrets are – and how much harm they can cause your organization if they were to unwittingly get out – security teams must proactively tackle this problem. No team wants to be the one responsible for leaking secrets through their source code.

Karthik Thotta Ganesh
Security Researcher

Developing an Amazon Web Services (AWS) Security Standard

Adobe has an established footprint on Amazon Web Services (AWS).  It started in 2008 with Managed Services, and expanded greatly with the launch of Creative Cloud in 2012 and the migration of Business Catalyst to AWS in 2013. During this time, we found it challenging to keep up with AWS security review needs.  In order to increase scalability, it was clear we needed a defined set of minimum AWS security requirements and tooling automation for auditing AWS environments against it.  This might sound simple, but like many things, the devil was in the details. It took focused effort to ensure the result was a success.  So how did we get here?  Let’s start from the top.

First, the optimal output format needed to be decided upon.  Adobe consists of multiple Business Units (BUs), and there are many teams within those BUs.  We needed security requirements that could be broadly applied across the company as well as to acquisitions: requirements that would not only cover existing and new services across BUs, but would also be future-proof. Given these constraints, creating a formal standard for our teams to follow was the best choice.

Second, we needed to build a community of stakeholders in the project. For projects with broad impact such as this, it’s best to have equally broad stakeholder engagement.  I made sure we had multiple key representatives from all the BUs (leads, architects, & engineers) and that various security roles were represented (application security, operational security, incident response, and our security operations center).  This led to many strong opinions about direction. Thus, it was important to be an active communication facilitator for all teams to ensure their needs were met.

Third, we reviewed other efforts in the security industry to see what information we could learn.  There are many AWS security-related whitepapers from various members of the security community.  There have been multiple security-focused AWS re:Invent presentations over the years.  There’s also AWS’s Trusted Advisor and Config Rules, plus open source AWS security assessment tools like Security Monkey from Netflix and Scout2 from NCC Group.  These are all good places to glean information from.

While all of these varied information sources are fine and dandy, is their security guidance relevant to Adobe?  Does it address Adobe’s highest risk areas in AWS?  Uncritically following public guidance could result in the existence of a standard for the sake of having a standard – not one that delivered benefits for Adobe.

A combination of security community input, internally and externally documented best practices, and looking for patterns and possible areas of improvement was used to define an initial scope to the standard.  At the time the requirements were being drafted, AWS had over 30 services. It was unreasonable (and unnecessary) to create security guidance covering all of them.  The initial scope for the draft minimum security requirements was AWS account management, Identity & Access Management (IAM), and Compute (Amazon Elastic Compute Cloud (EC2) and Virtual Private Cloud (VPC)).

We worked with AWS stakeholders within Adobe through monthly one-hour meetings to agree on the minimum bar security requirements for AWS, which were to be applied to all of Adobe’s AWS accounts (dev, stage, prod, testing, QA, R&D, personal projects, etc.).  We knew we’d want a higher security bar for environments that handle more sensitive classes of data or are customer facing. We held a two-day AWS security summit purely focused on defining these higher bar security requirements, to ensure all stakeholders had their voices heard and to avoid any contention as the standard was finalized.

As a result of the summit, the teams were able to define higher security requirements that covered account management/IAM and compute (spanning architecture, fleet management, data handling, and even requirements beyond EC2/VPC including expansion into AWS-managed services such as S3, DynamoDB, SQS, etc.).

I then worked with Adobe’s Information Systems Security Policies & Standards team to publish an Adobe-wide standard.  I transformed the technical requirements into an appropriate standard.  This was then submitted to Adobe’s broader standards teams to review.  After this review, it was ready for formal approval.

The necessary teams agreed to the standard and it was officially published internally in August 2016.  I then created documentation to help teams use the AWS CLI to audit for and correct issues from the minimum bar requirements. We also communicated the availability of the standard and began assisting teams towards meeting compliance with it.

Overall the standard has been well received by teams.  They understand the value of the standard and its requirements in helping Adobe ensure better security across our AWS deployments.  We have also developed timelines with various teams to help them achieve compliance with the standard. And, since our AWS Security Standard was released we have seen noted scalability improvements and fewer reported security issues.  This effort continues to help us in delivering the security and reliability our customers expect from our products and services.

Cynthia Spiess
Web Security Researcher

Evolving an Application Security Team

A centralized application security team, similar to ours here at Adobe, can be the key to driving the security vision of the company. It helps implement the Secure Product Lifecycle (SPLC) and provides security expertise within the organization.  To stay current and have impact within the organization, a security team also needs to be in a mode of continuous evolution and learning. At the inception of such a team, its impact is usually localized to the applications it reviews.  As the team matures, it can start to influence the security posture of the whole organization. I lead the team of security researchers at Adobe. Our team’s charter is to provide security expertise to all application teams in the company.  At Adobe, we have seen our team mature over time. As we look back, we would like to share the various phases of evolution that we have gone through along the way.

Stage 1:  Dig Deeper

In the first stage, the team is in the phase of forming and acquires the required security skills through hiring and organic learning. The individuals on the team bring varied security expertise, experience, and a desired skillset to the team. During this stage, the team looks for applicability of security knowledge to the applications that the engineering teams develop.  The security team starts this by doing deep dives into the application architecture and understanding why the products are being created in the first place. Here the team understands the organization’s application portfolio, observes common design patterns, and then starts to build the bigger picture on how applications come together as a solution.   Sharing learnings within the team is key to accelerating to the next stage.

By reviewing applications individually, the security team starts to understand the “elephants in the room” better and is also able to prioritize based on risk profile. A security team will primarily witness this stage during inception. But, it could witness it again if it undergoes major course changes, takes on new areas such as an acquisition, or must take on a new technical direction.

Stage 2: Research

In the second stage, the security team is already able to perform security reviews for most applications, or at least a thematically related group of them, with relative ease.  The security team may then start to observe gaps in its security know-how due to changes in broader industry or company engineering practices or the adoption of new technology stacks.

During this phase, the security team starts to invest time in researching any necessary security tradeoffs and relative security strength of newer technologies being explored or adopted by application engineering teams. This research and its practical application within the organization has the benefit of helping to establish security experts on a wider range of topics within the team.

This stage helps security teams stay ahead of the curve, grow security subject matter expertise, update any training materials, and give more meaningful security advice to other engineering teams. For example, Adobe’s application security team was initially skilled in desktop security best practices. It evolved its skillset as the company launched products centered around cloud and mobile platforms. This newly acquired skillset required further specialization when the company started exploring more “bleeding edge” cloud technologies such as containerization for building micro-services.

Stage 3: Security Impact

As security teams become efficient in reviewing solutions and can keep up with technological advances, they can then start looking at homogeneous security improvements across their entire organization.  This has the potential of a much broader impact on the organization. Therefore, this requires the security team to be appropriately scalable to match possible increasing demands upon it.

If a security team member wants to make this broader impact, the first step is identification of a problem that can be applied to a larger set of applications.  In other words, you must ask members of a security team to pick and own a particularly interesting security problem and try to solve it across a larger section of the company.

Within Adobe, we were able to identify a handful of key projects that fit the above criteria for our security team to tackle. Some examples include:

  1. Defining the Amazon Web Services (AWS) minimum security bar for the entire company
  2. Implementing appropriate transport layer security (TLS) configurations on Adobe domains
  3. Validating that product teams did not store secrets or credentials in their source code
  4. Enforcing the use of browser-supported security headers (e.g. X-XSS-Protection, X-Frame-Options) to help protect web applications.

The scope of these solutions varied from just business critical applications to the entire company.
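For a project like the security headers item above, validation can be made repeatable with a simple check. The sketch below is illustrative only; the required header set is a hypothetical minimum, not Adobe's actual bar:

```python
# Hypothetical minimum header set; a real policy would be more detailed
# and would also validate header values, not just presence.
REQUIRED_HEADERS = {
    "x-frame-options",
    "x-xss-protection",
    "x-content-type-options",
    "strict-transport-security",
}

def missing_security_headers(response_headers):
    """Return the required headers absent from a response, case-insensitively."""
    present = {name.lower() for name in response_headers}
    return sorted(REQUIRED_HEADERS - present)

headers = {
    "Content-Type": "text/html",
    "X-Frame-Options": "DENY",
    "X-XSS-Protection": "1; mode=block",
}
print(missing_security_headers(headers))
```

A check like this is measurable and repeatable, which matches the guideline below that improvements should be verifiable in a way that catches regressions.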

Some guidelines that we set within our own team to achieve this were as follows:

  1. The problem statement, like an elevator pitch, should be simple and easily understandable by all levels within the engineering teams – including senior management.
  2. The security researcher was free to define the scope and choose how the problem could be solved.
  3. The improvements made by engineering teams should be measurable in a repeatable way. This would allow for easier identification of any security regressions.
  4. Existing channels for reporting security backlog items to engineering teams must be utilized versus spinning up new processes.
  5. Though automation is generally viewed as a key to scalability for these types of solutions, the team also had flexibility to adopt any method deemed most appropriate. For example, a researcher could front-load code analysis and only provide precise security flaws uncovered to the application engineering team.  Similarly, a researcher could establish required “minimum bars” for application engineering teams helping to set new company security standards. The onus is then placed on the application engineering teams to achieve compliance against the new or updated standards.

For projects that required running tests repeatedly, we leveraged our Security Automation Framework. This helped automate tasks such as validation. For others, clear standards were established for application security teams. Once a defined confidence goal is reached within the security team about compliance against those standards, automated validation could be introduced.

Pivoting Around an Application versus a Problem

When applications undergo a penetration test, threat modeling, or a tool-based scan, teams must first address critical issues before resolving lower-priority issues. Such an effort probes an application from many directions, attempting to extract all known security issues.  In this case, the focus is on the application, and its security issues are not known when the engagement starts.  Once problems are found, the application team owns fixing them.

On the other hand, if you choose to tackle one of the broader security problems for the organization, you test against a range of applications, mitigate it as quickly as possible for those applications, and make a goal to eventually eradicate the issue entirely from the organization.  Today, teams are often forced into reactively resolving such big problems as critical issues – often due to broader industry security vulnerabilities that affect multiple companies all at once.  Heartbleed and other similar named vulnerabilities are good examples of this.  The Adobe security team attempts to resolve as many known issues as possible proactively in an attempt to help lessen the organizational disruption when big industry-wide issues come along. This approach is our recipe for having a positive security impact across the organization.

It is worth noting that security teams will move in and out of the above stages and the stages will tend to repeat themselves over time.  For example, a new acquisition or a new platform might require deeper dives to understand.  Similarly, new technology trends will require investment in research.  Going after specific, broad security problems complements the deep dives and helps improve the security posture for the entire company.

We have found it very useful to have members of the security team take ownership of these “really big” security trends we see and help improve results across the company around it. These efforts are ongoing and we will share more insights in future blog posts.

Mohit Kalra
Sr. Manager, Secure Software Engineering