DefendCon – All Systems Go

We are excited to host DefendCon from Adobe – the first security conference that combines the gender-inclusive nature of a traditional women-in-tech conference with cutting-edge, quality technical content presented by a diverse array of speakers.

With DefendCon, we are creating a welcoming environment where attendees will not only learn about security best practices, but also gain insight on hot topics in the industry like artificial intelligence, IoT security, incident response and machine learning.

We’re all familiar with the stats around the growth of jobs in information security and the fact that women make up less than 11% of the cybersecurity workforce*. Historically, women also leave the IT workforce at almost twice the rate of men. In an industry with an increasing demand for qualified candidates, we need to attract, train and retain high-performing individuals. We know that diverse teams lead to higher performance and better results, and we’re continuing to build on our initiatives in diversity, security best practices, and security education to help creatively solve these issues.

The first-ever DefendCon will take place this week on September 21-22 at the Adobe Seattle office. We hope to provide women and men in the security industry with a quality experience to connect, collaborate and learn. We currently have speakers and participants from across the tech sector including LinkedIn, Netflix, Apple, Microsoft, Salesforce and Google. From Adobe, Senior Security Researcher Cindy Spiess, Security Researcher Todd Baumeister, and Principal Scientist Peleus Uhley will be presenting.

With DefendCon, we’re helping the industry move faster than the status quo and addressing a serious need for more women in cybersecurity.  We look forward to building upon this inaugural effort in the months and years to come.

Check out our full list of speakers and sessions. You can also follow the latest around the event on Twitter @DefendCon.

 

Brad Arkin
Vice President and Chief Security Officer

* https://iamcybersafe.org/wp-content/uploads/2017/03/WomensReport.pdf

Announcing DefendCon from Adobe

The rate of growth for jobs in information security is projected at 37% through 2022 [1] – much faster than the average for other occupations. Did you know that there will be a 2 million job shortfall for cybersecurity professionals by 2019 [2] and women currently make up less than 11% of the cybersecurity workforce [3]?

Adobe believes that diverse teams lead to higher performance and better results and is continuing to build upon its initiatives in diversity, security best practices, and security education to help creatively solve these issues. We wanted to create a conference that combines the gender-inclusive nature of a traditional women-in-tech conference with cutting-edge technical content, and thus DefendCon was born. This conference aims to provide women and male allies in the security industry with a quality experience to connect, collaborate and learn. We currently have speakers and participants from across the tech sector including LinkedIn, Netflix, Microsoft, Google and more! Topics include machine learning and AI in security, IoT security, bug bounties, incident response, and container security, among others.

Check out our exciting full list of speakers and sessions: https://www.adobe.com/go/defendcon.

If you want to learn more about how you can help address the gender diversity issue in cybersecurity, learn from industry-leading talent, and meet the next generation of technical leaders, please email the DefendCon team today. You can also follow the latest around the event on Twitter @DefendCon.

Tracie Martin
Technical Program Manager – Security

 

[1] https://www.bls.gov/ooh/computer-and-information-technology/information-security-analysts.htm
[2] https://image-store.slidesharecdn.com/be4eaf1a-eea6-4b97-b36e-b62dfc8dcbae-original.jpeg
[3] https://iamcybersafe.org/wp-content/uploads/2017/03/WomensReport.pdf

Adobe @ DefCon 2017

A few weeks ago, we joined fellow members of the Adobe security team at DefCon 2017. Conference attendance has grown over the years as security has become mainstream. We were looking forward to the great lineup of briefings, villages, and capture the flag (CTF) contests – DefCon never disappoints.

Here are some of the briefings this year that we found interesting and valuable to our work here at Adobe.

“A New Era of SSRF – Exploiting URL Parser in Trending Programming Languages” by Orange Tsai

The best part of this presentation was that it was very hands-on and less theoretical – something we look forward to in a presentation at DefCon. The presentation discussed zero-day vulnerabilities in URL parsers and requesters for widely-used languages like Java, Python, Ruby, JavaScript, and more. It was really helpful since Adobe is a multilingual shop. The speakers also discussed mitigation strategies. Orange Tsai, the presenter, followed the talk with an interesting demo. He chained four different vulnerabilities together, including SSRF, CRLF injection, an unsafe marshal in memcache, and a Ruby gem, to achieve remote code execution (RCE) on GitHub Enterprise. The combined technique was called “Protocol Smuggling.” It earned him a bounty of $12,500 from GitHub.
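The talk focused on offense – exploiting disagreements between URL parsers – but on the defensive side, a common (and still imperfect) pattern is to resolve the hostname of a user-supplied URL and refuse to fetch anything that points at internal address space. The Python sketch below is our own minimal illustration, not code from the presentation, and the parser discrepancies or DNS rebinding described in the talk can still defeat a check like this:

    import ipaddress
    import socket
    from urllib.parse import urlparse

    import requests  # third-party HTTP client, assumed to be available


    def fetch_if_external(url):
        """Fetch a user-supplied URL only if its hostname resolves to a public
        address. A simplified SSRF guard for illustration only: URL-parser
        discrepancies and DNS rebinding can still defeat checks like this."""
        hostname = urlparse(url).hostname
        if not hostname:
            raise ValueError("URL has no hostname")

        # Resolve the hostname and reject private, loopback, or link-local targets.
        for _family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(hostname, None):
            address = ipaddress.ip_address(sockaddr[0])
            if address.is_private or address.is_loopback or address.is_link_local:
                raise ValueError(f"{hostname} resolves to internal address {address}")

        return requests.get(url, timeout=10)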

“First SHA-1 collision” presented by Elie Bursztein from Google

This was one of the presentations most looked forward to by attendees – there was a significant wait to even get in. The presentation was super helpful since the team demoed how an attacker could forge PDF documents to have the same hash yet different content. We really appreciated the effort that was put into the research by the anti-abuse team within Google. This work was based on cryptanalysis – considered to be 100,000 times more effective than a brute-force attack. For the tech community, these findings emphasize the need to reduce SHA-1 usage. Google has advocated the deprecation of SHA-1 for many years, particularly when it comes to signing TLS certificates. The team also briefly discussed safer hashing algorithms such as SHA-256 and bcrypt, and spent some time discussing the future of hash security.
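If you want to see the practical takeaway for yourself, a few lines of Python are enough to show that two files can share a SHA-1 digest while a stronger hash still distinguishes them (the colliding PDF pair released with the research is an obvious test input; the file names below are placeholders):

    import hashlib
    import sys


    def digests(path):
        """Return the (SHA-1, SHA-256) hex digests of a file."""
        with open(path, "rb") as handle:
            data = handle.read()
        return hashlib.sha1(data).hexdigest(), hashlib.sha256(data).hexdigest()

    # Usage: python check_collision.py first.pdf second.pdf
    sha1_a, sha256_a = digests(sys.argv[1])
    sha1_b, sha256_b = digests(sys.argv[2])
    print("SHA-1 digests match:  ", sha1_a == sha1_b)
    print("SHA-256 digests match:", sha256_a == sha256_b)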

“Friday the 13th: JSON Attacks!” by Alvaro Muñoz and Oleksandr Mirosh

The briefing kicked off with examples of deserialization attacks and an explanation of how 2016 came to be known as the year of the “Java Deserialization Apocalypse.” The talk focused on JSON libraries which allow arbitrary code execution upon deserialization of untrusted data. It was followed by a walkthrough of deserialization vulnerabilities in some of the most common Java and .NET libraries. The talk emphasized that the format used for serialization is irrelevant for deserialization attacks: it could be binary data, text such as XML or JSON, or even a custom format. The presenters noted that serializers cannot be trusted with untrusted data. The talk provided guidance on detecting whether a serializer could be attacked. The briefing ended with the speakers providing mitigation advice to help avoid configurations that could leave serialization libraries vulnerable. This briefing was particularly valuable as it helped us better understand JSON attacks, how to discover vulnerable deserialization library configurations, and how to mitigate known issues.

“CableTap: Wirelessly Tapping Your Home Network” by Chris Grayson, Logan Lamb, and Marc Newlin

At this briefing, the presenters discussed 26 critical vulnerabilities they discovered in some of the major ISP-provided network devices. They also showcased some cool attack chains enabling someone to take complete control over these devices and their network.

Hacking Village

One of the other major highlights of DefCon 25 was the Voting Machine Village. For the first time ever, US voting machines were brought into the Hacking Village. Many vulnerabilities were found in these machines over the course of DefCon, and it was reported that the machines were hacked in under 2 hours. The Recon Village also never fails to deliver the best of social engineering exploits; it reminds us of the importance of security training and education. Additionally, the demo labs were thought-provoking, and we found a lot of tools to potentially add to our toolkits. A couple of the cool ones included Android Tamer by Anant Srivastava, which focuses on Android security, and EAPHammer, a toolkit by Gabriel Ryan for targeted evil twin attacks on WPA2-Enterprise networks.

Overall, these industry events provide a great opportunity for our own security researchers to mingle with and learn from the broader security community. They help keep our knowledge and skills up-to-date, and they provide invaluable tools to help us better mitigate threats and continue to evolve our Adobe SPLC (Secure Product Lifecycle) process.

Lakshmi Sudheer & Narayan Gowraj
Security Researchers

Leveraging Security Headers for Better Web App Security

Modern browsers support quite a few HTTP headers that provide an additional layer in any defense-in-depth strategy. If present in an HTTP response, these headers enable compatible browsers to enforce certain security properties. In early 2016, we undertook an Adobe-wide project to implement security headers.

Implementing a security control in a medium to large organization, consisting of many teams and projects, presents its own unique challenges. For example: should we go after select headers or all of them, which headers do we select, how do we encourage adoption, and how do we verify? Here are a few interesting observations made in our quest to implement security headers:

Browser vs. Non-browser clients

Compatible browsers enforce security policies contained in security headers, while incompatible browsers or non-browser clients simply ignore them. However, additional considerations may be necessary that are unique to your setup. For example, Adobe has a range of clients: full browser-based, headless browser-based (e.g., Chromium Embedded Framework), and desktop applications. Implementing the security property required by an HTTP Strict Transport Security (HSTS) header (i.e., that all traffic is sent over TLS) for all such clients requires a combination of an enabling HSTS header and 302 redirects from HTTP to HTTPS. This is not as secure, because incompatible headless browser-based clients or desktop applications will ignore an HSTS header sent by the server, and if the landing page URL is provided over HTTP, those clients will continue to send requests over HTTP unless the 302 redirect approach is used. Updating all such clients to use HTTPS landing URLs is a thorny problem, since it requires updating clients already installed on customers’ machines. The key is to understand the unique aspects of your applications and customer base to determine how each header could be implemented.

Effort ROI

Adding some security headers requires less effort than others. In our assessment, we determined an increasing order of difficulty and required investment in implementation: X-XSS-Protection, X-Content-Type-Options, X-Frame-Options, HSTS, and Content Security Policy (CSP). For the first two, we have not found any reason not to use them and, so far, we have not seen any team needing to do extra work when turning these headers on. Enabling an X-Frame-Options header may require some additional considerations – for example, are there legitimate reasons for allowing framing at all, or from the same origin? The Content Security Policy (CSP) header is by far the most comprehensive, subsuming most other security headers, and depending on goals it may require the most effort to employ. For example, if cross-site scripting prevention is the goal, it might require refactoring code to move inline JavaScript into separate files.

We decided to adopt a phased implementation. For Phase 1: X-XSS-Protection, X-Content-Type-Options, X-Frame-Options. For Phase 2: HSTS, CSP (we haven’t yet considered the Public Key Pinning Extension for HTTP header). Dividing massive efforts into phases offers two main benefits:

  1. the ability to make initial phases lightweight to build momentum with product teams – later phases can focus on more complex tasks
  2. the opportunity to iron out quirks in processes and automation
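To make Phase 1 concrete, here is a minimal, hypothetical sketch in Python of a WSGI middleware that appends the three Phase 1 headers to any response that does not already set them. It is not taken from any Adobe codebase, and the X-Frame-Options value shown is an assumption that should reflect your own framing requirements:

    # Hypothetical WSGI middleware that appends the Phase 1 headers.
    PHASE_ONE_HEADERS = [
        ("X-XSS-Protection", "1; mode=block"),
        ("X-Content-Type-Options", "nosniff"),
        ("X-Frame-Options", "SAMEORIGIN"),  # assumption: same-origin framing is acceptable
    ]


    class SecurityHeadersMiddleware:
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            def start_response_with_headers(status, headers, exc_info=None):
                present = {name.lower() for name, _ in headers}
                for name, value in PHASE_ONE_HEADERS:
                    if name.lower() not in present:
                        headers.append((name, value))
                return start_response(status, headers, exc_info)

            return self.app(environ, start_response_with_headers)

The same headers can just as easily be set in server configuration instead, which is exactly the trade-off discussed in the next section.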

Application stack vs. Operations stack

Who should we ask to enable these headers? Should it be the developers writing application code or the operations engineers managing servers? There are pros and cons to each option (e.g., specifying many security headers in server configurations such as nginx, IIS, and Apache is programming-language agnostic and doesn’t require changes to the code). Introducing such headers could even be made part of hardened deployment templates (e.g., Chef recipes). However, some headers may require a deeper understanding of the application landscape, such as which sites, if any, should be allowed to frame it. We provided both options to teams; the list of resources below helped inform our decisions:

  • Server-based (nginx, IIS, Apache): https://blog.g3rt.nl/nginx-add_header-pitfall.html, https://scotthelme.co.uk/hardening-your-http-response-headers
  • Programming language-based defenses:
      • Ruby on Rails: https://github.com/twitter/secureheaders
      • JavaScript: https://github.com/helmetjs/helmet, https://github.com/seanmonstar/hood, https://github.com/nlf/blankie, https://github.com/rwjblue/ember-cli-content-security-policy
      • Java: https://spring.io/blog/2013/08/23/spring-security-3-2-0-rc1-highlights-security-headers
      • ASP.NET: https://github.com/NWebsec/NWebsec/wiki
      • Python: https://github.com/mozilla/django-csp, https://github.com/jsocol/commonware, https://github.com/sdelements/django-security
      • Go: https://github.com/kr/secureheader
      • Elixir: https://github.com/anotherhale/secure_headers

Effecting the change

We leveraged our existing Secure Product Lifecycle engagement process with product teams to effect this change. We use a security backlog as part of this process to capture pending security work for a team. Adding security header work to this backlog ensured that the implementation would be completed as part of our regular processes.

Checking Compliance

Having provided the necessary phased guidance, the last piece of the puzzle was to develop an automated way to verify correct implementation of security headers. Manually checking the status of these headers would not only be effort-intensive but also repetitive and ineffective. Using publicly available scanners (such as https://securityheaders.io) is not always an option due to accessibility restrictions on stage/dev environments or our specific phased implementation guidance. As noted here, Adobe has undertaken several company-wide security initiatives. A major one is the Security Automation Framework (SAF). SAF allows us to create assertions in various programming languages and run them periodically with reporting. First, we organically compiled a list of endpoints for Adobe web properties that were discovered through other initiatives. Then Phase 1 and Phase 2 header checks were implemented as SAF assertions that run weekly as scans across the sites on this list. These automated scans have been instrumental in getting a bird’s-eye view of adoption progress and give us the information needed to start a dialogue with product teams.
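SAF itself is internal to Adobe, so as a rough stand-in, the sketch below shows the general shape of such an assertion in Python using the requests library: fetch each endpoint and report any Phase 1 headers missing from the response. The endpoint list is a placeholder.

    import requests

    # Placeholder endpoints; in practice the list came from our inventory of web properties.
    ENDPOINTS = ["https://www.example.com"]

    PHASE_ONE = ["X-XSS-Protection", "X-Content-Type-Options", "X-Frame-Options"]


    def missing_headers(url, required=PHASE_ONE):
        """Return the required security headers absent from a URL's response."""
        response = requests.get(url, timeout=10)
        return [name for name in required if name not in response.headers]


    for endpoint in ENDPOINTS:
        absent = missing_headers(endpoint)
        print(endpoint, "PASS" if not absent else "FAIL (missing: " + ", ".join(absent) + ")")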

Security headers provide a useful layer in any defense-in-depth strategy, and most of them are relatively easy to implement. The size of an organization and the number of products can introduce challenges that are not necessarily specific to implementing security headers. Within any organization, it’s critical to plan security initiatives so that they form a malleable ecosystem that eases the implementation of future initiatives. It does indeed take a village to implement security controls in an organization as big as Adobe.

Prithvi Bisht
Senior Security Researcher

Help Protect Your Devices: Update Your Software

When you receive a notification from your computer that it’s time to update your software, do you immediately accept it, or do you delay the update because you’re in the middle of something? If you’re like 64 percent of American computer-owning adults, you recognize how critical software updates are and update your software immediately. Another 30 percent update their software depending on what the update is for. That’s 94 percent who recognize software updates and at least consider taking action when prompted.

We asked a nationally representative sample of ~2,000 computer-owning adults in the United States about their behaviors and knowledge when it comes to cybersecurity. Interestingly, attitudes toward updating software have changed for the better in the last five years. Consumers are more likely to update their software immediately, indicating that updates are becoming easier to install and that computer-owning adults are better informed on how and why updating software is so important when trying to protect their identity and devices. While a majority update the software on their computer promptly, 83 percent are equally or more diligent in updating their smartphones than their computers. No matter what type of device you own – computer, tablet or smartphone – it’s critical to keep all your software up to date, as soon as an update is available.

Here are some additional insights from the survey on current practices regarding software updates and also some tips and reminders on why you should be updating your software – no matter the device – regularly.

Keep Your Software Up to Date (It’s Critical)

Across the industry, we continue to see how attackers find holes and exploit software that is not up-to-date. In fact, attackers may continue to target vulnerabilities for months – or even years – after patches have been made available. Keeping your software up-to-date is a critical part of protecting your devices, online identity and information. The good news is that, according to our survey results, 78 percent of consumers recognize the importance of keeping software up-to-date. Among those who typically update their software, 68 percent indicate that both security and crash control are top reasons for updating.

No matter the reason, keeping your software up to date should become a part of your regular routine:

  • Select automatic updates. When possible, select automatic updates for your software – that way, your devices will update automatically without adding another item to your to-do list.
  • Select notification reminders. If you prefer to know exactly what updates are being installed, you can set notifications to remind you to update the software yourself. Our survey results show that 1 in 3 people update on the first notification; interestingly, adults of the Baby Boomer generation are most likely to update their software after one prompt, while tech-savvy Millennials are more likely to need 3 to 5 notifications to update software. For all those not updating on the first prompt, we suggest selecting automatic software updates when possible.

Legitimate Software Updates

While a majority of our survey respondents noted that they frequently update their software, a very small group indicated that they don’t update their software because they don’t trust that the update is legitimate. If you share this concern, here are a few tips and reminders to help ensure you are downloading legitimate software:

  • Set automatic software updates. To help ensure that you are downloading legitimate software, select automatic updates for your software when possible. One less thing to do on your end that keeps your computer in check!
  • Check for the software update directly on the company website. When updates or patches to software are available, companies typically have updates on their website. If you’re unsure about a notification, double check on the software company’s website.
  • Be wary of notifications via email. Some companies may send notifications of software updates via email. Be cautious with these, as attackers often use fake email messages that may contain viruses that appear to be software updates. If you’re unsure about the software updates you receive as an email, check the company’s website to download the latest patches. And don’t fall victim to phishing ploys! See our blog post on tips for recognizing phishing emails.

Staying One Step Ahead

The technology industry is constantly moving forward, and the task of updating software should continue to be made as simple as possible for users, especially since the majority of exploits appear to target software installations that are not up-to-date on the latest security updates. Adobe strongly recommends that users install security updates as soon as they are available. Better yet, select the option to allow automatic updates, which installs updates in the background without requiring further user action.

Dave Lenoe
Director, Adobe Secure Software Engineering Team (ASSET)

Survey Infographic (PDF download)

About the Survey (PDF download)

OWASP, IR, ML, and Internal Bug Bounties

A few weeks ago, I traveled to the OWASP Summit located just outside of London. The OWASP Summit is not a conference; it is a remote offsite event for OWASP leaders and the community to brainstorm about how to improve OWASP. There was a series of one-hour sessions on how to tackle different topics and what OWASP could offer to make the world better. This is a summary, from my personal perspective, of some of the more interesting sessions that I was involved in. These views are not in any way official OWASP views, nor are they intended to represent the views of everyone who participated in the respective working groups.

One session that I attended dealt with incident response (IR). Oftentimes, incident response guidelines are written for the response team: they cover how to do forensics, analyze logs, and perform other response tasks handled by the core IR team. However, there is an opportunity to create incident response guidelines that target the security champions and developers the IR team interacts with during an incident. In addition, OWASP can relate how this information ties back into its existing secure development best practices. As an example, one of the first questions from an IR team is how customer data flows through the application; this allows the IR team to quickly determine what might have been exposed. In theory, a threat model should contain this type of diagram and could be an immediate starting point for this discussion. In addition, after the IR event, it is good for the security champion to have a post-mortem review of the threat model to determine how the process could have better identified the exploited risk. Many of the secure development best practices recommended in the security development lifecycle support both proactive and reactive security efforts. Also, a security champion should know how to contact the IR team and regularly sync with them on the current response policies. These types of recommendations from OWASP could help companies ensure that the security champions on their development teams are prepared to assist the IR team during a potential incident. Many of the ideas from our discussion were captured in the outcomes page from the session: https://owaspsummit.org/Outcomes/Playbooks/Incident-Response-Playbook.html.

Another interesting discussion from the summit dealt with internal bug bounties. This is an area where Devesh Bhatt and I were able to provide input based on our experiences at Adobe; Devesh has participated in our internal bounty programs as well as several external bounty programs. Internal bug bounties have been a hot topic in the security community. On the one hand, many companies have employees who participate in public bounties, and it would be great to focus those skills on internal projects. Employees who participate in public bug bounties often include general developers who are outside of the security team and therefore aren’t directly tied into internal security testing efforts. On the other hand, you want to avoid creating conflicting incentives within the company. If this topic interests you, Adobe’s Pieter Ockers will be discussing the internal bug bounty program he created in detail at the O’Reilly Security conference in New York this October: https://conferences.oreilly.com/security/sec-ny/public/schedule/detail/62443.

Lastly, there was a session on machine learning. This has been a recent research area of mine, since it is the natural next step for applying the data collected from our security automation work. Adobe also applies machine learning to projects like Sensei. Even though the session was on Friday, there was a large turnout by the OWASP leaders. We discussed whether there were ways to share machine learning training data sets and methodologies for generating them using common tools. One of the observations was that many people are still very new to the topic of machine learning. Therefore, I decided to start by drafting a machine learning resources page for OWASP. The goal of the page isn’t to copy existing introductory content onto an OWASP page where it could quickly become dated, nor is it designed to drown the reader with a link to every machine learning reference ever created. Instead, it focuses on a small selection of references that are useful to get someone started with the basic concepts. The reader can then go find their own content that goes deeper into the areas that interest them. For instance, Coursera provides an 11-week Stanford course on machine learning, but that would overwhelm a person seeking just a high-level overview. The first draft of my straw man proposal for the OWASP page can be found here: https://www.owasp.org/index.php/OWASP_Machine_Learning_Resources. As OWASP creates more machine learning content, this page could eventually become a useful appendix. This page is only a proposal and is not an official OWASP project at this stage. Additional ideas from the workshop discussion can be found on the outcomes page: https://owaspsummit.org/Outcomes/machine-learning-and-security/machine-learning-and-security.html.

OWASP is a community-driven group where all are invited to participate. If these topics interest you, feel free to join an OWASP mailing list and start to provide more ideas. There were several other great sessions at the summit, and you can find a list of outcomes from each session here: https://owaspsummit.org/Outcomes/.

Peleus Uhley
Principal Scientist

Lessons Learned from Improving Transport Layer Security (TLS) at Adobe

Transport Layer Security (TLS) is the foundation of security on the internet. As our team evolved from a primarily consultative role to solving problems for the entire company, we chose TLS as one of the areas to improve. The goal of this blog post is to share the lessons we’ve learned from this project.

TLS primer

TLS is a commonly used protocol to secure communications between two entities. If a client is talking to a server over TLS, it expects the following:

  1. Confidentiality – The data between the client and the server is encrypted and a network eavesdropper should not be able to decipher the communication.
  2. Integrity – The data between the client and the server should not be modifiable by a network attacker.
  3. Authentication – In the most common case, the identity of the server is authenticated by the client during the establishment of the connection via certificates. You can also have 2-way authentication, but that is not commonly used.

Lessons learned

Here are the main lessons we learned:

Have a clearly defined scope

Instead of trying to boil the ocean, we decided to focus on around 100 domains belonging to our Creative Cloud, Document Cloud and Experience Cloud solutions. This helped us focus on these services first versus being drowned by the thousands of other Adobe domains.

Have clearly defined goals

TLS is a complicated protocol, and the definition of a “good” TLS configuration keeps changing over time. We wanted simple, easy-to-test, pass/fail criteria for all requirements on the endpoints in scope. We ended up choosing the following:

SSL Labs grade

SSL Labs does a great job of testing a TLS configuration and boiling it down to a grade. Grade ‘A’ was viewed as a pass and anything else was considered a fail. There might be some endpoints that had valid reasons to support certain ciphers that resulted in a lower grade. I will talk about that later in this post.

App Transport Security

Apple’s App Transport Security (ATS) sets a minimum bar for TLS configuration that all endpoints must meet if iOS apps are to connect to them. We reviewed these criteria and deemed all the requirements sensible. We decided to make it a requirement for all endpoints, regardless of whether an endpoint was being accessed from an iOS app or not. We found a few corner cases where a configuration would get SSL Labs grade A yet fail ATS (or vice-versa); we resolved those on a case-by-case basis.

HTTP Strict Transport Security

HSTS (HTTP Strict Transport Security) is an HTTP response header that informs compliant clients to always use HTTPS to connect to a website. It helps solve the problem of the initial request being made over plain HTTP when a user types in a site address without specifying the protocol, and it helps prevent the hijacking of connections. When a compliant client receives this header, it only uses HTTPS to make connections to the website for the max-age value set by the header. The max-age timer is reset every time the client receives the header. You can read the details about HSTS in RFC 6797.
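As a toy illustration of the client-side behavior described above (not production code, and deliberately ignoring the includeSubDomains and preload directives), the Python sketch below remembers an HSTS policy per host and rewrites plain-HTTP URLs to HTTPS until the max-age timer lapses:

    import time
    from urllib.parse import urlsplit, urlunsplit


    class ToyHstsCache:
        """Minimal model of how a compliant client honors Strict-Transport-Security."""

        def __init__(self):
            self.expiry = {}  # hostname -> unix timestamp at which the policy lapses

        def observe(self, hostname, header_value):
            # e.g. header_value = "max-age=31536000; includeSubDomains"
            for directive in header_value.split(";"):
                directive = directive.strip().lower()
                if directive.startswith("max-age="):
                    # The timer restarts every time the header is seen.
                    self.expiry[hostname] = time.time() + int(directive.split("=", 1)[1])

        def rewrite(self, url):
            parts = urlsplit(url)
            if parts.scheme == "http" and self.expiry.get(parts.hostname, 0) > time.time():
                parts = parts._replace(scheme="https")
            return urlunsplit(parts)


    cache = ToyHstsCache()
    cache.observe("www.example.com", "max-age=31536000; includeSubDomains")
    print(cache.rewrite("http://www.example.com/login"))  # -> https://www.example.com/login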

Complete automation of testing workflow

We wanted to have minimal human cost for these tests on an ongoing basis. This project allowed us to utilize our Security Automation Framework (SAF). Once the scans are set up and scheduled, they keep running daily and the results are passed on to us via email, Slack, etc. After the initial push to get all the endpoints to pass all the tests, it was very easy to catch any drift when we saw a failed test. Here is what these results look like in the SAF UI:

Devil is in the Detail

From a high level, it seems fairly straightforward to improve TLS configurations. However, it gets a little more complicated in the details. I wanted to talk a little bit about how we went about removing ciphers that were hampering the SSL Labs grade.

To understand the issues, you have to know a little bit about the TLS handshake. During the handshake, the client and the server decide which cipher to use for the connection. The client sends the list of ciphers it supports in the client “hello” message of the handshake. If server-side preference is enabled, the cipher listed highest in the server’s preference order that is also supported by the client is picked. In our case, the cipher that was causing the grade degradation was listed fairly high on the list. As a result, when we looked at the ciphers used for connections, this cipher was used in a significant percentage of the traffic. We didn’t want to just remove it because of the potential risk of dropping support for some customers without any notification. Therefore, we initially moved it to the bottom of the supported cipher list. This reduced the percentage of traffic using that cipher to a very small value. We were then able to identify that a partner integration was responsible for all the remaining traffic using this cipher. We reached out to that partner and notified them to make appropriate changes before disabling the cipher. If you found this interesting, you might want to consider working for us on these projects.
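If you are curious which cipher a particular client and server actually agree on, Python’s ssl module can report the outcome of the handshake. Note that this only reflects the cipher list offered by this one client, which is why a scanner such as SSL Labs, which probes many client profiles, is still needed; the hostname below is just an example.

    import socket
    import ssl


    def negotiated_cipher(host, port=443):
        """Complete a TLS handshake and report the cipher the server selected."""
        context = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                return tls.cipher()  # (cipher name, protocol version, secret bits)


    print(negotiated_cipher("www.example.com"))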

Future work

In the future, we want to expand the scope of this project. We also want to expand the requirements for services that have already met the criteria described in this post. One near-term goal is to get some of our domains added to the HSTS preload list. Another is to do more thorough monitoring of certificate transparency logs so we can better alert on new certificates issued for Adobe domains. We have also been experimenting with HTTP Public Key Pinning (HPKP). However, as with all new technologies, there are issues we must tackle to continue to ensure the best balance of security and experience for our customers.

Gurpartap Sandhu
Security Researcher

Adobe @ Security of Things World 2017

Dave Lenoe, Director of the Adobe Secure Software Engineering Team, will be speaking this week at Security of Things World 2017 in Berlin, Germany. Dave will be speaking about “Building in Seamless Security Updates – Lessons Learned for IoT Device Manufacturers” at 13:45 on Monday, June 12th. The session will discuss the importance of providing security updates quickly, making them easy to install for users, and best practices that can be built into IoT devices from the ground up. If you are attending Security of Things World this week in Berlin, we hope you are able to join Dave for this informative session.

Getting Secrets Out of Source Code

Secrets are valuable pieces of information targeted by attackers to gain access to your systems and data. Secrets can be encryption keys, passwords, private keys, AWS secrets, OAuth tokens, JWT tokens, Slack tokens, API secrets, and so on. Unfortunately, secrets are sometimes hardcoded or stored along with source code by developers. Even though the source code may be kept securely in source control management (SCM) tools, that is not a suitable place to store secrets. For instance, it is difficult to tightly restrict access to source code repositories because engineering teams collaborate to write and review code. Any secrets in source code will also be copied to clones or forks of your repository, making them hard to track and remove. If a secret is ever committed to code stored in SCM tools, you should consider it potentially compromised. There are other risks in storing secrets in source code: source code could be accidentally exposed on public networks due to a simple misconfiguration, or released software could be reverse engineered by attackers. For all of these reasons, you should make sure secrets are never stored in source code and SCM tools.

Security teams should take a holistic approach to tackling this problem. First and foremost, educate developers not to hardcode or store secrets in source code. Next, look for secrets while doing security code reviews. If you are using static analysis tools, consider writing custom rules to automate this process. You could also have automated tests that look for secrets and fail the code audit if any are found. Lastly, evaluate existing source code, enumerate secrets that are already hardcoded or stored alongside it, and migrate them to password management vaults.

However, finding all of the secrets potentially hiding in source code can be challenging depending on the size of the organization and the number of code repositories. Fortunately, there are a few tools available to help find secrets in source code and SCM tools. Gitrob is an open source tool that aids organizations in identifying sensitive information lingering in GitHub repositories. Gitrob iterates over all of an organization’s repositories to find files that might contain sensitive information using a range of known patterns. Gitrob can be an efficient way to quickly identify files that are known to contain sensitive information (e.g., private key files such as *.key).

Gitrob can, however, generate thousands of findings, which can lead to a number of false positives or false negatives. I recommend complementing Gitrob with other tools such as ‘truffleHog’ developed by Dylan Ayrey or ‘git-secrets’ from AWS Labs. These tools are able to do deeper searches of code and may help you cut down on some of the false reports.

Our team chose to complement Gitrob with custom Python scripts that look into file content. The scripts identify secrets based on regular expression patterns and entropy. The patterns were created based on the secrets found through Gitrob and an understanding of the structure of the secrets in our code. For example, to find an AWS access key ID and secret access key, I used the regular expressions suggested by Amazon in one of their blog posts:

Pattern for access key IDs: (?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9])
Pattern for secret access keys: (?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=])
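Our scripts themselves aren’t public, but a rough sketch of the approach looks like the following: combine the two patterns above with a Shannon-entropy filter so that low-entropy matches (e.g., repeated characters) are discarded. The entropy threshold is an illustrative guess that you would tune against your own repositories.

    import math
    import re
    import sys
    from collections import Counter

    ACCESS_KEY_ID = re.compile(r"(?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9])")
    SECRET_ACCESS_KEY = re.compile(r"(?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=])")


    def shannon_entropy(text):
        """Bits of entropy per character; random key material scores high."""
        counts = Counter(text)
        return -sum((n / len(text)) * math.log2(n / len(text)) for n in counts.values())


    def scan_file(path, threshold=3.0):  # threshold is an illustrative guess
        findings = []
        with open(path, errors="ignore") as handle:
            for lineno, line in enumerate(handle, 1):
                for pattern in (ACCESS_KEY_ID, SECRET_ACCESS_KEY):
                    for match in pattern.finditer(line):
                        if shannon_entropy(match.group(0)) >= threshold:
                            findings.append((lineno, match.group(0)))
        return findings


    # Usage: python find_secrets.py path/to/file [path/to/other/file ...]
    for path in sys.argv[1:]:
        for lineno, candidate in scan_file(path):
            print(f"{path}:{lineno}: possible secret {candidate[:4]}...")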

In order to scale, you can share these tools and guidance with product teams and have them run and triage the findings. You should also create clear guidelines for product teams on how to store secrets and move them securely to established secret management tools (e.g., password vaults) for your production environment. The basic principle is to never store passwords or secrets in clear text and to make sure they are encrypted at rest and in transit until they reach the production environment. Secrets that have been stored insecurely must be invalidated or rotated. Password management tools may also provide features such as audit logs, access control, and secret rotation, which can further help keep your production environment secure.

Given how valuable secrets are – and how much harm they can cause your organization if they were to unwittingly get out – security teams must proactively tackle this problem. No team wants to be the one responsible for leaking secrets through their source code.

Karthik Thotta Ganesh
Security Researcher

Thanks to All Who Have Downloaded Open Source CCF

As announced by our own Abhi Pandit at ISACA North America CACS this past Monday, May 1st, Adobe has made available an open source version of its Common Controls Framework (CCF). We have had significant interest in this unique project both before and after the event. We would like to thank all of those who have downloaded open source CCF to date. As always, we welcome your feedback as you make use of the framework in your own compliance efforts. You can contact us directly at opensourceccf@adobe.com. If you have not yet downloaded your copy, please visit our website to register and download today.

For our European customers and friends, Abhi will be speaking about open source CCF at the upcoming Euro CACS conference in Munich on Monday, May 29th, from 14:15 – 15:30. If you are planning to attend Euro CACS, we look forward to seeing you at the session. We will also have a booth in the expo area with more information about open source CCF and our other security and compliance initiatives across Adobe.

We look forward to seeing you there!

The Adobe Team at North America CACS 2017

Chris Parkerson
Sr. Campaign Manager