Posts in Category "Security"

Adobe @ CanSecWest 2017

It was another great year for the Adobe security team at CanSecWest 2017 in beautiful Vancouver. CanSecWest 2017 was an eclectic mix of federal employees, independent researchers, and industry representatives, brought together in one space to hear about the latest exploits and defense strategies. As a first-time attendee, I was impressed not just by the depth and breadth of the talks, but also by the incredibly inclusive community of security professionals that makes up the CanSec family. Adobe sponsors many conferences throughout the year, but the intimate feel of CanSecWest is unique.

As the industry shifts toward a more cloud-centric playbook, hot topics such as virtualization exploits became a highlight of the conference. Several presenters addressed the growing concern of virtualization security, including the Marvel team, who gave an excellent presentation demonstrating the Hearthstone UAF and OOB vulnerabilities used to exploit RPC calls in VMware. Additionally, the Qihoo 360 Gear team continued their theme from last year on QEMU exploitation, demonstrating attacks that ranged from leveraging trusted input from vulnerable third-party drivers to attacking shared libraries within QEMU itself.

IoT also continued to be a hot topic of conversation, with several talks describing both ends of the exploitation spectrum: the limited-scale but potentially catastrophic effect of attacking automobile safety systems, and the wide-scale DoS-style attacks of a multitude of insecure devices banding together to form zombie armies. Jun Li from the Unicorn team of Qihoo 360 gave an informative talk on exploiting the CAN bus in modern automobiles to compromise critical safety systems. On the other end of the attack spectrum, Yuhao Song of GeekPwn Lab & KEEN, together with Huiming Liu of GeekPwn Lab & Tencent's Xuanwu Lab, presented on how mobilizing millions of IoT devices can cause wide-scale devastation across core internet services.

There were many talks on how the strategy for vulnerability prevention is changing, from attempting to correct individual pieces of vulnerable code to implementing class-excluding mitigations that make 0-day exploitation time-consuming and costlier. In a rare moment of agreement between attackers and defenders, both David Weston from Microsoft and Peng Qiu and Shefang Zhong of Qihoo 360 touted the improvements in the Windows 10 architecture, such as Control Flow Guard, Code Integrity Guard, and Arbitrary Code Guard, which prevent entire classes of exploits. As with previous class-busting preventions like ASLR, wide-scale adoption of these new technologies will be a challenge as we continue to chase a multitude of third-party binaries and try to ensure continuing compatibility with legacy software. As David Weston reiterated in his talk, even these improvements are not a panacea for security, and there is still much work to be done by the industry to ensure a workable blend of security and usability.

Finally, my personal favorite talk was given by Chuanda Ding from Tencent, who presented a detailed analysis of the state of shared libraries in systems. In a world of modular software, we are quickly becoming joined to each other in an intricate web of shared libraries that may not be fully understood either by defenders or by consumers. Chuanda Ding cited Heartbleed as a benchmark example of what happens when a critical software bug is discovered in a widely used common library. As defenders and creators of software, this is often one of the most complex issues we deal with. As we move to a more interwoven software landscape and software offerings increase, it becomes harder to identify where shared third-party code exists, at what versions, and how to effectively patch it all when a vulnerability arises. I cannot overstate how much I loved his last chart on shared libraries; you should check it and the rest of the great talks out on the CanSecWest SlideShare. Also be sure to catch our next blog post on the results of the Pwn2Own contest.

Tracie Martin
Security Technical Program Manager

Critical Vulnerability Uncovered in JSON Encryption

Executive Summary

If you are using go-jose, node-jose, jose2go, Nimbus JOSE+JWT, or jose4j with ECDH-ES, please update to the latest version. These libraries were vulnerable to an Invalid Curve Attack against RFC 7516, aka JSON Web Encryption (JWE). The attack can allow an attacker to recover the secret key of a party using JWE with Key Agreement with Elliptic Curve Diffie-Hellman Ephemeral Static (ECDH-ES): a malicious sender could extract the receiver's private key.

Premise

In this blog post I assume that you are already knowledgeable about elliptic curves and their use in cryptography. If not, Nick Sullivan's A (Relatively Easy To Understand) Primer on Elliptic Curve Cryptography or Andrea Corbellini's series Elliptic Curve Cryptography: finite fields and discrete logarithms are great starting points. If you then want to climb further up the elliptic learning curve, including the related attacks, you might also want to visit https://safecurves.cr.yp.to/ . DJB and Tanja's talk at 31c3 also comes with an explanation of this very attack (see minute 43), and Juraj Somorovsky et al.'s research can come in handy for learners.

Note that this research was started and inspired by Quan Nguyen from Google and then refined by Antonio Sanso from Adobe.

Introduction

JSON Web Token (JWT) is a JSON-based open standard (RFC 7519) defined in the OAuth specification family used for creating access tokens. The JavaScript Object Signing and Encryption (JOSE) IETF expert group was then formed to formalize a set of signing and encryption methods for JWT, which led to the release of RFC 7515 aka JSON Web Signature (JWS) and RFC 7516 aka JSON Web Encryption (JWE). In this post we are going to focus on JWE.

A typical JWE is a dot-separated string that contains five parts:

  • The JWE Protected Header
  • The JWE Encrypted Key
  • The JWE Initialization Vector
  • The JWE Ciphertext
  • The JWE Authentication Tag

An example of a JWE taken from the specification would look like:

eyJhbGciOiJSU0EtT0FFUCIsImVuYyI6IkEyNTZHQ00ifQ.
OKOawDo13gRp2ojaHV7LFpZcgV7T6DVZKTyKOMTYUmKoTCVJRgckCL9kiMT03JGe
ipsEdY3mx_etLbbWSrFr05kLzcSr4qKAq7YN7e9jwQRb23nfa6c9d-StnImGyFDb
Sv04uVuxIp5Zms1gNxKKK2Da14B8S4rzVRltdYwam_lDp5XnZAYpQdb76FdIKLaV
mqgfwX7XWRxv2322i-vDxRfqNzo_tETKzpVLzfiwQyeyPGLBIO56YJ7eObdv0je8
1860ppamavo35UgoRdbYaBcoh9QcfylQr66oc6vFWXRcZ_ZT2LawVCWTIy3brGPi
6UklfCpIMfIjf7iGdXKHzg.
48V1_ALb6US04U3b.
5eym8TW_c8SuK0ltJ3rpYIzOeDQz7TALvtu6UG9oMo4vpzs9tX_EFShS8iB7j6ji
SdiwkIr3ajwQzaBtQD_A.
XFBoMYUZodetZdvTiFvSkQ

This JWE employs RSA-OAEP for key encryption and A256GCM for content encryption:
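The protected header is just base64url-encoded JSON, so it can be inspected with a few lines of Python (a quick sketch using only the standard library):

```python
import base64
import json

def b64url_decode(segment: str) -> bytes:
    """Decode a base64url segment, restoring the stripped '=' padding."""
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

# First segment of the example JWE above: the JWE Protected Header
protected = "eyJhbGciOiJSU0EtT0FFUCIsImVuYyI6IkEyNTZHQ00ifQ"
header = json.loads(b64url_decode(protected))
print(header)  # {'alg': 'RSA-OAEP', 'enc': 'A256GCM'}
```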

This is only one of the many possibilities JWE provides. A separate specification called RFC 7518 aka JSON Web Algorithms (JWA) lists the possible available algorithms that can be used. The one we are discussing today is the Key Agreement with Elliptic Curve Diffie-Hellman Ephemeral Static (ECDH-ES).  This algorithm allows deriving an ephemeral shared secret (this blog post from Neil Madden shows a concrete example on how to do ephemeral key agreement).

In this case the JWE Protected Header also lists the elliptic curve used for the key agreement:

Once the shared secret is calculated the key agreement result can be used in one of two ways:

  1. Directly as the Content Encryption Key (CEK) for the “enc” algorithm, in the Direct Key Agreement mode, or
  2. As a symmetric key used to wrap the CEK with the A128KW, A192KW, or A256KW algorithms, in the Key Agreement with Key Wrapping mode.

This is out of scope for this post, but as for the other algorithms, the JOSE Cookbook contains examples of using ECDH-ES in combination with AES-GCM or AES-CBC plus HMAC.

Observation

As highlighted by Quan during his talk at RWC 2017:

Decryption/Signature verification input is always under attacker’s control

As we will see throughout this post, this simple observation will be enough to recover the receiver’s private key. But first we need to dig a bit into elliptic curve bits and pieces.

Elliptic Curves

An elliptic curve is the set of solutions defined by an equation of the form:

y^2 = x^3 + ax + b

Equations of this type are called Weierstrass equations. An elliptic curve would look like:

y^2 = x^3 + 4x + 20

 

In order to apply the theory of elliptic curves to cryptography we need to look at elliptic curves whose points have coordinates in a finite field Fq. The same curve will then look like this over the finite field of size 191:

y^2 = x^3 + 4x + 20 over Finite Field of size 191

For JWE, the elliptic curves in scope are the ones defined in Suite B and (only recently) DJB‘s curve.

Of those, the most widely used curve so far is the famous P-256 (defined in Suite B).

Time to open Sage. Let’s define P-256:
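For readers without Sage at hand, here is a plain-Python sketch of the same definition, using the well-known P-256 domain parameters from FIPS 186-4:

```python
# NIST P-256 domain parameters (FIPS 186-4): a plain-Python sketch of
# what the Sage session defines.
p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
a = p - 3  # a = -3 mod p
b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b
# Base point G and the (prime) order n of the curve group
Gx = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296
Gy = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5
n = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551

# Sanity check: G satisfies y^2 = x^3 + ax + b (mod p)
assert (Gy * Gy - (Gx ** 3 + a * Gx + b)) % p == 0
print(n.bit_length())  # 256 -- the order is a ~2^256 prime
```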

The order of the curve is a really huge number, hence there isn’t much an attacker can do with this curve (if the software implements ECDH correctly) to guess the private key used in the agreement. This brings us to the next section.

The Attack

The attack described here is really the classical Invalid Curve Attack. The attack is simple and powerful, and takes advantage of the fact that the Weierstrass formulas for scalar multiplication do not take into consideration the coefficient b of the curve equation:

y^2 = x^3 + ax + b
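A toy sketch in Python makes the omission concrete: the addition and doubling formulas below use the coefficient a but never b (illustrative code on the small curve shown earlier, not a production implementation):

```python
# Affine double-and-add on y^2 = x^3 + ax + b over F_p. The coefficient b
# never appears in the formulas below -- exactly what the invalid curve
# attack exploits. Edge cases (y = 0 in doubling) are ignored in this sketch.
def add(P, Q, a, p):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # only 'a' is used
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P, a, p):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1:
            R = add(R, P, a, p)
        P = add(P, P, a, p)
        k >>= 1
    return R

# Toy curve from above: y^2 = x^3 + 4x + 20 over F_191; (1, 5) lies on it.
print(mul(2, (1, 5), 4, 191))  # (115, 68)
```

Because b is never consulted, the same code happily multiplies points that lie on a completely different (attacker-chosen) curve sharing the same a and p.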

The original P-256 curve equation is:

As we mentioned above, the order of this curve is really big, so now we need to find a more convenient curve for the attacker. Easy peasy with Sage:

As you can see from the image above, we just found a nicer curve (from the attacker’s point of view) that has an order with many small factors. We then found a point P on the curve that has a really small order (2447 in this example).

Now we can build malicious JWEs (see the Demo Time section below) and extract the value of the secret key modulo 2447 with complexity O(2447).

A crucial part for the attack to succeed is to have the victim repeat his own contribution to the resulting shared key. In other words, the victim’s private key should be the same for each key agreement. Conveniently enough, this is how Key Agreement with Elliptic Curve Diffie-Hellman Ephemeral Static (ECDH-ES) works. Indeed, ES stands for Ephemeral-Static, where Static is the contribution of the victim.

At this stage we can repeat these operations (find a new curve, craft malicious JWEs, recover the secret key modulo the small order) many, many times, collecting information about the secret key modulo many, many small orders.

And finally Chinese Remainder Theorem for the win.
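As a sketch, the recombination step looks like this in Python (the secret value and the small orders here are made up for illustration):

```python
from functools import reduce

def crt(residues, moduli):
    """Combine x = r_i (mod m_i) into x mod (m_1 * ... * m_k),
    assuming the moduli are pairwise coprime."""
    M = reduce(lambda x, y: x * y, moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # Mi^-1 mod m via modular inverse
    return x % M

# Suppose the oracle phase leaked a (hypothetical) secret modulo three
# small point orders, the first being the 2447 from the example above:
secret = 1234567
moduli = [2447, 2633, 2687]
residues = [secret % m for m in moduli]
print(crt(residues, moduli))  # 1234567 -- fully recovered, since the
                              # product of the moduli exceeds the secret
```

Once the product of the collected small orders exceeds the group order, the private key is recovered exactly.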

At the end of the day, the issue here is that the specification, and consequently all the libraries that I checked, missed validating that the received public key (contained in the JWE Protected Header) is on the curve. You can check the Vulnerable Libraries section below to see how the various libraries fixed the issue.
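The missing validation is conceptually tiny. A hedged sketch of the check, using the P-256 parameters, might look like:

```python
# Sketch of the fix the libraries adopted: reject any received ephemeral
# public key that does not satisfy the curve equation before running the
# key agreement.
def is_on_curve(x, y, a, b, p):
    return (y * y - (x ** 3 + a * x + b)) % p == 0

# P-256 parameters (FIPS 186-4)
p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
a = p - 3
b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b
Gx = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296
Gy = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5

print(is_on_curve(Gx, Gy, a, b, p))      # True: legitimate point, accepted
print(is_on_curve(Gx, Gy + 1, a, b, p))  # False: invalid point, rejected
```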

Again you can find details of the attack in the original paper.

Demo Time

You can view the demo at an external site.

 

Explanation

In order to show how the attack would work in practice, I set up a live demo on Heroku. At https://obscure-everglades-31759.herokuapp.com/ a Node.js server app is up and running that will act as the victim in this case. The assumption is this: in order to communicate with this web application you need to encrypt a token using Key Agreement with Elliptic Curve Diffie-Hellman Ephemeral Static (ECDH-ES). The server’s static public key needed for the key agreement is at https://obscure-everglades-31759.herokuapp.com/ecdh-es-public.json:

An application that wants to POST data to this server first needs to do a key agreement using the server’s public key above and then encrypt the payload with the derived shared key using the JWE format. Once the JWE is in place, it can be POSTed to https://obscure-everglades-31759.herokuapp.com/secret . The web app will respond with status 200 if all went well (namely, if it can decrypt the payload content) and with status 400 if for some reason the received token is missing or invalid. This acts as an oracle for any potential attacker, in the way shown in The Attack section above.

I set up an attacker application at https://afternoon-fortress-81941.herokuapp.com/.

You can visit it, click the ‘Recover Key‘ button, and observe how the attacker is able to recover the secret key from the server piece by piece. Note that this is only a demo application, so the recovered secret key is really small in order to reduce the waiting time. In practice the secret key would be significantly larger (hence recovery would take a bit longer).

In case you experience problems with the live demo, or simply want to see the code under the hood, you can find the demo code on GitHub:

Vulnerable Libraries

Here you can find a list of libraries that were vulnerable to this particular attack so far:

Some of the libraries were implemented in a programming language that already protects against this attack by checking that the result of the scalar multiplication is on the curve:

*Latest version of Node.js appears to be immune to this attack. It was still possible to be vulnerable when using browsers without web crypto support.

**Affected was the default Java SUN JCA provider that comes with Java prior to version 1.8.0_51. Later Java versions and the BouncyCastle JCA provider do not seem to be affected.

Improving the JWE standard

I reported this issue to the JOSE working group via a mail to the appropriate mailing list. We all seem to agree that an erratum listing the problem would at least be welcome. This post is a direct attempt to raise awareness about this specific problem.

Acknowledgement

The author would like to thank the maintainers of go-jose, node-jose, jose2go, Nimbus JOSE+JWT, and jose4j for their responsiveness in fixing the issue; Francesco Mari for helping out with the development of the demo application; and Tommaso Teofili and Simone Tripodi for troubleshooting. Finally, as mentioned above, I would like to thank Quan Nguyen from Google: this research would not have been possible without his initial work.

That’s all folks. For more crypto goodies, follow me on Twitter.

Antonio Sanso
Sr. Software Engineer – Digital Marketing

The Adobe Team Reigns Again at the Winja CTF Competition

Nishtha Behal from our corporate security team in Noida, India, was the winner of the recent Winja Capture the Flag (CTF) competition hosted at the NullCon Goa security conference. The Winja CTF this year comprised a set of simulated hacking challenges relating to web security. The winning prize was a scholarship from The SANS Institute for security training courses. The competition saw great participation, with almost 60 women coming together to challenge their knowledge of the security domain. The contest is organized as a series of rounds of increasing difficulty, beginning with teams of two or three women solving the challenges. The first round consisted of multiple-choice questions aimed at testing the participants’ knowledge of different areas of web application security. The second round consisted of six problems, each presenting a mini web application; the participant’s task was to identify the single most vulnerable snippet of code and name the vulnerability that could be exploited. The final challenges pitted the members of the winning teams against each other to determine the individual winner. We would like to congratulate Nishtha on this well-deserved win! This marks the second year in a row that participating Adobe team members have won this competition.

Adobe is a proud ongoing supporter of events and activities encouraging women to pursue careers in cybersecurity. We are also sponsoring the upcoming Women in Cybersecurity conference March 31st to April 1st in Tucson, Arizona. Members of our security team will be there at the conference. If you are attending, please take the time to meet and network with them. We also work with and sponsor many other important programs to encourage more women to enter the technology field, including Girls Who Code and the Executive Women’s Forum.

David Lenoe
Director, Product Security

Saying Goodbye to a Leader

We learned last Thursday of the passing of Howard Schmidt. I knew this day was coming due to his long illness, but the sense of loss upon hearing the news isn’t any less. While others have written more detailed accounts of his accomplishments, I would like to add some personal recollections.

I first met Howard at the RSA Conference during my first role at Adobe as director for Product Security. After that first hallway chat I had many more opportunities to spend time with Howard and learn from watching him work, particularly during our time together on the SAFECode board.

I always marveled at his energy, confidence, and consistency in front of a crowd — not only his ability to knock out one good speech, but the fact that I never saw him turn in a bad one. Despite his enthusiasm, Howard had a clear eye on the challenges, but never gave in to security nihilism.

Howard loved to tell stories, and he had an inexhaustible supply of them – from his time working as an undercover cop in Arizona when he once posed as a biker — to his time working at the White House (driving his Harley to work there, naturally), and beyond. But he also loved to hear stories from others. As a result, he had a massive network of friends he could tap into in order to get things done. As such, he was a real facilitator and leader, and always eager to help.

I will remember Howard as an incredibly accomplished man who could get along with just about anyone, and I will miss having him in my life. The outpouring of warm memories the last couple of days shows that, not surprisingly, I am far from alone.

Brad Arkin
Chief Security Officer

Building Better Security Takes a Village

Hacker Village was introduced at the Adobe Tech Summit in 2015. It was designed to provide hands-on, interactive learning about common security attacks that could target Adobe systems and services, and to illustrate why certain security vulnerabilities create risk for Adobe. More traditional training techniques can sometimes fall short when trying to communicate the impact that a significant vulnerability can have on an organization. Hacker Village provides real-world examples for our teams by showing how hackers might successfully attack a system, using the same techniques those attackers often use. In 2015, it consisted of six booths. Each booth focused on a specific type of common industry attack (cross-site scripting, SQL injection, etc.) or another security-related topic. The concept was to encourage our engineers to challenge themselves by “thinking like a hacker” and attempting to succeed with various known exploits in web applications, cryptography, and more.

The first iteration of Hacker Village was a success. Most of the participants completed multiple labs, with many visiting all six booths. The feedback was positive and the practical knowledge gained was helpful for all of our engineering teams across the country.

2017 brought the return of Hacker Village to Tech Summit. We wanted to build on the success of the first Hacker Village by bringing back some revised versions of the popular booths. 2017 saw new iterations of systems hacking using Metasploit, password cracking with John the Ripper, and more advanced web application vulnerability exploitation. This year we introduced some exciting new booths as well. Visitors were able to attempt to bypass firewalls to gain network access or attempt to spy on network traffic with a “man in the middle” attack. The hardware hacking booth challenged participants to take over a computer via USB port exploits like a USB “Rubber Ducky.” Elsewhere, participants could deploy their own honeypot with a RaspberryPi at the honeypot booth or attempt hacks of connected smart devices in the Internet of Things booth.

Since we did not have enough room in the first iteration for all that were interested from our engineering teams, we made sure to increase the available space to allow a broader group of engineers access to the Village. We increased the number of booths from six to eight and more than doubled the number of lab stations. With the increased number of stations, participation nearly doubled as well. The feedback was very positive once again with the only complaint being that everyone wanted a lot more time to try out new ideas.

We are currently considering a “travelling” Hacker Village as well – a more portable version that can be set up at additional Adobe office locations and at times in between our regular Tech Summits. The Hacker Village is just one of the many programs we have at Adobe for building a better security culture.

Taylor Lobb
Manager, Security and Privacy

The Adobe Security Team at RSA Conference 2017

It feels like we just got through the last “world’s largest security conference,” but here we are again. While the weather is not looking to be the best this year (although this is our rainy season, so we Bay Area folks do consider this “normal”), the Adobe security team would again like to welcome all of you descending on our home turf here in San Francisco next week, February 13 – 17, 2017.

This year, I will be emceeing the Executive Security Action Forum (ESAF) taking place on Monday, February 13th, to kick off the conference. I hope to see many of you there.

On Thursday, February 16th, from 9:15 – 10:00 a.m in Moscone South Room 301, our own Mike Mellor and Bryce Kunz will also be speaking in the “Cloud Security and Virtualization” track on the topic of “Orchestration Ownage: Exploiting Container-Centric Data Center Platforms.” This session will be a live coaching session illustrating how to hack the popular DC/OS container operating environment. We hope the information you learn from this live demo will give you the ammunition you need to take home and better protect your own container environments. This year you are able to pre-register for conference sessions. We expect this one to be popular given the live hacking demo, so, please try and grab a seat if you have not already.

As always, members of our security teams and myself will be attending the conference to network, learn about the latest trends in the security industry, and share our knowledge. Looking forward to seeing you.

Brad Arkin
Chief Security Officer

Security Automation Part III: The Adobe Security Automation Framework

In previous blogs [1],[2], we discussed alternatives for creating a large-scale automation framework if you don’t have the resources for a multi-month development project. This blog assumes that you are ready to jump in with both feet on designing your own internal automation solution.

Step 1: Research

While we particularly like our design, that doesn’t mean it is the best for your organization. Take time to look at other designs, such as Salesforce’s Chimera, Mozilla’s Minion, ThreadFix, Twitter’s SADB, Gauntlt, and other approaches. These projects tackle large-scale security implementations in distinctly different ways. It is important to understand the full range of approaches in order to select the one that is the best fit for your organization. Projects like Mozilla’s Minion and Gauntlt are open source if you are looking for code in addition to ideas. Some tools follow specific development processes, such as Gauntlt’s adherence to Behavior Driven Development, which need to be considered. The OWASP AppSec Pipeline project provides information on how to architect around solutions such as ThreadFix.

The tools often break down into aggregators and scanners. Aggregators focus on consolidating information from your diverse deployment of existing tools and then trying to make sense of the information; ThreadFix is an example of this type of project. Scanners look to deploy either existing analysis tools or custom analysis tools at massive scale; Chimera, Minion, and Adobe’s Security Automation Framework take this approach. This blog focuses on our scanner approach using our Security Automation Framework, but we are in the midst of designing an aggregator as well.

Step 2: Put together a strong team

The design of this solution was not the result of any one person’s genius. You need a group of strong people who can help point out when your idea is not as clever as you might think. This project involved several core people including Mohit Kalra, Kriti Aggarwal, Vijay Kumar Sahu, and Mayank Goyal. Even with a strong team, there was a re-architecting after the version 1 beta as our approach evolved. For the project to be a success in your organization, you will want many perspectives including management, product teams, and fellow researchers.

Step 3: Designing scanning automation

A well thought out design is critical to any tool’s long term success. This blog will provide a technical overview of Adobe’s implementation and our reasoning behind each decision.

The Adobe Security Automation Framework:

The Adobe Security Automation Framework (SAF) is designed around a few core principles that dictate the downstream implementation decisions. They are as follows:

  1. The “framework” is, in fact, a framework. It is designed to facilitate security automation but it does not try to be more than that. This means:
    1. SAF does not care what security assessment tool is being run. It just needs the tool to communicate progress and results via a specified API. This allows us to run any tool, based on any language, without adding hard coded support to SAF for each tool.
    2. SAF provides access to the results data but it is not the primary UI for results data. Each team will want their data viewed in a team specific manner. The SAF APIs allow teams to pull the data and render it as best fits their needs. This also allows the SAF development team to focus their time on the core engine.
  2. The “framework” is based on Docker. SAF is designed to be multi-cloud and Docker allows portability. The justifications for using a Docker based approach include:
    1. SAF can be run in cloud environments, in our internal corporate network, or run from our laptops for debugging.
    2. Development teams can instantiate their own mini-copies of SAF for testing.
    3. Using Docker allows us to put security assessment tools in separate containers where their dependencies won’t interfere with each other.
    4. Docker allows us to scale the number of instances of each security assessment tool dynamically with respect to their respective job size.
  3. SAF is modularized with each service (UI, scheduler, tool instance, etc.) in its own Docker container. This allows for the following advantages:
    1. The UI is separated from the front-end API allowing the web interface to be just another client of the front-end API. While people will initiate scans from the UI, SAF also allows for API driven scan requests.
    2. The scanning environments are independent. The security assessment tools may need to be run from various locations depending on their target. For instance, the scan may need to run within an internal network, external network, a specific geographic location, or just within a team’s dedicated test environments. With loose-coupling and a modular design, the security assessment tools can be run globally while still having a local main controller.
    3. Docker modularity allows for choosing the language and technology stack that is appropriate for that module’s function.
  4. By having each security test encapsulated within its own Docker container, anyone in the company can have their security assessment tools included in SAF by providing an appropriately configured Docker image. Volunteers can write a simple Python driver based on the SAF SDK that translates a security testing tool’s output into compatible messages for the SAF API and provide that to the SAF team as a Docker image. We do this because:
    1. The SAF team does not want to be the bottleneck for innovation. By allowing external contributions to SAF, the number of tools that it can run increases at a far faster rate. Given the wide array of technology stacks deployed at Adobe, this allows development teams to contribute tools that are best suited for their environments.
    2. In certain incident response scenarios, it may be necessary to widely deploy a quick script to analyze your exposure to a situation. With SAF, you could get a quick measurement by adding the script to a Docker image template and uploading it to the framework.
  5. The “security assertion”, or the test that you want to run, should test a specific vulnerability and provide a true/false result that can be used for accurate measurements of the environment. This is similar to the Behavior Driven Development approaches seen in tools like Gauntlt. SAF is not designed to run a generic, catch-all web application penetration tool that will return a slew of results for human triage. Instead it is designed for analysis of specific issues. This has the following advantages:
    1. If you run an individual test, then you can file an individual bug for tracking the issue.
    2. You create tests specifically around the security issues that are critical to your organization. The specific tests can then be accurately measured at scale.
    3. Development teams do not feel that their time is being wasted by being flooded with false positives.
    4. Since it is an individual test, the developer in charge of fixing that one issue can reproduce the results using the same tool as was deployed by the framework. They could also add the test to their existing automation testing suite.

Mayank Goyal on our team took the above principles and re-architected our version 1 implementation into a design depicted in the following architecture diagram:

The SAF UI

The SAF UI is simple by design, since it was not meant to be the analysis or reporting suite. The UI is a single-page web application which works with SAF’s APIs. The UI focuses on allowing researchers to configure their security assertions with the appropriate parameters. The core components of the UI are:

  • Defining the assertion: Assertions (tests) are saved within an internal GitHub instance, built via Jenkins, and posted to a Docker repository. SAF pulls them as required. The GitHub repository contains the DockerFile, the code for the test, and the code that acts as a bridge between the tool and the SAF APIs using the SAF SDK. Tests can be shared with other team members or kept private.
  • Defining the scan: It is possible that the same assertion(test) may be run with different configurations for different situations. The scan page is where you define the parameters for different runs.
  • Results: The results page provides access to the raw results. The results are broken down into pass, fail, or error for each host tested. It is accompanied by a simple blob of text that is associated with each result.
  • Scans can be set to run at specific intervals.

This screenshot demonstrates an assertion that identifies whether login forms at the given URL are available over HTTP. This assertion is stored in Git, is initiated by /src/startup.sh, and will use version 4 of the configuration parameters.

A Scan is then configured for the assertion which says when the test will be run and which input list of URLs to test. A scan can run more than one assertion for the purposes of batching results.
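As an illustration, the login-form assertion above might boil down to a check like the following. This is a hypothetical sketch, not the actual SAF SDK: the function names and the pass/fail convention are invented for the example, and the reporting call back to the SAF API is omitted.

```python
# Hypothetical sketch of a SAF-style assertion: fail when a login form
# (a form containing a password field) is served over plain HTTP.
from html.parser import HTMLParser

class LoginFormFinder(HTMLParser):
    """Detects a <form> that contains an <input type="password">."""
    def __init__(self):
        super().__init__()
        self.in_form = False
        self.found = False
    def handle_starttag(self, tag, attrs):
        if tag == "form":
            self.in_form = True
        elif tag == "input" and self.in_form:
            if dict(attrs).get("type") == "password":
                self.found = True
    def handle_endtag(self, tag):
        if tag == "form":
            self.in_form = False

def assert_no_login_over_http(url: str, html: str) -> bool:
    """True (pass) unless the page is served over HTTP and has a login form."""
    finder = LoginFormFinder()
    finder.feed(html)
    return not (url.startswith("http://") and finder.found)

page = '<form action="/login"><input type="text"><input type="password"></form>'
print(assert_no_login_over_http("http://example.com/", page))   # False: fail
print(assert_no_login_over_http("https://example.com/", page))  # True: pass
```

Because the result is a single true/false per host, each failure can be filed as one trackable bug, matching the assertion philosophy described above.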

The SAF API Server

This API server is responsible for preparing the information for the slave testing environments in the next stage. It receives the information for a scan from the UI, from an API client, or based on the saved schedule. The tool details and parameters are packaged and uploaded to the job queue in a slave environment for processing, and the server assembles all the meta-information needed by the task/configuration executor. The master controller also listens for responses from the queuing system and stores the results in the database. Everything downstream from the master controller is loosely coupled so that we can deploy work out to multiple locations and different geographies.

The Job Queueing system

The Queueing system is responsible for basic queuing and allows the scheduler to schedule tasks based on resource availability and defer them when needed. While cloud providers offer queuing systems, ours is based on RabbitMQ because we wanted to have deployment mobility.
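
The defer-when-busy policy can be modeled in a few lines. This in-memory sketch only illustrates the scheduling behavior; the real system delegates the actual queuing to RabbitMQ:

```python
# In-memory model of the defer-until-resources-free behavior described above.
# Illustrative only: the broker, persistence, and acknowledgements that
# RabbitMQ provides are all out of scope here.
import collections


class JobQueue:
    def __init__(self, max_slots):
        self.max_slots = max_slots      # concurrent capacity of the slave pool
        self.running = set()
        self.waiting = collections.deque()

    def submit(self, job):
        """Start the job if a slot is free, otherwise defer it."""
        if len(self.running) < self.max_slots:
            self.running.add(job)
            return "started"
        self.waiting.append(job)
        return "deferred"

    def finish(self, job):
        """Free a slot and promote the oldest deferred job, if any."""
        self.running.discard(job)
        if self.waiting and len(self.running) < self.max_slots:
            self.running.add(self.waiting.popleft())
```

Deployment mobility then comes from keeping this policy layer separate from the broker, so the broker can be swapped without touching the scheduling logic.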

The Task Scheduler

This is the brains of running the tool. It is responsible for monitoring all the Docker containers, scaling, resource scheduling, and killing rogue tasks. It has the API that receives the status and result messages from the Docker containers. That information is then relayed back to the API server.
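
The "killing rogue tasks" responsibility can be sketched as a keep-alive watchdog: each container reports heartbeats, and the scheduler flags any task whose last heartbeat has gone stale. The class and method names here are illustrative assumptions, not the actual scheduler code:

```python
# Hypothetical keep-alive watchdog. Containers send periodic heartbeats;
# the scheduler kills any task whose last heartbeat is older than a timeout.
import time


class Watchdog:
    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.last_seen = {}             # task id -> last heartbeat timestamp

    def heartbeat(self, task_id, now=None):
        """Record a keep-alive message from a running container."""
        self.last_seen[task_id] = time.time() if now is None else now

    def rogue_tasks(self, now=None):
        """Return the task ids whose keep-alive has gone stale."""
        now = time.time() if now is None else now
        return [t for t, seen in self.last_seen.items()
                if now - seen > self.timeout]
```

Tasks returned by `rogue_tasks` would be candidates for termination and for releasing their scheduled resources.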

The Docker images

The Docker images are based on a micro-service architecture approach. The default baseline is based on Alpine Linux to keep the image footprint small. SAF assertions can also be quite small. For instance, the test can be a small Python script which makes a request to the homepage of a web server and verifies whether an HSTS header was included in the response. This micro-service approach allows the environment to run multiple instances with minimum overhead. The assertion script communicates its status (e.g. keep-alives) and results (pass/fail/error) back to the task executor using the SAF SDK.
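
A minimal sketch of the HSTS assertion mentioned above might look like this. The header check is written as a pure function for easy testing; a real assertion would first fetch the homepage (e.g. with urllib.request) and pass in the response headers:

```python
# Minimal sketch of the HSTS check described above: given a response's
# headers, report pass/fail on the presence of Strict-Transport-Security.

def check_hsts(headers):
    """Return 'pass' if a Strict-Transport-Security header is present.

    headers: a mapping of response header names to values.
    Header names are compared case-insensitively, per RFC 7230.
    """
    names = {name.lower() for name in headers}
    return "pass" if "strict-transport-security" in names else "fail"
```

Inside the container, a script like this would report its verdict back to the task executor via the SAF SDK, as described above.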

Conclusion

While this overview still leaves a lot of the specific details unanswered, it should provide a basic description of our security automation framework approach at the architectural and philosophical level. For a security automation project to be a success at the detailed implementation level, it must be customized to the organization’s technology stack and operational flow. As we progress with our implementation, we will continue to post the lessons that we learn along the way.

Peleus Uhley
Principal Scientist

Centralized Security Governance Practices To Help Drive Better Compliance

Adobe CCF has helped us achieve several security compliance goals and meet regulatory requirements across various products and solutions. In addition, we have also achieved SOX 404 compliance across our financial functions to further support our governance efforts. In order to achieve this and to scale the security controls across business units at Adobe, we required the adaptable foundation that a solid centralized security governance framework can help provide. Over the past few years we have made significant investments in this area by centralizing our security governance processes, tools, and technologies. Part of this effort includes establishing a driver/subscriber model for scalable services. A “driver” is responsible for developing a security service which will address a CCF control requirement. This can then be consumed by business unit “subscribers” to help meet compliance requirements through integration with the central process. Examples of such useful processes are:

  • Centralized logging & alerting: Adobe makes use of SIEM solutions that let us investigate, troubleshoot, monitor, alert, and report on what’s happening in our technology infrastructure.
  • Centralized policy, security & standards initiative: An initiative to scale Adobe-wide Security Policies and Standards for compliance efforts, best practices and Adobe specific requirements. Policies and standards can now be easily found in one place and readily communicated to employees.
  • ASAP (Adobe Self-Assessment Program): In order to help ensure CCF controls are consistently applied, service teams are expected to certify the operating effectiveness of these controls on a quarterly basis by way of automated self-assessment questionnaires. The teams are also expected to monitor the landscape of risk and compliance functions for their organization. This program is driven through an enterprise GRC solution.
  • Availability Monitoring: The availability of key customer-facing services is monitored by a central NOC (Network Operations Center) team.

In addition to the above, Adobe has implemented governance best practices from ISO 27001 (Information Security Management System) at the corporate level to help support our security program. All of the above control and compliance processes have been designed and implemented in a way that strives to cause minimal impact to the product and engineering teams. We follow an integrated audit approach for security compliance initiatives so that the evidence to support audit testing has to be provided one time and we can take advantage of any overlaps that exist between external audit engagements. Centralized processes and increased automation also help to reduce friction between teams, improve overall communication and response, and help ensure Adobe remains adaptable to changes in compliance standards and regulations.

Prabhath Karanth
Sr. IT Risk Analyst

Boldly Leading the Possibilities in Cybersecurity, Risk, and Privacy

During the last week in October, five members of the Adobe Security team and I attended the Executive Women’s Forum (EWF) National Conference as first-time attendees.  Over 400 people were in attendance at the fourteenth annual conference.  It was the first time three separate tracks were offered, which focused on the primary topic of “Balancing Risk and Opportunity, by Transforming Cybersecurity, Risk, and Privacy beyond the Enterprise.”

The Executive Women’s Forum has emerged as a leading member organization using education, leadership development and trusted relationships to attract, develop and advance women in the Information Security, IT Risk Management, Privacy, Governance, Compliance and Risk Assurance industries.  Additionally, EWF membership offers virtual access to peers and thought leadership globally, networking opportunities both locally and at industry conferences, advancement education and opportunities via EWF’s leadership program, plus their peer and mentoring program.  EWF also provides learning interaction at their national conference, regional meetings and webinar series.

At this year’s national EWF conference, several of the presentations and sessions stood out, namely:

  • Several keynote speakers wowed the crowd with their personal stories of industry challenges, personal hardships and their rise through the ranks. Speakers of interest included:
    • Susan Keating (President and Chief Executive Officer at National Foundation for Credit Counseling). Keating recounted her personal story of managing through what was, in 2001, the largest banking fraud in US history and the lessons learned from the experience.  Her message and advice focused on being resilient, prioritizing prevention and recovery preparation activities, remembering that communication is imperative and that one needs to be tireless in connecting at all points (employees, customers, partners, etc.), refusing to tolerate bullying and intimidating behavior, and keeping a culture healthy by always remembering the human component – if you’re not staying connected to people, you could miss something.
    • Meg McCarthy (Executive Vice President of Operations and Technology at Aetna). Interviewed by Joyce Brocaglia, McCarthy spoke of her career journey to the executive suite, the challenges she faced along the way, and what it takes to thrive as a leader at the top.  Among her advice, three pieces stuck out: Talk the talk – get communication coaching, get into executive strategy meetings, and identify and study role models.  Saying yes – be careful declining – McCarthy always took opportunities offered to her.  The proof is in the pudding – build a track record, visualize your goals, and always look the part.
    • Valerie Plame (Former Operations Officer at US Central Intelligence Agency). Plame told her story as an undercover operations officer for the CIA, who served her country by gathering intel on weapons of mass destruction.  When her husband Joe Wilson spoke out about the falsities that were levied publicly to justify the Iraq War, the administration retaliated by revealing Plame’s position in the CIA, ruining her career and reputation, and exposing her to domestic and foreign enemies.  She encouraged all to hold people, government and organizations accountable for their words and actions.
    • Nina Burleigh (Author and National Correspondent at Newsweek Magazine). Burleigh explained how the issue of women’s equality is a challenge everyone wants to address and is approaching a tipping point.  She foresees 2017 being the year of women, and topics, especially in the US, about female political representation, family and maternal leave and women’s health care will be at the forefront.
  • Additionally, there were several breakout talks that bear mention:
    • The pre-conference workshop on Conversational Intelligence facilitated by Linda Dolceamore of EWF focused on the chemical reactions in our brains in response to different types of communication. The workshop taught us what to do in order to activate the prefrontal cortex for high-level thinking, as well as evaluate whether our conversations are transactional, positional, or transformational.  Proper application of this information should enable a person to build better relationships, which will then evoke higher levels of trust and collaboration.
    • A panel session where five C-level executives talked about what they see next and what keeps them up at night. Takeaways included:
      • Trust is the currency of the future.
      • The digital vortex is upon us and only smart digitization will see us through.
      • Stay true to yourself. Stay curious.  Ask why.
    • The presentation regarding EWF’s initiative for Voice Privacy. As products proliferate utilizing voice interaction, it is imperative we consider the security and privacy aspect of our voices and provide the industry with appropriate guidance for voice enabled technology.
    • Yolanda Smith’s presentation on The New Device Threat Landscape.  Client-side attacks generally start off the corporate network.  Smith demonstrated a karma attack using a Hak5 Pineapple Nano as the deviant access point (complete with a phony landing page) and the Social Engineering Toolkit to generate a payload for a reverse TCP shell.  To mitigate the threat of these sorts of attacks, remove probes from your devices and refrain from connecting devices to unknown networks.

EWF’s goal of extending the influence and strength of women’s voices in the industry aligns well with Adobe’s mission to establish Adobe as a leader within the industry for creating an environment which supports the growth and development of global women leaders.  Therefore, it’s exciting for Adobe to partner with the Executive Women’s Forum organization.  If EWF’s national conference is a taste of their yearly impact, it will be compelling to participate in the additional year-round initiatives, events and opportunities available through EWF’s membership. We look forward to connecting with colleagues and friends at more events going forward.

Security Automation Part II: Defining Requirements

Every security engineer wants to build a big security automation framework for the challenge of designing something complex. But building big projects brings its own set of challenges. Like any good coding project, you want to have a plan before setting out on the adventure.

In the last blog, we dealt with some of the high level business concerns that were necessary to consider in order to design a project that would produce the right results for the organization. In this blog we will look at the high level design considerations from the software architect’s perspective. In the next blog, we will look at the implementer’s concerns. For now, most architects are concerned with the following:

Maintainability

This is a concern for both the implementer and architect, but they often have different perspectives. If you are designing a tool that the organization is going to use as a foundation of its security program, then you need to design the tool such that the team can maintain it over time.

Maintainability through project scope

There are already automation and scalability projects that are deployed by the development team. These may include tools such as Jenkins, Git, Chef, or Maven. All of these frameworks are extensible. If all you want to do is run code with each build, then you might consider integrating into these existing frameworks rather than building your own automation. They will handle things such as logging, alerting, scheduling, and interacting with the target environment. Your team just has to write code to tell them what you want done with each build.

If you are attempting a larger project, do you have a roadmap of smaller deliverables to validate the design as you progress? The roadmap should prioritize the key elements of success for the project in order to get an early sense of whether you are heading in the right direction with your design. In addition, while it is important to define everything that your project will do, it is also important to define all the things that your tool will not do. Think ahead to all of the potential tangential use cases that your framework could be asked to perform by management and customers. By establishing what is out of scope for your project, you can set proper expectations earlier in the process, and those restrictions will become guardrails to keep you on track when requests for tangential features come in.

Maintainability through function delegation

Can you leverage third-party services for operational issues?  Can you use the cloud so that baseline network and machine uptime is maintained by someone else? Can you leverage tools such as Splunk so that log management is handled by someone else? What third-party libraries already exist so that you are only inventing the wheels that need to be specific to your organization? For instance, tools like RabbitMQ are sufficient to handle most queueing needs.  The more of the “busy work” that can be delegated to third-party services or code, the more time that the internal developers can spend on perfecting the framework’s core mission.

Deployment

It is important to know where your large scale security framework may be deployed. Do you need to scan staging environments that are located on an internal network in order to verify security features before shipping? Do you need to scan production systems on an external network to verify proper deployment? Do you need to scan the production instances from outside the corporate network because internal security controls would interfere with the scan? Do you want to have multiple scanning nodes in both the internal and external network? Should you decouple the job runner from the scanning nodes so that the job runner can be on the internal network even if the scanning node is external?  Do you want to allow teams to be able to deploy their own instances so that they can run tests themselves? For instance, it may be faster if an India based team can conduct the scan locally than to run the scan from US based hosts. In addition, geographical load balancers will direct traffic to the nearest hosts which may cause scanning blind spots. Do you care if the scanners get deployed to multiple  geographic locations so long as they all report back to the same database?

Tool selection

It is important to spend time thinking about the tools that you will want your large security automation framework to run because security testing tools change. You do not want your massive project to die just because the tool it was initially built to execute falls out of fashion and is no longer maintained. If there is a loose coupling between the scanning tool and the framework that runs it, then you will be able to run alternative tools once the ROI on the initial scanning tool diminishes. If you are not doing a large scale framework and are instead just modifying existing automation frameworks, the same principles will apply even if they are at a smaller scale.

Tool dependencies

While the robustness of test results is an important criterion for tool selection, complex tools often have complex dependencies. Some tools only require the targeted URL and some tools need complex configuration files.  Do you just need to run a few tools or do you want to spend the time to make your framework security tool agnostic? Can you use a Docker image for each tool in order to avoid dependency collisions between security assessment tools? When the testing tool conducts the attack on the remote host, does the attack presume that code injected into the remote host’s environment can send a message back to the testing tool?  If you are building a scalable scanning system with dynamically allocated, short-lived hosts that live behind a NAT server, then it may be tricky for the remote attack code to send a message back to the original security assessment tool.

Inputs and outputs

Do the tools require a complex, custom configuration file per target or do you just need to provide the hostname? If you want to scale across a large number of sites, tools that require complex, per-site configuration files may slow the speed at which you can scale and require more maintenance over time. Does the tool provide a single clear response that is easy to record or does it provide detailed, nuanced responses that require intelligent parsing? Complex results with many different findings may make it more difficult to add alerting around specific issues to the tool. They could also make metrics more challenging depending on what and how you measure.
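
As an illustration of collapsing detailed, nuanced tool output into a single clear response, a framework might normalize findings by severity. The severity names and failure threshold below are assumptions for the sketch, not taken from any particular tool:

```python
# Hypothetical normalization layer: collapse a tool's detailed findings
# into one clear pass/fail verdict that is easy to record and alert on.

SEVERITY_RANK = {"info": 0, "low": 1, "medium": 2, "high": 3}


def summarize(findings, fail_at="medium"):
    """Collapse a list of findings into a single pass/fail verdict.

    findings: list of dicts like {"severity": "high", "title": "..."}.
    Anything at or above the fail_at severity fails the scan; unknown
    severities are treated as informational.
    """
    threshold = SEVERITY_RANK[fail_at]
    worst = max((SEVERITY_RANK.get(f.get("severity", "info"), 0)
                 for f in findings), default=0)
    return "fail" if worst >= threshold else "pass"
```

Keeping the raw findings alongside the collapsed verdict preserves the nuance for triage while making alerting and metrics straightforward.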

Tool scalability

How many instances of the security assessment tool can be run on a single host? For instance, tools that listen on ports limit the number of instances per server, in which case you may need Docker or a similar auto-scaling solution. Complex tools take longer to run, which may cause issues with detecting time outs. How will the tool handle re-tests for identified issues? Does the tool have enough granularity that the dev team can test their proposed patch against the specific issue? Or does the entire test suite need to be re-run every time the developer wants to verify their patch?

Focus and roll out

If you are tackling a large project, it is important to understand what the minimum viable product is. What is the one thing that makes this tool different than just buying the enterprise version of the equivalent commercial tool? Could the entire project be replaced with a few Python scripts and crontab? If you can’t articulate what extra value your approach will bring over the alternative commercial or crontab approach, then the project will not succeed. The people who would leverage the platform may get impatient waiting for your development to be done. They could instead opt for a quicker solution, such as buying a service, so that they can move on to the next problem.  As you design your project, always ask yourself, “Why not cron?” This will help you focus on the key elements of the project that will bring unique value to the organization. Your roadmap should focus on delivering those first.

Team adoption

Just because you are building a tool to empower the security team, doesn’t mean that your software won’t have other customers. This tool will need to interact with the development teams’ environments. This security tool will produce outputs that will eventually need to be processed by the development team. The development teams should not be an afterthought in your design. You will be holding them accountable for the results and they need methods for understanding the context of what your team has found and being able to independently retest.

For instance, one argument for integrating into something like Jenkins or Git is that you are using a tool the development team already understands. When you try to explain how your project will affect their environment, using a tool that they know means that the discussion will be in language that they understand. They will still have concerns that your code might have negative impacts on their environment. However, they may have more faith in the project if they can mentally quantify the risk based on known systems. When you create standalone frameworks, then it is harder for them to understand the scale of the risk because it is completely foreign to them.

At Adobe, we have been able to work directly with the development teams for building security automation. In a previous blog, an Adobe developer described the tools that he built as part of his pursuit of an internal black belt security training certification. There are several advantages to having the security champions on development teams build the development tools rather than the core security team. One is that full-time developers are often better coders than the security teams and the developers better understand the framework integration. Also, in the event of an issue with the tool, the development team has the knowledge to take emergency action. Oftentimes, a security team just needs the tool to meet specific requirements, and the implementation and operational management of the tool can be handled by the team responsible for the environment. This can make the development team more at ease with having the tool in their environment and it frees up the core security team to focus on larger issues.

Conclusion

While jumping right into the challenges of the implementation is always tempting, thinking through the complete data flow for the proposed tools can help save you a lot of rewriting. It is also important that you avoid trying to boil the ocean by scoping more than your team can manage. Most importantly, always keep focus on the unique value of your approach and the customers that you need to buy into the tool once it is launched. The next blog will focus on an implementer’s concerns around platform selection, queuing, scheduling, and scaling by looking at example implementations.