The Adobe Team Reigns Again at the Winja CTF Competition

Nishtha Behal from our corporate security team in Noida, India, was the winner of the recent Winja Capture the Flag (CTF) competition hosted at the NullCon Goa security conference. This year's Winja CTF comprised a set of simulated hacking challenges relating to "Web Security". The winning prize was a scholarship from The SANS Institute for security training courses. The competition saw great participation, with almost 60 women coming together to challenge their knowledge of the security domain. The contest is organized as a series of rounds of increasing difficulty, beginning with teams of two or three women solving the challenges. The first round consisted of multiple-choice questions testing participants' knowledge of different areas of web application security. The second round consisted of six problems, each presenting a mini web application in which participants had to identify the single most vulnerable snippet of code and name the vulnerability that could be exploited. The final challenges pitted the members of the winning teams against each other to determine the individual winner. We would like to congratulate Nishtha on this well-deserved win! This marks the second year in a row that participating Adobe team members have won this competition.

Adobe is a proud, ongoing supporter of events and activities encouraging women to pursue careers in cybersecurity. We are also sponsoring the upcoming Women in Cybersecurity conference, March 31st to April 1st in Tucson, Arizona. Members of our security team will be at the conference; if you are attending, please take the time to meet and network with them. We also work with and sponsor many other important programs that encourage more women to enter the technology field, including Girls Who Code and the Executive Women's Forum.

David Lenoe
Director, Product Security

Saying Goodbye to a Leader

We learned last Thursday of the passing of Howard Schmidt. I knew this day was coming due to his long illness, but the sense of loss upon hearing the news isn’t any less. While others have written more detailed accounts of his accomplishments, I would like to add some personal recollections.

I first met Howard at the RSA Conference during my first role at Adobe as director for Product Security. After that first hallway chat I had many more opportunities to spend time with Howard and learn from watching him work, particularly during our time together on the SAFECode board.

I always marveled at his energy, confidence, and consistency in front of a crowd — not only his ability to knock out one good speech, but the fact that I never saw him turn in a bad one. Despite his enthusiasm, Howard had a clear eye on the challenges, but never gave in to security nihilism.

Howard loved to tell stories, and he had an inexhaustible supply of them, from his time working as an undercover cop in Arizona, when he once posed as a biker, to his time working at the White House (driving his Harley to work there, naturally), and beyond. But he also loved to hear stories from others, and as a result he had a massive network of friends he could tap to get things done. He was a real facilitator and leader, always eager to help.

I will remember Howard as an incredibly accomplished man who could get along with just about anyone, and I will miss having him in my life. The outpouring of warm memories the last couple of days shows that, not surprisingly, I am far from alone.

Brad Arkin
Chief Security Officer

Building Better Security Takes a Village

Hacker Village was introduced at Adobe Tech Summit in 2015. It was designed to provide hands-on, interactive learning about common security attacks that could target Adobe systems and services, and to illustrate why certain security vulnerabilities create risk for Adobe. More traditional training techniques can sometimes fall short when trying to communicate the impact that a significant vulnerability can have on an organization. Hacker Village provides real-world examples for our teams by showing how hackers might successfully attack a system, using the same techniques real attackers often employ. In 2015, it consisted of six booths, each focused on a specific type of common industry attack (cross-site scripting, SQL injection, etc.) or other security-related topic. The concept was to encourage our engineers to challenge themselves by "thinking like a hacker" and attempting various known exploits against web applications, cryptography, and more.

The first iteration of Hacker Village was a success. Most of the participants completed multiple labs, with many visiting all six booths. The feedback was positive and the practical knowledge gained was helpful for all of our engineering teams across the country.

2017 brought the return of Hacker Village to Tech Summit. We wanted to build on the success of the first Hacker Village by bringing back revised versions of the most popular booths. 2017 saw new iterations of systems hacking using Metasploit, password cracking with John the Ripper, and more advanced web application vulnerability exploitation. We introduced some exciting new booths as well. Visitors could attempt to bypass firewalls to gain network access or spy on network traffic with a "man in the middle" attack. The hardware hacking booth challenged participants to take over a computer via USB port exploits like a USB "Rubber Ducky." Elsewhere, participants could deploy their own honeypot with a Raspberry Pi at the honeypot booth or attempt hacks of connected smart devices in the Internet of Things booth.

Since we did not have enough room in the first iteration for all who were interested from our engineering teams, we increased the available space to give a broader group of engineers access to the Village. We increased the number of booths from six to eight and more than doubled the number of lab stations. With the increased number of stations, participation nearly doubled as well. The feedback was once again very positive, with the only complaint being that everyone wanted more time to try out new ideas.

We are currently considering a “travelling” Hacker Village as well – a more portable version that can be set up at additional Adobe office locations and at times in between our regular Tech Summits. The Hacker Village is just one of the many programs we have at Adobe for building a better security culture.

Taylor Lobb
Manager, Security and Privacy

The Adobe Security Team at RSA Conference 2017

It feels like we just got through the last “world’s largest security conference,” but here we are again. While the weather is not looking to be the best this year (although this is our rainy season, so we Bay Area folks do consider this “normal”), the Adobe security team would again like to welcome all of you descending on our home turf here in San Francisco next week, February 13 – 17, 2017.

This year, I will be emceeing the Executive Security Action Forum (ESAF) taking place on Monday, February 13th, to kick off the conference. I hope to see many of you there.

On Thursday, February 16th, from 9:15 – 10:00 a.m. in Moscone South Room 301, our own Mike Mellor and Bryce Kunz will also be speaking in the "Cloud Security and Virtualization" track on the topic of "Orchestration Ownage: Exploiting Container-Centric Data Center Platforms." This will be a live coaching session illustrating how to hack the popular DC/OS container operating environment. We hope the information you learn from this live demo will give you the ammunition you need to take home and better protect your own container environments. This year you are able to pre-register for conference sessions; we expect this one to be popular given the live hacking demo, so please try to grab a seat if you have not already.

As always, members of our security teams and I will be attending the conference to network, learn about the latest trends in the security industry, and share our knowledge. Looking forward to seeing you.

Brad Arkin
Chief Security Officer

Security Automation Part III: The Adobe Security Automation Framework

In previous blogs [1],[2], we discussed alternatives for creating a large-scale automation framework if you don’t have the resources for a multi-month development project. This blog assumes that you are ready to jump in with both feet on designing your own internal automation solution.

Step 1: Research

While we particularly like our design, that doesn't mean it is the best for your organization. Take time to look at other designs, such as Salesforce's Chimera, Mozilla's Minion, ThreadFix, Twitter's SADB, Gauntlt, and other approaches. These projects tackle large-scale security automation in distinctly different ways, and it is important to understand the full range of approaches in order to select the one that best fits your organization. Projects like Mozilla's Minion and Gauntlt are open source if you are looking for code in addition to ideas. Some tools follow specific development processes, such as Gauntlt's adherence to Behavior Driven Development, which need to be considered. The OWASP AppSec Pipeline project provides information on how to architect around solutions such as ThreadFix.

The tools generally break down into aggregators and scanners. Aggregators focus on consolidating information from your existing, diverse tool deployments and then making sense of that information; ThreadFix is an example of this type of project. Scanners deploy either existing or custom analysis tools at massive scale; Chimera, Minion, and Adobe's Security Automation Framework take this approach. This blog focuses on our scanner approach using our Security Automation Framework, but we are also in the midst of designing an aggregator.

Step 2: Put together a strong team

The design of this solution was not the result of any one person’s genius. You need a group of strong people who can help point out when your idea is not as clever as you might think. This project involved several core people including Mohit Kalra, Kriti Aggarwal, Vijay Kumar Sahu, and Mayank Goyal. Even with a strong team, there was a re-architecting after the version 1 beta as our approach evolved. For the project to be a success in your organization, you will want many perspectives including management, product teams, and fellow researchers.

Step 3: Designing scanning automation

A well thought out design is critical to any tool’s long term success. This blog will provide a technical overview of Adobe’s implementation and our reasoning behind each decision.

The Adobe Security Automation Framework:

The Adobe Security Automation Framework (SAF) is designed around a few core principles that dictate the downstream implementation decisions. They are as follows:

  1. The “framework” is, in fact, a framework. It is designed to facilitate security automation but it does not try to be more than that. This means:
    1. SAF does not care what security assessment tool is being run. It just needs the tool to communicate progress and results via a specified API. This allows us to run any tool, based on any language, without adding hard coded support to SAF for each tool.
    2. SAF provides access to the results data but it is not the primary UI for results data. Each team will want their data viewed in a team specific manner. The SAF APIs allow teams to pull the data and render it as best fits their needs. This also allows the SAF development team to focus their time on the core engine.
  2. The “framework” is based on Docker. SAF is designed to be multi-cloud and Docker allows portability. The justifications for using a Docker based approach include:
    1. SAF can be run in cloud environments, in our internal corporate network, or run from our laptops for debugging.
    2. Development teams can instantiate their own mini-copies of SAF for testing.
    3. Using Docker allows us to put security assessment tools in separate containers where their dependencies won’t interfere with each other.
    4. Docker allows us to scale the number of instances of each security assessment tool dynamically with respect to their respective job size.
  3. SAF is modularized with each service (UI, scheduler, tool instance, etc.) in its own Docker container. This allows for the following advantages:
    1. The UI is separated from the front-end API allowing the web interface to be just another client of the front-end API. While people will initiate scans from the UI, SAF also allows for API driven scan requests.
    2. The scanning environments are independent. The security assessment tools may need to be run from various locations depending on their target. For instance, the scan may need to run within an internal network, external network, a specific geographic location, or just within a team’s dedicated test environments. With loose-coupling and a modular design, the security assessment tools can be run globally while still having a local main controller.
    3. Docker modularity allows for choosing the language and technology stack that is appropriate for that module’s function.
  4. By having each security test encapsulated within its own Docker container, anyone in the company can have their security assessment tools included in SAF by providing an appropriately configured Docker image. Volunteers can write a simple Python driver based on the SAF SDK that translates a security testing tool's output into compatible messages for the SAF API and provide it to the SAF team as a Docker image (a minimal sketch of such a driver appears after this list). We do this because:
    1. The SAF team does not want to be the bottleneck for innovation. By allowing external contributions to SAF, the number of tools that it can run increases at a far faster rate. Given the wide array of technology stacks deployed at Adobe, this allows development teams to contribute tools that are best suited for their environments.
    2. In certain incident response scenarios, it may be necessary to widely deploy a quick script to analyze your exposure to a situation. With SAF, you could get a quick measurement by adding the script to a Docker image template and uploading it to the framework.
  5. The "security assertion", or the test that you want to run, should test a specific vulnerability and provide a true/false result that can be used for accurate measurements of the environment. This is similar to the Behavior Driven Development approach seen in tools like Gauntlt. SAF is not designed to run a generic, catch-all web application penetration tool that returns a slew of results for human triage. Instead, it is designed for analysis of specific issues. This has the following advantages:
    1. If you run an individual test, then you can file an individual bug for tracking the issue.
    2. You create tests specifically around the security issues that are critical to your organization. The specific tests can then be accurately measured at scale.
    3. Development teams do not feel that their time is being wasted by being flooded with false positives.
    4. Since it is an individual test, the developer in charge of fixing that one issue can reproduce the results using the same tool as was deployed by the framework. They could also add the test to their existing automation testing suite.
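
To make the driver idea in point 4 concrete, here is a minimal sketch of what such a wrapper might look like. It assumes a hypothetical SAF results endpoint, hypothetical environment variables (SAF_API_URL, SAF_JOB_ID, SAF_TARGET), and a placeholder tool name; the real SAF SDK handles this plumbing, so the names here are illustrative only.

```python
# Minimal sketch of an assertion driver; endpoint, environment variables,
# and the wrapped tool name are hypothetical, not the real SAF SDK API.
import json
import os
import subprocess
import urllib.request

SAF_API = os.environ.get("SAF_API_URL", "http://saf.example.internal/api")  # hypothetical
JOB_ID = os.environ.get("SAF_JOB_ID", "local-test")                          # hypothetical
TARGET = os.environ.get("SAF_TARGET", "https://example.com")                 # hypothetical

def report(status, detail=""):
    """Post a pass/fail/error result for this job back to the framework."""
    payload = json.dumps({"job": JOB_ID, "target": TARGET,
                          "status": status, "detail": detail}).encode()
    req = urllib.request.Request(f"{SAF_API}/results", data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=10)

def main():
    # Run the wrapped security tool; assume it exits non-zero on a finding.
    proc = subprocess.run(["./my_security_tool", TARGET],
                          capture_output=True, text=True)
    if proc.returncode == 0:
        report("pass")
    else:
        report("fail", detail=proc.stdout[-2000:])  # trim output for the API

if __name__ == "__main__":
    main()
```

Because the driver only needs to run the tool and translate its output into these messages, the security assessment tool itself can be written in any language and packaged however the contributing team prefers.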

Mayank Goyal on our team took the above principles and re-architected our version 1 implementation into the design described below.

The SAF UI

The SAF UI is intentionally simple since it was not designed to be the analysis or reporting suite. The UI is a single-page web application that works with SAF's APIs and focuses on allowing researchers to configure their security assertions with the appropriate parameters. The core components of the UI are:

  • Defining the assertion: Assertions (tests) are saved within an internal GitHub instance, built via Jenkins, and posted to a Docker repository. SAF pulls them as required. The GitHub repository contains the Dockerfile, the code for the test, and the code that acts as a bridge between the tool and the SAF APIs using the SAF SDK. Tests can be shared with other team members or kept private.
  • Defining the scan: The same assertion (test) may be run with different configurations for different situations. The scan page is where you define the parameters for each run.
  • Results: The results page provides access to the raw results, broken down into pass, fail, or error for each host tested, along with a simple blob of explanatory text associated with each result.
  • Scheduling: Scans can be set to run at specific intervals.

As an example, one assertion identifies whether login forms at a given URL are served over HTTP. The assertion is stored in Git, is initiated by /src/startup.sh, and uses version 4 of its configuration parameters.
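
As an illustration only (not the actual Adobe assertion code), the core check behind such an assertion might look like the following sketch. It assumes the Python requests library and omits the packaging and SAF SDK reporting plumbing.

```python
# Illustrative core check for a "login form served over HTTP" assertion.
import re
import requests

def login_form_served_over_http(url: str) -> bool:
    """Return True if the page is reachable over plain HTTP and contains a
    password input field (i.e., a login form offered without TLS)."""
    http_url = url.replace("https://", "http://", 1)
    try:
        resp = requests.get(http_url, timeout=10, allow_redirects=False)
    except requests.RequestException:
        return False  # not reachable over plain HTTP, so the check passes
    if resp.status_code != 200:
        return False
    return re.search(r'<input[^>]+type=["\']password["\']', resp.text, re.I) is not None

if __name__ == "__main__":
    print("fail" if login_form_served_over_http("http://example.com/login") else "pass")
```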

A scan is then configured for the assertion, specifying when the test will run and which input list of URLs to test. A scan can run more than one assertion for the purposes of batching results.

The SAF API Server

The API server is responsible for preparing the information for the slave testing environments in the next stage. It receives the information for a scan from the UI, from an API client, or based on a saved schedule, assembles all the meta-information needed by the task/configuration executor, and packages the tool details and parameters for upload to the job queue in a slave environment for processing. The API server, acting as the master controller, also listens for responses from the queuing system and stores the results in the database. Everything downstream from the master controller is loosely coupled so that we can deploy work out to multiple locations and geographies.

The Job Queueing system

The queueing system is responsible for basic queuing; it allows the scheduler to schedule tasks based on resource availability and defer them when needed. While cloud providers offer their own queuing systems, ours is based on RabbitMQ because we wanted deployment mobility.
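
For illustration, here is a minimal sketch of how a scan job could be published to RabbitMQ using the pika client. The queue name and message shape are assumptions for the example, not SAF's actual wire format.

```python
# Sketch of enqueueing a scan job with RabbitMQ via pika; queue name and
# message fields are illustrative only.
import json
import pika

def enqueue_scan(assertion_image, targets, host="localhost"):
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
    channel = connection.channel()
    channel.queue_declare(queue="saf.scan.jobs", durable=True)  # survive broker restarts
    body = json.dumps({"image": assertion_image, "targets": targets})
    channel.basic_publish(
        exchange="",
        routing_key="saf.scan.jobs",
        body=body,
        properties=pika.BasicProperties(delivery_mode=2),  # persistent message
    )
    connection.close()

if __name__ == "__main__":
    enqueue_scan("registry.example.internal/assertions/hsts-check:4",
                 ["https://example.com", "https://example.org"])
```

Keeping the broker self-hosted rather than tied to a single cloud provider's queue service is what preserves the deployment mobility mentioned above.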

The Task Scheduler

The task scheduler is the brains of running the tools. It is responsible for monitoring all the Docker containers, scaling, resource scheduling, and killing rogue tasks. It exposes the API that receives status and result messages from the Docker containers; that information is then relayed back to the API server.
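
The core loop for a single job might resemble the following sketch, built on the Docker SDK for Python; the image name, timeout, and SAF_TARGET environment variable are illustrative assumptions rather than SAF internals.

```python
# Sketch of launching one assertion container and killing it if it hangs.
import time
import docker

def run_assertion(image, target, timeout_s=600):
    client = docker.from_env()
    container = client.containers.run(
        image,
        detach=True,
        environment={"SAF_TARGET": target},  # hypothetical variable the assertion reads
    )
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        container.reload()                   # refresh status from the Docker daemon
        if container.status == "exited":
            logs = container.logs().decode(errors="replace")
            container.remove()
            return logs
        time.sleep(5)
    container.kill()                         # rogue or hung task: kill and clean up
    container.remove()
    return "error: timed out"
```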

The Docker images

The Docker images are based on a micro-service architecture approach. The default baseline is based on Alpine Linux to keep the image footprint small. SAF assertions can also be quite small. For instance, the test can be a small Python script which makes a request to the homepage of a web server and verifies whether an HSTS header was included in the response. This micro-service approach allows the environment to run multiple instances with minimum overhead. The assertion script communicates its status (e.g. keep-alives) and results (pass/fail/error) back to the task executor using the SAF SDK.
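
A minimal sketch of that HSTS assertion, assuming the Python requests library and leaving out the SAF SDK status reporting, could look like this:

```python
# Illustrative HSTS assertion: pass if the homepage response includes a
# Strict-Transport-Security header.
import requests

def hsts_enabled(hostname):
    resp = requests.get(f"https://{hostname}/", timeout=10)
    return "Strict-Transport-Security" in resp.headers  # header lookup is case-insensitive

if __name__ == "__main__":
    print("pass" if hsts_enabled("example.com") else "fail")
```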

Conclusion

While this overview still leaves a lot of the specific details unanswered, it should provide a basic description of our security automation framework approach at the architectural and philosophical level. For a security automation project to be a success at the detailed implementation level, it must be customized to the organization’s technology stack and operational flow. As we progress with our implementation, we will continue to post the lessons that we learn along the way.

Peleus Uhley
Principal Scientist

Centralized Security Governance Practices To Help Drive Better Compliance

Adobe's Common Controls Framework (CCF) has helped us achieve several security compliance goals and meet regulatory requirements across various products and solutions. In addition, we have achieved SOX 404 compliance across our financial functions to further support our governance efforts. To achieve this and to scale security controls across business units at Adobe, we required the adaptable foundation that a solid, centralized security governance framework can provide. Over the past few years we have made significant investments in this area by centralizing our security governance processes, tools, and technologies. Part of this effort includes establishing a driver/subscriber model for scalable services: a "driver" is responsible for developing a security service that addresses a CCF control requirement, which can then be consumed by business unit "subscribers" to help meet compliance requirements through integration with the central process. Examples of such processes are:

  • Centralized logging & alerting: Adobe makes use of SIEM solutions that let us investigate, troubleshoot, monitor, alert, and report on what is happening in our technology infrastructure.
  • Centralized policy, security & standards initiative: An initiative to scale Adobe-wide security policies and standards for compliance efforts, best practices, and Adobe-specific requirements. Policies and standards can now be easily found in one place and readily communicated to employees.
  • ASAP (Adobe Self-Assessment Program): To help ensure CCF controls are consistently applied, service teams are expected to certify the operating effectiveness of these controls on a quarterly basis via automated self-assessment questionnaires. The teams are also expected to monitor the landscape of risk and compliance functions for their organization. This program is driven through an enterprise GRC solution.
  • Availability monitoring: The availability of key customer-facing services is monitored by a control NOC (Network Operations Center) team.

In addition to the above, Adobe has implemented governance best practices from ISO 27001 (Information Security Management System) at the corporate level to help support our security program. All of the above control and compliance processes have been designed and implemented in a way that strives to cause minimal impact to product and engineering teams. We follow an integrated audit approach for security compliance initiatives so that evidence supporting audit testing only has to be provided once and we can take advantage of any overlaps between external audit engagements. Centralized processes and increased automation also help reduce friction between teams, improve overall communication and response, and help ensure Adobe remains adaptable to changes in compliance standards and regulations.

Prabhath Karanth
Sr. IT Risk Analyst

SOC 2-Type 2 (Security & Availability) and ISO 27001:2013 Compliance Across All Adobe Enterprise Clouds

We are pleased to report that Adobe has achieved SOC 2 – Type 2 (Security & Availability) and ISO 27001:2013 certifications for enterprise products within Adobe’s cloud offerings:

  • Adobe Marketing Cloud*
  • Adobe Document Cloud (incl. Adobe Sign)
  • Adobe Creative Cloud for enterprise
  • Adobe Managed Services*
    • Adobe Experience Manager Managed Services
    • Adobe Connect Managed Services
  • Adobe Captivate Prime
*(Excludes recent acquisitions including Livefyre and TubeMogul)

The criteria for these certifications have been an important part of the Common Controls Framework (CCF) by Adobe, a consolidated set of controls to allow Adobe teams supporting Adobe’s enterprise cloud offerings across the organization to meet the requirements of various industry information security and privacy standards.

As part of our ongoing commitment to help protect our customers and their data, and to help ensure that our standards effectively meet our customers' expectations, we are constantly refining this framework based on changes in industry requirements, customer requests, and internal feedback.

Following a number of requests from the security and compliance community, we are planning to publicly release an open source version of the CCF framework and guidance sometime in FY17 so that other companies may benefit from our experience.

Brad Arkin
Chief Security Officer

Boldly Leading the Possibilities in Cybersecurity, Risk, and Privacy

During the last week of October, five members of the Adobe security team and I attended the Executive Women's Forum (EWF) National Conference as first-time attendees. Over 400 people were in attendance at this fourteenth annual conference, the first to offer three separate tracks, all focused on the primary topic of "Balancing Risk and Opportunity, by Transforming Cybersecurity, Risk, and Privacy beyond the Enterprise."

The Executive Women's Forum has emerged as a leading member organization using education, leadership development, and trusted relationships to attract, develop, and advance women in the Information Security, IT Risk Management, Privacy, Governance, Compliance, and Risk Assurance industries. EWF membership offers virtual access to peers and thought leadership globally, networking opportunities both locally and at industry conferences, and advancement education and opportunities via EWF's leadership program and its peer and mentoring programs. EWF also provides learning opportunities through its national conference, regional meetings, and webinar series.

At this year’s national EWF conference, several of the presentations and sessions stood out, namely:

  • Several keynote speakers wowed the crowd with their personal stories of industry challenges, personal hardships and their rise through the ranks. Speakers of interest included:
    • Susan Keating (President and Chief Executive Officer at the National Foundation for Credit Counseling). Keating recounted her personal story of managing through what was, in 2001, the largest banking fraud in US history and the lessons learned from the experience. Her message and advice focused on being resilient, prioritizing prevention and recovery-preparation activities, remembering that communication is imperative and that one needs to be tireless in connecting at all points (employees, customers, partners, etc.), refusing to tolerate bullying and intimidating behavior, and keeping a culture healthy by always remembering the human component: if you're not staying connected to people, you could miss something.
    • Meg McCarthy (Executive Vice President of Operations and Technology at Aetna). Interviewed by Joyce Brocaglia, McCarthy spoke of her career journey to the executive suite, the challenges she faced along the way, and what it takes to thrive as a leader at the top. Among her advice, three pieces stood out: talk the talk (get communication coaching, get into executive strategy meetings, and identify and study role models); say yes (be careful about declining; McCarthy always took the opportunities offered to her); and the proof is in the pudding (build a track record, visualize your goals, and always look the part).
    • Valerie Plame (Former Operations Officer at the US Central Intelligence Agency). Plame told her story as an undercover operations officer for the CIA who served her country by gathering intelligence on weapons of mass destruction. When her husband, Joe Wilson, spoke out about the falsehoods levied publicly to justify the Iraq War, the administration retaliated by revealing Plame's position in the CIA, ruining her career and reputation and exposing her to domestic and foreign enemies. She encouraged all to hold people, governments, and organizations accountable for their words and actions.
    • Nina Burleigh (Author and National Correspondent at Newsweek Magazine). Burleigh explained how the issue of women's equality is a challenge everyone wants to address and is approaching a tipping point. She foresees 2017 being the year of women, with topics such as female political representation, family and maternal leave, and women's health care at the forefront, especially in the US.
  • Additionally, there were several breakout talks that bear mention:
    • The pre-conference workshop on Conversational Intelligence facilitated by Linda Dolceamore of EWF focused on the chemical reactions in our brains in response to different types of communication. The workshop taught us what to do in order to activate the prefrontal cortex for high-level thinking, as well as evaluate whether our conversations are transactional, positional, or transformational.  Proper application of this information should enable a person to build better relationships, which will then evoke higher levels of trust and collaboration.
    • A panel session where five C-level executives talked about what they see next and what keeps them up at night. Takeaways included:
      • Trust is the currency of the future.
      • The digital vortex is upon us and only smart digitization will see us through.
      • Stay true to yourself. Stay curious.  Ask why.
    • The presentation regarding EWF's initiative for Voice Privacy. As products utilizing voice interaction proliferate, it is imperative that we consider the security and privacy of our voices and provide the industry with appropriate guidance for voice-enabled technology.
    • Yolanda Smith's presentation on The New Device Threat Landscape. Client-side attacks generally start off of the corporate network. Smith demonstrated a karma attack using a Hak5 WiFi Pineapple Nano as the rogue access point (complete with a phony landing page) and the Social Engineering Toolkit to generate a payload for a reverse TCP shell. To mitigate the threat of these sorts of attacks, remove probes from your devices and refrain from connecting devices to unknown networks.

EWF's goal of extending the influence and strength of women's voices in the industry aligns well with Adobe's mission to be an industry leader in creating an environment that supports the growth and development of global women leaders. It is exciting for Adobe to partner with the Executive Women's Forum organization. If EWF's national conference is a taste of its yearly impact, it will be compelling to participate in the additional year-round initiatives, events, and opportunities available through EWF membership. We look forward to connecting with colleagues and friends at more events going forward.

Security Automation Part II: Defining Requirements

Every security engineer wants to build the big security automation framework, if only for the challenge of designing something complex. Big projects like these come with their own set of challenges, though, and like any good coding project, you want to have a plan before setting out on the adventure.

In the last blog, we dealt with some of the high-level business concerns that must be considered in order to design a project that produces the right results for the organization. In this blog, we look at the high-level design considerations from the software architect's perspective; in the next blog, we will look at the implementer's concerns. For now, most architects are concerned with the following:

Maintainability

This is a concern for both the implementer and architect, but they often have different perspectives. If you are designing a tool that the organization is going to use as a foundation of its security program, then you need to design the tool such that the team can maintain it over time.

Maintainability through project scope

Your development teams likely already deploy automation and scalability projects, such as Jenkins, Git, Chef, or Maven. All of these frameworks are extensible. If all you want to do is run code with each build, then you might consider integrating into these existing frameworks rather than building your own automation. They already handle things such as logging, alerting, scheduling, and interacting with the target environment; your team just has to write the code that tells them what you want done with each build.
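
As a hypothetical example of "run code with each build," the snippet below is a small check that an existing CI job (for example, a Jenkins build step) could invoke and fail the build on. The file extensions and secret patterns are illustrative, not a recommended rule set.

```python
# Sketch of a per-build security check invoked by an existing CI framework;
# a non-zero exit code fails the build.
import pathlib
import re
import sys

# Illustrative patterns for hard-coded credentials (AWS-style key IDs, PEM keys).
SECRET_PATTERN = re.compile(r"(AKIA[0-9A-Z]{16}|-----BEGIN (RSA |EC )?PRIVATE KEY-----)")

def scan_tree(root):
    """Return files under `root` that appear to contain hard-coded credentials."""
    findings = []
    for path in pathlib.Path(root).rglob("*"):
        if path.is_file() and path.suffix in {".py", ".java", ".js", ".yaml", ".yml", ".properties"}:
            try:
                if SECRET_PATTERN.search(path.read_text(errors="ignore")):
                    findings.append(str(path))
            except OSError:
                continue
    return findings

if __name__ == "__main__":
    hits = scan_tree(sys.argv[1] if len(sys.argv) > 1 else ".")
    for hit in hits:
        print(f"possible hard-coded secret: {hit}")
    sys.exit(1 if hits else 0)  # non-zero exit fails the build
```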

If you are attempting a larger project, do you have a roadmap of smaller deliverables to validate the design as you progress? The roadmap should prioritize the key elements of success for the project in order to get an early sense if you are heading in the right direction with your design. In addition, while it is important to define everything that your project will do, it is also important to define all the things that your tool will not perform. Think ahead to all of the potential tangential use cases that your framework could be asked to perform by management and customers. By establishing what is out of scope for your project, you can set proper expectations earlier in the process and those restrictions will become guardrails to keep you on track when requests for tangential features come in.

Maintainability through function delegation

Can you leverage third-party services for operational issues?  Can you use the cloud so that baseline network and machine uptime is maintained by someone else? Can you leverage tools such as Splunk so that log management is handled by someone else? What third-party libraries already exist so that you are only inventing the wheels that need to be specific to your organization? For instance, tools like RabbitMQ are sufficient to handle most queueing needs.  The more of the “busy work” that can be delegated to third-party services or code, the more time that the internal developers can spend on perfecting the framework’s core mission.

Deployment

It is important to know where your large scale security framework may be deployed. Do you need to scan staging environments that are located on an internal network in order to verify security features before shipping? Do you need to scan production systems on an external network to verify proper deployment? Do you need to scan the production instances from outside the corporate network because internal security controls would interfere with the scan? Do you want to have multiple scanning nodes in both the internal and external network? Should you decouple the job runner from the scanning nodes so that the job runner can be on the internal network even if the scanning node is external?  Do you want to allow teams to be able to deploy their own instances so that they can run tests themselves? For instance, it may be faster if an India based team can conduct the scan locally than to run the scan from US based hosts. In addition, geographical load balancers will direct traffic to the nearest hosts which may cause scanning blind spots. Do you care if the scanners get deployed to multiple  geographic locations so long as they all report back to the same database?

Tool selection

It is important to spend time thinking about the tools that you will want your large security automation framework to run because security testing tools change. You do not want your massive project to die just because the tool it was initially built to execute falls out of fashion and is no longer maintained. If there is a loose coupling between the scanning tool and the framework that runs it, then you will be able to run alternative tools once the ROI on the initial scanning tool diminishes. If you are not doing a large scale framework and are instead just modifying existing automation frameworks, the same principles will apply even if they are at a smaller scale.

Tool dependencies

While the robustness of tests results is an important criterion for tool selection, complex tools often have complex dependencies. Some tools only require the targeted URL and some tools need complex configuration files.  Do you just need to run a few tools or do you want to spend the time to make your framework be security tool agnostic? Can you use a Docker image for each tool in order to avoid dependency collisions between security assessment tools? When the testing tool conducts the attack on the remote host, does the attack presume that code injected into the remote host’s environment can send a message back to the testing tool?  If you are building a scalable scanning system with dynamically allocated, short-lived hosts that live behind a NAT server, then it may be tricky for the remote attack code to send a message back to the original security assessment tool.

Inputs and outputs

Do the tools require a complex, custom configuration file per target or do you just need to provide the hostname? If you want to scale across a large number of sites, tools that require complex, per-site configuration files may slow the speed at which you can scale and require more maintenance over time. Does the tool provide a single clear response that is easy to record or does it provide detailed, nuanced responses that require intelligent parsing? Complex results with many different findings may make it more difficult to add alerting around specific issues to the tool. They could also make metrics more challenging depending on what and how you measure.

Tool scalability

How many instances of the security assessment tool can be run on a single host? For instance, tools that listen on ports limit the number of instances per server; if so, you may need Docker or a similar auto-scaling solution. Complex tools take longer to run, which may cause issues with detecting timeouts. How will the tool handle re-tests of identified issues? Does the tool have enough granularity that a dev team can test a proposed patch against the specific issue, or does the entire test suite need to be re-run every time a developer wants to verify a patch?

Focus and roll out

If you are tackling a large project, it is important to understand the minimum viable product. What is the one thing that makes this tool different from just buying the enterprise version of the equivalent commercial tool? Could the entire project be replaced with a few Python scripts and crontab? If you can't articulate what extra value your approach brings over the alternative commercial or crontab approach, then the project will not succeed. The people who would leverage the platform may get impatient waiting for your development to be done; they could instead opt for a quicker solution, such as buying a service, so that they can move on to the next problem. As you design your project, always ask yourself, "Why not cron?" This will help you focus on the key elements of the project that bring unique value to the organization. Your roadmap should focus on delivering those first.

Team adoption

Just because you are building a tool to empower the security team doesn't mean that your software won't have other customers. The tool will need to interact with the development teams' environments, and it will produce output that will eventually need to be processed by those teams. The development teams should not be an afterthought in your design: you will be holding them accountable for the results, so they need ways to understand the context of what your team has found and to independently retest.

For instance, one argument for integrating into something like Jenkins or Git is that you are using a tool the development team already understands. When you try to explain how your project will affect their environment, using a tool they know means the discussion will be in a language they understand. They will still have concerns that your code might negatively impact their environment, but they may have more faith in the project if they can mentally quantify the risk based on known systems. When you create standalone frameworks, it is harder for them to understand the scale of the risk because the framework is completely foreign to them.

At Adobe, we have been able to work directly with the development teams when building security automation. In a previous blog, an Adobe developer described the tools that he built as part of his pursuit of an internal black belt security training certification. There are several advantages to having the security champions on development teams build these tools rather than the core security team. Full-time developers are often better coders than the security team and better understand the framework integration, and in the event of an issue with the tool, the development team has the knowledge to take emergency action. Oftentimes, the security team just needs the tool to meet specific requirements, and the implementation and operational management of the tool can be handled by the team responsible for the environment. This makes the development team more at ease with having the tool in their environment and frees up the core security team to focus on larger issues.

Conclusion

While jumping right into the challenges of the implementation is always tempting, thinking through the complete data flow for the proposed tools can save you a lot of rewriting. It is also important to avoid trying to boil the ocean by scoping more than your team can manage. Most importantly, always keep focus on the unique value of your approach and on the customers that you need to buy into the tool once it is launched. The next blog will focus on an implementer's concerns around platform selection, queuing, scheduling, and scaling by looking at example implementations.

Security Automation for PCI Certification of the Adobe Shared Cloud

Software engineering is a unique and exciting profession. Engineers must employ continuous learning habits in order to keep up with a constantly morphing software ecosystem. This is especially true in the software security space: the continuous introduction of new software also means new security vulnerabilities are introduced. The problem at its core is actually quite simple. It's a human problem. Engineers are people, and, like all people, they sometimes make mistakes. These mistakes can manifest themselves in the form of 'bugs' and usually occur when the software is used in a way the engineer didn't expect. When these bugs are left uncorrected, they can leave the software vulnerable. Some mistakes are bigger than others, and many are preventable. However, as they say, hindsight is always 20/20. You need not necessarily experience these mistakes to learn from them. As my father often told me, a smart person learns from his mistakes, but a wise person learns from the mistakes of others. And so it goes with software security. In today's software world, it's not enough to just be smart; you also need to be wise.

After working at Adobe for just shy of five years, I have achieved the coveted rank of 'Black Belt' in Adobe's security program through the development of some internal security tools and by assisting in the recent PCI certification of the Shared Cloud (the internal platform upon which Creative Cloud and Document Cloud are based). Through Adobe's security program, my understanding of security has certainly broadened. I earned my white belt within just a few months of joining Adobe; it consisted of some online courses covering very basic security best practices. When Adobe created the role of "Security Champion" within every product team, I volunteered. Seeking to make myself a valuable resource to my team, I quickly earned my green belt, which consisted of completing several advanced online courses covering a range of security topics from SQL injection and XSS to buffer overflows. I now had two belts down, only two to go.

At the beginning of 2015, the Adobe Shared Cloud team started down the path of PCI compliance. When it became clear that a dedicated team would be needed to manage this, a few others and I made a career shift from software engineer to security engineer in order to form a dedicated security team for the Shared Cloud. To bring ourselves up to speed, we began attending Black Hat and OWASP conferences to further our expertise. We also started the long, arduous task of breaking down the PCI requirements into concrete engineering tasks. It was out of this PCI effort that I developed three tools – one of which would earn me my Brown Belt, and the other two my Black Belt.

The first tool came from the PCI requirement that you track all third-party software libraries for vulnerabilities and remediate them based on severity. Working closely with the ASSET team, we developed an API that allows teams to push product dependencies and versions as applications are built. Once that was completed, I wrote an integrated and highly configurable Maven plugin which consumes the API at build time, thereby helping to keep applications up to date automatically as part of our continuous delivery system. After completing this tool, I submitted it as a project and was rewarded with my Brown Belt. My plugin has since been adopted by several teams across Adobe.
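
The real implementation is a Maven plugin, but the underlying idea can be sketched in a few lines: push each build's dependency coordinates to a (hypothetical) internal tracking endpoint so they can be monitored for known vulnerabilities. The endpoint and field names below are illustrative.

```python
# Language-agnostic sketch of the dependency-reporting idea behind the plugin;
# the endpoint URL and payload fields are hypothetical.
import json
import urllib.request

TRACKING_API = "https://deps.example.internal/api/v1/builds"  # hypothetical endpoint

def report_dependencies(application, version, dependencies):
    """Send the build's third-party dependency coordinates to the tracking service."""
    payload = json.dumps({
        "application": application,
        "version": version,
        "dependencies": dependencies,  # e.g. {"group": ..., "artifact": ..., "version": ...}
    }).encode()
    req = urllib.request.Request(TRACKING_API, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=10)

if __name__ == "__main__":
    report_dependencies("shared-cloud-service", "1.4.2", [
        {"group": "org.apache.commons", "artifact": "commons-lang3", "version": "3.4"},
    ])
```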

The second tool also came from a PCI requirement: no changes are allowed on production servers without a review step, including code changes. At first glance this doesn't seem so bad – after all, we were already doing regular code reviews, so it shouldn't be a big deal, right? WRONG! The burden of PCI is that you have to prove that changes were reviewed and that no change could reach production without first being reviewed. There were a number of manual approaches one could take to meet this requirement, but who wants the hassle and overhead of a manual process? Enter my first Black Belt project: the Git SDLC Enforcer Plugin. I developed a Jenkins plugin that runs when a merge is made onto a project's release branch. The plugin reviews the commit history and helps ensure that every commit belongs to a pull request and that each pull request was merged by someone other than its author. If any offending commits or pull requests are found, the build fails and an issue is opened on the project in its GitHub space. This turned out to be a huge time saver and a very effective mechanism for ensuring every change to the code is reviewed.
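
The actual enforcer is a Jenkins plugin running against our internal GitHub instance, but the check it performs can be sketched against the public GitHub REST API as follows; the repository names and token handling are illustrative.

```python
# Sketch of the review-enforcement check: every commit on the release branch
# must belong to a pull request merged by someone other than its author.
import os
import requests

API = "https://api.github.com"  # an internal GitHub Enterprise URL in practice
HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}

def unreviewed_commits(owner, repo, branch):
    offending = []
    commits = requests.get(f"{API}/repos/{owner}/{repo}/commits",
                           params={"sha": branch}, headers=HEADERS, timeout=10).json()
    for commit in commits:
        sha = commit["sha"]
        pulls = requests.get(f"{API}/repos/{owner}/{repo}/commits/{sha}/pulls",
                             headers=HEADERS, timeout=10).json()
        if not pulls:
            offending.append(sha)          # commit never went through a pull request
            continue
        for pr in pulls:
            detail = requests.get(pr["url"], headers=HEADERS, timeout=10).json()
            merged_by = (detail.get("merged_by") or {}).get("login")
            author = detail["user"]["login"]
            if not detail.get("merged") or merged_by == author:
                offending.append(sha)      # self-merged or unmerged pull request
                break
    return offending

if __name__ == "__main__":
    bad = unreviewed_commits("example-org", "example-repo", "release")
    if bad:
        print("offending commits:", ", ".join(bad))
        raise SystemExit(1)                # fail the build
```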

The project that finally earned me my Black Belt, however, was the development of a tool that will eventually fully replace the Adobe Shared Cloud's secret injection mechanism. When working with Amazon Web Services, you have a bit of a chicken-and-egg problem when you begin to automate deployments: at some point, you need an automated way to get the right credentials into the EC2 instances your application needs to run. Currently, the Shared Cloud's secrets management leverages a combination of custom-baked AMIs, IAM roles, S3, and encrypted data bags stored in the "Hosted Chef" service. For many reasons, we wanted to move away from Chef's managed solution and add some additional layers of security, such as the ability to rotate encryption keys, logging of access to secrets, the ability to restrict access to secrets based on environment and role, and auditability. The new tool was designed to be a drop-in replacement for "Hosted Chef": it is easier to integrate into our baking tool chain, replaces the data bag functionality provided by the previous tool, and adds additional security functionality. The tool works splendidly, and by the end of the year the Shared Cloud will be using it exclusively, resulting in a much more secure, efficient, reliable, and cost-effective mechanism for injecting secrets.
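
The tool itself is internal, but the general pattern it builds on can be sketched with boto3: an EC2 instance uses its IAM role to pull an encrypted blob from S3 and decrypt it with KMS. The bucket, key, and encryption-context names below are assumptions for the example, not the actual implementation.

```python
# General-pattern sketch (not the actual Adobe tool): fetch an encrypted
# secret blob from S3 and decrypt it with KMS using the instance's IAM role.
import json
import boto3

def fetch_secret(bucket, key, role):
    s3 = boto3.client("s3")
    kms = boto3.client("kms")
    blob = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    plaintext = kms.decrypt(
        CiphertextBlob=blob,
        EncryptionContext={"role": role},  # restricts which roles may decrypt
    )["Plaintext"]
    return json.loads(plaintext)

if __name__ == "__main__":
    secrets = fetch_secret("example-secrets-bucket", "prod/service-a/creds.enc", "service-a")
    print(sorted(secrets.keys()))  # never log the secret values themselves
```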

My takeaway from all this is that Adobe has developed a top-notch security training program, and even though I have learned a ton about software security through it, I still have much to learn. I look forward to continuing to make a difference at Adobe.

Jed Glazner
Security Engineer