The Adobe Security Team at RSA Conference 2017

It feels like we just got through the last “world’s largest security conference,” but here we are again. While the weather is not looking to be the best this year (although this is our rainy season, so we Bay Area folks do consider this “normal”), the Adobe security team would again like to welcome all of you descending on our home turf here in San Francisco next week, February 13 – 17, 2017.

This year, I will be emceeing the Executive Security Action Forum (ESAF) taking place on Monday, February 13th, to kick off the conference. I hope to see many of you there.

On Thursday, February 16th, from 9:15 – 10:00 a.m. in Moscone South Room 301, our own Mike Mellor and Bryce Kunz will also be speaking in the “Cloud Security and Virtualization” track on the topic of “Orchestration Ownage: Exploiting Container-Centric Data Center Platforms.” This session will be a live coaching session illustrating how to hack the popular DC/OS container operating environment. We hope the information you learn from this live demo will give you the ammunition you need to take home and better protect your own container environments. This year you are able to pre-register for conference sessions. We expect this one to be popular given the live hacking demo, so please try to grab a seat if you have not already.

As always, members of our security teams and myself will be attending the conference to network, learn about the latest trends in the security industry, and share our knowledge. Looking forward to seeing you.

Brad Arkin
Chief Security Officer

Security Automation Part III: The Adobe Security Automation Framework

In previous blogs [1],[2], we discussed alternatives for creating a large-scale automation framework if you don’t have the resources for a multi-month development project. This blog assumes that you are ready to jump in with both feet on designing your own internal automation solution.

Step 1: Research

While we particularly like our design, that doesn’t mean it is the best for your organization. Take time to look at other designs, such as Salesforce’s Chimera, Mozilla’s Minion, ThreadFix, Twitter’s SADB, Gauntlt, and other approaches. These projects tackle large-scale security implementations in distinctly different ways. It is important to understand the full range of approaches in order to select the one that is the best fit for your organization. Projects like Mozilla Minion and Gauntlt are open source if you are looking for code in addition to ideas. Some tools follow specific development processes, such as Gauntlt’s adherence to Behavior Driven Development, which need to be considered. The OWASP AppSec Pipeline project provides information on how to architect around solutions such as ThreadFix.

These tools generally break down into two categories: aggregators and scanners. Aggregators focus on consolidating information from your diverse deployment of existing tools and then trying to make sense of it. ThreadFix is an example of this type of project. Scanners deploy either existing analysis tools or custom analysis tools at massive scale. Chimera, Minion, and Adobe’s Security Automation Framework take this approach. This blog focuses on our scanner approach using our Security Automation Framework, but we are also in the midst of designing an aggregator.

Step 2: Put together a strong team

The design of this solution was not the result of any one person’s genius. You need a group of strong people who can help point out when your idea is not as clever as you might think. This project involved several core people including Mohit Kalra, Kriti Aggarwal, Vijay Kumar Sahu, and Mayank Goyal. Even with a strong team, there was a re-architecting after the version 1 beta as our approach evolved. For the project to be a success in your organization, you will want many perspectives including management, product teams, and fellow researchers.

Step 3: Designing scanning automation

A well thought out design is critical to any tool’s long term success. This blog will provide a technical overview of Adobe’s implementation and our reasoning behind each decision.

The Adobe Security Automation Framework:

The Adobe Security Automation Framework (SAF) is designed around a few core principles that dictate the downstream implementation decisions. They are as follows:

  1. The “framework” is, in fact, a framework. It is designed to facilitate security automation but it does not try to be more than that. This means:
    1. SAF does not care what security assessment tool is being run. It just needs the tool to communicate progress and results via a specified API. This allows us to run any tool, based on any language, without adding hard-coded support to SAF for each tool.
    2. SAF provides access to the results data but it is not the primary UI for results data. Each team will want their data viewed in a team specific manner. The SAF APIs allow teams to pull the data and render it as best fits their needs. This also allows the SAF development team to focus their time on the core engine.
  2. The “framework” is based on Docker. SAF is designed to be multi-cloud and Docker allows portability. The justifications for using a Docker based approach include:
    1. SAF can be run in cloud environments, in our internal corporate network, or run from our laptops for debugging.
    2. Development teams can instantiate their own mini-copies of SAF for testing.
    3. Using Docker allows us to put security assessment tools in separate containers where their dependencies won’t interfere with each other.
    4. Docker allows us to scale the number of instances of each security assessment tool dynamically based on each tool’s job size.
  3. SAF is modularized with each service (UI, scheduler, tool instance, etc.) in its own Docker container. This allows for the following advantages:
    1. The UI is separated from the front-end API allowing the web interface to be just another client of the front-end API. While people will initiate scans from the UI, SAF also allows for API driven scan requests.
    2. The scanning environments are independent. The security assessment tools may need to be run from various locations depending on their target. For instance, the scan may need to run within an internal network, external network, a specific geographic location, or just within a team’s dedicated test environments. With loose-coupling and a modular design, the security assessment tools can be run globally while still having a local main controller.
    3. Docker modularity allows for choosing the language and technology stack that is appropriate for that module’s function.
  4. By having each security test encapsulated within its own Docker container, anyone in the company can have their security assessment tools included in SAF by providing an appropriately configured Docker image. Volunteers can write a simple Python driver based on the SAF SDK that translates a security testing tool’s output into compatible messages for the SAF API and provide that to the SAF team as a Docker image. We do this because:
    1. The SAF team does not want to be the bottleneck for innovation. By allowing external contributions to SAF, the number of tools that it can run increases at a far faster rate. Given the wide array of technology stacks deployed at Adobe, this allows development teams to contribute tools that are best suited for their environments.
    2. In certain incident response scenarios, it may be necessary to widely deploy a quick script to analyze your exposure to a situation. With SAF, you could get a quick measurement by adding the script to a Docker image template and uploading it to the framework.
  5. The “security assertion”, or the test that you want to run, should test a specific vulnerability and provide a true/false result that can be used for accurate measurements of the environment. This is similar to the Behavior Driven Development approaches seen in tools like Gauntlt. SAF is not designed to run a generic, catch-all web application penetration tool that will return a slew of results for human triage. Instead, it is designed for analysis of specific issues. This has the following advantages:
    1. If you run an individual test, then you can file an individual bug for tracking the issue.
    2. You create tests specifically around the security issues that are critical to your organization. The specific tests can then be accurately measured at scale.
    3. Development teams do not feel that their time is being wasted by being flooded with false positives.
    4. Since it is an individual test, the developer in charge of fixing that one issue can reproduce the results using the same tool as was deployed by the framework. They could also add the test to their existing automation testing suite.
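As a concrete illustration of the assertion principle, the sketch below shows what a single true/false test might look like. The check and function names are hypothetical, and the real SAF SDK reporting is omitted:

```python
import urllib.request
import urllib.error

def banner_verdict(server_header):
    # One specific, measurable question: does the Server header leak a
    # version-like token (e.g. "Apache/2.4.41")?
    return "fail" if "/" in server_header else "pass"

def assert_no_server_banner(host):
    """Run the check against one host and return pass/fail/error."""
    try:
        with urllib.request.urlopen(f"https://{host}/", timeout=10) as resp:
            return banner_verdict(resp.headers.get("Server", ""))
    except urllib.error.URLError:
        return "error"  # unreachable is an operational error, not a failure

def run(hosts):
    # One result per host, so each "fail" can be filed as one specific bug.
    return {h: assert_no_server_banner(h) for h in hosts}
```

Because the verdict is a single pass/fail per host, results can be measured accurately at scale, and an individual failure can be re-tested by a developer with exactly the same check.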

Mayank Goyal on our team took the above principles and re-architected our version 1 implementation into a design depicted in the following architecture diagram:

The SAF UI

The SAF UI is deliberately simple since it was not designed to be the analysis or reporting suite. The UI is a single-page web application which works with SAF’s APIs. The UI focuses on allowing researchers to configure their security assertions with the appropriate parameters. The core components of the UI are:

  • Defining the assertion: Assertions (tests) are saved within an internal GitHub instance, built via Jenkins, and posted to a Docker repository. SAF pulls them as required. The GitHub repository contains the Dockerfile, the code for the test, and the code that acts as a bridge between the tool and the SAF APIs using the SAF SDK. Tests can be shared with other team members or kept private.
  • Defining the scan: It is possible that the same assertion(test) may be run with different configurations for different situations. The scan page is where you define the parameters for different runs.
  • Results: The results page provides access to the raw results. The results are broken down into pass, fail, or error for each host tested. It is accompanied by a simple blob of text that is associated with each result.
  • Scheduling: Scans can be set to run at specific intervals.

This screenshot demonstrates an assertion that can identify whether login forms at the given URL are available over HTTP. This assertion is stored in Git, is initiated by /src/startup.sh, and will use version 4 of the configuration parameters.

A scan is then configured for the assertion, specifying when the test will be run and which input list of URLs to test. A scan can run more than one assertion for the purposes of batching results.

The SAF API Server

The API server is responsible for preparing the information for the downstream (slave) testing environments. It receives the information for a scan from the UI, from an API client, or based on the saved schedule. The tool details and parameters are packaged and uploaded to the job queue in a slave environment for processing, along with all of the meta information the task/configuration executor needs for testing. The API server also listens for the responses from the queuing system and stores the results in the database. Everything downstream from the API server is loosely coupled so that we can deploy work out to multiple locations and different geographies.
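The packaging step described above can be sketched as follows. The field names here are illustrative assumptions, not the actual SAF job schema:

```python
import json
import uuid
from datetime import datetime, timezone

def package_scan_job(assertion_image, config_version, targets):
    """Assemble the metadata a downstream task executor would need."""
    return json.dumps({
        "job_id": str(uuid.uuid4()),           # correlate queue responses later
        "image": assertion_image,              # Docker image holding the test
        "config_version": config_version,      # which saved parameter set to run
        "targets": targets,                    # hosts/URLs to assert against
        "submitted_at": datetime.now(timezone.utc).isoformat(),
    })
```

A self-describing JSON payload like this keeps the coupling loose: any queue and any executor location can consume it without the API server knowing where the work will actually run.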

The Job Queueing system

The queueing system is responsible for basic queuing and allows the scheduler to schedule tasks based on resource availability and defer them when needed. While cloud providers offer queuing systems, ours is based on RabbitMQ because we wanted to have deployment mobility.

The Task Scheduler

This is the brains of running the tools. It is responsible for monitoring all the Docker containers, scaling, resource scheduling, and killing rogue tasks. It exposes the API that receives the status and result messages from the Docker containers. That information is then relayed back to the API server.

The Docker images

The Docker images are based on a micro-service architecture approach. The default baseline is based on Alpine Linux to keep the image footprint small. SAF assertions can also be quite small. For instance, the test can be a small Python script which makes a request to the homepage of a web server and verifies whether an HSTS header was included in the response. This micro-service approach allows the environment to run multiple instances with minimum overhead. The assertion script communicates its status (e.g. keep-alives) and results (pass/fail/error) back to the task executor using the SAF SDK.
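The HSTS example can be sketched as a tiny assertion script. This is a simplified illustration; a real assertion would report status and results through the SAF SDK rather than via return values:

```python
import urllib.request

def hsts_verdict(headers):
    """Pass only if a Strict-Transport-Security header is present."""
    # Works with http.client's case-insensitive header object or a plain dict.
    return "pass" if headers.get("Strict-Transport-Security") else "fail"

def run_assertion(url):
    # Fetch the homepage and reduce the response to pass/fail/error.
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return hsts_verdict(resp.headers)
    except OSError:
        return "error"  # unreachable host: operational error, not a failure
```

A script this small, packaged in an Alpine-based image, is what lets the environment run many assertion instances with minimal overhead.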

Conclusion

While this overview still leaves a lot of the specific details unanswered, it should provide a basic description of our security automation framework approach at the architectural and philosophical level. For a security automation project to be a success at the detailed implementation level, it must be customized to the organization’s technology stack and operational flow. As we progress with our implementation, we will continue to post the lessons that we learn along the way.

Peleus Uhley
Principal Scientist

Centralized Security Governance Practices To Help Drive Better Compliance

Adobe CCF has helped us achieve several security compliance goals and meet regulatory requirements across various products and solutions. In addition, we have also achieved SOX 404 compliance across our financial functions to further support our governance efforts. In order to achieve this and to scale the security controls across business units at Adobe, we required the adaptable foundation that a solid centralized security governance framework can help provide. Over the past few years we have made significant investments in this area by centralizing our security governance processes, tools, and technologies. Part of this effort includes establishing a driver/subscriber model for scalable services. A “driver” is responsible for developing a security service which will address a CCF control requirement. This can then be consumed by business unit “subscribers” to help meet compliance requirements through integration with the central process. Examples of such useful processes are:

  • Centralized logging & alerting: Adobe makes use of SIEM solutions that let us investigate, troubleshoot, monitor, alert, and report on what’s happening in our technology infrastructure.
  • Centralized policy, security & standards initiative: An initiative to scale Adobe-wide security policies and standards for compliance efforts, best practices, and Adobe-specific requirements. Policies and standards can now be easily found in one place and readily communicated to employees.
  • ASAP (Adobe Self-Assessment Program): In order to help ensure CCF controls are consistently applied, service teams are expected to certify the operating effectiveness of these controls on a quarterly basis by way of automated self-assessment questionnaires. The teams are also expected to monitor the landscape of risk and compliance functions for their organization. This program is driven through an enterprise GRC solution.
  • Availability monitoring: The availability of key customer-facing services is monitored by a central NOC (Network Operations Center) team.

In addition to the above, Adobe has implemented governance best practices from ISO 27001 (Information Security Management System) at the corporate level to help support our security program. All of the above control and compliance processes have been designed and implemented in a way that strives to cause minimal impact to the product and engineering teams. We follow an integrated audit approach for security compliance initiatives so that the evidence to support audit testing has to be provided one time and we can take advantage of any overlaps that exist between external audit engagements. Centralized processes and increased automation also help to reduce friction between teams, improve overall communication and response, and help ensure Adobe remains adaptable to changes in compliance standards and regulations.

Prabhath Karanth
Sr. IT Risk Analyst

SOC 2-Type 2 (Security & Availability) and ISO 27001:2013 Compliance Across All Adobe Enterprise Clouds

We are pleased to report that Adobe has achieved SOC 2 – Type 2 (Security & Availability) and ISO 27001:2013 certifications for enterprise products within Adobe’s cloud offerings:

  • Adobe Marketing Cloud*
  • Adobe Document Cloud (incl. Adobe Sign)
  • Adobe Creative Cloud for enterprise
  • Adobe Managed Services*
    • Adobe Experience Manager Managed Services
    • Adobe Connect Managed Services
  • Adobe Captivate Prime
*(Excludes recent acquisitions including Livefyre and TubeMogul)

The criteria for these certifications have been an important part of the Common Controls Framework (CCF) by Adobe, a consolidated set of controls to allow Adobe teams supporting Adobe’s enterprise cloud offerings across the organization to meet the requirements of various industry information security and privacy standards.

As part of our ongoing commitment to help protect our customers and their data, and to help ensure that our standards effectively meet our customers’ expectations, we are constantly refining this framework based on industry requirement changes, customer asks, and internal feedback.

Following a number of requests from the security and compliance community, we are planning to publicly release an open source version of the CCF framework and guidance sometime in FY17 so that other companies may benefit from our experience.

Brad Arkin
Chief Security Officer

Boldly Leading the Possibilities in Cybersecurity, Risk, and Privacy

During the last week in October, five members of the Adobe Security team and I attended the Executive Women’s Forum (EWF) National Conference as first-time attendees. Over 400 people were in attendance at the fourteenth annual conference. It was the first time three separate tracks were offered, all focused on the primary topic of “Balancing Risk and Opportunity, by Transforming Cybersecurity, Risk, and Privacy beyond the Enterprise.”

The Executive Women’s Forum has emerged as a leading member organization using education, leadership development and trusted relationships to attract, develop and advance women in the Information Security, IT Risk Management, Privacy, Governance, Compliance and Risk Assurance industries.  Additionally, EWF membership offers virtual access to peers and thought leadership globally, networking opportunities both locally and at industry conferences, advancement education and opportunities via EWF’s leadership program, plus their peer and mentoring program.  EWF also provides learning interaction at their national conference, regional meetings and webinar series.

At this year’s national EWF conference, several of the presentations and sessions stood out, namely:

  • Several keynote speakers wowed the crowd with their personal stories of industry challenges, personal hardships and their rise through the ranks. Speakers of interest included:
    • Susan Keating (President and Chief Executive Officer at National Foundation for Credit Counseling). Keating recounted her personal story of managing through what was, in 2001, the largest banking fraud in US history and the lessons learned from the experience. Her message and advice focused on being resilient, prioritizing prevention and recovery preparation, remembering that communication is imperative and one needs to be tireless in connecting at all points (employees, customers, partners, etc.), that bullying and intimidating behavior is not to be tolerated, and that it’s important to keep a culture healthy by always remembering the human component – if you’re not staying connected to people, you could miss something.
    • Meg McCarthy (Executive Vice President of Operations and Technology at Aetna). Interviewed by Joyce Brocaglia, McCarthy spoke of her career journey to the executive suite, the challenges she faced along the way, and what it takes to thrive as a leader at the top.  Among her advice, three pieces stuck out: Talk the talk – get communication coaching, get into executive strategy meetings, and identify and study role models.  Saying yes – be careful declining – McCarthy always took opportunities offered to her.  The proof is in the pudding – build a track record, visualize your goals, and always look the part.
    • Valerie Plame (Former Operations Officer at US Central Intelligence Agency). Plame told her story as an undercover CIA operations officer who served her country by gathering intel on weapons of mass destruction. When her husband Joe Wilson spoke out about the falsities that were levied publicly to justify the Iraq War, the administration retaliated by revealing Plame’s position in the CIA, ruining her career and reputation, and exposing her to domestic and foreign enemies. She encouraged all to hold people, governments, and organizations accountable for their words and actions.
    • Nina Burleigh (Author and National Correspondent at Newsweek Magazine). Burleigh explained how the issue of women’s equality is a challenge everyone wants to address and is approaching a tipping point.  She foresees 2017 being the year of women, and topics, especially in the US, about female political representation, family and maternal leave and women’s health care will be at the forefront.
  • Additionally, there were several breakout talks that bear mention:
    • The pre-conference workshop on Conversational Intelligence facilitated by Linda Dolceamore of EWF focused on the chemical reactions in our brains in response to different types of communication. The workshop taught us what to do in order to activate the prefrontal cortex for high-level thinking, as well as evaluate whether our conversations are transactional, positional, or transformational.  Proper application of this information should enable a person to build better relationships, which will then evoke higher levels of trust and collaboration.
    • A panel session where five C-level executives talked about what they see next and what keeps them up at night. Takeaways included:
      • Trust is the currency of the future.
      • The digital vortex is upon us and only smart digitization will see us through.
      • Stay true to yourself. Stay curious.  Ask why.
    • The presentation regarding EWF’s initiative for Voice Privacy. As products proliferate utilizing voice interaction, it is imperative we consider the security and privacy aspect of our voices and provide the industry with appropriate guidance for voice enabled technology.
    • Yolanda Smith’s presentation on The New Device Threat Landscape.  Client-side attacks generally start off the corporate network.  Smith demonstrated a karma attack using a Hak5 Pineapple Nano as the deviant access point (complete with a phony landing page) and the Social Engineering Toolkit to generate a payload for a reverse TCP shell.  To mitigate the threat of these sort of attacks, remove probes from your devices and refrain from connecting devices to unknown networks.

EWF’s goal of extending the influence and strength of women’s voices in the industry aligns well with Adobe’s mission to be an industry leader in creating an environment that supports the growth and development of global women leaders. Therefore, it’s exciting for Adobe to partner with the Executive Women’s Forum organization. If EWF’s national conference is a taste of their yearly impact, it will be compelling to participate in the additional year-round initiatives, events, and opportunities available through EWF’s membership. We look forward to connecting with colleagues and friends at more events going forward.

Security Automation Part II: Defining Requirements

Every security engineer wants to build a big security automation framework for the challenge of designing something complex. But big projects come with their own set of challenges. Like any good coding project, you want to have a plan before setting out on the adventure.

In the last blog, we dealt with some of the high level business concerns that were necessary to consider in order to design a project that would produce the right results for the organization. In this blog we will look at the high level design considerations from the software architect’s perspective. In the next blog, we will look at the implementer’s concerns. For now, most architects are concerned with the following:

Maintainability

This is a concern for both the implementer and architect, but they often have different perspectives. If you are designing a tool that the organization is going to use as a foundation of its security program, then you need to design the tool such that the team can maintain it over time.

Maintainability through project scope

There are already automation and scalability projects that are deployed by the development team. These may include tools such as Jenkins, Git, Chef, or Maven. All of these frameworks are extensible. If all you want to do is run code with each build, then you might consider integrating into these existing frameworks rather than building your own automation. They will handle things such as logging, alerting, scheduling, and interacting with the target environment. Your team just has to write code to tell them what you want done with each build.

If you are attempting a larger project, do you have a roadmap of smaller deliverables to validate the design as you progress? The roadmap should prioritize the key elements of success for the project in order to get an early sense if you are heading in the right direction with your design. In addition, while it is important to define everything that your project will do, it is also important to define all the things that your tool will not perform. Think ahead to all of the potential tangential use cases that your framework could be asked to perform by management and customers. By establishing what is out of scope for your project, you can set proper expectations earlier in the process and those restrictions will become guardrails to keep you on track when requests for tangential features come in.

Maintainability through function delegation

Can you leverage third-party services for operational issues?  Can you use the cloud so that baseline network and machine uptime is maintained by someone else? Can you leverage tools such as Splunk so that log management is handled by someone else? What third-party libraries already exist so that you are only inventing the wheels that need to be specific to your organization? For instance, tools like RabbitMQ are sufficient to handle most queueing needs.  The more of the “busy work” that can be delegated to third-party services or code, the more time that the internal developers can spend on perfecting the framework’s core mission.

Deployment

It is important to know where your large-scale security framework may be deployed. Do you need to scan staging environments that are located on an internal network in order to verify security features before shipping? Do you need to scan production systems on an external network to verify proper deployment? Do you need to scan the production instances from outside the corporate network because internal security controls would interfere with the scan? Do you want to have multiple scanning nodes in both the internal and external network? Should you decouple the job runner from the scanning nodes so that the job runner can be on the internal network even if the scanning node is external? Do you want to allow teams to deploy their own instances so that they can run tests themselves? For instance, it may be faster for an India-based team to conduct the scan locally than to run the scan from US-based hosts. In addition, geographical load balancers will direct traffic to the nearest hosts, which may cause scanning blind spots. Do you care if the scanners get deployed to multiple geographic locations so long as they all report back to the same database?

Tool selection

It is important to spend time thinking about the tools that you will want your large security automation framework to run because security testing tools change. You do not want your massive project to die just because the tool it was initially built to execute falls out of fashion and is no longer maintained. If there is a loose coupling between the scanning tool and the framework that runs it, then you will be able to run alternative tools once the ROI on the initial scanning tool diminishes. If you are not doing a large scale framework and are instead just modifying existing automation frameworks, the same principles will apply even if they are at a smaller scale.
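One way to achieve that loose coupling is a thin adapter layer, sketched below. The names are illustrative; the point is that the framework only ever sees the interface:

```python
from abc import ABC, abstractmethod

class ToolAdapter(ABC):
    """The framework schedules work and records results through this
    interface only, so the underlying scanner can be replaced freely."""

    @abstractmethod
    def run(self, target):
        """Execute the underlying tool and return its raw output."""

    @abstractmethod
    def normalize(self, raw_output):
        """Translate tool-specific output into the framework's verdicts."""

class BannerCheckAdapter(ToolAdapter):
    # Toy stand-in for a real scanner; only this adapter knows its format.
    def run(self, target):
        return f"{target}: OK"

    def normalize(self, raw_output):
        return "pass" if raw_output.endswith("OK") else "fail"
```

When the initial tool’s ROI diminishes, only a new adapter needs to be written; the scheduler, queue, and results store are untouched.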

Tool dependencies

While the robustness of tests results is an important criterion for tool selection, complex tools often have complex dependencies. Some tools only require the targeted URL and some tools need complex configuration files.  Do you just need to run a few tools or do you want to spend the time to make your framework be security tool agnostic? Can you use a Docker image for each tool in order to avoid dependency collisions between security assessment tools? When the testing tool conducts the attack on the remote host, does the attack presume that code injected into the remote host’s environment can send a message back to the testing tool?  If you are building a scalable scanning system with dynamically allocated, short-lived hosts that live behind a NAT server, then it may be tricky for the remote attack code to send a message back to the original security assessment tool.

Inputs and outputs

Do the tools require a complex, custom configuration file per target or do you just need to provide the hostname? If you want to scale across a large number of sites, tools that require complex, per-site configuration files may slow the speed at which you can scale and require more maintenance over time. Does the tool provide a single clear response that is easy to record or does it provide detailed, nuanced responses that require intelligent parsing? Complex results with many different findings may make it more difficult to add alerting around specific issues to the tool. They could also make metrics more challenging depending on what and how you measure.
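Collapsing a nuanced report into a single recordable verdict can be sketched like this (the severity names are an assumption about the tool’s output format):

```python
def summarize_findings(findings, blocking=("critical", "high")):
    """Reduce a multi-finding report to one verdict that is easy to
    record, alert on, and measure across many sites."""
    hits = [f for f in findings if f.get("severity") in blocking]
    return {"verdict": "fail" if hits else "pass",
            "blocking_count": len(hits)}
```

Deciding up front which findings are blocking keeps alerting simple and makes the resulting metrics comparable across tools and targets.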

Tool scalability

How many instances of the security assessment tool can be run on a single host? For instance, tools that listen on ports limit the number of instances per server; in that case, you may need Docker or a similar auto-scaling solution. Complex tools take longer to run, which may cause issues with detecting time-outs. How will the tool handle re-tests for identified issues? Does the tool have enough granularity that the dev team can test their proposed patch against the specific issue? Or does the entire test suite need to be re-run every time the developer wants to verify their patch?

Focus and roll out

If you are tackling a large project, it is important to understand what the minimum viable product is. What is the one thing that makes this tool different from just buying the enterprise version of the equivalent commercial tool? Could the entire project be replaced with a few Python scripts and crontab? If you can’t articulate what extra value your approach will bring over the alternative commercial or crontab approach, then the project will not succeed. The people who would leverage the platform may get impatient waiting for your development to be done. They could instead opt for a quicker solution, such as buying a service, so that they can move on to the next problem. As you design your project, always ask yourself, “Why not cron?” This will help you focus on the key elements of the project that will bring unique value to the organization. Your roadmap should focus on delivering those first.

Team adoption

Just because you are building a tool to empower the security team doesn't mean your software won't have other customers. This tool will need to interact with the development teams' environments, and it will produce outputs that eventually need to be processed by those teams. The development teams should not be an afterthought in your design. You will be holding them accountable for the results, so they need methods for understanding the context of what your team has found and for independently retesting.

For instance, one argument for integrating into something like Jenkins or Git is that you are using a tool the development team already understands. When you try to explain how your project will affect their environment, using a tool that they know means the discussion will be in language they understand. They will still have concerns that your code might have negative impacts on their environment. However, they may have more faith in the project if they can mentally quantify the risk based on known systems. When you create standalone frameworks, it is harder for them to understand the scale of the risk because it is completely foreign to them.

At Adobe, we have been able to work directly with the development teams when building security automation. In a previous blog, an Adobe developer described the tools he built as part of his pursuit of an internal black belt security training certification. There are several advantages to having the security champions on development teams build these tools rather than the core security team. Full-time developers are often stronger coders than the security team and better understand the framework integration. Also, in the event of an issue with the tool, the development team has the knowledge to take emergency action. Often, a security team just needs the tool to meet specific requirements, and the implementation and operational management of the tool can be handled by the team responsible for the environment. This can make the development team more at ease with having the tool in their environment, and it frees up the core security team to focus on larger issues.

Conclusion

While jumping right into the challenges of the implementation is always tempting, thinking through the complete data flow for the proposed tools can save you a lot of rewriting. It is also important to avoid trying to boil the ocean by scoping more than your team can manage. Most importantly, always keep focus on the unique value of your approach and on the customers who need to buy into the tool once it is launched. The next blog will focus on an implementer's concerns around platform selection, queuing, scheduling, and scaling by looking at example implementations.

Security Automation for PCI Certification of the Adobe Shared Cloud

Software engineering is a unique and exciting profession. Engineers must employ continuous learning habits in order to keep up with a constantly morphing software ecosystem. This is especially true in the software security space. The continuous introduction of new software also means new security vulnerabilities are introduced. The problem at its core is actually quite simple: it's a human problem. Engineers are people, and, like all people, they sometimes make mistakes. These mistakes can manifest themselves in the form of ‘bugs’ and usually occur when the software is used in a way the engineer didn't expect. When these bugs are left uncorrected, they can leave the software vulnerable. Some mistakes are bigger than others and many are preventable. However, as they say, hindsight is always 20/20. You need not experience these mistakes yourself to learn from them. As my father often told me, a smart person learns from his mistakes, but a wise person learns from the mistakes of others. And so it goes with software security. In today's software world, it's not enough to just be smart; you also need to be wise.

After working at Adobe for just shy of five years, I have achieved the coveted rank of ‘Black Belt’ in Adobe's security program through the development of some internal security tools and by assisting in the recent PCI certification of the Shared Cloud (the internal platform upon which Creative Cloud and Document Cloud are based). Through Adobe's security program, my understanding of security has certainly broadened. I earned my white belt within just a few months of joining Adobe; it consisted of some online courses covering very basic security best practices. When Adobe created the role of “Security Champion” within every product team, I volunteered. Seeking to make myself a valuable resource to my team, I quickly earned my green belt, which consisted of completing several advanced online courses covering a range of security topics from SQL injection and XSS to buffer overflows. I now had two belts down, only two to go.

At the beginning of 2015, the Adobe Shared Cloud team started down the path of PCI compliance. When it became clear that a dedicated team would be needed to manage this, a few others and I made a career shift from software engineer to security engineer in order to form a dedicated security team for the Shared Cloud. To bring ourselves up to speed, we began attending Black Hat and OWASP conferences to further our expertise. We also started the long, arduous task of breaking down the PCI requirements into concrete engineering tasks. It was out of this PCI effort that I developed three tools – one of which would earn me my Brown Belt, and the other two my Black Belt.

The first tool came from the PCI requirement that you track all third-party software libraries for vulnerabilities and remediate them based on severity. Working closely with the ASSET team, we developed an API that allows applications to push their dependencies and versions as they are built. Once that was completed, I wrote an integrated and highly configurable Maven plugin that consumes the API at build time, thereby helping to keep applications up to date automatically as part of our continuous delivery system. After completing this tool, I submitted it as a project and was rewarded with my Brown Belt. My plugin has since been adopted by several teams across Adobe.
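The actual tool is a Maven plugin, but the build-time reporting idea can be sketched in a few lines of Python. The payload shape and the notion of a single tracking endpoint are illustrative assumptions, not the real Adobe API:

```python
"""Sketch of build-time dependency reporting. The real tool is a Maven
plugin; this Python version, the endpoint URL, and the payload shape are
all hypothetical illustrations."""
import json


def build_payload(app_name, version, dependencies):
    """Collect what the build already knows into a report for a tracking API."""
    return {
        "application": app_name,
        "version": version,
        "dependencies": [
            {"name": n, "version": v} for n, v in sorted(dependencies.items())
        ],
    }


def to_request_body(payload):
    """Serialize the report for an HTTP POST."""
    return json.dumps(payload).encode("utf-8")


# At build time, the plugin would POST this body to the internal tracking
# service (hypothetical URL), e.g.:
#   urllib.request.urlopen("https://asset.example.com/deps", data=body)
body = to_request_body(build_payload(
    "shared-cloud-api", "1.4.2",
    {"commons-io": "2.5", "log4j-core": "2.7"},
))
```

The central service can then flag any recorded library version with a known vulnerability and route it back to the owning team by severity.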

The second tool also came from a PCI requirement: no changes are allowed on production servers without a review step, including code changes. At first glance this doesn't seem so bad – after all, we were already doing regular code reviews, so it shouldn't be a big deal, right? WRONG! The burden of PCI is that you have to prove that changes were reviewed and that no change could reach production without first being reviewed. There were a number of manual approaches one could take to meet this requirement, but who wants the hassle and overhead of a manual process? Enter my first Black Belt project – the Git SDLC Enforcer Plugin. I developed a Jenkins plugin that runs when a merge lands on a project's release branch. The plugin reviews the commit history and helps ensure that every commit belongs to a pull request and that each pull request was merged by someone other than its author. If any offending commits or pull requests are found, the build fails and an issue is opened on the project in its GitHub space. This turned out to be a huge time saver and a very effective mechanism for ensuring every change to the code is reviewed.
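The core rule the plugin enforces can be sketched independently of Jenkins. This is a simplified illustration using plain dictionaries rather than the real GitHub data model:

```python
"""Core check behind a Git SDLC enforcer: every commit must belong to a
pull request, and no pull request may be merged by its own author. The
record shapes here are simplified assumptions, not GitHub's API schema."""


def find_violations(commits, pull_requests):
    """Return human-readable violations for unreviewed changes.

    commits: list of {"sha": str, "pr": int or None}
    pull_requests: dict of pr_number -> {"author": str, "merged_by": str}
    """
    violations = []
    for commit in commits:
        pr_number = commit["pr"]
        if pr_number is None:
            violations.append(f"commit {commit['sha']} has no pull request")
            continue
        pr = pull_requests[pr_number]
        if pr["merged_by"] == pr["author"]:
            violations.append(
                f"PR #{pr_number} was merged by its own author {pr['author']}"
            )
    return violations


# A build step would fail (and file a GitHub issue) if this list is non-empty.
```

Because the output is a simple list of violations, wiring it into a pass/fail build step or an issue tracker is trivial.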

The project that finally earned me my Black Belt, however, was the development of a tool that will eventually fully replace the Adobe Shared Cloud's secret injection mechanism. When working with Amazon Web Services, you have a bit of a chicken-and-egg problem when you begin to automate deployments: at some point, you need an automated way to get the right credentials into the EC2 instances on which your application runs. Currently, the Shared Cloud's secrets management leverages a combination of custom-baked AMIs, IAM roles, S3, and encrypted data bags stored in the “Hosted Chef” service. For many reasons, we wanted to move away from Chef's managed solution and add some additional layers of security, such as the ability to rotate encryption keys, logging of access to secrets, the ability to restrict access to secrets based on environment and role, and full auditability. This new tool was designed as a drop-in replacement for “Hosted Chef” – which made it easier to implement in our baking tool chain – and it replaces the data bag functionality provided by the previous tool while adding the security functionality above. The tool works splendidly, and by the end of the year the Shared Cloud will be using it exclusively, resulting in a much more secure, efficient, reliable, and cost-effective mechanism for injecting secrets.
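One of those added layers – restricting access to secrets by environment and role while logging every access attempt – can be sketched as follows. The policy format is purely illustrative and is not Adobe's actual schema:

```python
"""Sketch of environment- and role-based secret access with an audit trail.
The policy format and secret names are hypothetical illustrations."""

# Each secret lists the (environment, role) pairs allowed to read it.
SECRET_POLICY = {
    "db-password": {("prod", "api-server"), ("stage", "api-server")},
    "signing-key": {("prod", "release-runner")},
}

AUDIT_LOG = []  # a real service would persist every access attempt


def get_secret(name, environment, role, store):
    """Return the secret value if policy allows; log the attempt either way."""
    allowed = (environment, role) in SECRET_POLICY.get(name, set())
    AUDIT_LOG.append({"secret": name, "env": environment,
                      "role": role, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{role}@{environment} may not read {name}")
    return store[name]
```

Because every lookup is logged with its outcome, the audit trail doubles as the evidence an assessor asks for.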

My takeaway from all this is that Adobe has developed a top-notch security training program, and even though I have learned a ton about software security through it, I still have much to learn. I look forward to continuing to make a difference at Adobe.

Jed Glazner
Security Engineer

IT Asset Management: A Key in a Consistent Security Program

IT Asset Management (ITAM) is the complete and accurate inventory, ownership, and governance of IT assets. ITAM is an essential, and often required, foundation of an organization's ability to implement baseline security practices and become compliant with rigorous industry standards. As IT continues to transform, organizations face the challenge of maintaining an accurate inventory of IT assets that spans both physical and virtual devices, as well as static and dynamic, spin-up/spin-down cloud infrastructures.

The absence of ITAM can result in a lack of asset governance and an inaccurate inventory. Without a formalized process, companies might unknowingly be exposed to insecure assets that are open to exploitation. In contrast, proper ITAM enables organizations to maintain a centralized and accurate record of inventory against which security measures can be implemented and applied consistently across the organization's environment.

Risks Without ITAM

Assets that are not inventoried and tracked in an ITAM program present a very real and critical risk to the business. Unknown assets seldom have an appropriate owner identified and assigned. In essence, no one within the organization owns the responsibility of ensuring that the unknown asset is sufficiently governed or secured. As a result, unknown assets can quickly fall out of sync with regulatory or compliance requirements, leaving them vulnerable to exploitation.

In a world of constant patches and hotfixes, an unknown asset can become vulnerable after only a single missed update. Bad actors rarely attack the well-known and security-hardened asset. It is far more common for a bad actor to patiently traverse the organization's network, waiting to attack until they have identified an asset the organization itself doesn't know exists.

Benefits of ITAM

Before a company can sufficiently implement programs designed to protect its operational assets, it must first have the ability to identify and inventory those assets. Companies should put into place processes and controls to automate the inventorying of assets obtained via procurement and virtual machine provisioning. Assets can be inventoried and continuously tracked using a Configuration Management Database (CMDB). Each asset can be inventoried in the CMDB and assigned an owner, who is responsible for asset governance and maintenance until the decommission, or destruction, of the asset.

Processes must also be put into place to continuously monitor and update the CMDB inventory. One example of how Adobe monitors its CMDB is by leveraging operating security controls. For example, Adobe performs an analysis to determine if all assets sending logs to a corporate log server are known assets inventoried in the CMDB. If the asset is not inventoried in the CMDB, then the asset is categorized as an unknown asset. Once unknown assets are identified, further analysis is performed so that the asset can be added to the CMDB and an appropriate owner assigned.
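The reconciliation step described above reduces to a set difference between hosts seen by the log server and hosts inventoried in the CMDB. A minimal sketch, with a hypothetical hostname normalization rule:

```python
"""Sketch of CMDB reconciliation: surface hosts that send logs but are
not inventoried. The hostname normalization rule is an illustrative
assumption; real environments need site-specific canonicalization."""


def unknown_assets(logging_hosts, cmdb_hosts):
    """Return sorted hosts seen on the log server but absent from the CMDB."""
    def normalize(host):
        # Fold case and trailing dots so equivalent names compare equal.
        return host.strip().lower().rstrip(".")

    inventoried = {normalize(h) for h in cmdb_hosts}
    return sorted({normalize(h) for h in logging_hosts} - inventoried)
```

Each host this returns becomes an investigation ticket: identify the asset, assign an owner, and add it to the CMDB.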

At Adobe, we have created the Adobe Common Controls Framework (CCF), a set of control requirements rationalized from the complex array of industry security, privacy, and regulatory standards. CCF provides the necessary controls guidance to assist teams with asset management. ITAM helps provide Adobe internal auditors, as well as third-party external auditors, a centralized asset repository they can leverage to gain reasonable assurance that security controls have been implemented and are operating effectively across the organization's environment.

As described above, maintaining a complete and accurate ITAM program in an organization of any size is no easy task. However, when implemented correctly, ITAM allows organizations to consistently apply security controls across the operating environment, helping reduce the attack surface available to potential bad actors. If organizations are not aware of where their assets are, how can they reasonably know what they need to protect?

Matt Carroll
Sr. Analyst, Risk Advisory and Assurance Services (RAAS)

 

Do You Know How to Recognize Phishing?


By now, most of us know that the email from the Nigerian prince offering us large sums of money in return for our help to get the money out of Nigeria is a scam. We also recognize that the same goes for the email from our bank that is laden with spelling errors. However, phishing attacks have become more sophisticated over the years, and for the most part, it has become much harder to tell the difference between a legitimate piece of communication and a scam.

In recognition of National Cyber Security Awareness Month, we asked a nationally representative sample of ~2,000 computer-owning adults in the United States about their behaviors and knowledge when it comes to cybersecurity. This week, we’ll share some of the insights from our survey related to phishing—as well as resources and tips on how you can better protect yourself from falling victim to phishing attacks.

What is phishing?

Phishing is a form of fraud in which the attacker tries to learn information such as login credentials or account information by masquerading as a legitimate, reputable entity or person in email, instant messages (IMs), or other communication channels. Examples include an email from a bank that is carefully designed to look like a legitimate message but that is coming from a criminally motivated source, a phone message claiming to be from the Internal Revenue Service (IRS) threatening large fines unless you immediately pay what you supposedly owe, or the email from the Nigerian prince pleading for your compassion and promising a large reward. Attackers typically create these communications in an effort to steal money, personal information, or both. Phishing emails or IMs are typically designed to make you click on links or open attachments that look authentic but are really just there to distribute malware on your machine or to capture your credit card number in a form on the attacker's site.

So do YOU know how to recognize phishing?

For the purpose of this blog post, we’ll focus on phishing emails as the attacker’s choice of communication. According to our survey, 70 percent of adults in the United States believe they can identify a phishing email. That percentage rises to 80 percent among Millennials.[i] Yet nearly four (4) in 10 people believe they have been victims of phishing. This goes to show that it’s not as easy to detect phishing emails as it may sound! Here are six tips to help you identify whether you’ve received a “phishy” email:

1. The email urges you to take immediate action

Phishing emails often try to trick you into clicking a link by claiming that your account has been closed or put on hold, or that there’s been fraudulent activity requiring your immediate attention. Of course, it’s possible you may receive a legitimate message informing you to take action on your account. To be safe, don’t click on the link in the email, no matter how authentic it appears to be. Instead, log into the account in question directly by visiting the appropriate website, then check your account status.

2. You don’t recognize the email sender

Another common way to identify a phishing email is if you don’t recognize the email sender. Two-thirds of those individuals we surveyed who believe they can identify a phishing email noted a top indicator to be whether or not they recognized the sender. However, our survey results also show that despite the warning signs, more than four (4) in 10 U.S. adults will still open the email—and among those, nearly half would click on a link inside—potentially putting themselves at risk.

3. The hyperlinked URL is different from the one shown

The hyperlink text in a phishing email may include the name of a legitimate bank. But when you hover the mouse over the link (without clicking on it), you may discover in a small pop-up window that the actual URL differs from the one displayed and that it doesn’t contain the bank’s name. Similarly, you can hover your mouse over the address in the “From” field to see if the website domain matches that of the organization the email is supposed to have been sent from.

4. The email in question has improper spelling or grammar

This is one of the most common signs that an email isn’t legitimate. Sometimes, the mistake is easy to spot, such as “Dear Costumer” instead of “Dear Customer.”

Other mistakes might be more difficult to spot, so make sure to look at the email in closer detail. For example, the subject line or the email itself might say “Health coverage for the unemployeed.” The word “unemployed” isn’t exactly difficult to spell, and any legitimate organization should have editors who review marketing emails carefully before sending them out. So when in doubt, check the email closely for misspellings and improper grammar.

5. The email requests personal information

Reputable organizations don’t ask for personal customer information via email. For example, if you have a checking account, your bank already knows your account number.

6. The email includes suspicious attachments

It would be highly unusual for a legitimate organization to send you an email with an attachment, unless it’s a document you’ve requested or are expecting. As always, if you receive an email that looks in any way suspicious, never click to download the attachment, as it could be malware.

What to do when you think you’ve received a phishing email

Report potential phishing scams. If you think you’ve received a phishing email from someone posing as Adobe, please forward that email to phishing@adobe.com, so we can investigate.

Google also offers online help for reporting phishing websites and phishing attacks. And last but not least, the U.S. government offers valuable tips for protecting yourself from phishing scams as well as an email address for reporting scams: phishing-report@us-cert.gov.

So while the Nigerian prince has become a lot more sophisticated in his tactics, there is a lot you can do to help protect yourself. Most importantly, trust your instincts. If it smells like a scam, it might very well be a scam!


[i] Millennials are considered individuals who reached adulthood around the turn of the 21st century. If you are in your mid-30s today, you are considered a Millennial.

Security Automation Part I: Defining Goals

This is the first of a multi-part series on security automation. This blog will focus on high-level design considerations. The next blog will focus on technical design considerations for building security automation, and the third will dive even deeper with specific examples as the series continues to get more technical.

There are many possible approaches for adding automation to your security process. For many security engineers, it is an opportunity to get away from reviewing other engineers' code and write some of their own. One key difference between a successful automation project and “abandonware” is creating a project that will produce meaningful results for the organization. To accomplish that, it is critical to have a clear idea of what problem you are trying to solve at the outset of the project.

Conceptual aspirations

When choosing the right path for designing security automation, you need to decide what will be the primary goals for the automation. Most automation projects can be grouped into common themes:

Scalability

Scalability is something that most engineers instinctively go to first because the cloud has empowered researchers to do so much more. Security tools designed for scalability often focus on the penetration testing phase of the software development lifecycle. They often involve running black or grey box security tests against the infrastructure in order to confirm that the service was deployed correctly. Scalability is necessary if you are going to measure every host in a large production environment. While scalability is definitely a powerful tool that is necessary in the testing phase of your security development lifecycle, it sometimes overshadows other important options that can occur earlier in the development process and that may be less expensive in terms of resources.

Consistency

There is a lot of value in being able to say that “Every single release has ___.” Consistency isn't considered as challenging a problem as scalability because it is often taken for granted. Consistency is necessary for compliance efforts, where public attestations need to make clear, simple statements that customers and auditors can understand. In addition, “special snowflakes” in the environment can drown an organization's response when issues arise. Consistency automation projects frequently focus on the development or build phase of the software development lifecycle. They include integrating security tasks into build tools like Jenkins, Chef, Git, or Maven. By adding security controls into these central tools in the development phase, you can have reasonable confidence that machines in production are correctly configured without scanning each and every one individually.

Efficiency

Efficiency projects typically focus on improving operational processes that currently involve too much human interaction. The goal is to refocus the team's manual effort on more important tasks. For instance, many efficiency projects have the word “tracking” somewhere in their definition and involve better leveraging tools like JIRA or SharePoint. Often, efficiency automation is purchased rather than built, because you aren't particularly concerned with how the problem gets solved, so long as it gets solved and you aren't the one who has to maintain the code for it. That said, Salesforce's open-source VulnReport.io project (http://vulnreport.io) is an example of a custom-built efficiency tool which they claim improved operational efficiency and essentially resulted “in a ‘free’ extra engineer for our team.”

Metrics

Metrics gathering can be a project in itself or it can be the byproduct of a security automation tool. Metrics help inform and influence management decisions with concrete data. That said, it is important to pick metrics that can guide management and development teams towards solving critical issues. For instance, development teams will interpret the metrics gathered as the key performance indicator (KPI) that they are being measured against by management.

In addition, collecting data almost always leads to requests for more detailed data. This can be useful in helping to understand a complex problem or it can be a distraction that leads the team down a rabbit hole. If you take time to select the proper metrics, then you can help keep the team focused on digging deeper into the critical issues.

Operational Challenges

If your scalable automation project aims to run a web application penetration tool (WAPT) across your entire enterprise, then you are basically creating an “enterprise edition” of that tool. If you have used enterprise-edition WAPTs in the past and did not achieve the success you wanted, then recreating the same concept with a slightly different tool will most likely not produce significantly different results within the enterprise. The success or failure of a tool typically hinges on the operational process surrounding the tool more than on the tool itself. If there is no plan for handling the output from the tool, then increasing the scale at which the tool is run doesn't really matter. When you are designing your automation project, consider operational questions such as:

Are you enumerating a problem that you can fix?

Enumerating an issue that the organization doesn’t have the resources to address can sometimes help justify getting the funding for solving the problem. On the other hand, if you are enumerating a problem that isn’t a priority for an organization, then perhaps you should refocus the automation on issues that are more critical. If no change occurs as the result of the effort, then the project will stop iterating because there is no need to re-measure the issue.

In some situations, it may be better to tackle the technical debt of basic issues before tackling larger issues. Tests for basic technical debt issues are often easier to create and they are easier for the dev team to address. As the dev team addresses the issues, the project will also iterate in response. While technical debt isn’t as exciting as the larger issues, addressing it may be a reasonable first step towards getting immediate ROI.

Are you producing “noise at scale”?

Running a tool that is known for creating a high level of false positives at scale will produce “noise at scale.” Unless you have an “at scale” triage team to eliminate the false positives, you are just generating more work for everyone. Teams are less likely to take action on metrics they believe are debatable, for fear that their time might be wasted. A good security tool will empower the team to be more efficient rather than drown them in reports.

How will metrics guide the development team?

As mentioned earlier, the metric will be interpreted as a KPI for the team and they will focus their strategy around what is reported to management. Therefore, it makes sense not to bother measuring non-critical issues since you don’t want the team to get distracted by minor issues. You will want to make sure that you are collecting metrics on the top issues in a way that will encourage teams towards the desired approach.

Oftentimes there are multiple ways to solve an issue and therefore multiple ways to measure the problem. Let's assume that you wanted to create a project to tackle cross-site scripting (XSS). Creating metrics that count the number of XSS bugs will focus a development team on a bug-fixing approach to the problem. Alternatively, counting the number of sites with Content Security Policy headers deployed will focus the development team on security mitigations for XSS. In some cases, focusing the team on security mitigations has more immediate value than focusing on bug fixing.

What metrics does management need to see?

One method to determine how your metrics will drive development teams and inform management is to phrase them as assertions. For instance, “I assert that the HSTS header is returned by all our sites in order to ensure our data is always encrypted.” By phrasing it as an assertion, you are framing the test in simple true/false terms that can be reliably measured at scale. You are also phrasing the test in terms of its desired outcome, in plain language. This makes it easier to determine whether the goal implied by the metric's assertion aligns with management's goals.
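An assertion like this maps naturally onto a small true/false check. The sketch below takes already-fetched response headers, so it is independent of any particular HTTP client; the one-year max-age threshold is an assumed policy value, not a standard requirement:

```python
"""True/false check for the assertion "the HSTS header is returned by all
our sites." Operates on already-fetched headers; the minimum max-age is
an assumed policy value."""


def asserts_hsts(headers, min_age=31536000):
    """True if Strict-Transport-Security is present with a sufficient max-age."""
    lowered = {k.lower(): v for k, v in headers.items()}
    value = lowered.get("strict-transport-security")
    if value is None:
        return False
    for directive in value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            try:
                return int(directive.split("=", 1)[1]) >= min_age
            except ValueError:
                return False
    return False  # header present but no max-age directive
```

Run across every site, the results roll up into exactly the kind of simple pass/fail metric management can act on.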

From a management perspective, it is also important to understand whether you are taking a measurement of change or a measurement of adoption. Measuring the number of bugs in an application is often measuring an ongoing wave. If security programs are working, the height of the wave will trend down over time. However, that means you have to watch the wave through multiple software releases before you can reliably see a trend in its change. If you measure security mitigations, then you are measuring the adoption rate of a project that will end in a state of relative “completeness.” Tracking wave metrics over time is valuable because you need to see when things are getting better or worse. However, since it is easy to procrastinate on issues that are open-ended, adoption-style projects that can be expressed as having a definitive “end” may get more immediate traction from the development team because you can set a deadline that needs to be met.

Putting it all together

With these ideas and questions in mind, you can mentally weigh which types of projects to start with for immediate ROI and the different tools for deploying them.

For instance, tests that count XSS and blind SQL injection bugs are hard to set up (authentication to the application, crawling the site, etc.), frequently produce false positives, and typically focus the team on bug fixing, which requires in-depth monitoring over time because it is a wave metric. In contrast, a project measuring security headers, such as X-Frame-Options or HSTS, involves tests that are simple to write, have low false-positive rates, can be declared “(mostly) done” once the headers are set, and focus the team on mitigations. Another easy project might be writing scalable tests that confirm the SSL configuration meets the company standards. Therefore, if you are working on a scalability project, starting with a simple SSL or security-header project can be a quick win that demonstrates the value of the tool. From there, you can progress to measuring the more complex issues.

However, let’s say you don’t have the compute resources for a scalability project. An alternative might be to turn the projects into consistency style projects earlier in the lifecycle. You could create Git or Jenkins extensions that search the source code for indicators that the team has deployed security headers or proper SSL configurations. You would then measure how many teams are using the build platform extensions and review the collected results from the extension. It would have a similar effect as testing the deployed machines without as much implementation overhead. Whether this will work better overall for your organization will depend on where you are with your security program and its compliance requirements.
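Such a source-searching extension can be sketched as a simple repository scan. The indicator strings are illustrative; a real check would be tuned to the frameworks and configuration formats your teams actually use:

```python
"""Consistency-style sketch: scan a repository's source for evidence that
security headers are configured, instead of probing deployed hosts. The
indicator strings and flat text search are illustrative assumptions."""
import os

INDICATORS = ("Strict-Transport-Security", "X-Frame-Options",
              "Content-Security-Policy")


def headers_referenced(repo_dir):
    """Return the set of security-header names mentioned anywhere in the repo."""
    found = set()
    for root, _dirs, files in os.walk(repo_dir):
        for name in files:
            try:
                with open(os.path.join(root, name), errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue  # unreadable file; skip rather than fail the scan
            found.update(h for h in INDICATORS if h in text)
    return found


# A build extension would report this set centrally, so the security team
# can measure how many repos reference each header.
```

The trade-off is the one noted above: source indicators are cheaper to collect than live scans but only approximate what is actually deployed.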

Conclusion

While the technical details of how to build security automation are an exciting challenge for any engineer, it is critical to build a system that will empower the organization. Your project will have a better chance of success if you spend time considering how the output of your tool will help guide progress. You can also control the scope of the project, in terms of development effort and coverage, by carefully considering where in the development process you will deploy the automation. By spending time on defining how the tool can best serve the team's security goals, you can help ensure you are building a successful platform for the company.

The next blog will focus on the technical design considerations for building security automation tools.

Peleus Uhley
Principal Scientist, Security