Posts tagged "PCI"

Security Automation for PCI Certification of the Adobe Shared Cloud

Software engineering is a unique and exciting profession. Engineers must employ continuous learning habits in order to keep up with a constantly morphing software ecosystem. This is especially true in the software security space. The continuous introduction of new software also means new security vulnerabilities are introduced. The problem at its core is actually quite simple: it’s a human problem. Engineers are people, and, like all people, they sometimes make mistakes. These mistakes can manifest themselves in the form of ‘bugs’ and usually occur when the software is used in a way the engineer didn’t expect. When these bugs are left uncorrected, they can leave the software vulnerable. Some mistakes are bigger than others and many are preventable. However, as they say, hindsight is always 20/20. You need not experience these mistakes yourself to learn from them. As my father often told me, a smart person learns from his mistakes, but a wise person learns from the mistakes of others. And so it goes with software security. In today’s software world, it’s not enough to just be smart; you also need to be wise.

After working at Adobe for just shy of five years, I have achieved the coveted rank of ‘Black Belt’ in Adobe’s security program through the development of some internal security tools and by assisting in the recent PCI certification of the Shared Cloud (the internal platform upon which Creative Cloud and Document Cloud are based). Through Adobe’s security program, my understanding of security has certainly broadened. I earned my white belt within just a few months of joining Adobe by completing online courses covering basic security best practices. When Adobe created the role of “Security Champion” within every product team, I volunteered. Seeking to make myself a valuable resource to my team, I quickly earned my green belt by completing several advanced online courses covering a range of security topics, from SQL injection and XSS to buffer overflows. I now had two belts down, only two to go.

At the beginning of 2015, the Adobe Shared Cloud team started down the path of PCI compliance. When it became clear that a dedicated team would be needed to manage this, a few others and I made a career shift from software engineer to security engineer in order to form a dedicated security team for the Shared Cloud. To bring ourselves up to speed, we began attending Black Hat and OWASP conferences to further our expertise. We also started the long, arduous task of breaking down the PCI requirements into concrete engineering tasks. It was out of this PCI effort that I developed three tools, one of which would earn me my Brown Belt and the other two my Black Belt.

The first tool came from the PCI requirement that you track all third-party software libraries for vulnerabilities and remediate them based on severity. Working closely with the ASSET team, we developed an API that lets teams push their product dependencies and versions into a central tracker as applications are built. Once that was completed, I wrote an integrated and highly configurable Maven plugin that consumed the API at build time, thereby helping to keep applications up to date automatically as part of our continuous delivery system. After completing this tool, I submitted it as a project and was rewarded with my Brown Belt. My plugin has also been adopted by several teams across Adobe.
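
The idea is simple enough to sketch. The actual plugin is a Java Maven plugin and is not public, so the following is only a rough Python illustration of the concept; the tracker endpoint, payload format, and product name are hypothetical placeholders.

```python
# Rough illustration: collect Maven dependencies and push them to a
# (hypothetical) internal dependency-tracking API.
import json
import re
import subprocess
import urllib.request

TRACKER_URL = "https://dependency-tracker.example.com/api/v1/report"  # hypothetical endpoint

def collect_dependencies():
    """Run 'mvn dependency:list' and parse group:artifact:packaging:version:scope lines."""
    output = subprocess.run(
        ["mvn", "dependency:list"], capture_output=True, text=True, check=True
    ).stdout
    deps = []
    for line in output.splitlines():
        match = re.search(
            r"([\w.\-]+):([\w.\-]+):([\w.\-]+):([\w.\-]+):(compile|runtime|provided)", line
        )
        if match:
            group, artifact, _packaging, version, scope = match.groups()
            deps.append({"group": group, "artifact": artifact,
                         "version": version, "scope": scope})
    return deps

def report(product, deps):
    """POST the dependency list so the tracker can flag known-vulnerable versions."""
    body = json.dumps({"product": product, "dependencies": deps}).encode()
    request = urllib.request.Request(
        TRACKER_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return response.status

if __name__ == "__main__":
    print(report("shared-cloud-service", collect_dependencies()))
```

In the real plugin this happens inside the build itself, so every nightly build refreshes the tracker with a complete, current dependency list.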

The second tool also came from a PCI requirement: no changes are allowed on production servers without a review step, including code changes. At first glance this doesn’t seem so bad; after all, we were already doing regular code reviews, so it shouldn’t be a big deal, right? WRONG! The burden of PCI is that you have to prove that changes were reviewed and that no change could reach production without first being reviewed. There are a number of manual approaches one could take to meet this requirement, but who wants the hassle and overhead of a manual process? Enter my first Black Belt project, the Git SDLC Enforcer Plugin. I developed a Jenkins plugin that runs when a merge lands on a project’s release branch. The plugin reviews the commit history and helps ensure that every commit belongs to a pull request and that each pull request was merged by someone other than its author. If any offending commits or pull requests are found, the build fails and an issue is opened against the project in GitHub. This turned out to be a huge time saver and a very effective mechanism for ensuring that every code change is reviewed.
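
The enforcement logic itself is easy to sketch. The real tool is a Jenkins plugin, but the same check can be expressed against the public GitHub API roughly as follows; the organization, repository, and token environment variable are hypothetical placeholders, and the requests library is assumed.

```python
# Minimal sketch of the review-enforcement check against the GitHub API.
import os
import requests

API = "https://api.github.com"
HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",  # hypothetical token variable
    "Accept": "application/vnd.github+json",
}

def offending_commits(owner, repo, branch="release"):
    """Return commits on the branch that did not go through a peer-reviewed pull request."""
    offenders = []
    commits = requests.get(f"{API}/repos/{owner}/{repo}/commits",
                           params={"sha": branch}, headers=HEADERS).json()
    for commit in commits:
        sha = commit["sha"]
        # Pull requests associated with this commit.
        prs = requests.get(f"{API}/repos/{owner}/{repo}/commits/{sha}/pulls",
                           headers=HEADERS).json()
        merged = [pr for pr in prs if pr.get("merged_at")]
        if not merged:
            offenders.append((sha, "commit does not belong to a merged pull request"))
            continue
        for pr in merged:
            detail = requests.get(f"{API}/repos/{owner}/{repo}/pulls/{pr['number']}",
                                  headers=HEADERS).json()
            merged_by = (detail.get("merged_by") or {}).get("login")
            author = detail["user"]["login"]
            if merged_by == author:
                offenders.append((sha, f"pull request #{pr['number']} was self-merged by {author}"))
    return offenders

if __name__ == "__main__":
    for sha, reason in offending_commits("example-org", "example-repo"):
        print(f"{sha[:8]}: {reason}")
```

The Jenkins plugin runs this kind of check on every merge and, on failure, fails the build and files an issue rather than just printing a report.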

The project that finally earned me my Black Belt, however, was the development of a tool that will eventually fully replace the Adobe Shared Cloud’s secret injection mechanism. When you begin to automate deployments on Amazon Web Services, you run into a bit of a chicken-and-egg problem: at some point, you need an automated way to get the credentials your application needs into the EC2 instances it runs on. Currently the Shared Cloud’s secrets management leverages a combination of custom-baked AMIs, IAM roles, S3, and encrypted data bags stored in the “Hosted Chef” service. For many reasons, we wanted to move away from Chef’s managed solution and add additional layers of security, such as the ability to rotate encryption keys, logging of access to secrets, the ability to restrict access to secrets based on environment and role, and full auditability. The new tool was designed to be a drop-in replacement for “Hosted Chef,” which made it easier to integrate into our baking tool chain; it replaces the data bag functionality of the previous tool and adds security functionality on top. The tool works splendidly, and by the end of the year the Shared Cloud will be using it exclusively, resulting in a much more secure, efficient, reliable, and cost-effective mechanism for injecting secrets.
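
The internal tool is not public, but the general pattern can be sketched with standard AWS building blocks. The following is a minimal illustration, assuming secrets are stored encrypted in S3 and decrypted with KMS; the bucket name, key layout, and encryption context are hypothetical. The instance’s IAM role and the KMS key policy provide the environment- and role-based restrictions, and CloudTrail supplies the audit log.

```python
# Minimal sketch of role-scoped secret retrieval on an EC2 instance
# using S3 + KMS (hypothetical bucket, key layout, and encryption context).
import boto3

s3 = boto3.client("s3")
kms = boto3.client("kms")

def fetch_secret(environment, role, name, bucket="example-secrets-bucket"):
    """Fetch an encrypted secret from S3 and decrypt it with KMS.

    The instance's IAM role limits which S3 prefixes it can read, the KMS key
    policy limits who can decrypt, and the encryption context binds each
    ciphertext to an environment/role so it cannot be reused elsewhere.
    CloudTrail records every GetObject and Decrypt call for auditing.
    """
    obj = s3.get_object(Bucket=bucket, Key=f"{environment}/{role}/{name}")
    ciphertext = obj["Body"].read()
    result = kms.decrypt(
        CiphertextBlob=ciphertext,
        EncryptionContext={"environment": environment, "role": role},
    )
    return result["Plaintext"].decode("utf-8")

if __name__ == "__main__":
    print(fetch_secret("prod", "api-server", "db_password"))
```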

My takeaway from all this is that Adobe has developed a top-notch security training program, and even though I have learned a ton about software security through it, I still have much to learn. I look forward to continuing to make a difference at Adobe.

Jed Glazner
Security Engineer

Adobe’s CCF Helps Acquisitions Meet Security Compliance Requirements

The Common Controls Framework (CCF) is a comprehensive set of control requirements, rationalized from the alphabet soup of several different industry information security and privacy standards. To help ensure that our standards effectively meet our customers’ expectations, we are constantly refining this framework based on industry requirement changes, customer asks, and internal feedback.

As Adobe continues to grow as an organization and onboard new acquisitions, CCF enables these acquisitions to come into compliance more quickly. At Adobe, the goal is for acquisitions to meet the organization’s security practices and standards and come up to speed with its compliance roadmap. CCF enables new acquisitions to inherit existing, simple, and scalable solutions, significantly reducing the overall effort required to meet compliance goals.

The journey for the newest members of the Adobe family (read: acquisitions) begins with a gap assessment against the CCF. Once the gaps against the existing CCF controls are determined, the team can leverage the many scalable driver-subscriber controls (read more about driver-subscriber controls at Adobe) that are aligned with the CCF to remediate the majority of the gaps. Once that remediation is complete, what usually remains is a handful of controls that need to be implemented to achieve compliance.

Another key component of security compliance is ensuring that proper supporting documentation is in place. In most cases, the acquisition can leverage existing documents used by Adobe product teams that have already achieved compliance or are on the roadmap to do so, since they all address the same CCF requirements. The team can therefore often subscribe to the existing documentation when subscribing to a service. For the standalone controls, teams can use existing templates aligned with the CCF to speed up the documentation effort.

Example of Implementation

One of our recent acquisitions was required to achieve PCI DSS compliance and, as a result, underwent a gap assessment against the CCF controls. The acquisition was able to leverage many existing solutions, such as multifactor authentication for production access, hardened baseline images, security monitoring, and incident response processes, to achieve compliance. In the end, the team was required to implement only a handful of standalone controls.

Given the updated requirement in PCI DSS 3.2 around multifactor authentication, this acquisition will not be affected by the change, as it had already implemented multifactor authentication to satisfy requirements listed in the CCF.

Conclusion

Adobe’s CCF controls are helping new acquisitions achieve security compliance more quickly. These teams are able to leverage much of the existing infrastructure Adobe has put in place to meet and maintain its security certifications. The overall burden of implementing these controls is therefore significantly reduced, and the acquisition, now part of Adobe, can continue to delight our customers while remaining compliant with Adobe’s security requirements.

Rahat Sethi
Sr. Information Security Analyst, Adobe Risk Assurance and Analysis Services (RAAS)

Adobe CCF Enables Quicker Adherence to Updated PCI Standards

The Adobe.com e-commerce store has been a PCI Level 1 certified merchant for the last few years. Adobe has significantly reduced the scope of its Cardholder Data Environment (CDE) by using an external tokenization solution, and it maintains a PAN-free environment by not storing any Primary Account Numbers (PANs) on its internal network. Adobe has implemented its Common Controls Framework (CCF) within the Cardholder Data Environment, which allows it to use the same set of controls to meet the requirements set forth by the Payment Card Industry Data Security Standard (PCI DSS) v3.1 and many other security and compliance frameworks, such as ISO 27001:2013 and SOC 2. CCF is a set of approximately 250 controls designed specifically for Adobe’s business; it rationalizes the overlapping requirements of 10 different compliance and security frameworks.

The PCI Security Standards Council (PCI SSC) recently released the latest version of the Data Security Standard, v3.2. One of the notable changes in PCI DSS v3.2 is the additional clarification around the use of multi-factor authentication for all administrative and remote access to the CDE.

PCI DSS V3.2 reference:

“8.3 Secure all individual non-console administrative access and all remote access to the CDE using multi-factor authentication.”

By implementing CCF within the CDE, Adobe has already established a baseline control requiring all remote VPN sessions and production environments to be accessed via multi-factor authentication. This baseline control was adopted to meet the requirements of the more stringent compliance frameworks, which means Adobe was already compliant with the clarifications provided in PCI DSS v3.2 around multi-factor authentication.

Prasant Vadlamudi
Manager, Risk Advisory and Assurance Services (RAAS)

Marketing Cloud Gains New Compliance Wins

Over the past couple of years, we have developed the Adobe Common Controls Framework (CCF), enabling our cloud products, services, platforms, and operations to achieve compliance with various security certifications, standards, and regulations (SOC 2, ISO, PCI, HIPAA, FedRAMP, etc.). The CCF is a cornerstone of our company-wide security strategy. The framework has gained acceptance and visibility across our businesses, leading to a growing roster of certifications.

Last week, Adobe Marketing Cloud became compliant with SOC 2 Type 1. This certification also enables our financial institution customers to comply with the Gramm-Leach-Bliley Act (GLBA) requirements for using service providers.

In addition to SOC 2 Type 1, Adobe Experience Manager Managed Services (AEM MS) and Adobe Connect Managed Services (AC MS) have achieved compliance with ISO 27001. AEM MS has also achieved compliance with HIPAA, now joining AC MS in this designation. This is in addition to the recently confirmed FedRAMP certification for both of these solutions, achieved in 2015.

During 2015, the Document Cloud eSign service also implemented the CCF and became compliant with SOC 2 Type 2, ISO 27001, PCI, and HIPAA requirements. Please refer to the “Adobe Security and Privacy Certifications” white paper on Adobe.com for the most up-to-date information about our certifications across our products and services.

Over the past three years, we have made significant investments across the company to harmonize various security functions, compliance and governance processes, and technologies. These are major accomplishments and milestones for Adobe’s cloud services and products, allowing us to provide our customers with assurance that their data and applications are more secure.

We have also been out in the security and compliance community, talking with information security and compliance professionals about CCF. This has enabled further collaboration with industry peers in this area. It is all part of our ongoing commitment to help protect our customers and their data. We will update you in future posts on this blog as we achieve additional compliance milestones.

Abhi Pandit
Sr. Director – Risk Advisory and Assurance


Better Security Through Automation

Automation Strategies

“Automate all the things!” is a popular meme in the cloud community. Many of today’s talks at security conferences discuss the latest sophisticated automation tool developed by a particular organization. However, adding “automation” to a project does not magically make things better by itself. Any idea can be automated, including the bad ones. For instance, delivering false positives “at scale” is not going to help your teams. This post discusses some of the projects we are currently working on and the reasoning behind their goals.

Computer science has been focused on automation since its inception. The advent of the cloud simply frees our ideas from being bound by hardware resources. However, that doesn’t mean that every automation project must consume 100 scalable machines; sometimes simple automation projects can have large impacts. Within Adobe, we have several types of automation projects underway to help us with security. Their goals range from business-level dashboards and compliance projects to low-level security testing.


Defining goals

One large project that we are currently building is a security automation framework focused on security assertions. When you run a traditional web security scanner against a site, it will try to tell you everything about everything on the site. In order to do that effectively, you have to do a lot of pre-configuration (authentication, excluded directories, etc.). Working with Mohit Kalra, the Sr. Security Manager for the ASSET security team, we experimented with the idea of security assertions. Basically, could a scanner answer one true/false question about a site with a high degree of accuracy? We could then ask that one simple question across all of our properties in order to get a meaningful measurement.

For instance, let’s compare the following two possible automation goals for combating XSS:

(a) Traditional automation: Give me the location of every XSS vulnerability for this site.

(b) Security assertion: Does the site return a Content Security Policy (CSP) header?

A web application testing tool like ZAP can be used to automate either goal. Both of these tests can be conducted across all of your properties for testing at scale. Which goal you choose will decide the direction of your project:

Effort to implement:

(a) Potentially requires effort toward tuning and configuration with a robust scanner in order to get solid results. There is a potential risk to the tested environment (excessive DB entries, high traffic, etc.).

(b) A straightforward measurement with a simple scanner or script. There is a low risk to the tested environment.

Summarizing the result for management:

(a) This approach provides a complex measurement of risk that can involve several variables (reflected vs. persistent, potential value of the site, cookie strategy, etc.). The risk that is measured is a point-in-time assessment since new XSS bugs might be introduced later with new code.

(b) This approach provides a simple measurement of best practice adoption across the organization. A risk measurement can be inferred but it is not absolute. If CSP adoption is already high, then more fine-grained tests targeting individual rules will be necessary. However, if CSP adoption is still in the early stages, then just measuring who has started the adoption process can be useful.

Developer interpretation of the result:

(a) Development teams will think in terms of immediate bugs filed.

(b) Development teams will focus on the long term goal of defining a basic CSP.

Both (a) and (b) have merits depending on the needs of the organization. The traditional strategy (a) can give you very specific data about how prevalent XSS bugs are across the organization. However, tuning the tools to effectively find and report all that data is a significant time investment. The security assertion strategy (b) focuses more on long-term XSS mitigations by measuring CSP adoption within the organization. The test is simpler to implement with less risk to the target environments. Tackling smaller automation projects has the added value of providing experience that may be necessary when designing larger automation projects.
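
To make the contrast concrete, the security assertion in (b) can be implemented in a few lines. The following is a minimal sketch, assuming the requests library and a hypothetical list of properties; a real deployment would fan this check out across every property in the organization.

```python
# Minimal sketch of a security assertion: "Does the site return a
# Content-Security-Policy header?"
import requests

PROPERTIES = [
    "https://example-app-1.example.com",  # hypothetical properties
    "https://example-app-2.example.com",
]

def has_csp(url):
    """Return True if the response includes a Content-Security-Policy header."""
    response = requests.get(url, timeout=10, allow_redirects=True)
    return "Content-Security-Policy" in response.headers

if __name__ == "__main__":
    for url in PROPERTIES:
        print(("PASS" if has_csp(url) else "FAIL"), url)
```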

Which goal is the higher priority will depend on your organization’s current needs. We found that, in playing with the true/false approach of security assertions, we focused more of our energy on what data was necessary rather than just what data was possible. In addition, since security assertions are assumed to be simple tests, we focused more of our design efforts on perfecting the architecture of a scalable testing environment rather than on the idiosyncrasies of the tools that the environment would be running. Many automation projects try to achieve depth and breadth at the same time by running complex tools at scale. We decided to take an intermediate step by using security assertions to focus on breadth first and then layer on depth as we proceed.


Focused automation within continual deployment

Creating automation environments to scan entire organizations can be a long term project. Smaller automation projects can often provide quick wins and valuable experience on building automation. For instance, continuous build systems are often a single chokepoint through which a large portion of your cloud must pass before deployment. Many of today’s continuous build environments allow for extensions that can be used to automate processes.

As an example, PCI requires that code check-ins be reviewed. Verifying that this process is followed consistently requires significant human labor. One of our Creative Cloud security champions, Jed Glazner, developed a Jenkins plugin that can verify each check-in was reviewed. The plugin monitors the specified branch and ensures that all commits belong to a pull request and that the pull requests were not self-merged. This allows for daily, automatic verification of the process for compliance.

Jed worked on a similar project in which he created a Maven plugin that lists all third-party Java libraries and their versions within an application. The plugin then uploads that information to our third-party library tracker so that we can immediately identify libraries that need updates. Since the plugin is integrated into the Maven build system, the data provided to the third-party library tracker is always based on the latest nightly build and is always a complete list.


Your organization will eventually build, buy, or borrow a large-scale automation tool that scales out the enumeration of immediate risk issues. However, before you jump head first into building a robust scanning environment from scratch, be sure to first identify the core questions you need the tools to answer in order to support your program. You might find that starting with smaller automation tasks that track long-term objectives or operational best practices can be just as useful to the organization. Deploying these smaller projects can also provide experience that will help your plans for larger automation projects.


Peleus Uhley
Principal Scientist

Updated Security Information for Adobe Creative Cloud

As part of our major release of Creative Cloud on June 16, 2015, we released an updated version of our security white paper for Adobe Creative Cloud for enterprise. In addition, we released a new white paper about the security architecture and capabilities of Adobe Business Catalyst. This updated information helps IT security professionals evaluate the security posture of our Creative Cloud offerings.

Adobe Creative Cloud for enterprise gives large organizations access to Adobe’s creative desktop and mobile applications and services, workgroup collaboration, and license management tools. It also includes flexible deployment, identity management options including Federated ID with Single Sign-On, annual license true-ups, and enterprise-level customer support — and it works with other Adobe enterprise offerings. This version of the white paper includes updated information about:

  • Various enterprise storage options now available, including updated information about geolocation of shared storage data
  • Enhancements to entitlement and identity management services
  • Enhancements to password management
  • Security architecture of shared services and the new enterprise managed services

Adobe Business Catalyst is an all-in-one business website and online marketing solution providing an integrated platform for content management (CMS), customer relationship management (CRM), email marketing, e-commerce, and analytics. The new security white paper includes information about:

  • Overall architecture of Business Catalyst
  • PCI DSS compliance information
  • Authentication and services
  • Ongoing risk management for the Business Catalyst application and infrastructure

Both white papers are available for download on the Adobe Security resources page on adobe.com.


Chris Parkerson
Sr. Marketing Strategy Manager

Top 10 Web Hacking Techniques of 2014

This year, I once again had the privilege of being one of the judges for the “Top 10 Web Hacking Techniques” list organized by Matt Johansen and Johnathan Kuskos of the WhiteHat Security team. This is a great honor and a lot of fun to do, although the task of voting also requires a lot of reflection. A significant amount of work went into finding these issues, and that deserves respect when weighing which one belongs in the top spot. This post reflects my personal interpretation of this year’s nominees.

My first job as a judge is to establish my criteria for judging. For instance:

  • Did the issue involve original or creative research?
  • What was the ultimate severity of the issue?
  • How many people could be affected by the vulnerability?
  • Did the finding change the conversation in the security community?

The last question is what made judging this year’s entries different from previous years. Many of the bugs were creative and could be damaging for a large number of people. However, for several of the top entries, the attention they received helped change the conversation in the security industry.

A common trend in this year’s top 10 was the need to update third-party libraries. Obviously, Heartbleed (#1) and POODLE (#3) brought attention to keeping OpenSSL up to date. However, if you read the details of the Misfortune Cookie attack (#5), you find the following:

AllegroSoft issued a fixed version to address the Misfortune Cookie vulnerability in 2005, which was provided to licensed manufacturers. The patch propagation cycle, however, is incredibly slow (sometimes non-existent) with these types of devices. We can confirm many devices today still ship with the vulnerable version in place. 

Third-party libraries can be difficult to track and maintain in large organizations and large projects. Kymberlee Price and Jake Kouns spent the year giving great presentations on the risks of third-party code and how to deal with it.

Heartbleed and Shellshock were also part of the year of making attacks media-friendly by providing designer logos. Many of us rolled our eyes at how the logos drew additional media attention to the issues. However, it is impossible to ignore how that added attention helped expedite difficult projects such as the deprecation of SSLv3. Looking beyond the logos, these bugs had other attributes that made them accessible in terms of tracking and understanding their severity. For instance, besides a memorable name, Heartbleed included a detailed FAQ that helped quickly explain the bug’s impact. Typically, a researcher would have had to dig through source code changelists, which is difficult, or consult Heartbleed’s CVSS score (5 out of 10), which can be misleading. Once you remove the cynicism from the logo discussion, the question that remains is what the industry can learn from these events that will allow us to better communicate critical information to a mass audience.

In addition, these vulnerabilities brought attention to the discussion around the “many eyes make all bugs shallow” theory. Shellshock was a vulnerability that went undetected for years in the default shell used by most security engineers. Once security engineers began reviewing the code affected by Shellshock, three other CVEs were identified within the same week. The remote code execution in the Apache Struts ClassLoader (#8) was another example of a vulnerability in a popular open-source project. The Heartbleed vulnerability prompted the creation of the Core Infrastructure Initiative (CII) to formally assist with projects like OpenSSL, OpenSSH, and the Network Time Protocol. Prior to the CII, OpenSSL received only about $2,000 per year in donations. The CII funding makes it possible to pursue projects such as having the NCC Group’s consultants audit OpenSSL.

In addition to bugs in third-party libraries, there was also some creative original research. For instance, the Rosetta Flash vulnerability (#4) combined the fact that JSONP endpoints allow attackers to control the first few bytes of a response with the fact that the zlib compression format lets you choose the characters used in the compressed output. Combining these two issues meant that an attacker could bounce a specially crafted, zlib-compressed SWF file off of a JSONP endpoint and get it to execute in that domain’s context. This technique worked against the JSONP endpoints of several popular websites. Rather than asking JSONP endpoints to add data validation, Adobe changed Flash Player so that it restricts the types of zlib-compressed SWF data it accepts.

The 6th and 7th issues on the list both dealt with authentication, reminding us that authentication systems are a complex network of trust. The research into “Hacking PayPal with One Click” (#6) combined three different bugs to create a CSRF attack against PayPal. While the details around the “Google Two-Factor Authentication Bypass” (#7) weren’t completely clear, it also reminded us that many trust systems are chained together. Two-factor authentication systems frequently rely on your phone, and if you can social engineer a mobile carrier into redirecting the victim’s account, you can subvert the second factor.

The last two issues dealt with subtler problems than remote code execution, and both show how little things can matter. The Facebook DDoS attack (#9) leveraged the Notes service’s simple support for image tags: include enough image tags across enough notes, and you could get over 100 Facebook servers generating traffic toward the target. Lastly, “Covert Timing Channels based on HTTP Cache Headers” (#10) looked at ways hidden messages can be conveyed via headers that would otherwise be ignored in most traffic analysis.

Overall, this year was interesting in terms of how the bugs changed our industry. For instance, the fact that a large portion of the industry depends on OpenSSL was well known, but without Heartbleed, the funding to have a major consulting firm perform a formal security audit would never have materialized. Research from POODLE demonstrated that a significant number of sites in the Alexa Top 1000 hadn’t adopted TLS, which has been around since 1999. POODLE helped force the industry to accelerate the migration off of SSLv3 and onto TLS. In February, the PCI Standards Council announced that “because of these weaknesses, no version of SSL meets PCI SSC’s definition of ‘strong cryptography.’” When a researcher’s work identifies a major risk, that is clearly important within the scope of that one product or service. When a researcher’s work can help change the course of the industry, that is truly remarkable.

For those attending RSA Conference, Matt Johansen and Johnathan Kuskos will be presenting the details of the Top 10 Web Hacking Techniques of 2014 on April 24 at 9:00 AM.


Peleus Uhley
Lead Security Strategist