Posts tagged "AWS"

Developing an Amazon Web Services (AWS) Security Standard

Adobe has an established footprint on Amazon Web Services (AWS). It started in 2008 with Managed Services, and expanded greatly with the launch of Creative Cloud in 2012 and the migration of Business Catalyst to AWS in 2013. Over that time, we found it increasingly challenging to keep up with AWS security review needs. To scale, it was clear we needed a defined set of minimum AWS security requirements and automated tooling for auditing AWS environments against them. This might sound simple, but like many things, the devil was in the details. It took focused effort to ensure the result was a success. So how did we get here? Let’s start from the top.

First, we needed to decide on the optimal output format. Adobe consists of multiple Business Units (BUs), and there are many teams within those BUs. We needed security requirements that could be broadly applied across the company as well as to acquisitions: requirements that would cover both existing and new services across BUs and also be future-proof. Given these constraints, creating a formal standard for our teams to follow was the best choice.

Second, we needed to build a community of stakeholders in the project. For projects with broad impact such as this, it’s best to have equally broad stakeholder engagement. I made sure we had multiple key representatives from all the BUs (leads, architects, and engineers) and that various security roles were represented (application security, operational security, incident response, and our security operations center). This led to many strong opinions about direction, so it was important to be an active communication facilitator and ensure every team’s needs were met.

Third, we reviewed other efforts in the security industry to see what we could learn. There are many AWS security-related whitepapers from various members of the security community, and there have been multiple security-focused AWS re:Invent presentations over the years. There’s also AWS’s Trusted Advisor and Config Rules, plus open-source AWS security assessment tools like Security Monkey from Netflix and Scout2 from NCC Group. These are all good places to glean information from.

While all of these varied information sources are fine and dandy, is their security guidance relevant to Adobe?  Does it address Adobe’s highest risk areas in AWS?  Uncritically following public guidance could result in the existence of a standard for the sake of having a standard – not one that delivered benefits for Adobe.

We defined the standard’s initial scope using a combination of security community input, internally and externally documented best practices, and a review of common patterns and likely areas of improvement. At the time the requirements were being drafted, AWS had over 30 services, and it was unreasonable (and unnecessary) to create security guidance covering all of them. The initial scope for the draft minimum security requirements was AWS account management, Identity & Access Management (IAM), and compute (Amazon Elastic Compute Cloud (EC2) and Virtual Private Cloud (VPC)).

We worked with AWS stakeholders within Adobe through monthly one-hour meetings to get agreement on the minimum-bar security requirements for AWS, which would apply to all of Adobe’s AWS accounts (dev, stage, prod, testing, QA, R&D, personal projects, etc.). We knew we’d want a higher security bar for environments that handle more sensitive classes of data or are customer facing. We held a two-day AWS security summit purely focused on defining these higher-bar security requirements, ensuring all stakeholders had their voices heard and avoiding contention as the standard was finalized.

As a result of the summit, the teams were able to define higher-bar security requirements covering account management/IAM and compute, spanning architecture, fleet management, data handling, and even requirements beyond EC2/VPC, including AWS-managed services such as S3, DynamoDB, and SQS.

I then worked with Adobe’s Information Systems Security Policies & Standards team to publish an Adobe-wide standard. I transformed the technical requirements into the appropriate standard format, which was then submitted to Adobe’s broader standards teams for review. After this review, it was ready for formal approval.

The necessary teams agreed to the standard and it was officially published internally in August 2016. I then created documentation to help teams use the AWS CLI to audit for and correct gaps against the minimum-bar requirements. We also communicated the availability of the standard and began assisting teams toward compliance with it.
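
As a flavor of what this kind of auditing looks like, the sketch below performs two common baseline checks (root account MFA and an IAM password policy) against the AWS APIs. Our documentation used AWS CLI commands; this sketch uses boto3, the AWS SDK for Python, and the two checks shown are illustrative assumptions rather than the standard’s actual requirements.

```python
# Minimal sketch: audit two hypothetical "minimum bar" checks with boto3.
# Assumes credentials for the target AWS account are already configured.
import boto3
from botocore.exceptions import ClientError

iam = boto3.client("iam")

# Check 1: the root account should have MFA enabled.
summary = iam.get_account_summary()["SummaryMap"]
root_mfa_enabled = summary.get("AccountMFAEnabled", 0) == 1
print(f"Root MFA enabled: {root_mfa_enabled}")

# Check 2: an IAM password policy should exist and require long passwords.
try:
    policy = iam.get_account_password_policy()["PasswordPolicy"]
    strong_enough = policy.get("MinimumPasswordLength", 0) >= 12
    print(f"Password policy present, min length >= 12: {strong_enough}")
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchEntity":
        print("No IAM password policy is set for this account.")
    else:
        raise
```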

Overall, the standard has been well received by teams. They understand the value of the standard and its requirements in helping Adobe ensure better security across our AWS deployments. We have also developed timelines with various teams to help them achieve compliance. Since our AWS Security Standard was released, we have seen notable scalability improvements and fewer reported security issues. This effort continues to help us deliver the security and reliability our customers expect from our products and services.

Cynthia Spiess
Web Security Researcher

Evolving an Application Security Team

A centralized application security team, similar to ours here at Adobe, can be the key to driving the security vision of the company. It helps implement the Secure Product Lifecycle (SPLC) and provides security expertise within the organization. To stay current and have impact, a security team also needs to be in a mode of continuous evolution and learning. At its inception, such a team’s impact is usually limited to the applications it reviews. As the team matures, it can start to influence the security posture of the whole organization. I lead the team of security researchers at Adobe, whose charter is to provide security expertise to all application teams in the company. We have seen our team mature over time, and as we look back, we would like to share the phases of evolution we have gone through along the way.

Stage 1:  Dig Deeper

In the first stage, the team is forming and acquires the required security skills through hiring and organic learning. The individuals on the team bring varied security expertise, experience, and skillsets to the team. During this stage, the team looks for ways to apply its security knowledge to the applications that the engineering teams develop. The security team starts by doing deep dives into application architectures and understanding why the products are being created in the first place. Here the team comes to understand the organization’s application portfolio, observes common design patterns, and then starts to build the bigger picture of how applications come together as a solution. Sharing learnings within the team is key to accelerating to the next stage.

By reviewing applications individually, the security team starts to understand the “elephants in the room” better and is also able to prioritize based on risk profile. A security team will primarily witness this stage during inception. But, it could witness it again if it undergoes major course changes, takes on new areas such as an acquisition, or must take on a new technical direction.

Stage 2: Research

In the second stage, the security team is already able to perform security reviews for most applications, or at least a thematically related group of them, with relative ease. The security team may then start to observe gaps in its security know-how due to changes in broader industry or company engineering practices, or the adoption of new technology stacks.

During this phase, the security team starts to invest time in researching any necessary security tradeoffs and relative security strength of newer technologies being explored or adopted by application engineering teams. This research and its practical application within the organization has the benefit of helping to establish security experts on a wider range of topics within the team.

This stage helps security teams stay ahead of the curve, grow security subject matter expertise, update training materials, and give more meaningful security advice to other engineering teams. For example, Adobe’s application security team was initially skilled in desktop security best practices. It evolved its skillset as the company launched products centered around cloud and mobile platforms. This newly acquired skillset required further specialization when the company started exploring more “bleeding edge” cloud technologies such as containerization for building micro-services.

Stage 3: Security Impact

As security teams become efficient in reviewing solutions and can keep up with technological advances, they can start looking at consistent security improvements across their entire organization. This has the potential for much broader impact, and it therefore requires the security team to scale appropriately to match the increasing demands placed upon it.

If a security team member wants to make this broader impact, the first step is identifying a problem that applies to a larger set of applications. In other words, you ask members of the security team to pick and own a particularly interesting security problem and try to solve it across a larger section of the company.

Within Adobe, we were able to identify a handful of key projects that fit the above criteria for our security team to tackle. Some examples include:

  1. Defining the Amazon Web Services (AWS) minimum security bar for the entire company
  2. Implementing appropriate transport layer security (TLS) configurations on Adobe domains
  3. Validating that product teams did not store secrets or credentials in their source code
  4. Enforcing use of browser-supported security headers (e.g., X-XSS-Protection, X-Frame-Options) to help protect web applications.

The scope of these solutions varied from just business critical applications to the entire company.
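
As a simple illustration of the fourth example, a header check of this kind is easy to make repeatable with a short script. The sketch below uses Python’s requests library; the header list and example domain are assumptions for illustration only, not Adobe’s actual requirements.

```python
# Minimal sketch: verify that a site returns a few common security headers.
# The header list and domain below are illustrative assumptions only.
import requests

EXPECTED_HEADERS = [
    "X-Frame-Options",
    "X-Content-Type-Options",
    "X-XSS-Protection",
    "Strict-Transport-Security",
]

def check_security_headers(url: str) -> dict:
    """Return a mapping of expected header name -> present (True/False)."""
    response = requests.get(url, timeout=10)
    return {name: name in response.headers for name in EXPECTED_HEADERS}

if __name__ == "__main__":
    for header, present in check_security_headers("https://www.example.org").items():
        print(f"{header}: {'present' if present else 'MISSING'}")
```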

Some guidelines that we set within our own team to achieve this were as follows:

  1. The problem statement, like an elevator pitch, should be simple and easily understandable by all levels within the engineering teams – including senior management.
  2. The security researcher was free to define the scope and choose how the problem could be solved.
  3. The improvements made by engineering teams should be measurable in a repeatable way. This would allow for easier identification of any security regressions.
  4. Existing channels for reporting security backlog items to engineering teams must be utilized versus spinning up new processes.
  5. Though automation is generally viewed as a key to scalability for these types of solutions, the team also had flexibility to adopt any method deemed most appropriate. For example, a researcher could front-load code analysis and only provide precise security flaws uncovered to the application engineering team.  Similarly, a researcher could establish required “minimum bars” for application engineering teams helping to set new company security standards. The onus is then placed on the application engineering teams to achieve compliance against the new or updated standards.

For projects that required running tests repeatedly, we leveraged our Security Automation Framework, which helped automate tasks such as validation. For others, clear standards were established for application engineering teams to follow. Once the security team reached a defined level of confidence about compliance with those standards, automated validation could be introduced.

Pivoting Around an Application versus a Problem

When applications undergo a penetration test, threat modeling, or a tool-based scan, teams must first address critical issues before resolving lower-priority ones. Such an effort probes an application from many directions in an attempt to surface all of its security issues. In this case, the focus is on the application, and its security issues are not known when the engagement starts. Once problems are found, the application team owns fixing them.

On the other hand, if you choose to tackle one of the broader security problems for the organization, you test against a range of applications, mitigate the problem as quickly as possible for those applications, and set a goal to eventually eradicate the issue from the organization entirely. Today, teams are often forced into reactively resolving such big problems as critical issues – often due to broader industry security vulnerabilities that affect multiple companies all at once. Heartbleed and other similarly named vulnerabilities are good examples of this. The Adobe security team attempts to resolve as many known issues as possible proactively, to help lessen the organizational disruption when big industry-wide issues come along. This approach is our recipe for having a positive security impact across the organization.

It is worth noting that security teams will move in and out of the above stages and the stages will tend to repeat themselves over time.  For example, a new acquisition or a new platform might require deeper dives to understand.  Similarly, new technology trends will require investment in research.  Going after specific, broad security problems complements the deep dives and helps improve the security posture for the entire company.

We have found it very useful to have members of the security team take ownership of these “really big” security trends we see and help improve results across the company around them. These efforts are ongoing, and we will share more insights in future blog posts.

Mohit Kalra
Sr. Manager, Secure Software Engineering

Security Automation for PCI Certification of the Adobe Shared Cloud

Software engineering is a unique and exciting profession. Engineers must employ continuous learning habits in order to keep up with a constantly morphing software ecosystem. This is especially true in the software security space. The continuous introduction of new software also means new security vulnerabilities are introduced. The problem at its core is actually quite simple: it’s a human problem. Engineers are people, and, like all people, they sometimes make mistakes. These mistakes can manifest themselves in the form of ‘bugs’ and usually occur when the software is used in a way the engineer didn’t expect. When these bugs are left uncorrected, they can leave the software vulnerable. Some mistakes are bigger than others, and many are preventable. However, as they say, hindsight is always 20/20. You need not experience these mistakes yourself to learn from them. As my father often told me, a smart person learns from his mistakes, but a wise person learns from the mistakes of others. And so it goes with software security. In today’s software world, it’s not enough to just be smart; you also need to be wise.

After working at Adobe for just shy of five years, I have achieved the coveted rank of ‘Black Belt’ in Adobe’s security program through the development of some internal security tools and assisting in the recent PCI certification of the Shared Cloud (the internal platform upon which Creative Cloud and Document Cloud are based). Through Adobe’s security program, my understanding of security has certainly broadened. I earned my white belt, which consisted of some online courses covering very basic security best practices, within just a few months of joining Adobe. When Adobe created the role of “Security Champion” within every product team, I volunteered. Seeking to make myself a valuable resource to my team, I quickly earned my green belt, which consisted of completing several advanced online courses covering a range of security topics from SQL injection and XSS to buffer overflows. I now had two belts down, only two to go.

At the beginning of 2015, the Adobe Shared Cloud team started down the path of PCI compliance. When it became clear that a dedicated team would be needed to manage this, a few others and I made a career shift from software engineer to security engineer in order to form a dedicated security team for the Shared Cloud. To bring ourselves up to speed, we began attending Black Hat and OWASP conferences to further our expertise. We also started the long, arduous task of breaking down the PCI requirements into concrete engineering tasks. It was out of this PCI effort that I developed three tools – one of which would earn me my Brown Belt, and the other two my Black Belt.

The first tool came from the PCI requirement to track all third-party software libraries for vulnerabilities and remediate them based on severity. Working closely with the ASSET team, we developed an API that allows teams to push product dependency and version information for applications as they are built. Once that was completed, I wrote an integrated and highly configurable Maven plugin that consumed the API at build time, helping to keep applications up to date automatically as part of our continuous delivery system. After completing this tool, I submitted it as a project and was rewarded with my Brown Belt. My plugin has also been adopted by several teams across Adobe.
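
The ASSET API and the Maven plugin are internal, so the following is only a conceptual sketch in Python of the same idea: at build time, collect the resolved dependencies and push name/version pairs to a tracking service. The endpoint and payload shape are hypothetical.

```python
# Conceptual sketch only: push a build's resolved dependencies to a
# (hypothetical) vulnerability-tracking API. The real implementation was a
# Maven plugin talking to an internal Adobe service.
import requests

TRACKING_API = "https://dependency-tracker.example.internal/api/v1/builds"  # hypothetical

def report_dependencies(app_name: str, version: str, dependencies: dict) -> None:
    """dependencies maps artifact name -> resolved version, e.g. {"commons-io": "2.4"}."""
    payload = {
        "application": app_name,
        "version": version,
        "dependencies": [
            {"name": name, "version": ver} for name, ver in dependencies.items()
        ],
    }
    response = requests.post(TRACKING_API, json=payload, timeout=10)
    response.raise_for_status()

# Example usage during a build step:
# report_dependencies("shared-cloud-service", "1.4.2", {"commons-io": "2.4"})
```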

The second tool also came from a PCI requirement: no changes are allowed on production servers without a review step, including code changes. At first glance this doesn’t seem so bad – after all, we were already doing regular code reviews, so it shouldn’t be a big deal, right? WRONG! The burden of PCI is that you have to prove that changes were reviewed and that no change could reach production without first being reviewed. There were a number of manual approaches one could take to meet this requirement, but who wants the hassle and overhead of a manual process? Enter my first Black Belt project – the Git SDLC Enforcer Plugin. I developed a Jenkins plugin that runs on every merge to a project’s release branch. The plugin reviews the commit history and helps ensure that every commit belongs to a pull request and that each pull request was merged by someone other than its author. If any offending commits or pull requests are found, the build fails and an issue is opened on the project in its GitHub space. This turned out to be a huge time saver and a very effective mechanism for ensuring that every code change is reviewed.
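
The plugin itself is an internal Jenkins plugin, but the core check it performs can be sketched against the GitHub API: walk the commits on the release branch, confirm each belongs to a pull request, and confirm the pull request was merged by someone other than its author. The Python sketch below is a simplified illustration of that logic, not the plugin; the repository, branch, and token are placeholders, and pagination is omitted for brevity.

```python
# Simplified sketch of the Git SDLC Enforcer's core check: every commit on the
# release branch must belong to a pull request merged by someone other than
# its author. Repository, branch, and token values are placeholders.
import requests

GITHUB_API = "https://api.github.com"
OWNER, REPO, BRANCH = "example-org", "example-repo", "release"  # placeholders
HEADERS = {"Authorization": "token <redacted>", "Accept": "application/vnd.github+json"}

def offending_commits() -> list:
    bad = []
    commits = requests.get(
        f"{GITHUB_API}/repos/{OWNER}/{REPO}/commits",
        params={"sha": BRANCH}, headers=HEADERS, timeout=10,
    ).json()  # first page only; a real check would paginate
    for commit in commits:
        sha = commit["sha"]
        pulls = requests.get(
            f"{GITHUB_API}/repos/{OWNER}/{REPO}/commits/{sha}/pulls",
            headers=HEADERS, timeout=10,
        ).json()
        if not pulls:
            bad.append((sha, "no associated pull request"))
            continue
        # Fetch the full pull request to see who merged it.
        pr = requests.get(pulls[0]["url"], headers=HEADERS, timeout=10).json()
        author = pr["user"]["login"]
        merged_by = (pr.get("merged_by") or {}).get("login")
        if merged_by is None or merged_by == author:
            bad.append((sha, f"pull request #{pr['number']} lacks a second-party merge"))
    return bad

if __name__ == "__main__":
    for sha, reason in offending_commits():
        print(f"{sha[:8]}: {reason}")
```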

The project that finally earned me my Black Belt, however, was the development of a tool that will eventually fully replace the Adobe Shared Cloud’s secret injection mechanism. When working with Amazon Web Services, you have a bit of a chicken-and-egg problem when you begin to automate deployments: at some point, you need an automated way to get the right credentials into the EC2 instances your application needs to run. Currently, the Shared Cloud’s secrets management leverages a combination of custom-baked AMIs, IAM roles, S3, and encrypted data bags stored in the “Hosted Chef” service. For many reasons, we wanted to move away from Chef’s managed solution and add additional layers of security, such as the ability to rotate encryption keys, logging of access to secrets, the ability to restrict access to secrets based on environment and role, and full auditability. The new tool was designed as a drop-in replacement for “Hosted Chef”: it was easier to integrate into our baking tool chain, replaced the data bag functionality provided by the previous tool, and added further security functionality. The tool works splendidly, and by the end of the year the Shared Cloud will be using it exclusively, resulting in a much more secure, efficient, reliable, and cost-effective mechanism for injecting secrets.
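
The replacement tool is internal, so the snippet below is not its implementation. It is only a minimal sketch of one common pattern that matches the goals described (role-restricted, auditable, rotation-friendly): store KMS-encrypted blobs in S3 and let the instance’s IAM role fetch and decrypt them at boot. The bucket, object key, and encryption context names are hypothetical.

```python
# General pattern sketch (not the actual Shared Cloud tool): an EC2 instance
# uses its IAM role to fetch a KMS-encrypted secret from S3 and decrypt it.
# Bucket, object key, and encryption context below are hypothetical.
import boto3

s3 = boto3.client("s3")
kms = boto3.client("kms")

def fetch_secret(bucket: str, key: str, environment: str) -> bytes:
    """Download an encrypted blob from S3 and decrypt it with KMS.

    The encryption context ties the ciphertext to an environment so a blob
    encrypted for "stage" cannot be decrypted as "prod".
    """
    ciphertext = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    result = kms.decrypt(
        CiphertextBlob=ciphertext,
        EncryptionContext={"environment": environment},
    )
    return result["Plaintext"]

# Example usage on an instance whose IAM role permits kms:Decrypt for this key:
# db_password = fetch_secret("example-secrets-bucket", "app/db-password.enc", "prod")
```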

My takeaway from all this is that Adobe has developed a top-notch security training program. Even though I have learned a ton about software security through it, I still have much to learn. I look forward to continuing to make a difference at Adobe.

Jed Glazner
Security Engineer

New Security Framework for Amazon Web Services Released

The Center for Internet Security, of which Adobe is a corporate supporter, recently released their “Amazon Web Services Foundations Benchmark.” This document provides prescriptive guidance for configuring security options for a subset of Amazon Web Services with an emphasis on foundational, testable, and architecture agnostic settings. It is designed to provide baseline suggestions for ensuring more secure deployments of applications that utilize Amazon Web Services. Our own Cindy Spiess, web security researcher for our cloud services, is a contributor to the current version of this framework. Adobe is a major user of Amazon Web Services and efforts like this further our goals of educating the broader community on security best practices. You can download the framework document directly from the Center for Internet Security.

Liz McQuarrie
Principal Scientist & Director, Cloud Security Operations

Re-assessing Web Application Vulnerabilities for the Cloud

As I have been working with our cloud teams, I have found myself constantly reconsidering my legacy assumptions from my Web 2.0 days. I discussed a few of the high-level ones previously on this blog. For OWASP AppSec California in January, I decided to explore the impact of the cloud on Web application vulnerabilities. One of the assumptions that I had going into cloud projects was that the cloud was merely a hosting provider layer issue that only changed how the servers were managed. The risks to the web application logic layer would remain pretty much the same. I was wrong.

One of the things that kicked off my research in this area was watching Andres Riancho’s “Pivoting in Amazon Clouds,” talk at Black Hat last year. He had found a remote file include vulnerability in an AWS hosted Web application he was testing. Basically, the attacker convinces the Web application to act as a proxy and fetch the content of remote sites. Typically, this vulnerability could be used for cross-site scripting or defacement since the attacker could get the contents of the remote site injected into the context of the current Web application. Riancho was able to use that vulnerability to reach the metadata server for the EC2 instance and retrieve AWS configuration information. From there, he was able to use that information, along with a few of the client’s defense-in-depth issues, to escalate into taking over the entire AWS account. Therefore, the possible impacts for this vulnerability have increased.
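
To make the pivot concrete: under what is now called IMDSv1, the EC2 instance metadata service answers unauthenticated HTTP requests at a link-local address, so any bug that lets an attacker fetch an arbitrary URL through the application can read it. The short sketch below shows the documented request chain as seen from inside an instance; the role name returned depends on the environment.

```python
# Sketch: what an SSRF/remote-include bug can reach from inside an EC2 instance.
# The instance metadata service (IMDSv1) answers unauthenticated HTTP requests
# at a link-local address; this only works when run from within an instance.
import requests

METADATA = "http://169.254.169.254/latest/meta-data"

# List the IAM roles attached to the instance...
roles = requests.get(f"{METADATA}/iam/security-credentials/", timeout=2).text.splitlines()

# ...then fetch the temporary credentials for one of them.
if roles:
    creds = requests.get(f"{METADATA}/iam/security-credentials/{roles[0]}", timeout=2).json()
    # creds now contains AccessKeyId, SecretAccessKey, and Token, which is
    # exactly the material used to pivot further into the AWS account.
    print(sorted(creds.keys()))
```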

The cloud also involves migration to a DevOps process. In the past, a network layer vulnerability led to network layer issues, and a Web application layer vulnerability led to Web application vulnerabilities. Today, since the scope of these roles overlap, a breakdown in the DevOps process means network layer issues can impact Web applications.

One vulnerability making the rounds recently is a vulnerability dealing with breakdowns in the DevOps process. The flow of the issue is as follows:

  1. The app/product team requests an S3 bucket called my-bucket.s3.amazonaws.com.
  2. The app team requests the IT team to register the my-bucket.example.org DNS name, which will point to my-bucket.s3.amazonaws.com, because a custom corporate domain will make things clearer for customers.
  3. Time elapses, and the app team decides to migrate to my-bucket2.s3.amazonaws.com.
  4. The app team requests from the IT team a new DNS name (my-bucket2.example.org) pointing to this new bucket.
  5. After the transition, the app team deletes the original my-bucket.s3.amazonaws.com bucket.

This all sounds great. Except that, in this workflow, the application team didn’t inform IT, and the original DNS entry was never deleted. An attacker can now register the my-bucket.s3.amazonaws.com bucket name and host their malicious content there. Since the my-bucket.example.org DNS name still points to that bucket, the attacker can convince end users that their malicious content is example.org’s content.

This exploit is a defining example of why DevOps needs to exist within an organization. The flaw in this situation is a disconnect between the IT/Ops team that manages the DNS server and the app team that manages the buckets. The result of this disconnect can be a severe Web application vulnerability.
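
This class of issue also lends itself to automated detection: for each custom DNS name, confirm that whatever it points to still exists. The following is a minimal sketch using the dnspython and requests packages; the hostname is a placeholder, and the check covers only the S3 case described above.

```python
# Sketch: detect a dangling DNS entry that points at a deleted S3 bucket.
# Requires the dnspython and requests packages; the hostname is a placeholder.
import dns.resolver
import requests

def dangling_s3_cname(hostname: str) -> bool:
    """True if hostname CNAMEs to an S3 endpoint whose bucket no longer exists."""
    try:
        answers = dns.resolver.resolve(hostname, "CNAME")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False
    for rdata in answers:
        target = str(rdata.target).rstrip(".")
        if target.endswith("s3.amazonaws.com"):
            # A 404 from S3 means "NoSuchBucket", so anyone could register it.
            status = requests.head(f"https://{target}", timeout=10).status_code
            if status == 404:
                return True
    return False

# Example: dangling_s3_cname("my-bucket.example.org")
```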

Many cloud migrations also involve switching from SQL databases to NoSQL databases. In addition to following the hardening guidelines for the respective databases, it is important to look at how developers are interacting with these databases.

Along with new NoSQL databases, there are a ton of new methods for applications to interact with the NoSQL databases.  For instance, the Unity JDBC driver allows you to create traditional SQL statements for use with the MongoDB NoSQL database. Developers also have the option of using REST frontends for their database. It is clear that a security researcher needs to know how an attacker might inject into the statements for their specific NoSQL server. However, it’s also important to look at the way that the application is sending the NoSQL statements to the database, as that might add additional attack surface.

NoSQL databases can also take existing risks and put them in a new context. For instance, in the context of a webpage, an eval() call can result in cross-site scripting (XSS). In the context of MongoDB’s server-side JavaScript support, a malicious injection into eval() could allow server-side JavaScript injection (SSJI). Therefore, database developers who choose not to disable JavaScript support need to be trained on JavaScript injection risks when using statements like eval() and $where, or when using a DB driver that exposes the Mongo shell. Existing JavaScript training on eval() would need to be modified for the database context, since the MongoDB implementation is different from the browser version.
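
To make the contrast concrete, here is a small illustration using the pymongo driver, with placeholder collection and field names: the first query splices user input into a $where JavaScript string that the server evaluates, while the second expresses the same condition with ordinary query operators so no server-side JavaScript is involved.

```python
# Contrast: server-side JavaScript injection via $where versus a plain query.
# Uses pymongo; database, collection, and field names are placeholders.
from pymongo import MongoClient

collection = MongoClient()["shop"]["orders"]

def find_orders_unsafe(min_qty: str):
    # VULNERABLE: user input is spliced into JavaScript that MongoDB evaluates.
    # Input such as "0 || true" changes the meaning of the query entirely.
    return collection.find({"$where": f"this.qty > {min_qty}"})

def find_orders_safe(min_qty: int):
    # SAFER: the same condition expressed with query operators; no JavaScript,
    # and the value is passed as data rather than code.
    return collection.find({"qty": {"$gt": min_qty}})
```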

My original assumption that a cloud migration was primarily an infrastructure issue was false. While many of these vulnerabilities were always present and always severe, the migration to cloud platforms and processes means these bugs can manifest in new contexts, with new consequences. Existing recommendations will need to be adapted for the cloud. For instance, many NoSQL databases do not support the concept of prepared statements, so alternative defensive methods will be required. If your team is migrating an application to the cloud, it is important to revisit your threat model approach for the new deployment context.

Peleus Uhley
Lead Security Strategist

Retiring the “Back End” Concept

For people who have been in the security industry for some time, we have grown very accustomed to the phrases “front end” and “back end.” These terms, in part, came from the basic network architecture diagram that we used to see frequently when dealing with traditional network hosting:

[Figure: traditional two-DMZ network hosting diagram]

The phrase “front end” referred to anything located in DMZ 1, and “back end” referred to anything located in DMZ 2. This was convenient because the application layer discussion of “front” and “back” often matched nicely with the network diagram of “front” and “back.”  Your web servers were the first layer to receive traffic in DMZ 1, and the databases behind them were located in DMZ 2. Over time, this eventually led to the implicit assumption that a “back end” component was “protected by layers of firewalls” and “difficult for a hacker to reach.”

How The Definition Is Changing

Today, the application layer diagrams for cloud architectures do not always match up as nicely with their network layer counterparts. At the network layer, the diagram frequently turns into the one below:

[Figure: cloud-era network architecture diagram]

In the cloud, the back end service may be an exposed API waiting for posts from the web server over potentially untrusted networks. In this example, the attacker can now directly reach the database over the network without having to pass through the web server layer.

Many traditional “back end” resources are now offered as stand-alone services. For instance, an organization may leverage a third-party database-as-a-service (DBaaS) solution that is separate from its cloud provider. In some instances, an organization may decide to make its S3 buckets public so that they can be directly accessed from the Internet.

Even when a company leverages integrated solutions offered by a cloud provider, shared resources frequently exist outside the defined, protected network. For instance, “back end” resources such as S3, SQS and DynamoDB will exist outside your trusted VPC. Amazon does a great job of keeping its AWS availability zones free from most threats. However, you may want to consider a defense-in-depth strategy where SSL is leveraged to further secure these connections to shared resources.
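
As one concrete example of that defense-in-depth step, an S3 bucket can be made to reject any request that is not made over TLS by attaching the standard aws:SecureTransport deny condition. The sketch below applies such a policy with boto3; the bucket name is a placeholder, and this is a general AWS pattern rather than an Adobe-specific control.

```python
# Sketch: require TLS for all access to an S3 bucket by denying any request
# that arrives over plain HTTP. The bucket name is a placeholder.
import json
import boto3

BUCKET = "example-shared-resource-bucket"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```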

With the cloud, we can no longer assume that the application layer diagram and the network layer diagrams are roughly equivalent since stark differences can lead to distinctly different trust boundaries and risk levels. Security reviews of application services are now much more of a mix of network layer questions and application layer questions. When discussing a “back end” application component with a developer, here are a few sample questions to measure its exposure:

*) Does the component live within your private network segment, as a shared resource from your cloud provider, or is it completely external?

*) If the component is accessible over the Internet, are there Security Groups or other controls such as authentication that limit who can connect?

*) Are there transport security controls such as SSL or VPN for data that leaves the VPC or transits the Internet?

*) Is the data mirrored across the Internet to another component in a different AWS region? If so, what is done to protect the data as it crosses regions?

*) Does your threat model take into account that the connection crosses a trust boundary?

*) Do you have a plan to test this exposed “back end” API as though it was a front end service?

Obviously, this isn’t a comprehensive list since several of these questions will lead to follow up questions. This list is just designed to get the discussion headed in the right direction. With proper controls, the cloud service may emulate a “back end” but you will need to ask the right questions to ensure that there isn’t an implicit security-by-obscurity assumption.

The cloud has driven the creation of DevOps, which is the combination of software engineering and IT operations. Similarly, the cloud is morphing application security reviews to include more analysis of network layer controls. For those of us who date back to the DMZ days, we have to readjust our assumptions to reflect the fact that many of today’s “back end” resources are now connected across untrusted networks.

Peleus Uhley
Lead Security Strategist