Author Archive: Bronwen Matthews

Tips for Sandboxing Docker Containers

In the world of virtualization, we encounter two main technologies: virtual machines and containers. Both provide sandboxing: virtual machines through hardware-level abstraction, and containers through process-level isolation on a shared kernel. Docker containers are secure by default, but do they provide complete isolation? Let us look at the various ways sandboxing can be achieved in containers and what we need to do to get as close to complete isolation as possible.


Namespaces

One of the building blocks of containers, and the first level of sandboxing, is namespaces. A namespace gives a set of processes their own view of the system, limiting what they can see of, and how they can affect, other processes in the container environment or on the host system. Today there are six namespaces available in Linux, and all of them are supported by Docker; a short code sketch after the list below shows these defaults in action.

  • PID namespace: Provides isolation such that a process belonging to a particular PID namespace can only see other processes in the same namespace. It ensures that processes in one PID namespace cannot learn of the existence of processes in other PID namespaces, and hence cannot inspect or kill them.
  • User namespace: Provides isolation such that a process belonging to a particular user namespace can be root within that namespace while being mapped to a non-privileged user on the host system. This is a significant security improvement in Docker environments.
  • Mount namespace: Provides isolation of the host filesystem from the new filesystem created for the process. This allows processes in different namespaces to change the mount points without affecting each other.
  • Network namespace: Provides isolation such that a process belonging to a particular network namespace gets its own network stack, including routing tables, iptables rules, sockets, and interfaces. Ethernet bridges (virtual links) are additionally required to allow networking between the host and the namespaces.
  • UTS namespace: Isolates two system identifiers – nodename and domainname. This allows each container to have its own hostname and NIS domain name, which is helpful during initialization.
  • IPC namespace: Provides isolation of inter-process communication resources, including IPC message queues, semaphores, etc.
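
To make these defaults concrete, here is a minimal, hedged sketch using the Python Docker SDK (docker-py); the SDK choice and the image name are assumptions for illustration, and the same behavior is available through docker run flags. By default Docker gives a container fresh namespaces; pid_mode="host" and network_mode="host" deliberately share the host's namespaces and remove that isolation.

    # Illustrative sketch: assumes "pip install docker" and a running Docker daemon.
    import docker

    client = docker.from_env()

    # Default run: fresh PID/network/UTS/IPC/mount namespaces, so "ps"
    # inside the container sees only its own processes.
    isolated = client.containers.run("alpine", "ps aux", remove=True)
    print(isolated.decode())  # typically just PID 1 (ps itself)

    # Opting out of isolation (avoid unless necessary): sharing the host's
    # PID and network namespaces lets the container see and signal host
    # processes and bind host interfaces directly.
    shared = client.containers.run(
        "alpine", "ps aux",
        pid_mode="host",      # share the host PID namespace
        network_mode="host",  # share the host network stack
        remove=True,
    )
    print(shared.decode())    # now shows the host's full process table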

Although namespaces provide a great level of isolation, there are resources a container can access that are not namespaced. These resources are common to all containers on the host machine, which raises security concerns and may present a risk of attack or information exposure. Resources that are not sandboxed include the following:

  • The Kernel Keyring: The kernel keyring separates keys by UID. Since users in different containers may share the same UID, all of those users can access the same keys in the keyring. Applications that use the kernel keyring to handle secrets are therefore much less secure due to this lack of sandboxing.
  • The /proc and system time: Due to the “one size fits all” nature of Docker, a number of Linux capabilities remain enabled by default. With certain capabilities enabled, the exposure of /proc offers a source of information leakage and a large attack surface: /proc includes files that contain kernel configuration information as well as information about host system resources. Capabilities such as SYS_TIME and SYS_ADMIN additionally allow changes to the system time, not just inside the container, but also for the host and other containers.
  • The Kernel Modules: If an application loads a kernel module, the newly added module becomes available to all containers in the environment and to the host system. Some modules enforce security policies; access to such modules would allow an application to change those security policies, which again is a significant concern.
  • Hardware: The underlying hardware of the host system is shared between all the containers running on it. Proper cgroup configuration and access control are required for a fair distribution of resources. In other words, namespaces divide a larger area into smaller areas, and cgroups govern how much of each resource those areas may use. Cgroups act on resources like memory, CPU, and disk I/O; a well-defined cgroup configuration helps prevent denial-of-service (DoS) attacks (see the sketch following this list).
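
As a sketch of the cgroup point above, the following illustrative snippet (again using the assumed Python Docker SDK; the limit values are placeholders to tune per application) caps a container's memory, CPU share, and process count so a compromised container cannot starve its neighbors:

    import docker

    client = docker.from_env()
    c = client.containers.run(
        "alpine", "sleep 300",
        detach=True,
        mem_limit="256m",      # hard memory cap
        memswap_limit="256m",  # equal to mem_limit: no extra swap allowed
        cpu_period=100000,     # together with cpu_quota: at most 50% of one core
        cpu_quota=50000,
        pids_limit=100,        # mitigates fork bombs
    )
    print(c.short_id)
    c.remove(force=True)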


Capabilities

Capabilities split the privileged operations traditionally reserved for the root user into distinct units. An individual non-root process cannot perform any privileged operation; by dividing root's privileges into capabilities, we can assign them to individual processes without elevating their overall privilege level. This way we can sandbox a container with a restricted set of actions, so that if it is compromised, it can do far less damage than it could with full “root” access. Be careful when using capabilities:

  • Defaults: As mentioned earlier, with the “one size fits all” nature of Docker, a number of capabilities remain enabled. This default set of capabilities given to a container does not provide complete isolation. A better approach is to remove all capabilities from the container and then add back only those required by the application process running in it. Identifying the required capabilities is a trial-and-error process using various test scenarios for the application (see the sketch after this list).
  • SYS_ADMIN capability: Another issue is that capabilities themselves are not fine-grained. The most frequently cited example is the SYS_ADMIN capability, which bundles a large amount of functionality, some of which should only ever be exercised by a fully privileged user. This makes it a further cause for concern.
  • SETUID binaries: The setuid bit causes a binary to run with the privileges of its owner, which for many system binaries means full root permission. Many Linux distributions set the setuid bit on several binaries even though capabilities offer a safer alternative with a smaller attack surface in case of a breakout from a non-privileged container. Defang setuid binaries by removing the setuid bit or by mounting filesystems with nosuid.
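
Here is a minimal sketch of the drop-then-add approach described above, again via the assumed Python Docker SDK (the equivalent CLI flags are --cap-drop and --cap-add). The capability list shown is illustrative for an nginx-style service and would be refined through testing:

    import docker

    client = docker.from_env()
    web = client.containers.run(
        "nginx:alpine",
        detach=True,
        cap_drop=["ALL"],             # start from zero capabilities
        cap_add=["NET_BIND_SERVICE",  # bind to port 80 without full root
                 "CHOWN", "SETUID", "SETGID"],  # assumed needs of the entrypoint
        ports={"80/tcp": 8080},
    )
    print(web.short_id)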


Seccomp

Seccomp (secure computing mode) is a sandboxing facility in the Linux kernel. Seccomp provides a filtering mechanism for incoming system calls: a process installs a filter over the system calls it may make, and the kernel takes a specified action whenever a call is not allowed by the filter. Thus, if an attacker gains access to the container, they have only a limited number of system calls in their arsenal. The seccomp filter system uses the Berkeley Packet Filter (BPF) mechanism, the same one used for socket filters. In other words, seccomp allows a user to catch a syscall and “allow”, “deny”, “trap”, “kill”, or “trace” it based on the syscall number and the arguments passed. This adds a further layer of granularity, locking down the processes in one's containers to do only what is needed.

Docker provides a default seccomp profile for containers that is effectively a whitelist of allowed calls. This profile disables only 44 of the 300+ available system calls, a consequence of the vast range of container use cases in current deployments: making it stricter would render many applications unusable in a Docker container environment. For example, the reboot system call is disabled, because there should never be a situation in which a container needs to reboot the host machine.

Another good example is keyctl, a system call in which a vulnerability was recently found (CVE-2016-0728); keyctl is now also disabled by default. A more secure seccomp profile is a custom one that blocks these 44 system calls as well as any others not required by the app running in the container. This can be done with the help of DockerSlim, a tool that auto-generates seccomp profiles; a sketch of applying such a profile follows.
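
As a hedged sketch, applying a custom profile looks like the following. With the CLI this is docker run --security-opt seccomp=profile.json; through the SDK and engine API, the profile's JSON contents are passed rather than a file path. "profile.json" here is a placeholder, e.g. a DockerSlim-generated profile or a trimmed copy of Docker's default profile.

    import docker

    # Placeholder path for a custom seccomp profile.
    with open("profile.json") as f:
        profile_json = f.read()

    client = docker.from_env()
    out = client.containers.run(
        "alpine", "echo sandboxed",
        security_opt=["seccomp=" + profile_json],
        remove=True,
    )
    print(out.decode())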

The good part about the seccomp feature is that it makes the attack surface much narrower. However, around 250+ system calls remain available, which still leaves room for attack. For example, CVE-2014-3153 is a vulnerability that was found in the futex system call, enabling privilege escalation through a kernel exploit. This system call is still enabled, and necessarily so, since it has legitimate uses in implementing basic resource locking for synchronization. Although the seccomp feature makes containers more secure than earlier versions of Docker, it provides only moderate security in the container environment. It needs to be hardened, especially for enterprises, in a way that remains compatible with the applications running in the containers.


Through hardening of namespaces and cgroups and the use of seccomp profiles, we are able to sandbox our containers to a great extent. By following established benchmarks and applying least privilege, we can make our container environment secure. However, this only scratches the surface, and there are plenty of other things to take care of.

Rahul Gajria
Cloud Security Researcher Intern




Adobe Document Cloud is now PCI DSS 3.1 compliant

The Payment Card Industry Data Security Standard (PCI DSS) prescribes certain security controls for organizations that accept payments via credit card.  The standard is designed to help reduce fraud by increasing controls around cardholder data.

On June 30, 2016, Adobe Document Cloud (which includes Adobe Sign and PDF Services) achieved compliance with PCI DSS 3.1* as a merchant and a service provider.

Adobe Document Cloud’s PCI-compliant status as a service provider helps our customers meet PCI requirements for the safe handling of cardholder data.

The Adobe Common Controls Framework (CCF) and the underlying security compliance strategy helped us meet the current PCI requirements. Any changes to the PCI standard are proactively incorporated into the CCF to ensure ongoing compliance for all Adobe businesses.

More information about our Common Controls Framework and compliance efforts can be found on

Abhi Pandit
Sr. Director, Risk Advisory and Assurance Services (RaaS)

*Excludes Adobe Send & Track

Observations on CanSecWest 2016

Several members of Adobe’s product security team attended CanSecWest this year. The technical depth and breadth of the research presented in Vancouver yet again lived up to expectations. Of the security conferences that Adobe sponsors throughout the year, CanSecWest consistently draws a critical mass from the security research community, with the offensive, defensive, and vendor communities well represented. Research presented this year ranged from discussions of advanced persistent threats (APTs), to vulnerabilities in software, to frameworks that assist in hardware security testing.

Trending Topics

Securing “the cloud” and the underlying virtualization technology is increasingly recognized as a core competency rather than an add-on. A presentation by Qinghao Tang from Qihoo 360 demonstrated several security testing techniques for virtualization technology. In particular, his work outlined a framework for fuzzing virtualization software, which led to the discovery of four critical vulnerabilities in the QEMU emulator.

In a separate presentation, Shengping Wang (also from Qihoo 360) described a technique to escape a Docker container and run arbitrary code on the host system.  Specifically, the technique allowed an attacker to tamper with data structures storing kernel process descriptors to yield root access.

As the Internet of Things (IoT) continues along its explosive growth path, the community assembled at CanSecWest is among the most vocal in warning of the security implications of billions of inter-connected devices. Artem Chaykin of Positive Technologies described how almost every Android messaging app that uses Android Wear is vulnerable to message interception. Moreover, malicious third-party apps can be used not only to intercept messages, but also to send arbitrary messages to everyone on a device’s contact list.

A separate talk by Song Li of OXID LLC described attacks on “smart” locks. The attacks exploit the pairing between a dedicated app and a Bluetooth key fob to achieve DoS (i.e., inability to unlock the door) as well as unintended unlocking.

Attributing cyber intrusions to specific actors or APTs can be controversial and subject to error.  This was the topic of an interesting talk by several researchers from Kaspersky Labs.  In particular, APTs have increased their use of deception tactics to confuse investigators attempting to assign attribution, and Kaspersky highlighted several examples of APTs deliberately planting misleading attributes in malware.

Continuing with the APT theme, Gadi Evron of Cymmetria discussed how the OPSEC of APTs has evolved over time to handle public disclosure of their activities.

Additional research

Building on recent advances in static and dynamic program analysis, Sophia D’Antoine of Trail of Bits described a practical technique for automated exploit generation.  The techniques described have inherent scalability issues, but we expect to see increased automation of certain aspects of exploit development.

In an exploration of graphics driver code, the Keen Labs Tencent team described fuzzing and code auditing strategies to identify bugs in Apple’s graphics drivers. Moreover, the team described an interesting method to gain reliable exploitation of a race condition that caused a double-free vulnerability on a doubly-linked list representation.

Guang Tang of Qihoo 360’s Marvel Team demonstrated how to exploit a vulnerability in the V8 JavaScript engine on a Google Nexus device to achieve remote code execution. With code execution achieved, his team was then able to perform device actions such as installing arbitrary apps from the app store. Importantly, they demonstrated that this vulnerability is still present in the Android PAC (Proxy Auto Config) service.

Finally, building on earlier work by Google Project Zero and other research, Chuanda Ding from Tencent Xuandu Lab presented research on abusing flaws in anti-virus software as a means to escape application sandboxes.

The exposure to bleeding-edge research presented by security subject-matter experts, and the opportunity to forge new relationships with the security research community, set CanSecWest apart from the other security conferences Adobe attends throughout the year. We hope to see you there next year.

Slides for these and other CanSecWest 2016 presentations should be posted on the CanSecWest site in a week or two.

Pieter Ockers
Sr. Security Program Manager

Improving Security for Mobile Productivity with Adobe Document Cloud

Recently, Adobe announced that it is working with Microsoft to help improve the security of mobile applications and productivity. We have integrated Adobe Acrobat Reader DC with Microsoft Intune, a solution for secure enterprise mobile application management. This gives IT management and security professionals more control over critical productivity applications deployed to their mobile device users. This functionality is currently available for Android devices and will soon be made available for iOS devices.

Our work with Microsoft is part of Adobe’s overall commitment to help keep our customers’ critical assets and data secure. We will continue to work with Microsoft and other security community partners to improve security across our products and services.

For more information about this solution, please see our post on the Adobe Document Cloud blog.

Bronwen Matthews
Sr. Product Marketing Manager, Security

Top 5 Things You Should Know About FedRAMP and Adobe’s Cloud Services for Government

In July, Adobe Experience Manager and Connect Managed Services received FedRAMP Authorization for its Cloud Services for Government. The Department of Health and Human Services (HHS) granted Adobe an Authority to Operate (ATO) for these specific cloud services run by Adobe Managed Services. Most importantly, this ATO can be leveraged government-wide, thereby decreasing the time and cost for other agencies and organizations as they adopt Adobe’s technology. So what exactly does this mean and why is it important? Here are the top 5 things you need to know:

1. What is FedRAMP?
The Federal Risk and Authorization Management Program (FedRAMP) provides a cost-effective, risk-based approach for the adoption and use of cloud services. It is a joint collaboration by the Department of Homeland Security (DHS), Department of Defense (DoD) and General Services Administration (GSA) as well as other working groups to assist agencies in meeting FISMA requirements for cloud systems. It provides a single, standard approach to security assessment, authorization and monitoring of cloud services.

2. Why Should I Care About FedRAMP?
According to the official FedRAMP site, FedRAMP is based upon the same set of security controls as documented in the Federal Information Security Management Act (FISMA) of 2002. These controls are outlined by the National Institute of Standards and Technology (NIST 800-53). Where FISMA exists as the approval process for on-premises programs, FedRAMP exists as the equivalent for cloud solutions. Under recent legislation, agencies seeking to use cloud services may only implement ones that are FedRAMP certified. More information about FedRAMP can be found here.

3. What does Adobe offer?
Adobe is the first FedRAMP cloud service provider (CSP) to deliver this combination of solutions:
• Web Content Management (WCM)
• Electronic Forms with eSignatures
• Document Rights Management (DRM)
• Web-conferencing
• E-Learning (LMS)

These FedRAMP-authorized solutions are supported by Adobe products, run by Adobe Managed Services, from a specific region within the Amazon Web Services infrastructure.
i. Adobe Experience Manager Managed Services on Amazon GovCloud
ii. Adobe Connect Managed Services on Amazon GovCloud

4. What’s the big deal about FedRAMP Authorization?
An agency-authorized Authority to Operate (ATO) is the FedRAMP stamp of approval for federal agencies. It allows government entities (as well as commercial organizations) to more easily adopt Adobe’s FedRAMP-certified cloud solutions. Approval from one agency means approval for all agencies at the federal level – making an ATO extremely valuable for cloud service providers (CSPs).

Adobe partnered with the Department of Health and Human Services (HHS) to determine that Adobe’s approved cloud services comply with FedRAMP requirements. In working through the FedRAMP Security Assessment Framework (SAF), Adobe’s cloud services were first examined against FedRAMP standards and reviewed to ensure the solutions were properly documented. They were then evaluated by the Veris Group, a third-party assessment organization (3PAO), to verify that the software performs as documented, and had to pass 328 separate security controls in order to become FedRAMP authorized. The approval process is very intensive and takes anywhere from one to three years to complete. Accordingly, Adobe’s investment is significant and further demonstrates how Adobe stays ahead of the curve in terms of security and compliance.

5. Benefits of FedRAMP Certification for Cloud Based Solutions
In 2011, the U.S. Federal Government released the Federal Cloud Computing Strategy, which instituted a “Cloud First” policy emphasizing cloud services by requiring agencies to adopt a cloud solution if one exists. This strategy was driven by three main benefits of cloud services: deployment speed, minimal on-premises upkeep, and a constant stream of updates.
• Fast deployment speed – Hosted cloud solutions are typically already ‘up and running’, whereas on-premises solutions can take months to implement. The beautiful part of the cloud is its scalability – it can grow or shrink to suit the demands of the enterprise.
• Minimal on-premises housekeeping – With on-premises solutions, the security staff of individual agencies must spend considerable time setting up servers, installing software, managing patches and updates, performing backups, and troubleshooting problems. With cloud solutions, there typically are no on-site servers; software installation, patching, and backup are the responsibility of the cloud service provider. This saves federal agencies time and money and allows each agency’s security team to focus on its core job.
• Always the newest version – Cloud solutions are constantly updated to provide new features and services and to keep up with the changing security landscape. Cloud service providers also learn from implementing their software for one agency in order to improve the product for their other customers. These learnings help ensure that customers get a secure, high-quality service.

The US Government has clearly identified cloud solutions as the way of the future. With its recent FedRAMP authorization, Adobe aims to cement its position as one of the leaders in public-sector cloud solutions with its unique cloud service offerings.

You can learn more about how FedRAMP – and Adobe solutions – are helping to bring about the “consumerization of Government” in my other recent blog.

John Landwehr
Vice President & Public Sector CTO

Adobe Shared Cloud Now SOC 2 Security Type 1 Compliant

We are very happy to report that KPMG LLP has completed its attestation and issued the final SOC 2 Security Type 1 report for Adobe’s Digital Media Shared Cloud.

Adobe’s Shared Cloud is the infrastructure component supporting the Adobe Creative Cloud. Adobe Creative Cloud teams can build their product and service offerings on top of the pluggable platform provided by Shared Cloud.

Completion of this project is a very important first step in the compliance roadmap for Adobe Creative Cloud. Any Adobe service will inherit the controls that are in scope for this SOC 2 Security Type 1 report to the extent that it leverages Shared Cloud as its data repository platform and Adobe Cloud Operations for its cloud operations.

Several Adobe teams worked closely together to ensure the successful completion of the project.  The teams will now focus on completing Type 2 attestation in 2015.

A big thanks to everyone involved.

Abhi Pandit
Sr. Director of Risk Advisory and Assurance

View of an Internship with ASSET

I technically joined the security community last year when I began my Master’s in Information Security at Carnegie Mellon University. I gained a lot of theoretical and practical knowledge from the program, but my internship with ASSET gave me a totally new perspective on how security in a large organization works. I worked on multiple projects over the summer in the beautiful city of San Francisco. I have outlined one of them below.

Adobe follows a Secure Product Lifecycle (SPLC). To cater to the large number of current and future Adobe products, the security guidance ASSET provides to product teams needs to be scalable. Scalability requires automation; otherwise, the number of security researchers and their time becomes a bottleneck. Security guidance also needs to be targeted to the configuration of each project. For example, a web service written in Java that handles confidential information requires a very different set of guidelines than an Android application.

For such targeted guidance, we use a smart system called SD Elements. For SD Elements, I performed a gap analysis on the security recommendations for Android and iOS apps as well as for desktop and rich-client applications. I did quite a bit of research in the process; some of my sources included the CERT guidelines for securing applications, internal pen-test reports, and many academic research papers and vendor reports. Adobe has now moved to cloud deployment for many of its products: Creative Cloud and Marketing Cloud are prime examples. To support this recent momentum, I also expanded the deployment phase in SD Elements, a set of guidelines for DevOps teams to securely deploy and maintain their applications in the cloud.

During my internship, I worked with Mohit Kalra, my manager, and Karthik Raman, my mentor. They were always available to guide me whenever I got stuck on a problem, and they always gave me specific Adobe context. My other team members were also very helpful and considerate throughout the internship and always made me feel at home. As part of Adobe Be Involved month, I also got a chance to volunteer at the Edgewood Center for Children and Families, which was a humbling experience. We played kickball with the kids, and it was really great to see smiles on their faces.


Volunteer picture from the Edgewood Center for Children and Families. (I’m the guy in the bottom left.)

As a result of my internship at Adobe, I feel like I’ve really improved my technical knowledge and my understanding of how security works within an organization. Thanks, Adobe.

Mayur Sharma
Security Intern

Adobe Digital Publishing Suite, Enterprise Edition Security Overview

This new DPS security white paper describes the proactive approach and procedures implemented by Adobe to increase the security of your data included in applications built with Digital Publishing Suite.

The paper outlines the Adobe Digital Publishing Suite Content Flow for Secure Content, available in Digital Publishing Suite v30 or later for apps with direct entitlement and retail folios entitlement. The secure content feature allows you to restrict the distribution of your content based on user credentials or roles.

The paper also outlines the security practices implemented by Adobe and our trusted partners.

Security threats and customer needs are ever-changing, so we’ll update the information in this white paper as necessary to address these changes.

Bronwen Matthews
Sr. Product Marketing Manager

Using Smart System to Scale and Target Proactive Security Guidance

One important step in the Adobe Secure Product Lifecycle is embedding security into product requirements and planning. To help with this effort, we’ve begun using a third-party tool called SD Elements.


SD Elements is a smart system that helps us scale our proactive security guidance by allowing us to define and recommend targeted security requirements to product teams across the company in an automated fashion. The tool enables us to provide more customized guidance to product owners than we could using a generic OWASP Top 10 or SANS Top 20 Controls for Internet Security list, and it provides development teams with specific, actionable recommendations. We use this tool not only for our “light touch” product engagements, but also to provide our “heavy touch” engagements with the same level of consistent guidance as a foundation from which to work.

Another benefit of the tool is that it helps make proactive security activities more measurable, which in turn helps demonstrate results that can be reported to upper management.

ASSET has worked with the third-party vendor Security Compass to enhance SD Elements by providing feedback from “real world” usage of the product. The benefit to Adobe is that we get a more customized tool right off the shelf – beyond this, we’ve used the specialized features to tailor the product to fit our needs even further.

We employ many different tools and techniques within the SPLC, and SD Elements is just one of them, but we are already starting to see success with the product. It helps us make sure that product teams adhere to a basic set of requirements, and it provides customized, actionable recommendations on top. For more information on how we use the tool within Adobe, please see the SD Elements Webcast.

If you’re interested in SD Elements, you can check out their website.

Jim Hong
Group Technical Program Manager

New White Paper on Creative Cloud for teams Security Architecture and Functionality

At Adobe, we take the security of your digital experiences seriously.

The Adobe Creative Cloud for teams Security Overview white paper describes the proactive approach and procedures implemented by Adobe to increase the security of your Creative Cloud experience and data.

The paper provides details related to the security architecture and functionality available in Creative Cloud for teams. It also outlines the security practices implemented by Adobe and our trusted partners as part of the ongoing development of Creative Cloud. From our rigorous integration of security into our internal software development process to the tools used by our cross-functional incident response teams, we strive to be proactive and nimble.

Security threats and customer needs are ever-changing, so we’ll update the information in this white paper as necessary to address these changes.

Bronwen Matthews
Sr. Product Marketing Manager