Posts in Category "Security"

Tips for Sandboxing Docker Containers

In the world of virtualization, two approaches dominate: virtual machines and containers. Both provide sandboxing: virtual machines provide it through hardware-level abstraction, while containers provide process-level isolation on top of a shared kernel. Docker containers are reasonably secure by default, but do they provide complete isolation? Let us look at the various ways sandboxing can be achieved in containers and what we need to do to approach complete isolation.

Namespaces

One of the building blocks of containers, and the first level of sandboxing, is namespaces. A namespace gives a process its own view of the system and isolates it so that it cannot affect other processes in the container environment or on the host system. Today there are six namespaces available in Linux, and all of them are supported by Docker (a short sketch showing them in action follows the list below).

  • PID namespace: Provides isolation such that a process belonging to a particular PID namespace can only see other processes in the same namespace. Processes in one PID namespace cannot learn of the existence of processes in another PID namespace, and hence cannot inspect or kill them.
  • User namespace: Provides isolation such that a process can be root within its own user namespace while being mapped to a non-privileged user on the host system. This is a significant security improvement in a Docker environment.
  • Mount namespace: Isolates the host filesystem from the filesystem created for the process. This allows processes in different namespaces to change their mount points without affecting each other.
  • Network namespace: Provides isolation such that a process belonging to a particular network namespace gets its own network stack, including routing tables, iptables rules, sockets, and interfaces. Additionally, Ethernet bridges are required to allow networking between the host and the namespaces.
  • UTS namespace: Isolates two system identifiers – nodename and domainname. This allows each container to have its own hostname and NIS domain name, which is helpful during initialization.
  • IPC namespace: Isolates inter-process communication resources, including IPC message queues, semaphores, etc.
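A minimal sketch of namespaces in action, assuming Docker is installed and the alpine image is available (the image and flags are illustrative):

# PID namespace: inside the container, ps sees only the container's own
# processes, starting at PID 1 – host processes are invisible.
docker run --rm -it alpine ps aux

# User namespace: on newer Docker releases, remapping is enabled when
# starting the daemon; root inside a container then maps to an
# unprivileged user on the host.
dockerd --userns-remap=default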

Although namespaces provide a great level of isolation, there are resources a container can access that are not namespaced. These resources are common to all the containers on the host machine, which raises security concerns and may present a risk of attack or information exposure. Resources that are not sandboxed include the following:

  • The kernel keyring: The kernel keyring separates keys by UID. Since users in different containers may share the same UID, all of those users can access the same keys in the keyring. Applications that use the kernel keyring for handling secrets are therefore far less secure due to this lack of sandboxing.
  • /proc and system time: Due to the “one size fits all” nature of Docker, a number of Linux capabilities remain enabled. With certain capabilities enabled, the exposure of /proc offers a source of information leakage and a large attack surface: /proc includes files that contain kernel configuration information as well as information about the host system’s resources. Capabilities such as SYS_TIME and SYS_ADMIN allow changes to the system time not just inside the container, but also for the host and other containers.
  • Kernel modules: If an application loads a kernel module, the newly added module becomes available across all the containers in the environment and on the host system. Some modules enforce security policies; access to such modules would allow an application to change those policies, which again is a big concern.
  • Hardware: The underlying hardware of the host system is shared among all the containers running on it. Proper cgroup configuration and access control are required for a fair distribution of resources. In other words, namespaces divide a larger area into smaller areas, and cgroups govern proper usage of those areas. Cgroups control resources such as memory, CPU, and disk I/O, and a well-defined cgroup configuration can prevent denial-of-service attacks (see the sketch after this list).
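As a hedged illustration, cgroup limits are applied directly on the docker run command line (the flags below exist in recent Docker releases; the values are arbitrary examples):

# Cap memory, weight CPU time, and bound the number of processes so a
# runaway or malicious container cannot starve its neighbors or
# fork-bomb the host.
docker run --rm -it \
    --memory 512m \
    --cpu-shares 512 \
    --pids-limit 100 \
    alpine sh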

Capabilities

Capabilities split the privileged operations traditionally reserved for the root user into distinct units. An individual non-root process cannot normally perform any privileged operation; by dividing root’s privileges into capabilities, we can grant individual privileged operations to a process without elevating its overall privilege level. This way we can sandbox the container with a restricted set of actions, and if it is compromised, an attacker can do far less damage than with full root access. Be careful when using capabilities:

  • Defaults: As mentioned earlier, with the “one size fits all” nature of Docker, a number of capabilities remain enabled. This default set of capabilities given to a container does not provide complete isolation. A better approach is to remove all capabilities from the container and then add back only those required by the application process running in it (see the sketch after this list). Identifying the required capabilities is typically a trial-and-error exercise using various test scenarios for the application running in the container.
  • SYS_ADMIN capability: Another issue is that even capabilities are not fine-grained. The most talked-about example is the SYS_ADMIN capability: it bundles a large amount of functionality, some of which should only ever be used by a privileged user. This is another reason for concern.
  • SETUID binaries: The setuid bit grants full root permission to a process running a binary that carries it. Many Linux distributions set the setuid bit on several binaries, despite the fact that capabilities can be a safer alternative to setuid, offering less attack surface in case of a breakout from a non-privileged container. Defang setuid binaries by removing the setuid bit or by mounting filesystems with nosuid.
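A sketch of the drop-then-add approach described above, using the stock nginx image as a hypothetical workload:

# Start from zero capabilities, then grant back only what the
# application needs: NET_BIND_SERVICE to bind a port below 1024, plus
# the capabilities nginx uses to set up and drop privileges to its
# worker user. The exact set comes from the trial-and-error testing
# described above.
docker run --rm -d \
    --cap-drop ALL \
    --cap-add NET_BIND_SERVICE \
    --cap-add CHOWN \
    --cap-add SETUID \
    --cap-add SETGID \
    nginx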

Seccomp

Seccomp (secure computing mode) is a sandboxing facility in the Linux kernel that provides a filtering mechanism for incoming system calls. It allows a process to declare a filter for the system calls it may make, and an action is taken whenever a system call is not allowed by the filter. Thus, if an attacker gains access to the container, they have a limited number of system calls in their arsenal. The seccomp filter system uses the Berkeley Packet Filter (BPF) system, similar to the one used for socket filters. In other words, seccomp lets a user catch a syscall and “allow”, “deny”, “trap”, “kill”, or “trace” it based on the syscall number and the arguments passed. It adds a further layer of granularity for locking down the processes in one’s containers to do only what is needed.

Docker provides a default seccomp profile for containers that is essentially a whitelist of allowed calls. This profile disables only 44 system calls out of the 300+ available, a reflection of the vast range of container use cases in current deployments: making it stricter would render many applications unusable in a Docker container environment. For example, the reboot system call is disabled, because there would never be a situation where a container legitimately needs to reboot the host machine.

Another good example is keyctl – a system call in which a vulnerability was recently found (CVE-2016-0728); keyctl is now also disabled by default. A more secure seccomp profile is a custom one that blocks these 44 system calls as well as every other system call not required by the app running in the container. This can be done with the help of DockerSlim (http://dockersl.im), which auto-generates seccomp profiles.
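A minimal sketch of a custom profile in the JSON format Docker accepts for seccomp (the syscall whitelist shown here is far too small for any real application and is purely illustrative):

{
    "defaultAction": "SCMP_ACT_ERRNO",
    "syscalls": [
        { "name": "read",       "action": "SCMP_ACT_ALLOW" },
        { "name": "write",      "action": "SCMP_ACT_ALLOW" },
        { "name": "exit_group", "action": "SCMP_ACT_ALLOW" }
    ]
}

The profile is then applied when the container starts, e.g. on recent Docker versions:

docker run --rm -it --security-opt seccomp=profile.json alpine sh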

The good part about the seccomp feature is that it narrows the attack surface considerably. However, with around 250+ calls still available, containers remain susceptible to attacks. For example, CVE-2014-3153 is a vulnerability that was found in the futex system call, enabling privilege escalation through a kernel exploit. This system call is still enabled, and unavoidably so, since it has legitimate use in implementing basic resource locking for synchronization. Although the seccomp feature makes containers more secure than in earlier versions of Docker, it provides only moderate security out of the box. The profile needs to be hardened, especially for enterprises, to match the applications running in the containers.

Conclusion

Through the hardening methods for namespaces and cgroups, and the use of seccomp profiles, we are able to sandbox our containers to a great extent. By following various benchmarks and using least privilege we can make our container environment secure. However, this only scratches the surface and there are plenty of things to take care of.

Rahul Gajria
Cloud Security Researcher Intern

 


Adobe’s CCF Helps Acquisitions Meet Security Compliance Requirements

The Common Controls Framework (CCF) is a comprehensive set of control requirements, rationalized from the alphabet soup of several different industry information security and privacy standards. To help ensure that our standards effectively meet our customers’ expectations, we are constantly refining this framework based on industry requirement changes, customer asks, and internal feedback.

As Adobe continues to grow as an organization and onboard new acquisitions, CCF enables these acquisitions to come into compliance more quickly. At Adobe, the goal is for acquisitions to meet organizational security practices and standards and come up to speed with the compliance roadmap of the organization. CCF enables new acquisitions to inherit existing, simple, and scalable solutions, significantly reducing the overall effort required to meet compliance goals.

The journey for the newest members of the Adobe family begins with a gap assessment against the CCF. Once the gaps against the existing CCF controls are determined, the team can leverage the many scalable “driver-subscriber” controls aligned with the CCF to remediate the majority of the gaps. Once that remediation is complete, what remains is often just a handful of controls that must be implemented in order to achieve compliance.

Another key component of security compliance is ensuring that proper supporting documentation is in place. In most cases, the acquisition can leverage existing documents used by product teams at Adobe that have already achieved compliance or are on the roadmap to it, as they all address the same CCF requirements. The team can therefore often subscribe to the existing documentation when subscribing to a service. For the standalone controls, teams can use existing templates written in line with the CCF to speed up the documentation effort.

Example of Implementation

One of our recent acquisitions was required to achieve PCI DSS compliance, and as a result it underwent a gap assessment against the CCF controls. The acquisition was able to successfully leverage many existing solutions – multi-factor authentication for production access, hardened baseline images, security monitoring, and incident response processes, to name a few – to achieve compliance. In the end, the team was required to implement only a handful of standalone controls.

Given the updated requirement in PCI DSS 3.2 around multi-factor authentication, this acquisition will not be affected by the change, as it had already implemented multi-factor authentication to satisfy the requirements listed in CCF.

Conclusion

Adobe’s CCF controls are helping new acquisitions achieve security compliance more quickly. These teams are able to leverage much of the existing infrastructure Adobe has put in place to meet and maintain its security certifications. The overall burden of implementing these controls is therefore significantly reduced, and the acquisition, now part of Adobe, can continue to delight our customers while complying with Adobe’s security requirements.

Rahat Sethi
Sr. Information Security Analyst, Adobe Risk Assurance and Analysis Services (RAAS)

Adobe @ BlackHat USA 2016

We are headed to BlackHat USA 2016 in Las Vegas this week with members of our Adobe security teams. We are looking forward to connecting with the security community throughout the week. We also hope to meet up with some of you at the parties, at the craps tables, or just mingling outside the session rooms during the week.

This year Peleus Uhley, our Lead Security Strategist, will be speaking on Wednesday, August 3rd, at 4:20 p.m. He will be talking about “Design Approaches for Security Automation.” DarkReading says his talk is one of the “10 Hottest Talks” at the conference this year, so you do not want to miss it.

This year we are again proud to sponsor the r00tz Kids Conference @ DefCon. If you are going to DefCon and bringing your kids, we hope you take the time out to take them to this great event for future security pros. There will be educational sessions and hands-on workshops throughout the event to challenge their creativity and skills.

Make sure to follow our team on Twitter @AdobeSecurity. Feel free to follow me as well @BradArkin. We’ll be tweeting our observations and happenings during the week. Look for the hashtag #AdobeBH2016.

We are looking forward to a great week in Vegas.

Brad Arkin
VP and Chief Security Officer

Identity and Access Management in the Enterprise Environment

The management of identity is one of the most common and complex security challenges that is faced by organizations today. Many businesses operate globally with thousands of users constantly accessing hundreds of unique systems and applications. Establishing a role-based model and enforcing accountability is critical to securing access to company resources, but can be very difficult to implement, especially in mature and large organizations.

At Adobe, our Identity and Access Management (IAM) strategy comprises the following six pillars:

  1. Compliance with key Adobe Common Control Framework (CCF) objectives, especially those related to authentication and authorization.
  2. Authentication to Adobe systems is governed via a centralized identity source, which maintains compliance with scalable CCF requirements.
  3. Workflows have been implemented which automate provisioning, deprovisioning, and periodic access review processes.
  4. Access requests require a user and role to be selected from a pre-defined list, and a business justification must be provided.
  5. Once a user is granted access to a role, strong authentication is required to access company and customer resources.
  6. Critical system activity is logged to a centralized repository to maintain user accountability.

Our security administrators work diligently to discourage abuse and try to avoid human error. They recognize the importance of a centrally managed identity source built with strong role-based and accountability principles. When a single source of record exists, whose updates are automatically synced to all integrated systems, the need to manage access to each system independently is eliminated.

Workflows have been implemented to automatically route access requests to approvers, provision approved requests within the system, disable terminated users, and perform access updates based on periodic access review submissions. This automation helps reduce the risk associated with manual processes and creates efficiency in Adobe’s IAM implementation.

One of the most important pillars is defining a role-based access model. System owners predefine roles within Adobe’s automated provisioning workflow. This allows users to self-service the access request process while maintaining least privilege. Users requesting access to a role which would grant excessive privilege will be denied by the role owner and the user must resubmit their request for a more restrictive role.

Role-based models for managing access help reduce provisioning errors and overhead, improve logical access review accuracy, and enforce least privilege. When logical access roles are not defined, excessive or unauthorized access across systems is likely to result from manual and ad-hoc provisioning processes.

For example, without a defined role-based access model, new hires might require 25 separate permissions to be configured for them to perform their job responsibilities. Performing these tasks for numerous new hires, position transfers, and exiting personnel on a daily basis is cumbersome and prone to error.

Additionally, during periodic logical access reviews, the system owner must review each of the 25 separate permissions for every user with access to the system. The ability to review all users assigned to a defined role in one step will save the organization time and money while improving security.

On the other hand, defining a set of roles with explicit system privileges requires a one-time setup with minimal ongoing maintenance. Changes to existing roles should be controlled via change management processes. Once established, system owners can perform a single action to assign users to a role, or multiple roles, based on their job responsibilities.

Finally, least privilege requires that each defined and approved role has the minimum necessary system privileges which allow the role to fulfill its job requirements. When role creation is not guided by least privilege, it often results in excessive access for many of its members and appropriate access for very few of its members.  New roles should always be created for system users that require more or less access than what is provided by an existing role.

Mismanaged systems may introduce security and process breakdowns, which can facilitate unauthorized or excessive access to systems or data. Access to critical Adobe resources requires a valid whitelisted IP, username, password, and a logical access token or key; the combination of these elements complies with Adobe’s Authentication Standard requirements. If suspicious or malicious activity is identified within a system, security administrators are able to identify the user and hold them accountable for their associated system activity.

Accountability ties the authenticated user to the actions they performed while interacting with the system. In most cases, each user is assigned a unique account. When shared accounts are necessary, each individual must authenticate as themselves before using the shared account. This gives security administrators the ability to track a specific user’s actions within a system, which can be used to investigate incidents and support non-repudiation.

Maintaining an efficient and more secure IAM model in a large organization can be challenging and requires diligent forethought. When implemented correctly, an organization can help reduce the risk and likelihood of unauthorized access, both internally and externally. Adobe is committed to excellence with the delivery of its services and the protection of both Adobe and customer resources. Our IAM implementation is just one of many examples of Adobe’s defense-in-depth security strategy.

Zosh Kuball
Sr. Analyst, Adobe Risk Assurance and Analysis Services (RAAS)

Preparing for Content Security Policies

Deploying Content Security Policies (CSPs) can help increase the security of your website. Therefore, it is an easy recommendation that most security professionals make when working with development teams. However, the process of helping a team go from the simple recommendation to a successfully deployed policy can be complicated. As a security professional, you need to be aware of the potential challenges along the way in order to help teams navigate their way to a successful deployment.

Let’s start with the initial challenges that developers may face. One is that many authoring tools that support accelerated web design do not produce CSP-compliant code. While some of the major JS frameworks support CSP, the extensions to those frameworks may not. Also, there are trade-offs to running in CSP-compliant mode. For instance, AngularJS has a CSP-compliant mode that you can enable (https://docs.angularjs.org/api/ng/directive/ngCsp), but there are speed trade-offs in enabling that flag. Security people may think a 30% speed trade-off is acceptable but, depending on how dependent the page is on AngularJS, the developers may take notice.

Lastly, a CSP-compliant coding style may be counterintuitive to some long-time web developers. For instance, let’s take a simple button tag as an example. In a pre-CSP design approach, you would write it as:

<input type="button" id="myButton" value="Click Me!" onClick="doCoolStuff();"/>

To enable CSP, we have to tell the developer to remove the onClick statement and put the handler in a separate JS file. This may be counterintuitive, because the developer may wonder why making two network requests and dynamically adding the event handler is safer than just hard-coding the event inline within the original request. From a developer’s perspective, the extra web request adds latency to the page load, and it makes the code harder to read because more cross-referencing is needed to understand what the button does.

In terms of moving the code out of the HTML, it is more complicated than just copying and pasting. For instance, you can’t add an event handler to the button until the button has been added to the DOM; you may therefore need an additional onLoad event handler just to add the onClick event handler to the button. The same race condition applies any time you dynamically add content via an innerHTML property.
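As a minimal sketch, the CSP-compliant version of the button above splits into the markup and a separate file, here called app.js (the file name is hypothetical):

<input type="button" id="myButton" value="Click Me!"/>
<script src="app.js"></script>

And in app.js:

// doCoolStuff would be defined here (or imported); the point is that
// nothing executable lives inline in the HTML.
function doCoolStuff() { /* ... */ }

// Wait until the DOM is ready so the button exists before the handler
// is attached – this avoids the race condition described above.
document.addEventListener("DOMContentLoaded", function () {
    document.getElementById("myButton").addEventListener("click", doCoolStuff);
});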

For cascading style sheets, you also need to remove any inline style properties from the code. Fortunately, while many developers still use inline style statements, the deprecation of several style properties in tags by the HTML5 movement has already forced the migration of many style settings into separate files. That said, there may be templates and libraries that developers use which still carry style statements in the HTML.
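As a simple hedged illustration, the migration usually amounts to replacing an inline style attribute with a class defined in an external stylesheet (the class and file names are illustrative):

<!-- Before: inline style, blocked by a strict CSP -->
<p style="color: red;">Something went wrong.</p>

<!-- After: the rule moves into a separate styles.css file -->
<p class="error-text">Something went wrong.</p>

/* styles.css */
.error-text { color: red; }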

Understanding the changes required to support CSP is important because it gives you perspective on the scope of what you are asking teams to change. It is more than just enumerating everywhere that you load content and putting it in a header. You are likely asking teams to recode large amounts of working, validated code. This is similar to the “banned function” movement that happened in C/C++ code several years ago where developers had to re-architect existing code to use newer C/C++ functions.

Before you start a CSP deployment with teams, you should be prepared to answer questions such as:

  • Do I deploy CSP Level 1, Level 2, or Level 3?
    • Is Level 2 better than Level 1 in terms of security or does it just have more features?
    • What is the browser adoption for each level? I want to use the nonce and hash features of CSP Level 2 for my inline scripts but I also have to be compatible with older browsers.
    • I heard that the “strict-dynamic” property in Level 3 is going to make deployment easier. Should we just wait for that?
  • I read one site that said I should start with the policy “default-src ‘self’” but another site said I should start with “default-src ‘none’; script-src ‘self’; connect-src ‘self’; img-src ‘self’; style-src ‘self’;”. Which should I use?
  • Do I have to set a unique, restricted CSP for each page or can I just set one for the entire site that is a little more permissive?
  • How do I handle the CSP report-only reports? What is the process for ensuring someone reviews the reports? Is there a pre-made module available via gem/pip/npm that handles some of this and automatically puts the reports in a database?
  • Do I need to deploy the CSP reporting in production or is staging enough?
  • What are the common mistakes people make when deploying CSPs?
  • How do I debug when the CSP is breaking content? The CSP report doesn’t give me the line number of the violation, and sometimes it doesn’t even give me the full path for the request. Do the built-in browser developer consoles help with debugging violations? Which is best?
  • I load JS content from a third-party provider and their content is causing issues with the recommended CSP settings. What should I do?
  • How bad is it to allow unsafe-inline? Can I use the nonce approach instead? Are there any prewritten/preapproved npm/pip/gem modules for generating nonces? How does that compare to hash source?  How do I know whether to use nonces or hashes? I heard that nonces don’t work well with static content. Is that true? Is there a way to have the server generate the hash at runtime so I don’t have to update CSPs every time I change a file?
  • How do we change our developer guides to help ensure that new code is written in a CSP compliant manner? Are there plugins for our IDEs to detect these things? Will our static analysis tools flag these issues? Are there pre-written guidelines which enumerate what you can and can’t do? Is there a way to style the code so that it is easier to handle the added complexity of cross-referencing between files?
  • Are there any published stats on rendering time when implementing CSP? What is the performance impact?
  • Is it OK to specify things like media-src, object-src or font-src and set them to ‘self’ even though we aren’t currently using them on the page? I want to limit how often I have to adjust the policy files in production. As long as they are set to ‘self’, then it shouldn’t be that big of a risk, right?

As you can see, the questions arising from deployment can get complicated quickly. For instance, it is trivial to define the ideal policy and how to set the header. However, a content security policy is the lowest common denominator of all your page dependencies: if the team integrates with multiple providers, then the policy will be whatever lowest common denominator is necessary to support all the providers linked by that page. How are you going to handle the situation where a third-party library requires “unsafe-inline unsafe-eval data: …”? Are your policies immutable, or will there be an exception process? What is the exception process?

While I can’t provide all the answers in a single blog, here are some thoughts on how to approach it:

  • Work on changing coding guidelines so that new code is CSP compliant. You will never catch up to where you need to be if new code continues to follow the existing development practices.
  • You can get a feel for the scope of the effort by setting up a test server with a few web pages from your site and experimenting with adding CSPs locally (a sample starting policy is sketched after this list).
  • Rather than going after the entire site, start with critical pages. The simple process of getting your first page to work will likely require a decent amount of effort such as training the team on what the rules mean, deciding what approach should be used by the server to add the headers, learning how to debug issues, identifying patterns, etc. For instance, a login page is critical in terms of security and it is likely to have fewer third-party dependencies than other pages. Once that first page is established, you can build on your CSP deployment incrementally from there.
  • Track the policies necessary to support different third-party providers and libraries. This will allow teams to share rules and limit the amount of debugging necessary for third-party code.
  • Search for blogs and talks where people discuss the actual deployment process rather than just the technical definitions. One example is this blog by Terrill Dent of Square: https://corner.squareup.com/2016/05/content-security-policy-single-page-app.html. For people who prefer full presentations, Michele Spagnuolo and Lukas Weichselbaum have conducted a lot of research in this area. Their HITB Amsterdam presentation detailed common mistakes made in CSP deployments: https://conference.hitb.org/hitbsecconf2016ams/materials/D1T2%20-%20Michele%20Spagnuolo%20and%20Lukas%20Weichselbaum%20-%20CSP%20Oddities.pdf (slides), https://www.youtube.com/watch?v=eewyLp9QLEs (video). They also recently presented “Making CSP great again” at OWASP AppSecEU 2016: https://www.youtube.com/watch?v=uf12a-0AluI&list=PLpr-xdpM8wG-Kf1_BOnT2LFZU8_SXfpKL&index=37
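For that local experimentation, one reasonable starting point is a deliberately strict policy deployed in report-only mode, so that violations are reported without breaking the page. A minimal sketch (the /csp-reports endpoint is a hypothetical path on your own server):

Content-Security-Policy-Report-Only: default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self'; report-uri /csp-reports

Once the report volume dies down and the remaining violations are understood, the same policy can be moved to the enforcing Content-Security-Policy header.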

It is trivial to email a team telling them they should deploy content security policies and describing all the wonderful benefits they can bring. However, the next step is to dig in and help the teams through the implementation process. In Spagnuolo and Weichselbaum’s AppSecEU presentation, they mentioned that they analyzed 1.6 million policies from the web and estimated that they were able to bypass the whitelist of at least 90% of them. Simply deploying CSPs, therefore, does not guarantee security. You will need to be prepared for all the design questions and policy decisions that will result from the effort in order to deploy them well. That preparation will pay dividends in creating and executing a successful CSP deployment plan with a development team.

Peleus Uhley
Principal Scientist

Join Members of our Security Team at AppSec Europe and Security of Things World

Our director of secure software engineering, Dave Lenoe, will be speaking at the upcoming Security of Things World conference in Berlin, Germany, June 27 – 28. In addition, two more members of our security team will also be speaking at the upcoming OWASP AppSec Europe conference in Rome, Italy, June 27 – July 1.

First up is Dave at Security of Things World. He will be speaking about how Adobe engages with the broader security community for both proactive and reactive assistance in finding and resolving vulnerabilities in our solutions. You can join him on Monday, June 27, at 2:30 p.m.

Next up will be Julia Knecht, security analyst for Adobe Marketing Cloud, at OWASP AppSec Europe to share lessons learned from developing and employing an effective Secure Product Lifecycle (SPLC) process for our Marketing Cloud solutions. This session will give you on-the-ground knowledge that may assist you in developing your own SaaS-ready SPLC that helps break down silos in your organization, making it more agile and effective at building secure solutions. Julia’s session will be on Thursday, June 30th, at 3:00 p.m.

Finally, Vaibhav Gupta, security researcher, will be leading a “lightning training” on the OWASP Zed Attack Proxy (ZAP) tool at OWASP AppSec Europe. ZAP is one of the world’s most popular free security tools and is actively maintained by hundreds of international volunteers. It helps you automatically find security vulnerabilities in your web applications while you are developing and testing them. This training is focused on helping you with ZAP automation to enable better integration of it into your DevOps environment. Vaibhav’s session will be on Friday, July 1st, at 10:20 a.m.

If you will be at either of these conferences next week, we hope you can join our team for their sessions and conversation after or in the hallways throughout the event.

Striving for a Security Culture

Security is a discipline that is forever changing. There are always new threats on the horizon, and we are constantly warning people that software is never as secure as we might like, since we cannot predict the future. Be alert! Never let your guard down! Think like an attacker! These are well-trodden refrains.

The truth is, however, none of us can be a perfect sentry. Yes, there is a lot of security work that can be automated and integrated into the product lifecycle, but there will always be a human element and humans are not perfect. They can get bored, complacent, and rely too much on crutches that have worked for them in the past. Also, there are limits to the amount of information that a person can take in. A focus on security can get lost amongst things like needing to create new features or meeting deadlines.

So how do we make sure that security awareness is not lost? That all the training we spend precious treasure creating doesn’t get lost in the noise?

The answer is to systematically work towards creating a culture of security.

A healthy security culture is one where folks have internalized your security protocols to the extent that desirable security actions are the default. For example: suspicious activity on a server? Immediately report it to the incident response team.

Culture is what you do with the information, education and tools that you have been exposed to. Creating an organizational culture strong in security principles and awareness requires identifying deliberate behaviors, creating opportunities for these behaviors to exist, rewarding them, and measuring the result. This can often be achieved through “gamification”.

So, how did we use this technique effectively? We wanted to foster a culture of open communication about security. We want everyone to be able to join the conversation, but we knew we were going to need to find allies in order to make the conversation authentic and genuine. We created an e-mail distribution list and we started inviting everyone who took our training to join. We also encouraged our security champions to join as watchers and helpers for any issues on the thread.

After a while, it became clear that our black and brown belts (the most advanced trainees) were very active on the list, researching and answering questions that would have cost our dedicated Security Researchers valuable time. We maintain a program where we give spontaneous positive feedback to managers, and the brown and black belt holders are consistently at the top of that list for their participation.

Over the years, the list has grown considerably; it is self-mediated and remains a thriving community where anyone can ask a question. It is the first place we go to make announcements about our security efforts, where we learn tips and tricks from across the company to help keep things more secure, and where we share the latest security news.

Let’s break down further what we did here:

  1. Declared a goal – open communication
  2. Created an e-mail list and advertised it to champions and thought leaders
  3. Let people interact without over-moderating
  4. Rewarded participants with positive feedback, engaged with their topics, and encouraged public praise
  5. Kept track of subscriber growth

Here are a few of the many dividends that have come from this tool to encourage our open security culture:

  • Security issues or potential problems within our products can be more quickly identified
  • We are often made aware of cutting edge industry and academic research that might be useful
  • There is a sense of “community” amongst members of this list
  • It has become easier to spread information to the right people very quickly

As mentioned above, security is an ever moving target. Security culture efforts need to adapt to meet that target. The culture of openness example above is a broad target designed to change a complex set of behaviors. You can use the same strategy to address more simple behaviors – like badge surfing or locking computer screens. Prioritize the behaviors you want to change, then focus on them, reward people for even the smallest of victories and measure the difference in the occurrence of the behavior.

Josh Kebbel-Wyen
Sr. Security Awareness and Training Program Manager

Building a Team of Digital Marketing Security Champions

Two years ago, I joined the Digital Marketing Product Security Team and took on the responsibility of establishing and managing the Secure Product Lifecycle (SPLC) process for Digital Marketing Product Engineering. There are currently eight Digital Marketing Solutions with engineering teams located all over the world.  Many of these solutions came to Adobe by way of acquisition.  I work with differing stacks, languages, company cultures, and time zones.  I knew some of the engineers from having run our 3rd Party Penetration Testing program for three years – however, I was mostly starting the process from scratch. My main goal was to lower security overhead in the product development cycle and leverage existing processes.

I am very passionate about quickly making improved security an integrated part of our product development and leveraging as many existing processes and tools as possible. In order to promote security knowledge throughout the large Digital Marketing engineering organization, I created a human “botnet” of security champions. These champions come from positions all over the organization and coordinate with our security team to facilitate ongoing management and enforcement of our SPLC process.

Security, admittedly, has a bit of an “image problem” among development teams.  It is something that developers often think of as this big, scary set of tasks intended to make their jobs more difficult or less enjoyable.  We placed a big emphasis on changing this perception. The Digital Marketing Security team is focused on being a supportive, service organization – a far cry from the perception that we can be a terrible force of nature leaving engineers feeling like they’ve been hit by a truck or would like to be.  Rather than coming in with the metaphorical hammer, we thought, “can we get people to actually enjoy their interactions with our security team?  How can we make this incredibly important, but often dreaded, piece of software development an integral and easier to implement piece of the existing process?”

The first thing I did was to meet with the solution owners and program managers to learn about how these teams develop and deliver software for these SAAS offerings.  Adobe has an incredible program management network, and an existing Service Lifecycle program that I was able to leverage and adapt to help meet requirements of our Secure Product Lifecycle.  I worked with the program managers to figure out how we could best add SPLC steps to their development and release process. I also ensured we had a clear process for adding security requirements and checkpoints to the release process. I worked with solution engineering directors to identify Security Champions on their engineering teams who would work with me to continue to improve our approach to security for the solutions.

A Security Champion is an advocate of security and the Digital Marketing Security team’s point of contact for the solution. The champion has a good understanding of the technology, an interest in ensuring better security for their offering, and a strong personal network in the engineering organization. Once this human “botnet” of Security Champions was established, the heavy lifting began. I set key performance indicators (KPIs) for the different elements of the SPLC around security training, threat modeling, static/dynamic analysis, and penetration testing. The very first KPI that we focus on, to establish the proper background for having security conversations with the engineers, is technical security training. Adobe’s corporate secure software engineering team (known as “ASSET”) has created a fantastic training program that focuses on technical security topics and awards certifications in the form of white and green belts, similar to karate training. Each of the program managers has added this training to the new-engineer onboarding steps, and they and the security champions have helped to develop strong measurements for the other KPIs.

My Security Champions helped increase the pervasiveness of our “security culture” more than I could have imagined when first starting this program.  They are one of the driving forces in helping to further improve security across Adobe’s Digital Marketing solutions.  They have been an amazing force multiplier helping to prioritize security practices in their teams’ design process, roadmap development, and mindset.

About 6 months after kicking off the Security Champions program, Digital Marketing Engineering had grown their base of security knowledge to have over 95% of their engineers white and green belt certified.  We’d also increased the number of threat models, penetration tests, ongoing security projects, and automated security testing. Our metrics against these initiatives have continued to increase and improve. The teams are more proactively involving the Digital Marketing and corporate security teams in their design discussions helping to ensure better security implementations throughout the process.

Messages like this from the teams show it’s working and make it all worth it:

[Screenshot: a thank-you message from one of the engineering teams]

We’re committed to building and maintaining the trust of our Digital Marketing customers by developing and providing them with the most secure software possible – solutions that help meet business demands and allow configurations to help meet their security and compliance needs.  The SPLC and Security Champions program have helped to broaden the security knowledge and awareness of the Digital Marketing engineering teams.  We will continue to raise that bar by continuing to iterate and improve on these programs.

Julia Knecht
Security Analyst, Digital Marketing

Adobe CCF Enables Quicker Adherence to Updated PCI Standards

The Adobe.com e-commerce store has been a PCI Level 1 certified merchant for the last few years. Adobe has significantly reduced the scope of its Cardholder Data Environment (CDE) by using an external tokenization solution, and it maintains a PAN-free environment by not storing any Primary Account Numbers (PANs) in its internal network. Adobe has implemented its Common Controls Framework (CCF) within the Cardholder Data Environment, which allows it to use the same set of controls to meet the requirements set forth by the Payment Card Industry Data Security Standard (PCI DSS) v3.1 and many other security and compliance frameworks, such as ISO 27001:2013 and SOC 2, among others. CCF is a set of approximately 250 controls designed specifically for Adobe’s business; it rationalizes the overlapping requirements across 10 different compliance and security frameworks.

PCI Security Standards Council (PCI SSC) recently released the latest version of the Data Security Standard V3.2. One of the notable changes in the PCI DSS V3.2 is the additional clarification provided around the use of multi-factor authentication for all administrative and remote access to the CDE.

PCI DSS V3.2 reference:

“8.3 Secure all individual non-console administrative access and all remote access to the CDE using multi-factor authentication.”

By implementing CCF within the CDE, Adobe has already established a baseline control requiring that all remote VPN sessions and production environments be accessed via multi-factor authentication. This baseline control was adopted to meet the requirements established by the more stringent of the compliance frameworks, hence allowing Adobe to already be compliant with the clarifications provided in PCI DSS v3.2 around multi-factor authentication.

Prasant Vadlamudi
Manager, Risk Advisory and Assurance Services (RAAS)

Fingerprinting a Security Team

The central security team in a product development organization plays a vital role in implementing a secure product lifecycle process.  It is the team that drives the central security vision for the organization and works with individual teams on their proactive security needs.   I lead the technical team of proactive security researchers in Adobe. They are all recognized security experts and are able to help the company adapt to the ever changing threat landscape.  Apart from being on top of the latest security issues and potential mitigations that may need to be in place, the security team also faces challenges of constant skill evolution and remaining closely aligned to the business.

This post focuses on the challenges faced by the security team and potential ways to overcome them.

Increase in technologies as a function of time.

A company’s product portfolio is a combination of its existing products, new product launches, and acquisitions intended to help bridge product functionality gaps or expand into new business areas.  Over time, this brings a wide variety of technologies and architectures into the company.  Moreover, the pace of adoption of new technologies is much higher than the pace of retiring older technologies.  Therefore, the central security team needs to keep up with the newer technology stacks and architectures being adopted while also maintaining a manageable state with existing ones. An acquisition can further complicate this due to an influx of new technologies into the development environment in a very short period of time.

Security is not immune to business evolution.

The cloud and mobile space have forced companies to rethink how they should offer products and services to their customers.  Adobe went through a similar transformation from being a company that offers desktop products to one that attempts to strike the right balance between desktop, cloud, and mobile.  A security team needs to also quickly align with such business changes.

Multi-platform comes with a multiplication factor.

When the same product is offered on multiple operating systems, on multiple form factors (such as mobile and desktop), or deployed on multiple infrastructures, security considerations can increase due to the unique qualities of each platform. The central security team needs to be aware of and fluent in these considerations to provide effective proactive advice.

Subject matter expertise has limitations.

Strong subject matter expertise helps security teams’ credibility in imparting sound security advice to teams.  For security sensitive areas, experts in the team are essential to providing much deeper advice.  That said, any one individual cannot be an expert on every security topic.  Expertise is something that needs to be uniformly distributed through a team.

These challenges can be addressed by growing the team organically and through hiring. Hiring to acquire new skills alone is not the best strategy – the skills required today will be outdated tomorrow. A security team therefore needs to adopt strategies that allow it to constantly evolve and stay current. A few such strategies are discussed below.

T-Shaped skills.

Security researchers in a security team should aim for a T-Shaped skill set.  This allows for a fine balance between breadth and depth in security. The breadth is useful to help cover baseline security reviews.  The depth helps researchers become specific security subject matter experts. Having many subject experts strengthens the overall team’s skills because other team members learn from them and they are also available to provide guidance when there is a requirement in their area of expertise.

Strong Computer Science foundations.

Product security is an extension of engineering work. Security requires understanding good design patterns, architecture, code, testing strategies, etc. Writing good software requires strong foundations in computer science, irrespective of the layer of the technology stack one ends up working on. Strong computer science skills can also make security skills language- and platform-agnostic. With strong computer science skills, a security researcher can learn a new security concept once and then apply it to different platforms as needed; with such fundamentals, the cost of figuring out the “how” on new platforms is relatively small.

Hire for your gaps but also focus on ability to learn quickly.

A working product has many pieces and processes that make it work. If you can form a mental image of what it takes to make software, you can more clearly see the strengths and weaknesses in your security team. For example, engineering a service requires a good understanding of code (and the languages of choice), frameworks, technology stacks (such as queues, web servers, backend databases, third-party libraries), the infrastructure used for deploying, TLS configurations, testing methodologies, the source control system, the overall design and architecture, the REST interface, interconnections with various other services, and the tool chain involved – the list is extensive. When hiring, one facet to evaluate in a candidate is whether he or she brings security strengths to the team through passion and past job experience that can fill the team’s existing gaps. However, it can be even more important to evaluate the candidate’s willingness to learn new skills. The ability to learn, adapt, and not be held captive to one existing skill set is an important factor to look for in candidates during hiring. The secondary goal is to add a variety of security skills to the team and avoid duplicating the skill set already in the team.

“Skate where the puck’s going, not where it’s been.”

To stay current with the business needs and where engineering teams are headed, it is important for a security team to spend a portion of its time investigating the security implications of newer technologies being adopted by the product teams. As Wayne Gretzky famously said, “you want to skate where the puck’s going, not where it’s been.” However, security teams need to cover larger ground: you have to stay current with new technologies being adopted, yet older technologies still get used in the company, as only some teams may move away from them. So it would be wise to maintain expertise in those older areas while aiming to move teams away from technologies that become more difficult to secure effectively. Predicting future areas of investment is difficult. Security teams can make that task easier by looking at industry trends and by talking to engineering teams to find out where they are headed. The managers of a security team also have a responsibility to stay informed about new technologies, as well as the future directions their respective companies may take, in order to invest in newer areas to grow the team.

Go with the flow.

If a business has taken a decision to invest in cloud or mobile or change the way it does business, a security team should be among the first in the company to detect this change and make plans to adapt early.  If the business moves in a certain direction and the security team does not, it can unfortunately label a team as being one that only knows the older technology stack.  Moreover, it is vital for the security team to show alignment with a changing business. It is primarily the responsibility of the security team’s leadership to detect such changes and start planning for them early.

Automate and create time.

If a task is performed multiple times, the security team should evaluate if the task can be automated or if a tool can do it more efficiently.  The time reduced through automation and tooling can help free up time and resources which can then be used to invest in newer areas that are a priority for the security team.

Growing a security team can have many underlying challenges that are not always obvious to an external observer.  The industry’s primary focus is on the new threat landscapes being faced by the business.  A healthy mix of organic growth and hiring will help a security team adapt and evolve continuously to the changes being introduced by factors not in their direct control.  It is the responsibility of both security researchers and the management team to keep learning and to spend time detecting any undercurrents of change in the security space.

Mohit Kalra
Sr. Manager, Secure Software Engineering