Security Automation for PCI Certification of the Adobe Shared Cloud

Software engineering is a unique and exciting profession. Engineers must employ continuous learning habits in order to keep up with a constantly morphing software ecosystem. This is especially true in the software security space. The continuous introduction of new software also means new security vulnerabilities are introduced. The problem at its core is actually quite simple: it’s a human problem. Engineers are people, and, like all people, they sometimes make mistakes. These mistakes can manifest themselves in the form of ‘bugs’ and usually occur when the software is used in a way the engineer didn’t expect. When these bugs are left uncorrected, they can leave the software vulnerable. Some mistakes are bigger than others, and many are preventable. However, as they say, hindsight is always 20/20. You need not experience these mistakes yourself to learn from them. As my father often told me, a smart person learns from his mistakes, but a wise person learns from the mistakes of others. And so it goes with software security. In today’s software world, it’s not enough to just be smart; you also need to be wise.

After working at Adobe for just shy of five years, I have achieved the coveted rank of ‘Black Belt’ in Adobe’s security program through the development of some internal security tools and assisting in the recent PCI certification of the Shared Cloud (the internal platform upon which Creative Cloud and Document Cloud are based). Through Adobe’s security program, my understanding of security has certainly broadened. I earned my white belt within just a few months of joining Adobe; it consisted of some online courses covering very basic security best practices. When Adobe created the role of “Security Champion” within every product team, I volunteered. Seeking to make myself a valuable resource to my team, I quickly earned my green belt, which consisted of completing several advanced online courses covering a range of security topics from SQL injection and XSS to buffer overflows. I now had two belts down, only two to go.

At the beginning of 2015, the Adobe Shared Cloud team started down the path of PCI compliance. When it became clear that a dedicated team would be needed to manage this, a few others and I made a career shift from software engineer to security engineer in order to form a dedicated security team for the Shared Cloud. To bring ourselves up to speed, we began attending BlackHat and OWASP conferences to further our expertise. We also started the long, arduous task of breaking down the PCI requirements into concrete engineering tasks. It was out of this PCI effort that I developed three tools – one of which would earn me my Brown Belt, and the other two my Black Belt.

The first tool came from the PCI requirement that you track all third-party software libraries for vulnerabilities and remediate them based on severity. Working closely with the ASSET team, we developed an API that allows teams to push their products’ dependencies and versions as applications are built. Once that was completed, I wrote an integrated and highly configurable Maven plugin which consumed the API at build time, thereby helping to keep applications up to date automatically as part of our continuous delivery system. After completing this tool, I submitted it as a project and was rewarded with my Brown Belt. My plugin has since been adopted by several teams across Adobe.
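
The internal API and Maven plugin aren’t public, so the following is only a rough sketch of the underlying idea in Python: at build time, push the application’s resolved dependency list to a tracking service. The endpoint URL and payload shape below are made up for illustration.

```python
import json
import urllib.request

# Hypothetical endpoint -- the internal Adobe API is not public, so this only
# sketches the "push dependencies at build time" idea.
TRACKER_URL = "https://deps.example.com/api/v1/builds"

def report_dependencies(app_name, app_version, dependencies):
    """Push an application's resolved dependency list to a tracking service."""
    payload = {
        "application": app_name,
        "version": app_version,
        # e.g. [{"group": "org.apache.commons", "artifact": "commons-lang3", "version": "3.4"}]
        "dependencies": dependencies,
    }
    request = urllib.request.Request(
        TRACKER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status == 200

if __name__ == "__main__":
    deps = [{"group": "org.apache.commons", "artifact": "commons-lang3", "version": "3.4"}]
    report_dependencies("shared-cloud-service", "1.2.0", deps)
```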

The second tool also came from a PCI requirement. It states that no changes are allowed on production servers without a review step, including code changes. At first glance this doesn’t seem so bad – after all, we were already doing regular code reviews. So it shouldn’t be a big deal, right? WRONG! The burden of PCI is that you have to prove that changes were reviewed and that no change was allowed to go to production without first being reviewed. There were a number of manual approaches that one could take to meet this requirement. But who wants the hassle and overhead of such a manual process? Enter my first Black Belt project – the Git SDLC Enforcer Plugin. I developed a Jenkins plugin that runs when a merge lands on a project’s release branch. The plugin reviews the commit history and helps ensure that every commit belongs to a pull request and that each pull request was merged by someone other than its author. If any offending commits or pull requests are found, the build fails and an issue is opened on the project in its GitHub space. This turned out to be a huge time saver and a very effective mechanism for ensuring that every change to the code is reviewed.
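
The plugin itself isn’t public, but the kind of check it performs can be sketched against the public GitHub API. The repository name, base branch, and token handling below are hypothetical; the sketch flags closed pull requests that were merged by their own author.

```python
import os
import requests

# Hypothetical repository and token -- a sketch of the kind of check the
# Jenkins plugin performs, using the public GitHub API.
GITHUB_API = "https://api.github.com"
REPO = "example-org/example-repo"
HEADERS = {"Authorization": f"token {os.environ.get('GITHUB_TOKEN', '')}"}

def self_merged_pull_requests(repo):
    """Return pull requests that were merged by their own author."""
    offenders = []
    url = f"{GITHUB_API}/repos/{repo}/pulls?state=closed&base=release"
    for pr in requests.get(url, headers=HEADERS).json():
        if not pr.get("merged_at"):
            continue  # closed without merging
        # The list endpoint does not include merged_by, so fetch the full PR.
        detail = requests.get(pr["url"], headers=HEADERS).json()
        merged_by = (detail.get("merged_by") or {}).get("login")
        if merged_by == pr["user"]["login"]:
            offenders.append(pr["number"])
    return offenders

if __name__ == "__main__":
    bad = self_merged_pull_requests(REPO)
    if bad:
        raise SystemExit(f"Self-merged pull requests found: {bad}")
```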

The project that finally earned me my Black Belt, however, was the development of a tool that will eventually fully replace the Adobe Shared Cloud’s secret injection mechanism. When working with Amazon Web Services, you have a bit of a chicken-and-egg problem when you begin to automate deployments: at some point, you need an automated way to get the right credentials into the EC2 instances that your application needs to run. Currently, the Shared Cloud’s secrets management leverages a combination of custom-baked AMIs, IAM roles, S3, and encrypted data bags stored in the “Hosted Chef” service. For many reasons, we wanted to move away from Chef’s managed solution and add additional layers of security, such as the ability to rotate encryption keys, logging of access to secrets, the ability to restrict access to secrets based on environment and role, and full auditability. The new tool was designed to be a drop-in replacement for “Hosted Chef” – this made it easier to implement in our baking tool chain – and it replaces the data bag functionality provided by the previous tool while adding the additional security functionality described above. The tool works splendidly, and by the end of the year the Shared Cloud will be using it exclusively, resulting in a much more secure, efficient, reliable, and cost-effective mechanism for injecting secrets.
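
The replacement tool is internal, so the following is only a minimal sketch of the general pattern described above (an EC2 instance using its IAM role to fetch a KMS-encrypted secret from S3), not the tool itself. The bucket and object names are made up.

```python
import boto3

# Hypothetical bucket/key names -- a rough sketch of the IAM-role + S3 + KMS
# pattern described above, not the internal replacement tool itself.
SECRET_BUCKET = "example-secrets-bucket"
SECRET_KEY = "prod/payment-service/db-password"

def fetch_secret(bucket, key):
    """Fetch a secret object from S3 using the instance's IAM role credentials.

    If the object was uploaded with SSE-KMS, S3 decrypts it transparently as
    long as the role has kms:Decrypt on the key, so access is both restricted
    by role and auditable through CloudTrail.
    """
    s3 = boto3.client("s3")  # credentials come from the EC2 instance profile
    response = s3.get_object(Bucket=bucket, Key=key)
    return response["Body"].read().decode("utf-8")

if __name__ == "__main__":
    print(fetch_secret(SECRET_BUCKET, SECRET_KEY))
```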

My takeaway from all this is that Adobe has developed a top-notch security training program, and even though I have learned a ton about software security through it, I still have much to learn. I look forward to continuing to make a difference at Adobe.

Jed Glazner
Security Engineer

IT Asset Management: A Key in a Consistent Security Program

IT Asset Management (ITAM) is the complete and accurate inventory, ownership, and governance of IT assets. ITAM is an essential, and often required, foundation for an organization’s ability to implement baseline security practices and become compliant with rigorous industry standards. As IT continues to transform, organizations face the challenge of maintaining an accurate inventory of IT assets that consist of both physical and virtual devices, as well as static and dynamic spin-up/spin-down cloud infrastructures.

The absence of ITAM can result in a lack of asset governance and inaccurate inventory. Without a formalized process, companies might unknowingly be exposed to insecure assets that are open to exploitation. By contrast, proper ITAM helps enable organizations to leverage a centralized and accurate record of inventory against which security measures can be implemented and applied consistently across the organization’s environment.

Risks Without ITAM

Assets that are not inventoried and tracked in an ITAM program present a very real and critical risk to the business. Unknown assets seldom have an appropriate owner identified and assigned. In essence, nobody within the organization owns the responsibility of ensuring that the unknown asset is sufficiently governed or secured. As a result, unknown assets can quickly fall out of sync with regulatory or compliance requirements, leaving them vulnerable to exploitation.

In a world of constant patches and hotfixes, an unknown asset can become vulnerable after only a single missed update. Bad actors rarely attack the well-known and security-hardened asset. It is far more common for a bad actor to patiently traverse the organization’s network, waiting to attack until they have identified an asset which the organization itself doesn’t know exists.

Benefits of ITAM

Before a company can sufficiently implement programs designed to protect its operational assets, it must first have the ability to identify and inventory those assets. Companies should put into place processes and controls to automate the inventorying of assets obtained via procurement and virtual machine provisioning. Assets can be inventoried and continuously tracked using a Configuration Management Database (CMDB). Each asset can be inventoried in the CMDB and assigned an owner, who is responsible for asset governance and maintenance until the decommission, or destruction, of the asset.

Processes must also be put into place to continuously monitor and update the CMDB inventory. One example of how Adobe monitors its CMDB is by leveraging operational security controls. For example, Adobe performs an analysis to determine whether all assets sending logs to a corporate log server are known assets inventoried in the CMDB. If an asset is not inventoried in the CMDB, then it is categorized as an unknown asset. Once unknown assets are identified, further analysis is performed so that the asset can be added to the CMDB and an appropriate owner assigned.
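
A minimal sketch of that reconciliation logic, with made-up host data standing in for the log server and CMDB exports:

```python
# Hypothetical data sources -- a sketch of the reconciliation described above:
# compare hosts seen by the corporate log server against the CMDB inventory.

def find_unknown_assets(log_source_hosts, cmdb_hosts):
    """Return hosts that send logs but are not inventoried in the CMDB."""
    return sorted(set(log_source_hosts) - set(cmdb_hosts))

if __name__ == "__main__":
    seen_in_logs = ["web-01", "web-02", "db-01", "legacy-ftp-01"]
    inventoried = ["web-01", "web-02", "db-01"]
    for host in find_unknown_assets(seen_in_logs, inventoried):
        print(f"Unknown asset, needs triage and an owner: {host}")
```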

At Adobe, we have created the Adobe Common Controls Framework (CCF), a set of control requirements rationalized from the complex array of industry security, privacy, and regulatory standards. CCF provides the necessary controls guidance to assist teams with asset management. ITAM helps provide Adobe internal, as well as third-party external, auditors a centralized asset repository they can leverage to gain reasonable assurance that security controls have been implemented and are operating effectively across the organization’s environment.

As described above, maintaining a complete and accurate IT asset inventory in an organization of any size is no easy task. However, when implemented correctly, ITAM allows organizations to consistently apply security controls across the operating environment, helping reduce the attack surface available to potential bad actors. If organizations are not aware of where their assets are, how can they reasonably know what assets they need to protect?

Matt Carroll
Sr. Analyst, Risk Advisory and Assurance Services (RAAS)

 

Do You Know How to Recognize Phishing?


By now, most of us know that the email from the Nigerian prince offering us large sums of money in return for our help to get the money out of Nigeria is a scam. We also recognize that the same goes for the email from our bank that is laden with spelling errors. However, phishing attacks have become more sophisticated over the years, and for the most part, it has become much harder to tell the difference between a legitimate piece of communication and a scam.

In recognition of National Cyber Security Awareness Month, we asked a nationally representative sample of ~2,000 computer-owning adults in the United States about their behaviors and knowledge when it comes to cybersecurity. This week, we’ll share some of the insights from our survey related to phishing—as well as resources and tips on how you can better protect yourself from falling victim to phishing attacks.

What is phishing?

Phishing is a form of fraud in which the attacker tries to learn information such as login credentials or account information by masquerading as a legitimate, reputable entity or person in email, instant messages (IMs), or other communication channels. Examples would be an email from a bank that is carefully designed to look like a legitimate message but that comes from a criminally motivated source; a phone message claiming to be from the Internal Revenue Service (IRS), threatening large fines unless you immediately pay what you supposedly owe; or the email from the Nigerian prince pleading for your compassion and promising a large reward. Attackers typically create these communications in an effort to steal money, personal information, or both. Phishing emails or IMs are typically designed to make you click on links or open attachments that look authentic but are really just there to distribute malware on your machine or to capture your credit card number in a form on the attacker’s site.

So do YOU know how to recognize phishing?

For the purpose of this blog post, we’ll focus on phishing emails as the attacker’s choice of communication. According to our survey, 70 percent of adults in the United States believe they can identify a phishing email. That percentage rises to 80 percent among Millennials.[i] Yet nearly four (4) in 10 people believe they have been victims of phishing. This goes to show that it’s not as easy to detect phishing emails as it may sound! Here are six tips to help you identify whether you’ve received a “phishy” email:

1. The email urges you to take immediate action

Phishing emails often try to trick you into clicking a link by claiming that your account has been closed or put on hold, or that there’s been fraudulent activity requiring your immediate attention. Of course, it’s possible you may receive a legitimate message informing you to take action on your account. To be safe, don’t click on the link in the email, no matter how authentic it appears to be. Instead, log into the account in question directly by visiting the appropriate website, then check your account status.

2. You don’t recognize the email sender

Another common way to identify a phishing email is if you don’t recognize the email sender. Two-thirds of those individuals we surveyed who believe they can identify a phishing email noted a top indicator to be whether or not they recognized the sender. However, our survey results also show that despite the warning signs, more than four (4) in 10 U.S. adults will still open the email—and among those, nearly half would click on a link inside—potentially putting themselves at risk.

3. The hyperlinked URL is different from the one shown

The hyperlink text in a phishing email may include the name of a legitimate bank. But when you hover the mouse over the link (without clicking on it), you may discover in a small pop-up window that the actual URL differs from the one displayed and that it doesn’t contain the bank’s name. Similarly, you can hover your mouse over the address in the “From” field to see if the website domain matches that of the organization the email is supposed to have been sent from.

4. The email in question has improper spelling or grammar

This is one of the most common signs that an email isn’t legitimate. Sometimes, the mistake is easy to spot, such as “Dear Costumer” instead of “Dear Customer.”

Other mistakes might be more difficult to spot, so make sure to look at the email in closer detail. For example, the subject line or the email itself might say “Health coverage for the unemployeed.” The word “unemployed” isn’t exactly difficult to spell, and any legitimate organization should have editors who review marketing emails carefully before sending them out. So when in doubt, check the email closely for misspellings and improper grammar.

5. The email requests personal information

Reputable organizations don’t ask for personal customer information via email. For example, if you have a checking account, your bank already knows your account number.

6. The email includes suspicious attachments

It would be highly unusual for a legitimate organization to send you an email with an attachment, unless it’s a document you’ve requested or are expecting. As always, if you receive an email that looks in any way suspicious, never click to download the attachment, as it could be malware.

What to do when you think you’ve received a phishing email

Report potential phishing scams. If you think you’ve received a phishing email from someone posing as Adobe, please forward that email to phishing@adobe.com, so we can investigate.

Google also offers online help for reporting phishing websites and phishing attacks. And last but not least, the U.S. government offers valuable tips for protecting yourself from phishing scams as well as an email address for reporting scams: phishing-report@us-cert.gov.

So while the Nigerian prince has become a lot more sophisticated in his tactics, there is a lot you can do to help protect yourself. Most importantly, trust your instincts. If it smells like a scam, it might very well be a scam!


[i] Millennials are considered individuals who reached adulthood around the turn of the 21st century. If you are in your mid-30s today, you are considered a Millennial.

Security Automation Part I: Defining Goals

This is the first of a multi-part series on security automation. This blog will focus on high-level design considerations. The next blog will focus on technical design considerations for building security automation. The third blog will dive even deeper with specific examples as the series continues to get more technical.

There are many possible approaches for adding automation to your security process. For many security engineers, it is an opportunity to get away from reviewing other engineers’ code and write some of their own. One key difference between a successful automation project and “abandonware” is creating a project that produces meaningful results for the organization. In order to accomplish that, it is critical to have a clear idea of what problem you are trying to solve at the outset of the project.

Conceptual aspirations

When choosing the right path for designing security automation, you need to decide what will be the primary goals for the automation. Most automation projects can be grouped into common themes:

Scalability

Scalability is something that most engineers instinctively go to first because the cloud has empowered researchers to do so much more. Security tools designed for scalability often focus on the penetration testing phase of the software development lifecycle. They often involve running black or grey box security tests against the infrastructure in order to confirm that the service was deployed correctly. Scalability is necessary if you are going to measure every host in a large production environment. While scalability is definitely a powerful tool that is necessary in the testing phase of your security development lifecycle, it sometimes overshadows other important options that can occur earlier in the development process and that may be less expensive in terms of resources.

Consistency

There is a lot of value in being able to say that “Every single release has ___.” Consistency isn’t considered as challenging a problem as scalability because it is often taken for granted. Consistency is necessary for compliance requirements, where public attestations need to have clear, simple statements that customers and auditors can understand. In addition, “special snowflakes” in the environment can drown an organization’s response when issues arise. Consistency automation projects frequently focus on the development or build phase of the software development lifecycle. They include integrating security tasks into build tools like Jenkins, Chef, Git, or Maven. By adding security controls into these central tools in the development phase, you can have reasonable confidence that machines in production are correctly configured without scanning each and every one individually.

Efficiency

Efficiency projects typically focus on improving operational processes that currently involve too much human interaction. The goal is to refocus the team’s manual effort onto more important tasks. For instance, many efficiency projects have the word “tracking” somewhere in their definition and involve better leveraging tools like JIRA or SharePoint. Oftentimes, efficiency automation is purchased rather than built, because you aren’t particularly concerned with how the problem gets solved, so long as it gets solved and you aren’t the one who has to maintain the code for it. That said, Salesforce’s open-source VulnReport.io project (http://vulnreport.io) is an example of a custom-built efficiency tool which they claim improved operational efficiency and essentially resulted “in a ‘free’ extra engineer for our team.”

Metrics

Metrics gathering can be a project in itself or it can be the byproduct of a security automation tool. Metrics help inform and influence management decisions with concrete data. That said, it is important to pick metrics that can guide management and development teams towards solving critical issues. For instance, development teams will interpret the metrics gathered as the key performance indicator (KPI) that they are being measured against by management.

In addition, collecting data almost always leads to requests for more detailed data. This can be useful in helping to understand a complex problem or it can be a distraction that leads the team down a rabbit hole. If you take time to select the proper metrics, then you can help keep the team focused on digging deeper into the critical issues.

Operational Challenges

If your scalable automation project aims to run a web application penetration tool (WAPT) across your entire enterprise, then you are basically creating an “enterprise edition” of that tool. If you have used enterprise edition WAPTs in the past and you did not achieve the success that you wanted, then recreating the same concept with a slightly different tool will most likely not produce significantly different results when it comes to action within the enterprise. The success or failure of a tool typically hinges on the operational process surrounding the tool more than on the tool itself. If there is no plan for handling the output from the tool, then increasing the scale at which the tool is run doesn’t really matter. When you are designing your automation project, consider operational questions such as:

Are you enumerating a problem that you can fix?

Enumerating an issue that the organization doesn’t have the resources to address can sometimes help justify getting the funding for solving the problem. On the other hand, if you are enumerating a problem that isn’t a priority for an organization, then perhaps you should refocus the automation on issues that are more critical. If no change occurs as the result of the effort, then the project will stop iterating because there is no need to re-measure the issue.

In some situations, it may be better to tackle the technical debt of basic issues before tackling larger issues. Tests for basic technical debt issues are often easier to create and they are easier for the dev team to address. As the dev team addresses the issues, the project will also iterate in response. While technical debt isn’t as exciting as the larger issues, addressing it may be a reasonable first step towards getting immediate ROI.

Are you producing “noise at scale”?

Running a tool that is known for creating a high level of false positives at scale will produce “noise at scale”. Unless you have an “at scale” triage team to eliminate the false positives, then you are just generating more work for everyone. Teams are less likely to take action on metrics that they believe are debatable due to the fear that their time might be wasted. A good security tool will empower the team to be more efficient rather than drown them in reports.

How will metrics guide the development team?

As mentioned earlier, the metric will be interpreted as a KPI for the team and they will focus their strategy around what is reported to management. Therefore, it makes sense not to bother measuring non-critical issues since you don’t want the team to get distracted by minor issues. You will want to make sure that you are collecting metrics on the top issues in a way that will encourage teams towards the desired approach.

Oftentimes there are multiple ways to solve an issue and therefore multiple ways to measure the problem. Let’s assume that you wanted to create a project to tackle cross-site scripting (XSS). Creating metrics that count the number of XSS bugs will focus a development team on a bug-fixing approach to the problem. Alternatively, counting the number of sites with Content Security Policy headers deployed will focus the development team on security mitigations for XSS. In some cases, focusing the team on security mitigations has more immediate value than focusing on bug fixing.

What metrics does management need to see?

One method to determine how your metrics will drive development teams and inform management is to phrase them as assertions. For instance, “I assert that the HSTS header is returned by all our sites in order to ensure our data is always encrypted.” By phrasing it as an assertion, you are phrasing the test in simple true/false terms that can be reliably measured at scale. You are also phrasing the test in terms of its desired outcome. This makes it easier to determine whether the goal implied by the metric’s assertion meets management’s goals.
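
As an illustration, an assertion like the HSTS example above can be turned into a small true/false test per site. The site list below is hypothetical:

```python
import requests

# Hypothetical site list -- a sketch of the assertion "the HSTS header is
# returned by all our sites", phrased as a true/false test per site.
SITES = ["https://www.example.com", "https://api.example.com"]

def asserts_hsts(url):
    """True if the site returns a Strict-Transport-Security header."""
    response = requests.get(url, timeout=10)
    return "Strict-Transport-Security" in response.headers

if __name__ == "__main__":
    results = {site: asserts_hsts(site) for site in SITES}
    passed = sum(results.values())
    print(f"HSTS assertion holds for {passed}/{len(results)} sites")
    for site, ok in results.items():
        print(f"  {'PASS' if ok else 'FAIL'}  {site}")
```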

From a management perspective, it is also important to understand whether you are taking a measurement of change or a measurement of adoption. Measuring the number of bugs in an application is often measuring an ongoing wave. If security programs are working, the height of the waves will trend down over time. However, that means you have to watch the wave through multiple software releases before you can reliably see a trend in its change. If you measure security mitigations, then you are measuring the adoption rate of a project that will end in a state of relative “completeness.” Tracking wave metrics over time is valuable because you need to see when things are getting better or worse. However, since it is easy to procrastinate on issues that are open-ended, adoption-style projects that can be expressed as having a definitive “end” may get more immediate traction from the development team, because you can set a deadline that needs to be met.

Putting it all together

With these ideas and questions in mind, you can mentally weigh which types of projects to start with for immediate ROI and the different tools for deploying them.

For instance, tests that count XSS and blind SQL injection bugs are hard to set up (authentication to the application, crawling the site, etc.), they frequently produce false positives, and they typically result in the team focusing on bug fixing, which requires in-depth monitoring over time because it is a wave metric. In contrast, tests that measure security headers, such as X-Frame-Options or HSTS, are simple to write, they have low false-positive rates, they can be defined as “(mostly) done” once the headers are set, and they focus the team on mitigations. Another easy project might be writing scalable tests that confirm the SSL configuration meets company standards. Therefore, if you are working on a scalability project, starting with a simple SSL or security-header project can be a quick win that demonstrates the value of the tool. From there, you can progress to measuring the more complex issues.

However, let’s say you don’t have the compute resources for a scalability project. An alternative might be to turn the projects into consistency style projects earlier in the lifecycle. You could create Git or Jenkins extensions that search the source code for indicators that the team has deployed security headers or proper SSL configurations. You would then measure how many teams are using the build platform extensions and review the collected results from the extension. It would have a similar effect as testing the deployed machines without as much implementation overhead. Whether this will work better overall for your organization will depend on where you are with your security program and its compliance requirements.
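
A minimal sketch of such a consistency-style check, run from a build step against the repository working copy; the header patterns searched for are illustrative:

```python
import os
import re

# Hypothetical patterns -- a sketch of a consistency-style check that runs in
# the build (e.g. as a Jenkins step) and looks for indicators that security
# headers are configured somewhere in the source tree.
HEADER_PATTERNS = [
    re.compile(r"Strict-Transport-Security", re.IGNORECASE),
    re.compile(r"X-Frame-Options", re.IGNORECASE),
]

def repo_configures_headers(repo_root):
    """True if any source or config file mentions the expected headers."""
    for dirpath, _, filenames in os.walk(repo_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as handle:
                    text = handle.read()
            except OSError:
                continue
            if any(pattern.search(text) for pattern in HEADER_PATTERNS):
                return True
    return False

if __name__ == "__main__":
    if not repo_configures_headers("."):
        raise SystemExit("No security-header configuration found in this repository")
```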

Conclusion

While the technical details of how to build security automation are an exciting challenge for any engineer, it is critical to build a system that will empower the organization. Your project will have a better chance of success if you spend time considering how the output of your tool will help guide progress. You can also scope the project, in terms of development effort and coverage, by carefully considering where in the development process you will deploy the automation. By spending time defining how the tool can best serve the team’s security goals, you can help ensure you are building a successful platform for the company.

The next blog will focus on the technical design considerations for building security automation tools.

Peleus Uhley
Principal Scientist, Security

Working with Girls Who Code

I was lucky to grow up with a support system of teachers and family who encouraged me to pursue a career in STEM. My father was an engineer and as a little girl, I wanted to be just like him. So when it came time to decide what my major in undergrad would be, I had no doubt about choosing computer engineering. When I moved to Seattle, I met many girls who did not share the same experiences as me. One told me her family just didn’t believe girls could do math, while another told me teachers were never supportive and told her that girls didn’t do well in math and science. This was just unacceptable to me. I believe that all children, regardless of their gender, race, and background should be encouraged to pursue any field they want.

Girls Who Code is a non-profit organization with a mission to create programs that will inspire, educate, and equip girls with the computing skills to pursue 21st century opportunities. Girls Who Code found that by 2020, there will be 1.4 million jobs available in tech fields and US graduates are on track to fill 29% of those jobs – but only 3% of these will be women. In the 1980s, 37% of computer science graduates were women, but today it’s only around 18%. I work in cybersecurity where the percentage of women in the field is around 11%. These are very disappointing statistics, and I wanted to help change the situation. So when my manager approached me to help teach the Girls Who Code class for Adobe in Seattle, I jumped at the opportunity.

Adobe has partnered with Girls Who Code for three years to host summer immersion programs. Apart from providing classroom space, program managers and mentors, this summer, Adobe was the only Girls Who Code partner company that provided its own instructors, with four full-time, female employees teaching the coding classes.

During the months of July and August, I taught 20 high school girls, ages 15-18, the basics of computer science skills including Scratch, Python, Arduino programming, and web development. The program also taught leadership skills like self-confidence, self-advocacy and public speaking. Other Adobe employees organized field trips, speakers, and workshops and helped the girls with projects. Several Adobe women volunteered one hour per week to provide career mentorship and conduct technical interview workshops for the girls.

The last two weeks of the program, the girls picked an idea for a final project and took it from inception to launch. They came up with BIG ideas they felt passionate about, from developing a safe-places app, to teaching children arts and music, to helping students be more productive. They used technologies they had never used before, including jQuery, the Google and Facebook APIs, and MongoDB, and they hosted everything on AWS. On graduation day, the girls presented their projects to their families, mentors, and various Adobe employees.

I’m proud to work at Adobe, a company that follows through on its values. In addition to all the time and resources, the Adobe Foundation gave each of the girls a laptop and a one-year Creative Cloud subscription to continue their tech journey. I am thankful to my team,  who supported me in this effort and picked up the slack while I was teaching. In my own little way, I hope I have encouraged more young women to pursue STEM fields, including careers at Adobe and our peers in the tech industry.

Aparna Rangarajan
Sr. Technical Program Manager – Security

Join Me at Privacy.Security.Risk 2016 in San Jose this Thursday

I will be speaking this Thursday, September 15th, from 12:15 – 1:15 p.m. at the Privacy.Security.Risk 2016 conference in San Jose, CA, sponsored by the International Association of Privacy Professionals (IAPP) and the Cloud Security Alliance (CSA). The topic will be “Achieving Container Security at Scale.” Containers are an exciting technology that show great promise in improving efficiency, scalability, and repeatability in cloud service development environments. However, as with any new technology, it also presents a unique set of security risks that must be addressed. As a company on the “bleeding edge” in use of this technology at scale, we believe we are in a unique position to help the security and compliance communities adopt the best security standards possible around this technology without sacrificing its benefits. My session will discuss our vision for use of container technology, the current security issues we have observed that require industry remedies to help us and our peers achieve necessary scale, and our own ideas for helping to address these issues both in the immediate and longer term. If you are attending the conference this week, I hope you’ll be able to join me.

Brad Arkin
Chief Security Officer (CSO)

Come for Developer Day @ Adobe San Jose on September 12th

On September 12th, SAFECode, the Cloud Security Alliance (CSA), and Adobe will host a Developer Day at Adobe’s San Jose headquarters. The agenda is packed with great content and experts from leading product security organizations. Please consider attending on Monday for the opportunity to learn and network with peers across the industry.

Topics of the day include:

  • Software Assurance: Putting Industry Best Practices into Action
    • Driving Software Assurance Knowledge among Software Professionals
    • Fundamental Practices for Software Assurance
    • Third Party Components and Secure Software
  • Cloud + Dev == Security.Awesome
    • The power of cloud developer tools
    • What is DevSecOps
    • Security by design
  • Panel: Putting Software Assurance Theory into Practice

Leading industry experts from SAFECode and CSA will discuss some of the latest case studies in software assurance and new frontiers of software security. The panelists will be fielding questions and sharing experiences on the advantages organizations are gaining when leveraging the latest innovative security approaches to the development lifecycle.

For the full agenda, speakers and to register for this free event, please click here: https://www.eventbank.com/event/777/.

Adobe Works with BYU Summer Security Camp for Girls


This summer, members of the Adobe security teams worked with Brigham Young University (BYU) on a free cybersecurity summer camp for girls in grades 8–12. The event is organized by the BYU Cybersecurity Research Lab, and Adobe helps with funding, curriculum development, and mentoring for the program. The camp included four days of hands-on cybersecurity workshops, classes, and experiences. The students learned about many topics designed to get them excited about pursuing cybersecurity as a career, including hacking, privacy, viruses, and how to stay safe online. At the core of the event was a space-themed “escape” challenge. This challenge required teams, working from a simulated spaceship command bridge, to solve common cybersecurity problems in order to avoid power failures, hostile alien encounters, and other pitfalls. It was a good combination of training from experts and fun experiential learning.

“All the research and our own experience has shown that this age range is a critical time for young women to develop an interest in cybersecurity,” says Dr. Dale Rowe, Director of the BYU Cybersecurity Research Lab. The camp was not only beneficial for the participants; Adobe employees serving as mentors also had a great time. CJ Cornel, student director of the camp, said, “The camp was a great way to help us share our passion for cybersecurity with some of the next generation in a safe environment.”

This camp is one of many activities Adobe sponsors to encourage girls and young women to enter the cybersecurity field including Women in Cybersecurity, Girls Who Code, Winja “Capture the Flag” (“CTF”) Competition, and r00tz @ BlackHat.


Chandler Newby
Information Security Engineer

Donald Porter
Sr. Manager, Security Engineering

Security Considerations for Container Orchestration

Orchestration platforms are enabling organizations to scale their applications at an unparalleled rate, which is why many technology-centric companies are rushing to move applications onto distributed, datacenter-wide platforms that let them scale at the click of a button.

One orchestration platform that is rapidly growing in popularity is the distributed, datacenter-wide operating system running on top of the Apache Mesos kernel. A traditional operating system manages resources for a single server, whereas a distributed operating system seamlessly manages resources for multiple servers acting as one shared pool. Once Mesos agents have been established on servers throughout a datacenter, a cluster is formed, and Mesos jobs can be distributed from a Mesos master to servers within the cluster that have available resources.

A risk associated with the Mesos master/agent model revolves around the authentication of Mesos services. Many Mesos deployments do not require authentication by default – if an attacker can communicate with either the Mesos master service (default TCP port 5050) or a Mesos agent service (default TCP port 5051), the attacker may be able to easily gain remote code execution (RCE) on another server within the cluster.
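
A rough sketch of how a defender (or attacker) might check for this, assuming hypothetical host names: query the master and agent /state endpoints and see whether they answer without credentials.

```python
import requests

# Hypothetical host names -- a sketch of a check for Mesos endpoints that
# answer without authentication (master on 5050, agents on 5051).
ENDPOINTS = [
    ("mesos-master.example.com", 5050),
    ("mesos-agent-01.example.com", 5051),
]

def responds_without_auth(host, port):
    """True if the Mesos /state endpoint answers an unauthenticated request."""
    try:
        response = requests.get(f"http://{host}:{port}/state", timeout=5)
    except requests.RequestException:
        return False
    return response.status_code == 200

if __name__ == "__main__":
    for host, port in ENDPOINTS:
        if responds_without_auth(host, port):
            print(f"WARNING: {host}:{port} serves cluster state without authentication")
```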

Frameworks are commonly installed within DC/OS environments to provide datacenter wide services. For example, the Marathon Framework is commonly used within these environments to perform container orchestration and help ensure that a specific number of instances of a container are running persistently within the DC/OS cluster. The Chronos Framework is also commonly deployed to provide fault tolerant distributed job scheduling throughout the cluster.

Today, many frameworks do not enforce authentication by default on either their management web UIs or their API interfaces. Therefore, if an attacker can communicate with the framework, they may be able to gain remote code execution (RCE) on servers within the cluster.

One hurdle associated with the exploitation of framework services is that many implementations deploy framework services to random high TCP ports on arbitrary servers within the cluster, making it slightly more difficult for an unaware attacker to find the services within the datacenter. This hurdle can be overcome by leveraging a few built-in services within the DC/OS environment. First, a unique top-level domain (TLD) is often established for the Mesos cluster, which can be used by services within the cluster to locate frameworks (e.g. ping -c3 marathon.mesos). Second, if the Mesos master service can be queried, a complete list of Mesos agents can be obtained, potentially reducing the search space. Lastly, a Mesos-DNS service is often established that enables a remote attacker to perform an enumerate API call. This is the functional equivalent of performing a DNS zone transfer – it provides a detailed map of all services and their random high ports back to the attacker.
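
A minimal sketch of the enumerate call, assuming a hypothetical Mesos-DNS address and the default REST port of 8123:

```python
import requests

# Hypothetical Mesos-DNS address -- a sketch of the enumerate call described
# above; by default Mesos-DNS serves its REST API on port 8123.
MESOS_DNS = "http://mesos-dns.example.com:8123"

def enumerate_services():
    """Return the full record set exposed by the Mesos-DNS enumerate API."""
    response = requests.get(f"{MESOS_DNS}/v1/enumerate", timeout=5)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    records = enumerate_services()
    # The response maps frameworks to their tasks and service records,
    # effectively handing the caller a map of the cluster.
    print(records)
```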

Defenders can easily take quite a few steps to help prevent the above exploitation paths, including:

  • Enabling authentication on all Mesos Masters and Agents
  • Enabling authentication on all Framework Web UIs and APIs services
  • Disabling the enumerate API call for the Mesos DNS service
  • Logging authentication requests and the execution of jobs for detection of suspicious events

As the popularity of orchestration platforms grows, attackers will continue to spend more resources building tools and techniques to exploit these frameworks and services. Organizations leveraging these technologies would be wise to spend the extra cycles up front to put reasonable security controls in place before using these platforms to host production applications.

Bryce Kunz
Sr. Lead Security Engineer, Digital Marketing

Tips for Sandboxing Docker Containers

In the world of virtualization, we know two approaches: virtual machines and containers. Both provide sandboxing: virtual machines provide it through hardware-level abstraction, while containers provide process-level isolation using a common kernel. Docker containers are secure by default, but do they provide complete isolation? Let us look at the various ways sandboxing can be achieved in containers and what we need to do to try to achieve complete isolation.

Namespaces

One of the building blocks of containers that provides the first level of sandboxing is namespaces. Namespaces give processes their own view of the system and limit the effect processes can have on other processes, whether inside the same container environment or on the host system. Today there are six namespaces available in Linux, and all of them are supported by Docker (a minimal sketch of the difference they make follows the list below).

  • PID namespace: Provides isolation such that a process belonging to a particular PID namespace can only see other processes in the same namespace. It makes sure that processes that belong to one PID namespace cannot know the existence of processes in other PID namespace and hence cannot inspect or kill them.
  • User namespace: Provides isolation such that a process belonging to a particular user namespace is given a view such that a user could be a root within that namespace, but on the host system, it is mapped as a non-privileged user. This provides a great security improvement in Docker environment.
  • Mount namespace: Provides isolation of the host filesystem from the new filesystem created for the process. This allows processes in different namespaces to change the mount points without affecting each other.
  • Network namespace: Provides isolation such that a process belonging to a particular network namespace gets its own network stack that includes routing tables, IP tables rules, sockets and interfaces. Additionally, we would require Ethernet bridges that allow networking between hosts and namespaces.
  • UTS namespace: Isolates two system identifiers – nodename and domainname. This allows each container to have its own hostname and NIS domain name, which is helpful during the initialization steps.
  • IPC namespace: Provides isolation of InterProcess communication resources that includes IPC message queues, semaphores etc.
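
As a small illustration of the difference namespaces make (referenced above), the sketch below assumes a local Docker daemon and the docker Python SDK, and compares the process list visible from a container’s own PID namespace with the list visible when the container shares the host’s:

```python
import docker

# A minimal sketch using the docker Python SDK: the same command sees very
# different process lists depending on whether the container keeps its own
# PID namespace (the default) or shares the host's.
client = docker.from_env()

own_namespace = client.containers.run("alpine", "ps aux", remove=True)
host_namespace = client.containers.run("alpine", "ps aux", pid_mode="host", remove=True)

print("Processes visible with an isolated PID namespace:")
print(own_namespace.decode())   # typically little more than the ps process itself

print("Processes visible when sharing the host PID namespace:")
print(host_namespace.decode())  # every process on the host
```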

Although namespaces provide a great level of isolation, there are resources that a container can access that are not namespaced. These resources are common to all the containers on the host machine, which raises security concerns and may present a risk of attack or information exposure. Resources that are not sandboxed include the following:

  • The kernel keyring: The kernel keyring separates keys by UID. Since users in different containers may share the same UID, all of those users are allowed access to the same keys in the keyring. Applications using the kernel keyring for handling secrets are therefore much less secure due to this lack of sandboxing.
  • The /proc filesystem and system time: Due to the “one size fits all” nature of Docker, a number of Linux capabilities remain enabled. With certain capabilities enabled, the exposure of /proc offers a source of information leakage and a large attack surface. /proc includes files that contain configuration information about the kernel and the host system’s resources. Capabilities such as SYS_TIME and SYS_ADMIN allow changes to the system time not just inside the container, but also for the host and other containers.
  • Kernel modules: If an application loads a kernel module, the newly added module becomes available across all the containers in the environment and on the host system. Some modules enforce security policies; access to such modules would allow applications to change those security policies, which again is a big concern.
  • Hardware: The underlying hardware of the host system is shared between all the containers running on the system. Proper cgroup configuration and access control are required for a fair distribution of resources. In other words, namespaces allow a larger area to be divided into smaller areas, and cgroups govern the proper usage of those areas for resources like memory, CPU, and disk I/O. A well-defined cgroup configuration helps prevent DoS attacks.

Capabilities

Capabilities are the units into which Linux splits the privileges traditionally reserved for the root user. Privileged operations are normally allowed only for root; an individual non-root process cannot perform any privileged operation. By splitting these privileges into capabilities, we can assign them to individual processes without elevating their overall privilege level. This way we can sandbox a container to a restricted set of actions, and if it is compromised, it can do less damage than it could with full root access. Be careful when using capabilities:

  • Defaults: As mentioned earlier, with the “one size fits all” nature of Docker, a number of capabilities remain enabled. This default set of capabilities given to a container does not provide complete isolation. A better approach is to remove all capabilities for the container and then add back only those capabilities required by the application process running in the container (see the sketch after this list). Finding the right set is a trial-and-error process using various test scenarios for the application running in the container.
  • SYS_ADMIN capability: Another issue is that capabilities are not fine-grained. The most talked-about example is the SYS_ADMIN capability, which bundles a lot of functionality, some of which should only be available to a truly privileged user. This is another reason for concern.
  • SETUID binaries: The setuid bit gives full root permission to a process using it. Many Linux distributions use the setuid bit on several binaries, despite the fact that capabilities can be a safer alternative to setuid, offering less attack surface in case there is a breakout from a non-privileged container. Defang setuid binaries by removing the setuid bit or by mounting filesystems with nosuid.
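
A minimal sketch of the drop-all-then-add-back approach mentioned in the list above, using the docker Python SDK; the capability set shown is illustrative, and the right set for a real application is found by testing:

```python
import docker

# A minimal sketch with the docker Python SDK: drop every capability, then add
# back only what the application needs. The additions below are illustrative;
# determine the minimal set for your own application by testing.
client = docker.from_env()

container = client.containers.run(
    "nginx:alpine",
    detach=True,
    cap_drop=["ALL"],                      # start from zero privileges
    cap_add=["NET_BIND_SERVICE",           # listen on port 80
             "CHOWN", "SETUID", "SETGID",  # nginx switches to a worker user
             "DAC_OVERRIDE"],              # access its own config/log files
    ports={"80/tcp": 8080},
)
print(container.id)
```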

Seccomp

Seccomp (secure computing mode) is a sandboxing feature in the Linux kernel. Seccomp provides a filtering mechanism for incoming system calls: a process declares a filter for the system calls it will make, and the kernel takes action when a call is not allowed by the filter. Thus, if an attacker gains access to the container, they have a limited number of system calls in their arsenal. The seccomp filter uses the Berkeley Packet Filter (BPF) system, similar to the one used for socket filters. In other words, seccomp allows a user to catch a syscall and “allow”, “deny”, “trap”, “kill”, or “trace” it based on the syscall number and the arguments passed. It adds a further layer of granularity for locking down the processes in one’s containers to only what is needed.

Docker provides a default seccomp profile for containers that is essentially a whitelist of allowed calls. This profile disables only around 44 system calls out of the 300+ available. This is because of the vast range of container use cases and current deployments; making it stricter would make many applications unusable in a Docker container environment. For example, the reboot system call is disabled, because there should never be a situation where a container needs to reboot the host machine.

Another good example is keyctl – a system call in which a vulnerability was recently found (CVE-2016-0728); keyctl is now also disabled by default. A more secure approach is to create a custom seccomp profile that blocks these 44 system calls as well as any other calls not required by the app running in the container. This can be done with the help of DockerSlim (http://dockersl.im), which auto-generates seccomp profiles.
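
A minimal sketch of applying such a custom profile with the docker Python SDK, assuming a profile file named custom-seccomp.json (for example, one generated by DockerSlim); unlike the CLI, the SDK is passed the profile contents rather than a file path:

```python
import docker

# A minimal sketch with the docker Python SDK, assuming a custom profile
# (e.g. one generated by DockerSlim) saved as custom-seccomp.json.
client = docker.from_env()

with open("custom-seccomp.json") as handle:
    profile = handle.read()  # the SDK expects the profile JSON itself

logs = client.containers.run(
    "alpine",
    "echo hello from a seccomp-restricted container",
    security_opt=[f"seccomp={profile}"],
    remove=True,
)
print(logs.decode())
```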

The good part about the seccomp feature is that it makes the attack surface very narrow. However, it still leaves around 250+ calls available, which keeps containers susceptible to attacks. For example, CVE-2014-3153 is a vulnerability that was found in the futex system call, which enables privilege escalation through a kernel exploit. This system call is still enabled, and disabling it is impractical since it has legitimate use in implementing basic resource locking for synchronization. Although the seccomp feature makes containers more secure than earlier versions of Docker, it only provides moderate security in the container environment. It needs to be hardened, especially for enterprises, in a way that remains compatible with the applications running in the containers.

Conclusion

Through the hardening methods for namespaces, cgroups and the use of seccomp profiles we are able to sandbox our containers to a great extent. By following various benchmarks and using least privileges we can make our container environment secure. However, this only scratches the surface and there are plenty of things to take care of.

Rahul Gajria
Cloud Security Researcher Intern

 

References

1. https://www.toptal.com/linux/separation-anxiety-isolating-your-system-with-linux-namespaces
2. http://www.slideshare.net/jpetazzo/anatomy-of-a-container-namespaces-cgroups-some-filesystem-magic-linuxcon
3. https://www.oreilly.com/ideas/docker-security
4. https://www.nccgroup.trust/globalassets/ourresearch/us/whitepapers/2016/april/ncc_group_understanding_hardening_linux_containers-10pdf/
5. https://docs.docker.com/engine/security/security/