Posts in Category "Security"

Boldly Leading the Possibilities in Cybersecurity, Risk, and Privacy

During the last week in October, five members of the Adobe Security team and I attended the Executive Women’s Forum (EWF) National Conference as first-time attendees.  Over 400 people were in attendance at the fourteenth annual conference.  For the first time, three separate tracks were offered, all focused on the primary topic of “Balancing Risk and Opportunity, by Transforming Cybersecurity, Risk, and Privacy beyond the Enterprise.”

The Executive Women’s Forum has emerged as a leading member organization using education, leadership development, and trusted relationships to attract, develop, and advance women in the Information Security, IT Risk Management, Privacy, Governance, Compliance, and Risk Assurance industries.  Additionally, EWF membership offers virtual access to peers and thought leadership globally, networking opportunities both locally and at industry conferences, and advancement education and opportunities via EWF’s leadership program and its peer and mentoring program.  EWF also provides learning opportunities through its national conference, regional meetings, and webinar series.

At this year’s national EWF conference, several of the presentations and sessions stood out, namely:

  • Several keynote speakers wowed the crowd with stories of industry challenges, personal hardships, and their rise through the ranks. Speakers of interest included:
    • Susan Keating (President and Chief Executive Officer at National Foundation for Credit Counseling). Keating recounted her personal story of managing through what was, in 2001, the largest banking fraud in US history, and the lessons learned from the experience.  Her advice focused on being resilient; prioritizing prevention and recovery preparation; remembering that communication is imperative and that one must be tireless in connecting at all points (employees, customers, partners, etc.); refusing to tolerate bullying and intimidating behavior; and keeping a culture healthy by always remembering the human component – if you’re not staying connected to people, you could miss something.
    • Meg McCarthy (Executive Vice President of Operations and Technology at Aetna). Interviewed by Joyce Brocaglia, McCarthy spoke of her career journey to the executive suite, the challenges she faced along the way, and what it takes to thrive as a leader at the top.  Three pieces of her advice stood out: Talk the talk – get communication coaching, get into executive strategy meetings, and identify and study role models.  Say yes – be careful about declining; McCarthy always took the opportunities offered to her.  The proof is in the pudding – build a track record, visualize your goals, and always look the part.
    • Valerie Plame (Former Operations Officer at the US Central Intelligence Agency). Plame told her story as an undercover operations officer for the CIA who served her country by gathering intelligence on weapons of mass destruction.  When her husband, Joe Wilson, spoke out about the falsehoods levied publicly to justify the Iraq War, the administration retaliated by revealing Plame’s position in the CIA, ruining her career and reputation and exposing her to domestic and foreign enemies.  She encouraged all to hold people, governments, and organizations accountable for their words and actions.
    • Nina Burleigh (Author and National Correspondent at Newsweek Magazine). Burleigh explained how the issue of women’s equality is a challenge everyone wants to address and is approaching a tipping point.  She foresees 2017 being the year of women, with topics such as female political representation, family and maternal leave, and women’s health care at the forefront, especially in the US.
  • Additionally, there were several breakout talks that bear mention:
    • The pre-conference workshop on Conversational Intelligence facilitated by Linda Dolceamore of EWF focused on the chemical reactions in our brains in response to different types of communication. The workshop taught us how to activate the prefrontal cortex for high-level thinking, as well as how to evaluate whether our conversations are transactional, positional, or transformational.  Proper application of this information should enable a person to build better relationships, which in turn evoke higher levels of trust and collaboration.
    • A panel session where five C-level executives talked about what they see next and what keeps them up at night. Takeaways included:
      • Trust is the currency of the future.
      • The digital vortex is upon us and only smart digitization will see us through.
      • Stay true to yourself. Stay curious.  Ask why.
    • The presentation regarding EWF’s initiative for Voice Privacy. As products utilizing voice interaction proliferate, it is imperative that we consider the security and privacy aspects of our voices and provide the industry with appropriate guidance for voice-enabled technology.
    • Yolanda Smith’s presentation on The New Device Threat Landscape.  Client-side attacks generally originate off the corporate network.  Smith demonstrated a karma attack using a Hak5 Pineapple Nano as the rogue access point (complete with a phony landing page) and the Social Engineering Toolkit to generate a payload for a reverse TCP shell.  To mitigate the threat of these sorts of attacks, remove saved network probes from your devices and refrain from connecting devices to unknown networks.

EWF’s goal of extending the influence and strength of women’s voices in the industry aligns well with Adobe’s mission to be an industry leader in creating an environment that supports the growth and development of global women leaders.  It is therefore exciting for Adobe to partner with the Executive Women’s Forum organization.  If EWF’s national conference is a taste of their yearly impact, it will be compelling to participate in the additional year-round initiatives, events, and opportunities available through EWF membership. We look forward to connecting with colleagues and friends at more events going forward.

Security Automation Part II: Defining Requirements

Every security engineer wants to build the big security automation framework for the challenge of designing something complex. But big projects come with their own set of challenges. Like any good coding project, you want to have a plan before setting out on the adventure.

In the last blog, we dealt with some of the high-level business concerns to consider in order to design a project that produces the right results for the organization. In this blog, we will look at the high-level design considerations from the software architect’s perspective. In the next blog, we will look at the implementer’s concerns. For now, most architects are concerned with the following:

Maintainability

This is a concern for both the implementer and architect, but they often have different perspectives. If you are designing a tool that the organization is going to use as a foundation of its security program, then you need to design the tool such that the team can maintain it over time.

Maintainability through project scope

There are already automation and scalability projects that are deployed by the development team. These may include tools such as Jenkins, Git, Chef, or Maven. All of these frameworks are extensible. If all you want to do is run code with each build, then you might consider integrating into these existing frameworks rather than building your own automation. They will handle things such as logging, alerting, scheduling, and interacting with the target environment. Your team just has to write code to tell them what you want done with each build.
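
As a deliberately minimal illustration of that approach, the sketch below shows a per-build check written as a plain Python script that an existing Jenkins job could invoke as a build step, letting the CI framework supply scheduling, logging, and alerting. The `safety` dependency checker is just one example of an off-the-shelf scanner; any tool with machine-readable output and a meaningful exit code would slot in the same way.

```python
#!/usr/bin/env python3
"""Illustrative per-build dependency audit, run as a CI build step."""
import json
import subprocess
import sys

def audit_dependencies() -> int:
    # Invoke an off-the-shelf checker; here, the 'safety' CLI in JSON mode.
    result = subprocess.run(
        ["safety", "check", "--json"], capture_output=True, text=True
    )
    if result.returncode == 0:
        print("No known-vulnerable dependencies found.")
        return 0
    for finding in json.loads(result.stdout or "[]"):
        print(f"Vulnerable dependency: {finding}")
    # A nonzero exit fails the build; Jenkins handles the alerting.
    return 1

if __name__ == "__main__":
    sys.exit(audit_dependencies())
```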

If you are attempting a larger project, do you have a roadmap of smaller deliverables to validate the design as you progress? The roadmap should prioritize the key elements of success for the project in order to get an early sense of whether you are heading in the right direction with your design. In addition, while it is important to define everything that your project will do, it is also important to define all the things that your tool will not do. Think ahead to all of the potential tangential use cases that management and customers could ask your framework to handle. By establishing what is out of scope for your project, you can set proper expectations earlier in the process, and those restrictions will become guardrails to keep you on track when requests for tangential features come in.

Maintainability through function delegation

Can you leverage third-party services for operational issues?  Can you use the cloud so that baseline network and machine uptime is maintained by someone else? Can you leverage tools such as Splunk so that log management is handled by someone else? What third-party libraries already exist so that you are only inventing the wheels that need to be specific to your organization? For instance, tools like RabbitMQ are sufficient to handle most queueing needs.  The more of the “busy work” that can be delegated to third-party services or code, the more time that the internal developers can spend on perfecting the framework’s core mission.
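
To make the delegation idea concrete, here is a minimal sketch of handing queueing off to RabbitMQ via the pika client rather than building a scheduler in-house. The queue name and message fields are placeholders, not part of any real deployment.

```python
import json
import pika  # RabbitMQ client: pip install pika

# Enqueue a scan job; RabbitMQ handles durability, delivery, and retries.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="scan-jobs", durable=True)

job = {"target": "https://staging.example.com", "tool": "header-check"}
channel.basic_publish(
    exchange="",
    routing_key="scan-jobs",
    body=json.dumps(job),
    properties=pika.BasicProperties(delivery_mode=2),  # persist across restarts
)
connection.close()
```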

Deployment

It is important to know where your large-scale security framework may be deployed. Do you need to scan staging environments that are located on an internal network in order to verify security features before shipping? Do you need to scan production systems on an external network to verify proper deployment? Do you need to scan the production instances from outside the corporate network because internal security controls would interfere with the scan? Do you want to have multiple scanning nodes on both the internal and external networks? Should you decouple the job runner from the scanning nodes so that the job runner can be on the internal network even if the scanning node is external?  Do you want to allow teams to deploy their own instances so that they can run tests themselves? For instance, it may be faster for an India-based team to conduct the scan locally than to run the scan from US-based hosts. In addition, geographical load balancers will direct traffic to the nearest hosts, which may cause scanning blind spots. Do you care if the scanners get deployed to multiple geographic locations so long as they all report back to the same database?

Tool selection

It is important to spend time thinking about the tools that you will want your large security automation framework to run because security testing tools change. You do not want your massive project to die just because the tool it was initially built to execute falls out of fashion and is no longer maintained. If there is a loose coupling between the scanning tool and the framework that runs it, then you will be able to run alternative tools once the ROI on the initial scanning tool diminishes. If you are not doing a large scale framework and are instead just modifying existing automation frameworks, the same principles will apply even if they are at a smaller scale.
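
One way to keep that coupling loose is to have the framework code against a narrow scanner interface and wrap each tool in a small adapter. The sketch below is illustrative; the class and tool names are hypothetical.

```python
import subprocess
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List

@dataclass
class Finding:
    target: str
    issue: str
    severity: str

class Scanner(ABC):
    """The only surface the framework knows about."""
    @abstractmethod
    def scan(self, target: str) -> List[Finding]: ...

class CliToolScanner(Scanner):
    """Adapter wrapping a hypothetical command-line scanner."""
    def scan(self, target: str) -> List[Finding]:
        output = subprocess.run(
            ["some-scanner", target], capture_output=True, text=True
        ).stdout
        # Tool-specific parsing lives here, inside the adapter.
        return [Finding(target, line, "medium") for line in output.splitlines()]

def run_job(scanner: Scanner, target: str) -> List[Finding]:
    # Swapping tools means writing a new adapter, not a new framework.
    return scanner.scan(target)
```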

Tool dependencies

While the robustness of test results is an important criterion for tool selection, complex tools often have complex dependencies. Some tools only require the targeted URL, while others need complex configuration files.  Do you just need to run a few tools, or do you want to spend the time to make your framework security-tool agnostic? Can you use a Docker image for each tool in order to avoid dependency collisions between security assessment tools? When the testing tool conducts the attack on the remote host, does the attack presume that code injected into the remote host’s environment can send a message back to the testing tool?  If you are building a scalable scanning system with dynamically allocated, short-lived hosts that live behind a NAT server, then it may be tricky for the remote attack code to send a message back to the original security assessment tool.
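
A sketch of the Docker-image-per-tool idea, using the Docker SDK for Python (docker-py); the image names are hypothetical. Each tool’s dependencies are baked into its own image, so two tools with conflicting library versions never collide.

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

def run_tool(image: str, target: str) -> str:
    """Run one scanning tool in its own throwaway container."""
    output = client.containers.run(
        image,            # e.g. the hypothetical "scanners/sslscan:latest"
        command=[target],
        remove=True,      # discard the container after the run
    )
    return output.decode("utf-8")

# report = run_tool("scanners/sslscan:latest", "staging.example.com")
```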

Inputs and outputs

Do the tools require a complex, custom configuration file per target or do you just need to provide the hostname? If you want to scale across a large number of sites, tools that require complex, per-site configuration files may slow the speed at which you can scale and require more maintenance over time. Does the tool provide a single clear response that is easy to record or does it provide detailed, nuanced responses that require intelligent parsing? Complex results with many different findings may make it more difficult to add alerting around specific issues to the tool. They could also make metrics more challenging depending on what and how you measure.
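
When a tool does emit nuanced output, one option is to normalize it into a common finding schema plus a single pass/fail verdict, so alerting and metrics can key off one clear response per target. A minimal sketch, assuming a simplified JSON report shape:

```python
import json
from typing import List, Tuple

# Assumed, simplified report shape; each real tool needs its own parser.
RAW = '{"alerts": [{"name": "Missing HSTS header", "risk": "Low"}]}'

def normalize(raw: str) -> Tuple[bool, List[dict]]:
    """Reduce a nuanced report to (passed, findings)."""
    alerts = json.loads(raw).get("alerts", [])
    findings = [
        {"issue": a["name"], "severity": a.get("risk", "Unknown")}
        for a in alerts
    ]
    return (len(findings) == 0, findings)

passed, findings = normalize(RAW)
```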

Tool scalability

How many instances of the security assessment tool can be run on a single host? Tools that listen on ports, for instance, limit the number of instances per server, so you may need Docker or a similar solution to scale them. Complex tools take longer to run, which may cause issues with detecting timeouts. How will the tool handle re-tests for identified issues? Does the tool have enough granularity that a dev team can test their proposed patch against the specific issue, or does the entire test suite need to be re-run every time the developer wants to verify their patch?
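
For the timeout problem in particular, wrapping each tool invocation with an explicit limit keeps a hung scanner from stalling the whole queue. A minimal sketch; the command and limit are illustrative:

```python
import subprocess
from typing import List, Optional

def run_with_timeout(cmd: List[str], timeout_s: int = 900) -> Optional[str]:
    """Return the tool's output, or None if the scan should be retried."""
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout_s)
        return proc.stdout
    except subprocess.TimeoutExpired:
        # Record the timeout as its own result so the job can be rescheduled.
        return None
```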

Focus and roll out

If you are tackling a large project, it is important to understand what the minimum viable product is. What is the one thing that makes this tool different from just buying the enterprise version of the equivalent commercial tool? Could the entire project be replaced with a few Python scripts and crontab? If you can’t articulate what extra value your approach will bring over the alternative commercial or crontab approach, then the project will not succeed. The people who would leverage the platform may get impatient waiting for your development to be done. They could instead opt for a quicker solution, such as buying a service, so that they can move on to the next problem.  As you design your project, always ask yourself, “Why not cron?” This will help you focus on the key elements of the project that will bring unique value to the organization. Your roadmap should focus on delivering those first.

Team adoption

Just because you are building a tool to empower the security team doesn’t mean that your software won’t have other customers. This tool will need to interact with the development teams’ environments, and it will produce outputs that eventually need to be processed by those teams. The development teams should not be an afterthought in your design. You will be holding them accountable for the results, so they need methods for understanding the context of what your team has found and for independently retesting.

For instance, one argument for integrating into something like Jenkins or Git is that you are using a tool the development team already understands. When you try to explain how your project will affect their environment, using a tool that they know means the discussion will be in language they understand. They will still have concerns that your code might have negative impacts on their environment. However, they may have more faith in the project if they can mentally quantify the risk based on known systems. When you create standalone frameworks, it is harder for them to understand the scale of the risk because it is completely foreign to them.

At Adobe, we have been able to work directly with the development teams when building security automation. In a previous blog, an Adobe developer described the tools that he built as part of his pursuit of an internal black belt security training certification. There are several advantages to having the security champions on development teams, rather than the core security team, build these tools. Full-time developers are often better coders than the security team, and they better understand the framework integration. Also, in the event of an issue with the tool, the development team has the knowledge to take emergency action. Often, a security team just needs the tool to meet specific requirements, and the implementation and operational management of the tool can be handled by the team responsible for the environment. This can make the development team more at ease with having the tool in their environment, and it frees up the core security team to focus on larger issues.

Conclusion

While jumping right into the challenges of the implementation is always tempting, thinking through the complete data flow for the proposed tools can save you a lot of rewriting. It is also important to avoid trying to boil the ocean by scoping more than your team can manage. Most importantly, always keep focus on the unique value of your approach and on the customers who need to buy into the tool once it is launched. The next blog will focus on an implementer’s concerns around platform selection, queuing, scheduling, and scaling by looking at example implementations.

Security Automation for PCI Certification of the Adobe Shared Cloud

Software engineering is a unique and exciting profession. Engineers must employ continuous learning habits in order to keep up with a constantly morphing software ecosystem. This is especially true in the software security space.  The continuous introduction of new software also means new security vulnerabilities are introduced. The problem at its core is actually quite simple. It’s a human problem.  Engineers are people, and, like all people, they sometimes make mistakes.  These mistakes can manifest themselves in the form of ‘bugs’ and usually occur when the software is used in a way the engineer didn’t expect. When these bugs are left uncorrected, they can leave the software vulnerable. Some mistakes are bigger than others and many are preventable. However, as they say, hindsight is always 20/20.  You need not experience these mistakes yourself to learn from them. As my father often told me, a smart person learns from his mistakes, but a wise person learns from the mistakes of others. And so it goes with software security. In today’s software world, it’s not enough to just be smart; you also need to be wise.

After working at Adobe for just shy of five years, I have achieved the coveted rank of ‘Black Belt’ in Adobe’s security program through the development of some internal security tools and by assisting in the recent PCI certification of the Shared Cloud (the internal platform upon which Creative Cloud and Document Cloud are based).  Through Adobe’s security program, my understanding of security has certainly broadened.  I earned my white belt within just a few months of joining Adobe; it consisted of some online courses covering very basic security best practices. When Adobe created the role of “Security Champion” within every product team, I volunteered. Seeking to make myself a valuable resource to my team, I quickly earned my green belt, which consisted of completing several advanced online courses covering a range of security topics from SQL injection and XSS to buffer overflows. I now had two belts down, only two to go.

At the beginning of 2015, the Adobe Shared Cloud team started down the path of PCI compliance.  When it became clear that a dedicated team would be needed to manage this, a few others and I made a career shift from software engineer to security engineer in order to form a dedicated security team for the Shared Cloud.  To bring ourselves up to speed, we began attending BlackHat and OWASP conferences to further our expertise. We also started the long, arduous task of breaking down the PCI requirements into concrete engineering tasks.  It was out of this PCI effort that I developed three tools – one of which would earn me my brown belt, and the other two my black belt.

The first tool came from the PCI requirement that you track all third-party software libraries for vulnerabilities and remediate them based on severity. Working closely with the ASSET team, we developed an API that allows product dependencies and versions to be pushed to it as applications are built.   Once that was completed, I wrote an integrated and highly configurable Maven plugin that consumes the API at build time, thereby helping to keep applications up to date automatically as part of our continuous delivery system. After completing this tool, I submitted it as a project and was rewarded with my brown belt. My plugin has since been adopted by several teams across Adobe.

The second tool also came from a PCI requirement: no changes are allowed on production servers without a review step, including code changes. At first glance this doesn’t seem so bad – after all, we were already doing regular code reviews. So it shouldn’t be a big deal, right? WRONG! The burden of PCI is that you have to prove that changes were reviewed and that no change could reach production without first being reviewed.  There were a number of manual approaches one could take to meet this requirement, but who wants the hassle and overhead of a manual process? Enter my first black belt project – the Git SDLC Enforcer Plugin.  I developed a Jenkins plugin that runs when a merge lands on a project’s release branch.  The plugin reviews the commit history and helps ensure that every commit belongs to a pull request and that each pull request was merged by someone other than its author.  If any offending commits or pull requests are found, the build fails and an issue is opened on the project in its GitHub space.  This turned out to be a huge time saver and a very effective mechanism for ensuring every change to the code is reviewed.
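
The actual plugin is a Jenkins (Java) plugin, but the two checks it performs can be sketched against the GitHub REST API in a few lines of Python. The repository and token below are placeholders:

```python
import requests

API = "https://api.github.com"
OWNER, REPO = "example-org", "example-repo"       # hypothetical repository
HEADERS = {"Authorization": "token ghp_example"}  # placeholder token

def commit_is_reviewed(sha: str) -> bool:
    """A commit passes if it belongs to a pull request that was merged
    by someone other than the pull request's author."""
    pulls = requests.get(
        f"{API}/repos/{OWNER}/{REPO}/commits/{sha}/pulls", headers=HEADERS
    ).json()
    if not pulls:
        return False  # commit was pushed straight to the branch
    for pr in pulls:
        detail = requests.get(
            f"{API}/repos/{OWNER}/{REPO}/pulls/{pr['number']}", headers=HEADERS
        ).json()
        merged_by = (detail.get("merged_by") or {}).get("login")
        if detail.get("merged") and merged_by and merged_by != detail["user"]["login"]:
            return True
    return False
```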

The project that finally earned me my black belt, however, was the development of a tool that will eventually fully replace the Adobe Shared Cloud’s secret injection mechanism. When working with Amazon Web Services, you have a bit of a chicken-and-egg problem when you begin to automate deployments: at some point, you need an automated way to get the right credentials into the EC2 instances that your application needs to run. Currently, the Shared Cloud’s secrets management leverages a combination of custom-baked AMIs, IAM roles, S3, and encrypted data bags stored in the “Hosted Chef” service. For many reasons, we wanted to move away from Chef’s managed solution and add some additional layers of security, such as the ability to rotate encryption keys, logging of access to secrets, the ability to restrict access to secrets based on environment and role, and auditability. The new tool was designed as a drop-in replacement for “Hosted Chef” – this made it easier to implement in our baking tool chain – and it replaces the data bag functionality provided by the previous tool while adding the security functionality above.  The tool works splendidly, and by the end of the year the Shared Cloud will be using it exclusively, resulting in a much more secure, efficient, reliable, and cost-effective mechanism for injecting secrets.

My takeaway from all this is that Adobe has developed a top-notch security training program. Even though I have learned a ton about software security through it, I still have much to learn. I look forward to continuing to make a difference at Adobe.

Jed Glazner
Security Engineer

Do You Know How to Recognize Phishing?


By now, most of us know that the email from the Nigerian prince offering us large sums of money in return for our help to get the money out of Nigeria is a scam. We also recognize that the same goes for the email from our bank that is laden with spelling errors. However, phishing attacks have become more sophisticated over the years, and for the most part, it has become much harder to tell the difference between a legitimate piece of communication and a scam.

In recognition of National Cyber Security Awareness Month, we asked a nationally representative sample of ~2,000 computer-owning adults in the United States about their behaviors and knowledge when it comes to cybersecurity. This week, we’ll share some of the insights from our survey related to phishing—as well as resources and tips on how you can better protect yourself from falling victim to phishing attacks.

What is phishing?

Phishing is a form of fraud in which the attacker tries to learn information such as login credentials or account information by masquerading as a legitimate, reputable entity or person in email, instant messages (IMs), or other communication channels. Examples would be an email from a bank that is carefully designed to look like a legitimate message but comes from a criminally motivated source, a phone message claiming to be from the Internal Revenue Service (IRS) threatening large fines unless you immediately pay what you supposedly owe, or the email from the Nigerian prince pleading for your compassion and promising a large reward. Attackers typically create these communications in an effort to steal money, personal information, or both. Phishing emails or IMs are typically designed to make you click on links or open attachments that look authentic but are really just there to distribute malware on your machine or to capture your credit card number in a form on the attacker’s site.

So do YOU know how to recognize phishing?

For the purpose of this blog post, we’ll focus on phishing emails as the attacker’s choice of communication. According to our survey, 70 percent of adults in the United States believe they can identify a phishing email. That percentage rises to 80 percent among Millennials.[i] Yet nearly four (4) in 10 people believe they have been victims of phishing. This goes to show that it’s not as easy to detect phishing emails as it may sound! Here are six tips to help you identify whether you’ve received a “phishy” email:

1. The email urges you to take immediate action

Phishing emails often try to trick you into clicking a link by claiming that your account has been closed or put on hold, or that there’s been fraudulent activity requiring your immediate attention. Of course, it’s possible you may receive a legitimate message informing you to take action on your account. To be safe, don’t click on the link in the email, no matter how authentic it appears to be. Instead, log into the account in question directly by visiting the appropriate website, then check your account status.

2. You don’t recognize the email sender

Another common way to identify a phishing email is if you don’t recognize the email sender. Two-thirds of the individuals we surveyed who believe they can identify a phishing email said a top indicator is whether or not they recognize the sender. However, our survey results also show that despite the warning signs, more than four (4) in 10 U.S. adults will still open the email—and among those, nearly half would click on a link inside—potentially putting themselves at risk.

3. The hyperlinked URL is different from the one shown

The hyperlink text in a phishing email may include the name of a legitimate bank. But when you hover the mouse over the link (without clicking on it), you may discover in a small pop-up window that the actual URL differs from the one displayed and that it doesn’t contain the bank’s name. Similarly, you can hover your mouse over the address in the “From” field to see if the website domain matches that of the organization the email is supposed to have been sent from.

4. The email in question has improper spelling or grammar

This is one of the most common signs that an email isn’t legitimate. Sometimes, the mistake is easy to spot, such as “Dear Costumer” instead of “Dear Customer.”

Other mistakes might be more difficult to spot, so make sure to look at the email in closer detail. For example, the subject line or the email itself might say “Health coverage for the unemployeed.” The word “unemployed” isn’t exactly difficult to spell, and any legitimate organization should have editors who review marketing emails carefully before sending them out. So when in doubt, check the email closely for misspellings and improper grammar.

5. The email requests personal information

Reputable organizations don’t ask for personal customer information via email. For example, if you have a checking account, your bank already knows your account number.

6. The email includes suspicious attachments

It would be highly unusual for a legitimate organization to send you an email with an attachment, unless it’s a document you’ve requested or are expecting. As always, if you receive an email that looks in any way suspicious, never click to download the attachment, as it could be malware.

What to do when you think you’ve received a phishing email

Report potential phishing scams. If you think you’ve received a phishing email from someone posing as Adobe, please forward that email to phishing@adobe.com, so we can investigate.

Google also offers online help for reporting phishing websites and phishing attacks. And last but not least, the U.S. government offers valuable tips for protecting yourself from phishing scams as well as an email address for reporting scams: phishing-report@us-cert.gov.

So while the Nigerian prince has become a lot more sophisticated in his tactics, there is a lot you can do to help protect yourself. Most importantly, trust your instincts. If it smells like a scam, it might very well be a scam!


[i] Millennials are considered individuals who reached adulthood around the turn of the 21st century. If you are in your mid-30s today, you are considered a Millennial.

Security Automation Part I: Defining Goals

This is the first of a multi-part series on security automation. This blog will focus on high-level design considerations. The next blog will focus on technical design considerations for building security automation. The third blog will dive even deeper with specific examples as the series continues to get more technical.

There are many possible approaches for adding automation to your security process. For many security engineers, it is an opportunity to get away from reviewing other engineers’ code and write some of their own. One key difference between a successful automation project and “abandonware” is creating a project that produces meaningful results for the organization. In order to accomplish that, it is critical to have a clear idea of what problem you are trying to solve at the outset of the project.

Conceptual aspirations

When choosing the right path for designing security automation, you need to decide what will be the primary goals for the automation. Most automation projects can be grouped into common themes:

Scalability

Scalability is something that most engineers instinctively go to first because the cloud has empowered researchers to do so much more. Security tools designed for scalability often focus on the penetration testing phase of the software development lifecycle. They often involve running black or grey box security tests against the infrastructure in order to confirm that the service was deployed correctly. Scalability is necessary if you are going to measure every host in a large production environment. While scalability is definitely a powerful tool that is necessary in the testing phase of your security development lifecycle, it sometimes overshadows other important options that can occur earlier in the development process and that may be less expensive in terms of resources.

Consistency

There is a lot of value in being able to say that “Every single release has ___.” Consistency isn’t considered as challenging a problem as scalability because it is often taken for granted. Consistency is necessary for compliance requirements, where public attestations need to make clear, simple statements that customers and auditors can understand. In addition, “special snowflakes” in the environment can drown an organization’s response when issues arise. Consistency automation projects frequently focus on the development or build phase of the software development lifecycle. They include integrating security tasks into build tools like Jenkins, Chef, Git, or Maven. By adding security controls into these central tools in the development phase, you can have reasonable confidence that machines in production are correctly configured without scanning each and every one individually.

Efficiency

Efficiency projects typically focus on improving operational processes that currently involve too much human interaction.  The goal is to refocus the team’s manual resources on more important tasks. For instance, many efficiency projects have the word “tracking” somewhere in their definition and involve better leveraging tools like JIRA or SharePoint. Efficiency automation is often purchased rather than built, because you aren’t particularly concerned with how the problem gets solved, so long as it gets solved and you aren’t the one who has to maintain the code for it. That said, Salesforce’s open-source VulnReport.io project (http://vulnreport.io) is an example of a custom-built efficiency tool which they claim improved operational efficiency and essentially resulted “in a ‘free’ extra engineer for our team.”

Metrics

Metrics gathering can be a project in itself or it can be the byproduct of a security automation tool. Metrics help inform and influence management decisions with concrete data. That said, it is important to pick metrics that can guide management and development teams towards solving critical issues. For instance, development teams will interpret the metrics gathered as the key performance indicator (KPI) that they are being measured against by management.

In addition, collecting data almost always leads to requests for more detailed data. This can be useful in helping to understand a complex problem or it can be a distraction that leads the team down a rabbit hole. If you take time to select the proper metrics, then you can help keep the team focused on digging deeper into the critical issues.

Operational Challenges

If your scalable automation project aims to run a web application penetration tool (WAPT) across your entire enterprise, then you are basically creating an “enterprise edition” of that tool. If you have used enterprise-edition WAPTs in the past and did not achieve the success you wanted, then recreating the same concept with a slightly different tool will most likely not produce significantly different results within the enterprise. The success or failure of a tool typically hinges on the operational process surrounding the tool more than on the tool itself. If there is no plan for handling the output from the tool, then increasing the scale at which the tool is run doesn’t really matter. When you are designing your automation project, consider operational questions such as:

Are you enumerating a problem that you can fix?

Enumerating an issue that the organization doesn’t have the resources to address can sometimes help justify getting the funding for solving the problem. On the other hand, if you are enumerating a problem that isn’t a priority for an organization, then perhaps you should refocus the automation on issues that are more critical. If no change occurs as the result of the effort, then the project will stop iterating because there is no need to re-measure the issue.

In some situations, it may be better to tackle the technical debt of basic issues before tackling larger issues. Tests for basic technical debt issues are often easier to create and they are easier for the dev team to address. As the dev team addresses the issues, the project will also iterate in response. While technical debt isn’t as exciting as the larger issues, addressing it may be a reasonable first step towards getting immediate ROI.

Are you producing “noise at scale”?

Running a tool that is known for creating a high level of false positives at scale will produce “noise at scale”. Unless you have an “at scale” triage team to eliminate the false positives, you are just generating more work for everyone. Teams are less likely to take action on metrics that they believe are debatable, for fear that their time might be wasted. A good security tool will empower the team to be more efficient rather than drown them in reports.

How will metrics guide the development team?

As mentioned earlier, the metric will be interpreted as a KPI for the team and they will focus their strategy around what is reported to management. Therefore, it makes sense not to bother measuring non-critical issues since you don’t want the team to get distracted by minor issues. You will want to make sure that you are collecting metrics on the top issues in a way that will encourage teams towards the desired approach.

Often times there are multiple ways to solve an issue and therefore multiples ways to measure the problem. Let’s assume that you wanted to create a project to tackle cross-site scripting (XSS). Creating metrics that count the number of XSS bugs will focus a development team on a bug fixing approach to the problem. Alternatively, counting the number of sites with content security policy headers deployed will focus the development team on security mitigations for XSS. In some cases, focusing the team on security mitigations has more immediate value than focusing on bug fixing.

What metrics does management need to see?

One method to determine how your metrics will drive development teams and inform management is to phrase them as assertions. For instance, “I assert that the HSTS header is returned by all our sites in order to ensure our data is always encrypted.”  Phrasing it as an assertion frames the test in simple true/false terms that can be reliably measured at scale. It also frames the test in terms of its desired outcome. This makes it easier to determine whether the goal implied by the metric’s assertion meets management’s goals.
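
Such an assertion translates almost directly into code. A minimal sketch using the requests library, with a placeholder site inventory:

```python
import requests

SITES = ["https://www.example.com"]  # placeholder site inventory

def assert_hsts(url: str) -> bool:
    """True/false test of 'the HSTS header is returned by this site'."""
    response = requests.get(url, timeout=10)
    return "Strict-Transport-Security" in response.headers

results = {url: assert_hsts(url) for url in SITES}
failures = [url for url, passed in results.items() if not passed]
```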

From a management perspective, it is also important to understand whether yours is a measurement of change or a measurement of adoption. Measuring the number of bugs in an application is often measuring an ongoing wave. If security programs are working, the height of the wave will trend down over time. However, that means you have to watch the wave through multiple software releases before you can reliably see a trend in its change. If you measure security mitigations, then you are measuring the adoption rate of a project that will end with a state of relative “completeness.” Tracking wave metrics over time is valuable because you need to see when things are getting better or worse. However, since it is easy to procrastinate on issues that are open-ended, adoption-style projects that can be expressed as having a definitive “end” may get more immediate traction from the development team, because you can set a deadline that needs to be met.

Putting it all together

With these ideas and questions in mind, you can mentally weigh which types of projects to start with for immediate ROI and the different tools for deploying them.

For instance, tests that count XSS and blind SQL injection bugs are hard to set up (authentication to the application, crawling the site, etc.), frequently produce false positives, and typically focus the team on bug fixing, which requires in-depth monitoring over time because it is a wave metric. In contrast, a security project measuring security headers, such as X-Frame-Options or HSTS, involves tests that are simple to write and have low false-positive rates; it can be declared “(mostly) done” once the headers are set, and it focuses the team on mitigations. Another easy project might be writing scalable tests that confirm the SSL configuration meets the company standards. Therefore, if you are working on a scalability project, starting with simple SSL or security-header projects can deliver quick wins that demonstrate the value of the tool. From there, you can progress to measuring the more complex issues.
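
The SSL test is similarly small. A sketch using Python’s standard ssl module; the assumed company standard here (TLS 1.2 or better with a verifiable certificate) is illustrative:

```python
import socket
import ssl

def tls_meets_standard(host: str, port: int = 443) -> bool:
    """Check that the server negotiates TLS 1.2+ with a verifiable cert."""
    context = ssl.create_default_context()  # verifies certificates by default
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version() in ("TLSv1.2", "TLSv1.3")
```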

However, let’s say you don’t have the compute resources for a scalability project. An alternative might be to turn these into consistency-style projects earlier in the lifecycle. You could create Git or Jenkins extensions that search the source code for indicators that the team has deployed security headers or proper SSL configurations. You would then measure how many teams are using the build-platform extensions and review the results collected from them. This would have a similar effect to testing the deployed machines, without as much implementation overhead. Whether this works better overall for your organization will depend on where you are with your security program and its compliance requirements.

Conclusion

While the technical details of how to build security automation are an exciting challenge for any engineer, it is critical to build a system that will empower the organization. Your project will have a better chance of success if you spend time considering how the output of your tool will help guide progress. You can also control the scope of the project, in terms of development effort and coverage, by carefully considering where in the development process you will deploy the automation.  By spending time on defining how the tool can best serve the team’s security goals, you can help ensure you are building a successful platform for the company.

The next blog will focus on the technical design considerations for building security automation tools.

Peleus Uhley
Principal Scientist, Security

Adobe Works with BYU Summer Security Camp for Girls

Adobe Works with the BYU Cybersecurity Summer Camp for Girls

This summer, members of the Adobe security teams worked with Brigham Young University (BYU) on a free cybersecurity summer camp for girls in grades 8–12.  The event is organized by the BYU Cybersecurity Research Lab, and Adobe helps with funding, curriculum development, and mentoring for the program. The camp included four days of hands-on cybersecurity workshops, classes, and experiences. The students learned about many topics designed to get them excited about pursuing cybersecurity as a career, including hacking, privacy, viruses, and how to stay safe online. At the core of the event was a space-themed “escape” challenge that required teams, working from a simulated spaceship command bridge, to solve common cybersecurity problems in order to avoid power failures, hostile alien encounters, and other pitfalls. It was a good combination of training from experts and fun experiential learning.

“All the research and our own experience has shown that this age range is a critical time for young women to develop an interest in cybersecurity,” says Dr. Dale Rowe, Director of the BYU Cybersecurity Research Lab. The camp was not only beneficial for the participants; Adobe employees serving as mentors also had a great time. CJ Cornel, student director of the camp, said, “the camp was a great way to help us share our passion for cybersecurity with some of the next generation in a safe environment.”

This camp is one of many activities Adobe sponsors to encourage girls and young women to enter the cybersecurity field, including Women in Cybersecurity, Girls Who Code, the Winja “Capture the Flag” (CTF) competition, and r00tz @ BlackHat.


Chandler Newby
Information Security Engineer

Donald Porter
Sr. Manager, Security Engineering

Security Considerations for Container Orchestration

Orchestration platforms enable organizations to scale their applications at an unparalleled rate, which is why many technology-centric companies are rushing to move applications onto distributed, datacenter-wide platforms that let them scale at the click of a button.

One orchestration platform that is rapidly growing in popularity is the distributed, datacenter-wide operating system running on top of the Apache Mesos kernel. A traditional operating system manages resources for a single server, whereas a distributed operating system seamlessly manages resources for multiple servers acting as one shared pool. Once Mesos agents have been established on servers throughout a datacenter, a cluster is formed, and Mesos jobs can be distributed from a Mesos master to servers with available resources within the cluster for processing.

A risk associated with the Mesos master and slave model revolves around the authentication of Mesos services. Many Mesos deployments do not require authentication by default – if an attacker can communicate with either the Mesos master service (default TCP port 5050) or a Mesos slave service (default TCP port 5051), the attacker may be able to easily gain remote code execution (RCE) on another server within the cluster.

Frameworks are commonly installed within DC/OS environments to provide datacenter wide services. For example, the Marathon Framework is commonly used within these environments to perform container orchestration and help ensure that a specific number of instances of a container are running persistently within the DC/OS cluster. The Chronos Framework is also commonly deployed to provide fault tolerant distributed job scheduling throughout the cluster.

Today, many frameworks do not enforce authentication by default on either their management web UIs or their API interfaces. Therefore, if an attacker can communicate with the framework, they may be able to gain remote code execution (RCE) on servers within the cluster.

One hurdle associated with the exploitation of framework services is that many implementations deploy framework services to random high TCP ports on arbitrary servers within the cluster, making it slightly more difficult for an unaware attacker to find the services within the datacenter. This hurdle can be overcome by leveraging a few built-in services within the DC/OS environment. First, a unique top-level domain (TLD) is often established for the Mesos cluster, which services within the cluster can use to locate frameworks (e.g., ping -c3 marathon.mesos). Second, if the Mesos master service can be queried, a complete list of Mesos slaves can be obtained, potentially reducing the search space. Lastly, a Mesos DNS service is often established that will enable a remote attacker to perform an enumerate API call. This is the functional equivalent of performing a DNS zone transfer – it provides a detailed map of all services and their random high ports back to the attacker.
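
Defenders can turn these same observations into a simple audit. The sketch below, written against the default ports and the documented Mesos and Mesos-DNS HTTP endpoints, flags services that answer without authentication; exact endpoint paths may vary by version:

```python
import requests

def mesos_master_is_open(host: str) -> bool:
    """Flag a Mesos master that serves cluster state unauthenticated
    (5050 is the default master port; /master/state is a standard endpoint)."""
    try:
        return requests.get(f"http://{host}:5050/master/state", timeout=5).ok
    except requests.RequestException:
        return False

def mesos_dns_enumerate_is_open(host: str, port: int = 8123) -> bool:
    """Flag a Mesos DNS service whose enumerate call is reachable; it maps
    every service and random high port in the cluster for whoever asks."""
    try:
        return requests.get(f"http://{host}:{port}/v1/enumerate", timeout=5).ok
    except requests.RequestException:
        return False
```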

Defenders can easily take quite a few steps to help prevent the above exploitation paths, including:

  • Enabling authentication on all Mesos Masters and Agents
  • Enabling authentication on all framework web UIs and API services
  • Disabling the enumerate API call for the Mesos DNS service
  • Logging authentication requests and the execution of jobs for detection of suspicious events

As the popularity of orchestration platforms grows, attackers will continue to spend more resources building tools and techniques to exploit these frameworks and services. Organizations leveraging these technologies would be wise to spend the extra cycles up front to put reasonable security controls in place before using these platforms to host production applications.

Bryce Kunz
Sr. Lead Security Engineer, Digital Marketing

Tips for Sandboxing Docker Containers

In the world of virtualization, we know two words: Virtual Machines and Containers. Both provide sandboxing: virtual machines provide it through hardware-level abstraction, while containers provide process-level isolation using a common kernel. Docker containers are secure by default, but do they provide complete isolation? Let us look at the various ways sandboxing can be achieved in containers and what we need to do to approach complete isolation.

Namespaces

One of the building blocks of containers that provides the first level of sandboxing is namespaces. A namespace gives processes their own view of the system and limits the effect they can have on other processes, whether in the same container or on the host system. Today there are six namespaces available in Linux, and all of them are supported by Docker. (A short sketch following the list shows the PID namespace in action.)

  • PID namespace: Provides isolation such that a process belonging to a particular PID namespace can only see the other processes in the same namespace. Processes in one PID namespace cannot learn of the existence of processes in another PID namespace, and hence cannot inspect or kill them.
  • User namespace: Gives a process a view in which it can be root within its own namespace while being mapped to a non-privileged user on the host system. This provides a great security improvement in a Docker environment.
  • Mount namespace: Isolates the host filesystem from the new filesystem created for the process. This allows processes in different namespaces to change their mount points without affecting each other.
  • Network namespace: Gives a process its own network stack, including routing tables, iptables rules, sockets, and interfaces. Additionally, Ethernet bridges are required to allow networking between hosts and namespaces.
  • UTS namespace: Isolates two system identifiers – nodename and domainname. This allows each container to have its own hostname and NIS domain name, which is helpful during initialization.
  • IPC namespace: Isolates inter-process communication resources, including IPC message queues, semaphores, etc.
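
As a quick illustration of the PID namespace, the sketch below (using the Docker SDK for Python) contrasts the default isolated view with pid_mode="host", which deliberately shares the host’s PID namespace:

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Default: the container gets its own PID namespace, so `ps` sees only
# the container's own processes.
isolated = client.containers.run("alpine", ["ps", "-ef"], remove=True)

# Sharing the host PID namespace breaks that isolation: `ps` now sees
# every process on the host, which is why this should be a rare exception.
shared = client.containers.run("alpine", ["ps", "-ef"], pid_mode="host", remove=True)

print(isolated.decode())
print(shared.decode())
```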

Although namespaces provide a great level of isolation, there are resources a container can access that are not namespaced. These resources are common to all the containers on the host machine, which raises security concerns and may present a risk of attack or information exposure. Resources that are not sandboxed include the following:

  • The Kernel Keyring: The kernel keyring separates keys by UID. Since users in different containers may share the same UID, all of those users can access the same keys in the keyring. Applications using the kernel keyring to handle secrets are therefore much less secure due to this lack of sandboxing.
  • The /proc filesystem and system time: Due to the “one size fits all” nature of Docker, a number of Linux capabilities remain enabled. With certain capabilities enabled, the exposure of /proc offers a source of information leakage and a large attack surface; /proc includes files that contain configuration information about the kernel and information about the host system’s resources. Other capabilities, such as SYS_TIME and SYS_ADMIN, allow changes to the system time not just inside the container, but also for the host and other containers.
  • Kernel modules: If an application loads a kernel module, the newly added module becomes available across all the containers in the environment and the host system. Some modules enforce security policies; access to such modules would allow an application to change those security policies, which again is a big concern.
  • Hardware: The underlying hardware of the host system is shared between all the containers running on the system. A proper cgroup configuration and access control are required for a fair distribution of resources. In other words, namespaces allow a larger area to be divided into smaller areas, and cgroups govern the usage of those areas. Cgroups control resources like memory, CPU, and disk drives; a well-defined cgroup configuration helps prevent DoS attacks.

Capabilities

Capabilities partition the privileged operations that are traditionally reserved for the root user. An individual non-root process cannot perform a privileged operation on its own, but by dividing root’s privileges into capabilities, we can grant them to individual processes without elevating their overall privilege level. This way we can sandbox the container to a restricted set of actions, so that if it is compromised, it can do less damage than it could with full root access. Be careful when using capabilities:

  • Defaults: As mentioned earlier, with the “one size fits all” nature of Docker, a number of capabilities remain enabled. This default set of capabilities does not provide complete isolation. A better approach is to remove all the capabilities for the container and then add back only those required by the application process running in the container; determining which ones to add back is a trial-and-error exercise using various test scenarios for the application (a minimal sketch of this drop-then-add approach follows this list).
  • SYS_ADMIN capability: Another issue is that capabilities themselves are not fine-grained. The most talked-about example is the SYS_ADMIN capability, which bundles a great deal of functionality, some of which should only ever be used by a privileged user – another reason for concern.
  • SETUID binaries: The setuid bit gives full root permission to a process using it. Many Linux distributions set the setuid bit on several binaries even though capabilities can serve as an alternative to setuid – one that is safer and presents less attack surface in case of a breakout from a non-privileged container. Defang setuid binaries by removing the setuid bit or by mounting filesystems with nosuid.
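
A minimal sketch of the drop-then-add approach mentioned above, using the Docker SDK for Python; the image and the capabilities added back are illustrative of what a small web server might need:

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Start from zero capabilities, then grant back only what the app needs.
container = client.containers.run(
    "nginx:alpine",
    cap_drop=["ALL"],
    cap_add=["NET_BIND_SERVICE",   # bind to ports below 1024
             "CHOWN", "SETUID", "SETGID"],  # drop privileges to the worker user
    detach=True,
)
```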

Seccomp

Seccomp (Secure Computing mode) is a sandboxing facility in the Linux kernel that filters incoming system calls. A process declares a filter over the system calls it may make, and the kernel takes action when a call is not allowed by the filter. Thus, if an attacker gains access to the container, they have a limited number of system calls in their arsenal. Seccomp filters use the Berkeley Packet Filter (BPF) system, similar to the one used for socket filters. In other words, seccomp allows a user to catch a syscall and “allow”, “deny”, “trap”, “kill”, or “trace” it based on the syscall number and the arguments passed. This adds a further layer of granularity, locking down the processes in one’s containers to do only what is needed.

Docker provides a default seccomp profile for containers that is effectively a whitelist of allowed calls. This profile disables only 44 of the 300+ available system calls, a compromise driven by the vast range of container use cases in current deployments: making it stricter would break many applications running in Docker. For example, the reboot system call is disabled, because there should never be a situation in which a container needs to reboot the host machine.

Another good example is keyctl, a system call in which a vulnerability was recently found (CVE-2016-0728); keyctl is now also disabled by default. The most secure approach is to create a custom seccomp profile that blocks those 44 system calls plus whichever others are not required by the app running in the container. This can be done with the help of DockerSlim (http://dockersl.im), which auto-generates seccomp profiles.
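
Applying a custom profile is then a one-line change at container start. A sketch using the Docker SDK for Python; the profile path and image are placeholders, and the profile contents (e.g., DockerSlim output) use Docker’s seccomp JSON format:

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# The daemon receives the profile contents inline, not a file path.
with open("custom-seccomp.json") as f:
    profile_json = f.read()

container = client.containers.run(
    "myapp:latest",                           # hypothetical image
    security_opt=[f"seccomp={profile_json}"],
    detach=True,
)
```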

The good part about the seccomp feature is that it can make the attack surface very narrow. However, the default profile still leaves 250+ calls available, which keeps containers susceptible to attacks. For example, CVE-2014-3153 is a vulnerability in the futex system call that enables privilege escalation through a kernel exploit. This system call remains enabled, unavoidably, since it has legitimate use in implementing basic resource locking for synchronization. Although the seccomp feature makes containers more secure than in earlier versions of Docker, it only provides moderate security in the container environment. It needs to be hardened, especially for enterprises, in a way that remains compatible with the applications running in the containers.

Conclusion

Through the hardening methods for namespaces and cgroups and the use of seccomp profiles, we are able to sandbox our containers to a great extent. By following various benchmarks and applying least privilege, we can make our container environment more secure. However, this only scratches the surface; there is plenty more to take care of.

Rahul Gajria
Cloud Security Researcher Intern


Adobe’s CCF Helps Acquisitions Meet Security Compliance Requirements

The Common Controls Framework (CCF) is a comprehensive set of control requirements, rationalized from the alphabet soup of several different industry information security and privacy standards. To help ensure that our standards effectively meet our customers’ expectations, we are constantly refining this framework based on industry requirement changes, customer asks, and internal feedback.

As Adobe continues to grow as an organization and we continue to onboard new acquisitions, the CCF enables these acquisitions to come into compliance more quickly. At Adobe, the goal is for acquisitions to meet organizational security practices and standards and to get up to speed with the compliance roadmap of the organization. The CCF enables new acquisitions to inherit existing simple, scalable solutions, significantly reducing the overall effort needed to meet compliance goals.

The journey for the newest members of the Adobe family (read: acquisitions) begins with a gap assessment against the CCF. Once the gaps against the existing CCF controls are determined, the team can leverage the many scalable driver-subscriber controls (read more about driver-subscriber controls at Adobe) that are aligned with the CCF to remediate a majority of the gaps. Once remediation is completed, what is left is often a handful of controls that need to be implemented in order to achieve compliance.

Another key component of security compliance is ensuring proper supporting documentation is in place. In most cases, the acquisition can leverage existing documents used by Adobe product teams that have already achieved compliance or are on the compliance roadmap, as they all address the same CCF requirements. The team can therefore often subscribe to the existing documentation when subscribing to a service. For the standalone controls, teams can use existing templates documented in line with the CCF to speed up the documentation effort.

Example of Implementation

One of our recent acquisitions was required to achieve PCI DSS compliance and accordingly underwent the gap assessment against the CCF controls. The acquisition was able to successfully leverage many existing solutions – multifactor authentication for production access, hardened baseline images, security monitoring, and incident response processes, to name a few – to achieve compliance. In the end, the team was required to implement only a handful of standalone controls.

Given the updated requirement around multifactor authentication in PCI DSS 3.2, this acquisition will not be affected by the change, as it had already implemented multifactor authentication to satisfy the requirements listed in the CCF.

Conclusion

Adobe’s CCF controls are helping new acquisitions achieve security compliance more quickly. These teams are able to leverage much of the existing infrastructure Adobe has put in place to meet and maintain its security certifications. The overall burden of implementing these controls is therefore significantly reduced, and the acquisition, now a member of the Adobe family, can continue to delight our customers while complying with Adobe’s security requirements.

Rahat Sethi
Sr. Information Security Analyst, Adobe Risk Assurance and Analysis Services (RAAS)

Adobe @ BlackHat USA 2016

We are headed to BlackHat USA 2016 in Las Vegas this week with members of our Adobe security teams. We are looking forward to connecting with the security community throughout the week. We also hope to meet up with some of you at the parties, at the craps tables, or just mingling outside the session rooms.

This year Peleus Uhley, our Lead Security Strategist, will be speaking on Wednesday, August 3rd, at 4:20 p.m. He will be talking about “Design Approaches for Security Automation.” DarkReading says his talk is one of the “10 Hottest Talks” at the conference this year, so you do not want to miss it.

This year we are again proud to sponsor the r00tz Kids Conference @ DefCon. If you are going to DefCon and bringing your kids, we hope you take the time to bring them to this great event for future security pros. There will be educational sessions and hands-on workshops throughout the event to challenge their creativity and skills.

Make sure to follow our team on Twitter @AdobeSecurity. Feel free to follow me as well @BradArkin. We’ll be tweeting our observations and happenings during the week – look for the hashtag #AdobeBH2016.

We are looking forward to a great week in Vegas.

Brad Arkin
VP and Chief Security Officer