Posts in Category "Uncategorized"

More Effective Threat Modeling

There are a lot of theories about threat models. Their utility often depends on the context and the job to which they are applied. I was asked to speak about threat models at the recent BSIMM Community Conference, which made me formally re-evaluate my thoughts on the matter. Over the years I’ve used threat models in many ways at both the conceptual level and at the application level. In preparing for the conference I first tried to deconstruct the purpose of threat models. Then I looked at the ways I’ve implemented their intent.

Taking a step back to examine their value in any risk situation, you consider the who, what, how, when, and why:

Who is the entity conducting the attack, including nation states, organized crime, and activists.

What is the ultimate target of the attack, such as credit card data.

How is the method by which attackers will get to the data, such as SQL injection.

Why captures the reason the target is important to the attacker. Does the data have monetary value? Or are you just a pool of resources an attacker can leverage in pursuit of other goals?

A threat can be described as who will target what, using how in order to achieve why.

We will come back to when in a moment. Threat models typically put most of the focus on what and how. The implicit assumption is that it doesn’t really matter who or why—your focus is on stopping the attack. Focusing on what and how allows you to identify potential bugs that will crop up in the design, regardless of who might be conducting the attack and their motivation.

The challenge with focusing solely on what and how is that they change over time. How is dependent on the specifics of the implementation, which will change as it grows. On the other hand, who and why tend to be fairly constant. Sometimes, focusing on who and why can lead to new ideas for overall mitigations that can protect you better than the point fixes identified by how.

For instance, we knew that attackers using advanced persistent threat (APT) (who) were fuzzing (how) Flash Player (what). To look at the problem from a different angle, we decided to stop and ask why. It wasn’t solely because of Flash Player’s ubiquity. At the time, most Flash Player attacks were being delivered via Office documents. Attackers were focusing on Flash Player because they could embed it in an Office document to conduct targeted spearphishing attacks. Targeted spearphishing is a valuable attack method because you can directly access a specific target with minimal exposure. By adding a Flash Player warning dialogue to alert users of a potential spearphishing attempt in Office, we addressed why Flash Player was of value to them. After that simple mitigation was added, the number of zero-day attacks dropped noticeably.

I also mentioned that when could be useful. Most people think of threat models as a tool for the design phase. However, threat models can also be used in developing incident response plans. You can take any given risk and consider, “When this mitigation fails or is bypassed, we will respond by…”

Therefore, having a threat model for an application can be extremely useful in controlling both high-level (who/why) and low-level threats (how/what). That said, the reality is that many companies have moved away from traditional threat models. Keeping a threat model up-to-date can be a lot of effort in a rapid development environment. Adam Shostack covered many of the common issues with this in his blog post, The Trouble with Threat Modeling. The question each team faces is how to achieve the value of threat modeling using a more scalable method.

Unfortunately, there is not a one-size-fits-all solution to this problem. For the teams I have worked with, my approach has been to try and keep the spirit of threat modeling but be flexible on the implementation. Threat models can also have different focuses, as Shostack describes in his blog post, Reinvigorate your Threat Modeling Process. To cover all the variants would be too involved for a single post, but here are three general suggestions:

  1. There should be a general high-level threat model for the overall application. This high-level model ensures everyone is headed in the same direction, and it can be updated as needed for major changes to the application. A high-level threat model can be good for sharing with customers, for helping new hires to understand the security design of the application, and as a reference for the security team.
  2. Threat models don’t have to be documented in the traditional threat model format. The traditional format is very clear and organized, but it can also be complex and difficult to document in different tools. The goal of a threat model is to document risks and plans to address them. For individual features, this can be in a simple paragraph form that everyone can understand. Even writing, “this feature has no security implications,” is informative.
  3. Put the information where developers are most likely to find it. For instance, adding a security section to the spec using the simplified format suggested eliminates the need to cross-reference a separate document, helping to ensure that everyone involved will read the security information. The information could also be captured in the user story for the feature. If your code is the documentation, see if your Javadoc tool supports custom tags. If so, you could encourage your developers to use an @security tag when documenting code. If you follow Behavior Driven Development, your threat model can be captured as Cucumber test assertions. Getting this specific means the developer won’t always have the complete picture of how the control fits into the overall design. However, it is important for them to know that the documentation note is there for a specific security reason. If the developer has questions, the security champion can always help them cross-reference it to the overall threat model.
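To make point 3 concrete, here is what a docstring-level security note might look like in Python (the @security marker is just a convention, the function is invented for illustration, and the referenced threat model section is hypothetical):

```python
import re

def make_safe_filename(name: str) -> str:
    """Build a storage filename from user input.

    @security: strips path separators and leading dots so user
    input cannot traverse out of the upload directory. See the
    "file uploads" section of the threat model (hypothetical
    reference) for the full rationale.
    """
    # Replace anything outside a conservative whitelist, then
    # drop leading dots so ".." and hidden files are neutralized.
    cleaned = re.sub(r"[^A-Za-z0-9._-]", "_", name)
    return cleaned.lstrip(".")

# The note travels with the code: a reviewer, a grep for
# "@security", or a doc generator can all surface it without
# opening a separate threat model document.
print(make_safe_filename("../../etc/passwd"))  # no path separators survive
```

If the developer wants the bigger picture, the docstring points them back to the relevant section of the overall threat model.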

Overall I think the concept of threat modeling still serves a valid purpose. Examining how and what can ensure your implementation is sound, and you can also identify higher level mitigations by examining who and why. The traditional approach to threat modeling may not be the right fit for modern teams, though teams can achieve the goal if they are creative with their implementation of the concept. Along with our development processes, threat modeling must also evolve.

Peleus Uhley
Lead Security Strategist

Join Us at CSA EMEA Congress November 19 – 20!

Adobe will be participating again this year in the Cloud Security Alliance (CSA) EMEA Congress event in Rome, Italy, November 19 – 20, 2014. This conference attracts senior decision makers in IT Security from a wide range of industries and governmental organizations. This event focuses on regulatory, compliance, governance, and technical security issues facing both cloud service providers and users of cloud services. We’re excited to be back at what promises to be another great event this year.

I will be presenting a keynote session entitled “Security Roadmaps and Dashboards, Oh My!” on Thursday, November 20th, at 9:40 a.m. A “good” security roadmap is going to come from an ear-to-the-ground approach to security across all teams. It should also reflect current security industry trends. This is essential in creating a multi-faceted, balanced security roadmap that actually drives teams to build security into everything they do. How do you build and keep a solid, adaptable security roadmap in place? By focusing on the right metrics to measure success against the roadmap and developing meaningful dashboards to communicate progress and success to management. This presentation will discuss how Adobe tackled this problem across its very large product, service, and IT organization and provide insights into how you might tackle this problem in your own organization. I will also be available in our booth to answer questions after the session.

Please make sure to follow @AdobeSecurity on Twitter for the latest happenings during CSA EMEA Congress as we will be live tweeting during the event – look for the hashtag #AdobeCSA.

 

David Lenoe

Director, Product Security

Looking Back at the Grace Hopper Celebration

As someone new to the Grace Hopper Celebration (GHC), I was excited and overwhelmed to find myself among around 8,000 women from more than 60 countries. I had the opportunity to meet some really interesting people from within and outside of Adobe.

The keynote by Shafi Goldwasser (winner of the 2012 ACM Turing award) was especially interesting. She discussed cryptography and the varied, seemingly paradoxical solutions it can help us achieve. Highlighting the need to store data privately in the cloud while simultaneously harnessing that data to solve problems (e.g., research in medicine), she emphasized the “magic of cryptography” as the key to this, and spoke at some length about looking at problems through the “cryptographic lens.”

Dr. Arati Prabhakar’s (Director of DARPA) keynote during the award ceremonies was very inspiring. She talked about the benefits military research has provided to areas like the Internet, materials science and safer warfare, and about further research into new areas, such as producing new materials and chemicals and rethinking complex military systems. She even showed the audience a video of a robotic arm being controlled by a quadriplegic woman hooked up to a computer.

The majority of presentations I attended were related to security, where I met smart and motivated women working in the security field, and a lot of students interested in security. The talks varied from Lorrie Cranor’s talk on analyzing and storing passwords safely, to a panel discussion on integrating security into the SDLC (panelists included Justine Osborne, Leigh Honeywell and Parisa Tabriz), to homomorphic encryption and its future uses (Mariana Raykova and Giselle Font). Other talks ranged from security fundamentals and cryptography aimed at college students to “hot topics” like wearable technology, biometrics, cloud computing and HCI.

I also helped out at the career fair, and met a lot of undergraduates interested in working with Adobe. It was fun talking with them about what I do and learning about what they were interested in, including two students Adobe had sponsored to attend GHC this year. I met a number of industry professionals as well as students at talks and events who are working on including more girls and women in tech through outreach programs, hackathons and mentoring. It was refreshing to see a few men attending the GHC too.

The theme of the GHC this year was “Everyone, Everywhere.” It was a very inclusive environment, and apart from the talks there were events to make our evenings fun: ice breakers and dances. The impressive speakers, motivating panelists, and encouraging mentors and organizations were all very accessible and inspiring. I had a great time at GHC and I hope more people (men and women!) get to attend the conference in the future.

Devika Yeragudipati
ASSET Security Researcher

The Simplest Form of Secure Coding

Within the security industry, we often run into situations where providing immediate security guidance isn’t straightforward. For instance, a team may be using a new, cutting-edge language that doesn’t have many existing security tools or guidelines available. If you are a small startup, you may not have the budget for enterprise security solutions. In large organizations, the process of migrating a newly acquired team into your existing tools and trainings may take several weeks. What advice can we give to those teams to get them down the road to security today?

In these situations, I remind them to go back to their original developer training. Many developers are familiar with the term “technical debt,” which refers to the “eventual consequences of poor system design, software architecture or software development within a codebase.” Technical security debt is one component of an application’s overall technical debt. The higher the technical debt for an application, the greater the chance of security issues. Moreover, it’s much easier to integrate security tools and techniques into code that has been developed with solid processes.

To a certain extent, the industry has known this for a while. Developers like prepared statements because pre-compiled code runs faster; security people like them because pre-compiled code is less injectable. Developers want exception handling because it makes the web application more stable and they can cleanly direct users to a support page, which is a better user experience. Security people want exception handling so that there is a plan for malicious input, and because showing the stack trace is an information leak. However, let’s take this a step further.
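As a quick illustration of the prepared-statement overlap, here is a minimal sketch using Python’s built-in sqlite3 module (the table and injection string are made up for demonstration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111-example')")

user_input = "x' OR '1'='1"  # a classic injection attempt

# Unsafe: string concatenation mixes code and data -- the OR
# clause becomes part of the query and matches every row.
unsafe = conn.execute(
    "SELECT card FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the ? placeholder is pre-compiled; the input is bound as
# a value, so its quote characters carry no SQL meaning.
safe = conn.execute(
    "SELECT card FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(unsafe), len(safe))  # → 1 0
```

The parameterized query matches no rows because the injection string is treated purely as data, which is exactly why both developers and security people prefer it.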

If you search the web for “Top 10” lists of developer best practices and/or common coding mistakes, you will see a clear overlap between traditional coding principles and security principles across all languages. For example, the Modern IE Cross Browser Development Standards & Interoperability Best Practices is written for developers and justifies its points using concepts that are important to clean development. However, I can go through the same list and justify many of its points using security concepts. Here are just a few of their recommendations and how they relate to security:

  • Use a build process with tools to check for errors and minify files. On the security side, establishing this will enable you to more easily integrate security tools into the build process.
  • Always use a standards-based doctype to avoid Quirks Mode. Quirks Mode makes it easier to inject XSS vulnerabilities into your site.
  • Avoid inline JavaScript tags in HTML markup. In security, avoiding inline JavaScript makes it easier to support a Content Security Policy.

Switching to Ruby on Rails, here’s a list of the 10 Most Common Rails Mistakes and how to avoid them so developers can create better applications. When you look at those errors from a security perspective, you will also see overlaps:

  • Common Mistake #1-3: Putting too much logic in the controller/view/model. These three points all deal with keeping your code cleanly designed for better maintainability. Security is a common reason for performing code maintenance. Oftentimes, your response to an active attack against your system will be slowed because the code cannot be easily changed or is too complex to offer a single validation point.

    This section also reminds us that the controller is the recommended place for first-level session and request parameter management. This allows for high-level sanity checking before the data makes it into your model.

  •   Common Mistake #5: Using too many gems. Controlling your third-party libraries also helps to limit your attack surface and reduce the maintenance costs of keeping them up-to-date for security vulnerabilities.
  • Common Mistake #7: Lack of automated tests. As mentioned in the HTML list above, using an automated test framework enables you to also include security tests. That post recommends techniques such as BDD, for which there are also Ruby-based BDD security frameworks like Gauntlt.
  •  Common Mistake #10: Checking sensitive information into source code repositories. This is clearly a security rule. In this example, they are referring to a specific issue with Rails secret tokens. However, this is a common mistake for development in general. Separating credentials from code is simply good coding hygiene – it can prevent an unintended leak of the credential and permit a credential rotation without having to rebuild the application.
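A minimal sketch of that credential separation in Python (the DB_PASSWORD variable name is purely illustrative; in practice a deployment tool or vault would populate the environment, not the application itself):

```python
import os

def get_db_password() -> str:
    # The credential is read from the environment at runtime and
    # never committed to the repository, so rotating it is a
    # deployment change rather than a code change and rebuild.
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD is not set for this environment")
    return password

os.environ["DB_PASSWORD"] = "example-only"  # a deploy tool or vault would set this
print(get_db_password())  # → example-only
```

Failing loudly when the variable is missing also surfaces configuration mistakes at startup instead of at the first database call.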

Even if you go back to a 2007 article in the IBM WebSphere Developer Technical Journal on The Top Java EE Best Practices, which is described as a “best-of-the-best list of what we feel are the most important and significant best practices for Java EE,” then you will see the same themes being echoed within the first five principles of the list:

  • Always use MVC. This was also mentioned in the Rails Top 10. Centralized development allows for centralized validation.
  • Don’t reinvent the wheel. This is true for security, as well. For instance, don’t invent your own cryptography library wheel!
  • Apply automated unit tests and test harnesses at every layer. Again, this will make it easier to include security tests.
  • Develop to the specifications, not the application server. This point highlights the importance of not locking your code into a specific version of the server. One of the most frequent issues large enterprises struggle with is migrating from older, vulnerable platforms, because the code is too dependent on the old environment. This concept is also related to #16 in their list, “Plan for version updates”.
  • Plan for using Java EE security from Day One. The idea here is similar to “Don’t reinvent the wheel.” Most development platforms provide security frameworks that are already tested and ready to use.

As you can see, regardless of your coding language, security best practices tend to overlap with your developer best practices. Following them will either directly make your code more secure or make it easier to integrate security controls later. In meetings with management, developers and security people can be aligned in requesting adequate time to code properly.

It’s true that security-specific training will always be necessary for topics such as authentication, authorization, cryptography, etc. And security training certainly shows you how to think about your code defensively, which will help with application logic errors. However, a lot of the low-hanging bugs and security issues can be caught by following good, old-fashioned coding best practices. The more you can control your overall technical debt, the more you will control your security debt.

Peleus Uhley
Lead Security Strategist

Building Relationships and Learning at Black Hat and DEF CON

Adobe attends Black Hat in Las Vegas each year, and this year was no exception. The Adobe security team, as well as several security champions from Adobe’s product teams, attended Black Hat, and a few stayed on for DEF CON, too. What follows are the experiences and takeaways of Rajat and Karthik, security researchers on ASSET, from Black Hat and DEF CON 2014.

Security is often characterized as a dichotomy between “breaking” and “building.” Presentations at Black Hat and DEF CON are no exception, falling into these categories as a result of the approach that hackers take toward their research. For example, Charlie Miller and Chris Valasek’s “A Survey of Remote Automotive Attack Surfaces” was a memorable talk in the breaking-security category: they disassembled the onboard computers in over twenty commercial cars and analyzed ways to remotely control them. It was refreshing to take a step back and observe that security scrutiny can be brought to bear on all engineering design, not just software design.

In the building-security category, we appreciated the format of the various roundtables at Black Hat because they mirrored many of the themes of security conversations across Adobe. For example, we found the roundtable discussions on API Security and Continuous Integration and Deployment to be valuable lessons for our researchers and security champions. At DEF CON, we came across DemonSaw, a new tool that lets you securely share files in a peer-to-peer network without requiring cloud storage. We found it to be an impressive implementation of cryptography fundamentals in service of security and privacy.

We noticed a gradual shift in the focus of the talks from last year: more hackers are going after hosted services and mobile/embedded applications. This gave Adobe security champions the opportunity to see how hackers adapt to changes in the industry and to get an attacker’s perspective on compromising applications that may be similar to our own. Oftentimes security champions had to strike a balance between talks that applied to their day-to-day work, like Alex Stamos’ Building Safe Systems at Scale, and talks that were interesting given their impact on the industry, such as the talk about BadUSB. We also saw the recurring theme that each year the security community finds more serious vulnerabilities than the last, as new products and platforms flood the market. It was a reminder that with the universal growth of technology comes a need for deeper investment in security.


Adobe-hosted event at the Cosmopolitan’s Chandelier Bar on August 7th.

Black Hat and DEF CON offer much more than presentations and trainings. The Black Hat Arsenal showcased cutting-edge security research, with prototypes of packet-capturing drones and tools that harvest information from various embedded devices. Most of the tools on display were open source, and it was great to see research shared within the security community. The Vendor Expo was an expansive mix of large companies promoting their product suites alongside newcomers exploring niche problems such as log mining, threat intelligence, and biometric security. No DEF CON is complete without a Capture the Flag (CTF) event, a place for professionals and hobbyists to build their skills and compete in solving real-world challenges related to forensics and Web exploitation; this year’s competition was won by PPP.

It was evident that Black Hat and DEF CON have steadily grown in popularity. For the first time at Black Hat we were standing in line to enter briefings. The size and scale of these events keep increasing, which is a testament to the expanding influence of security in technology and business. Despite the growth, the atmosphere at Black Hat and DEF CON remains collegial. Meeting and talking with people about the challenges we all face always makes for a valuable learning experience.

Karthik Raman, Security Researcher
Rajat Shah, Security Researcher


View of an Internship with ASSET

I technically joined the security community last year when I began my Master’s in Information Security at Carnegie Mellon University. I gained a lot of theoretical and practical knowledge from the program, but my internship with ASSET gave me a totally new perspective on how security in a large organization works. I worked on multiple projects over the summer in the beautiful city of San Francisco. I have outlined one of them below.

Adobe follows a Secure Product Lifecycle (SPLC). To cater to the large number of current and future Adobe products, the security guidance ASSET provides to teams needs to be scalable. Scalability requires automation; otherwise, the number of security researchers and their time becomes a bottleneck. Security guidance is also tailored to the configuration of each project. For example, a Web service written in Java that handles confidential information requires a very different set of guidelines than an Android application.

For such targeted guidance, we use a smart system called SD Elements. In SD Elements, I performed a gap analysis on the security recommendations for Android and iOS apps, as well as for desktop and rich-client applications. I did quite a bit of research in the process; some of my sources included the CERT guidelines for securing applications, internal pen-test reports, and a number of academic research papers and vendor reports. Adobe has now moved to cloud deployment for many of its products: Creative Cloud and Marketing Cloud are prime examples. To support this momentum, I also expanded the deployment phase in SD Elements, a set of guidelines for DevOps teams to securely deploy and maintain their applications in the cloud.

During my internship, I worked with Mohit Kalra who was my manager and Karthik Raman, my mentor. They were always available to guide me whenever I got stuck on a problem and always gave me specific Adobe context. My other team-members were also very helpful and considerate throughout the internship and they always made me feel at home. As part of Adobe Be Involved month, I also got a chance to volunteer at Edgewood Center for Children and Families, which was a humbling experience. We played kickball with the kids and it was really great to see smiles on their faces.


Volunteer picture from Edgewood Center for Children and Families. (I’m the guy in bottom left.)

As a result of my internship at Adobe, I feel like I’ve really improved my technical knowledge and my understanding of how security works within an organization. Thanks, Adobe.

Mayur Sharma
Security Intern

An Overview of Behavior Driven Development Tools for CI Security Testing

While researching continuous integration (CI) testing models for Adobe, I have been experimenting with different open-source tools like Mittn, BDD-Security, and Gauntlt. Each of these tools centers around a process called Behavior Driven Development (BDD). BDD enables you to define your security requirements as “stories,” which are compatible with Scrum development and continuous integration testing. Whereas previous approaches required development teams to go outside their normal process to use security tools, these frameworks aim to integrate security tools within the existing development process.

None of these frameworks are designed to replace your existing security testing tools. Instead, they’re designed to be a wrapper around those security tools so that you can clearly define unit tests as scenarios within a story. The easiest way to understand the story/scenario concept is to look at a few examples from BDD-Security. This first scenario is part of an authentication story. It verifies that account lockouts are enforced by the demo web application:

Scenario: Lock the user account out after 4 incorrect authentication attempts
Meta: @id auth_lockout
Given the default username from: users.table
And an incorrect password
And the user logs in from a fresh login page 4 times
When the default password is used from: users.table
And the user logs in from a fresh login page
Then the user is not logged in

The BDD frameworks take this human-readable statement about your security policy and translate it into a technical unit test for your web application penetration testing tool. With this approach, you’re able to phrase your security requirements for the application as a true/false statement. If Jenkins sees a false result from this unit test, then it catches the bug immediately and can flag the build. In addition, this human-readable approach to defining unit tests allows the scenarios to double as documentation. An auditor can quickly read through the scenarios and map them to a threat model or policy requirement.
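Stripped of the framework machinery, each scenario boils down to a true/false unit test over a security control. Here is a rough Python sketch of the lockout scenario’s logic (the LoginTracker class is a toy model invented for illustration; the real frameworks drive an actual application rather than a stand-in):

```python
class LoginTracker:
    """Toy stand-in for an application's account-lockout control."""

    def __init__(self, max_attempts: int = 4):
        self.max_attempts = max_attempts
        self.failures = {}

    def login(self, user: str, password_ok: bool) -> bool:
        # Once the failure budget is spent, refuse the login even
        # if the supplied password is correct.
        if self.failures.get(user, 0) >= self.max_attempts:
            return False
        if not password_ok:
            self.failures[user] = self.failures.get(user, 0) + 1
            return False
        return True

# Scenario: lock the account out after 4 incorrect attempts,
# then verify the correct password no longer works.
tracker = LoginTracker(max_attempts=4)
for _ in range(4):
    tracker.login("bob", password_ok=False)
print(tracker.login("bob", password_ok=True))  # → False
```

A false result here is exactly the kind of signal a CI server can use to flag the build.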

In order to interact with your web site and perform the login, the framework needs a corresponding class written with a web browser automation framework. The BDD example above uses a custom class that leverages the Selenium 2 framework to navigate to the login page, find the login form elements, fill in their values, and have the browser perform the submit action. Selenium is a common tool among web site testers, so your testing team may already be familiar with it or similar frameworks.

Writing custom classes that understand your web site is good for creating specific tests around your application logic. However, you can also perform traditional spidering and scanning tests as in this second example from BDD-Security:

Scenario: The application should not contain Cross Site Scripting vulnerabilities
Meta: @id scan_xss
#navigate_app will spider the website
GivenStories: navigate_app.story
Given a scanner with all policies disabled
And the Cross-Site-Scripting policy is enabled
And the attack strength is set to High
And the alert threshold is set to Medium
When the scanner is run
And false positives described in: tables/false_positives.table are removed
Then no Medium or higher risk vulnerabilities should be present

For the BDD-Security demo, the scanner used is the OWASP ZAP proxy. However, BDD frameworks are not limited to tests through a web proxy. For instance, this example from BDD-Security shows how to run Nessus and ensure that the scan doesn’t return anything of severity 2 (medium) or higher:

Scenario: The host systems should not expose known security vulnerabilities

Given a nessus server at https://localhost:8834
And the nessus username continuum and the password continuum
And the scanning policy named test
And the target hosts
|hostname |
|localhost|
When the scanner is run with scan name bddscan
And the list of issues is stored
And the following false positives are removed
|PluginID|Hostname |Reason                                              |
|43111   |127.0.0.1|Example of how to add a false positive to this story|
Then no severity: 2 or higher issues should be present

There are a lot of good blogs and presentations (1,2,3) that further explain the benefits of BDD approaches to security, so I won’t go into any further detail. Instead, I will focus on three current tools and highlight key differences that are important to consider when evaluating them.

Which BDD Tool is Right for You?

To start, here is a quick summary of the tools at the time of this writing:

|                               |Mittn               |Gauntlt                            |BDD-Security   |
|Primary Language               |Python              |Ruby                               |Java           |
|Approximate Age                |3 months            |2 years                            |2 years        |
|Commits within last 3 months   |yes                 |yes                                |yes            |
|BDD Framework                  |Behave              |Cucumber                           |JBehave        |
|Default Web App Pen Test Tools |Burp Suite, radamsa |Garmr, arachni, dirb, sqlmap, curl |ZAP, Burp Suite|
|Default SSL Analysis           |sslyze              |heartbleed, sslyze                 |TestSSL        |
|Default Network Layer Tools    |N/A                 |nmap                               |Nessus         |
|Windows or Unix                |Unix                |Unix**                             |Both           |

** Gauntlt’s “When tool is installed” statement depends on the Unix “which” command. If you exclude that statement from your scenarios, many tests will work on Windows.

If you plan to wrap more than the officially supported list of tools, or have complex application logic, you may need custom sentences, known as “step definitions.” Modifying step definitions is not difficult. However, once you start modifying code, you have to consider how to merge your changes with future updates to the framework.

Each framework has a different approach to its step definitions. For instance, BDD-Security tends to encourage formal step definition sentences in all its test cases, which would require code changes for custom steps. With Gauntlt, you can store additional step definition files in the attack_adapters directory. Gauntlt also provides flexibility through a few generic step definitions that allow you to check the output of arbitrary raw command lines, as seen in its hello world example below:

Feature: hello world with gauntlt using the generic command line attack
  Scenario:
    When I launch a "generic" attack with:
      """
      cat /etc/passwd
      """
    Then the output should contain:
      """
      root
      """

Similarly, you should consider how each framework handles false positives from the tools. For instance, Mittn allows you to track them in a database. BDD-Security lets you address false positives with statements within the scenario, as seen in the Nessus example above, or in a separate table. Gauntlt’s approach is to leverage “should not contain” statements within the scenario.

Since these tools are designed to be integrated into your continuous integration testing frameworks, you will want to evaluate how compatible they will be and how tightly you will need them integrated. For instance, quoting Continuum Security’s BDD introduction:

BDD-Security jobs can be run as a shell script or ant job. With the xUnit and JBehave plugins the results will be stored and available in unit test format. The HTML reports can also be published through Jenkins.

Mittn is also based on Behave and can produce JUnit XML test result documents. Gauntlt’s default output is handled by Cucumber. By default, Gauntlt supports Cucumber’s “pretty” stdout format and HTML output, but you can modify the code to get JUnit output; there is an open improvement request to allow JUnit output through the config file. Gauntlt has documentation for Travis CI integration, as well.
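Once a framework emits JUnit-style XML, gating a build on it is straightforward. The sketch below parses a minimal, hand-written stand-in report and lists the failed security scenarios; a real CI job would read the file the framework wrote and exit nonzero on any failure.

```python
import xml.etree.ElementTree as ET

# A minimal stand-in for a JUnit-style report, as produced by Mittn
# (or Gauntlt with JUnit output enabled). Contents are illustrative.
report = """<testsuite tests="2" failures="1">
  <testcase name="ports are restricted"/>
  <testcase name="no XSS found">
    <failure message="reflected XSS on /search"/>
  </testcase>
</testsuite>"""

def failed_cases(junit_xml):
    """Return the names of test cases that contain a <failure> element."""
    root = ET.fromstring(junit_xml)
    return [tc.get("name") for tc in root.iter("testcase")
            if tc.find("failure") is not None]

failures = failed_cases(report)
if failures:
    print(f"security gate failed: {failures}")  # CI would exit nonzero here
```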

Overall, the tools were not difficult to deploy. With a little work, the VirtualBox demo environment in Gauntlt’s starter kit can be converted for cloud deployment via Chef. When choosing a framework, you should also consider the platform support of the security tools you intend to use and the platform of your integration testing environments. For instance, using a GUI-based tool on a Linux server will require an X11 desktop to be installed.

All of these tools have promise depending on your preferences and needs. BDD-Security would be a good tool for web development teams who are familiar with tools like Selenium and want tight integration with their processes. Gauntlt’s ability to support “generic” attacks makes it a good tool for teams that want to use a wide array of tools in their testing. Mittn is the youngest entry and doesn’t yet have features like built-in spidering support. However, Python developers can easily find libraries for spidering sites, and Mittn’s external-database approach to tracking issues may be useful for teams with other systems that need to be notified of new results.

Before adopting one of these tools, an organization will likely do a buy-vs.-build analysis with commercial continuous monitoring offerings. For those who will be presenting the build argument, these tools provide enough to make a solid case in that discussion.

Where these frameworks add value is by allowing you to take your existing security tools (Burp, Nessus, etc.) and make them a part of your daily build process. By integrating with the continuous build system, you can immediately identify potential issues and ensure a minimum baseline of security. The scenario-based approach allows you to map requirements in your security policy to clearly defined unit tests. This evolution of open-source security frameworks that are designed to directly integrate with the DevOps process is an exciting step forward in the maturity of security testing.

Peleus Uhley
Lead Security Strategist

Observations From an OWASP Novice: OWASP AppSec Europe

Last month, I had the opportunity to attend OWASP AppSec Europe in Cambridge.

The conference was split into two parts. The first two days consisted of training courses and project summits, where the different OWASP project teams met to discuss problems and further proceedings, and the last two days were conference and research presentations.

Admittedly an OWASP novice, I was excited to learn what OWASP has to offer beyond the Top 10 Project most of us are familiar with. As is commonly the case with conferences, a lot of interesting conversations occurred over coffee (or cider). I had the opportunity to meet some truly fascinating individuals who gave great insight into the “other” side of the security fence, including representatives from the Information Security Group at Royal Holloway, various OWASP chapters, and many more.

One of my favorite presentations was from Sebastian Lekies, PhD candidate at SAP and the University of Bochum, who demonstrated byte-level flow analysis of websites, using a modified Chrome browser to find DOM-based XSS vulnerabilities. Taint tags were put on every byte of memory that originates from user input and traced through the entire execution until the data was displayed back to the user. This browser was used to automatically analyze the first two levels of all Alexa Top 5000 websites, finding that an astounding 9.6 percent carry at least one DOM-based XSS flaw.
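The core idea of taint tracking can be shown in a greatly simplified form. The real research instruments the browser’s string implementation at the byte level; the sketch below merely flags whole strings as attacker-controlled and propagates that flag to a sink, with all names invented for illustration.

```python
# Greatly simplified taint-tracking sketch: a wrapper type remembers
# that a string came from user input, and concatenation propagates
# that mark until it reaches a DOM sink.
class Tainted(str):
    """A string flagged as attacker-controlled."""

def concat(*parts):
    """String concatenation that propagates taint to the result."""
    result = "".join(parts)
    if any(isinstance(p, Tainted) for p in parts):
        return Tainted(result)
    return result

def write_to_dom(markup):
    """A sink like document.write: tainted input means potential DOM XSS."""
    return "DOM-XSS risk" if isinstance(markup, Tainted) else "ok"

user_input = Tainted("<script>alert(1)</script>")  # e.g., from location.hash
html = concat("<div>", user_input, "</div>")
verdict = write_to_dom(html)  # the taint survived to the sink
```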

Another interesting presentation was a third day keynote by Lorenzo Cavallaro from Royal Holloway University. He and his team are creating an automatic analysis system to reconstruct behaviors of Android malware called CopperDroid. It was a very technical, very interesting talk, and Lorenzo could have easily filled another 100 hours.

Rounding out the event were engaging activities that broke up the sessions – everything from the University Challenge to a game show to a (very Hogwarts-esque) conference dinner at Homerton College’s Great Hall.

All in all, it was an exciting opportunity for me to learn how OWASP has broadened its spectrum in the last few years beyond web application security, and to discover all the resources that are currently available. I learned a lot, met some great people, and had a great time. I highly recommend it to anyone who has the opportunity to attend!

Lars Krapf
Security Researcher, Digital Marketing

Retiring the “Back End” Concept

For people who have been in the security industry for some time, we have grown very accustomed to the phrases “front end” and “back end.” These terms, in part, came from the basic network architecture diagram that we used to see frequently when dealing with traditional network hosting:

[Figure: traditional network hosting diagram with web servers in DMZ 1 and databases in DMZ 2]

The phrase “front end” referred to anything located in DMZ 1, and “back end” referred to anything located in DMZ 2. This was convenient because the application layer discussion of “front” and “back” often matched nicely with the network diagram of “front” and “back.”  Your web servers were the first layer to receive traffic in DMZ 1 and the databases which were behind the web servers were located in DMZ 2. Over time, this eventually led to the implicit assumption that a “back end” component was “protected by layers of firewalls” and “difficult for a hacker to reach.”

How The Definition Is Changing

Today, the application layer diagrams for cloud architectures do not always match up as nicely with their network layer counterparts. At the network layer, the diagram frequently turns into the one below:

[Figure: cloud architecture diagram with the database service exposed over untrusted networks]

In the cloud, the back end service may be an exposed API waiting for posts from the web server over potentially untrusted networks. In this example, the attacker can now directly reach the database over the network without having to pass through the web server layer.

Many traditional “back end” resources are now offered as stand-alone services. For instance, an organization may leverage a third-party database-as-a-service (DBaaS) solution that is separate from its cloud provider. In some instances, an organization may decide to make its S3 buckets public so that they can be directly accessed from the Internet.

Even when a company leverages integrated solutions offered by a cloud provider, shared resources frequently exist outside the defined, protected network. For instance, “back end” resources such as S3, SQS and DynamoDB will exist outside your trusted VPC. Amazon does a great job of keeping its AWS availability zones free from most threats. However, you may want to consider a defense-in-depth strategy where SSL is leveraged to further secure these connections to shared resources.
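One concrete form of that defense-in-depth strategy is to insist on a verified TLS session for any connection that leaves the VPC, rather than trusting the network path. The sketch below uses Python’s standard ssl module; the endpoint name in the comment is only an example.

```python
import ssl

# Defense-in-depth sketch: require a verified TLS session when talking
# to shared resources outside the trusted network segment. Python's
# default context already verifies certificates and hostnames; the
# assignments below simply make the security settings explicit.
context = ssl.create_default_context()
context.check_hostname = True                     # reject name mismatches
context.verify_mode = ssl.CERT_REQUIRED           # require a valid cert chain
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols

# A client would then wrap its socket before speaking to the service,
# e.g. (endpoint name is illustrative):
#   with socket.create_connection(("sqs.us-east-1.amazonaws.com", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="sqs.us-east-1.amazonaws.com") as tls:
#           ...
```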

With the cloud, we can no longer assume that the application layer diagram and the network layer diagrams are roughly equivalent since stark differences can lead to distinctly different trust boundaries and risk levels. Security reviews of application services are now much more of a mix of network layer questions and application layer questions. When discussing a “back end” application component with a developer, here are a few sample questions to measure its exposure:

*) Does the component live within your private network segment, as a shared resource from your cloud provider, or is it completely external?

*) If the component is accessible over the Internet, are there Security Groups or other controls such as authentication that limit who can connect?

*) Are there transport security controls such as SSL or VPN for data that leaves the VPC or transits the Internet?

*) Is the data mirrored across the Internet to another component in a different AWS region? If so, what is done to protect the data as it crosses regions?

*) Does your threat model take into account that the connection crosses a trust boundary?

*) Do you have a plan to test this exposed “back end” API as though it were a front end service?

Obviously, this isn’t a comprehensive list, since several of these questions will lead to follow-up questions. The list is just designed to get the discussion headed in the right direction. With proper controls, a cloud service may emulate a “back end,” but you will need to ask the right questions to ensure that there isn’t an implicit security-by-obscurity assumption.
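The questions above can be captured as a simple review checklist. The sketch below is illustrative only, with made-up field names, but it shows how a review might flag “back end” components that are effectively front ends:

```python
# Illustrative only: encode the exposure questions as a checklist so a
# security review can flag "back end" components that should really be
# treated as front ends. Field names are invented for this sketch.
def exposure_flags(component):
    """Return the review questions this component fails."""
    flags = []
    if component["network"] != "private":
        flags.append("lives outside the private network segment")
    if component["internet_reachable"] and not component["authn"]:
        flags.append("Internet-reachable without authentication controls")
    if component["crosses_trust_boundary"] and not component["tls"]:
        flags.append("crosses a trust boundary without transport security")
    return flags

# Example: a third-party DBaaS reached over the Internet.
dbaas = {
    "network": "third-party",
    "internet_reachable": True,
    "authn": False,
    "crosses_trust_boundary": True,
    "tls": False,
}
issues = exposure_flags(dbaas)  # every flag fires: test it like a front end
```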

The cloud has driven the creation of DevOps, which is the combination of software engineering and IT operations. Similarly, the cloud is morphing application security reviews to include more analysis of network layer controls. For those of us who date back to the DMZ days, we have to readjust our assumptions to reflect the fact that many of today’s “back end” resources are now connected across untrusted networks.

Peleus Uhley
Lead Security Strategist


Another Successful Adobe Hackfest!

ASSET, along with members of the Digital Marketing security team, recently organized an internal “capture the flag” event called Adobe Hackfest. Now in its third year, this 10-day event accommodates teams spread across various geographies. The objective is for participants to find and exploit vulnerable endpoints to reveal secrets. The lucky contestants who complete all hacks at each level are entered to win some awesome prizes.

This year, we challenged participants with two vulnerabilities to hack at two different difficulty levels, carefully chosen to create security awareness within the organization. Using the two hacks as teaching opportunities, we targeted three information security concepts: cross-site scripting, SQL injection, and password storage. Our primary intention was to demonstrate the consequences of insecure coding practices via a simulated vulnerable production environment.
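Two of those lessons have well-known secure counterparts, sketched below with an in-memory SQLite database as a stand-in for production: parameterized queries defeat SQL injection because user input never becomes part of the SQL text, and passwords are stored as salted, slow hashes rather than plaintext.

```python
import hashlib
import hmac
import os
import sqlite3

# Teaching sketch of two secure practices: parameterized SQL and
# salted password hashing. SQLite stands in for a real database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, pw_hash BLOB, salt BLOB)")

def store_password(name, password):
    """Store a salted, slow hash instead of the plaintext password."""
    salt = os.urandom(16)
    pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Parameterized query: the driver binds values separately from the
    # SQL text, which is what defeats SQL injection.
    db.execute("INSERT INTO users VALUES (?, ?, ?)", (name, pw_hash, salt))

def check_password(name, password):
    """Recompute the hash with the stored salt and compare in constant time."""
    row = db.execute("SELECT pw_hash, salt FROM users WHERE name = ?",
                     (name,)).fetchone()
    if row is None:
        return False
    pw_hash, salt = row
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, pw_hash)

store_password("alice", "s3cret")
```

A classic injection string such as `' OR '1'='1` is simply treated as a (wrong) password here, because it is bound as data rather than interpreted as SQL.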

Contributing to the event’s success were logistical improvements over previous events that created a more seamless experience. The event was heavily promoted internally, and we had dedicated channels for participants to ask questions or request hints, including three hosted Adobe Connect sessions in different time zones. The Digital Marketing security team also created a framework that generated unique secrets for every participant, and a leaderboard that updated automatically.

Participants worked very hard, which generated stiff competition: more than 50 percent unlocked at least one secret, and nearly 30 percent unlocked all four. Though our developers, quality engineers, and everyone else involved in shipping code undergo various information security trainings, this event helps bring theory into practice by emphasizing that there is no “silver bullet” when it comes to security, and that a layered approach matters.

Participation was at an all-time high, and given the tremendous interest within Adobe, we are now planning to have Hackfests more frequently. Looking forward to Hackfest Autumn!

Vaibhav Gupta
Security Researcher