Author Archive: Peleus Uhley

Securely Deploying MongoDB 3.0

I recently needed to set up an advanced, sharded MongoDB 3.0 database with all the best practices enabled for a deployment of the CRITs web application. This was an opportunity to get first-hand experience with the security guidance that I recommend to other Adobe teams. This post will cover some of the lessons that I learned along the way. This isn’t a replacement for reading the documentation. Rather, it is a story to bookmark for when one of your teams is ready to deploy a secure MongoDB 3.0 instance and is looking for real-world examples to supplement the documentation.

MongoDB provides a ton of security documentation and tutorials, which are invaluable. It is highly recommended that you read them thoroughly before you begin. The tutorials often contain important details that aren’t captured in the core documentation for a specific feature.

If you are migrating from an older version, you’ll quickly find that MongoDB has been very active in improving the security of its software. The challenge is that some of your previous work may now be deprecated. For instance, the password hashing function has migrated from MONGODB-CR to SCRAM-SHA-1. The configuration file switched in version 2.6 from name-value pairs to YAML. Oddly, when I downloaded the most recent version of MongoDB, it came with the name-value pair version by default. While name-value pairs are still supported, I decided to create the new YAML version from scratch to avoid a migration later. In addition, keyfile authentication between cluster servers has been superseded by X.509 certificates. These improvements are all things you will want to track when migrating from an older version of MongoDB.

In prepping for the deployment, there are a few things you will want to do:

  • Get a virtual notepad. A lot of MongoDB commands are lengthy to type, and you will end up pasting them more than once.
  • After reading the documentation and coming up with a plan for the certificate architecture, create a script for generating certificates. You will end up generating one to two certificates per server.
  • Anytime you deploy a certificate system, you should have a plan for certificate maintenance, such as handling certificate expiration.
  • The system is dependent on a solid base. Make sure you have basic sysadmin tasks done first, such as using NTP to ensure hosts have consistent clocks for timestamps.

If you are starting from scratch, I would recommend getting MongoDB cluster connectivity established, followed by layering on security. At a minimum, establish basic connectivity between the shards. If you try to do security and a fresh install at the same time, you may have a harder time debugging.

Enabling basic SSL between hosts

I have noticed confusion over which versions of MongoDB support SSL, since it has changed over time and there were differences between the standard and enterprise versions. Some package repositories for open-source OSs are hosting older versions of MongoDB. The current MongoDB 3.0 page says, “New in version 3.0: Most MongoDB distributions now include support for SSL.”  Since I wasn’t sure what “most” meant, I downloaded the standard Ubuntu version (not enterprise) from the MongoDB-hosted repository, following MongoDB’s installation instructions. That version did support SSL out of the box.

MongoDB has several levels of SSL settings, including disabled, allowSSL, preferSSL, and requireSSL. These can be useful if you are slowly migrating a system, are learning the command line, or have different needs for different roles. For instance, you may specify requireSSL for your shards and config servers to ensure secure inter-MongoDB communication. For your MongoDB router instance, you may choose a setting of preferSSL to allow legacy web applications to connect without SSL, while still maintaining secure inter-cluster communication.

If you plan to also use X.509 for cluster membership authentication, you should consider whether you want to reuse the SSL certificate for clusterAuth or specify a separate certificate. If you go with separate certificates, you will want to set the serverAuth Extended Key Usage (EKU) attribute on the SSL certificate and create a separate clientAuth certificate for cluster authentication. The resulting net.ssl section of the YAML configuration would look like this:



      net:
         ssl:
            CAFile: "root_CA_public.pem"
            mode: requireSSL
            PEMKeyFile: "mongo-shard1-serverAuth.pem"
            PEMKeyPassword: YourPasswordHereIfNecessary

Enabling authentication between servers

The inter-cluster authentication method changed in version 2.6 from using keyfiles to leveraging X.509 certificates. The keyfile approach is just a shared secret, whereas X.509 verifies approval from a known CA. To ease migration from older implementations, MongoDB lets you start at the keyFile setting, move to hybrid support with sendKeyFile and sendX509, and finally end at the X.509-only authentication setting, x509. If you have not already enabled keyfiles in an existing MongoDB deployment, then you may need to take your shard offline in order to enable authentication. If you are using a separate certificate for X.509 cluster authentication, then you will want to set the clientAuth EKU in that certificate.

The certificates used for inter-cluster authentication must have the attributes of their X.509 subject (O, OU, DC, etc.) set exactly the same, except for the hostname in the CN. The CN, or a Subject Alternative Name, must match the hostname of the server. If you want the flexibility to move shards to new instances without reissuing certificates, you may want a secondary DNS infrastructure that allows you to remap static hostnames to different instances. When a cluster node successfully authenticates to another cluster node, it gets admin privileges on that instance. The following settings will enable cluster authentication:



      net:
         ssl:
            CAFile: "/etc/mongodb/rootCA.pem"
            clusterFile: "mongo-shard1-clientAuth.pem"
            clusterPassword: YourClusterFilePEMPasswordHere
            CRLFile: "YourCRLFileIfNecessary.pem"

      security:
         clusterAuthMode: x509

Client authentication and authorization

MongoDB authorization supports both built-in roles and user-defined roles for those who want to split authorization levels across multiple users. However, authorization is not enabled by default. To enable it, you must specify the following in your config file:


          security:
             authorization: enabled

The authorization model changed significantly between 2.4 and 2.6. If you are upgrading from 2.4, be sure to read the release notes for all the details, because the 2.4 model is no longer supported in MongoDB 3.0. Also, an existing environment may incur downtime, because you have to coordinate switching your application to use its MongoDB credentials with enabling authorization in MongoDB.

For user-level account access, you have a choice between traditional username and password, LDAP proxy, Kerberos, and X.509. For my isolated infrastructure, I had to choose between X.509 and username/password. Which approach is correct depends on how you interact with the server and how you manage secrets. While I had to use a username and password for the CRITs web application, I wanted to try X.509 for the local shard admin accounts. X.509 authentication can only be used with servers that have SSL enabled. While it is not strictly necessary to have local shard admin accounts, the documentation suggests that they will eventually be needed for maintenance. From the admin database, X.509 users can be added to the $external database using the following command:



          db.getSiblingDB("$external").runCommand({
              createUser: "DC=org,DC=example, CN=clusterAdmin,OU=My Group,O=My Company,ST=California,C=US",
              roles: [
                  { role: "clusterAdmin", db: "admin" }
              ]
          })




The createUser field contains the subject from the client certificate for the cluster admin. Once added, the command line for a connection as the clusterAdmin would look like this:

       mongo --ssl --sslCAFile root_CA_public.pem --sslPEMKeyFile ./clusterAdmin.pem mongo_shard1:27018/admin

Although you provided the key on the command line, you still need to run the auth command that corresponds to the clusterAdmin.pem certificate in order to assume that role:



         db.getSiblingDB("$external").auth({
             mechanism: "MONGODB-X509",
             user: "DC=org,DC=example, CN=clusterAdmin,OU=My Group,O=My Company,ST=California,C=US"
         })



The localhost exception allows you to create the first user administrator in the admin database when authorization is enabled. However, once you have created the first admin account, you should remember to disable the exception by specifying:


           setParameter:
              enableLocalhostAuthBypass: false

Once you have the admin accounts created, you can create the application accounts against the application database with more restricted privileges:



         db.getSiblingDB("crits").createUser({
             user: "crits_app_user",
             pwd: "My$ecur3AppPassw0rd",
             roles: [
                 { role: "readWrite", db: "crits" }
             ],
             writeConcern: { w: "majority", wtimeout: 5000 }
         })
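With the application account in place, the web application connects over SSL and authenticates with those restricted credentials. Below is a minimal sketch using the Python pymongo driver (3.x-era API); the router hostname is a placeholder, while the CA file and credentials come from the examples above:

    from pymongo import MongoClient

    # Connect to the mongos router over SSL, validating the server's
    # certificate against the cluster's root CA (hostname is a placeholder).
    client = MongoClient("mongo-router1", 27017,
                         ssl=True,
                         ssl_ca_certs="root_CA_public.pem")

    # Authenticate against the application database with the restricted account.
    db = client["crits"]
    db.authenticate("crits_app_user", "My$ecur3AppPassw0rd")

    print(db.collection_names())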



At this stage, there are still other security options worth reviewing. For instance, there are some SSL settings I didn’t cover because they already default to the secure setting. If you are migrating from an older database, then you will want to check the additional settings, since some behavior may change. Hopefully, this post will help you get started with the secure communication, authentication, and authorization aspects of MongoDB 3.0.


Peleus Uhley
Lead Security Strategist

Top 10 Web Hacking Techniques of 2014

This year, I once again had the privilege to be one of the judges for the “Top 10 Web Hacking Techniques” list that is organized by Matt Johansen and Johnathan Kuskos of the WhiteHat Security team. This is a great honor and a lot of fun, although the task of voting also requires a lot of reflection. A significant amount of work went into finding these issues, and that should be respected in the analysis for the top spot. This post reflects my personal interpretation of the nominees this year.

My first job as a judge is to establish my criteria for judging. For instance:

  • Did the issue involve original or creative research?
  • What was the ultimate severity of the issue?
  • How many people could be affected by the vulnerability?
  • Did the finding change the conversation in the security community?

The last question is what made judging this year’s entries different from previous years. Many of the bugs were creative and could be damaging for a large number of people. However, for several of the top entries, the attention that they received helped change the conversation in the security industry.

A common trend in this year’s top 10 was the need to update third-party libraries. Obviously, Heartbleed (#1) and POODLE (#3) brought attention to keeping OpenSSL up-to-date. However, if you read the details on the Misfortune Cookie attack (#5), there was the following:

AllegroSoft issued a fixed version to address the Misfortune Cookie vulnerability in 2005, which was provided to licensed manufacturers. The patch propagation cycle, however, is incredibly slow (sometimes non-existent) with these types of devices. We can confirm many devices today still ship with the vulnerable version in place. 

Third-party libraries can be difficult to track and maintain in large organizations and large projects. Kymberlee Price and Jake Kouns spent the year giving great presentations on the risks of third-party code and how to deal with it.

Heartbleed and Shellshock were also part of the year of making attacks media-friendly by providing designer logos. Many of us rolled our eyes at how the logos drew additional media attention to the issues. However, it is impossible to ignore how that added attention helped expedite difficult projects such as the deprecation of SSLv3. Looking beyond the logos, these bugs had other attributes that made them accessible in terms of tracking and understanding their severity. For instance, besides a memorable name, Heartbleed included a detailed FAQ which helped to quickly explain the bug’s impact. Typically, a reader would have had to dig through source code changelists, which is difficult, or consult Heartbleed’s CVSS score (5 out of 10), which can be misleading. Once you remove the cynicism from the logo discussion, the question that remains is: what can we learn from these events that will allow our industry to better communicate critical information to a mass audience?

In addition, these vulnerabilities brought attention to the discussion around the “many eyes make all bugs shallow” theory. Shellshock was a vulnerability that went undetected for years in the default shell used by most security engineers. Once security engineers began reviewing the code affected by Shellshock, three other CVEs were identified within the same week. The remote code execution in the Apache Struts ClassLoader (#8) was another example of a vulnerability in a popular open-source project. The Heartbleed vulnerability prompted the creation of the Core Infrastructure Initiative to formally assist with projects like OpenSSL, OpenSSH and the Network Time Protocol. Prior to the CII, OpenSSL only received about $2,000 per year in donations. The CII funding makes it possible to pursue projects such as having the NCC Group’s consultants audit OpenSSL.

In addition to bugs in third-party libraries, there was also some creative original research. For instance, the Rosetta Flash vulnerability (#4) combined the fact that JSONP endpoints allow attackers to control the first few bytes of a response with the fact that the ZLIB compression format allows you to define the characters used for compression. Combining these two issues meant that an attacker could bounce a specially crafted, ZLIB-compressed SWF file off of a JSONP endpoint and have it execute in that site’s domain context. This technique worked on JSONP endpoints for several popular websites. Rather than asking JSONP endpoints to add data validation, Adobe changed Flash Player so that it restricts the types of ZLIB-compressed data accepted in SWFs.

The 6th and 7th issues on the list both dealt with authentication issues that reminded us that authentication systems are a complex network of trust. The research into “Hacking PayPal with One Click” (#6) combined three different bugs to create a CSRF attack against PayPal. While the details around the “Google Two-Factor Authentication Bypass” weren’t completely clear, it also reminded us that many trust systems are chained together. Two-factor authentication systems frequently rely on your phone. If you can social engineer a mobile carrier to redirect the victim’s account, then you can subvert the second factor in two-factor authentication.

The last two entries dealt with more subtle issues than remote code execution. Both show how little things can matter. The Facebook DDoS attack (#9) leveraged the simple support of image tags in the Notes service. If you included enough image tags on enough notes, you could get over 100 Facebook servers generating traffic to the target. Lastly, “Covert Timing Channels based on HTTP Cache Headers” (#10) looked at ways hidden messages can be conveyed via headers that would otherwise be ignored in most traffic analysis.

Overall, this year was interesting in terms of how the bugs changed our industry. For instance, the fact that a large portion of the industry was dependent on OpenSSL was well known. However, without Heartbleed, the funding to have a major consulting firm perform a formal security audit would never have materialized. Research from POODLE demonstrated that a significant number of sites in the Alexa Top 1000 hadn’t adopted TLS, which has been around since 1999. POODLE helped force the industry to accelerate the migration off of SSLv3 and onto TLS. In February, the PCI Standards Council announced, “because of these weaknesses, no version of SSL meets PCI SSC’s definition of ‘strong cryptography.’” When a researcher’s work identifies a major risk, that is clearly important within the scope of that one product or service. When a researcher’s work can help inspire changing the course of the industry, that is truly remarkable.

For those attending RSA Conference, Matt Johansen and Johnathan Kuskos will be presenting the details of the Top 10 Web Hacking Techniques of 2014 on April 24 at 9:00 AM.


Peleus Uhley
Lead Security Strategist

Re-assessing Web Application Vulnerabilities for the Cloud

As I have been working with our cloud teams, I have found myself constantly reconsidering my legacy assumptions from my Web 2.0 days. I discussed a few of the high-level ones previously on this blog. For OWASP AppSec California in January, I decided to explore the impact of the cloud on Web application vulnerabilities. One of the assumptions that I had going into cloud projects was that the cloud was merely a hosting provider layer issue that only changed how the servers were managed. The risks to the web application logic layer would remain pretty much the same. I was wrong.

One of the things that kicked off my research in this area was watching Andres Riancho’s “Pivoting in Amazon Clouds” talk at Black Hat last year. He had found a remote file include vulnerability in an AWS-hosted Web application he was testing. Basically, the attacker convinces the Web application to act as a proxy and fetch the content of remote sites. Typically, this vulnerability could be used for cross-site scripting or defacement, since the attacker could get the contents of a remote site injected into the context of the current Web application. Riancho was able to use that vulnerability to reach the metadata server for the EC2 instance and retrieve AWS configuration information. From there, he was able to use that information, along with a few of the client’s defense-in-depth issues, to escalate into taking over the entire AWS account. Therefore, the possible impact of this class of vulnerability has increased.
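To make the vulnerable pattern concrete, here is a hedged sketch of the kind of “fetch a remote URL” endpoint described above, written in Python with Flask and requests purely for illustration (the route and parameter names are invented, not taken from the original talk):

    from flask import Flask, request
    import requests

    app = Flask(__name__)

    @app.route("/fetch")
    def fetch():
        # Vulnerable pattern: the server fetches whatever URL the client
        # supplies and reflects it back, acting as an open proxy.
        url = request.args["url"]
        return requests.get(url).text

    # When this runs on an EC2 instance, an attacker can point the proxy at the
    # instance metadata service (http://169.254.169.254/) and read instance
    # configuration, which is the pivot described in the talk.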

The cloud also involves migration to a DevOps process. In the past, a network layer vulnerability led to network layer issues, and a Web application layer vulnerability led to Web application issues. Today, since the scopes of these roles overlap, a breakdown in the DevOps process means network layer issues can impact Web applications.

One vulnerability making the rounds recently stems from a breakdown in the DevOps process. The flow of the issue is as follows:

  1. The app/product team requests an S3 bucket for the project.
  2. The app team asks the IT team to register a DNS name under the corporate domain that points to the bucket, because a custom corporate domain will make things clearer for customers.
  3. Time elapses, and the app team decides to migrate to a new bucket.
  4. The app team requests from the IT team a new DNS name pointing to this new bucket.
  5. After the transition, the app team deletes the original bucket.

This all sounds great, except that in this workflow the application team didn’t inform IT, and the original DNS entry was never deleted. An attacker can now register an S3 bucket with the old bucket’s name and host malicious content in it. Since the corporate DNS name still points there, the attacker can convince end users that the malicious content belongs to the company.

This exploit is a defining example of why DevOps needs to exist within an organization. The flaw in this situation is a disconnect between the IT/Ops team that manages the DNS server and the app team that manages the buckets. The result of this disconnect can be a severe Web application vulnerability.

Many cloud migrations also involve switching from SQL databases to NoSQL databases. In addition to following the hardening guidelines for the respective databases, it is important to look at how developers are interacting with these databases.

Along with new NoSQL databases, there are a ton of new methods for applications to interact with them. For instance, the Unity JDBC driver allows you to create traditional SQL statements for use with the MongoDB NoSQL database. Developers also have the option of using REST frontends for their database. It is clear that a security researcher needs to know how an attacker might inject into the statements for their specific NoSQL server. However, it’s also important to look at the way the application sends NoSQL statements to the database, as that might add additional attack surface.

NoSQL databases can also take existing risks and put them in a new context. For instance, in the context of a webpage, a malicious injection into an eval() call results in cross-site scripting (XSS). In the context of MongoDB’s server-side JavaScript support, a malicious injection into eval() could allow server-side JavaScript injection (SSJI). Therefore, database developers who choose not to disable the JavaScript support need to be trained on JavaScript injection risks when using statements like eval() and $where, or when using a DB driver that exposes the Mongo shell. Existing JavaScript training on eval() would need to be modified for the database context, since the MongoDB implementation is different from the browser version.
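As a hedged illustration of that shift in context, here is a small pymongo sketch contrasting an injectable server-side $where clause with an equivalent query that avoids server-side JavaScript entirely (the collection and field names are invented for the example):

    from pymongo import MongoClient

    db = MongoClient()["app"]
    user_supplied = "x' || '1'=='1"   # attacker-controlled input

    # Risky: concatenating input into a $where clause means the injected
    # JavaScript runs inside MongoDB rather than in a browser (SSJI).
    db.accounts.find({"$where": "this.owner == '%s'" % user_supplied})

    # Safer: a plain equality match is evaluated by the query engine with no
    # JavaScript involved, so the input stays data instead of becoming code.
    db.accounts.find({"owner": user_supplied})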

My original assumption that a cloud migration was primarily an infrastructure issue was false. While many of these vulnerabilities were always present and always severe, the migration to cloud platforms and processes means these bugs can manifest in new contexts, with new consequences. Existing recommendations will need to be adapted for the cloud. For instance, many NoSQL databases do not support the concept of prepared statements, so alternative defensive methods will be required. If your team is migrating an application to the cloud, it is important to revisit your threat model approach for the new deployment context.

Peleus Uhley
Lead Security Strategist

Adobe “Hack Days”

Within the ASSET team, we apply different techniques to keep our skills sharp. One technique we use periodically is to hold a “hack day,” which is an all-day event for the entire team. They are similar in concept to developer hack days but they are focused on security. These events are used to build security innovation and teamwork.

As in many large organizations, there are always side projects that everyone would like to work on. But they can be difficult to complete or even research when the work has to be squeezed in-between meetings and emails. The “free time” approach can be challenging depending on the state of your projects and how much they eat into the “free time.”  Therefore, we set aside one day every few months where the team is freed from all distractions and given the chance to focus on something of their choice for an entire day. That focus can generate a wealth of insight that more than compensates for the time investment. We have learned that sometimes a creative approach to security skill building is necessary for organizations that have a wide product portfolio.

There are very few rules for hack day. The general guidelines are:

  • The work has to be technical.
  • Researchers can work individually or as teams.
  • The work does not have to be focused on normally assigned duties. For instance, researchers can target any product or service within the company, regardless of whether they are the researcher assigned to it.
  • The Adobe world should be better at the end of the day. This can be a directly measurable achievement (e.g., new bugs, new tools, etc.). It can also be an indirect improvement, such as learning a new security skill set.
  • Researchers work from the same room(s) so that ideas can be exchanged in real time.
  • Lunch is provided so that people can stay together and share ideas and stories while they take a break from working.

Researchers are allowed to pick their own hack day projects. This is to encourage innovation and creative thinking. If a researcher examines someone else’s product or service, it can often provide a valuable third-party perspective. The outcomes from our hack days have included:

  • Refreshing the researcher’s experience with the existing security tools they are recommending to teams.
  • Trying out a new technique a researcher learned about at a conference but never had the chance to practically apply.
  • Allowing the team time to create a tool they have been meaning to build.
  • Allowing the team to dig into a product or service at a deeper level than just specs and obtain a broader view than what is gained by spot reviews of code. This helps the researcher give more informed advice going forward.
  • Providing an opportunity for researchers to challenge their assumptions about the product or service through empirical analysis.
  • And of course, team building.

A good example of the benefits of hack days comes from a pair of researchers who decided to perform a penetration test on an existing service. This service had already gone through multiple third-party consultant reviews and typically received a good health report. Therefore, the assumption was that it would be a difficult target because all the low-hanging fruit had already been eliminated. Nonetheless, the team was able to find several significant security vulnerabilities just from a single day’s effort.

While the individual bugs were interesting, what was more interesting was trying to identify why their assumption that the yield would be low was wrong. This led to a re-evaluation of the current process for that service. Should we rotate the consultancy? Were the consultants focused on the wrong areas? Why did the existing internal process fail to catch the bugs? How do we fix this going forward? This kind of insight, and the questions it prompted, more than justified a day of effort, and it was a rewarding find for the researchers involved.

With a mature application, experienced penetration testers often average less than one bug a day. Therefore, the hack day team may finish the day without finding any. But finding bugs is not the ultimate goal of a hack day. Rather, the goal is to allow researchers to gain a deeper understanding of the tools they recommend, the applications they work with, and the skills they want to grow. We have learned that a creative approach to security skill building is necessary for organizations, especially ones that have a wide product portfolio.

Given the outcomes we have achieved, a one-day investment is a bargain. While the team has the freedom to work on these things at any time, setting aside official days to focus solely on these projects helps accelerate innovation and research—and that’s of immense value to any organization. Hack days help our security team stay up to speed with Adobe’s complex technology stacks that vary across the company.  So if your organization feels trapped in the daily grind of tracking issues, consider a periodic hack day of your own to help your team break free of routine and innovate.

Peleus Uhley
Lead Security Strategist

Never Stop Coding

Several members of Adobe’s security team have taken to the media to offer career advice to aspiring security professionals (you can read more about that here, here, and here). For those interested in security researcher positions, my advice is to never stop coding. This is true whether you are working in an entry-level position or are already a senior researcher.

Within the security industry, it has often been said, “It is easier to teach a developer about security than it is to teach a security researcher about development.” This thought can be applied to hiring decisions. Those trained solely in security can be less effective in a development organization for several reasons.

Often, pure security researchers have seen only the failures in the industry. This leads them to assume vulnerable code is always the product of apathetic or unskilled developers. Since they have never attempted large-scale development, they don’t have a robust understanding of the complex challenges in secure code development. A researcher can’t be effective in a development organization if he or she doesn’t have an appreciation of the challenges the person on the other side of the table faces.

The second reason is that people with development backgrounds can give better advice. For instance, when NoSQL databases became popular, people quickly mapped the concept of SQL injection to NoSQL injection. At a high level, they are both databases of information and both accept queries for their information. So both can have injections. Therefore, people were quick to predict that NoSQL injection would quickly become as common as SQL injection. At a high level, that is accurate.

SQL injection is popular because SQL is a “structured query language,” which means all SQL databases follow the same basic structured format. If you dig into NoSQL databases, you quickly realize that their query formats can vary widely, from SQL-esque queries (Cassandra), to JSON-based queries (MongoDB, DynamoDB), to assembly-esque queries (Redis). This means that injection attacks have to be more customized to the target. However, if you are able to have a coding-level discussion with the developers, then you may discover that they are using a database driver which allows them to use traditional SQL queries against a NoSQL database. That could mean that traditional SQL injections are also possible against your NoSQL infrastructure. Security recommendations for a NoSQL environment also have to be more targeted. For instance, prepared statements are available in Cassandra but not in MongoDB. This is all knowledge that you can learn by digging deep into a subject and experimenting with technologies at a developer level.
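For example, here is a hedged Python sketch of what that difference looks like in practice; the hostnames, keyspace, and collection names are invented for illustration:

    from cassandra.cluster import Cluster
    from pymongo import MongoClient

    user_id = "12345"  # untrusted input from a request

    # Cassandra's CQL supports prepared statements, so the classic SQL-style
    # defense carries over almost directly.
    session = Cluster(["cassandra-host"]).connect("app")
    prepared = session.prepare("SELECT * FROM users WHERE user_id = ?")
    rows = session.execute(prepared, [user_id])

    # MongoDB queries are structured documents rather than query strings, so
    # there is no prepared statement; the defense is to keep input as data and
    # avoid operators (such as $where) that evaluate it as JavaScript.
    doc = MongoClient("mongodb-host")["app"]["users"].find_one({"user_id": user_id})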

Lastly, you learn to appreciate how “simple” changes can be more complex than you first imagine. I recently tried to commit some changes to the open-source project, CRITs. While my first commit was functional, I’ve already refactored the code twice in the process of getting it production-ready. The team was absolutely correct in rejecting the changes because the design could be improved. The current version is measurably better than my first rough-sketch proposal. While I don’t like making mistakes in public, these sorts of humbling experiences remind me of the challenges faced by the developers I work with. There can be a fairly large gap between a working design and a good design. This means your “simple recommendation” actually may be quite complex. In the process of trying to commit to the project, I learned a lot more about tools such as MongoDB and Django than I ever would have learned skimming security best practice documentation. That will make me more effective within Adobe when talking to product teams using these tools, since I will better understand their language and concerns. In addition, I am making a contribution to the security community that others may benefit from.

At this point in my career, I am in a senior position, a long way from when I first started over 15 years ago as a developer. However, I still try to find time for coding projects to keep my skills sharp and my knowledge up-to-date. If you look at the people leading the industry at companies such as Google, Etsy, iSec Partners, etc., many are respected because they are also keeping their hands on the keyboards and are speaking from direct knowledge. They not only provide research but also tools to empower others. Whether you are a recent grad or a senior researcher, never lose sight of the code, where it all starts.

Peleus Uhley
Lead Security Strategist

More Effective Threat Modeling

There are a lot of theories about threat models. Their utility often depends on the context and the job to which they are applied. I was asked to speak about threat models at the recent BSIMM Community Conference, which made me formally re-evaluate my thoughts on the matter. Over the years I’ve used threat models in many ways at both the conceptual level and at the application level. In preparing for the conference I first tried to deconstruct the purpose of threat models. Then I looked at the ways I’ve implemented their intent.

Taking a step back to examine their value with respect to any risk situation, you examine things such as who, what, how, when, and why:

Who is the entity conducting the attack, including nation states, organized crime, and activists.

What is the ultimate target of the attack, such as credit card data.

How is the method by which attackers will get to the data, such as SQL injection.

Why captures the reason the target is important to the attacker. Does the data have monetary value? Or are you just a pool of resources an attacker can leverage in pursuit of other goals?

A threat can be described as who will target what, using how in order to achieve why.

We will come back to when in a moment. Threat models typically put most of the focus on what and how. The implicit assumption is that it doesn’t really matter who or why—your focus is on stopping the attack. Focusing on what and how allows you to identify potential bugs that will crop up in the design, regardless of who might be conducting the attack and their motivation.

The challenge with focusing solely on what and how is that they change over time. How is dependent on the specifics of the implementation, which will change as it grows. On the other hand, who and why tend to be fairly constant. Sometimes, focusing on who and why can lead to new ideas for overall mitigations that can protect you better than the point fixes identified by how.

For instance, we knew that advanced persistent threat (APT) actors (who) were fuzzing (how) Flash Player (what). To look at the problem from a different angle, we decided to stop and ask why. It wasn’t solely because of Flash Player’s ubiquity. At the time, most Flash Player attacks were being delivered via Office documents. Attackers were focusing on Flash Player because they could embed it in an Office document to conduct targeted spearphishing attacks. Targeted spearphishing is a valuable attack method because you can directly access a specific target with minimal exposure. By adding a Flash Player warning dialog in Office to alert users of a potential spearphishing attempt, we addressed why Flash Player was of value to them. After that simple mitigation was added, the number of zero-day attacks dropped noticeably.

I also mentioned that when could be useful. Most people think of threat models as a tool for the design phase. However, threat models can also be used in developing incident response plans. You can take any given risk and consider, “When this mitigation fails or is bypassed, we will respond by…”

Therefore, having a threat model for an application can be extremely useful in controlling both high-level (who/why) and low-level threats (how/what). That said, the reality is that many companies have moved away from traditional threat models. Keeping a threat model up-to-date can be a lot of effort in a rapid development environment. Adam Shostack covered many of the common issues with this in his blog post, The Trouble with Threat Modeling. The question each team faces is how to achieve the value of threat modeling using a more scalable method.

Unfortunately, there is not a one-size-fits-all solution to this problem. For the teams I have worked with, my approach has been to try and keep the spirit of threat modeling but be flexible on the implementation. Threat models can also have different focuses, as Shostack describes in his blog post, Reinvigorate your Threat Modeling Process. To cover all the variants would be too involved for a single post, but here are three general suggestions:

  1. There should be a general high-level threat model for the overall application. This high-level model ensures everyone is headed in the same direction, and it can be updated as needed for major changes to the application. A high-level threat model can be good for sharing with customers, for helping new hires to understand the security design of the application, and as a reference for the security team.
  2. Threat models don’t have to be documented in the traditional threat model format. The traditional format is very clear and organized, but it can also be complex and difficult to document in different tools. The goal of a threat model is to document risks and plans to address them. For individual features, this can be in a simple paragraph form that everyone can understand. Even writing, “this feature has no security implications,” is informative.
  3. Put the information where developers are most likely to find it. For instance, adding a security section to the spec, using the simplified format suggested above, eliminates the need to cross-reference a separate document, helping to ensure that everyone involved will read the security information. The information could also be captured in the user story for the feature. If your code is the documentation, see if your Javadoc tool supports custom tags. If so, you could encourage your developers to use an @security tag when documenting code. If you follow Behavior Driven Development, your threat model can be captured as Cucumber test assertions (a minimal sketch of the general idea follows this list). Getting this specific means the developer won’t always have the complete picture of how the control fits into the overall design. However, it is important for them to know that the documentation note is there for a specific security reason. If the developer has questions, the security champion can always help them cross-reference it to the overall threat model.
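As a minimal sketch of capturing a threat model item as an executable assertion, here is a plain Python test (shown without any specific BDD tool; the host, endpoint, and threat model ID are invented for illustration):

    import requests

    BASE = "https://staging.example.com"

    def test_admin_api_requires_authentication():
        # Threat model TM-12: unauthenticated access to the admin API.
        # Mitigation: /admin endpoints must reject requests with no session.
        response = requests.get(BASE + "/admin/users", allow_redirects=False)
        assert response.status_code in (401, 403)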

Overall I think the concept of threat modeling still serves a valid purpose. Examining how and what can ensure your implementation is sound, and you can also identify higher level mitigations by examining who and why. The traditional approach to threat modeling may not be the right fit for modern teams, though teams can achieve the goal if they are creative with their implementation of the concept. Along with our development processes, threat modeling must also evolve.

Peleus Uhley
Lead Security Strategist

The Simplest Form of Secure Coding

Within the security industry, we often run into situations where providing immediate security guidance isn’t straightforward. For instance, a team may be using a new, cutting edge language that doesn’t have many existing security tools or guidelines available. If you are a small startup, then you may not have the budget for the enterprise security solutions. In large organizations, the process of migrating a newly acquired team into your existing tools and trainings may take several weeks. What advice can we give to those teams to get them down the road to security today?

In these situations, I remind them to go back to their original developer training. Many developers are familiar with the term “technical debt,” which refers to the “eventual consequences of poor system design, software architecture or software development within a codebase.” Technical security debt is one component of an application’s overall technical debt. The higher the technical debt is for an application, the greater the chance for security issues. Moreover, it’s much easier to integrate security tools and techniques into code that has been developed with solid processes.

To a certain extent, the industry has known this for a while. Developers like prepared statements because pre-compiled code runs faster, and security people like them because pre-compiled code is less injectable. Developers want exception handling because it makes the web application more stable and lets them cleanly direct users to a support page, which is a better user experience. Security people want exception handling so that there is a plan for malicious input and because showing a stack trace is an information leak. The sketch below makes the first of these overlaps concrete; after that, let’s take the idea a step further.
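Here is a hedged Python sketch of that overlap: the same parameterized query that the developer likes for reuse is the one the security reviewer likes because the input is bound as data (the table and input values are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    user_supplied = "alice'; DROP TABLE users; --"

    # Parameterized statement: the input is bound as data, never spliced
    # into the SQL text, so it cannot change the shape of the query.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_supplied,))

    # The version both camps want to avoid would concatenate the input:
    #   conn.execute("SELECT * FROM users WHERE name = '%s'" % user_supplied)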

If you search the web for “Top 10” lists of developer best practices and/or common coding mistakes, you will see there’s a clear overlap between traditional coding principles and security principles across all languages. For example, the Modern IE Cross Browser Development Standards & Interoperability Best Practices is written for developers and justifies its points using concepts that are important to clean development. However, I can go through the same list and justify many of its points using security concepts. Here are just a few of their recommendations and how they relate to security:

  • Use a build process with tools to check for errors and minify files. On the security side, establishing this will enable you to more easily integrate security tools into the build process.
  • Always use a standards-based doctype to avoid Quirks Mode. Quirks Mode makes it easier to inject XSS vulnerabilities into your site.
  • Avoid inline JavaScript tags in HTML markup. In security, avoiding inline JavaScript makes it easier to support a Content Security Policy.

Switching to Ruby on Rails, here’s a list of the 10 Most Common Rails Mistakes and how to avoid them so developers can create better applications. When you look at those errors from a security perspective, you will also see overlaps:

  • Common Mistake #1-3: Putting too much logic in the controller/view/model. These three points all deal with keeping your code cleanly designed for better maintainability. Security is a common reason for performing code maintenance. Oftentimes, your response to active attacks against your system will be slowed because the code cannot be easily changed or is too complex to identify a single validation point.

    This section also reminds us that the controller is the recommended place for first-level session and request parameter management. This allows for high-level sanity checking before the data makes it into your model.

  •   Common Mistake #5: Using too many gems. Controlling your third-party libraries also helps to limit your attack surface and reduce the maintenance costs of keeping them up-to-date for security vulnerabilities.
  •   Common Mistake #7: Lack of automated tests. As mentioned in the HTML list above, using an automated test framework enables you to also include security tests. The post refers to using techniques such as BDD, for which there are also Ruby-based BDD security frameworks like Gauntlt.
  •  Common Mistake #10: Checking sensitive information into source code repositories. This is clearly a security rule. In this example, they are referring to a specific issue with Rails secret tokens. However, this is a common mistake for development in general. Separating credentials from code is simply good coding hygiene – it can prevent an unintended leak of the credential and permit a credential rotation without having to rebuild the application.

Even if you go back to a 2007 article in the IBM WebSphere Developer Technical Journal on The Top Java EE Best Practices, which is described as a “best-of-the-best list of what we feel are the most important and significant best practices for Java EE,” then you will see the same themes being echoed within the first five principles of the list:

  •   Always use MVC. This was also mentioned in the Rails Top 10. Centralized development allows for centralized validation.
  •  Don’t reinvent the wheel. This is true for security, as well. For instance, don’t invent your own cryptography library wheel!
  •  Apply automated unit tests and test harnesses at every layer. Again, this will make it easier to include security tests.
  •  Develop to the specifications, not the application server. This point highlights the importance of not locking your code into a specific version of the server. One of the most frequent issues large enterprises struggle with is migrating away from older, vulnerable platforms because the code is too dependent on the old environment. This concept is also related to #16 in their list, “Plan for version updates”.
  • Plan for using Java EE security from Day One. The idea here is similar to “Don’t reinvent the wheel.” Most development platforms provide security frameworks that are already tested and ready to use.

As you can see, regardless of your coding language, security best practices tend to overlap with your developer best practices. Following them will either directly make your code more secure or make it easier to integrate security controls later. In meetings with management, developers and security people can be aligned in requesting adequate time to code properly.

It’s true that security-specific training will always be necessary for topics such as authentication, authorization, cryptography, etc. And security training certainly shows you how to think about your code defensively, which will help with application logic errors. However, a lot of the low-hanging bugs and security issues can be caught by following good, old-fashioned coding best practices. The more you control your overall technical debt, the more you will control your security debt.

Peleus Uhley
Lead Security Strategist

An Overview of Behavior Driven Development Tools for CI Security Testing

While researching continuous integration (CI) testing models for Adobe, I have been experimenting with different open-source tools like Mittn, BDD-Security, and Gauntlt. Each of these tools centers around a process called Behavior Driven Development (BDD). BDD enables you to define your security requirements as “stories,” which are compatible with Scrum development and continuous integration testing. Whereas previous approaches required development teams to go outside their normal process to use security tools, these frameworks aim to integrate security tools within the existing development process.

None of these frameworks are designed to replace your existing security testing tools. Instead, they’re designed to be a wrapper around those security tools so that you can clearly define unit tests as scenarios within a story. The easiest way to understand the story/scenario concept is to look at a few examples from BDD-Security. This first scenario is part of an authentication story. It verifies that account lockouts are enforced by the demo web application:

Scenario: Lock the user account out after 4 incorrect authentication attempts
Meta: @id auth_lockout
Given the default username from: users.table
And an incorrect password
And the user logs in from a fresh login page 4 times
When the default password is used from: users.table
And the user logs in from a fresh login page
Then the user is not logged in

The BDD frameworks take this human-readable statement about your security policy and translate it into a technical unit test for your web application penetration testing tool. With this approach, you’re able to phrase your security requirements for the application as a true/false statement. If Jenkins sees a false result from this unit test, it catches the bug immediately and can flag the build. In addition, this human-readable approach to defining unit tests allows the scenarios to double as documentation. An auditor can quickly read through the scenarios and map them to a threat model or policy requirement.

In order to interact with your web site and perform the login, the framework needs a corresponding class written with a web browser automation framework. The BDD example above used a custom class that leverages the Selenium 2 framework to navigate to the login page, find the login form elements, fill in their values, and have the browser perform the submit action. Selenium is a common tool for web site testers, so your testing team may already be familiar with it or similar frameworks.
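As an illustration of what such a class does under the hood, here is a hedged Python Selenium sketch of the same steps (BDD-Security’s real implementation is a Java class, and the element names below are placeholders for a demo application’s login form):

    from selenium import webdriver

    driver = webdriver.Firefox()

    def login(base_url, username, password):
        # Navigate to a fresh login page, fill in the form, and submit it,
        # mirroring the steps described by the BDD scenario.
        driver.get(base_url + "/login")
        driver.find_element_by_name("username").send_keys(username)
        driver.find_element_by_name("password").send_keys(password)
        driver.find_element_by_name("submit").click()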

Writing custom classes that understand your web site is good for creating specific tests around your application logic. However, you can also perform traditional spidering and scanning tests as in this second example from BDD-Security:

Scenario: The application should not contain Cross Site Scripting vulnerabilities
Meta: @id scan_xss
#navigate_app will spider the website
GivenStories: navigate_app.story
Given a scanner with all policies disabled
And the Cross-Site-Scripting policy is enabled
And the attack strength is set to High
And the alert threshold is set to Medium
When the scanner is run
And false positives described in: tables/false_positives.table are removed
Then no Medium or higher risk vulnerabilities should be present

For the BDD-Security demo, the scanner used is the OWASP ZAP proxy. However, BDD frameworks are not limited to tests that run through a web proxy. For instance, this example from BDD-Security shows how to run Nessus and ensure that the scan doesn’t return anything that is severity 2 (medium) or higher:

Scenario: The host systems should not expose known security vulnerabilities

Given a nessus server at https://localhost:8834
And the nessus username continuum and the password continuum
And the scanning policy named test
And the target hosts
|hostname |
|localhost  |
When the scanner is run with scan name bddscan
And the list of issues is stored
And the following false positives are removed
|PluginID   |Hostname   |  Reason                                                                      |
|43111      |    |  Example of how to add a false positive to this story  |
Then no severity: 2 or higher issues should be present

There are a lot of good blogs and presentations (1,2,3) that further explain the benefits of BDD approaches to security, so I won’t go into any further detail. Instead, I will focus on three current tools and highlight key differences that are important to consider when evaluating them.

Which BDD Tool is Right for You?

To start, here is a quick summary of the tools at the time of this writing:

                                  | Mittn               | Gauntlt                            | BDD-Security
  Primary Language                | Python              | Ruby                               | Java
  Approximate Age                 | 3 months            | 2 years                            | 2 years
  Commits within last 3 months    | yes                 | yes                                | yes
  BDD Framework                   | Behave              | Cucumber                           | jbehave
  Default Web App Pen Test Tools  | Burp Suite, radamsa | Garmr, arachni, dirb, sqlmap, curl | Zap, Burp Suite
  Default SSL analysis            | sslyze              | heartbleed, sslyze                 | TestSSL
  Default Network Layer Tools     | N/A                 | nmap                               | nessus
  Windows or Unix                 | Unix                | Unix**                             | Both

** Gauntlt’s “When tool is installed” statement is dependent on the Unix “which” command. If you exclude that statement from your scenarios, then many tests will work on Windows.

If you plan to wrap more than the officially supported list of tools, or you have complex application logic, then you may need custom sentences, known as “step definitions.” Modifying step definitions is not difficult. However, once you start modifying code, you have to consider how to merge your changes with future updates to the framework.

Each framework has a different approach to its step definitions. For instance, BDD-Security tends to encourage formal step definition sentences in all of its test cases, which would require code changes for custom steps. With Gauntlt, you can store additional step definition files in the attack_adapters directory. Gauntlt also provides flexibility through a few generic step definitions that allow you to check the output of arbitrary raw command lines, as seen in its hello world example below:

Feature: hello world with gauntlt using the generic command line attack

  Scenario: Read the contents of the passwd file
    When I launch a "generic" attack with:
      """
      cat /etc/passwd
      """
    Then the output should contain:
      """
      root
      """

Similarly, you should also consider how each framework handles false positives from the tools. For instance, Mittn allows you to address the situation by tracking them in a database. BDD-Security allows you to address false positives with statements within the scenario, as seen in the Nessus example above, or in a separate table. Gauntlt’s approach is to leverage “should not contain” statements within the scenario.

Since these tools are designed to be integrated into your continuous integration testing frameworks, you will want to evaluate how compatible they will be and how tightly you will need them integrated. For instance, quoting Continuum Security’s BDD introduction:

BDD-Security jobs can be run as a shell script or ant job. With the xUnit and JBehave plugins the results will be stored and available in unit test format. The HTML reports can also be published through Jenkins.

Mittn is also based on Behave and can produce JUnit XML test result documents. Gauntlt’s default output is handled by Cucumber. By default, Gauntlt supports the pretty StdOut format and HTML output, but you can modify the code to get JUnit output. There is an open improvement request to allow JUnit output through the config file. Gauntlt has documentation for Travis CI integration as well.

Overall, the tools were not difficult to deploy. Gauntlt’s VirtualBox demo environment in its starter kit can be converted to be deployed via Chef in the cloud with a little work. When choosing a framework, you should also consider the platform support of the security tools you intend to use and the platform of your integrated testing environments. For instance, using a GUI-based tool on a Linux server will require an X11 desktop to be installed.

All of these tools have promise depending on your preferences and needs. BDD-Security would be a good tool for web development teams who are familiar with tools like Selenium and want tight integration with their processes. Gauntlt’s ability to support “generic” attacks makes it a good tool for teams that want to use a wide array of tools in their testing. Mittn is the youngest entry and doesn’t yet have features like built-in spidering support. However, Python developers can easily find libraries for spidering sites, and Mittn’s external database approach to tracking issues may be useful for teams who have other systems that need to be notified of new results.

Before adopting one of these tools, an organization will likely do a buy-vs.-build analysis with commercial continuous monitoring offerings. For those who will be presenting the build argument, these tools provide enough to make a solid case in that discussion.

Where these frameworks add value is by allowing you to take your existing security tools (Burp, Nessus, etc.) and make them a part of your daily build process. By integrating with the continuous build system, you can immediately identify potential issues and ensure a minimum baseline of security. The scenario-based approach allows you to map requirements in your security policy to clearly defined unit tests. This evolution of open-source security frameworks that are designed to directly integrate with the DevOps process is an exciting step forward in the maturity of security testing.

Peleus Uhley
Lead Security Strategist

Retiring the “Back End” Concept

For people who have been in the security industry for some time, we have grown very accustomed to the phrases “front end” and “back end.” These terms, in part, came from the basic two-DMZ network architecture diagram that we used to see frequently when dealing with traditional network hosting.


The phrase “front end” referred to anything located in DMZ 1, and “back end” referred to anything located in DMZ 2. This was convenient because the application layer discussion of “front” and “back” often matched nicely with the network diagram of “front” and “back.”  Your web servers were the first layer to receive traffic in DMZ 1, and the databases, which sat behind the web servers, were located in DMZ 2. Over time, this eventually led to the implicit assumption that a “back end” component was “protected by layers of firewalls” and “difficult for a hacker to reach.”

How The Definition Is Changing

Today, the network diagram and the application layer diagram for cloud architectures do not always match up so nicely. At the network layer, the neat two-DMZ picture frequently falls apart.


In the cloud, the back end service may be an exposed API waiting for posts from the web server over potentially untrusted networks. In this situation, the attacker can directly reach the database over the network without having to pass through the web server layer.

Many traditional “back end” resources are now offered as stand-alone services. For instance, an organization may leverage a third-party database-as-a-service (DBaaS) solution that is separate from its cloud provider. In some instances, an organization may decide to make its S3 buckets public so that they can be directly accessed from the Internet.

Even when a company leverages integrated solutions offered by a cloud provider, shared resources frequently exist outside the defined, protected network. For instance, “back end” resources such as S3, SQS and DynamoDB will exist outside your trusted VPC. Amazon does a great job of keeping its AWS availability zones free from most threats. However, you may want to consider a defense-in-depth strategy where SSL is leveraged to further secure these connections to shared resources.
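As a minimal sketch of that defense-in-depth choice, assuming the AWS boto3 SDK, here is how a worker might explicitly pin SSL and certificate verification when talking to a shared resource outside the VPC (the queue URL is a placeholder):

    import boto3

    # Explicitly require SSL and certificate validation for a shared resource
    # outside the VPC. These are the SDK defaults, but pinning them makes the
    # defense-in-depth decision visible and harder to disable casually.
    sqs = boto3.client("sqs", region_name="us-east-1", use_ssl=True, verify=True)

    sqs.send_message(
        QueueUrl="https://queue.amazonaws.com/123456789012/example-queue",
        MessageBody="job-payload",
    )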

With the cloud, we can no longer assume that the application layer diagram and the network layer diagram are roughly equivalent, since stark differences can lead to distinctly different trust boundaries and risk levels. Security reviews of application services are now much more of a mix of network layer questions and application layer questions. When discussing a “back end” application component with a developer, here are a few sample questions to measure its exposure:

  • Does the component live within your private network segment, as a shared resource from your cloud provider, or completely externally?
  • If the component is accessible over the Internet, are there Security Groups or other controls, such as authentication, that limit who can connect?
  • Are there transport security controls, such as SSL or VPN, for data that leaves the VPC or transits the Internet?
  • Is the data mirrored across the Internet to another component in a different AWS region? If so, what is done to protect the data as it crosses regions?
  • Does your threat model take into account that the connection crosses a trust boundary?
  • Do you have a plan to test this exposed “back end” API as though it were a front end service?

Obviously, this isn’t a comprehensive list, since several of these questions will lead to follow-up questions. The list is just designed to get the discussion headed in the right direction. With proper controls, a cloud service may emulate a “back end,” but you will need to ask the right questions to ensure that there isn’t an implicit security-by-obscurity assumption.

The cloud has driven the creation of DevOps, which is the combination of software engineering and IT operations. Similarly, the cloud is morphing application security reviews to include more analysis of network layer controls. For those of us who date back to the DMZ days, we have to readjust our assumptions to reflect the fact that many of today’s “back end” resources are now connected across untrusted networks.

Peleus Uhley
Lead Security Strategist




The Cloud as the Operating System

The current trend is to push more and more of our traditional desktop tasks to the cloud. We use the cloud for file storage, image processing and a number of other activities.  However, that transition is more complex than just copying the data from one location to another.

Desktop operating systems have evolved over decades to provide a complex series of controls and security protections for that data. These controls were developed in direct response to increasing usage and security requirements. When we move those tasks to the cloud, the business requirements that led to the evolution of those desktop controls remain in place. We must find ways to provide those same controls and protections using the cloud infrastructure. When looking at creating these security solutions in the cloud, I often refer back to the desktop OS architectures to learn from their designs.


Example: File storage

File storage on the desktop is actually quite complex under the hood. You have your traditional user, group and (world/other/everyone) classifications. Each of these classifications can be granted the standard read, write and execute permissions. This all seems fairly straightforward.

However, if you dig a little deeper, permissions often have to be granted to more than just one single group. End users often want to provide access to multiple groups and several additional individuals. The operating system can also layer on its own additional permissions. For instance, SELinux can add permissions regarding file usage and context that go beyond just simple user level permissions. Windows can provide fine-grained permissions on whether you can write data or just append data.

There are several different types of usage scenarios that led to the creation of these controls. For instance, some controls were created to allow end users to share information with entities they do not trust. Other controls were driven by the need for services to perform tasks with data on the user’s behalf.


Learning from the Desktop

While the technical details of how we store and retrieve data change when we migrate data to the cloud, the fundamental principles and complexities of protecting that data still persist. When planning your file sharing service, you can use the desktop as a reference for how complex your permissions may need to be as the service scales. Will end users have complex file sharing scenarios with multiple groups and individuals? Will you have services that run in the background and perform maintenance on the user data? What permissions will those services need to process the data? These are hard problems to solve, and you don’t want to reinvent these critical wheels from scratch.

Admittedly there is not always a direct 1:1 mapping between the desktop and cloud. For instance, the desktop OS gets to assume that the hard drive is physically connected to the CPU that will do the processing. In the cloud, your workers or services may be connecting to your cloud storage service across untrusted networks. This detail can add additional authentication and transport level security controls on top of the traditional desktop controls.

Overall, the question that we face as engineers is how can we best take the lessons learned from the desktop and progress them forward to work in a cloud infrastructure. File storage and access is just one aspect of the desktop that is being migrated to the cloud. Going forward, I plan to dig deeper into this idea and similar topics that I learn from working with Adobe’s cloud teams.

Peleus Uhley
Lead Security Strategist