Posts tagged "OWASP"

Leveraging Security Headers for Better Web App Security

Modern browsers support quite a few HTTP headers that provide an additional layer in any defense-in-depth strategy. If present in an HTTP response, these headers enable compatible browsers to enforce certain security properties. In early 2016, we undertook an Adobe-wide project to implement security headers.

Implementing a security control across a medium- to large-scale organization, consisting of many teams and projects, presents its own unique challenges. For example: should we pursue select headers or all of them? Which headers do we select? How do we encourage adoption, and how do we verify compliance? Here are a few interesting observations from our quest to implement security headers:

Browser vs. Non-browser clients

Compatible browsers enforce the security policies contained in security headers, while incompatible browsers and non-browser clients simply ignore them. However, additional considerations may be unique to your setup. For example, Adobe has a range of clients: full browser-based, headless browser-based (e.g., Chromium Embedded Framework), and desktop applications. Implementing the security property required by the HTTP Strict Transport Security (HSTS) header (i.e., all traffic is sent over TLS) for all such clients requires a combination of an enabling HSTS header and 302 redirects from HTTP to HTTPS. This is not as secure: incompatible headless browser-based clients and desktop applications ignore an HSTS header sent by the server, and if the landing page URL is provided over HTTP, those clients will continue to send requests over HTTP unless the 302 redirect approach is used. Updating all such clients to use HTTPS landing URLs is a thorny problem, as it would require updating software installed on customers’ machines. The key is to understand the unique aspects of your applications and customer base to determine how each header can be implemented.
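The combined approach can be sketched as follows. This is an illustrative model, not Adobe's implementation, and the HSTS value shown is a common default rather than a prescribed one.

```python
# Sketch: pairing an HTTP-to-HTTPS redirect with an HSTS header so
# that both compatible browsers and clients that ignore HSTS end up
# on TLS. Names and values are illustrative.

HSTS_VALUE = "max-age=31536000; includeSubDomains"

def respond(scheme: str, host: str, path: str):
    """Return (status, headers) for an incoming request.

    - Plain-HTTP requests get a 302 redirect to HTTPS, which covers
      clients (headless browsers, desktop apps) that ignore HSTS.
    - HTTPS responses carry the HSTS header, so compatible browsers
      pin their own future requests to TLS.
    """
    if scheme == "http":
        return 302, {"Location": f"https://{host}{path}"}
    return 200, {"Strict-Transport-Security": HSTS_VALUE}
```

The redirect handles the first request from any client; the header removes the insecure first hop for browsers that support HSTS.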

Effort ROI

Some security headers require less effort to add than others. In our assessment, we found an increasing order of difficulty and required implementation investment: X-XSS-Protection, X-Content-Type-Options, X-Frame-Options, HSTS, and Content Security Policy (CSP). For the first two, we have not found any reason not to use them, and so far we have not seen any team need to do extra work when turning them on. Enabling an X-Frame-Options header may require additional consideration: for example, are there legitimate reasons for allowing framing at all, or from the same origin? The Content Security Policy (CSP) header is by far the most comprehensive, subsuming all other security headers. Depending on your goals, it may require the most effort to employ; for example, if cross-site scripting prevention is the goal, it might require refactoring the code to separate JavaScript from HTML.

We decided to adopt a phased implementation. For Phase 1: X-XSS-Protection, X-Content-Type-Options, X-Frame-Options. For Phase 2: HSTS, CSP (we haven’t yet considered the Public Key Pinning Extension for HTTP header). Dividing massive efforts into phases offers two main benefits:

  1. the ability to make initial phases lightweight to build momentum with product teams, leaving later phases to focus on more complex tasks
  2. the opportunity to iron out quirks in processes and automation
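A Phase 1 baseline is small enough to express directly. The values below are common defaults, not Adobe's exact configuration; the point is that these three headers rarely require application changes, which is what made them a good first phase.

```python
# Illustrative Phase 1 header baseline (values are common defaults,
# not a prescribed Adobe configuration).
PHASE1_HEADERS = {
    "X-XSS-Protection": "1; mode=block",    # enable the browser XSS filter
    "X-Content-Type-Options": "nosniff",    # disable MIME-type sniffing
    "X-Frame-Options": "SAMEORIGIN",        # allow framing only by same origin
}

def merge_headers(response_headers: dict) -> dict:
    """Add any missing Phase 1 headers without overriding values the
    application has already chosen (e.g., a stricter X-Frame-Options)."""
    merged = dict(PHASE1_HEADERS)
    merged.update(response_headers)
    return merged
```
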

Application stack vs. Operations stack

Who should we ask to enable these headers: the developers writing application code, or the operations engineers managing servers? There are pros and cons to each option (e.g., specifying many security headers in server configurations such as nginx, IIS, and Apache is programming-language agnostic and doesn’t require changes to the code). Introducing such headers could even be made part of hardened deployment templates (e.g., Chef recipes). However, some headers may require a deeper understanding of the application landscape, such as which sites, if any, should be allowed to frame it. We provided both options to teams, and the table below lists some useful links that helped inform our decisions.

Server-based (nginx, IIS, Apache): https://blog.g3rt.nl/nginx-add_header-pitfall.html, https://scotthelme.co.uk/hardening-your-http-response-headers

Programming language-based defenses:
  Ruby on Rails: https://github.com/twitter/secureheaders
  JavaScript: https://github.com/helmetjs/helmet, https://github.com/seanmonstar/hood, https://github.com/nlf/blankie, https://github.com/rwjblue/ember-cli-content-security-policy
  Java: https://spring.io/blog/2013/08/23/spring-security-3-2-0-rc1-highlights-security-headers
  ASP.NET: https://github.com/NWebsec/NWebsec/wiki
  Python: https://github.com/mozilla/django-csp, https://github.com/jsocol/commonware, https://github.com/sdelements/django-security
  Go: https://github.com/kr/secureheader
  Elixir: https://github.com/anotherhale/secure_headers
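For the server-based route, the first link above describes a common nginx pitfall worth illustrating: `add_header` directives are not inherited into a `location` block that declares its own `add_header`. The snippet below is an illustrative fragment, not an Adobe template.

```nginx
# Illustrative nginx fragment (not an Adobe deployment template).
server {
    listen 443 ssl;

    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Frame-Options "SAMEORIGIN" always;

    location /static/ {
        # Declaring any add_header here discards the server-level
        # headers above, so they must be repeated (or pulled in via
        # an include file).
        add_header Cache-Control "public, max-age=3600";
        add_header X-XSS-Protection "1; mode=block" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-Frame-Options "SAMEORIGIN" always;
    }
}
```

Managing the repeated block through an `include` keeps the configuration maintainable as the header list grows.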

Effecting the change

We leveraged our existing Secure Product Life Cycle engagement process with product teams to effect this change. As part of this process, we use a security backlog to capture a team’s pending security work. Adding security header work to this backlog ensured that it would be completed as part of our regular processes.

Checking Compliance

Having provided the necessary phased guidance, the last piece of the puzzle was developing an automated way to verify correct implementation of the security headers. Manually checking the status of these headers would be not only effort-intensive but also repetitive and ineffective. Using publicly available scanners (such as https://securityheaders.io) is not always an option, due to accessibility restrictions on stage/dev environments or our specific phased implementation guidance. As noted here, Adobe has undertaken several company-wide security initiatives; a major one is the Security Automation Framework (SAF). SAF allows creating assertions in various programming languages and running them periodically with reporting. First, we organically compiled a list of endpoints for Adobe web properties discovered through other initiatives. Then, Phase 1 and Phase 2 header checks were implemented as SAF assertions that run weekly as scans across the sites on this list. These automated scans have been instrumental in getting a bird’s-eye view of adoption progress and provide the information needed to start a dialogue with product teams.
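A header-compliance check in the spirit of those assertions can be sketched as below. SAF itself is internal, so this is an illustration of the idea rather than its API; the check is split so the header logic can be tested without network access.

```python
# Minimal sketch of a phased header-compliance check (illustrative;
# not the SAF API). Phase lists mirror the guidance above.
from urllib.request import urlopen  # used only by scan_site()

PHASES = {
    1: ["X-XSS-Protection", "X-Content-Type-Options", "X-Frame-Options"],
    2: ["Strict-Transport-Security", "Content-Security-Policy"],
}

def missing_headers(headers: dict, phase: int) -> list:
    """Return required headers (for all phases up to `phase`) absent
    from a response, matching case-insensitively as HTTP allows."""
    present = {name.lower() for name in headers}
    required = [h for p in range(1, phase + 1) for h in PHASES[p]]
    return [h for h in required if h.lower() not in present]

def scan_site(url: str, phase: int = 1) -> list:
    """Fetch a URL and report its missing headers (network required)."""
    with urlopen(url) as resp:
        return missing_headers(dict(resp.headers), phase)
```

Running such a check weekly over a site list and aggregating the results gives the bird's-eye view described above.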

Security headers provide a useful layer in any defense-in-depth strategy, and most are relatively easy to implement. The size of an organization and the number of products can introduce challenges that are not specific to security headers. Within any organization, it’s critical to plan security initiatives so that they form a malleable ecosystem that eases the implementation of future initiatives. It does indeed take a village to implement security controls in an organization as big as Adobe.

Prithvi Bisht
Senior Security Researcher

OWASP, IR, ML, and Internal Bug Bounties

A few weeks ago, I traveled to the OWASP Summit just outside of London. The OWASP Summit is not a conference; it is a remote offsite event for OWASP leaders and the community to brainstorm on how to improve OWASP. There were a series of one-hour sessions on how to tackle different topics and what OWASP could offer to make the world better. This is a summary, from my personal perspective, of some of the more interesting sessions I was involved in. These views are not official OWASP views, nor are they intended to represent the views of everyone who participated in the respective working groups.

One session that I attended dealt with incident response (IR). Oftentimes, incident response guidelines are written for the response team: they cover how to do forensics, analyze logs, and handle the other response tasks performed by the core IR team. However, there is an opportunity to create incident response guidelines that target the security champions and developers the IR team interacts with during an incident. In addition, OWASP can relate how this information ties back into its existing secure development best practices. As an example, one of the first questions from an IR team is how customer data flows through the application; this allows the IR team to quickly determine what might have been exposed. In theory, a threat model should contain this type of diagram and could be an immediate starting point for the discussion. After the IR event, it is also good for the security champion to hold a post-mortem review of the threat model to determine how the process could have better identified the exploited risk. Many of the secure development best practices recommended in the security development lifecycle support both proactive and reactive security efforts. A security champion should also know how to contact the IR team and regularly sync with them on current response policies. These types of recommendations from OWASP could help companies ensure that the security champions on development teams are prepared to assist the IR team during a potential incident. Many of the ideas from our discussion were captured in the outcomes page from the session: https://owaspsummit.org/Outcomes/Playbooks/Incident-Response-Playbook.html.

Another interesting discussion from the summit dealt with internal bug bounties, an area where Devesh Bhatt and I were able to provide input based on our experiences at Adobe. Devesh has participated in our internal bounty programs as well as several external bounty programs. Internal bug bounties have been a hot topic in the security community. On the one hand, many companies have employees who participate in public bounties, and it would be great to focus those skills on internal projects; such employees often include general developers outside of the security team who therefore aren’t directly tied into internal security testing efforts. On the other hand, you want to avoid creating conflicting incentives within the company. If this topic interests you, Adobe’s Pieter Ockers will discuss the internal bug bounty program he created in detail at the O’Reilly Security conference in New York this October: https://conferences.oreilly.com/security/sec-ny/public/schedule/detail/62443.

Lastly, there was a session on machine learning. This has been a recent research area of mine, since it is the natural next step for applying the data collected from our security automation work. Adobe also applies machine learning in projects like Sensei. Even though the session was on Friday, there was a large turnout of OWASP leaders. We discussed whether there were ways to share machine learning training data sets, along with methodologies for generating them using common tools. One observation was that many people are still very new to the topic of machine learning. Therefore, I decided to start by drafting a machine learning resources page for OWASP. The goal of the page isn’t to copy existing introductory content onto an OWASP page, where it could quickly become dated. Nor is it designed to drown the reader with a link to every machine learning reference ever created. Instead, it focuses on a small selection of references useful for getting someone started with the basic concepts; the reader can then find their own content that goes deeper into the areas that interest them. For instance, Coursera provides an 11-week Stanford course on machine learning, but that would overwhelm a person seeking only a high-level overview. The first draft of my straw-man proposal for the OWASP page can be found here: https://www.owasp.org/index.php/OWASP_Machine_Learning_Resources. As OWASP creates more machine learning content, this page could eventually become a useful appendix. It is only a proposal and not an official OWASP project at this stage. Additional ideas from the workshop discussion can be found on the outcomes page: https://owaspsummit.org/Outcomes/machine-learning-and-security/machine-learning-and-security.html.

OWASP is a community driven group where all are invited to participate. If these topics interest you, then feel free to join an OWASP mailing list and start to provide more ideas. There were several other great sessions from the summit and you can find a list of outcomes from each session here: https://owaspsummit.org/Outcomes/.

Peleus Uhley
Principal Scientist

Join Members of our Security Team at AppSec Europe and Security of Things World

Our director of secure software engineering, Dave Lenoe, will be speaking at the upcoming Security of Things World conference in Berlin, Germany, June 27 – 28. In addition, two more members of our security team will also be speaking at the upcoming OWASP AppSec Europe conference in Rome, Italy, June 27 – July 1.

First up is Dave at Security of Things World. He will be speaking about how Adobe engages with the broader security community for both proactive and reactive assistance in finding and resolving vulnerabilities in our solutions. You can join him on Monday, June 27, at 2:30 p.m.

Next up will be Julia Knecht, security analyst for Adobe Marketing Cloud, at OWASP AppSec Europe, sharing lessons learned from developing and employing an effective Secure Product Lifecycle (SPLC) process for our Marketing Cloud solutions. This session will give you on-the-ground knowledge to help you develop your own SaaS-ready SPLC that breaks down silos in your organization, making it more agile and effective at building secure solutions. Julia’s session will be on Thursday, June 30th, at 3:00 p.m.

Finally, Vaibhav Gupta, security researcher, will be leading a “lightning training” on the OWASP Zed Attack Proxy (ZAP) tool at OWASP AppSec Europe. ZAP is one of the world’s most popular free security tools and is actively maintained by hundreds of international volunteers. It helps you automatically find security vulnerabilities in your web applications while you are developing and testing them. This training is focused on helping you with ZAP automation to enable better integration of it into your DevOps environment. Vaibhav’s session will be on Friday, July 1st, at 10:20 a.m.

If you will be at either of these conferences next week, we hope you can join our team for their sessions and conversation after or in the hallways throughout the event.

Re-assessing Web Application Vulnerabilities for the Cloud

As I have been working with our cloud teams, I have found myself constantly reconsidering my legacy assumptions from my Web 2.0 days. I discussed a few of the high-level ones previously on this blog. For OWASP AppSec California in January, I decided to explore the impact of the cloud on Web application vulnerabilities. One of the assumptions that I had going into cloud projects was that the cloud was merely a hosting provider layer issue that only changed how the servers were managed. The risks to the web application logic layer would remain pretty much the same. I was wrong.

One of the things that kicked off my research in this area was watching Andres Riancho’s “Pivoting in Amazon Clouds” talk at Black Hat last year. He had found a remote file include vulnerability in an AWS-hosted Web application he was testing. Basically, the attacker convinces the Web application to act as a proxy and fetch the content of remote sites. Typically, this vulnerability could be used for cross-site scripting or defacement, since the attacker can get the contents of a remote site injected into the context of the current Web application. Riancho was able to use the vulnerability to reach the metadata server for the EC2 instance and retrieve AWS configuration information. From there, he used that information, along with a few of the client’s defense-in-depth issues, to escalate into taking over the entire AWS account. The possible impacts of this class of vulnerability have therefore increased.
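One mitigation for this pivot is validating the target of any "fetch this URL" feature before making the request. The sketch below illustrates the idea with Python's standard library; it is a simplified model (a real deployment must also handle redirects and DNS rebinding), and the function and parameter names are mine.

```python
# Sketch: refuse server-side fetches to private and link-local
# addresses, including the EC2 metadata service at 169.254.169.254.
# Illustrative only; does not handle redirects or DNS rebinding.
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_fetch_target(url: str, resolver=socket.gethostbyname) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        addr = ipaddress.ip_address(resolver(parsed.hostname))
    except (OSError, ValueError):
        return False  # unresolvable host or malformed address
    # is_link_local covers 169.254.0.0/16, i.e. the metadata endpoint.
    return not (addr.is_private or addr.is_link_local or addr.is_loopback)
```

The `resolver` parameter is injectable so the policy can be tested without network access.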

The cloud also involves migration to a DevOps process. In the past, a network layer vulnerability led to network layer issues, and a Web application layer vulnerability led to Web application vulnerabilities. Today, since the scope of these roles overlap, a breakdown in the DevOps process means network layer issues can impact Web applications.

One vulnerability making the rounds recently stems from breakdowns in the DevOps process. The flow of the issue is as follows:

  1. The app/product team requests an S3 bucket called my-bucket.s3.amazonaws.com.
  2. The app team requests the IT team to register the my-bucket.example.org DNS name, which will point to my-bucket.s3.amazonaws.com, because a custom corporate domain will make things clearer for customers.
  3. Time elapses, and the app team decides to migrate to my-bucket2.s3.amazonaws.com.
  4. The app team requests from the IT team a new DNS name (my-bucket2.example.org) pointing to this new bucket.
  5. After the transition, the app team deletes the original my-bucket.s3.amazonaws.com bucket.

This all sounds great, except that in this workflow the application team didn’t inform IT, and the original DNS entry was never deleted. An attacker can now register my-bucket.s3.amazonaws.com for their malicious content. Since the my-bucket.example.org DNS name still points there, the attacker can convince end users that the malicious content is example.org’s content.

This exploit is a defining example of why DevOps needs to exist within an organization. The flaw in this situation is a disconnect between the IT/Ops team that manages the DNS server and the app team that manages the buckets. The result of this disconnect can be a severe Web application vulnerability.
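A periodic check can close this gap by comparing the CNAME records IT manages against the buckets the app team still owns. The sketch below is illustrative; the record and bucket names are the hypothetical ones from the workflow above.

```python
# Sketch: detect DNS names whose CNAME target is an S3 bucket the
# organization no longer owns (and an attacker could re-register).

def dangling_records(dns_cnames: dict, owned_buckets: set) -> list:
    """dns_cnames maps DNS name -> CNAME target; owned_buckets is the
    set of bucket names still controlled by the organization."""
    suffix = ".s3.amazonaws.com"
    dangling = []
    for name, target in dns_cnames.items():
        if target.endswith(suffix):
            bucket = target[: -len(suffix)]
            if bucket not in owned_buckets:
                dangling.append(name)
    return dangling
```

In practice the DNS side would come from the zone file and the bucket side from the cloud account inventory; the comparison itself is this simple.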

Many cloud migrations also involve switching from SQL databases to NoSQL databases. In addition to following the hardening guidelines for the respective databases, it is important to look at how developers are interacting with these databases.

Along with new NoSQL databases come a host of new methods for applications to interact with them. For instance, the Unity JDBC driver allows you to create traditional SQL statements for use with the MongoDB NoSQL database. Developers also have the option of using REST frontends for their database. Clearly, a security researcher needs to know how an attacker might inject into the statements for the specific NoSQL server. However, it’s also important to look at the way the application sends NoSQL statements to the database, as that can add additional attack surface.

NoSQL databases can also put existing risks in a new context. For instance, in the context of a webpage, a malicious injection into an eval() call results in cross-site scripting (XSS); in the context of MongoDB’s server-side JavaScript support, a malicious injection into eval() could allow server-side JavaScript injection (SSJI). Therefore, database developers who choose not to disable JavaScript support need to be trained on JavaScript injection risks when using statements like eval() and $where, or when using a DB driver that exposes the Mongo shell. Existing JavaScript training on eval() would need to be modified for the database context, since MongoDB’s implementation differs from the browser’s.
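The $where risk can be illustrated with query documents alone, no database connection required. The field names below are hypothetical; the point is the difference between input that becomes server-side JavaScript and input that stays data.

```python
# Illustration of the SSJI risk: MongoDB filters expressed as dicts.

def unsafe_filter(username: str) -> dict:
    # BAD: concatenating user input into a $where predicate, which
    # MongoDB evaluates as server-side JavaScript. An input such as
    # "x' || '1'=='1" turns the predicate into one matching every
    # document.
    return {"$where": "this.username == '" + username + "'"}

def safe_filter(username: str) -> dict:
    # Better: a structured equality match. The input remains data
    # and is never evaluated as JavaScript on the server.
    return {"username": username}
```

The structured form is also what allows disabling server-side JavaScript entirely, the hardening option mentioned above.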

My original assumption that a cloud migration was primarily an infrastructure issue was false. While many of these vulnerabilities were always present and always severe, the migration to cloud platforms and processes means these bugs can manifest in new contexts, with new consequences. Existing recommendations will need to be adapted for the cloud. For instance, many NoSQL databases do not support the concept of prepared statements, so alternative defensive methods will be required. If your team is migrating an application to the cloud, it is important to revisit your threat model approach for the new deployment context.

Peleus Uhley
Lead Security Strategist

Observations From an OWASP Novice: OWASP AppSec Europe

Last month, I had the opportunity to attend OWASP AppSec Europe in Cambridge.

The conference was split into two parts: the first two days consisted of training courses and project summits, where the different OWASP project teams met to discuss problems and next steps, and the last two days were devoted to conference and research presentations.

Admittedly an OWASP novice, I was excited to learn what OWASP has to offer beyond the Top 10 Project most of us are familiar with. As is commonly the case with conferences, a lot of interesting conversations occurred over coffee (or cider). I had the opportunity to meet some truly fascinating individuals who gave great insight into the “other” side of the security fence, including representatives from the Information Security Group at Royal Holloway, various OWASP chapters, and many more.

One of my favorite presentations was from Sebastian Lekies, PhD candidate at SAP and the University of Bochum, who demonstrated byte-level website flow analysis using a modified Chrome browser to find DOM-based XSS attacks. Taint tags were put on every byte of memory that comes from user input and traced through the whole execution until displayed back to the user. This browser was used to automatically analyze the first two levels of all Alexa Top 5000 websites, finding that an astounding 9.6 percent carry at least one DOM-based XSS flaw.

Another interesting presentation was a third day keynote by Lorenzo Cavallaro from Royal Holloway University. He and his team are creating an automatic analysis system to reconstruct behaviors of Android malware called CopperDroid. It was a very technical, very interesting talk, and Lorenzo could have easily filled another 100 hours.

Rounding out the event were engaging activities that broke up the sessions – everything from the University Challenge to a game show to a (very Hogwarts-esque) conference dinner at Homerton College’s Great Hall.

All in all, it was an exciting opportunity for me to learn how OWASP has broadened its spectrum in the last few years beyond web application security and all the resources that are currently available. I learned a lot, met some great people, and had a great time. I highly recommend to anyone that has the opportunity to attend!

Lars Krapf
Security Researcher, Digital Marketing

ColdFusion 11 Enhances the Security Foundation of ColdFusion 10

Tuesday marked the release of ColdFusion 11, the most advanced version of the platform to date. In this release, many of the features introduced in ColdFusion 10 have been upgraded and strengthened, and developers will now have access to an even more extensive toolkit of security controls and additional features. 

A few of the most significant ColdFusion 11 upgrades fall into three categories. The release includes advances in the Secure Profile feature, access to more OWASP tools, and a host of new APIs and development resources.

1. More OWASP Tools

In ColdFusion 11, several new OWASP tools have been added to provide more integrated security features. For instance, features from the AntiSamy project have been included to help developers safely display controlled subsets of user-supplied HTML/CSS. ColdFusion 11 exposes AntiSamy through the new getSafeHTML() and isSafeHTML() functions.

In addition, ColdFusion 11 contains more tools from OWASP’s Enterprise Security API library, or ESAPI, including the EncodeForXPath and EncodeForXMLAttribute features. These ESAPI features provide developers more flexibility to update the security of existing applications and serve as a strong platform for new development.

2. Flexible Secure Profile Controls

Secure Profile was a critical feature in ColdFusion 10, because it allowed administrators to deploy ColdFusion with secure defaults. In the ColdFusion 11 release, admins have even more flexibility when deploying Secure Profile.

In ColdFusion 10, customers could choose whether to enable the secure profile only at installation time. With ColdFusion 11, customers can now turn Secure Profile on or off after installation, whenever they’d like, which streamlines the lockdown process that helps prevent a variety of attacks.

Further improvements to the Secure Profile are documented here.

3. Integrating Security into Existing APIs

ColdFusion 11 has many upgraded APIs and features, but there are a few I’d like to highlight here. First, ColdFusion 11 includes an advanced password-based key derivation function, PBKDF2, which allows developers to create encryption keys from passwords using an industry-accepted cryptographic algorithm. Additionally, the cfmail feature now supports sending S/MIME-encrypted e-mails, and SSL can now be enabled for WebSockets. More security upgrade information can be found in the ColdFusion 11 docs.
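The idea behind PBKDF2 can be sketched with Python's standard library rather than ColdFusion's API (this illustrates the algorithm, not ColdFusion's function signature): a password and a random salt are stretched through many HMAC iterations to derive a key, making offline brute force far more expensive than a single hash.

```python
# Sketch of PBKDF2 key derivation using Python's stdlib (illustrative
# of the concept; not the ColdFusion API).
import hashlib
import os
from typing import Optional

def derive_key(password: str, salt: Optional[bytes] = None,
               iterations: int = 600_000) -> tuple:
    """Return (salt, key). A fresh random salt is generated when none
    is supplied; store the salt alongside the derived key so the same
    key can be re-derived later for verification."""
    if salt is None:
        salt = os.urandom(16)  # unique salt per password
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, key
```

The iteration count is the tunable work factor; it should be raised over time as hardware improves.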

Overall, this latest iteration of the platform increases flexibility for developers, while enhancing security. Administrators will now find it even easier to lock down their environments. For information on additional security features please refer to the Security Enhancements (ColdFusion 11) page and the CFML Reference (ColdFusion 11).

Peleus Uhley
Lead Security Strategist