Posts tagged "DevOps"

Re-assessing Web Application Vulnerabilities for the Cloud

As I have been working with our cloud teams, I have found myself constantly reconsidering my legacy assumptions from my Web 2.0 days. I discussed a few of the high-level ones previously on this blog. For OWASP AppSec California in January, I decided to explore the impact of the cloud on Web application vulnerabilities. One of the assumptions that I had going into cloud projects was that the cloud was merely a hosting provider layer issue that only changed how the servers were managed. The risks to the web application logic layer would remain pretty much the same. I was wrong.

One of the things that kicked off my research in this area was watching Andres Riancho’s “Pivoting in Amazon Clouds” talk at Black Hat last year. He had found a remote file include vulnerability in an AWS-hosted Web application he was testing. Basically, the attacker convinces the Web application to act as a proxy and fetch the content of remote sites. Traditionally, this vulnerability could be used for cross-site scripting or defacement, since the attacker could get the contents of the remote site injected into the context of the current Web application. Riancho was able to use that vulnerability to reach the metadata server for the EC2 instance and retrieve AWS configuration information. From there, he was able to use that information, along with a few of the client’s defense-in-depth issues, to escalate into taking over the entire AWS account. In the cloud, then, the potential impact of this vulnerability has increased.
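To make the pattern concrete, here is a minimal sketch of the vulnerable behavior as a toy Sinatra endpoint; the route and parameter names are my own invention, not the application Riancho tested:

require 'sinatra'
require 'net/http'

# A "fetch this URL and show it to me" feature. Nothing restricts the target,
# so on EC2 an attacker can point it at the instance metadata service, e.g.
#   /preview?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/
get '/preview' do
  Net::HTTP.get(URI(params[:url]))
end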

The cloud also involves migrating to a DevOps process. In the past, a network layer vulnerability led to network layer issues, and a Web application layer vulnerability led to Web application layer issues. Today, since the scope of these roles overlaps, a breakdown in the DevOps process means network layer issues can impact Web applications.

One vulnerability making the rounds recently stems from exactly this kind of breakdown in the DevOps process. The flow of the issue is as follows:

  1. The app/product team requests an S3 bucket called my-bucket.s3.amazonaws.com.
  2. The app team requests the IT team to register the my-bucket.example.org DNS name, which will point to my-bucket.s3.amazonaws.com, because a custom corporate domain will make things clearer for customers.
  3. Time elapses, and the app team decides to migrate to my-bucket2.s3.amazonaws.com.
  4. The app team requests from the IT team a new DNS name (my-bucket2.example.org) pointing to this new bucket.
  5. After the transition, the app team deletes the original my-bucket.s3.amazonaws.com bucket.

This all sounds great. Except, in this workflow, the application team didn’t inform IT, and the original DNS entry was never deleted. An attacker can now register my-bucket.s3.amazonaws.com for their malicious content. Since the my-bucket.example.org DNS name still points there, the attacker can convince end users that the malicious content is example.org’s content.
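One way to catch this kind of drift is to periodically walk your own DNS aliases and check whether the S3 buckets they point to still exist. A rough sketch, with hypothetical record names, might look like this:

require 'resolv'
require 'net/http'

['my-bucket.example.org', 'my-bucket2.example.org'].each do |name|
  cname = Resolv::DNS.new.getresource(name, Resolv::DNS::Resource::IN::CNAME).name.to_s
  next unless cname.end_with?('s3.amazonaws.com')

  # A deleted bucket answers 404 (NoSuchBucket), which means the name is free
  # for anyone to claim while our alias still points at it.
  resp = Net::HTTP.start(cname, 443, use_ssl: true) { |http| http.head('/') }
  puts "#{name} -> #{cname} looks dangling" if resp.code == '404'
end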

This exploit is a defining example of why DevOps needs to exist within an organization. The flaw in this situation is a disconnect between the IT/Ops team that manages the DNS server and the app team that manages the buckets. The result of this disconnect can be a severe Web application vulnerability.

Many cloud migrations also involve switching from SQL databases to NoSQL databases. In addition to following the hardening guidelines for the respective databases, it is important to look at how developers are interacting with these databases.

Along with new NoSQL databases come a ton of new methods for applications to interact with them. For instance, the Unity JDBC driver allows you to create traditional SQL statements for use with the MongoDB NoSQL database. Developers also have the option of using REST frontends for their database. It is clear that a security researcher needs to know how an attacker might inject into the statements for their specific NoSQL server. However, it’s also important to look at the way the application sends those NoSQL statements to the database, as that path can add additional attack surface.
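As a purely hypothetical illustration of that extra surface, consider a filter expression assembled by string concatenation and shipped off to a REST frontend; the endpoint shape below is invented, but the point is that the statement now also travels as a URL that can be tampered with:

require 'net/http'

category = request_params['category']   # stand-in for untrusted user input
filter   = "category == '#{category}'"  # classic string-built statement

# The query is no longer a driver call; it is an HTTP request to another service.
uri = URI('http://db-frontend.internal:8080/app/products')
uri.query = URI.encode_www_form(filter: filter)
Net::HTTP.get(uri)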

NoSQL databases can also take existing risks and put them in a new context. For instance, in the context of a webpage, an eval() call on attacker-controlled data results in cross-site scripting (XSS). In the context of MongoDB’s server-side JavaScript support, a malicious injection into eval() could allow server-side JavaScript injection (SSJI). Therefore, database developers who choose not to disable the JavaScript support need to be trained on JavaScript injection risks when using statements like eval() and $where, or when using a DB driver that exposes the Mongo shell. Existing JavaScript training on eval() would need to be modified for the database context, since the MongoDB implementation is different from the browser version.
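As a contrived sketch of the difference, using the Ruby mongo driver (the collection, field, and input source are made up for illustration):

require 'mongo'

client = Mongo::Client.new(['localhost:27017'], database: 'app')
name   = request_params['name']  # stand-in for untrusted user input

# Unsafe: the input is spliced into server-side JavaScript, so a value such as
# "x' || '1'=='1" rewrites the query logic, and nastier payloads run as code in mongod.
client[:users].find('$where' => "this.name == '#{name}'")

# Safer: express the lookup as a plain query document, keeping the input as data
# and avoiding server-side JavaScript entirely.
client[:users].find(name: name)

Since most NoSQL drivers have no true prepared statements, the query-document form (plus strict input validation) is the practical substitute for the parameterized queries we would normally recommend.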

My original assumption that a cloud migration was primarily an infrastructure issue was false. While many of these vulnerabilities were always present and always severe, the migration to cloud platforms and processes means these bugs can manifest in new contexts, with new consequences. Existing recommendations will need to be adapted for the cloud. For instance, many NoSQL databases do not support the concept of prepared statements, so alternative defensive methods will be required. If your team is migrating an application to the cloud, it is important to revisit your threat model approach for the new deployment context.

Peleus Uhley
Senior Security Strategist

A First Experiment with Chef

Security professionals are now using cloud solutions to manage large-scale security issues in the production Ops workflow. In order to achieve scale, the security processes are automated through tools such as Chef. However, you can also leverage the automation aspects of the cloud regardless of whether you plan to scale to hundreds of machines. For example, one of my first cloud projects used Chef to solve a personal, small-scale problem where I might previously have used a VM or separate machine.

Using the cloud for personal DevOps

At Adobe, we’re constantly hiring third-party security consultants to test our products. They require environments for building and testing Adobe code, and they need access to tools like Cygwin, Wireshark, SysInternals, WinDbg, etc. For my personal testing, I also require access to machines with a similar setup. Using the cloud, it is possible to quickly spin up and destroy these types of security testing environments as needed.

For this project, I used Adobe’s private cloud infrastructure, but that is just an implementation detail; this approach can work on any cloud infrastructure. Our IT department also provides an internal Chef server for hosting cookbooks and managing roles.

In order to connect the Chef server and the cloud environment, I decided to set up a web server that would copy the Chef config files to the cloud instance and launch chef-client. For this, I chose Windows IIS, C#, and ASP.NET because I had an unrelated goal of learning more about WMI and PowerShell through a coding project. Vagrant for Windows would be an alternative to this approach, but it wasn’t available when I began the project. My personal Linux box was used to write the Chef recipes and upload the cookbook to the Chef server.

The workflow of the set-up process is as follows:

1. The user requests a new Windows 7 64-bit instance from the cloud and waits for the response saying it is ready.

2. The user provides the domain name and administrator password for the Windows 7 instance to the web page. If I need to set up additional accounts on the machine, I can also collect those here and run command lines over WMI to create them.

3. Using the admin credentials, the web server issues a series of remote WMI calls to create a PowerShell script on the Windows 7 instance. WMI doesn’t allow you to directly create files on remote machines, so I had to use “echo” and redirect the output.

4. By default, Windows doesn’t allow you to run unsigned PowerShell scripts. However, with admin privileges you can use a command line to disable the signature check before executing the script. Once the script has finished, the signature check is re-enabled.

5. The PowerShell script downloads the client.rb file and validator.pem key needed to register the Windows 7 instance with the Chef server.

6. WMI can then be used to run the chef-client command line, which registers the new Windows 7 instance with the Chef server.

7. Since Chef requires a privileged user to assign roles to the Win7 node, a separate Chef key and a copy of the knife utility are stored locally on the web server. The C# server code executes knife using the privileged key to assign the appropriate role to the new Windows 7 node in Chef (a rough sketch of this call follows the list).

8. Lastly, the web server uses WMI to run chef-client on the Windows 7 instance a second time so that the recipes are executed. The last Chef recipe in the list creates a finished.txt file on the file system so that the web server can verify the process is complete.
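For step 7, the server effectively shells out to knife. A rough equivalent of that call, sketched here in Ruby rather than the C# the web server actually uses (the node name, role name, and key path are assumptions):

node_name = 'win7-testbed-01.corp.example.com'  # hypothetical node name
system('knife', 'node', 'run_list', 'add', node_name, 'role[security-testbed]',
       '--config', '/srv/chef/privileged/knife.rb') or raise "knife failed for #{node_name}"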

Lessons Learned from Writing the Recipes

Using Chef, I was able to install common tools such as WinDbg, Cygwin (with custom-selected utilities), SysInternals, Wireshark, Chrome, etc. Chef can also be used to execute VBScript, which allowed me to accomplish tasks such as running Windows Update during the setup process. For most recipes, the Opscode Windows cookbook made writing the recipes fairly straightforward.
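A minimal sketch of that VBScript pattern, assuming a script named run_windows_update.vbs shipped with the cookbook (the script itself is not shown):

directory 'c:\\chef_scripts'

cookbook_file 'c:\\chef_scripts\\run_windows_update.vbs' do
  source 'run_windows_update.vbs'
end

windows_batch 'run_windows_update' do
  code 'cscript //B c:\\chef_scripts\\run_windows_update.vbs'
end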

Some of the installers will require a little legwork. For instance, you may have to search the web for the command-line flags necessary for a silent install. If you cannot automatically download an installer from the web, the installers can be stored on the Chef server along with the recipes. For Wireshark, it was necessary to download and install WinPcap before downloading and installing Wireshark. One client installer was not Chef-friendly, which led to the creation of a Chef script that has Windows Task Scheduler install the software instead.
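For example, the WinPcap-then-Wireshark ordering can be expressed as two windows_package resources. The URLs, versions, and silent-install flags below are illustrative only; as noted above, finding flags that actually work unattended is where the legwork comes in:

windows_package 'WinPcap' do
  source 'http://www.winpcap.org/install/bin/WinPcap_4_1_3.exe'
  installer_type :custom
  options '/S'            # NSIS-style silent flag; verify against the installer you use
  action :install
end

windows_package 'Wireshark' do
  source 'https://www.wireshark.org/download/win64/Wireshark-win64-1.12.0.exe'
  installer_type :nsis    # the windows cookbook supplies silent flags for NSIS installers
  action :install
end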

As a specific example, Cygwin can be one of the more complicated installs because it is a four-step process. There are some public Cygwin recipes available for more advanced installs, but the following is enough to get it installed using just the Opscode Windows cookbook. For a robust deployment, additional code can be added to ensure that it doesn’t get run twice if Cygwin is already installed (see the note after the recipe). The demo workflow below uses windows_package to download the setup executable. The windows_batch commands then run the executable to install the default configuration, download additional utilities, and install those additional utilities:

include_recipe "windows"

windows_package 'cygwin64' do
  source 'http://cygwin.com/setup-x86_64.exe'
  installer_type :custom
  action :install
  options '--download --quiet-mode --local-package-dir c:\\cygwin_downloads --root c:\\cygwin64 -s http://mirror.cs.vt.edu/pub/cygwin/cygwin/'
end

windows_batch 'cygwin_install_from_local_dir' do
  code <<-EOH
    #{Chef::Config[:file_cache_path]}/setup-x86_64.exe --local-install --quiet-mode --local-package-dir c:\\\\cygwin_downloads --root c:\\\\cygwin64

    #{Chef::Config[:file_cache_path]}/setup-x86_64.exe --download --quiet-mode --local-package-dir c:\\\\cygwin_downloads --root c:\\\\cygwin64 -s http://mirror.cs.vt.edu/pub/cygwin/cygwin/ --packages vim,curl,wget,python,cadaver,rats,nc,nc6,git,subversion,flawfinder,stunnel,emacs,openssh,openssl,binutils

    #{Chef::Config[:file_cache_path]}/setup-x86_64.exe --quiet-mode --local-package-dir c:\\\\cygwin_downloads --root c:\\\\cygwin64 -s http://mirror.cs.vt.edu/pub/cygwin/cygwin/ --packages vim,curl,wget,python,cadaver,rats,nc,nc6,git,subversion,flawfinder,stunnel,emacs,openssh,openssl,binutils -L

    Exit 0
  EOH
end
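For the “don’t run it twice” concern mentioned above, one simple option is a guard on the batch resource, assuming the default install root:

windows_batch 'cygwin_install_from_local_dir' do
  code '...'  # same commands as above
  not_if { ::File.exist?('c:\\cygwin64\\bin\\bash.exe') }
end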

Post-Project Reflections

There are definitely alternative approaches, such as using a baseline VM snapshot or a pre-built AMI with all of these tools installed. They each have their own pros and cons. For instance, local VMs launch faster, but the scaling is limited by disk space. I chose Chef recipes because they provide the flexibility to create custom builds and ensure that everything is current. If needed, the system can scale quickly. The extra work in writing the server meant that I could make the process available to other team members.

Overall, despite having to be creative with some recipes, the project didn’t require a huge investment of time. A large portion was written while killing time in airports. The fact that I was able to go from a limited amount of knowledge to a working implementation fairly quickly speaks to the ease of Chef. If you are interested in learning more about the cloud, you don’t need a large, massively scalable project in order to get your hands dirty. It is possible to start with simple problems like creating disposable workstations.

Peleus Uhley
Lead Security Strategist