Posts in Category "Uncategorized"

An Overview of Behavior Driven Development Tools for CI Security Testing

While researching continuous integration (CI) testing models for Adobe, I have been experimenting with different open-source tools like Mittn, BDD-Security, and Gauntlt. Each of these tools centers around a process called Behavior Driven Development (BDD). BDD enables you to define your security requirements as “stories,” which are compatible with Scrum development and continuous integration testing. Whereas previous approaches required development teams to go outside their normal process to use security tools, these frameworks aim to integrate security tools within the existing development process.

None of these frameworks is designed to replace your existing security testing tools. Instead, they're designed to be wrappers around those tools so that you can clearly define unit tests as scenarios within a story. The easiest way to understand the story/scenario concept is to look at a few examples from BDD-Security. This first scenario is part of an authentication story. It verifies that account lockouts are enforced by the demo web application:

Scenario: Lock the user account out after 4 incorrect authentication attempts
Meta: @id auth_lockout
Given the default username from: users.table
And an incorrect password
And the user logs in from a fresh login page 4 times
When the default password is used from: users.table
And the user logs in from a fresh login page
Then the user is not logged in

The BDD frameworks take this human-readable statement about your security policy and translate it into a technical unit test for your web application penetration testing tool. With this approach, you're able to phrase your security requirements for the application as a true/false statement. If Jenkins sees a false result from this unit test, it catches the bug immediately and can flag the build. In addition, this human-readable approach to defining unit tests allows the scenarios to double as documentation. An auditor can quickly read through the scenarios and map them to a threat model or policy requirement.

In order to interact with your web site and perform the login, the framework needs a corresponding class written in a web browser automation framework. The BDD example above used a custom class that leverages the Selenium 2 framework to navigate to the login page, find the login form elements, fill in their values, and have the browser perform the submit action. Selenium is a common tool for web site testers, so your testing team may already be familiar with it or similar frameworks.
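
BDD-Security's bundled navigation classes are written in Java, but the underlying browser automation logic looks much the same in any Selenium binding. As a rough, hypothetical sketch (the URL and form field names below are placeholders, not part of the BDD-Security demo), the login step boils down to something like this using the Ruby selenium-webdriver gem:

require 'selenium-webdriver'

# Hypothetical page object that knows how to log in to the application under test.
class LoginPage
  LOGIN_URL = 'http://localhost:8080/insecure-app/login' # placeholder URL

  def initialize(driver)
    @driver = driver
  end

  def login(username, password)
    @driver.navigate.to LOGIN_URL
    @driver.find_element(name: 'username').send_keys(username)
    @driver.find_element(name: 'password').send_keys(password)
    # submitting any element inside the form submits the enclosing form
    @driver.find_element(name: 'password').submit
  end
end

driver = Selenium::WebDriver.for :firefox
LoginPage.new(driver).login('bob', 'not-the-real-password')
driver.quit

A scenario step such as "the user logs in from a fresh login page" then simply calls into a class like this.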

Writing custom classes that understand your web site is good for creating specific tests around your application logic. However, you can also perform traditional spidering and scanning tests as in this second example from BDD-Security:

Scenario: The application should not contain Cross Site Scripting vulnerabilities
Meta: @id scan_xss
#navigate_app will spider the website
GivenStories: navigate_app.story
Given a scanner with all policies disabled
And the Cross-Site-Scripting policy is enabled
And the attack strength is set to High
And the alert threshold is set to Medium
When the scanner is run
And false positives described in: tables/false_positives.table are removed
Then no Medium or higher risk vulnerabilities should be present

For the BDD-Security demo, the scanner used is the OWASP ZAP proxy, although BDD frameworks are not limited to tests run through a web proxy. For instance, this example from BDD-Security shows how to run Nessus and ensure that the scan doesn't return anything of severity 2 (medium) or higher:

Scenario: The host systems should not expose known security vulnerabilities

Given a nessus server at https://localhost:8834
And the nessus username continuum and the password continuum
And the scanning policy named test
And the target hosts
|hostname |
|localhost  |
When the scanner is run with scan name bddscan
And the list of issues is stored
And the following false positives are removed
|PluginID   |Hostname   |  Reason                                                                      |
|43111      |127.0.0.1    |  Example of how to add a false positive to this story  |
Then no severity: 2 or higher issues should be present

There are a lot of good blogs and presentations (1,2,3) that explain the benefits of BDD approaches to security in more depth, so I won't go into further detail here. Instead, I will focus on three current tools and highlight key differences that are important to consider when evaluating them.

Which BDD Tool is Right for You?

To start, here is a quick summary of the tools at the time of this writing:

|                                | Mittn               | Gauntlt                            | BDD-Security    |
| Primary Language               | Python              | Ruby                               | Java            |
| Approximate Age                | 3 months            | 2 years                            | 2 years         |
| Commits within last 3 months   | yes                 | yes                                | yes             |
| BDD Framework                  | Behave              | Cucumber                           | JBehave         |
| Default Web App Pen Test Tools | Burp Suite, radamsa | Garmr, arachni, dirb, sqlmap, curl | ZAP, Burp Suite |
| Default SSL Analysis           | sslyze              | heartbleed, sslyze                 | TestSSL         |
| Default Network Layer Tools    | N/A                 | nmap                               | Nessus          |
| Windows or Unix                | Unix                | Unix**                             | Both            |

** Gauntlt's "When tool is installed" statement is dependent on the Unix "which" command. If you exclude that statement from your scenarios, then many tests will work on Windows.

If you plan to wrap more than the officially supported list of tools or have complex application logic, then you may need custom sentences, known as "step definitions." Modifying step definitions is not difficult, although once you start modifying code, you have to consider how to merge your changes with future updates to the framework.

Each framework has a different approach to its step definitions. For instance, BDD-Security tends to encourage formal step definition sentences in all its test cases, which would require code changes for custom steps. With Gauntlt, you can store additional step definition files in the attack_adapters directory. Gauntlt also provides flexibility through a few generic step definitions that allow you to check the output of arbitrary raw command lines, as seen in its hello world example below:

Feature: hello world with gauntlt using the generic command line attack
  Scenario:
    When I launch a "generic" attack with:
      """
      cat /etc/passwd
      """
    Then the output should contain:
      """
      root
      """
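
For cases the built-in steps don't cover, a custom step definition is just ordinary Cucumber code. The following is a minimal, hypothetical sketch; the step wording, the sslyze command line, and the use of RSpec expectations are my own assumptions rather than anything shipped with Gauntlt:

# step_definitions/custom_tls_steps.rb (hypothetical file)
When(/^I check the TLS configuration of "(.*?)"$/) do |host|
  # shell out to a locally installed tool and capture its output
  @last_output = `sslyze --regular #{host}`
end

Then(/^the TLS output should not contain "(.*?)"$/) do |needle|
  # assumes rspec-expectations is available (commonly loaded by Cucumber)
  expect(@last_output).not_to include(needle)
end

Dropping a file like this into Gauntlt's attack_adapters directory (or a plain Cucumber support directory) makes the new sentences available to every scenario.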

Similarly, you should consider how the framework will handle false positives from the tools. For instance, Mittn allows you to address the situation by tracking them in a database. BDD-Security allows you to address false positives with statements within the scenario, as seen in the Nessus example above, or in a separate table. Gauntlt's approach is to leverage "should not contain" statements within the scenario.

Since these tools are designed to be integrated into your continuous integration testing frameworks, you will want to evaluate how compatible they will be and how tightly you will need them integrated. For instance, quoting Continuum Security’s BDD introduction:

BDD-Security jobs can be run as a shell script or ant job. With the xUnit and JBehave plugins the results will be stored and available in unit test format. The HTML reports can also be published through Jenkins.

Mittn, which is based on Behave, can also produce JUnit XML test result documents. Gauntlt's default output is handled by Cucumber. By default, Gauntlt supports the pretty stdout format and HTML output, but you can modify the code to get JUnit output; there is an open improvement request to allow JUnit output through the config file. Gauntlt has documentation for Travis integration as well.

Overall, the tools were not difficult to deploy. Gauntlt's VirtualBox demo environment in its starter kit can, with a little work, be converted to deploy via Chef in the cloud. When choosing a framework, you should also consider the platform support of the security tools you intend to use and the platform of your integrated testing environments. For instance, using a GUI-based tool on a Linux server will require an X11 desktop to be installed.

All of these tools have promise depending on your preferences and needs. BDD-Security would be a good tool for web development teams who are familiar with tools like Selenium and want tight integration with their processes. Gauntlt's ability to support "generic" attacks makes it a good tool for teams that want to use a wide array of tools in their testing. Mittn is the youngest entry and doesn't yet have features like built-in spidering support, although Python developers can easily find libraries for spidering sites, and Mittn's external database approach to tracking issues may be useful for teams who have other systems that need to be notified of new results.

Before adopting one of these tools, an organization will likely do a buy-vs.-build analysis with commercial continuous monitoring offerings. For those who will be presenting the build argument, these tools provide enough to make a solid case in that discussion.

Where these frameworks add value is by allowing you to take your existing security tools (Burp, Nessus, etc.) and make them a part of your daily build process. By integrating with the continuous build system, you can immediately identify potential issues and ensure a minimum baseline of security. The scenario-based approach allows you to map requirements in your security policy to clearly defined unit tests. This evolution of open-source security frameworks that are designed to directly integrate with the DevOps process is an exciting step forward in the maturity of security testing.

Peleus Uhley
Lead Security Strategist

Observations From an OWASP Novice: OWASP AppSec Europe

Last month, I had the opportunity to attend OWASP AppSec Europe in Cambridge.

The conference was split into two parts. The first two days consisted of training courses and project summits, where the different OWASP project teams met to discuss problems and next steps, and the last two days were dedicated to conference and research presentations.

Admittedly an OWASP novice, I was excited to learn what OWASP has to offer beyond the Top 10 Project most of us are familiar with. As is commonly the case with conferences, a lot of interesting conversations occurred over coffee (or cider). I had the opportunity to meet some truly fascinating individuals who gave some great insight into the "other" side of the security fence, including representatives from the Information Security Group at Royal Holloway, various OWASP chapters, and many more.

One of my favorite presentations was from Sebastian Lekies, PhD candidate at SAP and the University of Bochum, who demonstrated website byte-level flow analysis by using a modified Chrome browser to find DOM-based XSS attacks. Taint tags were put on every byte of memory that comes from user input and traced through the whole execution until the data was displayed back to the user. This browser was used to automatically analyze the first two levels of all Alexa Top 5000 websites, finding that an astounding 9.6 percent carry at least one DOM-based XSS flaw.

Another interesting presentation was the third-day keynote by Lorenzo Cavallaro from Royal Holloway, University of London. He and his team are creating CopperDroid, an automatic analysis system that reconstructs the behavior of Android malware. It was a very technical, very interesting talk, and Lorenzo could have easily filled another 100 hours.

Rounding out the event were engaging activities that broke up the sessions – everything from the University Challenge to a game show to a (very Hogwarts-esque) conference dinner at Homerton College’s Great Hall.

All in all, it was an exciting opportunity for me to learn how OWASP has broadened its spectrum beyond web application security in the last few years, and to see all the resources that are currently available. I learned a lot, met some great people, and had a great time. I highly recommend it to anyone who has the opportunity to attend!

Lars Krapf
Security Researcher, Digital Marketing

Retiring the “Back End” Concept

Those of us who have been in the security industry for some time have grown very accustomed to the phrases "front end" and "back end." These terms, in part, came from the basic network architecture diagram that we used to see frequently when dealing with traditional network hosting:

[Diagram: traditional network hosting architecture, with web servers in DMZ 1 and databases in DMZ 2]

The phrase "front end" referred to anything located in DMZ 1, and "back end" referred to anything located in DMZ 2. This was convenient because the application layer discussion of "front" and "back" often matched nicely with the network diagram of "front" and "back." Your web servers were the first layer to receive traffic in DMZ 1, and the databases, which sat behind the web servers, were located in DMZ 2. Over time, this eventually led to the implicit assumption that a "back end" component was "protected by layers of firewalls" and "difficult for a hacker to reach."

How The Definition Is Changing

Today, the application layer diagrams for cloud architectures do not always match up as nicely with their network layer counterparts. At the network layer, the picture frequently turns into the diagram below:

[Diagram: cloud architecture in which "back end" services are reachable directly over untrusted networks]

In the cloud, the back end service may be an exposed API waiting for POST requests from the web server over potentially untrusted networks. In this example, the attacker can now directly reach the database over the network without having to pass through the web server layer.

Many traditional "back end" resources are now offered as stand-alone services. For instance, an organization may leverage a third-party database-as-a-service (DBaaS) solution that is separate from its cloud provider. In some instances, an organization may decide to make its S3 buckets public so that they can be directly accessed from the Internet.

Even when a company leverages integrated solutions offered by a cloud provider, shared resources frequently exist outside the defined, protected network. For instance, “back end” resources such as S3, SQS and DynamoDB will exist outside your trusted VPC. Amazon does a great job of keeping its AWS availability zones free from most threats. However, you may want to consider a defense-in-depth strategy where SSL is leveraged to further secure these connections to shared resources.
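
As a simple illustration of that defense-in-depth point, a worker talking to an exposed "back end" API can insist on a verified TLS connection and an authentication credential rather than assuming the network is trusted. This is a minimal Ruby sketch using only the standard library; the endpoint and bearer token are hypothetical:

require 'net/http'
require 'openssl'
require 'uri'

# Hypothetical "back end" API that now lives outside the trusted network segment.
uri = URI('https://backend-api.example.com/v1/jobs')

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_PEER # reject self-signed or forged certificates

request = Net::HTTP::Get.new(uri)
request['Authorization'] = 'Bearer <token>' # placeholder credential, not security by obscurity

response = http.request(request)
puts "#{response.code} #{response.message}"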

With the cloud, we can no longer assume that the application layer diagram and the network layer diagrams are roughly equivalent since stark differences can lead to distinctly different trust boundaries and risk levels. Security reviews of application services are now much more of a mix of network layer questions and application layer questions. When discussing a “back end” application component with a developer, here are a few sample questions to measure its exposure:

*) Does the component live within your private network segment, as a shared resource from your cloud provider, or is it completely external?

*) If the component is accessible over the Internet, are there Security Groups or other controls such as authentication that limit who can connect?

*) Are there transport security controls such as SSL or VPN for data that leaves the VPC or transits the Internet?

*) Is the data mirrored across the Internet to another component in a different AWS region? If so, what is done to protect the data as it crosses regions?

*) Does your threat model take into account that the connection crosses a trust boundary?

*) Do you have a plan to test this exposed “back end” API as though it was a front end service?

Obviously, this isn't a comprehensive list, since several of these questions will lead to follow-up questions. This list is just designed to get the discussion headed in the right direction. With proper controls, the cloud service may emulate a "back end," but you will need to ask the right questions to ensure that there isn't an implicit security-by-obscurity assumption.

The cloud has driven the creation of DevOps, which is the combination of software engineering and IT operations. Similarly, the cloud is morphing application security reviews to include more analysis of network layer controls. For those of us who date back to the DMZ days, we have to readjust our assumptions to reflect the fact that many of today's "back end" resources are now connected across untrusted networks.

Peleus Uhley
Lead Security Strategist


Another Successful Adobe Hackfest!

ASSET, along with members of the Digital Marketing security team, recently organized an internal "capture the flag" event called Adobe Hackfest. Now in its third year, this 10-day event accommodates teams spread across various geographies. The objective is for participants to find and exploit vulnerable endpoints to reveal secrets. The lucky contestants who completed all hacks at each level were entered to win some awesome prizes.

This year, we challenged participants with two vulnerabilities to hack at two different difficulty levels, carefully chosen to create security awareness within the organization. Using the two hacks as teaching opportunities, we targeted three information security concepts: cross-site scripting, SQL injection, and password storage. Our primary intention was to demonstrate the consequences of using insecure coding practices via a simulated vulnerable production environment.

Contributing to the event's success were logistical improvements carried over from previous events to create a more seamless experience. The event was heavily promoted internally, and we had specific channels for participants to ask questions or request hints, including three hosted Adobe Connect sessions in different time zones. The Digital Marketing security team also created a framework that generated unique secrets for every participant, and a leaderboard that updated automatically.

Participants worked very hard, which generated stiff competition, with more than 50 percent unlocking at least one secret and nearly 30 percent unlocking all four. Though our developers, quality engineers, and everyone else involved in shipping code undergo various information security trainings, this event helps bring theory into practice by emphasizing that there is no "silver bullet" when it comes to security and that a layered approach is important.

Participation was at an all-time high, and given the tremendous interest within Adobe, we are now planning to have Hackfests more frequently. Looking forward to Hackfest Autumn!

Vaibhav Gupta
Security Researcher

The Cloud as the Operating System

The current trend is to push more and more of our traditional desktop tasks to the cloud. We use the cloud for file storage, image processing, and a number of other activities. However, that transition is more complex than just copying the data from one location to another.

Desktop operating systems have evolved over decades to provide a complex series of controls and security protections for that data. These controls were developed in direct response to increasing usage and security requirements. When we move those tasks to the cloud, the business requirements that led to the evolution of those desktop controls remain in place. We must find ways to provide those same controls and protections using the cloud infrastructure. When looking at creating these security solutions in the cloud, I often refer back to the desktop OS architectures to learn from their designs.

Example: File storage

File storage on the desktop is actually quite complex under the hood. You have your traditional user, group and (world/other/everyone) classifications. Each of these classifications can be granted the standard read, write and execute permissions. This all seems fairly straightforward.

However, if you dig a little deeper, permissions often have to be granted to more than just one single group. End users often want to provide access to multiple groups and several additional individuals. The operating system can also layer on its own additional permissions. For instance, SELinux can add permissions regarding file usage and context that go beyond just simple user level permissions. Windows can provide fine-grained permissions on whether you can write data or just append data.
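
Even the "simple" layer is easy to underestimate. Here is a minimal Ruby sketch of the traditional owner/group/other model, using only the standard library; the file path and mode are arbitrary examples:

require 'etc'

path = '/tmp/shared-report.txt' # example file
File.write(path, 'example data')

# Owner may read and write, the group may read, everyone else gets nothing (0640).
File.chmod(0640, path)

stat = File.stat(path)
printf("owner=%s group=%s mode=%o\n",
       Etc.getpwuid(stat.uid).name,
       Etc.getgrgid(stat.gid).name,
       stat.mode & 0777)

Everything beyond this (multiple groups, append-only access, SELinux contexts) is layered on top, and a cloud storage service has to reproduce that layering with its own access control model.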

There are several different types of usage scenarios that led to the creation of these controls. For instance, some controls were created to allow end users to share information with entities they do not trust. Other controls were driven by the need for services to perform tasks with data on the user’s behalf.

Learning from the Desktop

While the technical details of how we store and retrieve data change when we migrate to the cloud, the fundamental principles and complexities of protecting that data still persist. When planning your file sharing service, you can use the desktop as a reference for how complex your permissions may need to be as the service scales. Will end users have complex file sharing scenarios with multiple groups and individuals? Will you have services that run in the background and perform maintenance on the user data? What permissions will those services need to process the data? These are hard problems to solve, and you don't want to reinvent these critical wheels from scratch.

Admittedly, there is not always a direct 1:1 mapping between the desktop and the cloud. For instance, the desktop OS gets to assume that the hard drive is physically connected to the CPU that will do the processing. In the cloud, your workers or services may be connecting to your cloud storage service across untrusted networks. This detail can require additional authentication and transport-level security controls on top of the traditional desktop controls.

Overall, the question that we face as engineers is how can we best take the lessons learned from the desktop and progress them forward to work in a cloud infrastructure. File storage and access is just one aspect of the desktop that is being migrated to the cloud. Going forward, I plan to dig deeper into this idea and similar topics that I learn from working with Adobe’s cloud teams.

Peleus Uhley
Lead Security Strategist

Adobe Digital Publishing Suite, Enterprise Edition Security Overview

This new DPS security white paper describes the proactive approach and procedures implemented by Adobe to increase the security of your data included in applications built with Digital Publishing Suite.

The paper outlines the Adobe Digital Publishing Suite Content Flow for Secure Content, available in Digital Publishing Suite v30 or later for apps with direct entitlement and retail folios entitlement. The secure content feature allows you to restrict the distribution of your content based on user credentials or roles.

The paper also outlines the security practices implemented by Adobe and our trusted partners.

Security threats and customer needs are ever-changing, so we’ll update the information in this white paper as necessary to address these changes.

Bronwen Matthews
Sr. Product Marketing Manager

A First Experiment with Chef

Security professionals are now using cloud solutions to manage large-scale security issues in the production Ops workflow. In order to achieve scale, the security processes are automated through tools such as Chef. However, you can also leverage the automation aspects of the cloud regardless of whether you plan to scale to hundreds of machines. For example, one of my first cloud projects was to leverage Chef to resolve a personal, small-scale issue where I might have previously used a VM or a separate machine.

Using the cloud for personal DevOps

At Adobe, we're constantly hiring third-party security consultants to test our products. They require environments for building and testing Adobe code, and they need access to tools like Cygwin, Wireshark, SysInternals, WinDbg, etc. For my personal testing, I also require access to machines with a similar setup. Using the cloud, it is possible to quickly spin up and destroy these types of security testing environments as needed.

For this project, I used Adobe's private cloud infrastructure, but that is just an implementation detail; this approach can work on any cloud infrastructure. Our IT department also provides an internal Chef server for hosting cookbooks and managing roles.

In order to connect the Chef server and the cloud environment, I decided to set up a web server which would copy the Chef config files to the cloud instance and launch chef-client. For this, I chose Windows IIS, C# and ASP.NET because I had an unrelated goal of learning more about WMI and PowerShell through a coding project. Vagrant for Windows would be an alternative to this approach but it wasn’t available when I began the project. My personal Linux box was used to write the Chef recipes and upload the cookbook to the Chef server.

The workflow of the set-up process is as follows:

1. Request a new Windows 7 64-bit instance from the cloud and wait for the response saying it is ready.

2. The user provides the domain name and administrator password for the Windows 7 instance to the web page. If I need to set up additional accounts on the machine, I can also collect those here and run command lines over WMI to create them.

3. Using the admin credentials, the web server issues a series of remote WMI calls to create a PowerShell script on the Windows 7 instance. WMI doesn't allow you to directly create files on remote machines, so I had to use "echo" and redirect the output.

4. By default, Windows doesn't allow you to run unsigned PowerShell scripts, but with admin privileges you can use a command line to disable the signature check before executing the script. Once the script is done, the signature check is re-enabled.

5. The PowerShell script downloads the client.rb file and validator.pem key needed to register the Windows 7 instance with the Chef server.

6. WMI can then be used to run the chef-client command line, which registers the new Windows 7 instance with the Chef server.

7. Since Chef requires a privileged user to assign roles to the Windows 7 node, a separate Chef key and a copy of the knife utility are stored locally on the web server. The C# server code executes knife using the privileged key to assign the appropriate role to the new Windows 7 node in Chef (a sketch of such a role appears after this list).

8. Lastly, the web server uses WMI to run chef-client on the Windows 7 instance a second time so that the recipes are executed. The last Chef recipe in the run list creates a finished.txt file on the file system so that the web server can verify that the process is complete.
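
For reference, the role assigned in step 7 is just a small Chef artifact. The role name and cookbook names below are hypothetical; the point is only that a single role can pull in every tool recipe the testing workstation needs:

# roles/security_workstation.rb (hypothetical role and cookbook names)
name 'security_workstation'
description 'Disposable Windows 7 instance for security testing'
run_list(
  'recipe[windows]',
  'recipe[security_tools::windbg]',
  'recipe[security_tools::cygwin]',
  'recipe[security_tools::wireshark]'
)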

Lessons Learned from Writing the Recipes

Using Chef, I was able to install common tools such as WinDbg, Cygwin (with custom-selected utilities), SysInternals, Wireshark, Chrome, etc. Chef can be used to execute VBScript, which allowed me to accomplish tasks such as running Windows Update during the setup process. For most recipes, the Opscode Windows cookbook made writing the recipes fairly straightforward.
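
For a typical tool, the recipe is only a few lines. Here is a hypothetical example using the windows cookbook's windows_package resource; the tool name, download URL, and /S silent-install flag are placeholders that would need to be verified against the real installer:

include_recipe 'windows'

# Hypothetical single-tool install; source URL and silent-install option are placeholders.
windows_package 'Example Tool' do
  source 'https://example.com/installers/example-tool-setup.exe'
  installer_type :custom
  options '/S'
  action :install
end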

Some of the installers require a little legwork. For instance, you may have to search the web for the command line flags necessary for a silent install. If you cannot automatically download an installer from the web, then the installer can be stored on the Chef server along with the recipes. For Wireshark, it was necessary to download and install WinPcap before downloading and installing Wireshark. One client installer was not Chef-friendly, which led to the creation of a Chef script that would have Windows Task Scheduler install the software instead.

As a specific example, Cygwin can be one of the more complicated installs because it is a four-step process. There are some public Cygwin recipes available for more advanced installs, but the following is enough to get it installed using just the Opscode Windows cookbook. For a robust deployment, additional code can be added to ensure that it doesn't get run twice if Cygwin is already installed. The demo workflow below uses windows_package to download the setup executable. The windows_batch commands then run the executable to install the default configuration, download additional utilities, and install those additional utilities:

include_recipe 'windows'

# Download the Cygwin setup executable and fetch the default package set.
windows_package 'cygwin64' do
  source 'http://cygwin.com/setup-x86_64.exe'
  installer_type :custom
  action :install
  options '--download --quiet-mode --local-package-dir c:\\cygwin_downloads --root c:\\cygwin64 -s http://mirror.cs.vt.edu/pub/cygwin/cygwin/'
end

windows_batch 'cygwin_install_from_local_dir' do
  code <<-EOH
    rem Install the default configuration from the local package directory.
    #{Chef::Config[:file_cache_path]}/setup-x86_64.exe --local-install --quiet-mode --local-package-dir c:\\\\cygwin_downloads --root c:\\\\cygwin64

    rem Download the additional utilities.
    #{Chef::Config[:file_cache_path]}/setup-x86_64.exe --download --quiet-mode --local-package-dir c:\\\\cygwin_downloads --root c:\\\\cygwin64 -s http://mirror.cs.vt.edu/pub/cygwin/cygwin/ --packages vim,curl,wget,python,cadaver,rats,nc,nc6,git,subversion,flawfinder,stunnel,emacs,openssh,openssl,binutils

    rem Install the additional utilities from the local package directory.
    #{Chef::Config[:file_cache_path]}/setup-x86_64.exe --quiet-mode --local-package-dir c:\\\\cygwin_downloads --root c:\\\\cygwin64 -s http://mirror.cs.vt.edu/pub/cygwin/cygwin/ --packages vim,curl,wget,python,cadaver,rats,nc,nc6,git,subversion,flawfinder,stunnel,emacs,openssh,openssl,binutils -L

    Exit 0
  EOH
end

Post-Project Reflections

There are definitely alternative approaches, such as using a baseline VM snapshot or a pre-built AMI with all of these tools installed. They each have their own pros and cons. For instance, local VMs launch faster, but the scaling is limited by disk space. I chose Chef recipes because they provide the flexibility to create custom builds and ensure that everything is current. If needed, the system is able to scale quickly. The extra work in writing the server meant that I could make the process available to other team members.

Overall, despite having to be creative with some recipes, the project didn’t require a huge investment of time. A large portion was written while killing time in airports. The fact that I was able to go from a limited amount of knowledge to a working implementation fairly quickly speaks to the ease of Chef. If you are interested in learning more about the cloud, you don’t need a large, massively scalable project in order to get your hands dirty. It is possible to start with simple problems like creating disposable workstations.

Peleus Uhley
Lead Security Strategist

Approaching Cloud Security From Two Perspectives

Last month, I was in Amsterdam to give a talk at SecureCloud 2014, a conference hosted by the Cloud Security Alliance. The conference attendees included a number of governmental policy-makers, and provided an opportunity for people from around the world to discuss the future of cloud computing security.

This conference was co-sponsored by the European Union Agency for Network and Information Security, or ENISA. They are actively working toward assisting Europe in adopting cloud technologies and simplifying governmental regulations across Europe. Therefore, they were able to attract representatives from several governments to share ideas on leveraging the cloud for the betterment of Europe.

EU Governments are Adopting Cloud Technology

To set context for the state of cloud adoption, Udo Helmbrecht, the Executive Director of ENISA, shared a slide during his presentation depicting the deployment models of government clouds in different countries. This information was from ENISA's Good Practice Guide for securely deploying Governmental Clouds.

[Slide: deployment models of government clouds across EU countries]

According to their numbers, at least 14 EU countries have developed a national cloud strategy or digital agenda. The European Commission is spearheading a number of initiatives, such as "Unleashing the Potential of Cloud Computing in Europe," aimed at encouraging further uptake of cloud computing services in the EU, in both the public and private sectors. ENISA is working together with the European Council on several of those initiatives, such as defining the role of cloud certification schemes.

One example of governments taking advantage of the cloud was given by Evangelos Floros, the product manager for Okeanos. Okeanos is a public cloud built for the Greek academic and research community. In addition, Arjan de Jong presented on how the Dutch government is experimenting with a closed, government cloud for internal use. If their experiment is successful, then they will progress towards expanding the scale of their cloud offerings. Many of the presentations from SecureCloud can be found on their website.

A Different Perspective from Amsterdam

It was interesting to see all the different top-down, government perspectives from policy-makers at the CSA SecureCloud conference last month. This month, I will be back in Amsterdam for the Hack in the Box conference and Haxpo. This will be a very different group of people who help secure the Internet from the bottom up through innovative exploit techniques and secure implementations. Karthik Raman and I will be presenting there on the topic of securing cloud storage. Our presentation will involve a mix of cloud strategy as well as some technical implementation solutions. If you are attending, please come by our talk or the Adobe booth to say hello.

Peleus Uhley
Lead Security Strategist


ColdFusion 11 Enhances the Security Foundation of ColdFusion 10

Tuesday marked the release of ColdFusion 11, the most advanced version of the platform to date. In this release, many of the features introduced in ColdFusion 10 have been upgraded and strengthened, and developers will now have access to an even more extensive toolkit of security controls and additional features. 

A few of the most significant ColdFusion 11 upgrades fall into three categories. The release includes advances in the Secure Profile feature, access to more OWASP tools, and a host of new APIs and development resources.

1. More OWASP Tools

In ColdFusion 11, several new OWASP tools have been added to provide more integrated security features. For instance, features from the AntiSamy project have been included to help developers safely display controlled subsets of user-supplied HTML/CSS. ColdFusion 11 exposes AntiSamy through the new getSafeHTML() and isSafeHTML() functions.

In addition, ColdFusion 11 contains more tools from OWASP’s Enterprise Security API library, or ESAPI, including the EncodeForXPath and EncodeForXMLAttribute features. These ESAPI features provide developers more flexibility to update the security of existing applications and serve as a strong platform for new development.

2. Flexible Secure Profile Controls

Secure Profile was a critical feature in ColdFusion 10, because it allowed administrators to deploy ColdFusion with secure defaults. In the ColdFusion 11 release, admins have even more flexibility when deploying Secure Profile.

In ColdFusion 10, customers could choose whether or not to enable the secure install only at the time of installation, depending on their preferences. With ColdFusion 11, customers now have the ability to turn Secure Profile on or off after installation, whenever they'd like, which streamlines the lockdown process to prevent a variety of attacks.

Further improvements to the Secure Profile are documented here.

3. Integrating Security into Existing APIs

ColdFusion 11 has many upgraded APIs and features, but there are a few I'd like to highlight here. First, ColdFusion 11 includes an advanced password-based key derivation function, PBKDF2, which allows developers to create encryption keys from passwords using an industry-accepted cryptographic algorithm. Additionally, the cfmail feature now supports the ability to send S/MIME encrypted e-mails. Another ColdFusion 11 update is the ability to enable SSL for WebSockets. More security upgrade information can be found in the ColdFusion 11 docs.

Overall, this latest iteration of the platform increases flexibility for developers, while enhancing security. Administrators will now find it even easier to lock down their environments. For information on additional security features please refer to the Security Enhancements (ColdFusion 11) page and the CFML Reference (ColdFusion 11).

Peleus Uhley
Lead Security Strategist

Using a Smart System to Scale and Target Proactive Security Guidance

One important step in the Adobe Secure Product Lifecycle is embedding security into product requirements and planning. To help with this effort, we've begun using a third-party tool called SD Elements.


SD Elements is a smart system that helps us scale our proactive security guidance by allowing us to define and recommend targeted security requirements to product teams across the company in an automated fashion. The tool enables us to provide more customized guidance to product owners than we could using a generic OWASP Top 10 or SANS Top 20 Controls for Internet Security list, and it provides development teams with specific, actionable recommendations. We use this tool not only for our "light touch" product engagements, but also to provide our "heavy touch" engagements with the same level of consistent guidance as a foundation from which to work.

Another benefit of the tool is that it helps make proactive security activities more measurable, which in turn helps demonstrate results that can be reported to upper management.

ASSET has worked with the third-party vendor Security Compass to enhance SD Elements by providing feedback from "real world" usage of the product. The benefit to Adobe is that we get a more customized tool right off the shelf; beyond this, we've used the specialized features to tailor the product to fit our needs even more.

We employ many different tools and techniques with the SPLC, and SD Elements is just one of them, but we are starting to see success in the use of the product. It helps us make sure that product teams are adhering to a basic set of requirements and provides customized, actionable recommendations on top. For more information on how we use the tool within Adobe, please see the SD Elements Webcast.

If you’re interested in SD Elements you can check out their website.

Jim Hong
Group Technical Program Manager