Author Archive: Peleus Uhley

Retiring the “Back End” Concept

Those of us who have been in the security industry for some time have grown very accustomed to the phrases “front end” and “back end.” These terms came, in part, from the basic network architecture diagram that we used to see frequently when dealing with traditional network hosting:

[Figure: traditional network hosting architecture with DMZ 1 and DMZ 2]

The phrase “front end” referred to anything located in DMZ 1, and “back end” referred to anything located in DMZ 2. This was convenient because the application layer discussion of “front” and “back” often matched nicely with the network diagram of “front” and “back.”  Your web servers were the first layer to receive traffic in DMZ 1 and the databases which were behind the web servers were located in DMZ 2. Over time, this eventually led to the implicit assumption that a “back end” component was “protected by layers of firewalls” and “difficult for a hacker to reach.”

How The Definition Is Changing

Today, the application layer diagrams for cloud architectures do not always match up as nicely with their network layer counterparts. At the network layer, the architecture frequently looks more like the diagram below:

[Figure: cloud architecture with the back end service exposed directly to the network]

In the cloud, the back end service may be an exposed API waiting for posts from the web server over potentially untrusted networks. In this example, the attacker can now directly reach the database over the network without having to pass through the web server layer.

Many traditional “back end” resources are now offered as standalone services. For instance, an organization may leverage a third-party database-as-a-service (DBaaS) solution that is separate from its cloud provider. In some instances, an organization may decide to make its S3 buckets public so that they can be directly accessed from the Internet.

Even when a company leverages integrated solutions offered by a cloud provider, shared resources frequently exist outside the defined, protected network. For instance, “back end” resources such as S3, SQS and DynamoDB will exist outside your trusted VPC. Amazon does a great job of keeping its AWS availability zones free from most threats. However, you may want to consider a defense-in-depth strategy where SSL is leveraged to further secure these connections to shared resources.
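For example, one common defense-in-depth control is to require SSL/TLS for every request to a shared storage resource. The sketch below is only a minimal illustration, assuming Python with boto3 and a hypothetical bucket name; it attaches the well-known bucket policy that denies any request arriving over an insecure transport:

import json
import boto3

bucket = "example-shared-bucket"  # hypothetical bucket name
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{bucket}",
            f"arn:aws:s3:::{bucket}/*",
        ],
        # Reject any request that does not arrive over SSL/TLS.
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))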

With the cloud, we can no longer assume that the application layer diagram and the network layer diagrams are roughly equivalent since stark differences can lead to distinctly different trust boundaries and risk levels. Security reviews of application services are now much more of a mix of network layer questions and application layer questions. When discussing a “back end” application component with a developer, here are a few sample questions to measure its exposure:

*) Does the component live within your private network segment, is it a shared resource from your cloud provider, or is it completely external?

*) If the component is accessible over the Internet, are there Security Groups or other controls such as authentication that limit who can connect?

*) Are there transport security controls such as SSL or VPN for data that leaves the VPC or transits the Internet?

*) Is the data mirrored across the Internet to another component in a different AWS region? If so, what is done to protect the data as it crosses regions?

*) Does your threat model take into account that the connection crosses a trust boundary?

*) Do you have a plan to test this exposed “back end” API as though it was a front end service?

Obviously, this isn’t a comprehensive list, since several of these questions will lead to follow-up questions. This list is just designed to get the discussion headed in the right direction. With proper controls, the cloud service may emulate a “back end,” but you will need to ask the right questions to ensure that there isn’t an implicit security-by-obscurity assumption.
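As one concrete illustration of the Security Groups question above, the following hedged sketch (Python with boto3; the security group ID and CIDR range are hypothetical) restricts a database port so that only the web tier subnet can connect:

import boto3

ec2 = boto3.client("ec2")

# Allow PostgreSQL traffic only from the web tier subnet; anything else
# is implicitly denied by the security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": "10.0.1.0/24"}],
    }],
)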

The cloud has driven the creation of DevOps, which is the combination of software engineering and IT operations. Similarly, the cloud is morphing application security reviews to include more analysis of network layer controls. For those of us who date back to the DMZ days, we have to readjust our assumptions to reflect the fact that many of today’s “back end” resources are now connected across untrusted networks.

Peleus Uhley
Lead Security Strategist


The Cloud as the Operating System

The current trend is to push more and more of our traditional desktop tasks to the cloud. We use the cloud for file storage, image processing and a number of other activities.  However, that transition is more complex than just copying the data from one location to another.

Desktop operating systems have evolved over decades to provide a complex series of controls and security protections for that data. These controls were developed in direct response to increasing usage and security requirements. When we move those tasks to the cloud, the business requirements that led to the evolution of those desktop controls remain in place. We must find ways to provide those same controls and protections using the cloud infrastructure. When looking at creating these security solutions in the cloud, I often refer back to the desktop OS architectures to learn from their designs.


Example: File storage

File storage on the desktop is actually quite complex under the hood. You have your traditional user, group and (world/other/everyone) classifications. Each of these classifications can be granted the standard read, write and execute permissions. This all seems fairly straightforward.

However, if you dig a little deeper, permissions often have to be granted to more than just one single group. End users often want to provide access to multiple groups and several additional individuals. The operating system can also layer on its own additional permissions. For instance, SELinux can add permissions regarding file usage and context that go beyond just simple user level permissions. Windows can provide fine-grained permissions on whether you can write data or just append data.
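As a small illustration of those classic permission classes, here is a minimal Python sketch (the file name is hypothetical) that inspects and adjusts the user/group/other bits on a desktop file system:

import os
import stat

path = "report.txt"  # hypothetical file
mode = os.stat(path).st_mode

# The three classic classes: user (owner), group, and other/everyone,
# each with read, write and execute bits.
owner_can_write = bool(mode & stat.S_IWUSR)
group_can_read = bool(mode & stat.S_IRGRP)
other_can_execute = bool(mode & stat.S_IXOTH)

# Grant read access to the group and revoke all access for "other".
os.chmod(path, (mode | stat.S_IRGRP) & ~stat.S_IRWXO)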

There are several different types of usage scenarios that led to the creation of these controls. For instance, some controls were created to allow end users to share information with entities they do not trust. Other controls were driven by the need for services to perform tasks with data on the user’s behalf.


Learning from the Desktop

While the technical details of how we store and retrieve data change when we migrate to the cloud, the fundamental principles and complexities of protecting that data persist. When planning your file sharing service, you can use the desktop as a reference for how complex your permissions may need to be as the service scales. Will end users have complex file sharing scenarios with multiple groups and individuals? Will you have services that run in the background and perform maintenance on the user data? What permissions will those services need to process the data? These are hard problems to solve, and you don’t want to reinvent these critical wheels from scratch.
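As a purely illustrative sketch of where that complexity leads, the Python fragment below models an object-level ACL that supports an owner, multiple groups and individuals, and a background service account. It does not reflect any particular product; it simply mirrors the desktop concepts discussed above:

from dataclasses import dataclass, field

@dataclass
class AccessControlEntry:
    principal: str   # a user ID, group ID, or service account
    permissions: set  # e.g. {"read", "write", "append"}

@dataclass
class StoredObject:
    owner: str
    acl: list = field(default_factory=list)

    def can(self, principal, permission, groups=frozenset()):
        # The owner keeps full control, mirroring the desktop model.
        if principal == self.owner:
            return True
        # Otherwise, look for a matching user, group, or service entry.
        for entry in self.acl:
            if entry.principal == principal or entry.principal in groups:
                if permission in entry.permissions:
                    return True
        return False

# A document shared with a group and with a background indexing service.
doc = StoredObject(owner="alice")
doc.acl.append(AccessControlEntry("team-design", {"read", "write"}))
doc.acl.append(AccessControlEntry("svc-indexer", {"read"}))

assert doc.can("bob", "read", groups=frozenset({"team-design"}))
assert not doc.can("svc-indexer", "write")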

Admittedly, there is not always a direct 1:1 mapping between the desktop and the cloud. For instance, the desktop OS gets to assume that the hard drive is physically connected to the CPU that will do the processing. In the cloud, your workers or services may be connecting to your cloud storage service across untrusted networks. This detail layers additional authentication and transport-level security controls on top of the traditional desktop controls.

Overall, the question that we face as engineers is how can we best take the lessons learned from the desktop and progress them forward to work in a cloud infrastructure. File storage and access is just one aspect of the desktop that is being migrated to the cloud. Going forward, I plan to dig deeper into this idea and similar topics that I learn from working with Adobe’s cloud teams.

Peleus Uhley
Lead Security Strategist

A First Experiment with Chef

Security professionals are now using cloud solutions to manage large-scale security issues in the production Ops workflow. In order to achieve scale, the security processes are automated through tools such as Chef. However, you can also leverage the automation aspects of the cloud regardless of whether or not you plan to scale to hundreds of machines. For example, one of my first cloud projects was to leverage Chef to resolve a personal, small-scale issue where I might have previously used a VM or separate machine.

Using the cloud for personal DevOps

At Adobe, we’re constantly hiring third-party security consultants to test our products. They require environments for building and testing Adobe code, and they need access to tools like Cygwin, Wireshark, SysInternals, WinDbg, etc. For my personal testing, I also require access to machines with a similar setup. Using the cloud, it is possible to quickly spin up and destroy these types of security testing environments as needed.

For this project, I used Adobe’s private cloud infrastructure, but that is just an implementation detail; this approach can work on any cloud infrastructure. Our IT department also provides an internal Chef server for hosting cookbooks and managing roles.

In order to connect the Chef server and the cloud environment, I decided to set up a web server which would copy the Chef config files to the cloud instance and launch chef-client. For this, I chose Windows IIS, C# and ASP.NET because I had an unrelated goal of learning more about WMI and PowerShell through a coding project. Vagrant for Windows would be an alternative to this approach but it wasn’t available when I began the project. My personal Linux box was used to write the Chef recipes and upload the cookbook to the Chef server.

The workflow of the set-up process is as follows:

1. Request a new Windows 7 64-bit instance from the cloud and wait for the response saying it is ready.

2. The user provides the domain name and administrator password for the Windows 7 instance to the web page. If I need to set up additional accounts on the machine, I can also collect those here and create them by running command lines over WMI.

3. Using the admin credentials, the web server issues a series of remote WMI calls to create a PowerShell script on the Windows 7 instance. WMI doesn’t allow you to directly create files on remote machines, so I had to use “echo” and redirect the output.

4. By default, Windows doesn’t allow you to run unsigned PowerShell scripts. However, with admin privileges you can use a command line to disable the signature check before executing the script. Once the script is done, the signature check is re-enabled.

5. The PowerShell script downloads the client.rb file and the validator.pem key needed to register the Windows 7 instance with the Chef server.

6. WMI is then used to run the chef-client command line, which registers the new Windows 7 instance with the Chef server.

7. Since Chef requires a privileged user to assign roles to the Windows 7 node, a separate Chef key and a copy of the knife utility are stored locally on the web server. The C# server code executes knife with the privileged key to assign the appropriate role to the new Windows 7 node in Chef.

8. Lastly, the web server uses WMI to run chef-client on the Windows 7 instance a second time so that the recipes are executed. The last Chef recipe in the list creates a finished.txt file on the file system so that the web server can verify the process is complete.

Lessons Learned from Writing the Recipes

Using Chef, I was able to install common tools such as WinDbg, Cygwin (with custom-selected utilities), SysInternals, Wireshark, Chrome, etc. Chef can also execute VBScript, which allowed me to accomplish tasks such as running Windows Update during the setup process. For most tools, the Opscode Windows cookbook made writing the recipes fairly straightforward.

Some of the installers will require a little legwork. For instance, you may have to search the web for the command line flags necessary for a silent install. If you cannot automatically download an installer from the web, then the installer can be stored on the Chef server along with the recipes. For Wireshark, it was necessary to download and install WinPcap before downloading and installing Wireshark. One client installer was not Chef-friendly, which led to the creation of a Chef script that would have Windows Task Scheduler install the software instead.

As a specific example, Cygwin can be one of the more complicated installs because it is a four-step process. There are some public Cygwin recipes available for more advanced installs, but the following is enough to get it installed using just the Opscode Windows cookbook. For a robust deployment, additional code can be added to ensure that it doesn’t get run twice if Cygwin is already installed. The demo workflow below uses windows_package to download the setup executable. The windows_batch commands then run the executable to install the default configuration, download additional utilities and install those additional utilities:

include_recipe "windows"

windows_package 'cygwin64' do
    # Download the Cygwin setup executable into Chef's file cache.
    source 'http://cygwin.com/setup-x86_64.exe'
    installer_type :custom
    action :install
    options '--download --quiet-mode --local-package-dir c:\\cygwin_downloads --root c:\\cygwin64 -s http://mirror.cs.vt.edu/pub/cygwin/cygwin/'
end

windows_batch 'cygwin_install_from_local_dir' do
code <<-EOH
    rem Install the default Cygwin configuration from the local package directory.
    #{Chef::Config[:file_cache_path]}/setup-x86_64.exe --local-install --quiet-mode --local-package-dir c:\\\\cygwin_downloads --root c:\\\\cygwin64

    rem Download the additional utilities from the mirror.
    #{Chef::Config[:file_cache_path]}/setup-x86_64.exe --download --quiet-mode --local-package-dir c:\\\\cygwin_downloads --root c:\\\\cygwin64 -s http://mirror.cs.vt.edu/pub/cygwin/cygwin/ --packages vim,curl,wget,python,cadaver,rats,nc,nc6,git,subversion,flawfinder,stunnel,emacs,openssh,openssl,binutils

    rem Install the downloaded packages from the local package directory (-L).
    #{Chef::Config[:file_cache_path]}/setup-x86_64.exe --quiet-mode --local-package-dir c:\\\\cygwin_downloads --root c:\\\\cygwin64 -s http://mirror.cs.vt.edu/pub/cygwin/cygwin/ --packages vim,curl,wget,python,cadaver,rats,nc,nc6,git,subversion,flawfinder,stunnel,emacs,openssh,openssl,binutils -L

    Exit 0
EOH
end

Post-Project Reflections

There are definitely alternative approaches, such as using a baseline VM snapshot or a pre-built AMI with all of these tools installed. Each has its own pros and cons. For instance, local VMs launch faster, but the scaling is limited by disk space. I chose Chef recipes because they provide the flexibility to create custom builds and ensure that everything is current. If needed, the system is able to scale quickly. The extra work in writing the server meant that I could make the process available to other team members.

Overall, despite having to be creative with some recipes, the project didn’t require a huge investment of time. A large portion was written while killing time in airports. The fact that I was able to go from a limited amount of knowledge to a working implementation fairly quickly speaks to the ease of Chef. If you are interested in learning more about the cloud, you don’t need a large, massively scalable project in order to get your hands dirty. It is possible to start with simple problems like creating disposable workstations.

Peleus Uhley
Lead Security Strategist

Approaching Cloud Security From Two Perspectives

Last month, I was in Amsterdam to give a talk at SecureCloud 2014, a conference hosted by the Cloud Security Alliance. The conference attendees included a number of governmental policy-makers, and provided an opportunity for people from around the world to discuss the future of cloud computing security.

This conference was co-sponsored by the European Union Agency for Network and Information Security, or ENISA. They are actively working toward assisting Europe in adopting cloud technologies and simplifying governmental regulations across Europe. Therefore, they were able to attract representatives from several governments to share ideas on leveraging the cloud for the betterment of Europe.

EU Governments are Adopting Cloud Technology

To set context for the state of cloud adoption, Udo Helmbrecht, the Executive Director of ENISA, shared a slide during his presentation depicting the deployment models of government clouds in different countries. This information comes from ENISA’s Good Practice Guide for securely deploying Governmental Clouds.

[Slide: deployment models of government clouds across EU countries, from ENISA]


According to their numbers, at least 14 EU countries have developed a national cloud strategy or digital agenda. The European Commission is spearheading a number of initiatives, such as “Unleashing the Potential of Cloud Computing in Europe,” aimed at encouraging further uptake of cloud computing services in the EU, both in the public and private sector. ENISA is working together with the European Council on several of those initiatives, such as defining the role of cloud certification schemes.

One example of governments taking advantage of the cloud was given by Evangelos Floros, the product manager for Okeanos. Okeanos is a public cloud built for the Greek academic and research community. In addition, Arjan de Jong presented on how the Dutch government is experimenting with a closed, government cloud for internal use. If their experiment is successful, then they will progress towards expanding the scale of their cloud offerings. Many of the presentations from SecureCloud can be found on their website.

A Different Perspective from Amsterdam

It was interesting to see all the different top-down, government perspectives from policy-makers at the CSA SecureCloud conference last month. This month, I will be back in Amsterdam for the Hack in the Box conference and Haxpo. This will be a very different group of people who help secure the Internet from the bottom up through innovative exploit techniques and secure implementations. Karthik Raman and I will be presenting there on the topic of securing cloud storage. Our presentation will involve a mix of cloud strategy as well as some technical implementation solutions. If you are attending, please come by our talk or the Adobe booth to say hello.

Peleus Uhley
Lead Security Strategist


ColdFusion 11 Enhances the Security Foundation of ColdFusion 10

Tuesday marked the release of ColdFusion 11, the most advanced version of the platform to date. In this release, many of the features introduced in ColdFusion 10 have been upgraded and strengthened, and developers will now have access to an even more extensive toolkit of security controls and additional features. 

A few of the most significant ColdFusion 11 upgrades fall into three categories. The release includes advances in the Secure Profile feature, access to more OWASP tools, and a host of new APIs and development resources.

1. More OWASP Tools

In ColdFusion 11, several new OWASP tools have been added to provide more integrated security features. For instance, features from the AntiSamy project have been included to help developers safely display controlled subsets of user-supplied HTML/CSS. ColdFusion 11 exposes AntiSamy through the new getSafeHTML() and isSafeHTML() functions.

In addition, ColdFusion 11 contains more tools from OWASP’s Enterprise Security API library, or ESAPI, including the EncodeForXPath and EncodeForXMLAttribute features. These ESAPI features provide developers more flexibility to update the security of existing applications and serve as a strong platform for new development.

2. Flexible Secure Profile Controls

Secure Profile was a critical feature in ColdFusion 10, because it allowed administrators to deploy ColdFusion with secure defaults. In the ColdFusion 11 release, admins have even more flexibility when deploying Secure Profile.

In ColdFusion 10, customers could choose whether to enable the Secure Profile only at installation time. With ColdFusion 11, customers now have the ability to turn Secure Profile on or off after installation, whenever they’d like, which streamlines the process of locking down an environment against a variety of attacks.

Further improvements to the Secure Profile are documented here.

3. Integrating Security into Existing APIs

ColdFusion 11 has many upgraded APIs and features, but there are a few I’d like to highlight here. First, ColdFusion 11 includes an advanced password-based key derivation function, called PBKDF2, which allows developers to create encryption keys from passwords using an industry-accepted cryptographic algorithm. Additionally, the cfmail feature now supports the ability to send S/MIME encrypted e-mails. Another ColdFusion 11 update includes the ability to enable SSL for WebSockets. More security upgrade information can be found in the ColdFusion 11 docs.
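To illustrate the underlying idea (in Python rather than CFML, so the snippet below is only an analogy to what ColdFusion 11 provides), PBKDF2 stretches a password and a salt into key material through many hash iterations. The salt and iteration count here are illustrative values:

import hashlib
import os

password = b"correct horse battery staple"  # illustrative password
salt = os.urandom(16)                        # store the salt alongside the ciphertext
iterations = 100_000                         # tune to your performance budget

# Derive a 256-bit key suitable for a symmetric cipher such as AES-256.
key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
print(key.hex())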

Overall, this latest iteration of the platform increases flexibility for developers, while enhancing security. Administrators will now find it even easier to lock down their environments. For information on additional security features please refer to the Security Enhancements (ColdFusion 11) page and the CFML Reference (ColdFusion 11).

Peleus Uhley
Lead Security Strategist

Top 10 Hacking Techniques of 2013: A Few Things to Consider in 2014

For the last few years, I’ve been a part of the annual ranking of top 10 web hacking techniques organized by WhiteHat Security. Each year, it’s an honor to be asked to participate, and this year is no different. Not only does judging the Top 10 Web Hacking Techniques allow me to research these potential threats more closely, it also informs my day-to-day work.

WhiteHat’s Matt Johansen and Johnathan Kuskos have provided a detailed overview of the top 10 with some highlights available via this webinar.  This blog post will further describe some of the lessons learned from the community’s research.

1. XML-based Attacks Will Receive More Attention

This year, two of the top 15 focused on XML-based attacks. XML is the foundation of a large portion of the information we exchange over the Internet, making it an important area of study.

Specifically, both researchers focused on XML External Entities. In terms of practical applications of their research, last month Facebook gave out their largest bug bounty yet for an XML external entity attack. The Facebook attack demonstrated an arbitrary file read that they later re-classified as a potential RCE bug.

Advanced XML features such as XML external entities, XSLT and similar options are very powerful. If you are using an XML parser, be sure to check which features can be disabled to reduce your attack surface. For instance, the Facebook patch for the exploit was to set libxml_disable_entity_loader(true).
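As a minimal illustration in Python (assuming the lxml library is available; defusedxml is another common option), a parser can be configured up front to refuse external entities, external DTDs and network access:

from lxml import etree

# Refuse to resolve entities, load external DTDs, or reach out to the network.
hardened = etree.XMLParser(
    resolve_entities=False,
    no_network=True,
    load_dtd=False,
)

doc = etree.fromstring(b"<order><item>widget</item></order>", parser=hardened)
print(doc.findtext("item"))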

In addition, JSON is becoming an extensively used alternative to XML. As such, the JSON community is adding similar features to the JSON format. Developers will need to understand all the features that their JSON parsers support to ensure that their parsers are not providing more functionality than their APIs are intended to support.

2. SSL Takes Three of the Top 10 Spots

In both the 2011 and 2012 Top 10 lists, SSL attacks made it into the top spot. For the 2013 list, three attacks on SSL made it into the top 10: Lucky 13, BREACH and Weaknesses in RC4. Advances in research always lead to more advances in research. In fact, the industry has already seen its first new report against SSL in 2014. It is hard to predict how much farther and faster research will advance, but it is safe to assume that it will.

Last year at BlackHat USA, Alex Stamos, Thomas Ptacek, Tom Ritter and Javed Samuel presented a session titled “The Factoring Dead: Preparing for the Cryptopocalypse.” In the presentation, they highlighted some of the challenges that the industry is facing in preparing for a significant breach of a cryptographic algorithm or protocol. Most systems are not designed for cryptographic agility and updating cryptography requires a community effort.

These three Top 10 entries further highlight the need for our industry to improve crypto agility within our critical infrastructure. Developers and administrators, you should start examining your environments for TLS v1.2 support. All major browsers currently support this protocol. Also, review your infrastructure to determine if you could easily adopt future versions of TLS and/or different cryptographic ciphers for your TLS communication. The OWASP Transport Layer Protection Cheat Sheet provides more information on steps to harden your TLS implementation.
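As a small, hedged example of checking that agility in practice, the Python sketch below (requires a reasonably recent Python; the host name is illustrative) refuses to negotiate anything older than TLS 1.2 and reports what the server actually agreed to:

import socket
import ssl

host = "example.com"  # illustrative host

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3, TLS 1.0 and TLS 1.1

with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        # Report the protocol version and cipher suite that were negotiated.
        print(tls.version(), tls.cipher())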

3. XSS Continues to Be a Common Concern for Security Professionals

We’ve known about cross-site scripting (XSS) in the community for over a decade, but it’s interesting that people still find innovative ways to both produce and detect it. At the most abstract level, solving the problem is complex because JavaScript is a Turing-complete language that is under active development. HTML5 and CSS3 are on the theoretical edge of Turing-completeness in that you can implement Rule 110 so long as you have human interaction. Therefore, in theory, you could not make an absolute statement about the security of a web page without solving the halting problem.

The No. 1 entry in the Top 10 this year demonstrated that this problem is further complicated due to the fact that browsers will try to automatically correct bad code. What you see in the written code is not necessarily what the browser will interpret at execution. To solve this, any static analysis approach would not only need to know the language but also know how the browser will rewrite any flaws.

This is why HTML5 security advances such as Content Security Policy (CSP) and iframe sandboxes are so important (or even non-standards-based protections such as X-XSS-Protection). Static analysis will be able to help you find many of your flaws. However, due to all the variables at play, it cannot guarantee a flawless site. Additional mitigations like CSP will lessen the real-world exploitability of any remaining flaws in the code.
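To make the mitigation side concrete, here is a minimal sketch, assuming a Flask application, that attaches a restrictive Content-Security-Policy header along with the non-standard X-XSS-Protection hint mentioned above; the policy itself is only an illustrative starting point:

from flask import Flask

app = Flask(__name__)

@app.after_request
def add_security_headers(response):
    # Only allow scripts and other resources from our own origin, and block plugins.
    response.headers["Content-Security-Policy"] = "default-src 'self'; object-src 'none'"
    # Legacy, non-standards-based XSS filter hint for older browsers.
    response.headers["X-XSS-Protection"] = "1; mode=block"
    return response

@app.route("/")
def index():
    return "Hello, CSP!"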

These were just a few of the things I noticed as a part of the panel this year. Thanks to Jeremiah Grossman, Matt Johansen, Johnathan Kuskos and the entire WhiteHat Security team for putting this together. It’s a valuable resource for the community – and I’m excited to see what makes the list next year.

Peleus Uhley

Lead Security Strategist


Mass Customization of Attacks Talk at RSA

Business consultant Stanley Davis defined mass customization as the “customization and personalization of products and services for individual customers at a mass production price.” Anyone who has ever ordered a custom PC is no stranger to mass customization: that particular combination of components wasn’t assembled into a PC until the customer initiated an order.

As we responded to zero-day exploits over the past couple of years, we took stock of some of the properties that separated them from mass malware, which typically targets older, patched vulnerabilities. For example, we noticed zero-day attacks starting to target more than one version of a platform on one or more operating systems. In addition, we observed that zero-day attacks contain more than one exploit, possibly affecting multiple vendors’ products. Our thesis can be stated as follows: the exploit creation industry is maturing; by combining the features of mass malware with multiple zero-day exploits, attackers can create mass-customized attacks.

[Figure: mass-customized attacks]


We expand on this thesis in our upcoming talk at the RSA 2014 conference and use several case studies to prove it.

If you’re going to be attending RSA on Tuesday, Feb. 25, please swing by our talk at 2:40 p.m. in the West Room 3006. We look forward to sharing our research and the conversations with our friends and partners in the industry!

Peleus Uhley, Platform Security Strategist
Karthik Raman, Security Researcher

Flash Player Sandbox Now Available for Safari on Mac OS X

Over the last few years, Adobe has protected our Flash Player customers through a technique known as sandboxing. Thus far, we have worked with Google, Microsoft and Mozilla on deploying sandboxes for their respective browsers. Most recently, we have worked with Apple to protect Safari users on OS X. With this week’s release of Safari in OS X Mavericks, Flash Player will now be protected by an OS X App Sandbox.

For the technically minded, this means that there is a specific com.macromedia.Flash Player.plugin.sb file defining the security permissions for Flash Player when it runs within the sandboxed plugin process. As you might expect, Flash Player’s capabilities to read and write files will be limited to only those locations it needs to function properly. The sandbox also limits Flash Player’s local connections to device resources and inter-process communication (IPC) channels. Finally, the sandbox limits Flash Player’s networking privileges to prevent unnecessary connection capabilities.

Safari users on OS X Mavericks can view Flash Player content while benefiting from these added security protections. We’d like to thank the Apple security team for working with us to deliver this solution.

Peleus Uhley
Platform Security Strategist

Flash Player Security with Windows 8 and Internet Explorer 10

With the launch of Internet Explorer 10 on Windows 8 last year, customers have experienced improved Flash Player capabilities. Adobe worked closely with Microsoft to integrate Flash Player into Internet Explorer 10 for the Windows 8 platform, but some of our customers are still unaware of the full benefit of the security enhancements. We’d like to take the opportunity to discuss how this integration introduced several new changes that have increased end-user security.

The first significant change is that Flash Player updates for IE 10 on Windows 8 are now distributed through Windows Update. End users are no longer prompted by the Flash Player auto-updater to update Internet Explorer. This also means that enterprises can now distribute Flash Player updates for Windows 8 through their existing Windows OS patch management workflows. IE 10 users on Windows 7 will continue to be updated through Flash Player’s existing update mechanisms.

Windows 8 and IE 10 bring a new level of security known as Enhanced Protected Mode (EPM). In immersive mode, EPM is enabled by default. End users can enable Enhanced Protected Mode on the desktop by selecting Tools > Internet Options > Advanced and checking “Enable Enhanced Protected Mode.”

EPM on IE 10 provides several new protections. One is that all content processes will run as 64-bit processes. This means that Flash Player will also run as a 64-bit process, which makes heap sprays more difficult: the larger address space makes it harder to predict the memory location of the spray with a decent statistical likelihood.

The Windows 8 OS security model also utilizes AppContainers for Windows Store apps. The AppContainer for Internet Explorer 10 is an improvement on the existing idea of integrity levels. The IE 10 AppContainer brokers both read and write access to most of the operating system. This is an improvement over traditional Protected Mode, where only write access was limited. Since Flash Player will be executing as a low-privileged process, it will not be able to read user-owned data without user interaction. In addition, the IE 10 AppContainer enforces certain network restrictions, which are described here. Since Flash Player is integrated into IE 10, Flash Player is sandboxed by the same AppContainer broker as Internet Explorer.

One aspect of the new AppContainer brokers is that Internet Explorer 10 has a unique cookie store for each mode. Browser cookies for immersive surfing will be placed in the IE 10 AppContainer storage location. Cookies created while surfing Internet-zone content in IE 10 on the desktop will be placed in the Low Integrity Level (LowIL) cookie location. Flash Player acknowledges this paradigm for Local Shared Objects (LSOs) as well. This means that any data stored from your Flash Player gaming in immersive mode will not be available to Flash Player when you are surfing with IE on the desktop. More information on how IE 10 handles cookies on Windows 8 can be found in this blog post.

Overall, these new protections serve to further improve security for our Windows 8 customers while also delivering a more streamlined update workflow. Adobe will continue to work with Microsoft to better improve security for our mutual customers going forward.

Peleus Uhley
Platform Security Strategist

Reflections on Black Hat & DefCon

This year, the ASSET security team, along with security engineers from several other Adobe teams, travelled to Vegas to attend the summer’s largest security conferences: Black Hat and DefCon. The technical talks typically range from “cool bugs” to “conceptual issues that require long-term solutions.” While the bugs are fun, here’s my take on the major underlying themes this year.

One major theme is that our core cryptographic solutions such as RSA and TLS are beginning to show their age. There was more than one talk about attacking TLS and another presentation by iSEC Partners focused on advances related to breaking RSA. The iSEC team made a valid case that we, as an industry, are not prepared for easily deploying alternative cryptographic solutions. Our industry needs to apply the principles of “crypto agility” so that we can deploy alternative solutions in our core security protocols, should the need arise.

Another theme this year was the security issues with embedded systems. Embedded systems development used to be limited to small bits of assembly code on isolated chips. However, advances in disk storage, antenna size and processors have resulted in more sophisticated applications powering more complex devices. This exposed a larger attack surface to security researchers at Black Hat and DefCon, who then found vulnerabilities in medical devices, SIM cards, automobiles, HVAC systems, IP phones, door locks, iOS chargers, smart TVs, network surveillance cameras, and similar dedicated devices. As manufacturing adopts more advanced hardware and software for devices, our industry will need to continue to expand our security education and outreach to these other industries.

In traditional software, OS enforced sandboxes and compiler flags have been making it more difficult to exploit software. However, Kevin Snow and Lucas Davi showed that making additional improvements to address space layout randomization (ASLR), known as “fine-grained ASLR,” will not provide any significant additional levels of security. Therefore, we must rely on kernel enforced security controls and, by logical extension, the kernel itself. Mateusz Jurczyk and Gynvael Coldwind dedicated significant research effort into developing tools to find kernel vulnerabilities in various operating system kernels. In addition, Ling Chuan Lee and Chan Lee Yee went after font vulnerabilities in the Windows kernel. Meanwhile, Microsoft offered to judge live mitigation bypasses of their kernel at their booth. With only a small number of application security presentations, research focus appears to be shifting back toward the kernel this year.

Ethics and the law had an increased focus this year. In addition to the keynote by General Alexander, there were four legal talks at Black Hat and DefCon from the ACLU, EFF and Alex Stamos. Paraphrasing Stamos’ presentation, “The debate over full disclosure or responsible disclosure now seems quaint.” There were no easy answers provided; just more complex questions.

Regardless of the specific reason that drew you to Vegas this year, the only true constant in our field is that we must continue learning. It is much harder these days to be an effective security generalist. The technology, research and ethics of what we do continues to evolve and forces deeper specialization and understanding. The bar required to wander into a random Black Hat talk and understand the presentation continues to rise. Fortunately, walking into a bar at Black Hat and offering a fellow researcher a drink is still a successful alternative method of learning.

Peleus Uhley
Platform Security Strategist