Posts in Category "Security"

Why Moms Can Be Great at Computer Security

As a new mom, I’ve come to a few realizations as to why I think moms can be really innovative and outright great when it comes to solving problems in computer security. I realize these anecdotes and experiences can apply to any parent, so please take this as purely from my personal “mom” perspective. This is not to say moms are necessarily better (my biases aside), but, I do think there are some skills we learn on-the-fly as new mothers that can become invaluable in our security careers. And vice-versa – there are many skills I’ve picked up throughout my security career that have come in really handy as a new mom. Here are my thoughts on some of the key areas where I think these paths overlap:

  • We are ok with not being popular. Any parent who has had to tell their kid “no,” ground them, or otherwise “ruin their lives” knows that standing firm in what is right is sometimes not easy – but, it is part of the job. Security is not all that different. We often tell people that taking unsafe shortcuts or not building products and services with security in mind will not happen on our watch. From time to time, product teams are mad when we have to go over their heads to make sure key things like SSL are enabled by default as a requirement for launching a new service. In incident response, for example, we sometimes have to make hard decisions, like taking a service offline until the risk can be mitigated. And we are ok with doing all of this because we know it is the right thing to do. However, when we do it, we are kind but firm – and, as a result, we are not always the most liked person in a meeting, and we’re very OK with that.
  • We can more easily juggle multiple tasks and priorities. My primary focus has always been incident response, but it was not until I had a child that I realized how well my job prepared me for parenthood. A security incident usually has many moving pieces at once – investigate, confirm, mitigate, update execs, and a host of other things – and they all need to be done right now. Parents are often driving carpools while eating breakfast, changing diapers on a conference call while petting the dog with a spare foot (you know this is not an exaggeration), and running through Costco while going through math flash cards with our daughters. At the end of each workday, we have to prioritize dinner, chores, after school activities, and bedtime routines. It all seems overwhelming. But, in a matter of minutes, a plan has formed and we are off to the races! We delegate, we make lists, and somehow it all gets done. Just like we must do with our security incident response activities.
  • We trust but verify. This is an actual conversation:

Mom: Did you brush your teeth?
Kid: Yes
Mom (knowing the kid has not been in the bathroom in hours): Are you sure? Let me smell your breath
Kid: Ugggghhhh… I’ll go brush them now…

I hear a similar conversation over and over in my head in security meeting after meeting. It usually is something like this:

Engineer: I have completed all the action items you laid out in our security review
Mom (knowing that the review was yesterday and it will take about 10 hours of engineering work to complete): Are you sure? Let’s look at how you implemented “X.”
Engineer: Oh, I meant most of the items are done
Mom: It is great you are starting on these so quickly. Please let me know when they are done.

Unfortunately, this does indeed happen sometimes – which is why I must be such a staunch guardian. Security can take time and is sometimes not as interesting as coding a new feature. So, like a kid who would rather watch TV than brush his teeth because skipping one brushing doesn’t seem like a big deal, we have to gently nudge and we have to verify.

  • We are masters at seeing hidden dangers and potential pitfalls. When a baby learns to roll, crawl, and walk, moms are encouraged to get down at “baby level” to see and anticipate potentially dangerous situations. Outlet covers are put on, dangerous chemical cleaners no longer live under the sink, and bookcases are mounted to the walls. As kids get older, the dangers we see are different, but we never stop seeing them. Some of this is just “mom worry” – and we have to keep it in check to avoid becoming dreaded “helicopter parents.” However, we are conditioned to see a few steps ahead and we learn to think about the worst case scenario. Seeing worst case scenarios and thinking like an attacker are two things that make security professionals good at their jobs. Many are seen as paranoid, and, quite frankly, that paranoia is not all that dissimilar to “mom worry.” Survival of the species has relied on protection of our young, and although a new release of software is not exactly a baby, you can’t turn off that protective instinct.

The similarities between work and parenthood really surprised me. Being a parent and being a security professional sound so dissimilar on the surface, but it is amazing how the two feed each other – and how my growth in one area has helped my growth in the other. It also shows how varied backgrounds can be your path to a successful security career.

 

Lindsey Wegrzyn Rush
Sr. Manager, Security Coordination Center

Securely Deploying MongoDB 3.0

I recently needed to set up an advanced, sharded MongoDB 3.0 database with all the best practices enabled for a deployment of the CRITs web application. This was an opportunity for me to get first-hand experience with the security guidance that I recommend to other Adobe teams. This post covers some of the lessons that I learned along the way. It isn’t a replacement for reading the documentation. Rather, it is a story to bookmark for when one of your teams is ready to deploy a secure MongoDB 3.0 instance and is looking for real-world examples to supplement the documentation.

MongoDB provides a ton of security documentation and tutorials, which are invaluable. It is highly recommended that you read them thoroughly before you begin, since they capture a lot of important details. The tutorials often contain details that aren’t always covered in the core documentation for a specific feature.

If you are migrating from an older version, you’ll quickly find that MongoDB has been very active in improving the security of its software. The challenge is that some of your previous work may now be deprecated. For instance, the password hashing functions have migrated from MONGODB-CR to SCRAM-SHA-1. The configuration file switched in version 2.6 from name-value pairs to YAML, although, oddly, the most recent version of MongoDB I downloaded still shipped with the name-value pair format by default. While name-value pairs are still supported, I decided to create the new YAML version from scratch to avoid a migration later. In addition, keyfile authorization between cluster servers has been replaced with X.509. These improvements are all things you will want to track when migrating from an older version of MongoDB.
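For illustration, here is a minimal sketch of the format change, assuming a simple standalone instance (the option shown is just one of several that moved):

    # Old name-value pair style (pre-2.6, still accepted)
    auth = true

    # YAML equivalent (2.6 and later)
    security:
      authorization: enabled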

In prepping for the deployment, there are a few things you will want to do:

  • Get a virtual notepad. A lot of MongoDB commands are lengthy to type, and you will end up pasting them more than once.
  • After reading the documentation and coming up with a plan for the certificate architecture, create a script for generating certificates; you will end up generating one to two certificates per server (a sketch of such a script follows this list).
  • Anytime you deploy a certificate system, you should have a plan for certificate maintenance such as certificate expirations.
  • The system is dependent on a solid base. Make sure you have basic sysadmin tasks done first, such as using NTP to ensure hosts have consistent clocks for timestamps.
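As an illustration of the certificate-generation bullet above, here is a minimal sketch of such a script. It assumes a self-managed root CA whose key and certificate already exist as rootCA.key and rootCA.pem (hypothetical names), and it omits the EKU extensions discussed later, which would require an additional OpenSSL extensions file:

    #!/bin/sh
    # Hypothetical helper: issue a certificate for one MongoDB host,
    # signed by a self-managed root CA (rootCA.key / rootCA.pem).
    HOST="$1"   # e.g. mongo_shard1

    # Generate a private key and a signing request with the hostname in the CN.
    openssl req -new -nodes -newkey rsa:2048 \
      -keyout "${HOST}.key" -out "${HOST}.csr" \
      -subj "/C=US/ST=California/O=My Company/OU=My Group/CN=${HOST}"

    # Sign the request with the root CA.
    openssl x509 -req -days 365 -in "${HOST}.csr" \
      -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out "${HOST}.crt"

    # MongoDB expects the private key and certificate concatenated in one PEM file.
    cat "${HOST}.key" "${HOST}.crt" > "${HOST}.pem"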

If you are starting from scratch, I would recommend getting MongoDB cluster connectivity established, followed by layering on security. At a minimum, establish basic connectivity between the shards. If you try to do security and a fresh install at the same time, you may have a harder time debugging.
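As a rough sketch of that order of operations (hostnames, ports, and paths here are hypothetical), a minimal 3.0 sharded bring-up with security options still omitted might look like this:

    # Config servers (three mirrored instances in 3.0) and one shard
    mongod --configsvr --dbpath /data/config --port 27019
    mongod --shardsvr --dbpath /data/shard1 --port 27018

    # Router pointed at the config servers
    mongos --configdb cfg1:27019,cfg2:27019,cfg3:27019 --port 27017

    # Register the shard from a mongo shell connected to the router
    mongo --port 27017 --eval 'sh.addShard("mongo_shard1:27018")'

Once the pieces are talking to each other, the security settings below can be layered on one at a time.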

Enabling basic SSL between hosts

I have noticed confusion over which versions of MongoDB support SSL, since this has changed over time and there were differences between the standard and enterprise versions. Some package repositories for open-source OSs host older versions of MongoDB. The current MongoDB 3.0 page says, “New in version 3.0: Most MongoDB distributions now include support for SSL.” Since I wasn’t sure what “most” meant, I downloaded the standard Ubuntu version (not enterprise) from the MongoDB hosted repository, as described here: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/. That version did support SSL out of the box.

MongoDB has several levels of SSL settings, including disabled, allowSSL, preferSSL, and requireSSL. These can be useful if you are slowly migrating a system, are learning the command line, or have different needs for different roles. For instance, you may specify requireSSL for your shards and config servers to ensure secure inter-MongoDB communication. For your MongoDB router instance, you may choose a setting of preferSSL to allow legacy web applications to connect without SSL, while still maintaining secure inter-cluster communication.
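For example, a sketch of that split (certificate details omitted, file names elsewhere in this post):

    # On each shard and config server: SSL is mandatory
    net:
      ssl:
        mode: requireSSL

    # On the mongos router: SSL is preferred, but legacy non-SSL clients may connect
    net:
      ssl:
        mode: preferSSL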

If you plan to also use X.509 for cluster authorization, you should consider whether you will be using cluster authentication and, if so, whether you want to specify a separate certificate for it. If you go with separate certificates, you will want to set the serverAuth Extended Key Usage (EKU) attribute on the SSL certificate and create a separate clientAuth certificate for cluster authorization. A final SSL configuration would look like this:

    net:
      ssl:
        CAFile: "root_CA_public.pem"
        mode: requireSSL
        PEMKeyFile: "mongo-shard1-serverAuth.pem"
        PEMKeyPassword: YourPasswordHereIfNecessary

Enabling authentication between servers

The inter-cluster authentication method changed in version 2.6 from keyfiles to X.509 certificates. Keyfile authentication was just a shared secret, whereas X.509 verifies approval from a known CA. To ease migration from older implementations, MongoDB lets you start at keyFile, then move to hybrid support with sendKeyFile and sendX509, before finally ending at the X.509-only authentication setting, x509 (sketched below). If you have not already enabled keyfiles in an existing MongoDB deployment, then you may need to take your shard offline in order to enable keyfile authentication. If you are using a separate certificate for X.509 authentication, then you will want to set the clientAuth EKU in the certificate.
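As a sketch, that staged migration maps onto the clusterAuthMode setting roughly as follows, with each step rolled out cluster-wide before moving on to the next (keyfile path is hypothetical):

    # Step 1: existing deployments authenticating with a shared keyfile
    security:
      keyFile: "/etc/mongodb/keyfile"
      clusterAuthMode: keyFile

    # Step 2: send the keyfile, but accept either keyfile or X.509 from peers
    security:
      keyFile: "/etc/mongodb/keyfile"
      clusterAuthMode: sendKeyFile

    # Step 3: send X.509, still accept either
    security:
      keyFile: "/etc/mongodb/keyfile"
      clusterAuthMode: sendX509

    # Step 4: X.509 only
    security:
      clusterAuthMode: x509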

The certificates used for inter-cluster authentication must have their X.509 subject fields (O, OU, DC, etc.) set exactly the same, except for the hostname in the CN. The CN, or a Subject Alternative Name entry, must match the hostname of the server. If you want the flexibility to move shards to new instances without reissuing certificates, you may want a secondary DNS infrastructure that allows you to remap static hostnames to different instances. When a cluster node successfully authenticates to another cluster node, it gets admin privileges for the instance. The following settings will enable cluster authentication:

    net:
      ssl:
        CAFile: "/etc/mongodb/rootCA.pem"
        clusterFile: "mongo-shard1-clientAuth.pem"
        clusterPassword: YourClusterFilePEMPasswordHere
        CRLFile: "YourCRLFileIfNecessary.pem"

    security:
      clusterAuthMode: x509

Client authentication and authorization

MongoDB authorization supports a set of built-in roles and user-defined roles for those who want to split authorization levels across multiple users. However, authorization is not enabled by default. To enable it, you must specify the following in your config file:

    security:
      authorization: enabled

The authorization model changed significantly between 2.4 and 2.6. If you are upgrading from 2.4, be sure to read the release notes for all the details; the 2.4 model is no longer supported in MongoDB 3.0. Also, an existing environment may incur downtime, because you have to synchronize switching your app to use a MongoDB password with enabling authorization in MongoDB.

For user-level account access, you will have a choice between traditional username and password, LDAP proxy, Kerberos, and X.509. For my isolated infrastructure, I had to choose between X.509 and username/password. Which approach is correct depends on how you interact with the server and how you manage secrets. While I had to use a username and password for the CRITs web application, I wanted to play with X.509 for the local shard admin accounts. The X.509 authentication can only be used with servers that have SSL enabled. While it is not strictly necessary to have local shard admin accounts, the documentation suggested that they would eventually be needed for maintenance. From the admin database, X.509 users can be added to the $external database using the following command:

    db.getSiblingDB("$external").runCommand(
      {
        createUser: "DC=org,DC=example,CN=clusterAdmin,OU=My Group,O=My Company,ST=California,C=US",
        roles: [
          { role: "clusterAdmin", db: "admin" }
        ]
      }
    )

The createUser field contains the subject from the client certificate for the cluster admin. Once added, the command line for a connection as the clusterAdmin would look like this:

    mongo --ssl --sslCAFile root_CA_public.pem --sslPEMKeyFile ./clusterAdmin.pem mongo_shard1:27018/admin

Although you provided the key in the command line, you still need to run the auth command that corresponds to the clusterAdmin.pem certificate in order to assume that role:

    db.getSiblingDB("$external").auth(
      {
        mechanism: "MONGODB-X509",
        user: "DC=org,DC=example,CN=clusterAdmin,OU=My Group,O=My Company,ST=California,C=US"
      }
    );

The localhost exception allows you to create the first user administrator in the admin database when authorization is enabled. However, once you have created the first admin account, you should remember to disable the exception by specifying:

    setParameter:
      enableLocalhostAuthBypass: false

Once the admin accounts are created, you can create application users against the application database with more restricted privileges:

    db.createUser(
      {
        user: "crits_app_user",
        pwd: "My$ecur3AppPassw0rd",
        roles: [
          { role: "readWrite", db: "crits" }
        ],
        writeConcern: { w: "majority", wtimeout: 5000 }
      }
    )

At this stage, there are still other security options worth reviewing. For instance, there are some SSL settings I didn’t cover because they already default to the secure setting. If you are migrating from an older database, you will want to check the additional settings, since some behavior may have changed. Hopefully, this post helps you get started with the secure communication, authentication, and authorization aspects of MongoDB 3.0.
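One last tip from my own deployment (a habit, not official guidance): two quick checks can confirm the SSL settings took effect.

    # Confirm the server presents a certificate that chains to your root CA
    openssl s_client -connect mongo_shard1:27018 -CAfile root_CA_public.pem

    # Under requireSSL, a plain (non-SSL) shell connection should be refused
    mongo mongo_shard1:27018 --eval 'db.runCommand({ ping: 1 })'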

 

Peleus Uhley
Lead Security Strategist

SAFECode Goes to Washington

On a recent trip to Washington, DC, I had the opportunity to participate in a series of meetings with policymakers on Capitol Hill and in the Administration to discuss SAFECode’s (Software Assurance Forum for Excellence in Code) role in and commitment to improving software security. If you’re not familiar with SAFECode, I encourage you to visit the SAFECode website to learn more about the organization. At a high level, SAFECode is an industry-led effort that advances effective software assurance methods and identifies and promotes best practices for developing and delivering more secure and reliable software, hardware, and services.

The visit to DC was set up to promote some of the work being done across our industry to analyze, apply, and promote the best mix of software assurance technology, process, and training. Along with some of my colleagues from EMC and CA Technologies, we spent the beginning of the trip at the Software and Supply Chain Assurance Working Group, where we presented on the topic of software assurance assessment. The premise of our presentation was that there is no one-size-fits-all approach to software assurance, and that a focus on the supplier’s software assurance process is the right way to assess the maturity of an organization when it comes to software security.

One of the other important aspects we discussed with policymakers was SAFECode’s role in promoting the need for security education and training for developers. We are considering ways to support the expansion of software security education in university programs and plan to add new offerings to the SAFECode Security Engineering training curriculum, a free program aimed at helping those looking to create an in-house training program for their product development teams as well as individuals interested in enhancing their skills.

Overall, this was a very productive trip, and we look forward to working with policymakers as they tackle some of the toughest software security issues we are facing today.

 
David Lenoe, Director of Adobe Secure Software Engineering
SAFECode Board Member

Updated Security Information for Adobe Creative Cloud

As part of our major release of Creative Cloud on June 16th, 2015, we released an updated version of our security white paper for Adobe Creative Cloud for enterprise. In addition, we released a new white paper about the security architecture and capabilities of Adobe Business Catalyst. This updated information helps IT security professionals evaluate the security posture of our Creative Cloud offerings.

Adobe Creative Cloud for enterprise gives large organizations access to Adobe’s creative desktop and mobile applications and services, workgroup collaboration, and license management tools. It also includes flexible deployment, identity management options including Federated ID with Single Sign-On, annual license true-ups, and enterprise-level customer support — and it works with other Adobe enterprise offerings. This version of the white paper includes updated information about:

  • Various enterprise storage options now available, including updated information about geolocation of shared storage data
  • Enhancements to entitlement and identity management services
  • Enhancements to password management
  • Security architecture of shared services and the new enterprise managed services

Adobe Business Catalyst is an all-in-one business website and online marketing solution, providing an integrated platform for Content Management (CMS), Customer Relationship Management (CRM), E-Mail Marketing, E-Commerce, and Analytics. The security white paper now available includes information about:

  • Overall architecture of Business Catalyst
  • PCI DSS compliance information
  • Authentication and services
  • Ongoing risk management for the Business Catalyst application and infrastructure

Both white papers are available for download on the Adobe Security resources page on adobe.com.

 

Chris Parkerson
Sr. Marketing Strategy Manager

Binspector: Evolving a Security Tool

Binary formats and files are inescapable. Although optimal for computers, they are impractical for the typical developer to understand. Binspector was born when I found myself scouring JPEG metadata blocks to make sure they were telling consistent stories. The tool’s introspection capabilities have since transformed it into an intelligent fuzzing utility, and I am excited to share it in the hopes others will benefit.

I joined Photoshop core engineering in 2008. One of my first tasks was to improve the metadata policies of our Save for Web plugin. When a file was being saved, the exporter would embed metadata in a mash-up of up to three formats (IPTC/IIM, Exif, and XMP). The problem was that the outgoing metadata blocks were inconsistent and oftentimes held conflicting data within the same file. High-granularity details like the metadata source and any conflicting values are lost by the time a GUI presents them. No image viewer (Photoshop included) has such a high degree of introspection.

I ended up writing several tools to handle the binary formats I was interested in, and it was not long before I saw generalities between the tools. I cobbled together a domain-specific language that let me define the interpretation of individual fields in a binary file as well as the relationships between them. That file formed an abstract syntax tree, and when combined with a binary that fit the format, I could suss out knowledge of any bit.

It was at this point that Binspector started to take shape.

Once a format grammar has been built, analysis of a file becomes quite interesting. For example, any binary file can be validated against a format grammar and if it fails, Binspector is able to give a detailed account of where the grammar and binary differ.

Binspector evolved into a security tool when I related its analytical capability to fuzzing. The Photoshop team invests heavily in its stability, and corrupted file formats are a well-known attack vector. Fuzzing for the most part has a “spray and pray” heuristic: throw gobs of (nearly) random data at an application, and see what sticks. The problem with this method is that one has to generate a lot of candidate files to get input code to fail unexpectedly.

By adding knowledge to the fuzzer, might we increase the ‘interesting failure’ rate? For example, what would happen if I took a perfectly fine 100×100-pixel image and set just the width bytes to, say, 255? Finding and fuzzing that specific a target would require a tool that had introspection into a binary file format – exactly what Binspector had been written to do!
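Binspector automates finding such targets, but the mutation itself can be illustrated with generic shell tools. Given a known-good file and the byte offset of the width field (the offset below is hypothetical), a single targeted variant is easy to produce:

    # Copy the known-good image, then overwrite one byte at the
    # (hypothetical) offset of the width field with 0xFF (octal 377).
    cp good.jpg fuzzed.jpg
    printf '\377' | dd of=fuzzed.jpg bs=1 seek=163 conv=notrunc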

The key insight was to have Binspector flag a binary field every time it had been used to further read the file. Need to know the width of the document? Flag the width. Want to know the count of Exif fields? Flag the count. At the end of analysis, an array of potential weak points had been generated. The next phase was to generate exact copies of the known-good file with these specific weak points modified in various ways.

A rising tide lifts all boats, and this is certainly true in the domain of application stability and security. Improving the robustness of an application can only improve the general condition of an operating environment. That is one of the reasons why I am elated to release Binspector as open source software. My long-term vision is to build up a body of binary format grammars that can be used by anyone to improve the reliability of input code.

Foster Brereton
Sr. Computer Scientist

Applying the SANS Cybersecurity Engineering Graduate Certificate to Adobe’s Secure Product Lifecycle (part 1 of 2)

In the constantly changing world of product security, it is critical for development teams to stay on top of current trends in cybersecurity. The Adobe Photoshop team often evaluates additional training programs to help complement Adobe’s ASSET Software Security Certification Program. One of those is the SANS Cybersecurity Engineering Graduate Certificate Program. This blog series discusses how we are leveraging the knowledge from this program to help improve product security for Adobe Photoshop.

The SANS Cybersecurity Engineering Graduate Certificate is a three-course certificate that offers hands-on, practical security training – such as the proper usage of static code analysis. A best practice of modern software development is to perform static code analysis early in the software development lifecycle, before the code is released to quality engineering. On the Photoshop team, we use static code analysis regularly in our continuous build environment. This analysis helps ensure that if any new defects are introduced during development, they can be immediately fixed by the engineer who added them. This allows the quality engineering team to focus on automation, functional testing, usability testing, and other aspects of overall quality instead of, for example, accidental NULL dereferences.
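Our setup uses Coverity, but as a generic illustration of the same practice, any analyzer can be wired into a continuous build so that new defects fail it immediately (the sketch below uses clang’s scan-build, not our actual configuration):

    # Hypothetical CI step: --status-bugs makes the build exit non-zero
    # when the analyzer reports defects, so regressions surface immediately.
    scan-build --status-bugs make -j8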

In addition to the course material and labs, graduate students are asked to write peer-reviewed research papers. I am primarily responsible for security of the Adobe Photoshop CC desktop application and I developed my research paper based upon my experiences. When the Heartbleed bug was disclosed in April 2014, I was curious to know why this type of bug wasn’t caught by static analysis tools. I chose to examine this question and how it applies to Photoshop.

The resulting paper, The Role of Static Analysis in Heartbleed, showed that Heartbleed wasn’t initially caught by static analysis tools. This is because one of the goals of static analysis is to avoid generating too many false positives for engineers to sift through. To solve this, we asked the vendor of one of the popular static analysis tools, Coverity, to add a new TAINTED_SCALAR checker that was general enough to detect not only Heartbleed but also other potential byte-swap defects. Andy Chou’s blog post details how, by looking at byte-swap operations specifically rather than making the checker specific to Heartbleed, other software development teams can benefit as well. This idea was proven correct when the Photoshop team applied the latest release of Coverity’s tools, including the checker we requested, to our codebase. We have identified and fixed a number of issues flagged by this new TAINTED_SCALAR checker.

The value of additional training can only be fully realized if you can apply the knowledge to problems found on the job. This is one of the advantages of the SANS program – the practical application of this knowledge through a research paper makes the program more valuable to my work.

In part 2 of this blog series, I will examine how the NetWars platform was used to help the overall security profile of Adobe Photoshop.

Jeff Sass
Engineering Manager, Photoshop

An Industry Leader’s Contributions

In the security industry, we’re focused on the impact of offensive advancements and how to best adapt defensive strategies without much reflection on how our industry has evolved.  I wanted to take a moment to reflect on the history of our industry in the context of one individual’s contribution.

After many years in the software engineering and security business, Steve Lipner, Partner Director of Program Management, will retire from Microsoft this month.  Steve’s contributions to the security industry are many and far reaching.  Many of the concepts he helped develop form the basis for today’s approach to building more secure systems.

In the early 2000s, Steve suffered through Code Red and Nimda, two worms that affected Microsoft Internet Information Server 4.0 and 5.0. In January 2002, when Bill Gates issued his “Trustworthy Computing” memo shifting the company’s focus from adding features to pursuing secure software, Steve and his team went to work training thousands of developers and started a radical series of “security pushes” that enabled Microsoft to change the corporate culture to emphasize product security.

Steve likes to joke that he started running the Microsoft Security Response Center (MSRC) when he was 32; the punchline being that the retirement-aged person he is today is strictly due to the ravages of the job. Microsoft security was once called one of the hardest jobs out there and Steve’s work is truly an inspiration.

The Security Development Lifecycle (SDL) is the process that emerged during these security improvements. Steve’s team has been responsible for the application of the SDL process across Microsoft, while also making it possible for hundreds of security organizations to adopt it – or, like Adobe, use it as a model for their respective secure product engineering frameworks.

Along with Michael Howard, Lipner co-authored the book The Security Development Lifecycle, and he is named as inventor on 12 U.S. patents and two pending applications in the field of computer and network security. He served two terms on the United States Information Security and Privacy Advisory Board and its predecessor. I’ve had the pleasure of working with Steve on the board of SAFECode – the Software Assurance Forum for Excellence in Code – a non-profit dedicated to the advancement of effective software assurance methods.

I’d like to thank Steve for all of the important contributions he has made to the security industry.

Brad Arkin
Vice President & CSO

 

Adobe Document Cloud Security Overview Now Available

A white paper detailing the security features and architecture of core Adobe Document Cloud services is now available. The new Adobe Document Cloud combines a completely re-imagined Adobe Acrobat with the power of e-signatures. Now you can edit, sign, send and track documents wherever you are—across desktop, mobile and web. This paper covers the key regulations and standards Document Cloud adheres to and the security architecture of the offering, and describes its core capabilities for protecting sensitive information. You can download this paper now from adobe.com.

Chris Parkerson
Senior Marketing Strategy Manager

Top 10 Web Hacking Techniques of 2014

This year, I once again had the privilege to be one of the judges for the “Top 10 Web Hacking Techniques” list organized by Matt Johansen and Johnathan Kuskos of the WhiteHat Security team. This is a great honor and a lot of fun to do, although the task of voting also requires a lot of reflection. A significant amount of work went into finding these issues, and that should be respected in the analysis for the top spot. This blog reflects my personal interpretation of the nominees this year.

My first job as a judge is to establish my criteria for judging. For instance:

  • Did the issue involve original or creative research?
  • What was the ultimate severity of the issue?
  • How many people could be affected by the vulnerability?
  • Did the finding change the conversation in the security community?

The last question is what made judging this year’s entries different from previous years. Many of the bugs were creative and could be damaging for a large number of people. However, for several of the top entries, the attention they received helped change the conversation in the security industry.

A common trend in this year’s top 10 was the need to update third-party libraries. Obviously, Heartbleed (#1) and POODLE (#3) brought attention to keeping OpenSSL up-to-date. However, if you read the details on the Misfortune Cookie attack (#5), there was the following:

AllegroSoft issued a fixed version to address the Misfortune Cookie vulnerability in 2005, which was provided to licensed manufacturers. The patch propagation cycle, however, is incredibly slow (sometimes non-existent) with these types of devices. We can confirm many devices today still ship with the vulnerable version in place. 

Third-party libraries can be difficult to track and maintain in large organizations and large projects. Kymberlee Price and Jake Kouns spent the year giving great presentations on the risks of third-party code and how to deal with it.

Heartbleed and Shellshock were also part of the year of making attacks media-friendly by providing designer logos. Many of us rolled our eyes at how the logos drew additional media attention to the issues. However, it is impossible to ignore how the added media attention helped expedite difficult projects such as the deprecation of SSLv3. Looking beyond the logos, these bugs had other attributes that made them accessible in terms of tracking and understanding their severity. For instance, besides a memorable name, Heartbleed included a detailed FAQ which helped to quickly explain the bug’s impact. Typically, a researcher would have had to dig through source code changelists, which is difficult, or consult Heartbleed’s CVSS score (5 out of 10), which can be misleading. Once you remove the cynicism from the logo discussion, the question that remains is: what can the industry learn from these events that will allow us to better communicate critical information to a mass audience?

In addition, these vulnerabilities brought attention to the discussion around the “many eyes make all bugs shallow” theory. Shellshock was a vulnerability that went undetected for years in the default shell used by most security engineers. Once security engineers began reviewing the code affected by Shellshock, three other CVEs were identified within the same week. The remote code execution in the Apache Struts ClassLoader (#8) was another example of a vulnerability in a popular open-source project. The Heartbleed vulnerability prompted the creation of the Core Infrastructure Initiative to formally assist with projects like OpenSSL, OpenSSH, and the Network Time Protocol. Prior to the CII, OpenSSL received only about $2,000 per year in donations. The CII funding makes it possible to pursue projects such as having the NCC Group’s consultants audit OpenSSL.

In addition to bugs in third-party libraries, there was also some creative original research. For instance, the Rosetta Flash vulnerability (#4) combined the fact that JSONP endpoints allow attackers to control the first few bytes of a response with the fact that the ZLIB compression format allows you to define the characters used for compression. Combining these two issues meant that an attacker could bounce a specially crafted, ZLIB-compressed SWF file off of a JSONP endpoint to get it to execute in that site’s domain context. This technique worked on JSONP endpoints for several popular websites. Rather than asking JSONP endpoints to add data validation, Adobe changed Flash Player so that it restricts the types of ZLIB-compressed data accepted in SWFs.

The 6th and 7th entries on the list both dealt with authentication, reminding us that authentication systems are a complex network of trust. The research into “Hacking PayPal with One Click” (#6) combined three different bugs to create a CSRF attack against PayPal. While the details around the “Google Two-Factor Authentication Bypass” (#7) weren’t completely clear, it also reminded us that many trust systems are chained together. Two-factor authentication systems frequently rely on your phone; if you can social engineer a mobile carrier into redirecting the victim’s account, then you can subvert the second factor in two-factor authentication.

The last two entries dealt with subtler issues than remote code execution; both show how little things can matter. The Facebook DDoS attack (#9) leveraged the simple support of image tags in the Notes service: include enough image tags in enough notes, and you could get over 100 Facebook servers generating traffic to the target. Lastly, “Covert Timing Channels based on HTTP Cache Headers” (#10) looked at ways hidden messages can be conveyed via headers that would otherwise be ignored in most traffic analysis.

Overall, this year was interesting in terms of how the bugs changed our industry. For instance, the fact that a large portion of the industry was dependent on OpenSSL was well known. However, without Heartbleed, the funding to have a major consulting firm perform a formal security audit would never have been possible. Research from POODLE demonstrated that significant sites in the Alexa Top 1000 hadn’t adopted TLS, which has been around since 1999; POODLE helped force the industry to accelerate the migration off of SSLv3 and onto TLS. In February, the PCI Standards Council announced that, “because of these weaknesses, no version of SSL meets PCI SSC’s definition of ‘strong cryptography.’” When a researcher’s work identifies a major risk, that is clearly important within the scope of that one product or service. When a researcher’s work can help inspire changing the course of the industry, that is truly remarkable.

For those attending RSA Conference, Matt Johansen and Johnathan Kuskos will be presenting the details of the Top 10 Web Hacking Techniques of 2014 on April 24 at 9:00 AM.

 

Peleus Uhley
Lead Security Strategist

Adobe @ the Women in Cybersecurity Conference (WiCyS)

Adobe sponsored the recent Women in Cyber Security Conference held in Atlanta, Georgia.  Alongside two of my colleagues, Julia Knecht and Kim Rogers, I had the opportunity to attend this conference and meet the many talented women in attendance.   

The overall enthusiasm of the conference was incredibly positive. From the presentations and keynotes to the hallway conversations in between, discussion focused on spreading general knowledge about the information security sector and on the industry’s growing need for more talent, which dovetailed into the many programs and recruiting efforts helping more women and minorities who are focused on security to enter and stay in the field. It was very inspiring to see so many women interested in and working in security.

One of the first keynotes, presented by Jenn Lesser Henley, Director of Security Operations at Facebook, immediately set the inspiring tone of the conference with a motivational presentation that debunked the myths of why people don’t see security as an appealing job field. She included the need for better ‘stock images’, which currently portray those in security working on a computer in a dark, isolated room, wearing a balaclava – a portrayal far removed from the collaborative, engaging environment in which security work actually happens. The security field is so vast and growing in so many directions that the variety of jobs, skills, and people needed to meet this growth is as exciting as it is challenging. Jenn addressed the diversity gap of women and minorities in security and challenged the audience to take action in reducing that gap…immediately. To do so, she encouraged women and minorities to dispel the unappealing aspects of the cyber security field by surrounding themselves with the needed support, or a personal cheerleading team, in order to approach each day with an awesome attitude.

Representation of attendees seemed equally split across industry, government, and academia. There was definitely a common goal across all of us participating in the Career and Graduate School Fair: to enroll and/or hire the many talented women and minorities into the cyber security field, no matter the company, organization, or university. My advice to many attendees was to simply apply, apply, apply.

Other notable keynote speakers included:

  • Sherri Ramsay of CyberPoint who shared fascinating metrics on cyber threats and challenges, and her thoughts on the industry’s future. 
  • Phyllis Schneck, the Deputy Under Secretary for Cybersecurity and Communications at the Department of Homeland Security, who spoke to the future of DHS’ role in cybersecurity and the goal to further build a national capacity to support a more secure and resilient cyberspace.  She also gave great career advice to always keep learning and keep up ‘tech chops’, to not be afraid to experiment, to maintain balance and find more time to think. 
  • Angela McKay, Director of Cybersecurity Policy and Strategy at Microsoft, spoke about the need for diverse perspectives and experiences to drive cyber security innovations.  She encouraged women to recognize the individuality in themselves and others, and to be adaptable, versatile and agile in changing circumstances, in order to advance both professionally and personally. 

Finally, alongside Julia Knecht from our Digital Marketing security team, I presented a workshop on “Security Management in the Product Lifecycle.” We discussed how to build and reinforce a security culture in order to keep a healthy security mindset across a company or organization and throughout one’s career path. Using our own experiences working on security at Adobe, we engaged in a great discussion with the audience on which security programs and processes to put into place that advocate, create, establish, encourage, inspire, prepare, drive, and connect us to the ever-evolving field of security. Moreover, we emphasized the importance of communication about security both internally within an organization and externally with the security community. This promotes a collaborative, healthy forum for security discussion and encourages more people to engage and become involved.

All around, the conference was incredibly inspiring and a great stepping stone to help attract more women and minorities to the cyber security field.

Wendy Poland
Product Security Group Program Manager