Posts in Category "Security"

Top 10 Web Hacking Techniques of 2014

This year, I once again had the privilege to be one of the judges for the “Top 10 Web Hacking Techniques” list organized by Matt Johansen and Johnathan Kuskos of the WhiteHat Security team. This is a great honor and a lot of fun, although voting also requires a lot of reflection. A significant amount of work went into finding these issues, and that deserves respect when analyzing the contenders for the top spot. This blog post reflects my personal interpretation of this year’s nominees.

My first job as a judge is to establish my criteria for judging. For instance:

  • Did the issue involve original or creative research?
  • What was the ultimate severity of the issue?
  • How many people could be affected by the vulnerability?
  • Did the finding change the conversation in the security community?

The last question is what made judging this year’s entries different from previous years. Many of the bugs were creative and could have been damaging for a large number of people. However, for several of the top entries, the attention they received helped change the conversation in the security industry.

A common trend in this year’s top 10 was the need to keep third-party libraries up to date. Obviously, Heartbleed (#1) and POODLE (#3) brought attention to keeping OpenSSL current. However, if you read the details of the Misfortune Cookie attack (#5), you would have found the following:

AllegroSoft issued a fixed version to address the Misfortune Cookie vulnerability in 2005, which was provided to licensed manufacturers. The patch propagation cycle, however, is incredibly slow (sometimes non-existent) with these types of devices. We can confirm many devices today still ship with the vulnerable version in place. 

Third-party libraries can be difficult to track and maintain in large organizations and large projects. Kymberlee Price and Jake Kouns spent the year giving great presentations on the risks of third-party code and how to deal with it.

Heartbleed and Shellshock were also part of the year of making attacks media-friendly by giving them designer logos. Many of us rolled our eyes at how the logos drew additional media attention to the issues. However, it is impossible to ignore how that added attention helped expedite difficult projects such as the deprecation of SSLv3. Looking beyond the logos, these bugs had other attributes that made their severity easier to track and understand. For instance, besides a memorable name, Heartbleed included a detailed FAQ that quickly explained the bug’s impact. Without it, a researcher would have had to dig through source code changelists, which is difficult, or consult Heartbleed’s CVSS score (5 out of 10), which can be misleading. Once you remove the cynicism from the logo discussion, the question that remains is: what can the industry learn from these events about communicating critical information to a mass audience?

In addition, these vulnerabilities brought attention to the discussion around the “many eyes make all bugs shallow” theory. Shellshock was a vulnerability that went undetected for years in the default shell used by most security engineers. Once security engineers began reviewing the code affected by Shellshock, three other CVEs were identified within the same week. The remote code execution in the Apache Struts ClassLoader (#8) was another example of a vulnerability in a popular open-source project. The Heartbleed vulnerability prompted the creation of the Core Infrastructure Initiative to formally assist with projects like OpenSSL, OpenSSH and the Network Time Protocol. Prior to the CII, OpenSSL received only about $2,000 per year in donations. The CII funding makes it possible to pursue projects such as having NCC Group consultants audit OpenSSL.

In addition to bugs in third-party libraries, there was also some creative original research. For instance, the Rosetta Flash vulnerability (#4) combined the fact that JSONP endpoints let attackers control the first few bytes of a response with the fact that the ZLIB compression format lets you choose the characters used in the compressed output. Combining these two issues meant that an attacker could bounce a specially crafted, ZLIB-compressed SWF file off of a JSONP endpoint and have it execute in the security context of the hosting domain. This technique worked against JSONP endpoints on several popular websites. Rather than asking JSONP endpoints to add data validation, Adobe changed Flash Player to restrict the types of ZLIB-compressed data it accepts in SWFs.
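For teams that do operate JSONP endpoints, the site-side mitigations most often discussed alongside this research were validating the callback name and ensuring the response never begins with attacker-controlled bytes. Below is a minimal sketch of that idea, my own illustration in Python/Flask rather than Adobe’s or the original researcher’s code:

```python
# Sketch: a JSONP endpoint that validates the callback parameter and prefixes
# the response so its first bytes are never attacker-controlled.
import json
import re

from flask import Flask, Response, request

app = Flask(__name__)
CALLBACK_RE = re.compile(r"^[A-Za-z0-9_.]{1,64}$")  # restrict callback names

@app.route("/data.jsonp")
def data():
    callback = request.args.get("callback", "callback")
    if not CALLBACK_RE.match(callback):
        return Response("invalid callback", status=400)
    payload = json.dumps({"ok": True})
    # The fixed "/**/" prefix means a crafted callback value can no longer form
    # the first bytes of the response (e.g., a SWF or other file header).
    body = "/**/" + callback + "(" + payload + ");"
    return Response(body, mimetype="application/javascript",
                    headers={"X-Content-Type-Options": "nosniff"})
```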

The 6th and 7th entries on the list both dealt with authentication and reminded us that authentication systems are a complex network of trust. The research into “Hacking PayPal with One Click” (#6) combined three different bugs to create a CSRF attack against PayPal. While the details around the “Google Two-Factor Authentication Bypass” (#7) weren’t completely clear, it also reminded us that many trust systems are chained together. Two-factor authentication systems frequently rely on your phone; if an attacker can social engineer a mobile carrier into redirecting the victim’s account, they can subvert the second factor.

The last two entries dealt with more subtle issues than remote code execution, and both show how little things can matter. The Facebook DDoS attack (#9) leveraged simple support for image tags in the Notes service: by including enough image tags across enough notes, an attacker could get over 100 Facebook servers generating traffic toward a target. Lastly, “Covert Timing Channels based on HTTP Cache Headers” (#10) looked at ways hidden messages can be conveyed via headers that would otherwise be ignored in most traffic analysis.

Overall, this year was interesting in terms of how the bugs changed our industry. For instance, the fact that a large portion of the industry depended on OpenSSL was well known. However, without Heartbleed, the funding to have a major consulting firm perform a formal security audit would never have materialized. Research from POODLE demonstrated that a significant number of sites in the Alexa Top 1000 hadn’t adopted TLS, which has been around since 1999. POODLE helped force the industry to accelerate the migration off of SSLv3 and onto TLS. In February, the PCI Security Standards Council announced, “because of these weaknesses, no version of SSL meets PCI SSC’s definition of ‘strong cryptography.’” When a researcher’s work identifies a major risk, that is clearly important within the scope of that one product or service. When a researcher’s work helps change the course of the industry, that is truly remarkable.

For those attending RSA Conference, Matt Johansen and Johnathan Kuskos will be presenting the details of the Top 10 Web Hacking Techniques of 2014 on April 24 at 9:00 AM.

 

Peleus Uhley
Lead Security Strategist

Adobe @ the Women in Cybersecurity Conference (WiCyS)

Adobe sponsored the recent Women in Cybersecurity Conference (WiCyS) held in Atlanta, Georgia. Alongside two of my colleagues, Julia Knecht and Kim Rogers, I had the opportunity to attend the conference and meet the many talented women in attendance.

The overall enthusiasm at the conference was incredibly positive. From the presentations and keynotes to the hallway conversations in between, discussion focused on spreading knowledge about the information security sector and on the industry’s growing need for more people, which dovetailed into the many programs and recruiting efforts aimed at helping more women and minorities enter and stay in the security field. It was very inspiring to see so many women interested in and working in security.

One of the first keynotes, presented by Jenn Lesser Henley, Director of Security Operations at Facebook, immediately set the inspiring tone of the conference with a motivational presentation that debunked the myths of why people don’t see security as an appealing field. She noted the need for better ‘stock images,’ which currently portray security work as someone in a balaclava, alone in a dark room with a computer – of course very far from the collaborative, engaging environment in which security work actually happens. The security field is so vast and growing in so many directions that the variety of jobs, skills and people needed to meet this growth is as exciting as it is challenging. Jenn addressed the diversity gap of women and minorities in security and challenged the audience to take action in reducing that gap…immediately. To do so, she encouraged women and minorities to dispel the unappealing stereotypes of the cyber security field by surrounding themselves with the needed support, or a personal cheerleading team, in order to approach each day with an awesome attitude.

Representation of attendees seemed equally split across industry, government and academia. There was definitely a common goal across all of us participating in the Career and Graduate School Fair: to enroll and/or hire the many talented women and minorities into the cyber security field, no matter the company, organization, or university. My advice to many attendees was simply to apply, apply, apply.

Other notable keynote speakers included:

  • Sherri Ramsay of CyberPoint who shared fascinating metrics on cyber threats and challenges, and her thoughts on the industry’s future. 
  • Phyllis Schneck, the Deputy Under Secretary for Cybersecurity and Communications at the Department of Homeland Security, who spoke to the future of DHS’ role in cybersecurity and the goal to further build a national capacity to support a more secure and resilient cyberspace.  She also gave great career advice to always keep learning and keep up ‘tech chops’, to not be afraid to experiment, to maintain balance and find more time to think. 
  • Angela McKay, Director of Cybersecurity Policy and Strategy at Microsoft, spoke about the need for diverse perspectives and experiences to drive cyber security innovations.  She encouraged women to recognize the individuality in themselves and others, and to be adaptable, versatile and agile in changing circumstances, in order to advance both professionally and personally. 

Finally, alongside Julia Knecht from our Digital Marketing security team, I presented a workshop on “Security Management in the Product Lifecycle.” We discussed how to build and reinforce a security culture in order to keep a healthy security mindset across a company or organization and throughout one’s career path. Using our own experiences working on security at Adobe, we engaged in a great discussion with the audience on what security programs and processes to put into place to advocate, create, establish, encourage, inspire, prepare, drive and connect us to the ever-evolving field of security. Moreover, we emphasized the importance of communicating about security both internally within an organization and externally with the security community. This promotes a collaborative, healthy forum for security discussion and encourages more people to engage and become involved.

All around, the conference was incredibly inspiring and a great stepping stone to help attract more women and minorities to the cyber security field.

Wendy Poland
Product Security Group Program Manager

Re-assessing Web Application Vulnerabilities for the Cloud

As I have been working with our cloud teams, I have found myself constantly reconsidering my legacy assumptions from my Web 2.0 days. I discussed a few of the high-level ones previously on this blog. For OWASP AppSec California in January, I decided to explore the impact of the cloud on Web application vulnerabilities. One of the assumptions that I had going into cloud projects was that the cloud was merely a hosting provider layer issue that only changed how the servers were managed. The risks to the web application logic layer would remain pretty much the same. I was wrong.

One of the things that kicked off my research in this area was watching Andres Riancho’s “Pivoting in Amazon Clouds” talk at Black Hat last year. He had found a remote file include vulnerability in an AWS-hosted Web application he was testing. Essentially, the attacker convinces the Web application to act as a proxy and fetch the content of remote sites. Traditionally, this type of vulnerability could be used for cross-site scripting or defacement, since the attacker can get the contents of a remote site injected into the context of the current Web application. Riancho was able to use that vulnerability to reach the metadata server for the EC2 instance and retrieve AWS configuration information. From there, he was able to use that information, along with a few defense-in-depth gaps in the client’s environment, to escalate into taking over the entire AWS account. The possible impact of this class of vulnerability has therefore increased.
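To make the bug class concrete, here is a minimal, hypothetical sketch (not Riancho’s actual target) of a fetch-and-return endpoint that acts as an open proxy, along with the kind of request an attacker could use to reach the EC2 metadata service:

```python
# Hypothetical "remote fetch" endpoint with no URL validation -- the pattern
# described above, where the Web application is convinced to act as a proxy.
import requests
from flask import Flask, request

app = Flask(__name__)

@app.route("/fetch")
def fetch():
    url = request.args.get("url", "")
    # No allowlist or scheme/host checks: the server fetches whatever it is told.
    return requests.get(url, timeout=5).text

# An external attacker can point the endpoint at the link-local EC2 metadata
# service, which is only reachable from inside the instance:
#   /fetch?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/
# The response enumerates the instance's IAM role and, one level deeper,
# temporary AWS credentials for that role.
```

The usual fix is an allowlist of permitted destination hosts, plus blocking link-local and private address ranges, rather than trying to blacklist individual URLs.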

The cloud also involves migration to a DevOps process. In the past, a network layer mistake led to network layer issues, and a Web application mistake led to Web application vulnerabilities. Today, since the scope of these roles overlaps, a breakdown in the DevOps process means network layer issues can impact Web applications.

One vulnerability making the rounds recently stems from exactly this kind of breakdown in the DevOps process. The flow of the issue is as follows:

  1. The app/product team requests an S3 bucket called my-bucket.s3.amazonaws.com.
  2. The app team requests the IT team to register the my-bucket.example.org DNS name, which will point to my-bucket.s3.amazonaws.com, because a custom corporate domain will make things clearer for customers.
  3. Time elapses, and the app team decides to migrate to my-bucket2.s3.amazonaws.com.
  4. The app team requests from the IT team a new DNS name (my-bucket2.example.org) pointing to this new bucket.
  5. After the transition, the app team deletes the original my-bucket.s3.amazonaws.com bucket.

This all sounds great, except that in this workflow the application team didn’t inform IT, and the original my-bucket.example.org DNS entry was never deleted. An attacker can now register the my-bucket.s3.amazonaws.com bucket name and host malicious content there. Since the my-bucket.example.org DNS name still points to it, the attacker can convince end users that the malicious content belongs to example.org.

This exploit is a defining example of why DevOps needs to exist within an organization. The flaw in this situation is a disconnect between the IT/Ops team that manages the DNS server and the app team that manages the buckets. The result of this disconnect can be a severe Web application vulnerability.
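One way to catch this class of disconnect is to periodically audit DNS records that point at cloud resources and flag targets that no longer exist. The sketch below is my own illustration (it assumes the dnspython and requests packages), not an Adobe tool:

```python
# Sketch: flag CNAME records that point at S3 bucket hostnames which no
# longer exist (and are therefore candidates for takeover).
import dns.resolver
import requests

def s3_bucket_exists(bucket_host):
    # A deleted bucket typically returns 404 (NoSuchBucket); any other status
    # (200, 301, 403, ...) means the bucket name is still taken.
    return requests.head("https://" + bucket_host, timeout=10).status_code != 404

def check_record(hostname):
    for answer in dns.resolver.resolve(hostname, "CNAME"):
        target = str(answer.target).rstrip(".")
        if target.endswith(".s3.amazonaws.com") and not s3_bucket_exists(target):
            print("WARNING: %s -> %s points at a deleted bucket" % (hostname, target))

if __name__ == "__main__":
    check_record("my-bucket.example.org")  # the hypothetical record from the workflow above
```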

Many cloud migrations also involve switching from SQL databases to NoSQL databases. In addition to following the hardening guidelines for the respective databases, it is important to look at how developers are interacting with these databases.

Along with new NoSQL databases come many new ways for applications to interact with them. For instance, the Unity JDBC driver allows you to write traditional SQL statements for use with the MongoDB NoSQL database. Developers also have the option of using REST front ends for their database. It is clear that a security researcher needs to know how an attacker might inject into the statements for their specific NoSQL server. However, it’s also important to look at the way the application sends NoSQL statements to the database, as that can add additional attack surface.

NoSQL databases can also take existing risks and put them in a new context. For instance, in the context of a web page, a malicious injection into an eval() call results in cross-site scripting (XSS). In the context of MongoDB’s server-side JavaScript support, a malicious injection into eval() could allow server-side JavaScript injection (SSJI). Therefore, database developers who choose not to disable JavaScript support need to be trained on JavaScript injection risks when using statements like eval() and $where, or when using a DB driver that exposes the Mongo shell. Existing JavaScript training on eval() would need to be modified for the database context, since the MongoDB implementation differs from the browser version.
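A small sketch of the difference (using pymongo; the collection and field names are illustrative): the first query concatenates user input into a server-side JavaScript $where expression, while the second expresses the same lookup as a plain filter that the driver treats as data.

```python
# Sketch: server-side JavaScript injection vs. a structured query in MongoDB.
from pymongo import MongoClient

users = MongoClient("mongodb://localhost:27017")["appdb"]["users"]
user_input = "alice' || '1'=='1"  # attacker-controlled value

# Vulnerable: the input is spliced into a $where JavaScript expression, so an
# attacker can break out of the string and run JavaScript on the server
# (if server-side JavaScript has not been disabled).
list(users.find({"$where": "this.username == '" + user_input + "'"}))

# Safer: a structured filter -- the value is matched as data, not evaluated.
list(users.find({"username": user_input}))
```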

My original assumption that a cloud migration was primarily an infrastructure issue was false. While many of these vulnerabilities were always present and always severe, the migration to cloud platforms and processes means these bugs can manifest in new contexts, with new consequences. Existing recommendations will need to be adapted for the cloud. For instance, many NoSQL databases do not support the concept of prepared statements, so alternative defensive methods will be required. If your team is migrating an application to the cloud, it is important to revisit your threat model approach for the new deployment context.

Peleus Uhley
Lead Security Strategist

Adobe @ CanSecWest 2015

Along with other members of the ASSET team, I recently attended CanSecWest 2015, an annual security conference held in Vancouver, Canada. Pwn2Own is co-located with CanSecWest (a summary of this year’s results can be found here). This was my first time attending CanSecWest, and I enjoyed the single-track style of the conference (it reminded me of the IEEE Symposium on Security and Privacy, which is also a small, single-track conference, though more academic in content).

Overall, there were some great presentations. “I see, therefore I am… You,” presented by Jan “starbug” Krissler of T-Labs/CCC (abstract listed here), detailed techniques for using high-resolution images to defeat biometric authentication systems such as fingerprint readers, iris scanners, and facial recognition systems. Given the advancements in high-resolution cameras, the necessary base images can even be taken from a distance; one can also use high-resolution still images, such as political campaign posters, or high-resolution video. In some cases, such images are enough to authenticate directly to the biometric system. In one example, face recognition software required the user to blink or move before unlocking the system (presumably to avoid unlocking for still images); however, Jan found that if you hold a printed image of the user’s face in front of the camera and simply swipe a pencil down and up across the face, the system will unlock. Overall, this presentation was insightful, engaging, and generally amusing. It highlights how more effort needs to be placed on improving the security of biometric systems, and that they are not yet ready to be relied upon as the sole factor for authentication. I recommend that those interested in biometric security watch this presentation once the recording is available (NOTE: there is one slide that some may find objectionable).

The last day of the conference featured multiple talks about BIOS and UEFI security. The day kicked off with “How many million BIOSes would you like to infect?” presented by Corey Kallenberg and Xeno Kovah of LegbaCore (abstract listed here, slides available here). They showed how their “LightEater” System Management Mode (SMM) malware implant could operate with very high privilege and read everything from memory in a manner undetectable to the OS. They demonstrated this live on multiple laptops, including a “military grade” MSI system that was live-booting Tails. This could be used to steal GPG keys, passwords, or decrypted messages. They also showed how Serial-over-LAN could be used to exfiltrate data, including encrypting the data to bypass intrusion detection systems that look for specific signatures of this type of exploit. Their analysis showed that UEFI systems share similar code, meaning that many BIOSes are vulnerable to being hooked and implanted with LightEater. The aim of their presentation was to show that more attention should be paid to BIOS security.

When conducting application security reviews, threat modeling is used to understand the overall system and identify potential weaknesses in its security posture. The security techniques used to address those weaknesses also rely on some root of trust, be that a CA or the underlying local host/OS. This presentation highlights that when your root of trust is the local host and you are the victim of a targeted attack, the security measures you defined may be inadequate. Using defense-in-depth techniques along with other standard security best practices when designing your system can help minimize the impact of such attacks (for instance, using service-to-service authentication mechanisms that expire, are least-privileged, and are restricted server-side to expected client source locations, so that if a host is compromised in this way, its service authentication token is not useful from an external network).

Todd Baumeister
Web Security Researcher

Adobe “Hack Days”

Within the ASSET team, we apply different techniques to keep our skills sharp. One technique we use periodically is to hold a “hack day,” which is an all-day event for the entire team. They are similar in concept to developer hack days but they are focused on security. These events are used to build security innovation and teamwork.

As in many large organizations, there are always side projects that everyone would like to work on. But they can be difficult to complete, or even research, when the work has to be squeezed in between meetings and emails. The “free time” approach can be challenging, depending on the state of your projects and how much they eat into that free time. Therefore, we set aside one day every few months where the team is freed from all distractions and given the chance to focus on something of their choice for an entire day. That focus can generate a wealth of insight that more than compensates for the time investment.

There are very few rules for hack day. The general guidelines are:

  • The work has to be technical.
  • Researchers can work individually or as teams.
  • The work does not have to be focused on normally assigned duties. For instance, researchers can target any product or service within the company, regardless of whether they are the researcher assigned to it.
  • The Adobe world should be better at the end of the day. This can be a directly measurable achievement (e.g., new bugs, new tools, etc.). It can also be an indirect improvement, such as learning a new security skill set.
  • Researchers work from the same room(s) so that ideas can be exchanged in real time.
  • Lunch is provided so that people can stay together and share ideas and stories while they take a break from working.

Researchers are allowed to pick their own hack day projects. This is to encourage innovation and creative thinking. If a researcher examines someone else’s product or service, it can often provide a valuable third-party perspective. The outcomes from our hack days have included:

  • Refreshing the researcher’s experience with the existing security tools they are recommending to teams.
  • Trying out a new technique a researcher learned about at a conference but never had the chance to practically apply.
  • Allowing the team time to create a tool they have been meaning to build.
  • Allowing the team to dig into a product or service at a deeper level than just specs and obtain a broader view than what is gained by spot reviews of code. This helps the researcher give more informed advice going forward.
  • Providing an opportunity for researchers to challenge their assumptions about the product or service through empirical analysis.
  • And of course, team building.

A good example of the benefits of hack days comes from a pair of researchers who decided to perform a penetration test on an existing service. This service had already gone through multiple third-party consultant reviews and typically received a good health report. Therefore, the assumption was that it would be a difficult target because all the low-hanging fruit had already been eliminated. Nonetheless, the team was able to find several significant security vulnerabilities just from a single day’s effort.

While the individual bugs were interesting, what was more interesting was trying to identify why their assumption that the yield would be low was wrong. This led to a re-evaluation of the current process for that service. Should we rotate the consultancy? Were the consultants focused on the wrong areas? Why did the existing internal process fail to catch the bugs? How do we fix this going forward? This kind of insight, and the questions it prompted, more than justified a day of effort, and it was a rewarding find for the researchers involved.

With a mature application, experienced penetration testers often average less than one bug a day. Therefore, the hack day team may finish the day without finding any. But finding bugs is not the ultimate goal of a hack day. Rather, the goal is to allow researchers to gain a deeper understanding of the tools they recommend, the applications they work with, and the skills they want to grow. We have learned that a creative approach to security skill building is necessary for organizations, especially ones that have a wide product portfolio.

Given the outcomes we have achieved, a one-day investment is a bargain. While the team has the freedom to work on these things at any time, setting aside official days to focus solely on these projects helps accelerate innovation and research—and that’s of immense value to any organization. Hack days help our security team stay up to speed with Adobe’s complex technology stacks that vary across the company.  So if your organization feels trapped in the daily grind of tracking issues, consider a periodic hack day of your own to help your team break free of routine and innovate.

Peleus Uhley
Lead Security Strategist

“Hacker Village” at Adobe Tech Summit

During Adobe’s Tech Summit, hundreds of people from across the company visited the Hacker Village. Adobe Tech Summit is an annual gathering of product development teams from across all businesses and geographies. We get together to share best practices, information about the latest tools and techniques, and innovations to both inspire and educate.

The Hacker Village was designed to teach about the various attack types that could target our software and services. It consisted of six booths, each focused on a specific attack (cross-site scripting, SQL injection, etc.) or security-related topic.

The booths were designed to demonstrate a particular attack and give visitors the opportunity to try the attack for themselves, including attacks against web applications, cryptography, computer systems, and more. For instance, the RFID booth was designed to demonstrate how a potential attacker can steal information from RFID cards. Upon visiting the information booth, visitors chose an RFID card representing a super hero and were told to keep it hidden. Unbeknownst to our visitors, a volunteer “RFID thief” carried a high-powered RFID reader concealed in a messenger bag. By getting within two feet of a card, he was able to steal the information from it and display which super hero RFID cards had been compromised.

In the Wi-Fi booth, visitors learned how susceptible wireless networks are to attack. Lab participants were able to see which access points their own mobile devices had connected to in the past by intercepting the probe requests the devices sent. The wireless drone introduced visitors to the concept of war flying – mapping out wireless networks from the air. At the other booths, visitors exploited SQL injection using sqlmap, cross-site scripting using BeEF, system hacking using Armitage, and password cracking using John the Ripper.

By completing one or more of the labs, the participants had the opportunity to take home their very own suite of hacker tools.

The Hacker Village was a huge success. In just a three-hour time frame, it drew more than 325 visitors and 225 lab participants. Most participants completed multiple labs, with dozens visiting all six booths. The feedback was positive, and many people showed a strong interest in security after visiting one or more of the booths.

Taylor Lobb
Senior Security Analyst

Automating Credential Management

Every enterprise maintains a set of privileged accounts for a variety of use cases. They are essential to creating new builds, configuring application and database servers, and accessing various parts of the infrastructure at run time. These privileged accounts and passwords can be extremely powerful weapons in the hands of attackers, because they open access to critical systems and the sensitive information that resides on them. Moreover, stealing credentials is often seen as a way for cybercriminals to hide in plain sight, since their activity appears to be legitimate access to the system.

In order to support the scaling of our product development, we need to ensure that our environments remain secure while they grow to meet the increasing demands of our customers. For us, the best way to achieve this is by enforcing security at each layer and relying on automation to maintain security controls regardless of scaling needs. Tooling is an important part of enabling this automation, and password management solutions come to our aid here. Using a common tool for credential management is one method Adobe uses to help secure our environment, and proper password management helps make deployments more flexible. For example, we ensure that the access key and API key needed to authenticate to the backup database are not stored on the application server. As a defense-in-depth mechanism, we store the keys in a password manager and pull them at run time when the backup script on the server is executed. This way the keys live in one central location rather than being scattered across individual machines as we scale our application servers. Rotating these credentials becomes easier, and we can easily confirm that there are no cached credentials or misconfigured machines in the environment. We can also maintain a changeable whitelist of the application servers that may access the password manager, preventing access to the credentials from any IP address we do not trust.
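As a rough illustration of the pattern (the endpoint, secret names, and token variable below are hypothetical, not our actual tooling), a backup script following this approach would fetch its keys from the credential manager at run time instead of reading them from local configuration:

```python
# Hypothetical sketch: pull backup-database credentials from a central
# credential manager at run time so they never live on the application server.
import os
import requests

CRED_MANAGER_URL = "https://credman.internal.example.com/api/v1/secrets"  # hypothetical

def get_secret(name):
    # The server authenticates with a short-lived token injected at deploy time;
    # the credential manager also enforces its own whitelist of caller IPs.
    resp = requests.get(
        CRED_MANAGER_URL + "/" + name,
        headers={"Authorization": "Bearer " + os.environ["CREDMAN_TOKEN"]},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["value"]

if __name__ == "__main__":
    access_key = get_secret("backup-db/access-key")
    api_key = get_secret("backup-db/api-key")
    # ... run the backup; the keys are held only in memory for the duration ...
```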

If an attacker were able to access build machines, they could create malicious binaries that would appear to be signed by a legitimate source. This could enable them to distribute malware to unsuspecting victims very easily. We use two major functions of commercially available password managers to help secure our build environment. First, we leverage the credential management capability in order to avoid having credentials on any of our build servers. The goal here is similar to the use case above: we want to keep all keys off the servers and retrieve them only at run time. In order to support this, we’ve had to build an extensive library for the client-side components that need to pull credentials. This library allows us to constantly provision new virtual machines with a secure configuration and a robust communication channel to the credential manager. Adapting tooling in this way to suit our needs has been a recurring theme in our effort to find solutions to deployment challenges.

Our build environment also uses the remote access functionality provided by password managers, which allows users to open a remote session to a target machine using the password manager as a proxy. We ensure that this is the only mechanism by which engineers can access machines, and we maintain video recordings of the actions executed on the target machine. This gives us a clear audit trail of who accessed the machine, what they did, and when they logged out. Also, since the password manager initiates the remote session and handles authentication to the machine, none of the users or admins need to know the actual passwords. This prevents passwords from being written down or shared, and it makes them seamless to change as needed.

Credential management has become a challenge primarily because of the sheer number of passwords and keys out there. Given some of our use-cases we’ve found commercially available password management tools can help make deployments easier in the long-term.  Adobe is a large organization with unique products that have very different platforms – having a central location for dealing with password management can help solve some of the challenges that we face as a services company.  As we look to expand each service, we will continue to adapt our usage of tools like these so that we can help keep our infrastructure safe and provide a more secure experience to all our customers.

Pranjal Jumde and Rajat Shah
ASSET Security Researchers

Adobe Shared Cloud Now SOC2 Security Type 1 Compliant

We are very happy to report that KPMG LLP has completed their attestation and issued the final Type 1 SOC2 Security report for Adobe’s Digital Media Shared Cloud.

Adobe’s Shared Cloud is the infrastructure component supporting the Adobe Creative Cloud. Adobe Creative Cloud teams can build their product and service offerings on top of the pluggable platform provided by the Shared Cloud.

Completion of this project is a very important first step in the compliance roadmap for Adobe Creative Cloud. Any Adobe service that leverages the Shared Cloud as its data repository platform and Adobe Cloud Operations for its cloud operations will inherit the controls that are in scope for this Type 1 SOC2 Security report.

Several Adobe teams worked closely together to ensure the successful completion of the project. The teams will now focus on completing the Type 2 attestation in 2015.

A big thanks to everyone involved.

Abhi Pandit
Sr. Director of Risk Advisory and Assurance

Join Us at ISSE EU in Brussels October 14 – 15!

Adobe will be participating again this year in the ISSE EU conference in Brussels, Belgium, Oct. 14-15, 2014. This conference attracts senior IT security decision makers from a wide range of industries and governmental organizations. There are numerous sessions tackling many of the current hot topics in security, including cloud security, identity management, the Internet of Things (IoT), data protection & privacy, compliance & regulation, and the changing role of IT security professionals.

Adobe will be talking about a few of our security initiatives and programs during the event, specifically highlighting our security training program, which I currently manage. The materials from this program now form the basis of the free, open-source security training program from SAFECode (https://training.safecode.org). Many organizations have used these materials to develop their own security training programs. I will be available on-site to answer questions about these programs.

We will also have three sessions during the conference. Director of Product Security David Lenoe will deliver a keynote on “Maintaining a Security Organization That Can Adapt to Change” on Tuesday, Oct. 14, at 11:45 a.m. According to Forrester Research, “51% of organizations said it’s a challenge or major challenge to hire security staff with the right skills” – and keeping them happy, productive, and nimble is also a major challenge. This session will discuss Adobe’s approach to addressing these issues in our organization, which we believe may provide valuable insight into handling them in your own.

On Tuesday at 3:10 p.m., Mohit Kalra, senior manager for secure software engineering, will provide insight into “Deciding the Right Metrics & Dashboards for Security Success.” This session will discuss what makes a “good” security roadmap and then how to properly measure and share progress against that roadmap to help ensure success.  

Last but not least, on Wednesday, Oct. 15, at 2:40 p.m., I will discuss how “Building Security In Takes Everyone Thinking Like a Security Pro.” While we realize this is a mouthful, it’s probably the best description I can give for the goal of the ASSET Certification Program (http://blogs.adobe.com/security/2013/05/training-secure-software-engineers-part-1.html) at Adobe. We as an industry not only need to increase our security fluency, we also need people who can look at the product they are working on with a hacker’s eye and raise a flag when they see something that may become an issue in the future.

In this talk, I will dedicate most of the time to the experiential elements of the program that give us the ability to build our experts. For example, some people have taught themselves how to perform manual penetration testing. On the flip side, there are a lot of projects where candidates have created ways to automate scanning or other processes. One of the more innovative projects was the creation of the Hackfest (http://blogs.adobe.com/security/?s=hackfest&submit=). As one security champion, Elaine Finnell, puts it, “For myself, pursuing the brown belt (in the program) has pushed me beyond simply absorbing information and into doing. Similar to how a science classroom has a lab, putting the information I learn both during the training and during outside trainings into practice helps to solidify my understanding of security principles. While I’m still not an expert on executing penetration testing, fuzzing, or architecture analysis, every experience I have doing this type of work alongside experts serves to improve my ability to be a security champion within my team.”

I love to talk about this stuff. I’ll be available in Adobe’s booth on the expo floor, so if you’re going to be there, please hit me up. I’m also available on Twitter – @JoshKWAdobe. More information about the training program can also be found in our new white paper available at http://www.adobe.com/content/dam/Adobe/en/security/pdfs/adobe-security-training-wp-web.pdf and on the Security@Adobe blog (http://blogs.adobe.com/security/2013/05/training-secure-software-engineers-part-1.html).

You can follow @AdobeSecurity for the latest happenings during ISSE EU as we will be live tweeting during the event – look for the hashtag #AdobeISSE. Also, more information about all of our security initiatives can be found at http://www.adobe.com/security.   

 


Josh Kebbel-Wyen
Senior Security Program Manager

Top 10 Hacking Techniques of 2013: A Few Things to Consider in 2014

For the last few years, I’ve been a part of the annual ranking of top 10 web hacking techniques organized by WhiteHat Security. Each year, it’s an honor to be asked to participate, and this year is no different. Not only does judging the Top 10 Web Hacking Techniques allow me to research these potential threats more closely, it also informs my day-to-day work.

WhiteHat’s Matt Johansen and Johnathan Kuskos have provided a detailed overview of the top 10 with some highlights available via this webinar.  This blog post will further describe some of the lessons learned from the community’s research.

1. XML-based Attacks Will Receive More Attention

This year, two of the top 15 focused on XML-based attacks. XML is the foundation of a large portion of the information we exchange over the Internet, making it an important area of study.

Specifically, both researchers focused on XML External Entities. In terms of practical applications of their research, last month Facebook gave out their largest bug bounty yet for an XML external entity attack. The Facebook attack demonstrated an arbitrary file read that they later re-classified as a potential RCE bug.

Advanced XML features such as XML external entities, XSLT and similar options are very powerful. If you are using an XML parser, be sure to check which features can be disabled to reduce your attack surface. For instance, the Facebook patch for the exploit was to set libxml_disable_entity_loader(true).
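The same principle applies outside of PHP/libxml. As one hedged illustration, a Python service using lxml can refuse to expand external entities or touch the network while parsing untrusted input:

```python
# Sketch: parse untrusted XML with external entity expansion disabled (lxml).
from lxml import etree

untrusted = b"""<?xml version="1.0"?>
<!DOCTYPE foo [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<foo>&xxe;</foo>"""

# resolve_entities=False keeps entity references unexpanded, and
# no_network=True stops the parser from fetching remote DTDs/entities.
parser = etree.XMLParser(resolve_entities=False, no_network=True)
root = etree.fromstring(untrusted, parser)
print(etree.tostring(root))  # the &xxe; reference is not expanded into /etc/passwd
```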

In addition, JSON is becoming an extensively used alternative to XML. As such, the JSON community is adding similar features to the JSON format. Developers will need to understand all the features that their JSON parsers support to ensure that their parsers are not providing more functionality than their APIs are intended to support.

2. SSL Takes Three of the Top 10 Spots

In both the 2011 and 2012 Top 10 lists, SSL attacks took the top spot. For the 2013 list, three attacks on SSL made the top 10: Lucky 13, BREACH, and weaknesses in RC4. Advances in research always lead to further advances, and in fact the industry has already seen its first new report against SSL in 2014. It is hard to predict how much farther and faster this research will advance, but it is safe to assume that it will.

Last year at BlackHat USA, Alex Stamos, Thomas Ptacek, Tom Ritter and Javed Samuel presented a session titled “The Factoring Dead: Preparing for the Cryptopocalypse.” In the presentation, they highlighted some of the challenges that the industry is facing in preparing for a significant breach of a cryptographic algorithm or protocol. Most systems are not designed for cryptographic agility and updating cryptography requires a community effort.

These three Top 10 entries further highlight the need for our industry to improve crypto agility within our critical infrastructure. Developers and administrators should start examining their environments for TLS v1.2 support; all major browsers currently support this protocol. Also, review your infrastructure to determine whether you could easily adopt future versions of TLS and/or different cryptographic ciphers for your TLS communication. The OWASP Transport Layer Protection Cheat Sheet provides more information on steps to harden your TLS implementation.
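As a quick starting point, the sketch below (a simple probe, not a full configuration audit) checks whether a server will complete a handshake when the client is restricted to TLS 1.2:

```python
# Sketch: check whether a server can negotiate TLS 1.2 (Python 3.5+).
import socket
import ssl

def supports_tls12(host, port=443):
    # PROTOCOL_TLSv1_2 restricts the handshake to TLS 1.2 only, so a successful
    # connection means the server supports it. Certificate validation is left
    # at the bare-context defaults because this only probes protocol support.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version() == "TLSv1.2"
    except (ssl.SSLError, socket.error):
        return False

if __name__ == "__main__":
    print(supports_tls12("example.org"))
```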

3. XSS Continues to Be a Common Concern for Security Professionals

We’ve known about cross-site scripting (XSS) in the community for over a decade, but it’s interesting that people still find innovative ways to both produce and detect it. At the most abstract level, solving the problem is complex because JavaScript is a Turing-complete language under active development. HTML5 and CSS3 are on the theoretical edge of Turing-completeness, in that you can implement Rule 110 so long as you have human interaction. Therefore, in theory, you could not make an absolute statement about the security of a web page without solving the halting problem.

The No. 1 entry in the Top 10 this year demonstrated that this problem is further complicated by the fact that browsers try to automatically correct bad code: what you see in the written code is not necessarily what the browser interprets at execution. To solve this, any static analysis approach would not only need to know the language but also how the browser will rewrite any flaws.

This is why HTML5 security advances such as Content Security Policy (CSP) and iframe sandboxes are so important (as are non-standards-based protections such as X-XSS-Protection). Static analysis will help you find many of your flaws; however, due to all the variables at play, it cannot guarantee a flawless site. Additional mitigations like CSP lessen the real-world exploitability of any flaws that remain in the code.
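As a small, illustrative sketch (the policy below is a placeholder to adapt, not a recommended production policy), layering these mitigations onto responses might look like this in a Flask application:

```python
# Sketch: attach CSP and related headers to every response as defense in depth.
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_security_headers(response):
    # Only allow scripts from this origin and disallow inline script, so an
    # injected <script> from a missed XSS flaw should not execute.
    response.headers["Content-Security-Policy"] = "default-src 'self'; script-src 'self'"
    # Non-standards-based filter hint for older browsers, as mentioned above.
    response.headers["X-XSS-Protection"] = "1; mode=block"
    return response

@app.route("/embed")
def embed():
    # Untrusted third-party content can additionally be isolated in a sandboxed iframe.
    return '<iframe sandbox="allow-scripts" src="https://example.org/widget"></iframe>'
```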

These were just a few of the things I noticed as a part of the panel this year. Thanks to Jeremiah Grossman, Matt Johansen, Johnathan Kuskos and the entire WhiteHat Security team for putting this together. It’s a valuable resource for the community – and I’m excited to see what makes the list next year.

Peleus Uhley

Lead Security Strategist