Posts in Category "Security"

Applying the SANS Cybersecurity Engineering Graduate Certificate to Adobe’s Secure Product Lifecycle (part 1 of 2)

In the constantly changing world of product security, it is critical for development teams to stay current on trends in cybersecurity. The Adobe Photoshop team regularly evaluates additional training programs to complement Adobe’s ASSET Software Security Certification Program. One of those is the SANS Cybersecurity Engineering Graduate Certificate program. This blog series discusses how we are leveraging the knowledge from this program to help improve product security for Adobe Photoshop.

The SANS Cybersecurity Engineering Graduate Certificate is a three-course certificate that offers hands-on, practical security training – such as the proper use of static code analysis. A best practice of modern software development is to perform static code analysis early in the software development lifecycle, before the code is released to quality engineering. On the Photoshop team we use static code analysis regularly in our continuous build environment. This analysis helps ensure that if any new defects are introduced during development, they can be immediately fixed by the engineer who added them. This allows the quality engineering team to focus on automation, functional testing, usability testing and other aspects of overall quality instead of, for example, accidental NULL dereferences.
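
As a concrete illustration, here is a minimal Python sketch of the kind of defect a static analyzer flags before code ever reaches quality engineering (Photoshop itself is a C++ codebase, so treat this as an analogy): a type checker such as mypy will warn that `render` may dereference None.

```python
from typing import Optional

class Thumbnail:
    def __init__(self, width: int, height: int) -> None:
        self.width = width
        self.height = height

def load_thumbnail(path: str) -> Optional[Thumbnail]:
    """Returns None when the file is missing or unreadable."""
    try:
        with open(path, "rb") as f:
            f.read()  # real parsing elided in this sketch
        return Thumbnail(128, 128)
    except OSError:
        return None

def render(path: str) -> None:
    thumb = load_thumbnail(path)
    # A static analyzer (e.g. `mypy --strict`) flags the next line:
    # `thumb` may be None here – the Python analog of a NULL dereference.
    print(thumb.width, thumb.height)
```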

In addition to the course material and labs, graduate students are asked to write peer-reviewed research papers. I am primarily responsible for the security of the Adobe Photoshop CC desktop application, and I developed my research paper based upon my experiences in that role. When the Heartbleed bug was disclosed in April 2014, I was curious why this type of bug wasn’t caught by static analysis tools. I chose to examine this question and how it applies to Photoshop.

The resulting paper, The Role of Static Analysis in Heartbleed, showed that Heartbleed wasn’t initially caught by static analysis tools. One reason is that a key goal of static analysis is to avoid generating so many false positives that engineers must sift through noise. To address this, we asked Coverity, the vendor of one of the popular static analysis tools, to add a new TAINTED_SCALAR checker that was general enough to detect not only Heartbleed but also other potential byte-swap defects. Andy Chou’s blog post details how targeting byte-swap operations in general, rather than making the checker specific to Heartbleed, lets other software development teams benefit as well. This idea was proven correct when the Photoshop team applied the latest release of Coverity’s tools, including the checker we requested, to our codebase: we identified and fixed a number of issues flagged by the new TAINTED_SCALAR checker.
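
The underlying pattern is a “tainted scalar”: a length field read from untrusted input and used without validation. Heartbleed itself is a C bug (a memcpy read past the end of a heap buffer), but the logic translates; in this hypothetical Python sketch, a reused shared buffer plays the role of adjacent heap memory.

```python
import struct

# A reusable buffer, standing in for the process heap in this analogy.
# It may still hold bytes from earlier requests by other users.
_shared_buf = bytearray(64 * 1024)

def handle_heartbeat(request: bytes) -> bytes:
    # First two bytes: attacker-supplied payload length (a tainted scalar).
    (claimed_len,) = struct.unpack(">H", request[:2])
    payload = request[2:]
    _shared_buf[: len(payload)] = payload
    # BUG: claimed_len is never checked against len(payload), so the
    # response can include stale bytes left over from earlier requests.
    return bytes(_shared_buf[:claimed_len])

def handle_heartbeat_fixed(request: bytes) -> bytes:
    (claimed_len,) = struct.unpack(">H", request[:2])
    payload = request[2:]
    if claimed_len > len(payload):  # validate the tainted scalar
        raise ValueError("declared length exceeds actual payload")
    _shared_buf[: len(payload)] = payload
    return bytes(_shared_buf[:claimed_len])
```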

The value of additional training can only be fully realized if you can apply the knowledge to problems found on the job. This is one of the advantages of the SANS program – the practical application of this knowledge through a research paper makes the program more valuable to my work.

In part 2 of this blog series, I will examine how the NetWars platform was used to help the overall security profile of Adobe Photoshop.

Jeff Sass
Engineering Manager, Photoshop

An Industry Leader’s Contributions

In the security industry, we’re focused on the impact of offensive advancements and how best to adapt defensive strategies, without much reflection on how our industry has evolved. I wanted to take a moment to reflect on the history of our industry in the context of one individual’s contributions.

After many years in the software engineering and security business, Steve Lipner, Partner Director of Program Management, will retire from Microsoft this month. Steve’s contributions to the security industry are many and far-reaching. Many of the concepts he helped develop form the basis for today’s approach to building more secure systems.

In the early 2000s, Steve suffered through Code Red and Nimda, two worms that affected Microsoft Internet Information Server 4.0 and 5.0. In January 2002, when Bill Gates issued his “Trustworthy Computing” memo shifting the company’s focus from adding features to pursuing secure software, Steve and his team went to work training thousands of developers and started a radical series of “security pushes” that enabled Microsoft to change its corporate culture to emphasize product security.

Steve likes to joke that he started running the Microsoft Security Response Center (MSRC) when he was 32; the punchline being that the retirement-aged person he is today is strictly due to the ravages of the job. Microsoft security was once called one of the hardest jobs out there and Steve’s work is truly an inspiration.

The Security Development Lifecycle (SDL) is the process that emerged from these security improvements. Steve’s team has been responsible for the application of the SDL process across Microsoft, while also making it possible for hundreds of security organizations to adopt it or, like Adobe, use it as a model for their respective secure product engineering frameworks.

Along with Michael Howard, Lipner co-authored the book The Security Development Lifecycle, and he is named as inventor on 12 U.S. patents and two pending applications in the field of computer and network security. He served two terms on the United States Information Security and Privacy Advisory Board and its predecessor. I’ve had the pleasure of working with Steve on the board of SAFECode – the Software Assurance Forum for Excellence in Code – a non-profit dedicated to the advancement of effective software assurance methods.

I’d like to thank Steve for all of the important contributions he has made to the security industry.

Brad Arkin
Vice President & CSO


Adobe Document Cloud Security Overview Now Available

A white paper detailing the security features and architecture of core Adobe Document Cloud services is now available. The new Adobe Document Cloud combines a completely re-imagined Adobe Acrobat with the power of e-signatures. Now you can edit, sign, send and track documents wherever you are – across desktop, mobile and web. This paper covers the key regulations and standards Document Cloud adheres to and the security architecture of the offering, and describes its core capabilities for protecting sensitive information. You can download this paper now.

Chris Parkerson
Senior Marketing Strategy Manager

Top 10 Web Hacking Techniques of 2014

This year, I once again had the privilege to be one of the judges for the “Top 10 Web Hacking Techniques” list, organized by Matt Johansen and Johnathan Kuskos of the WhiteHat Security team. This is a great honor and a lot of fun, although the task of voting also requires a lot of reflection. A significant amount of work went into finding the issues, and that should be respected in the analysis for the top spot. This blog reflects my personal interpretation of the nominees this year.

My first job as a judge is to establish my criteria for judging. For instance:

  • Did the issue involve original or creative research?
  • What was the ultimate severity of the issue?
  • How many people could be affected by the vulnerability?
  • Did the finding change the conversation in the security community?

The last question is what made judging this year’s entries different from previous years. Many of the bugs were creative and could be damaging for a large number of people. However, for several of the top entries, the attention they received helped change the conversation in the security industry.

A common trend in this year’s top 10 was the need to update third-party libraries. Obviously, Heartbleed (#1) and POODLE (#3) brought attention to keeping OpenSSL up-to-date. However, the details of the Misfortune Cookie attack (#5) included the following:

AllegroSoft issued a fixed version to address the Misfortune Cookie vulnerability in 2005, which was provided to licensed manufacturers. The patch propagation cycle, however, is incredibly slow (sometimes non-existent) with these types of devices. We can confirm many devices today still ship with the vulnerable version in place. 

Third-party libraries can be difficult to track and maintain in large organizations and large projects. Kymberlee Price and Jake Kouns spent the year giving great presentations on the risks of third-party code and how to deal with it.

Heartbleed and Shellshock were also part of the year of making attacks media-friendly by providing designer logos. Many of us rolled our eyes at how the logos drew additional media attention to the issues. Still, it is impossible to ignore how the added attention helped expedite difficult projects such as the deprecation of SSLv3. Looking beyond the logos, these bugs had other attributes that made them accessible in terms of tracking and understanding their severity. For instance, besides a memorable name, Heartbleed included a detailed FAQ which helped quickly explain the bug’s impact. Typically, a researcher would have had to dig through source code changelists, which is difficult, or consult Heartbleed’s CVSS score (5 out of 10), which can be misleading. Once you remove the cynicism from the logo discussion, the question that remains is: what can the industry learn from these events about communicating critical information to a mass audience?

In addition, these vulnerabilities brought attention to the discussion around the “many eyes make all bugs shallow” theory. Shellshock was a vulnerability that went undetected for years in the default shell used by most security engineers. Once security engineers began reviewing the code affected by Shellshock, three other CVEs were identified within the same week. The remote code execution in the Apache Struts ClassLoader (#8) was another example of a vulnerability in a popular open-source project. The Heartbleed vulnerability prompted the creation of the Core Infrastructure Initiative to formally assist projects like OpenSSL, OpenSSH and the Network Time Protocol. Prior to the CII, OpenSSL received only about $2,000 per year in donations. The CII funding makes it possible to pursue projects such as having the NCC Group’s consultants audit OpenSSL.

In addition to bugs in third-party libraries, there was also some creative original research. For instance, the Rosetta Flash vulnerability (#4) combined the fact that JSONP endpoints allow attackers to control the first few bytes of a response with the fact that the ZLIB compression format lets you choose the characters that appear in compressed output. Combining these two properties meant that an attacker could bounce a specially crafted, ZLIB-compressed SWF file off a JSONP endpoint and have it execute in the context of that site’s domain. This technique worked on JSONP endpoints for several popular websites. Rather than asking JSONP endpoints to add data validation, Adobe changed Flash Player to restrict the types of ZLIB-compressed data accepted in SWFs.
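
To make the first ingredient concrete, here is a minimal, hypothetical Python sketch of a JSONP handler. The vulnerable version reflects the attacker-chosen callback name as the very first bytes of the response body; the hardened version shows the mitigation many sites adopted after Rosetta Flash – whitelist the callback characters and prepend a constant prefix so the response can never begin with attacker-controlled bytes.

```python
import json
import re

def jsonp_response(callback: str, data: dict) -> str:
    # Vulnerable pattern: the attacker-supplied callback name becomes the
    # first bytes of the response body – e.g. "CWS..." (a compressed SWF
    # header) followed by carefully chosen alphanumeric SWF content.
    return f"{callback}({json.dumps(data)})"

def jsonp_response_hardened(callback: str, data: dict) -> str:
    # Whitelist the callback and prepend a constant prefix so the first
    # bytes of the response are never attacker-controlled.
    if not re.fullmatch(r"[A-Za-z0-9_.]{1,64}", callback):
        raise ValueError("invalid JSONP callback")
    return f"/**/{callback}({json.dumps(data)})"
```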

The 6th and 7th entries on the list both dealt with authentication flaws, reminding us that authentication systems are a complex network of trust. The research into “Hacking PayPal with One Click” (#6) combined three different bugs to create a CSRF attack against PayPal. While the details around the “Google Two-Factor Authentication Bypass” (#7) weren’t completely clear, it also reminded us that many trust systems are chained together. Two-factor authentication systems frequently rely on your phone. If you can social engineer a mobile carrier into redirecting the victim’s account, then you can subvert the second factor in two-factor authentication.

The last two entries dealt with subtler problems than remote code execution. Both show how little things can matter. The Facebook DDoS attack (#9) leveraged the Notes service’s simple support for image tags: include enough image tags across enough notes, and you could get over 100 Facebook servers generating traffic toward the target. Lastly, “Covert Timing Channels based on HTTP Cache Headers” (#10) looked at ways hidden messages can be conveyed via headers that would otherwise be ignored in most traffic analysis.
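
Based on the published research, the amplification relied on cache-busting: giving every image URL a unique query string forces the note-rendering servers to fetch the target afresh each time. A hypothetical sketch of the note payload generation:

```python
import uuid

def build_note_html(target_url: str, n_tags: int = 1000) -> str:
    # Each unique query string defeats caching, so every <img> tag
    # triggers a fresh server-side fetch of the target URL.
    tags = (
        f'<img src="{target_url}?cachebust={uuid.uuid4().hex}">'
        for _ in range(n_tags)
    )
    return "\n".join(tags)
```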

Overall, this year was interesting in terms of how the bugs changed our industry. For instance, the fact that a large portion of the industry depended on OpenSSL was well known. However, without Heartbleed, the funding for a formal security audit by a major consulting firm might never have materialized. Research from POODLE demonstrated that significant sites in the Alexa Top 1000 hadn’t adopted TLS, which has been around since 1999. POODLE helped force the industry to accelerate the migration off of SSLv3 and onto TLS. In February, the PCI Standards Council announced, “because of these weaknesses, no version of SSL meets PCI SSC’s definition of ‘strong cryptography.’” When a researcher’s work identifies a major risk, that is clearly important within the scope of that one product or service. When a researcher’s work can help change the course of the industry, that is truly remarkable.

For those attending RSA Conference, Matt Johansen and Johnathan Kuskos will be presenting the details of the Top 10 Web Hacking Techniques of 2014 on April 24 at 9:00 AM.


Peleus Uhley
Lead Security Strategist

Adobe @ the Women in Cybersecurity Conference (WiCyS)

Adobe sponsored the recent Women in Cybersecurity (WiCyS) conference held in Atlanta, Georgia. Alongside two of my colleagues, Julia Knecht and Kim Rogers, I had the opportunity to attend the conference and meet the many talented women in attendance.

The overall enthusiasm of the conference was incredibly positive. From the presentations and keynotes to the hallway conversations in between, discussion focused on spreading general knowledge about the information security sector and on the industry’s growing need for talent, which dovetailed into the many programs and recruiting efforts helping more women and minorities who are focused on security enter and stay in the field. It was very inspiring to see so many women interested in and working in security.

One of the first keynotes, presented by Jenn Lesser Henley, Director of Security Operations at Facebook, immediately set the inspiring tone of the conference with a motivational presentation that debunked the myths of why people don’t see security as an appealing job field. She noted the need for better ‘stock images,’ which currently portray security work as someone in a balaclava, alone in a dark room at a computer – far from the collaborative, engaging environment in which security actually happens. The security field is so vast and growing in so many directions that the variety of jobs, skills and people needed to meet this growth is as exciting as it is challenging. Jenn addressed the diversity gap of women and minorities in security and challenged the audience to take action to reduce that gap…immediately. To do so, she encouraged women and minorities to dispel the unappealing aspects of the cybersecurity field by surrounding themselves with the needed support – a personal cheerleading team – in order to approach each day with an awesome attitude.

Attendee representation seemed equally split across industry, government and academia. There was definitely a common goal across all of us participating in the Career and Graduate School Fair: to enroll and/or hire the many talented women and minorities in the cybersecurity field, no matter the company, organization, or university. My advice to many attendees was simple: apply, apply, apply.

Other notable keynote speakers included:

  • Sherri Ramsay of CyberPoint who shared fascinating metrics on cyber threats and challenges, and her thoughts on the industry’s future. 
  • Phyllis Schneck, the Deputy Under Secretary for Cybersecurity and Communications at the Department of Homeland Security, who spoke to the future of DHS’ role in cybersecurity and the goal to further build a national capacity to support a more secure and resilient cyberspace.  She also gave great career advice to always keep learning and keep up ‘tech chops’, to not be afraid to experiment, to maintain balance and find more time to think. 
  • Angela McKay, Director of Cybersecurity Policy and Strategy at Microsoft, spoke about the need for diverse perspectives and experiences to drive cyber security innovations.  She encouraged women to recognize the individuality in themselves and others, and to be adaptable, versatile and agile in changing circumstances, in order to advance both professionally and personally. 

Finally, alongside Julia Knecht from our Digital Marketing security team, I presented a workshop titled “Security Management in the Product Lifecycle.” We discussed how to build and reinforce a security culture in order to keep a healthy security mindset across a company or organization and throughout one’s career path. Drawing on our own experiences working on security at Adobe, we engaged in a great discussion with the audience on which security programs and processes to put into place that advocate, create, establish, encourage, inspire, prepare, drive and connect us to the ever-evolving field of security. Moreover, we emphasized the importance of communicating about security both internally within an organization and externally with the security community. This promotes a collaborative, healthy forum for security discussion and encourages more people to engage and become involved.

All around, the conference was incredibly inspiring and a great stepping stone to help attract more women and minorities to the cyber security field.

Wendy Poland
Product Security Group Program Manager

Re-assessing Web Application Vulnerabilities for the Cloud

As I have been working with our cloud teams, I have found myself constantly reconsidering my legacy assumptions from my Web 2.0 days. I discussed a few of the high-level ones previously on this blog. For OWASP AppSec California in January, I decided to explore the impact of the cloud on Web application vulnerabilities. One of the assumptions that I had going into cloud projects was that the cloud was merely a hosting provider layer issue that only changed how the servers were managed. The risks to the web application logic layer would remain pretty much the same. I was wrong.

One of the things that kicked off my research in this area was watching Andres Riancho’s “Pivoting in Amazon Clouds” talk at Black Hat last year. He had found a remote file include vulnerability in an AWS-hosted Web application he was testing. Basically, the attacker convinces the Web application to act as a proxy and fetch the contents of remote sites. Typically, this vulnerability could be used for cross-site scripting or defacement, since the attacker could get the contents of the remote site injected into the context of the current Web application. Riancho was able to use the vulnerability to reach the metadata server for the EC2 instance and retrieve AWS configuration information. From there, he was able to use that information, along with a few of the client’s defense-in-depth issues, to escalate into taking over the entire AWS account. The possible impact of this class of vulnerability has therefore increased.
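
To sketch why such a proxying bug escalates on EC2: the instance metadata service answers on a well-known link-local address, so any server-side fetch the attacker can steer toward it may disclose the temporary credentials for the instance’s IAM role. The `fetch_url` proxy below is a hypothetical stand-in for the vulnerable application behavior.

```python
import requests

METADATA = ""

def fetch_url(url: str) -> str:
    # Hypothetical remote-file-include / SSRF primitive: the web app
    # fetches whatever URL the attacker supplies and returns the body.
    return requests.get(url, timeout=5).text

# From outside, an attacker aims the proxy at the metadata service:
role = fetch_url(f"{METADATA}/meta-data/iam/security-credentials/").strip()
creds = fetch_url(f"{METADATA}/meta-data/iam/security-credentials/{role}")
# `creds` now holds temporary AWS keys for the instance's IAM role –
# the foothold that allowed escalation beyond the web application itself.
```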

The cloud also involves migration to a DevOps process. In the past, a network layer vulnerability led to network layer issues, and a Web application layer vulnerability led to Web application vulnerabilities. Today, since the scope of these roles overlap, a breakdown in the DevOps process means network layer issues can impact Web applications.

One vulnerability making the rounds recently stems from just such a breakdown in the DevOps process. The flow of the issue is as follows (the bucket and domain names below are illustrative placeholders):

  1. The app/product team requests an S3 bucket, e.g. bucketname.s3.amazonaws.com.
  2. The app team asks the IT team to register a DNS name (e.g. files.example.com) pointing to the bucket, because a custom corporate domain will make things clearer for customers.
  3. Time elapses, and the app team decides to migrate to a new bucket, e.g. bucketname2.s3.amazonaws.com.
  4. The app team requests from the IT team a new DNS name (e.g. files2.example.com) pointing to this new bucket.
  5. After the transition, the app team deletes the original bucket.

This all sounds great – except that in this workflow the application team didn’t inform IT, so the original DNS entry was never deleted. An attacker can now register the abandoned bucket name and fill it with malicious content. Since the old DNS name still points there, the attacker can convince end users that the malicious content belongs to the company.

This exploit is a defining example of why DevOps needs to exist within an organization. The flaw in this situation is a disconnect between the IT/Ops team that manages the DNS server and the app team that manages the buckets. The result of this disconnect can be a severe Web application vulnerability.
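
One way to catch this disconnect is to routinely audit vanity DNS records against the buckets the organization still owns. Below is a rough sketch (all names hypothetical) using dnspython and boto3; it assumes AWS credentials that can call HeadBucket on buckets you own.

```python
from typing import Dict, List

import boto3
import dns.resolver
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_is_ours(bucket: str) -> bool:
    try:
        s3.head_bucket(Bucket=bucket)  # succeeds only for buckets we can access
        return True
    except ClientError:
        return False  # missing (claimable) or owned by someone else

def find_dangling(cnames: Dict[str, str]) -> List[str]:
    """cnames maps vanity DNS names to the S3 bucket each points at."""
    dangling = []
    for name, bucket in cnames.items():
        try:
            dns.resolver.resolve(name, "CNAME")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            continue  # DNS record already cleaned up – nothing to hijack
        if not bucket_is_ours(bucket):
            dangling.append(name)  # DNS still live, bucket gone: review now
    return dangling
```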

Many cloud migrations also involve switching from SQL databases to NoSQL databases. In addition to following the hardening guidelines for the respective databases, it is important to look at how developers are interacting with these databases.

Along with new NoSQL databases come a ton of new methods for applications to interact with them. For instance, the Unity JDBC driver allows you to write traditional SQL statements for use with the MongoDB NoSQL database. Developers also have the option of using REST frontends for their database. It is clear that a security researcher needs to know how an attacker might inject into the statements for their specific NoSQL server. However, it is also important to look at the way the application sends NoSQL statements to the database, as that can add additional attack surface.

NoSQL databases can also take existing risks and put them in a new context. For instance, in the context of a webpage, a malicious injection into an eval() call results in cross-site scripting (XSS). In the context of MongoDB’s server-side JavaScript support, a malicious injection into eval() could allow server-side JavaScript injection (SSJI). Therefore, database developers who choose not to disable JavaScript support need to be trained on JavaScript injection risks when using statements like eval() and $where, or when using a DB driver that exposes the Mongo shell. Existing JavaScript training on eval() would need to be modified for the database context, since the MongoDB implementation is different from the browser version.
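
As a minimal sketch of the difference using PyMongo (collection and field names are hypothetical): the first query splices user input into server-side JavaScript – the database analog of injecting into a browser eval() – while the second expresses the same filter as a plain query document, the usual alternative given that most NoSQL databases lack prepared statements.

```python
from pymongo import MongoClient

db = MongoClient()["appdb"]

def find_orders_vulnerable(user_input: str):
    # Server-side JavaScript injection (SSJI): input such as
    # "x' || '1'=='1" rewrites the predicate, and heavier payloads can
    # run arbitrary JavaScript on the database server.
    return db.orders.find({"$where": f"this.owner == '{user_input}'"})

def find_orders_safe(user_input: str):
    # Same filter as a query document: the input is treated purely as
    # data and is never evaluated as JavaScript.
    return db.orders.find({"owner": user_input})
```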

My original assumption that a cloud migration was primarily an infrastructure issue was false. While many of these vulnerabilities were always present and always severe, the migration to cloud platforms and processes means these bugs can manifest in new contexts, with new consequences. Existing recommendations will need to be adapted for the cloud. For instance, many NoSQL databases do not support the concept of prepared statements, so alternative defensive methods will be required. If your team is migrating an application to the cloud, it is important to revisit your threat model approach for the new deployment context.

Peleus Uhley
Lead Security Strategist

Adobe @ CanSecWest 2015

Along with other members of the ASSET team, I recently attended CanSecWest 2015, an annual security conference held in Vancouver, Canada.  Pwn2Own is also co-located in the same venue as CanSecWest (a summary of this year’s results can be found here).  This was my first time attending CanSecWest and I found that I enjoyed the single-track style of the conference (it reminded me of the IEEE Symposium on Security and Privacy, which is also a small, single-track conference, though more academic in content).

Overall, there were some great presentations. “I see, therefore I am… You,” presented by Jan “starbug” Krissler of T-Labs/CCC (abstract listed here), detailed techniques for using high-resolution images to authenticate to biometric systems, such as fingerprint readers, iris scanners, and facial recognition systems. Given the advancements in high-resolution cameras, the necessary base images can even be taken from a distance. One can also use high-resolution still images, such as political campaign posters, or high-resolution video. Using such images, in some cases one can directly authenticate to the biometric system. In one example, the face recognition software required the user to blink or move before unlocking the system (presumably to avoid unlocking for still images); however, Jan found that if you hold a printed image of the user’s face in front of the camera and simply swipe a pencil down and up across the face, the system will unlock. Overall, this presentation was insightful, engaging, and generally amusing. It highlights that more effort needs to be placed on improving the security of biometric systems and that they are not yet ready to be solely relied upon for authentication. I recommend that those interested in biometric security watch this presentation once the recording is available (NOTE: there is one slide that some may find objectionable).

The last day of the conference had multiple talks about BIOS and UEFI security.  The day was kicked off with the presentation entitled “How many million BIOSes would you like to infect?” presented by Corey Kallenberg and Xeno Kovah of LegbaCore (abstract listed here, slides available here).  They showed how their “LightEater” System Management Mode (SMM) malware implant could operate with very high privilege and read everything from memory in a manner undetectable to the OS.  They demonstrated this live on multiple laptops, including a “military grade” MSI system that was running Tails via live boot.  This could be used to steal GPG keys, passwords, or decrypted messages.  They also showed how Serial-over-LAN could be used to exfiltrate data, including the ability to encrypt the data so as to bypass intrusion detection systems that are looking for certain signatures to identify this type of exploit.  Their analysis showed that UEFI systems share similar code, meaning that many BIOSes are vulnerable to being hooked and implanted with LightEater.  The aim of their presentation was to show that more attention should be put forth towards BIOS security.

When conducting application security reviews, threat modeling is used to understand the overall system and identify potential weaknesses in its security posture. The security techniques used to address those weaknesses also rely on some root of trust, be that a CA or the underlying local host/OS. This presentation highlights that when your root of trust is the local host and you are the victim of a targeted attack, the security measures you defined may be inadequate. Using defense-in-depth techniques along with other standard security best practices when designing your system can help minimize the impact of such attacks – for instance, using service-to-service authentication mechanisms that have an expiry, are least-privileged, and limit, on the server side, the source locations from which a client may connect, so that if a host is compromised in this way, the service authentication token is not useful from an external network.
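
As one sketch of such a measure, a short-lived, HMAC-signed service-to-service token limits the value of anything stolen from a compromised host: the token below (a minimal, hypothetical design; shared-key distribution is out of scope here) is bound to a single service name and expires within minutes.

```python
import base64
import hashlib
import hmac
import time

def issue_token(shared_key: bytes, service: str, ttl_s: int = 300) -> str:
    expiry = int(time.time()) + ttl_s
    msg = f"{service}:{expiry}".encode()
    sig = hmac.new(shared_key, msg, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(msg + b"." + sig).decode()

def verify_token(shared_key: bytes, token: str, service: str) -> bool:
    raw = base64.urlsafe_b64decode(token)
    msg, _, sig = raw.rpartition(b".")
    expected = hmac.new(shared_key, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return False
    name, _, expiry = msg.decode().partition(":")
    # Reject tokens issued for other services or past their expiry window.
    return name == service and time.time() < int(expiry)
```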

Todd Baumeister
Web Security Researcher

Adobe “Hack Days”

Within the ASSET team, we apply different techniques to keep our skills sharp. One technique we use periodically is to hold a “hack day,” which is an all-day event for the entire team. They are similar in concept to developer hack days but they are focused on security. These events are used to build security innovation and teamwork.

As in many large organizations, there are always side projects that everyone would like to work on, but they can be difficult to complete, or even research, when the work has to be squeezed in between meetings and emails. The “free time” approach can be challenging, depending on the state of your projects and how much they eat into that free time. Therefore, we set aside one day every few months where the team is freed from all distractions and given the chance to focus on something of their choice for an entire day. That focus can generate a wealth of insight that more than compensates for the time investment.

There are very few rules for hack day. The general guidelines are:

  • The work has to be technical.
  • Researchers can work individually or as teams.
  • The work does not have to be focused on normally assigned duties. For instance, researchers can target any product or service within the company, regardless of whether they are the researcher assigned to it.
  • The Adobe world should be better at the end of the day. This can be a directly measurable achievement (e.g., new bugs, new tools, etc.). It can also be an indirect improvement, such as learning a new security skill set.
  • Researchers work from the same room(s) so that ideas can be exchanged in real time.
  • Lunch is provided so that people can stay together and share ideas and stories while they take a break from working.

Researchers are allowed to pick their own hack day projects. This is to encourage innovation and creative thinking. If a researcher examines someone else’s product or service, it can often provide a valuable third-party perspective. The outcomes from our hack days have included:

  • Refreshing the researcher’s experience with the existing security tools they are recommending to teams.
  • Trying out a new technique a researcher learned about at a conference but never had the chance to practically apply.
  • Allowing the team time to create a tool they have been meaning to build.
  • Allowing the team to dig into a product or service at a deeper level than specs alone, obtaining a broader view than spot reviews of code can provide. This helps the researcher give more informed advice going forward.
  • Providing an opportunity for researchers to challenge their assumptions about the product or service through empirical analysis.
  • And of course, team building.

A good example of the benefits of hack days comes from a pair of researchers who decided to perform a penetration test on an existing service. This service had already gone through multiple third-party consultant reviews and typically received a good health report. Therefore, the assumption was that it would be a difficult target because all the low-hanging fruit had already been eliminated. Nonetheless, the team was able to find several significant security vulnerabilities just from a single day’s effort.

While the individual bugs were interesting, what was more interesting was working out why the assumption of a low yield was wrong. This led to a re-evaluation of the current process for that service. Should we rotate the consultancy? Were the consultants focused on the wrong areas? Why did the existing internal process fail to catch the bugs? How do we fix this going forward? This kind of insight, and the questions it prompted, more than justified a day of effort, and it was a rewarding find for the researchers involved.

With a mature application, experienced penetration testers often average less than one bug a day. Therefore, the hack day team may finish the day without finding any. But finding bugs is not the ultimate goal of a hack day. Rather, the goal is to allow researchers to gain a deeper understanding of the tools they recommend, the applications they work with, and the skills they want to grow. We have learned that a creative approach to security skill building is necessary for organizations, especially ones that have a wide product portfolio.

Given the outcomes we have achieved, a one-day investment is a bargain. While the team has the freedom to work on these things at any time, setting aside official days to focus solely on these projects helps accelerate innovation and research—and that’s of immense value to any organization. Hack days help our security team stay up to speed with Adobe’s complex technology stacks that vary across the company.  So if your organization feels trapped in the daily grind of tracking issues, consider a periodic hack day of your own to help your team break free of routine and innovate.

Peleus Uhley
Lead Security Strategist

“Hacker Village” at Adobe Tech Summit

During Adobe’s Tech Summit, hundreds of people from across the company visited the Hacker Village. Adobe Tech Summit is an annual gathering of product development teams from across all businesses and geographies. We get together to share best practices, information about the latest tools and techniques, and innovations to both inspire and educate.

The Hacker Village was designed to teach about the various attack types that could target our software and services. It consisted of six booths, each focused on a specific attack (cross-site scripting, SQL injection, etc.) or security-related topic.

The booths were designed to demonstrate a particular attack and give visitors the opportunity to try the attack for themselves, including attacks on web applications, cryptography, computer systems, and more. For instance, the RFID booth was designed to demonstrate how a potential attacker can steal information from RFID cards. Upon visiting the information booth, visitors chose an RFID card that represented a super hero and were told to keep it hidden. Unbeknownst to our visitors, we had a volunteer RFID thief carry a high-powered RFID reader concealed in a messenger bag. By getting within two feet of a card, he was able to successfully steal the information from the RFID card and display which super hero RFID cards had been compromised.

In the Wi-Fi booth, visitors learned how susceptible wireless networks are to attack. Lab participants were able to see which access points their own mobile devices had connected to in the past, by intercepting the probe requests the devices sent. The wireless drone introduced visitors to the concept of war flying – mapping out wireless networks from the air. At the other booths, visitors successfully exploited SQL injection using sqlmap, cross-site scripting using BeEF, system hacking using Armitage, and password cracking using John the Ripper.
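
The probe-request demonstration is straightforward to reproduce. Here is a rough Scapy sketch (the interface name is a placeholder, the card must be in monitor mode, and sniffing requires root): idle devices broadcast probe requests naming networks they have previously joined, so a passive listener learns each device’s Wi-Fi history.

```python
from scapy.all import sniff
from scapy.layers.dot11 import Dot11Elt, Dot11ProbeReq

def show_probe(pkt) -> None:
    if pkt.haslayer(Dot11ProbeReq) and pkt.haslayer(Dot11Elt):
        ssid = pkt[Dot11Elt].info.decode(errors="replace")
        if ssid:  # broadcast probes carry an empty SSID
            print(f"{pkt.addr2} is looking for network {ssid!r}")

# "wlan0mon" is a placeholder for a wireless interface in monitor mode.
sniff(iface="wlan0mon", prn=show_probe, store=False)
```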

By completing one or more of the labs, the participants had the opportunity to take home their very own suite of hacker tools.

The Hacker Village was a huge success. In just a three-hour time frame, it drew more than 325 visitors and 225 lab participants. Most of the participants completed multiple labs, with dozens visiting all six booths. The feedback was positive, and many people showed a strong interest in security after visiting one or more of the booths.

Taylor Lobb
Senior Security Analyst

Automating Credential Management

Every enterprise maintains a set of privileged accounts for a variety of use cases. They are essential to creating new builds, configuring application and database servers, and accessing various parts of the infrastructure at run time. These privileged accounts and passwords can be extremely powerful weapons in the hands of attackers, because they open access to critical systems and the sensitive information that resides on them. Moreover, stealing credentials is often seen as a way for cybercriminals to hide in plain sight, since it appears to be legitimate access to the system.

In order to support the scaling of our product development, we need to ensure that our environments remain secure while they grow to meet the increasing demands of our customers. For us, the best way to achieve this is to enforce security at each layer and rely on automation to maintain security controls regardless of scaling needs. Tooling is an important part of enabling this automation, and password management solutions come to our aid here. Using a common tool for credential management is one method Adobe uses to help secure our environment.

Proper password management also makes deployments more flexible. For example, we ensure that the access key and API key needed to authenticate to the backup database are not stored on the application server. As a defense-in-depth mechanism, we store the keys in a password manager and pull them at run time, when the backup script on the server is executed. This way the keys live in one central location rather than being scattered across individual machines as we scale our application servers. Rotating these credentials becomes easier, and we can readily confirm that there are no cached credentials or misconfigured machines in the environment. We can also maintain a changeable whitelist of the application servers that may access the password manager, preventing access to the credentials from any IP address we do not trust.
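
As a sketch of the pattern (the endpoint, secret names, and token variable are hypothetical; real password managers expose similar but product-specific APIs), a backup script can pull its keys at run time instead of reading them from disk.

```python
import os
from typing import Tuple

import requests

# Hypothetical password-manager endpoint; its server-side whitelist only
# answers requests from known application-server IPs.
VAULT_URL = "https://passwordmanager.example.com/api/v1/secrets"

def fetch_secret(name: str) -> str:
    resp = requests.get(
        f"{VAULT_URL}/{name}",
        headers={"Authorization": f"Bearer {os.environ['VAULT_TOKEN']}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["value"]

def load_backup_credentials() -> Tuple[str, str]:
    """Fetch keys at run time; nothing is cached on the server's disk."""
    return (
        fetch_secret("backup-db-access-key"),
        fetch_secret("backup-db-api-key"),
    )
```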

If an attacker were able to access build machines, they could create malicious binaries that appear to be signed by a legitimate source. This could enable the attacker to distribute malware to unsuspecting victims very easily. We use two major functions of commercially available password managers to help secure our build environment. First, we leverage the credential management function to avoid having credentials on any of our build servers. The goal here is similar to the use case above: we want to keep all keys off the servers, retrieving them only at run time. To support this, we’ve had to build an extensive library for the client-side components that need to pull credentials. This library allows us to constantly provision new virtual machines with a secure configuration and a robust communication channel to the credential manager. Adapting tooling in this way to suit our needs has been a recurring theme in our effort to find solutions to deployment challenges.

Second, our build environment uses the remote-access functionality provided by password managers, which allows users to open a remote session to a target machine using the password manager as a proxy. We ensure that this is the only mechanism by which engineers can access machines, and we maintain video recordings of the actions executed on the target machine. This gives us a clear audit trail of who accessed the machine, what they did, and when they logged out. Also, since the password manager initiates the remote session and handles authentication to the machine, none of the users or admins need to know the actual passwords. This prevents passwords from being written down and shared – and it becomes seamless to change them as needed.

Credential management has become a challenge primarily because of the sheer number of passwords and keys out there. Given some of our use cases, we’ve found that commercially available password management tools can help make deployments easier in the long term. Adobe is a large organization with unique products built on very different platforms – having a central location for password management can help solve some of the challenges we face as a services company. As we expand each service, we will continue to adapt our usage of tools like these so that we can help keep our infrastructure safe and provide a more secure experience to all our customers.

Pranjal Jumde and Rajat Shah
ASSET Security Researchers