Adobe @ the Women in Cybersecurity Conference (WiCyS)

Adobe sponsored the recent Women in Cybersecurity (WiCyS) conference held in Atlanta, Georgia. Alongside two of my colleagues, Julia Knecht and Kim Rogers, I had the opportunity to attend the conference and meet the many talented women in attendance.

The overall enthusiasm at the conference was incredibly positive. From the presentations and keynotes to the hallway conversations in between, discussion focused on spreading knowledge about the information security sector and on the even larger need for more resources in the industry. That need dovetailed into the many programs and recruiting efforts that help women and minorities who are focused on security enter and stay in the field. It was very inspiring to see so many women interested in and working in security.

One of the first keynotes, presented by Jenn Lesser Henley, Director of Security Operations at Facebook, immediately set the inspiring tone of the conference with a motivational presentation that debunked the myths of why people don’t see security as an appealing job field. She noted the need for better ‘stock images,’ which currently portray those in security working alone in a dark room at a computer, wearing a balaclava, which is of course very far from the collaborative, engaging environment in which security work actually happens. The security field is so vast and growing in so many directions that the variety of jobs, skills, and people needed to meet this growth is as exciting as it is challenging. Jenn addressed the diversity gap of women and minorities in security and challenged the audience to take action in reducing that gap immediately. To do so, she encouraged women and minorities to dispel the unappealing aspects of the cyber security field by surrounding themselves with the needed support, a personal cheerleading team, in order to approach each day with an awesome attitude.

Representation of attendees seemed equally split across industry, government, and academia. All of us participating in the Career and Graduate School Fair shared a common goal: to enroll and/or hire the many talented women and minorities into the cyber security field, no matter the company, organization, or university. My advice to many attendees was simple: apply, apply, apply.

Other notable keynote speakers included:

  • Sherri Ramsay of CyberPoint who shared fascinating metrics on cyber threats and challenges, and her thoughts on the industry’s future. 
  • Phyllis Schneck, the Deputy Under Secretary for Cybersecurity and Communications at the Department of Homeland Security, who spoke to the future of DHS’ role in cybersecurity and the goal to further build a national capacity to support a more secure and resilient cyberspace.  She also gave great career advice to always keep learning and keep up ‘tech chops’, to not be afraid to experiment, to maintain balance and find more time to think. 
  • Angela McKay, Director of Cybersecurity Policy and Strategy at Microsoft, spoke about the need for diverse perspectives and experiences to drive cyber security innovations.  She encouraged women to recognize the individuality in themselves and others, and to be adaptable, versatile and agile in changing circumstances, in order to advance both professionally and personally. 

Finally, alongside Julia Knecht from our Digital Marketing security team, I presented a workshop on “Security Management in the Product Lifecycle.” We discussed how to build and reinforce a security culture in order to keep a healthy security mindset across a company, an organization, and throughout one’s career path. Using our own experiences working on security at Adobe, we engaged in a great discussion with the audience on what security programs and processes to put into place that advocate, create, establish, encourage, inspire, prepare, drive, and connect us to the ever-evolving field of security. Moreover, we emphasized the importance of communicating about security both internally within an organization and externally with the security community. This promotes a collaborative, healthy forum for security discussion and encourages more people to engage and become involved.

All around, the conference was incredibly inspiring and a great stepping stone to help attract more women and minorities to the cyber security field.

Wendy Poland
Product Security Group Program Manager

Re-assessing Web Application Vulnerabilities for the Cloud

As I have been working with our cloud teams, I have found myself constantly reconsidering my legacy assumptions from my Web 2.0 days. I discussed a few of the high-level ones previously on this blog. For OWASP AppSec California in January, I decided to explore the impact of the cloud on Web application vulnerabilities. One of the assumptions that I had going into cloud projects was that the cloud was merely a hosting provider layer issue that only changed how the servers were managed. The risks to the web application logic layer would remain pretty much the same. I was wrong.

One of the things that kicked off my research in this area was watching Andres Riancho’s “Pivoting in Amazon Clouds,” talk at Black Hat last year. He had found a remote file include vulnerability in an AWS hosted Web application he was testing. Basically, the attacker convinces the Web application to act as a proxy and fetch the content of remote sites. Typically, this vulnerability could be used for cross-site scripting or defacement since the attacker could get the contents of the remote site injected into the context of the current Web application. Riancho was able to use that vulnerability to reach the metadata server for the EC2 instance and retrieve AWS configuration information. From there, he was able to use that information, along with a few of the client’s defense-in-depth issues, to escalate into taking over the entire AWS account. Therefore, the possible impacts for this vulnerability have increased.
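
As an illustrative sketch only (the handler and helper names here are hypothetical, not code from the talk), the vulnerable "proxy" pattern and one possible mitigation look roughly like this in Python:

```python
from urllib.parse import urlparse

# EC2 serves instance metadata (including temporary IAM credentials) at a
# link-local address reachable only from the instance itself.
METADATA_HOST = "169.254.169.254"

def handle_fetch(url: str) -> str:
    """Hypothetical vulnerable endpoint: fetches whatever URL the client
    supplies. A real handler would return something like requests.get(url).text;
    an attacker can point it at
    http://169.254.169.254/latest/meta-data/iam/security-credentials/
    and read the instance role's credentials through the proxy."""
    return "would fetch: " + url

def is_forbidden_target(url: str) -> bool:
    """One mitigation: refuse link-local/metadata addresses before fetching."""
    host = urlparse(url).hostname or ""
    return host == METADATA_HOST or host.startswith("169.254.")
```

Hostname filtering alone is not a complete defense (redirects and DNS tricks can evade it), but it illustrates why the fetch target must be treated as untrusted input.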

The cloud also involves migration to a DevOps process. In the past, a network layer vulnerability led to network layer issues, and a Web application layer vulnerability led to Web application issues. Today, since the scope of these roles overlaps, a breakdown in the DevOps process means network layer issues can impact Web applications.

One vulnerability making the rounds recently is a vulnerability dealing with breakdowns in the DevOps process. The flow of the issue is as follows:

  1. The app/product team requests an S3 bucket called my-bucket.s3.amazonaws.com.
  2. The app team requests the IT team to register the my-bucket.example.org DNS name, which will point to my-bucket.s3.amazonaws.com, because a custom corporate domain will make things clearer for customers.
  3. Time elapses, and the app team decides to migrate to my-bucket2.s3.amazonaws.com.
  4. The app team requests from the IT team a new DNS name (my-bucket2.example.org) pointing to this new bucket.
  5. After the transition, the app team deletes the original my-bucket.s3.amazonaws.com bucket.

This all sounds great. Except, in this workflow, the application team didn’t inform IT, and the original DNS entry was not deleted. An attacker can now register my-bucket.s3.amazonaws.com for their malicious content. Since the my-bucket.example.org DNS name still points there, the attacker can convince end users that their malicious content is example.org’s content.

This exploit is a defining example of why DevOps needs to exist within an organization. The flaw in this situation is a disconnect between the IT/Ops team that manages the DNS server and the app team that manages the buckets. The result of this disconnect can be a severe Web application vulnerability.
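
One lightweight guard against this failure mode is a periodic audit that flags corporate DNS names whose S3 targets no longer exist. The sketch below is hypothetical (the HTTP fetch itself is left out); the marker strings are the ones S3 returns in its error page for a missing bucket:

```python
# Markers S3 returns when a requested bucket does not exist; a DNS name that
# still resolves but yields one of these is a takeover candidate.
S3_MISSING_BUCKET_MARKERS = ("NoSuchBucket", "The specified bucket does not exist")

def looks_dangling(response_body: str) -> bool:
    """True if the response suggests the CNAME's target bucket is gone."""
    return any(marker in response_body for marker in S3_MISSING_BUCKET_MARKERS)

def audit(records):
    """records: iterable of (dns_name, response_body) pairs, e.g. collected by
    fetching the root page of every corporate CNAME that points at S3."""
    return [name for name, body in records if looks_dangling(body)]
```

Running a check like this on a schedule closes the gap left when the app team forgets to tell IT that a bucket was deleted.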

Many cloud migrations also involve switching from SQL databases to NoSQL databases. In addition to following the hardening guidelines for the respective databases, it is important to look at how developers are interacting with these databases.

Along with new NoSQL databases come a host of new methods for applications to interact with them. For instance, the Unity JDBC driver allows you to create traditional SQL statements for use with the MongoDB NoSQL database. Developers also have the option of using REST frontends for their database. It is clear that a security researcher needs to know how an attacker might inject into the statements for their specific NoSQL server. However, it is also important to look at the way the application sends NoSQL statements to the database, as that might add additional attack surface.
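
To make that extra attack surface concrete, here is a hedged Python sketch (the function names are illustrative) of the classic operator-injection pattern: a frontend that deserializes request JSON straight into a MongoDB-style query document lets the client supply a query operator where the application expected a plain string.

```python
def build_login_query(username, password):
    # A frontend that passes deserialized JSON straight through lets the
    # client control the *type* of these values, not just their text.
    return {"username": username, "password": password}

def build_login_query_checked(username, password):
    # Enforcing scalar types turns {"$gt": ""} (an operator that matches any
    # value) back into a request the application can safely reject.
    for value in (username, password):
        if not isinstance(value, str):
            raise TypeError("expected a string, got " + type(value).__name__)
    return {"username": username, "password": password}

# An attacker posts {"username": "admin", "password": {"$gt": ""}} and the
# operator flows into the query document unchanged:
assert build_login_query("admin", {"$gt": ""})["password"] == {"$gt": ""}
```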

NoSQL databases can also take existing risks and put them in a new context. For instance, in the context of a webpage, an eval() call results in cross-site scripting (XSS). In the context of MongoDB’s server-side JavaScript support, a malicious injection into eval() could allow server-side JavaScript injection (SSJI). Therefore, developers who choose not to disable JavaScript support need to be trained on JavaScript injection risks when using statements like eval() and $where or when using a DB driver that exposes the Mongo shell. Existing JavaScript training on eval() would need to be modified for the database context, since the MongoDB implementation is different from the browser version.
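
As a hedged sketch (not code from any particular product), the difference between string-building a $where clause and using an operator-based filter looks like this, using pymongo-style query documents:

```python
def find_user_unsafe(username: str) -> dict:
    # Concatenating input into server-side JavaScript: a username like
    #   a' || '1'=='1
    # turns the predicate into one that matches every document, and richer
    # payloads can run attacker-controlled JavaScript on the database server.
    return {"$where": "this.username == '" + username + "'"}

def find_user_safe(username: str) -> dict:
    # Operator-based filter: the input stays in data position and is never
    # evaluated as code.
    return {"username": username}

payload = "a' || '1'=='1"
assert payload in find_user_unsafe(payload)["$where"]    # input lands in code
assert find_user_safe(payload) == {"username": payload}  # input stays data
```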

My original assumption that a cloud migration was primarily an infrastructure issue was false. While many of these vulnerabilities were always present and always severe, the migration to cloud platforms and processes means these bugs can manifest in new contexts, with new consequences. Existing recommendations will need to be adapted for the cloud. For instance, many NoSQL databases do not support the concept of prepared statements, so alternative defensive methods will be required. If your team is migrating an application to the cloud, it is important to revisit your threat model approach for the new deployment context.

Peleus Uhley
Lead Security Strategist

Adobe @ CanSecWest 2015

Along with other members of the ASSET team, I recently attended CanSecWest 2015, an annual security conference held in Vancouver, Canada. Pwn2Own is co-located with CanSecWest in the same venue (a summary of this year’s results can be found here). This was my first time attending CanSecWest, and I enjoyed the single-track style of the conference (it reminded me of the IEEE Symposium on Security and Privacy, which is also a small, single-track conference, though more academic in content).

Overall, there were some great presentations. “I see, therefore I am… You,” presented by Jan “starbug” Krissler of T-Labs/CCC (abstract listed here), detailed techniques for using high-resolution images to authenticate to biometric systems, such as fingerprint readers, iris scanners, and facial recognition systems. Given the advancements in high-resolution cameras, the necessary base images can even be taken from a distance. One can also use high-resolution still images, such as political campaign posters, or high-resolution video. Using such images, in some cases one can directly authenticate to the biometric system. In one example, the facial recognition software required the user to blink or move before unlocking the system (presumably to avoid unlocking for still images); however, Jan found that if you hold a printed image of the user’s face in front of the camera and simply swipe a pencil down and up across the face, the system will unlock. Overall, this presentation was insightful, engaging, and generally amusing. It highlights that more effort needs to be placed on improving the security of biometric systems and that they are not yet ready to be relied upon solely for authentication. I recommend that those interested in biometric security watch this presentation once the recording is available (NOTE: there is one slide that some may find objectionable).

The last day of the conference had multiple talks about BIOS and UEFI security. The day was kicked off with the presentation “How many million BIOSes would you like to infect?” by Corey Kallenberg and Xeno Kovah of LegbaCore (abstract listed here, slides available here). They showed how their “LightEater” System Management Mode (SMM) malware implant could operate with very high privilege and read everything from memory in a manner undetectable to the OS. They demonstrated this live on multiple laptops, including a “military grade” MSI system that was running Tails via live boot. This could be used to steal GPG keys, passwords, or decrypted messages. They also showed how Serial-over-LAN could be used to exfiltrate data, including the ability to encrypt the data so as to bypass intrusion detection systems looking for certain signatures to identify this type of exploit. Their analysis showed that UEFI systems share similar code, meaning that many BIOSes are vulnerable to being hooked and implanted with LightEater. The aim of their presentation was to show that more attention should be paid to BIOS security.

When conducting application security reviews, threat modeling is used to understand the overall system and identify potential weaknesses in its security posture. The security techniques used to address those weaknesses also rely on some root of trust, be that a CA or the underlying local host/OS. This presentation highlights that when your root of trust is the local host and you are the victim of a targeted attack, the security measures you defined may be inadequate. Using defense-in-depth techniques along with other standard security best practices when designing your system can help minimize the impact of such attacks (for instance, using service-to-service authentication mechanisms that have an expiry, are least privileged, and limit, on the server side, the source location of the client, so that if the host is compromised in this way, the service authentication token is not useful from an external network).
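
As a rough sketch of the kind of expiring service-to-service credential mentioned above (the scheme and names here are illustrative, not a description of any Adobe system), an HMAC-signed token with an embedded expiry can be built entirely from the standard library:

```python
import hashlib
import hmac
import time

def mint_token(secret: bytes, service: str, ttl_seconds: int = 300) -> str:
    """Issue a token naming the calling service, valid for ttl_seconds."""
    expires = str(int(time.time()) + ttl_seconds)
    msg = (service + "|" + expires).encode()
    sig = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return service + "|" + expires + "|" + sig

def verify_token(secret: bytes, token: str, now: float = None) -> bool:
    """Check the signature first, then the expiry; reject anything malformed."""
    try:
        service, expires, sig = token.rsplit("|", 2)
    except ValueError:
        return False
    msg = (service + "|" + expires).encode()
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    if now is None:
        now = time.time()
    return now < int(expires)
```

A stolen token of this form is only useful until its expiry, and only against a verifier holding the shared secret, which limits what an attacker on a compromised host gains.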

Todd Baumeister
Web Security Researcher

Adobe “Hack Days”

Within the ASSET team, we apply different techniques to keep our skills sharp. One technique we use periodically is to hold a “hack day,” which is an all-day event for the entire team. They are similar in concept to developer hack days but they are focused on security. These events are used to build security innovation and teamwork.

As in many large organizations, there are always side projects that everyone would like to work on. But they can be difficult to complete or even research when the work has to be squeezed in-between meetings and emails. The “free time” approach can be challenging depending on the state of your projects and how much they eat into the “free time.”  Therefore, we set aside one day every few months where the team is freed from all distractions and given the chance to focus on something of their choice for an entire day. That focus can generate a wealth of insight that more than compensates for the time investment. We have learned that sometimes a creative approach to security skill building is necessary for organizations that have a wide product portfolio.

There are very few rules for hack day. The general guidelines are:

  • The work has to be technical.
  • Researchers can work individually or as teams.
  • The work does not have to be focused on normally assigned duties. For instance, researchers can target any product or service within the company, regardless of whether they are the researcher assigned to it.
  • The Adobe world should be better at the end of the day. This can be a directly measurable achievement (e.g., new bugs, new tools, etc.). It can also be an indirect improvement, such as learning a new security skill set.
  • Researchers work from the same room(s) so that ideas can be exchanged in real time.
  • Lunch is provided so that people can stay together and share ideas and stories while they take a break from working.

Researchers are allowed to pick their own hack day projects. This is to encourage innovation and creative thinking. If a researcher examines someone else’s product or service, it can often provide a valuable third-party perspective. The outcomes from our hack days have included:

  • Refreshing the researcher’s experience with the existing security tools they are recommending to teams.
  • Trying out a new technique a researcher learned about at a conference but never had the chance to practically apply.
  • Allowing the team time to create a tool they have been meaning to build.
  • Allowing the team to dig into a product or service at a deeper level than just specs and obtain a broader view than what is gained by spot reviews of code. This helps the researcher give more informed advice going forward.
  • Providing an opportunity for researchers to challenge their assumptions about the product or service through empirical analysis.
  • And of course, team building.

A good example of the benefits of hack days comes from a pair of researchers who decided to perform a penetration test on an existing service. This service had already gone through multiple third-party consultant reviews and typically received a good health report. Therefore, the assumption was that it would be a difficult target because all the low-hanging fruit had already been eliminated. Nonetheless, the team was able to find several significant security vulnerabilities just from a single day’s effort.

While the individual bugs were interesting, what was more interesting was trying to identify why their assumption that the yield would be low was wrong. This led to a re-evaluation of the current process for that service. Should we rotate the consultancy? Were the consultants focused on the wrong areas? Why did the existing internal process fail to catch the bugs? How do we fix this going forward? This kind of insight, and the questions it prompted, more than justified a day of effort, and it was a rewarding find for the researchers involved.

With a mature application, experienced penetration testers often average less than one bug a day. Therefore, the hack day team may finish the day without finding any. But finding bugs is not the ultimate goal of a hack day. Rather, the goal is to allow researchers to gain a deeper understanding of the tools they recommend, the applications they work with, and the skills they want to grow.

Given the outcomes we have achieved, a one-day investment is a bargain. While the team has the freedom to work on these things at any time, setting aside official days to focus solely on these projects helps accelerate innovation and research—and that’s of immense value to any organization. Hack days help our security team stay up to speed with Adobe’s complex technology stacks that vary across the company.  So if your organization feels trapped in the daily grind of tracking issues, consider a periodic hack day of your own to help your team break free of routine and innovate.

Peleus Uhley
Lead Security Strategist

“Hacker Village” at Adobe Tech Summit

During Adobe’s Tech Summit, hundreds of people from across the company visited the Hacker Village. Adobe Tech Summit is an annual gathering of product development teams from across all businesses and geographies. We get together to share best practices, information about the latest tools and techniques, and innovations to both inspire and educate.

The Hacker Village was designed to teach about the various attack types that could target our software and services. It consisted of six booths, each focused on a specific attack (cross-site scripting, SQL injection, etc.) or security-related topic.

The booths were designed to demonstrate a particular attack and give visitors the opportunity to try the attack for themselves, including attacks on web applications, cryptography, computer systems, and more. For instance, the RFID booth was designed to demonstrate how a potential attacker can steal information from RFID cards. Upon visiting the information booth, visitors chose an RFID card that represented a super hero and were told to keep it hidden. Unbeknownst to our visitors, we had a volunteer RFID thief carry a high-powered RFID device concealed in a messenger bag. By getting within two feet of a card, he was able to successfully steal the information from the RFID card and display which super hero RFID cards had been compromised.

In the Wi-Fi booth, visitors learned how susceptible wireless networks are to attack. Lab participants were able to see which access points their own mobile devices had connected to in the past by intercepting the probe requests the devices sent. The wireless drone introduced visitors to the concept of war flying – mapping out wireless networks. At the other booths, visitors successfully exploited SQL injection using sqlmap, cross-site scripting using BeEF, and system hacking using Armitage, and cracked passwords using John the Ripper.

By completing one or more of the labs, the participants had the opportunity to take home their very own suite of hacker tools.

The Hacker Village was a huge success. In just a three-hour time frame, it had more than 325 visitors and 225 lab participants. Most of the participants completed multiple labs, with dozens visiting all six booths. The feedback was positive, and many people showed a strong interest in security after visiting one or more of the booths.

Taylor Lobb
Senior Security Analyst

Automating Credential Management

Every enterprise maintains a set of privileged accounts for a variety of use cases. They are essential to creating new builds, configuring application and database servers, and accessing various parts of the infrastructure at run-time. These privileged accounts and passwords can be extremely powerful weapons in the hands of attackers, because they open access to critical systems and the sensitive information that resides on them. Moreover, stealing credentials is often seen as a way for cybercriminals to hide in plain sight, since it appears to be legitimate access to the system.

In order to support the scaling of our product development we need to ensure that our environments remain secure while they grow to meet the increasing demands of our customers. For us, the best way to achieve this is by enforcing security at each layer, and relying on automation to maintain security controls regardless of scaling needs. Tooling is an important part of enabling this automation, and password management solutions come to our aid here.  Using a common tool for credential management is one method Adobe uses to help secure our environment. Proper password management helps make deployments more flexible.  We ensure that the access key and API key needed to authenticate to the backup database is not stored on the application server. As a defense-in-depth mechanism, we store the keys in a password manager and pull them at run time when the backup script on the server is executed.  This way we can have the keys in one central location rather than being scattered on individual machines when we scale our application servers. Rotating these credentials becomes easier and we can easily confirm that there are no cached credentials or misconfigured machines in the environment.  We can also maintain a changeable whitelist of the application servers that need to access the password manager, preventing access to the credentials from any IP address that we do not trust.
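
The pattern described above can be sketched as follows; the VaultClient here is a stand-in for whatever commercial password manager is in use, and all names and keys are illustrative:

```python
class VaultClient:
    """Hypothetical credential-manager client; a real deployment would call a
    commercial password manager's API over an authenticated channel."""

    def __init__(self, secrets, allowed_ips):
        self._secrets = secrets                # name -> credential
        self._allowed_ips = set(allowed_ips)   # changeable whitelist

    def get(self, name, caller_ip):
        # Refuse requests from any address we do not trust.
        if caller_ip not in self._allowed_ips:
            raise PermissionError("caller %s is not on the whitelist" % caller_ip)
        return self._secrets[name]

def run_backup(vault, caller_ip):
    # The backup script pulls keys at run time; nothing is stored on disk on
    # the application server, so scaling out does not scatter credentials.
    access_key = vault.get("backup-db/access-key", caller_ip)
    api_key = vault.get("backup-db/api-key", caller_ip)
    return {"access_key": access_key, "api_key": api_key}
```

Because the credentials live only in the manager and in memory during the job, rotating them means updating one central record rather than touching every server.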

If an attacker were able to access build machines, they could create malicious binaries that would appear to be signed by a legitimate source. This could enable the attacker to distribute malware to unsuspecting victims very easily. We use two major functions of commercially available password managers to help secure our build environment. We leverage the credential management solution in order to avoid having credentials on any of our build servers. The goal here is similar to the use case above, where we want to keep all keys off the servers, only retrieving them at run-time. In order to support this, we’ve had to build an extensive library for the client-side components that need to pull credentials. This library allows us to constantly provision new virtual machines with a secure configuration and a robust communication channel with the credential manager. Adapting tooling in this way to suit our needs has been a recurring theme in our effort to find solutions to deployment challenges.

Our build environment also uses the remote access functionality provided by password managers, which allows users to open a remote session to the target machine using the password manager as a proxy. We ensure that this is the only mechanism by which engineers can access machines, and we maintain video recordings of the actions executed on the target machine. This gives us a clear audit trail of who accessed the machine, what they did, and when they logged out. Also, since we initiate the remote session, none of the users or admins need to know the actual passwords, since the password manager handles authentication to the machine. This prevents passwords from being written down and shared, and it also becomes seamless to change them as needed.

Credential management has become a challenge primarily because of the sheer number of passwords and keys out there. Given some of our use-cases we’ve found commercially available password management tools can help make deployments easier in the long-term.  Adobe is a large organization with unique products that have very different platforms – having a central location for dealing with password management can help solve some of the challenges that we face as a services company.  As we look to expand each service, we will continue to adapt our usage of tools like these so that we can help keep our infrastructure safe and provide a more secure experience to all our customers.

Pranjal Jumde and Rajat Shah
ASSET Security Researchers

Adobe @ NullCon Goa 2015

The ASSET team in Noida recently attended NullCon, a well-known Indian information security conference held in Goa. My team and I attended trainings on client-side security, malware analysis, mobile pen-testing, and fuzzing, delivered by industry experts in their respective fields. A training I found particularly helpful was the one on client-side security by Mario Heiderich. It revealed several interesting aspects of browser parsing engines. Mario showed various ways XSS protections can be defeated and how using modern JavaScript frameworks like AngularJS can also expand the attack surface. This knowledge can help us build better protective “shields” for web applications.

Of the two night talks, the one I found most interesting was on the Google fuzzing framework. The speaker, Abhishek Arya, discussed how fuzz testing for Chrome is scaled using a large, automated infrastructure that reveals exploitable bugs with minimal human intervention. During the main conference, I attended a couple of good talks on topics such as the “sandbox paradox,” an attacker’s perspective on ECMAScript 2015, drone attacks, and the Cuckoo sandbox. James Forshaw’s talk on sandboxing was of particular interest, as it provided useful knowledge about Windows sandboxes that utilize special APIs and how they can be improved. Another beneficial session was Jurriaan Bremer’s on the Cuckoo sandbox, where he demonstrated how his tool can be used to automate the analysis of malware samples.

Day 2 started with keynote sessions from Paul Vixie (Farsight Security) and Katie Moussouris (HackerOne). A couple of us also attended a lock-picking workshop, where we were given picks for some well-known lock types and walked through the process of picking those particular locks. We succeeded in opening quite a few locks. I also played Bug Bash along with Gineesh (EchoSign team) and Abhijeth (IT team), where we were given live targets in which to find vulnerabilities. We found a couple of critical issues, winning our team some nice prize money. :-)

Adobe has been a sponsor of NullCon for several years. At this year’s event, we were seeking suitable candidates for openings on our various security teams. In between talks, we assisted our HR team in the Adobe booth explaining the technical aspects of our jobs to prospective candidates. We were successful in getting many attendees interested in our available positions.

Overall, the conference was a perfect blend of learning, technical discussion, networking, and fun.

Vaibhav Gupta
Security Researcher, ASSET

Information about Adobe’s Certification Roadmap now available!

At Adobe, we take the security of your data and digital experiences seriously. To this end, we have implemented a foundational framework of security processes and controls to protect our infrastructure, applications, and services and to help us comply with a number of industry-accepted best practices, standards, and certifications. This framework is called the Adobe Common Controls Framework (CCF). One of the goals of CCF is to provide clear guidance to our operations, security, and development teams on how to secure our infrastructure and applications. We analyzed the criteria for the most common certifications and found a number of overlaps: starting from more than 1,000 requirements across the relevant frameworks and standards, we rationalized them down to about 200 Adobe-specific controls.

Today we have released a white paper detailing CCF and how Adobe is using it to help meet the requirements of important standards such as SOC2, ISO, and PCI DSS among others. CCF is a critical component of Adobe’s overall security strategy. We hope this white paper not only educates on how Adobe is working to achieve these industry certifications, but also provides useful knowledge that is beneficial to your own efforts in achieving compliance with regulations and standards affecting your business.

Never Stop Coding

Several members of Adobe’s security team have taken to the media to offer career advice to aspiring security professionals (you can read more about that here, here, and here). For those interested in security researcher positions, my advice is to never stop coding. This is true whether you are working in an entry-level position or are already a senior researcher.

Within the security industry, it has often been said, “It is easier to teach a developer about security than it is to teach a security researcher about development.” This thought can be applied to hiring decisions. Those trained solely in security can be less effective in a development organization for several reasons.

Often, pure security researchers have seen only the failures in the industry. This leads them to assume vulnerable code is always the product of apathetic or unskilled developers. Since they have never attempted large-scale development, they don’t have a robust understanding of the complex challenges of secure code development. A researcher can’t be effective in a development organization if he or she doesn’t appreciate the challenges faced by the person on the other side of the table.

The second reason is that people with development backgrounds can give better advice. For instance, when NoSQL databases became popular, people quickly mapped the concept of SQL injection to NoSQL injection. At a high level, they are both databases of information and both accept queries for their information. So both can have injections. Therefore, people were quick to predict that NoSQL injection would quickly become as common as SQL injection. At a high level, that is accurate.

SQL injection is prevalent because SQL is a “structured query language”: all SQL databases follow the same basic structured format. If you dig into NoSQL databases, you quickly realize that their query formats vary widely, from SQL-esque queries (Cassandra), to JSON-based queries (MongoDB, DynamoDB), to assembly-esque queries (Redis). This means that injection attacks have to be more customized to the target. However, if you are able to have a coding-level discussion with the developers, you may discover that they are using a database driver which allows them to run traditional SQL queries against a NoSQL database. That could mean traditional SQL injections are also possible against your NoSQL infrastructure. Security recommendations for a NoSQL environment also have to be more targeted. For instance, prepared statements are available in Cassandra but not in MongoDB. This is all knowledge you can learn by digging deep into a subject and experimenting with technologies at a developer level.
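To make the JSON-based case concrete, here is a minimal Python sketch of a MongoDB-style operator injection. The login check and field names are hypothetical, and the queries are plain dictionaries rather than calls to a real driver; the point is how an attacker-supplied operator object can change a query’s meaning.

```python
# Hypothetical sketch: MongoDB-style operator injection.
# A dict/JSON query built directly from user input lets an attacker
# smuggle in query operators instead of plain values.

def build_login_query(username, password):
    # Vulnerable: user-supplied values are dropped into the query as-is.
    return {"username": username, "password": password}

def build_login_query_safe(username, password):
    # Safer: coerce values to strings so operator objects such as
    # {"$gt": ""} cannot alter the query's meaning.
    return {"username": str(username), "password": str(password)}

# A benign login attempt produces the query you would expect:
print(build_login_query("alice", "s3cret"))

# An attacker instead posts JSON such as {"password": {"$gt": ""}}.
# In MongoDB, $gt matches any value greater than the empty string,
# i.e. any non-empty password -- an authentication bypass:
print(build_login_query("alice", {"$gt": ""}))
print(build_login_query_safe("alice", {"$gt": ""}))
```

The sanitized variant is deliberately simplistic; real applications would validate types at the API boundary, but the sketch shows why NoSQL injection advice has to be tailored to the query format in use.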

Lastly, you learn to appreciate how “simple” changes can be more complex than you first imagine. I recently tried to commit some changes to the open-source project, CRITs. While my first commit was functional, I’ve already refactored the code twice in the process of getting it production-ready. The team was absolutely correct in rejecting the changes because the design could be improved. The current version is measurably better than my first rough-sketch proposal. While I don’t like making mistakes in public, these sorts of humbling experiences remind me of the challenges faced by the developers I work with. There can be a fairly large gap between a working design and a good design. This means your “simple recommendation” actually may be quite complex. In the process of trying to commit to the project, I learned a lot more about tools such as MongoDB and Django than I ever would have learned skimming security best practice documentation. That will make me more effective within Adobe when talking to product teams using these tools, since I will better understand their language and concerns. In addition, I am making a contribution to the security community that others may benefit from.

At this point in my career, I am in a senior position, a long way from when I first started over 15 years ago as a developer. However, I still try to find time for coding projects to keep my skills sharp and my knowledge up-to-date. If you look at the people leading the industry at companies such as Google, Etsy, iSec Partners, etc., many are respected because they are also keeping their hands on the keyboards and are speaking from direct knowledge. They not only provide research but also tools to empower others. Whether you are a recent grad or a senior researcher, never lose sight of the code, where it all starts.

Peleus Uhley
Lead Security Strategist

More Effective Threat Modeling

There are a lot of theories about threat models. Their utility often depends on the context and the job to which they are applied. I was asked to speak about threat models at the recent BSIMM Community Conference, which made me formally re-evaluate my thoughts on the matter. Over the years I’ve used threat models in many ways at both the conceptual level and at the application level. In preparing for the conference I first tried to deconstruct the purpose of threat models. Then I looked at the ways I’ve implemented their intent.

Taking a step back to examine their value, you find that with respect to any risk situation you are asking who, what, how, when, and why:

Who is the entity conducting the attack, including nation states, organized crime, and activists.

What is the ultimate target of the attack, such as credit card data.

How is the method by which attackers will get to the data, such as SQL injection.

Why captures the reason the target is important to the attacker. Does the data have monetary value? Or are you just a pool of resources an attacker can leverage in pursuit of other goals?

A threat can be described as who will target what, using how in order to achieve why.
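That sentence template can be captured as a simple record. The following Python sketch is purely illustrative (the class and field names are my own, not part of any formal threat modeling tool), but it shows how the who/what/how/why elements compose into a threat statement:

```python
# Illustrative sketch: a threat record in the
# "who targets what, using how, because why" form.
from dataclasses import dataclass

@dataclass
class Threat:
    who: str   # the attacker, e.g. nation state, organized crime, activist
    what: str  # the target asset, e.g. credit card data
    how: str   # the attack method, e.g. SQL injection
    why: str   # the attacker's motivation, e.g. monetary value

    def describe(self) -> str:
        # Compose the elements into the sentence template from the text.
        return (f"{self.who} will target {self.what} "
                f"using {self.how} because {self.why}.")

t = Threat(
    who="organized crime",
    what="credit card data",
    how="SQL injection",
    why="the data has monetary value",
)
print(t.describe())
```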

We will come back to when in a moment. Threat models typically put most of the focus on what and how. The implicit assumption is that it doesn’t really matter who or why—your focus is on stopping the attack. Focusing on what and how allows you to identify potential bugs that will crop up in the design, regardless of who might be conducting the attack and their motivation.

The challenge with focusing solely on what and how is that they change over time. How is dependent on the specifics of the implementation, which will change as it grows. On the other hand, who and why tend to be fairly constant. Sometimes, focusing on who and why can lead to new ideas for overall mitigations that can protect you better than the point fixes identified by how.

For instance, we knew that attackers using advanced persistent threat (APT) (who) were fuzzing (how) Flash Player (what). To look at the problem from a different angle, we decided to stop and ask why. It wasn’t solely because of Flash Player’s ubiquity. At the time, most Flash Player attacks were being delivered via Office documents. Attackers were focusing on Flash Player because they could embed it in an Office document to conduct targeted spearphishing attacks. Targeted spearphishing is a valuable attack method because you can directly access a specific target with minimal exposure. By adding a Flash Player warning dialogue to alert users of a potential spearphishing attempt in Office, we addressed why Flash Player was of value to them. After that simple mitigation was added, the number of zero-day attacks dropped noticeably.

I also mentioned that when could be useful. Most people think of threat models as a tool for the design phase. However, threat models can also be used in developing incident response plans. You can take any given risk and consider, “When this mitigation fails or is bypassed, we will respond by…”

Therefore, having a threat model for an application can be extremely useful in controlling both high-level (who/why) and low-level threats (how/what). That said, the reality is that many companies have moved away from traditional threat models. Keeping a threat model up-to-date can be a lot of effort in a rapid development environment. Adam Shostack covered many of the common issues with this in his blog post, The Trouble with Threat Modeling. The question each team faces is how to achieve the value of threat modeling using a more scalable method.

Unfortunately, there is no one-size-fits-all solution to this problem. For the teams I have worked with, my approach has been to try to keep the spirit of threat modeling while remaining flexible on the implementation. Threat models can also have different focuses, as Shostack describes in his blog post, Reinvigorate your Threat Modeling Process. Covering all the variants would be too involved for a single post, but here are three general suggestions:

  1. There should be a general high-level threat model for the overall application. This high-level model ensures everyone is headed in the same direction, and it can be updated as needed for major changes to the application. A high-level threat model can be good for sharing with customers, for helping new hires to understand the security design of the application, and as a reference for the security team.
  2. Threat models don’t have to be documented in the traditional threat model format. The traditional format is very clear and organized, but it can also be complex and difficult to document in different tools. The goal of a threat model is to document risks and plans to address them. For individual features, this can be in a simple paragraph form that everyone can understand. Even writing, “this feature has no security implications,” is informative.
  3. Put the information where developers are most likely to find it. For instance, adding a security section to the spec using the simplified format suggested eliminates the need to cross-reference a separate document, helping to ensure that everyone involved will read the security information. The information could also be captured in the user story for the feature. If your code is the documentation, see if your Javadoc tool supports custom tags. If so, you could encourage your developers to use an @security tag when documenting code. If you follow Behavior Driven Development, your threat model can be captured as Cucumber test assertions. Getting this specific means the developer won’t always have the complete picture of how the control fits into the overall design. However, it is important for them to know that the documentation note is there for a specific security reason. If the developer has questions, the security champion can always help them cross-reference it to the overall threat model.
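The Javadoc @security tag and Cucumber assertions in the third suggestion are Java- and Gherkin-specific, but the same idea works in any stack. Here is an analogous Python sketch, with a hypothetical comment-rendering function: a security note lives in the docstring next to the code, and a test documents and enforces the threat-model requirement.

```python
# Sketch: keep the security note next to the code, and capture the
# threat-model requirement as a test assertion (analogous to a Javadoc
# @security tag plus a Cucumber assertion). The function is hypothetical.

import html

def render_comment(text: str) -> str:
    """Render a user comment for display.

    Security: user input must be HTML-escaped here to prevent stored XSS.
    If you have questions, see the "untrusted input" section of the
    overall threat model.
    """
    return "<p>%s</p>" % html.escape(text)

def test_comment_is_escaped():
    # This test documents and enforces the mitigation; if a refactor
    # drops the escaping, the threat-model requirement fails visibly.
    rendered = render_comment("<script>alert(1)</script>")
    assert "<script>" not in rendered

test_comment_is_escaped()
```

The test name itself signals to future developers that the behavior exists for a specific security reason, even if they never open the full threat model document.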

Overall I think the concept of threat modeling still serves a valid purpose. Examining how and what can ensure your implementation is sound, and you can also identify higher level mitigations by examining who and why. The traditional approach to threat modeling may not be the right fit for modern teams, though teams can achieve the goal if they are creative with their implementation of the concept. Along with our development processes, threat modeling must also evolve.

Peleus Uhley
Lead Security Strategist