Creatively Scaling Application Security Coverage and Depth

Secure Product Lifecycle (SPLC)

One of the biggest challenges, and opportunities, for an application security (AppSec) team is to scale effectively. The “shift-left” recommendation for security in the software development life cycle (SDLC) emphasizes early course correction to help bake in security controls and to reduce the potential cost of changes introduced later in the SDLC (Figure 1). Shifting left then entails finding potential security concerns, and the need for security controls, by reviewing artifacts produced in the requirements, architecture, design, and coding phases.

Figure 1: Incorporating security earlier (left) in the SDLC reduces costs. Early stages of the SDLC describe intended systems (e.g., design) as opposed to reality (e.g., code). Automation friendliness also decreases as we move left in the SDLC (e.g., artifacts describing design/requirements are free-text documents). 

Unfortunately, outside of the coding phase, adding security in earlier phases is mostly a manual activity. This limits security coverage and the depth of exploration of products, often manifesting as blind spots in product portfolios (Figure 2). The artifacts in the earlier (left) phases of the SDLC describe “intended” system functionality that may behave differently once implemented. The divergence in translating intentions (e.g., requirements/design) into reality (e.g., code) is how many bugs, including security bugs, are introduced. Finally, the most up-to-date representation of a workflow is the code executing in production, as architecture documents, design documents, and their threat models struggle to keep pace. 

Figure 2: Typical coverage of a manual review based AppSec program. Within the pyramid representing all products, green – AppSec coverage, white – blind spots
Figure 3: Reducing blind spots through automation projects and improving depth of pre-existing engagements. 

To scale AppSec creatively, we have adopted an “improve the left by learning from the right” mantra. We augment the threat modeling of requirements, architecture, and design documents for early course correction with automation focused on code, configuration, and logs. Automation helps with two types of coverage: the number of projects touched (breadth) and the type of knowledge gained (depth). Note that “shifting left” moves manual focus earlier in the cycle, which risks abandoning the manual testing that sat to the right. Automation ensures the testing phase is not completely abandoned when human focus shifts left. 

As shown in Figure 3, automation facilitates analysis of a specific potential weakness/property for workflows in the pyramid. Apart from catching divergences in translating intentions to reality, it is an effective method to initiate a dialog and correct the security posture of products that have yet to engage with the security team. While automation may not cover all topics a typical manual security review does, it applies to a larger number of workflows and scales much better. Automation projects can improve security coverage over time by exploring one topic at a time, and creatively balancing which topics automation explores greatly helps bring pragmatic value to the business. Automation also complements the depth offered by manual AppSec reviews with issue- or topic-specific coverage at scale.

Automation projects should explore topics related to prioritized risks to a business. Such risks are typically identified by speculative exercises (e.g., threat modeling) and data-driven exercises (e.g., past incidents, security issues). Not all prioritized risks will be good candidates for automation projects. Typically, risks that manifest as identical or similar variants across many workflows are ideal for pragmatic automation projects. For example, helping to eliminate secrets from code repositories boils down to matching content against a collection of regular expressions and is a good candidate for an automation project. In contrast, the setup and user accounts needed to find inadequate authorization checks may vary across workflows, making automation at scale cumbersome. 
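As a sketch of the secrets example above, a minimal scanner can be a handful of regular expressions run over repository contents. The patterns below are illustrative assumptions for this post, not a production rule set; real scanners use much larger, carefully tuned collections to keep false positives low:

```python
import re

# Hypothetical example patterns (assumptions for illustration only).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"
    ),
}

def scan_text(text):
    """Return (rule_name, matched_text) pairs for every pattern hit."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings
```

Running such a scan across every repository in a portfolio is what gives the approach its breadth: one narrow check, applied everywhere.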

Both dynamic (live traffic, logs) and static (code, configuration) artifacts are well suited for automation projects. The way a risk manifests indicates whether dynamic or static artifacts fit a given project. For example, a missing security header (say, “Strict-Transport-Security”) can be detected both from live traffic and from code, whereas an insecurely embedded credential or the use of a weak crypto library can only be found by analyzing code. 
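To illustrate the dynamic side, a header check reduces to comparing the headers of a live response against a policy list. The required-header list below is an assumption for illustration; the response itself would come from whatever HTTP client or traffic capture the automation uses:

```python
# Policy list is a hypothetical example; a real program would maintain its own.
REQUIRED_HEADERS = ["Strict-Transport-Security", "X-Content-Type-Options"]

def missing_security_headers(headers):
    """Given response headers as a dict, return the required headers that
    are absent. Names are compared case-insensitively, per HTTP semantics."""
    present = {name.lower() for name in headers}
    return [h for h in REQUIRED_HEADERS if h.lower() not in present]
```

Because the check only needs a response's headers, it can run against every publicly reachable endpoint an inventory produces.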

Automation should work hand in hand with manual reviews. For example, if weak secret management is prevalent, a manual AppSec review can highlight the need to handle secrets correctly: storing them in an approved platform, defining/enforcing access control, deploying them securely to production, rotating them, and so on. The corresponding automation project could find secrets in source code, identifying concrete instances at scale where either the AppSec engagement needs to improve or the affected products need to adopt stronger security controls and engage with the security team. 

As another example, collections of configurations can be lucrative targets. Such collections may encode security properties for many workflows (e.g. configuration files for services at API gateways may capture authentication/authorization intents). Automation on such files then allows a security team to reason about desired behaviors on all workflows represented by such collections. Special care is needed to limit false positives reported by automation projects as that directly affects a security team’s credibility and ability to get work done. A few additional runtime checks go a long way in weeding out false positives. For example, a simple POST request could verify that a backend implements its own authentication rather than relying on an API gateway. While these additional steps may reduce the speed at which automation delivers, in the long run such discipline can keep the business focused on actual risks and can be a win-win for product and security teams. 
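The POST-request verification above might be sketched as a small classifier over the response status. The interpretation below is an assumption about how such a check could triage results; the actual HTTP probe (and its network error handling) is left out of the sketch:

```python
def classify_backend_response(status_code):
    """Interpret the status of an unauthenticated POST sent directly to a
    backend, bypassing the API gateway (hypothetical triage logic).

    401/403       -> backend enforces its own authentication
    2xx           -> backend likely relies on the gateway alone (a finding)
    anything else -> inconclusive; route to manual review
    """
    if status_code in (401, 403):
        return "enforces-auth"
    if 200 <= status_code < 300:
        return "gateway-only"
    return "inconclusive"
```

Keeping an explicit "inconclusive" bucket is one way to preserve the discipline described above: ambiguous results go to a human instead of inflating the findings list with false positives.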

Over the years, the Adobe security team has invested in building core capabilities that greatly increase the effectiveness and depth of security automation. For example, the Security Automation Framework (SAF) allows us to run specific checks against web properties, while Marinus allows us to list all publicly accessible Adobe web properties. Such capabilities go a long way in enabling automation projects and also create new opportunities (e.g., correlation/prioritization).

In summary, an AppSec program needs to intelligently balance its focus across all stages of the software development life cycle. While focus on the left allows early course correction, creatively tapping into the right improves coverage and depth of exploration into a business's risks. Influencing the left by learning from the right allows course correction of existing workflows as an added benefit and paves the way to avoiding potential mistakes in the future.

Prithvi Bisht
Manager, Secure Software Engineering


Posted on 03-10-2020