Automating Credential Management

Every enterprise maintains a set of privileged accounts for a variety of use cases. They are essential to creating new builds, configuring application and database servers, and accessing various parts of the infrastructure at run time. These privileged accounts and passwords can be extremely powerful weapons in the hands of attackers, because they open access to critical systems and the sensitive information that resides on them. Moreover, stealing credentials is often seen as a way for cybercriminals to hide in plain sight, since their use appears to be legitimate access to the system.

In order to support the scaling of our product development, we need to ensure that our environments remain secure as they grow to meet the increasing demands of our customers. For us, the best way to achieve this is by enforcing security at each layer and relying on automation to maintain security controls regardless of scaling needs. Tooling is an important part of enabling this automation, and password management solutions come to our aid here. Using a common tool for credential management is one method Adobe uses to help secure our environment, and proper password management helps make deployments more flexible. For example, we ensure that the access key and API key needed to authenticate to the backup database are not stored on the application server. As a defense-in-depth mechanism, we store the keys in a password manager and pull them at run time when the backup script on the server is executed. This way the keys live in one central location rather than being scattered across individual machines as we scale our application servers. Rotating these credentials becomes easier, and we can easily confirm that there are no cached credentials or misconfigured machines in the environment. We can also maintain a changeable whitelist of the application servers that need to access the password manager, preventing access to the credentials from any IP address that we do not trust.
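As a rough sketch of this pattern (not Adobe’s actual implementation), the example below models a central secret store with an IP allowlist. `InMemorySecretStore`, `run_backup`, and the key name `backup-db/api-key` are all hypothetical stand-ins for a real password manager and its API:

```python
# Illustrative sketch: credentials live only in a central store and are
# fetched at run time, gated by an IP allowlist. In a real deployment the
# store would be backed by a password manager's API, not a dict.

class AccessDenied(Exception):
    pass

class InMemorySecretStore:
    def __init__(self, secrets, allowed_ips):
        self._secrets = secrets          # e.g. {"backup-db/api-key": "..."}
        self._allowed_ips = set(allowed_ips)

    def fetch(self, client_ip, name):
        # Reject requests from machines that are not on the allowlist.
        if client_ip not in self._allowed_ips:
            raise AccessDenied(client_ip)
        return self._secrets[name]

def run_backup(store, client_ip):
    # The key is pulled when the backup script runs; it is never
    # written to disk on the application server.
    api_key = store.fetch(client_ip, "backup-db/api-key")
    return f"backup started with key ending ...{api_key[-4:]}"
```

Because every retrieval goes through one chokepoint, rotating a credential or tightening the allowlist is a single central change rather than a sweep across every machine.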

If an attacker were able to access build machines, they could create malicious binaries that would appear to be signed by a legitimate source. This could enable the attacker to distribute malware to unsuspecting victims very easily. We use two major functions of commercially available password managers to help secure our build environment. First, we leverage the credential management functionality to avoid having credentials on any of our build servers. The goal here is similar to the use case above: we want to keep all keys off the servers, retrieving them only at run time. To support this, we’ve built an extensive library for the client-side components that need to pull credentials. This library allows us to continually provision new virtual machines with a secure configuration and a robust communication channel with the credential manager. Adapting tooling in this way to suit our needs has been a recurring theme in our effort to find solutions to deployment challenges.

Our build environment also uses the remote access functionality provided by password managers, which allows users to open a remote session to the target machine using the password manager as a proxy. We ensure that this is the only mechanism by which engineers can access machines, and we maintain video recordings of the actions executed on the target machine. This gives us a clear audit trail of who accessed the machine, what they did, and when they logged out. Also, because the password manager initiates the remote session and handles authentication to the machine, none of the users or admins need to know the actual passwords. This prevents passwords from being written down and shared, and it becomes seamless to change them as needed.

Credential management has become a challenge primarily because of the sheer number of passwords and keys out there. Given some of our use-cases we’ve found commercially available password management tools can help make deployments easier in the long-term.  Adobe is a large organization with unique products that have very different platforms – having a central location for dealing with password management can help solve some of the challenges that we face as a services company.  As we look to expand each service, we will continue to adapt our usage of tools like these so that we can help keep our infrastructure safe and provide a more secure experience to all our customers.

Pranjal Jumde and Rajat Shah
ASSET Security Researchers

Adobe @ NullCon Goa 2015

The ASSET team in Noida recently attended NullCon, a well-known Indian information security conference held in Goa. My team and I attended trainings on client-side security, malware analysis, mobile pen-testing, and fuzzing, delivered by industry experts in their respective fields. A training I found particularly helpful was one on client-side security by Mario Heiderich. This training revealed several interesting aspects of browser parsing engines. Mario demonstrated various ways XSS protections can be defeated and how modern JavaScript frameworks like AngularJS can expand the attack surface. This knowledge can help us build better protective “shields” for web applications.

Of the two night talks, the one I found most interesting was on the Google fuzzing framework. The speaker, Abhishek Arya, discussed how fuzz testing for Chrome is scaled on a large infrastructure that can be automated to reveal exploitable bugs with minimal human intervention. During the main conference, I attended a couple of good talks on topics such as the “sandbox paradox,” an attacker’s perspective on ECMAScript 2015, drone attacks, and the Cuckoo sandbox. James Forshaw’s talk on sandboxing was of particular interest, as it covered how sandboxes on the Windows platform can be improved using special APIs. Another beneficial session was Jurriaan Bremer’s on the Cuckoo sandbox, where he demonstrated how his tool can be used to automate the analysis of malware samples.

Day 2 started with keynote sessions from Paul Vixie (Farsight Security) and Katie Moussouris (HackerOne). A couple of us also attended a lock-picking workshop, where we were given picks for some well-known lock types and walked through the process of picking those particular locks. We were successful in opening quite a few locks. I also played Bug Bash along with Gineesh (EchoSign team) and Abhijeth (IT team), where we were given live targets in which to find vulnerabilities. We were successful in finding a couple of critical issues, winning our team some nice prize money. :-)

Adobe has been a sponsor of NullCon for several years. At this year’s event, we were seeking suitable candidates for openings on our various security teams. In between talks, we assisted our HR team in the Adobe booth explaining the technical aspects of our jobs to prospective candidates. We were successful in getting many attendees interested in our available positions.

Overall, the conference was a perfect blend of learning, technical discussion, networking, and fun.

Vaibhav Gupta
Security Researcher- ASSET

Information about Adobe’s Certification Roadmap now available!

At Adobe, we take the security of your data and digital experiences seriously. To this end, we have implemented a foundational framework of security processes and controls to protect our infrastructure, applications, and services and to help us comply with a number of industry-accepted best practices, standards, and certifications. This framework is called the Adobe Common Controls Framework (CCF). One of the goals of CCF is to provide clear guidance to our operations, security, and development teams on how to secure our infrastructure and applications. We analyzed the criteria for the most common certifications and found a number of overlaps: out of more than 1,000 requirements from the relevant frameworks and standards, we rationalized them down to about 200 Adobe-specific controls.

Today we have released a white paper detailing CCF and how Adobe is using it to help meet the requirements of important standards such as SOC2, ISO, and PCI DSS among others. CCF is a critical component of Adobe’s overall security strategy. We hope this white paper not only educates on how Adobe is working to achieve these industry certifications, but also provides useful knowledge that is beneficial to your own efforts in achieving compliance with regulations and standards affecting your business.

Never Stop Coding

Several members of Adobe’s security team have taken to the media to offer career advice to aspiring security professionals (you can read more about that here, here, and here). For those interested in security researcher positions, my advice is to never stop coding. This is true whether you are working in an entry-level position or are already a senior researcher.

Within the security industry, it has often been said, “It is easier to teach a developer about security than it is to teach a security researcher about development.” This thought can be applied to hiring decisions. Those trained solely in security can be less effective in a development organization for several reasons.

The first reason is that pure security researchers have often seen only the failures in the industry. This leads them to assume vulnerable code is always the product of apathetic or unskilled developers. Since they have never attempted large-scale development, they don’t have a robust understanding of the complex challenges of secure code development. A researcher can’t be effective in a development organization if he or she doesn’t have an appreciation of the challenges the person on the other side of the table faces.

The second reason is that people with development backgrounds can give better advice. For instance, when NoSQL databases became popular, people mapped the concept of SQL injection to NoSQL injection. At a high level, both are databases of information and both accept queries for that information, so both can have injections. Therefore, people predicted that NoSQL injection would quickly become as common as SQL injection. At a high level, that is accurate.

SQL injection is pervasive because SQL is a “structured query language”: all SQL databases follow the same basic structured format. If you dig into NoSQL databases, you quickly realize that their query formats vary widely, from SQL-esque queries (Cassandra), to JSON-based queries (MongoDB, DynamoDB), to assembly-esque commands (Redis). This means that injection attacks have to be more customized to the target. However, if you are able to have a code-level discussion with the developers, you may discover that they are using a database driver that allows them to run traditional SQL queries against a NoSQL database, which could mean that traditional SQL injection is also possible against your NoSQL infrastructure. Security recommendations for a NoSQL environment also have to be more targeted. For instance, prepared statements are available in Cassandra but not in MongoDB. This is all knowledge you can learn by digging deep into a subject and experimenting with technologies at a developer level.
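To make the JSON-based injection concrete, here is an illustrative sketch, not tied to any particular driver, of how a MongoDB-style filter can be subverted when attacker-controlled input is passed through unchecked. The function names are hypothetical:

```python
# A naive login handler passes request JSON straight into the query filter.
# An attacker who submits {"pass": {"$gt": ""}} instead of a string turns
# the password check into "password greater than empty string", which
# matches every document.

def build_filter_naive(username, password):
    # Vulnerable: `password` may be a JSON object, not a string.
    return {"user": username, "pass": password}

def build_filter_safe(username, password):
    # Type-checking the inputs rejects operator objects like {"$gt": ""}.
    if not isinstance(username, str) or not isinstance(password, str):
        raise TypeError("credentials must be strings")
    return {"user": username, "pass": password}
```

The fix is exactly the kind of targeted recommendation the paragraph above describes: it addresses how this specific query format is abused, rather than reusing generic SQL advice.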

Lastly, you learn to appreciate how “simple” changes can be more complex than you first imagine. I recently tried to commit some changes to the open-source project, CRITs. While my first commit was functional, I’ve already refactored the code twice in the process of getting it production-ready. The team was absolutely correct in rejecting the changes because the design could be improved. The current version is measurably better than my first rough-sketch proposal. While I don’t like making mistakes in public, these sorts of humbling experiences remind me of the challenges faced by the developers I work with. There can be a fairly large gap between a working design and a good design. This means your “simple recommendation” actually may be quite complex. In the process of trying to commit to the project, I learned a lot more about tools such as MongoDB and Django than I ever would have learned skimming security best practice documentation. That will make me more effective within Adobe when talking to product teams using these tools, since I will better understand their language and concerns. In addition, I am making a contribution to the security community that others may benefit from.

At this point in my career, I am in a senior position, a long way from when I first started over 15 years ago as a developer. However, I still try to find time for coding projects to keep my skills sharp and my knowledge up-to-date. If you look at the people leading the industry at companies such as Google, Etsy, iSec Partners, etc., many are respected because they are also keeping their hands on the keyboards and are speaking from direct knowledge. They not only provide research but also tools to empower others. Whether you are a recent grad or a senior researcher, never lose sight of the code, where it all starts.

Peleus Uhley
Lead Security Strategist

More Effective Threat Modeling

There are a lot of theories about threat models. Their utility often depends on the context and the job to which they are applied. I was asked to speak about threat models at the recent BSIMM Community Conference, which made me formally re-evaluate my thoughts on the matter. Over the years I’ve used threat models in many ways at both the conceptual level and at the application level. In preparing for the conference I first tried to deconstruct the purpose of threat models. Then I looked at the ways I’ve implemented their intent.

Taking a step back to examine their value, note that in any risk situation you consider who, what, how, when, and why:

Who is the entity conducting the attack, including nation states, organized crime, and activists.

What is the ultimate target of the attack, such as credit card data.

How is the method by which attackers will get to the data, such as SQL injection.

Why captures the reason the target is important to the attacker. Does the data have monetary value? Or are you just a pool of resources an attacker can leverage in pursuit of other goals?

A threat can be described as who will target what, using how in order to achieve why.

We will come back to when in a moment. Threat models typically put most of the focus on what and how. The implicit assumption is that it doesn’t really matter who or why—your focus is on stopping the attack. Focusing on what and how allows you to identify potential bugs that will crop up in the design, regardless of who might be conducting the attack and their motivation.

The challenge with focusing solely on what and how is that they change over time. How is dependent on the specifics of the implementation, which will change as it grows. On the other hand, who and why tend to be fairly constant. Sometimes, focusing on who and why can lead to new ideas for overall mitigations that can protect you better than the point fixes identified by how.

For instance, we knew that attackers using advanced persistent threat (APT) (who) were fuzzing (how) Flash Player (what). To look at the problem from a different angle, we decided to stop and ask why. It wasn’t solely because of Flash Player’s ubiquity. At the time, most Flash Player attacks were being delivered via Office documents. Attackers were focusing on Flash Player because they could embed it in an Office document to conduct targeted spearphishing attacks. Targeted spearphishing is a valuable attack method because you can directly access a specific target with minimal exposure. By adding a Flash Player warning dialogue to alert users of a potential spearphishing attempt in Office, we addressed why Flash Player was of value to them. After that simple mitigation was added, the number of zero-day attacks dropped noticeably.

I also mentioned that when could be useful. Most people think of threat models as a tool for the design phase. However, threat models can also be used in developing incident response plans. You can take any given risk and consider, “When this mitigation fails or is bypassed, we will respond by…”

Therefore, having a threat model for an application can be extremely useful in controlling both high-level (who/why) and low-level threats (how/what). That said, the reality is that many companies have moved away from traditional threat models. Keeping a threat model up-to-date can be a lot of effort in a rapid development environment. Adam Shostack covered many of the common issues with this in his blog post, The Trouble with Threat Modeling. The question each team faces is how to achieve the value of threat modeling using a more scalable method.

Unfortunately, there is no one-size-fits-all solution to this problem. For the teams I have worked with, my approach has been to try to keep the spirit of threat modeling but be flexible on the implementation. Threat models can also have different focuses, as Shostack describes in his blog post, Reinvigorate your Threat Modeling Process. Covering all the variants would be too involved for a single post, but here are three general suggestions:

  1. There should be a general high-level threat model for the overall application. This high-level model ensures everyone is headed in the same direction, and it can be updated as needed for major changes to the application. A high-level threat model can be good for sharing with customers, for helping new hires to understand the security design of the application, and as a reference for the security team.
  2. Threat models don’t have to be documented in the traditional threat model format. The traditional format is very clear and organized, but it can also be complex and difficult to document in different tools. The goal of a threat model is to document risks and plans to address them. For individual features, this can be in a simple paragraph form that everyone can understand. Even writing, “this feature has no security implications,” is informative.
  3. Put the information where developers are most likely to find it. For instance, adding a security section to the spec using the simplified format suggested eliminates the need to cross-reference a separate document, helping to ensure that everyone involved will read the security information. The information could also be captured in the user story for the feature. If your code is the documentation, see if your Javadoc tool supports custom tags. If so, you could encourage your developers to use an @security tag when documenting code. If you follow Behavior Driven Development, your threat model can be captured as Cucumber test assertions. Getting this specific means the developer won’t always have the complete picture of how the control fits into the overall design. However, it is important for them to know that the documentation note is there for a specific security reason. If the developer has questions, the security champion can always help them cross-reference it to the overall threat model.
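As a concrete illustration of suggestion 3, a threat-model item can be captured as an automated test assertion. This is only a minimal sketch: `make_response` is a hypothetical stand-in for whatever response object your framework provides, and the cookie flags shown are the mitigations a threat model might require.

```python
# The threat ("session cookie stolen over plain HTTP or read by injected
# script") becomes an executable check that runs on every build, keeping
# the mitigation visible where developers will actually find it.

def make_response():
    # Hypothetical stand-in for a real framework response.
    return {"Set-Cookie": "session=abc123; Secure; HttpOnly; SameSite=Lax"}

def test_session_cookie_flags():
    cookie = make_response()["Set-Cookie"]
    # Mitigations from the threat model, asserted in code:
    assert "Secure" in cookie      # never sent over plain HTTP
    assert "HttpOnly" in cookie    # not readable from injected script
```

If the check ever fails, the security champion can cross-reference it back to the overall threat model to explain why the flag is required.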

Overall I think the concept of threat modeling still serves a valid purpose. Examining how and what can ensure your implementation is sound, and you can also identify higher level mitigations by examining who and why. The traditional approach to threat modeling may not be the right fit for modern teams, though teams can achieve the goal if they are creative with their implementation of the concept. Along with our development processes, threat modeling must also evolve.

Peleus Uhley
Lead Security Strategist

Join Us at CSA EMEA Congress November 19 – 20!

Adobe will be participating again this year in the Cloud Security Alliance (CSA) EMEA Congress event in Rome, Italy, November 19 – 20, 2014. This conference attracts senior decision makers in IT Security from a wide range of industries and governmental organizations. This event focuses on regulatory, compliance, governance, and technical security issues facing both cloud service providers and users of cloud services. We’re excited to be back at what promises to be another great event this year.

I will be presenting a keynote session entitled “Security Roadmaps and Dashboards, Oh My!” on Thursday, November 20th, at 9:40 a.m. A “good” security roadmap is going to come from an ear-to-the-ground approach to security across all teams. It should also reflect current security industry trends. This is essential in creating a multi-faceted, balanced security roadmap that actually drives teams to build security into everything they do. How do you build and keep a solid, adaptable security roadmap in place? By focusing on the right metrics to measure success against the roadmap and developing meaningful dashboards to communicate progress and success to management. This presentation will discuss how Adobe tackled this problem across its very large product, service, and I.T. organization and provide insights into how you might tackle this problem in your own organization. I will also be available in our booth to answer questions after the session.

Please make sure to follow @AdobeSecurity on Twitter for the latest happenings during CSA EMEA Congress as we will be live tweeting during the event – look for the hashtag #AdobeCSA.

David Lenoe

Director, Product Security

Adobe Shared Cloud Now SOC 2 Security Type 1 Compliant

We are very happy to report that KPMG LLP has completed their attestation and issued the final Type 1 SOC2 Security report for Adobe’s Digital Media Shared Cloud.

Adobe’s Shared Cloud is the infrastructure component supporting the Adobe Creative Cloud. Adobe Creative Cloud teams can build their product and service offerings on top of the pluggable platform provided by Shared Cloud.

Completion of this project is a very important first step in the compliance roadmap for Adobe Creative Cloud. Any Adobe service that leverages Shared Cloud as its data repository platform and Adobe Cloud Operations for its cloud operations will inherit the controls that are in scope for this Type 1 SOC 2 Security report.

Several Adobe teams worked closely together to ensure the successful completion of the project. The teams will now focus on completing the Type 2 attestation in 2015.

A big thanks to everyone involved.

Abhi Pandit
Sr. Director of Risk Advisory and Assurance

Looking Back at the Grace Hopper Celebration

As someone new to the Grace Hopper Celebration (GHC), I was excited and overwhelmed to realize there were around 8,000 women from more than 60 countries in attendance. I had the opportunity to meet some really interesting people from within and outside of Adobe.

The keynote by Shafi Goldwasser (winner of the 2012 ACM Turing Award) was especially interesting. She discussed cryptography and the varied, seemingly paradoxical solutions it can help us achieve. Highlighting the need to store data privately in the cloud while simultaneously harnessing that data to solve problems (e.g., research in medicine), she emphasized the “magic of cryptography” as the key to this, and spoke at some length on looking at problems through the “cryptographic lens.”

Dr. Arati Prabhakar’s (Director of DARPA) keynote during the award ceremonies was very inspiring. She talked about the benefits military research has provided in areas like the Internet, material sciences, and safer warfare, and about further research into new areas, such as producing new materials and chemicals and rethinking complex military systems. She even showed the audience a video of a robotic arm being controlled by a quadriplegic woman hooked up to a computer.

The majority of presentations I attended were related to security, where I met smart and motivated women working in the security field, along with a lot of students interested in security. The talks varied from Lorrie Cranor’s talk on analyzing and storing passwords safely, to a panel discussion on integrating security into the SDLC (panelists included Justine Osborne, Leigh Honeywell, and Parisa Tabriz), to homomorphic encryption and its future uses (Mariana Raykova and Giselle Font). Other talks ranged from security fundamentals and cryptography aimed at college students to “hot topics” like wearable technology, biometrics, cloud computing, and HCI.

I also helped out at the career fair, and met a lot of undergraduates interested in working with Adobe. It was fun talking with them about what I do and learning about what they were interested in, including two students Adobe had sponsored to attend GHC this year. I met a number of industry professionals as well as students at talks and events who are working on including more girls and women in tech through outreach programs, hackathons and mentoring. It was refreshing to see a few men attending the GHC too.

The theme of the GHC this year was “Everyone, Everywhere.” It was a very inclusive environment, and apart from the talks there were events to make our evenings fun: ice breakers and dances. The long list of impressive speakers, motivating panelists, and encouraging mentors and organizations were all very accessible and inspiring. I had a great time at GHC and I hope more people (men and women!) get to attend the conference in the future.

Devika Yeragudipati
ASSET Security Researcher

Join Us at ISSE EU in Brussels October 14 – 15!

Adobe will be participating again this year in the ISSE EU conference in Brussels, Belgium, Oct. 14-15, 2014. This conference attracts senior decision makers in IT Security from a wide range of industries and governmental organizations. There are numerous sessions tackling many of the current hot topics in security including cloud security, identity management, the Internet of Things (IoT), data protection & privacy, compliance & regulation, and the changing role of IT Security professionals adapting to these changes. 

Adobe will be talking about a few of our security initiatives and programs during the event, specifically highlighting our security training program which I currently manage. The materials from this program now form the basis of the open-source, free security training program from SAFECode (https://training.safecode.org). Many organizations have now used these materials to develop their own security training programs. I will be available on-site to answer questions about these programs. 

We will also have three sessions during the conference. Director of Product Security David Lenoe will deliver a keynote presentation on “Maintaining a Security Organization That Can Adapt to Change” on Tuesday, Oct. 14, at 11:45 a.m. According to Forrester Research, “51% of organizations said it’s a challenge or major challenge to hire security staff with the right skills” – and keeping them happy, productive, and nimble is also a major challenge. This session will discuss Adobe’s approach to addressing these issues in our organization, which we believe may provide valuable insight into handling them in your own organization.

On Tuesday at 3:10 p.m., Mohit Kalra, senior manager for secure software engineering, will provide insight into “Deciding the Right Metrics & Dashboards for Security Success.” This session will discuss what makes a “good” security roadmap and then how to properly measure and share progress against that roadmap to help ensure success.  

Last but not least, on Wednesday, Oct. 15, at 2:40 p.m., I will discuss how “Building Security In Takes Everyone Thinking Like a Security Pro.” While we realize this is a mouthful, it’s probably the best description I can give for the goal of the ASSET Certification Program (http://blogs.adobe.com/security/2013/05/training-secure-software-engineers-part-1.html) at Adobe. We as an industry not only need to increase our security fluency, we also need people who can look at the product they are working on with a hacker’s eye and raise a flag when they see something that may become an issue in the future.

In this talk, I will dedicate most of the time to the experiential elements of the program that give us the ability to build our experts. For example, some candidates have taught themselves how to perform manual penetration testing. On the flip side, there are a lot of projects where candidates have created ways to automate scanning or other processes. One of the more innovative projects was the creation of the Hackfest (http://blogs.adobe.com/security/?s=hackfest&submit=). As one security champion, Elaine Finnell, puts it, “For myself, pursuing the brown belt (in the program) has pushed me beyond simply absorbing information and into doing. Similar to how a science classroom has a lab, putting the information I learn both during the training and during outside trainings into practice helps to solidify my understanding of security principles. While I’m still not an expert on executing penetration testing, fuzzing, or architecture analysis, every experience I have doing this type of work alongside experts serves to improve my ability to be a security champion within my team.”

I love to talk about this stuff. I’ll be available in Adobe’s booth on the expo floor, so if you’re going to be there, please hit me up. I’m also available on Twitter – @JoshKWAdobe. More information about the training program can also be found in our new white paper available at http://www.adobe.com/content/dam/Adobe/en/security/pdfs/adobe-security-training-wp-web.pdf and on the Security@Adobe blog (http://blogs.adobe.com/security/2013/05/training-secure-software-engineers-part-1.html).

You can follow @AdobeSecurity for the latest happenings during ISSE EU as we will be live tweeting during the event – look for the hashtag #AdobeISSE. Also, more information about all of our security initiatives can be found at http://www.adobe.com/security.   

Josh Kebbel-Wyen 

Senior Security Program Manager 

The Simplest Form of Secure Coding

Within the security industry, we often run into situations where providing immediate security guidance isn’t straightforward. For instance, a team may be using a new, cutting edge language that doesn’t have many existing security tools or guidelines available. If you are a small startup, then you may not have the budget for the enterprise security solutions. In large organizations, the process of migrating a newly acquired team into your existing tools and trainings may take several weeks. What advice can we give to those teams to get them down the road to security today?

In these situations, I remind them to go back to their original developer training. Many developers are familiar with the term “technical debt,” which refers to the “eventual consequences of poor system design, software architecture or software development within a codebase.” Technical security debt is one component of an application’s overall technical debt. The higher the technical debt for an application, the greater the chance of security issues. Moreover, it’s much easier to integrate security tools and techniques into code that has been developed with solid processes.

To a certain extent, the industry has known this for a while. Developers like prepared statements because pre-compiled code runs faster, and security people like them because pre-compiled code is less injectable. Developers want exception handling because it makes the web application more stable and lets them cleanly direct users to a support page, which is a better user experience. Security people want exception handling so that there is a plan for malicious input, and because showing a stack trace is an information leak. However, let’s take this a step further.
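The prepared-statement point can be demonstrated with Python’s built-in sqlite3 module. This is a minimal illustration of the principle, not a recommendation of any specific stack:

```python
# String concatenation lets attacker input rewrite the query's structure;
# a placeholder binds the same input as an inert literal value.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

payload = "alice' OR '1'='1"  # classic injection attempt

# Vulnerable: the payload escapes the quotes and the OR clause matches
# every row in the table.
unsafe = conn.execute(
    "SELECT role FROM users WHERE name = '" + payload + "'").fetchall()

# Safe: the ? placeholder treats the whole payload as one string, so
# nothing matches.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (payload,)).fetchall()
```

The parameterized version is both the faster, cleaner developer practice and the security control, which is exactly the overlap this section is describing.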

If you search the web for “Top 10” lists of developer best practices and/or common coding mistakes, you will see a clear overlap between traditional coding principles and security principles across all languages. For example, the Modern IE Cross Browser Development Standards & Interoperability Best Practices is written for developers and justifies its points using concepts that are important to clean development. However, I can go through the same list and justify many of its points using security concepts. Here are just a few of their recommendations and how they relate to security:

  • Use a build process with tools to check for errors and minify files. On the security side, establishing this will enable you to more easily integrate security tools into the build process.
  • Always use a standards-based doctype to avoid Quirks Mode. Quirks Mode makes it easier to inject XSS vulnerabilities into your site.
  • Avoid inline JavaScript tags in HTML markup. In security, avoiding inline JavaScript makes it easier to support a Content Security Policy.

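The last two bullets reinforce each other, as this small Python sketch shows: a page built with a standards-based doctype and only external scripts can afford a strict Content-Security-Policy response header. The helper function and policy string are illustrative assumptions, not a complete production policy:

```python
# With all scripts served as external files, a response can carry a strict
# Content-Security-Policy that blocks injected inline script. The helper
# and policy below are illustrative, not a complete production policy.
def security_headers():
    return {
        "Content-Security-Policy": "default-src 'self'; script-src 'self'",
    }

# The markup side: a standards-based doctype (avoiding Quirks Mode) and
# no inline <script> blocks -- only an external file reference.
PAGE = """<!DOCTYPE html>
<html>
  <head><script src="/static/app.js"></script></head>
  <body>Hello</body>
</html>"""

print(security_headers()["Content-Security-Policy"])
```

Under a `script-src 'self'` policy, any inline script an attacker manages to inject into the markup is simply refused by the browser.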
Switching to Ruby on Rails, here’s a list of the 10 Most Common Rails Mistakes and how to avoid them so developers can create better applications. When you look at those errors from a security perspective, you will also see overlaps:

  • Common Mistake #1-3: Putting too much logic in the controller/view/model. These three points all deal with keeping your code cleanly designed for better maintainability. Security is a common reason for performing code maintenance. Oftentimes, your response to an active attack against your system will be slowed because the code cannot be changed easily or is too complex to have a single, identifiable validation point.

    This section also reminds us that the controller is the recommended place for first-level session and request parameter management. This allows for high-level sanity checking before the data makes it into your model.

  • Common Mistake #5: Using too many gems. Controlling your third-party libraries also helps to limit your attack surface and reduce the maintenance cost of keeping them up-to-date against security vulnerabilities.
  • Common Mistake #7: Lack of automated tests. As mentioned in the HTML list, using an automated test framework enables you to include security tests as well. That post recommends using techniques such as BDD, for which there are also Ruby-based BDD security frameworks like Gauntlt.
  • Common Mistake #10: Checking sensitive information into source code repositories. This is clearly a security rule. In this example, they are referring to a specific issue with Rails secret tokens. However, this is a common mistake in development generally. Separating credentials from code is simply good coding hygiene – it can prevent an unintended leak of the credential and permit a credential rotation without having to rebuild the application.
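The credentials point above can be sketched in a few lines of Python: the secret is supplied by the environment (or fetched from a password manager) at run time, so nothing sensitive lives in the repository. The variable name `APP_DB_PASSWORD` is an assumption for illustration:

```python
import os

def get_db_password():
    # Read the secret from the environment at run time rather than
    # committing it to source control; APP_DB_PASSWORD is illustrative.
    password = os.environ.get("APP_DB_PASSWORD")
    if password is None:
        raise RuntimeError("APP_DB_PASSWORD is not set")
    return password

# In practice the deploy tooling or secrets manager sets this value;
# it is set inline here only so the sketch is self-contained.
os.environ["APP_DB_PASSWORD"] = "example-only"
print(get_db_password())
```

Because the code only names the credential rather than containing it, rotating the secret is a deployment change, not a rebuild.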

Even if you go back to a 2007 article in the IBM WebSphere Developer Technical Journal on The Top Java EE Best Practices, described as a “best-of-the-best list of what we feel are the most important and significant best practices for Java EE,” you will see the same themes echoed in the first five principles of the list:

  • Always use MVC. This was also mentioned in the Rails Top 10. Centralized development allows for centralized validation.
  • Don’t reinvent the wheel. This is true for security as well. For instance, don’t invent your own cryptography library wheel!
  • Apply automated unit tests and test harnesses at every layer. Again, this will make it easier to include security tests.
  • Develop to the specifications, not the application server. This point highlights the importance of not locking your code into a specific version of the server. One of the most frequent issues large enterprises struggle with is migrating off older, vulnerable platforms, because the code is too dependent on the old environment. This concept is also related to #16 in their list, “Plan for version updates”.
  • Plan for using Java EE security from Day One. The idea here is similar to “Don’t reinvent the wheel.” Most development platforms provide security frameworks that are already tested and ready to use.
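The testing theme above (Rails Mistake #7 and “apply automated unit tests at every layer”) can be sketched with Python’s unittest: once a harness exists, a security check is just one more test case alongside the functional ones. The escaping helper here is hypothetical:

```python
import unittest

def render_comment(text):
    # Hypothetical output-escaping helper of the kind a web app would use.
    return (text.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;"))

class CommentTests(unittest.TestCase):
    def test_renders_plain_text(self):
        # Ordinary functional test.
        self.assertEqual(render_comment("hello"), "hello")

    def test_escapes_script_tags(self):
        # Security test living in the same harness as the functional one.
        self.assertNotIn("<script>",
                         render_comment("<script>alert(1)</script>"))
```

Run with `python -m unittest`; the security assertion fails the build the moment someone weakens the escaping, with no separate security tooling required.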

As you can see, regardless of your coding language, security best practices tend to overlap with your developer best practices. Following them will either directly make your code more secure or make it easier to integrate security controls later. In meetings with management, developers and security people can be aligned in requesting adequate time to code properly.

It’s true that security-specific training will always be necessary for topics such as authentication, authorization, and cryptography. And security training certainly shows you how to think about your code defensively, which will help with application logic errors. However, a lot of the low-hanging bugs and security issues can be caught by following good, old-fashioned coding best practices. The more you can control your overall technical debt, the more you will control your security debt.

Peleus Uhley
Lead Security Strategist