Posts in Category "Security"

Better Security Through Automation

Automation Strategies

“Automate all the things!” is a popular meme in the cloud community. Many of today’s talks at security conferences discuss the latest sophisticated automation tool developed by a particular organization. However, adding “automation” to a project does not magically make things better by itself. Any idea can be automated, including the bad ones. For instance, delivering false positives “at scale” is not going to help your teams. This blog post will discuss some of the projects that we are currently working on and the reasoning behind their goals.

Computer science has been focused on automation since its inception. The advent of the cloud only frees our ideas from being resource-bound by hardware. However, that doesn’t necessarily mean that automation must take up 100 scalable machines. Sometimes simple automation projects can have large impacts. Within Adobe, we have several types of automation projects underway to help us with security. The goals range from business-level dashboards and compliance projects to low-level security testing projects.

 

Defining goals

One large project that we are currently building is a security automation framework focused on security assertions. When you run a traditional web security scanner against a site, it will try to tell you everything about everything on the site. In order to do that effectively, you have to do a lot of pre-configuration (authentication, excluded directories, etc.). Working with Mohit Kalra, the Sr. Security Manager for the ASSET security team, we experimented with the idea of security assertions. Basically, could a scanner answer one true/false question about the site with a high degree of accuracy? Then we would ask that one simple question across all of our properties in order to get a meaningful measurement.

For instance, let’s compare the following two possible automation goals for combating XSS:

(a) Traditional automation: Give me the location of every XSS vulnerability for this site.

(b) Security assertion: Does the site return a Content Security Policy (CSP) header?

A web application testing tool like ZAP can be used to automate either goal. Both of these tests can be conducted across all of your properties for testing at scale. Which goal you choose will decide the direction of your project:

Effort to implement:

(a) Potentially requires effort toward tuning and configuration with a robust scanner in order to get solid results. There is a potential risk to the tested environment (excessive DB entries, high traffic, etc.).

(b) A straightforward measurement with a simple scanner or script. There is a low risk to the tested environment.

Summarizing the result for management:

(a) This approach provides a complex measurement of risk that can involve several variables (reflected vs. persistent, potential value of the site, cookie strategy, etc.). The risk that is measured is a point-in-time assessment since new XSS bugs might be introduced later with new code.

(b) This approach provides a simple measurement of best practice adoption across the organization. A risk measurement can be inferred but it is not absolute. If CSP adoption is already high, then more fine-grained tests targeting individual rules will be necessary. However, if CSP adoption is still in the early stages, then just measuring who has started the adoption process can be useful.

Developer interpretation of the result:

(a) Development teams will think in terms of immediate bugs filed.

(b) Development teams will focus on the long term goal of defining a basic CSP.

Both (a) and (b) have merits depending on the needs of the organization. The traditional strategy (a) can give you very specific data about how prevalent XSS bugs are across the organization. However, tuning the tools to effectively find and report all that data is a significant time investment. The security assertion strategy (b) focuses more on long term XSS mitigations by measuring CSP adoption within the organization. The test is simpler to implement with less risk to the target environments. Tackling smaller automation projects has the added value of providing experience that may be necessary when designing larger automation projects.
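To illustrate how lightweight a security assertion can be, here is a minimal sketch of check (b) in Python. It assumes the requests library and a hypothetical list of properties; a production version would also handle redirects, authentication, and result storage.

    # Security assertion: does each site return a Content-Security-Policy header?
    import requests

    SITES = ["https://www.example.com", "https://app.example.com"]   # hypothetical property list

    for site in SITES:
        try:
            headers = requests.get(site, timeout=10).headers
            has_csp = "Content-Security-Policy" in headers            # case-insensitive lookup
        except requests.RequestException:
            has_csp = None                                            # unreachable; record separately
        print(f"{site}: CSP header present = {has_csp}")

Because each site yields a single true/false answer, the results roll up naturally into an organization-wide adoption metric.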

Which goal is a higher priority will depend on your organization’s current needs. We found that, in experimenting with the true/false approach of security assertions, we focused more of our energy on what data was necessary rather than just what data was possible. In addition, since security assertions are assumed to be simple tests, we focused more of our design efforts on perfecting the architecture of a scalable testing environment rather than the idiosyncrasies of the tools that the environment would be running. Many automation projects try to achieve depth and breadth at the same time by running complex tools at scale. We decided to take an intermediate step by using security assertions to focus on breadth first and then to layer on depth as we proceed.

 

Focused automation within continuous deployment

Creating automation environments to scan entire organizations can be a long term project. Smaller automation projects can often provide quick wins and valuable experience on building automation. For instance, continuous build systems are often a single chokepoint through which a large portion of your cloud must pass before deployment. Many of today’s continuous build environments allow for extensions that can be used to automate processes.

As an example, PCI requires that code check-ins are reviewed. Verifying that this process is followed consistently requires significant human labor. One of our Creative Cloud security champions, Jed Glazner, developed a Jenkins plugin that can verify that each check-in was reviewed. The plugin monitors the specified branch and ensures that all commits belong to a pull request and that the pull requests were not self-merged. This allows for daily, automatic verification of the process for compliance.
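The plugin itself is internal, but the underlying check is easy to picture. The sketch below runs the same idea against a GitHub-style API; the organization, repository, branch, and token are hypothetical placeholders, and this is an illustration of the approach rather than the actual plugin code.

    import requests

    API = "https://api.github.com"                                   # any GitHub-style SCM API
    OWNER, REPO, BRANCH = "example-org", "example-repo", "main"      # hypothetical values
    HEADERS = {"Authorization": "token YOUR_TOKEN_HERE"}

    def commits_on_branch():
        # most recent commits on the monitored branch
        resp = requests.get(f"{API}/repos/{OWNER}/{REPO}/commits",
                            params={"sha": BRANCH}, headers=HEADERS)
        resp.raise_for_status()
        return [c["sha"] for c in resp.json()]

    def pulls_for(sha):
        # pull requests associated with a given commit
        resp = requests.get(f"{API}/repos/{OWNER}/{REPO}/commits/{sha}/pulls", headers=HEADERS)
        resp.raise_for_status()
        return resp.json()

    violations = []
    for sha in commits_on_branch():
        prs = pulls_for(sha)
        if not prs:
            violations.append(f"{sha[:10]}: commit has no associated pull request")
            continue
        for pr in prs:
            detail = requests.get(f"{API}/repos/{OWNER}/{REPO}/pulls/{pr['number']}",
                                  headers=HEADERS).json()
            merged_by = (detail.get("merged_by") or {}).get("login")
            if merged_by and merged_by == detail["user"]["login"]:
                violations.append(f"{sha[:10]}: PR #{pr['number']} was self-merged by {merged_by}")

    for v in violations:
        print(v)

Scheduled as a nightly job, a check like this turns a manual compliance audit into a report that is always current.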

Jed worked on a similar project where he created a Maven plug-in that lists all third-party Java libraries and their versions within the application. The plugin would then upload that information to our third-party library tracker so that we can immediately identify libraries that need updates. Since the plug-in was integrated into the Maven build system, the data provided to the third-party library tracker was always based on the latest nightly build and it was always a complete list.
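The same result can be approximated outside of a build plug-in with a small script. The sketch below shells out to the maven-dependency-plugin and posts the parsed list to a hypothetical tracker endpoint; it illustrates the approach rather than the plug-in Jed built.

    import json
    import subprocess
    import requests

    TRACKER_URL = "https://tracker.example.com/api/libraries"        # hypothetical endpoint

    # Ask Maven to write the resolved dependency list to a file
    subprocess.run(["mvn", "-q", "dependency:list", "-DoutputFile=deps.txt"], check=True)

    libraries = []
    with open("deps.txt") as fh:
        for line in fh:
            parts = line.strip().split(":")
            if len(parts) >= 4:                  # groupId:artifactId:type:version[:scope]
                group_id, artifact_id, _, version = parts[:4]
                libraries.append({"group": group_id, "artifact": artifact_id, "version": version})

    resp = requests.post(TRACKER_URL, data=json.dumps(libraries),
                         headers={"Content-Type": "application/json"})
    resp.raise_for_status()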

 

Your organization will eventually build, buy, or borrow a large-scale automation tool that scales out the enumeration of immediate risk issues. However, before you jump headfirst into trying to build a robust scanning environment from scratch, be sure to first identify what core questions you need the tools to answer in order to support your program. You might find that starting with smaller automation tasks that track long-term objectives or operational best practices can be just as useful to an organization. Deploying these smaller projects can also provide experience that can help your plans for larger automation projects.

 

Peleus Uhley
Principal Scientist

Improving Security for Mobile Productivity with Adobe Document Cloud

Recently Adobe announced that it is working with Microsoft to help improve the security of mobile productivity applications. We have integrated Adobe Acrobat Reader DC with Microsoft Intune, a solution for secure enterprise mobile application management. This gives I.T. management and security professionals more control over critical productivity applications deployed to their mobile device users. This functionality is currently available for Android devices and will soon be made available for iOS devices.

Our work with Microsoft is part of Adobe’s overall commitment to help keep our customers’ critical assets and data secure. We will continue to work with Microsoft and other security community partners to improve security across our products and services.

For more information about this solution, please see our post on the Adobe Document Cloud blog.

Bronwen Matthews
Sr. Product Marketing Manager, Security

Hacktoberfest 2015

Autumn has arrived, and National Cybersecurity Awareness Month with it. We wanted to celebrate and raise awareness about security at Adobe. What could be better than bringing hands-on training, a capture-the-flag competition, and beer together in a single day across the world? That is exactly what we did, and we called it Hacktoberfest.

Around 160 people in the US, Europe and India came together on October 14th to take part in a full day focused on security. The day progressed from a broad, hands-on threat modeling training to learning tools like Burp Suite to a Capture the Flag event for prizes.

We saw a lot of new faces at this event, no doubt due to the prizes offered for the capture-the-flag competition. There was also a diverse skill set present in the room, from people in nontechnical roles to those with a lot of internal pen-testing experience. We learned that our community is hungry for training and a deeper understanding of security. All of the material, except for one training, was developed in-house.

When most people’s experience of security training is limited to computer-based modules, there is great value in bringing people together in a face-to-face event where they can interact not only with the trainers, but also with each other. While we’ve done smaller, more targeted trainings in the past, this was the first truly global event.

People really loved the hands-on nature of the day; we had responses like “I thought the capture the flag event was incredibly fun and engaging” and “I liked the demonstration on how to use Burp Suite to attack a service/site.”

One of the unique aspects of the day was its global nature. Essentially two events were run, one in the US time zones and one in India. We did our best to create the same experience for the two groups while paying attention to their different content needs. All presentations were local and questions could be answered in real time.

Of course, the most popular event of the day was the Capture the Flag event. One of our researchers took it upon himself to create an environment to host the game. It’s called WOPR, and we will be providing more information on it soon. Two other researchers worked to create the challenges for the game.

There was quite a lot of energy in all of those conference rooms as people engaged with the training and the competition. The most important lesson we learned from this exercise is that people at Adobe, all around the world, care about securing our products.

 

Josh Kebbel-Wyen
Sr. Security Program Manager, Training

Security Collaboration at Adobe

At Adobe we recognize that our customers benefit when we take a collaborative approach to vulnerability disclosure.  We pride ourselves on the symbiotic relationship we’ve cultivated with the security community and continue to value the contributions that security researchers of all stripes make to hardening our software.

As a measure of the value we place in external code reviews and security testing, Adobe interfaces with the security community through a spectrum of engagement models, including (but not limited to):

  • Traditional third-party code reviews and pen-tests
  • Crowd-sourced pen-tests
  • Voluntary disclosures to our Product Security Incident Response Team (PSIRT)
  • Submissions to our web application disclosure program on HackerOne

Code reviews and pen-tests

Before Adobe introduces a major upgrade or new product, feature or online service offering, a code review and pen-test is often performed by an external security company.  These traditional third-party reviews provide a layer of assurance to complement our internal security assessments and static code analysis that are part of our Secure Product Lifecycle (SPLC).

Crowd-sourced pen-tests

To benefit from a larger pool of security researchers, Adobe also uses crowd-sourced pen-tests in tightly scoped, time-bound engagements involving an elite pool of pen-testers targeting a single service offering or web application. This approach has helped supplement the traditional pen-tests against our online services by increasing code coverage and broadening the testing techniques applied.

Disclosures to PSIRT

The Product Security Incident Response Team (PSIRT) is responsible for Adobe’s vulnerability disclosure program, and typically responds first to the security community’s submissions of vulnerabilities affecting an Adobe product, online service or web property.  In addition to its role as conduit with external researchers, PSIRT partners with both internal and external stakeholders to ensure vulnerabilities are handled in a manner that both minimizes risk to customers and encourages researchers to disclose in a coordinated fashion.

Disclosures via HackerOne

In March 2015, Adobe launched its web application vulnerability disclosure program on HackerOne.  This platform offers researchers the opportunity to build a reputation and learn from others in the community, while allowing vendors to streamline workflows and scale resources more effectively.

As new bug hunting and reporting platforms enable part-time hobbyists to become full-time freelance researchers, we look forward to continuing a constructive collaboration with an ever-widening pool of security experts.

 

Pieter Ockers
PSIRT Security Program Manager

Disha Agarwal
Product Security Manager

Recap: BlackHat 2015 and r00tz@DefCon 2015

This year Adobe security team members were out in force attending BlackHat 2015 and – new this year – helping inspire the next generation of security professionals at the r00tz @ DefCon conference for kids. Adobe was a major sponsor of the r00tz conference this year helping to set up and run a 3D printing workshop and hackfest for the young attendees.

BlackHat brings together the top experts in the security field to discuss and expose current issues in the information security industry. While there were a variety of talks covering a wide breadth of topics, here are some talks that stood out to us during our time there.

In the discussion panel “Is the NSA Still Listening to Your Phone Calls?” Mark Jaycox from the Electronic Frontier Foundation (EFF) and Jamil Jaffer, former member of the House Permanent Select Committee on Intelligence (HPSCI), talked about government surveillance and the tradeoffs between keeping our privacy and using surveillance to defend against current threats. It was interesting to see two people on opposite sides of the spectrum openly discussing this complex issue. Listening to the two parties discuss their points let me walk away with a more informed opinion on the current state of government surveillance in the country today.

James Kettle from Portswigger talked about server-side template injection and showed techniques to identify and exploit it on popular template engines such as FreeMarker, Velocity, Twig, and Jade. This vulnerability occurs when users are allowed to edit templates or untrusted user input is embedded in a template. It was interesting to see how this vulnerability can be used to directly attack web servers and perform remote code execution instead of just cross-site scripting. The talk raised awareness of the damage one can do if an application is vulnerable to template injection. Our researchers and security champions will be able to apply the information gained from this talk to identify and mitigate template injection in Adobe products.
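The talk focused on Java and JavaScript template engines, but the same class of bug is easy to reproduce in almost any engine. Here is a hypothetical illustration using Python’s Jinja2 that shows the difference between concatenating user input into a template and passing it in as data.

    from jinja2 import Template

    user_input = "{{ 7 * 7 }}"                    # attacker-controlled value

    # Vulnerable: the user's input becomes part of the template and is evaluated
    print(Template("Hello " + user_input).render())               # prints "Hello 49"

    # Safer: the input is substituted as data and left inert
    print(Template("Hello {{ name }}").render(name=user_input))   # prints "Hello {{ 7 * 7 }}"

Once an attacker can evaluate expressions inside the template engine, engine-specific payloads can often be escalated to the kind of remote code execution demonstrated in the talk.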

In the talk “The Node.js Highway: Attacks are at Full Throttle,” Maty Siman and Amit Ashbel discussed security issues and demonstrated new attack techniques against Node.js applications. The attack on the pseudo-random number generator in Node.js, which allows an attacker to predict the next number given three consecutive numbers, was quite interesting. This means an application generating passwords using the PRNG might reveal the passwords of all its users. The talk educated our researchers and security champions on new vulnerabilities to look for while reviewing a Node.js application.

In addition to all of the great learnings and networking at BlackHat 2015, many from our team stayed around after BlackHat to attend DefCon and help out at the r00tz @ DefCon conference for kids. This was Adobe’s first year sponsoring the r00tz conference. With the help of our awesome Photoshop engineering teams, we were able to get kid-ready workstations set up with our creative tools and hooked up to cool MakerBot 3D printers. It was a lot of fun helping kids take all of the ideas in their heads and translate them into physical objects they could then take home with them – with, of course, a lot of hacking involved to get many of the ideas to work. In addition to our 3D printing workshop, there were other exercises including a capture the flag contest and robot building contest. It was very rewarding for all of us to sponsor and be a part of inspiring these kids to pursue careers in technology.

 

Tim Fleer
Security Program Manager

Karthik Thotta Ganesh
Web Security Researcher

Adobe Security Team @ BlackHat 2015

I am headed to BlackHat 2015 in Las Vegas this week with members of our Adobe product security teams. We are looking forward to connecting with the security community throughout the week. We also hope to meet up with some of you at the parties, at the craps tables, or just mingling outside the session rooms during the week. Make sure to follow our team on Twitter @AdobeSecurity. Feel free to follow me as well @BradArkin. We’ll be tweeting our observations and happenings during the week. Look for the hashtag #AdobeBH2015.

This year we are also proud to sponsor the r00tz Kids Conference @ DefCon. Members of our teams will be helping out with 3D printing and other workshops during this great conference for future security pros. We hope you are able to bring your kids along to join our team at this fun event as part of your DefCon experience.

We are looking forward to a great week in Vegas.

Brad Arkin
VP and Chief Security Officer

Why Moms Can Be Great at Computer Security

As a new mom, I’ve come to a few realizations as to why I think moms can be really innovative and outright great when it comes to solving problems in computer security. I realize these anecdotes and experiences can apply to any parent, so please take this as purely from my personal “mom” perspective. This is not to say moms are necessarily better (my biases aside), but, I do think there are some skills we learn on-the-fly as new mothers that can become invaluable in our security careers. And vice-versa – there are many skills I’ve picked up throughout my security career that have come in really handy as a new mom. Here are my thoughts on some of the key areas where I think these paths overlap:

  • We are ok with not being popular. Any parent who has had to tell their kid “no,” ground them, or otherwise “ruin their lives” knows that standing firm in what is right is sometimes not easy – but it is part of the job. Security is not all that different. We often tell people that taking unsafe shortcuts or not building products and services with security in mind will not happen on our watch. From time to time, product teams are mad when we have to go over their heads to make sure key things like SSL are enabled by default as a requirement for the launch of a new service. In incident response, for example, we sometimes have to make hard decisions like taking a service offline until the risk can be mitigated. And we are ok with doing all of this because we know it is the right thing to do. However, when we do it, we are kind but firm – and, as a result, we are not always the most liked person in a meeting, and we’re very OK with that.
  • We can more easily juggle multiple tasks and priorities. My primary focus has always been incident response, but it was not until I had a child that I realized how well my job prepared me for parenthood. A security incident usually has many moving pieces at once – investigate, confirm, mitigate, update execs, and a host of other things – and they all need to be done right now. Parents are often driving carpools while eating breakfast, changing diapers on a conference call while petting the dog with a spare foot (you know this is not an exaggeration), and running through Costco while going through math flash cards with our daughters. At the end of each workday, we have to prioritize dinner, chores, after-school activities, and bedtime routines. It all seems overwhelming. But, in a matter of minutes, a plan has formed and we are off to the races! We delegate, we make lists, and somehow it all gets done. Just like we must do with our security incident response activities.
  • We trust but verify. This is an actual conversation:

Mom: Did you brush your teeth?
Kid: Yes
Mom (knowing the kid has not been in the bathroom in hours): Are you sure? Let me smell your breath
Kid: Ugggghhhh… I’ll go brush them now…

I hear a similar conversation over and over in my head in security meeting after meeting. It usually is something like this:

Engineer: I have completed all the action items you laid out in our security review
Mom (knowing that the review was yesterday and it will take about 10 hours of engineering work to complete): Are you sure? Let’s look at how you implemented “X.”
Engineer: Oh, I meant most of the items are done
Mom: It is great you are starting on these so quickly. Please let me know when they are done.

Unfortunately, this does indeed happen sometimes – which is why I must be such a staunch guardian. Security can take time and is sometimes not as interesting as coding a new feature. So, like a kid who would rather watch TV than brush his teeth because it is not seen as a big deal to not brush, we have to gently nudge and we have to verify.

  • We are masters at seeing hidden dangers and potential pitfalls. When a baby learns to roll, crawl, and walk, moms are encouraged to get down at “baby level” to see and anticipate potentially dangerous situations. Outlet covers are put on, dangerous chemical cleaners no longer live under the sink, and bookcases are mounted to the walls. As kids get older, the dangers we see are different, but we never stop seeing them. Some of this is just “mom worry” – and we have to keep it in check to avoid becoming dreaded “helicopter parents.” However, we are conditioned to see a few steps ahead and we learn to think about the worst case scenario. Seeing worst case scenarios and thinking like an attacker are two things that make security professionals good at their jobs. Many are seen as paranoid, and, quite frankly, that paranoia is not all that dissimilar to “mom worry.” Survival of the species has relied on protection of our young, and although a new release of software is not exactly a baby, you can’t turn off that protective instinct.

The similarities between work and parenthood really surprised me. Being a parent and being a security professional sound so dissimilar on the surface, but it is amazing how the two feed each other – and how my growth in one area has helped my growth in the other. It also shows how varying backgrounds can be your path to a successful security career.

 

Lindsey Wegrzyn Rush
Sr. Manager, Security Coordination Center

Securely Deploying MongoDB 3.0

I recently needed to set up an advanced, sharded MongoDB 3.0 database with all the best practices enabled for a deployment of the CRITs web application. This was an opportunity for me to get firsthand experience with the security guidance that I recommend to other Adobe teams. This post will cover some of the lessons that I learned along the way. It isn’t a replacement for reading the documentation. Rather, it is a story to bookmark for when one of your teams is ready to deploy a secure MongoDB 3.0 instance and is looking for real-world examples to supplement the documentation.

MongoDB provides a ton of security documentation and tutorials, which are invaluable. It is highly recommended that you read them thoroughly before you begin. The tutorials often contain important details that aren’t captured in the core documentation for a specific feature.

If you are migrating from an older version, you’ll quickly find that MongoDB has been very active in improving the security of its software. The challenge is that some of your previous work may now be deprecated. For instance, the password hashing functions have migrated from MONGODB-CR to SCRAM-SHA-1. The configuration file switched in version 2.6 from name-value pairs to YAML. Oddly, when I downloaded the most recent version of MongoDB, it came with the name-value pair version by default. While name-value pairs are still supported, I decided to create the new YAML version from scratch to avoid a migration later. In addition, keyfile authorization between cluster servers has been replaced with X.509. These improvements are all things you will want to track when migrating from an older version of MongoDB.

In prepping for the deployment, there are a few things you will want to do:

  • Get a virtual notepad. A lot of MongoDB commands are lengthy to type, and you will end up pasting them more than once.
  • After reading the documentation and coming up with a plan for the certificate architecture, create a script for generating certificates (a sketch follows this list). You will end up generating one to two certificates per server.
  • Anytime you deploy a certificate system, you should have a plan for certificate maintenance such as certificate expirations.
  • The system is dependent on a solid base. Make sure you have basic sysadmin tasks done first, such as using NTP to ensure hosts have consistent clocks for timestamps.
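As an example of the certificate script mentioned above, here is a minimal sketch in Python that shells out to OpenSSL. It assumes a root CA (rootCA.pem / rootCA.key) already exists, and the hostnames, filenames, and subject are hypothetical; adapt them to your own certificate plan.

    import subprocess
    import tempfile

    SHARDS = ["mongo-shard1", "mongo-shard2", "mongo-cfg1"]          # hypothetical hosts
    SUBJECT = "/C=US/ST=California/O=My Company/OU=My Group/CN={host}"

    def issue(host, eku):
        key, csr, crt = f"{host}-{eku}.key", f"{host}-{eku}.csr", f"{host}-{eku}.crt"
        # private key plus certificate signing request for this host
        subprocess.run(["openssl", "req", "-new", "-nodes", "-newkey", "rsa:2048",
                        "-keyout", key, "-out", csr,
                        "-subj", SUBJECT.format(host=host)], check=True)
        # sign with the root CA and set the extended key usage (serverAuth or clientAuth)
        with tempfile.NamedTemporaryFile("w", suffix=".cnf", delete=False) as ext:
            ext.write(f"extendedKeyUsage={eku}\n")
        subprocess.run(["openssl", "x509", "-req", "-days", "365", "-in", csr,
                        "-CA", "rootCA.pem", "-CAkey", "rootCA.key", "-CAcreateserial",
                        "-extfile", ext.name, "-out", crt], check=True)
        # MongoDB's PEMKeyFile/clusterFile settings expect key and certificate in one PEM file
        with open(f"{host}-{eku}.pem", "w") as pem:
            for part in (key, crt):
                pem.write(open(part).read())

    for host in SHARDS:
        issue(host, "serverAuth")    # SSL certificate for the mongod instance
        issue(host, "clientAuth")    # certificate for cluster membership authentication

Keeping the subject template in one place also makes it easier to satisfy the requirement, discussed below, that all cluster certificates share the same subject apart from the CN.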

If you are starting from scratch, I would recommend getting MongoDB cluster connectivity established, followed by layering on security. At a minimum, establish basic connectivity between the shards. If you try to do security and a fresh install at the same time, you may have a harder time debugging.

Enabling basic SSL between hosts

I have noticed confusion over which versions of MongoDB support SSL, since it has changed over time and there were differences between standard and enterprise versions. Some package repositories for open-source OSs are hosting older versions of MongoDB. The current MongoDB 3.0 page says, “New in version 3.0: Most MongoDB distributions now include support for SSL.” Since I wasn’t sure what “most” meant, I downloaded the standard Ubuntu version (not enterprise) from the MongoDB hosted repository, as described here: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/. That version did support SSL out of the box.

MongoDB has several levels of SSL settings, including disabled, allowSSL, preferSSL, and requireSSL. These can be useful if you are slowly migrating a system, are learning the command line, or have different needs for different roles. For instance, you may specify requireSSL for your shards and config servers to ensure secure inter-MongoDB communication. For your MongoDB router instance, you may choose a setting of preferSSL to allow legacy web applications to connect without SSL, while still maintaining secure inter-cluster communication.

If you plan to also use X.509 for cluster authorization, you should consider whether you will also be using cluster authentication and whether you want to specify a separate certificate for clusterAuth. If you go with separate certificates, you will want to set the serverAuth Extended Key Usage (EKU) attribute on the SSL certificate and create a separate clientAuth certificate for cluster authorization. A final SSL configuration would look like this:

net:
   ssl:
      CAFile: "root_CA_public.pem"
      mode: requireSSL
      PEMKeyFile: "mongo-shard1-serverAuth.pem"
      PEMKeyPassword: YourPasswordHereIfNecessary

Enabling authentication between servers

The inter-cluster authentication method changed in version 2.6 from using keyfiles to leveraging X.509 certificates. The keyfile authentication was just a shared secret, whereas X.509 verifies approval from a known CA. To ease migration from older implementations, MongoDB lets you start at keyFile, then move to hybrid support with sendKeyFile and sendX509, before finally ending at the X.509-only authentication setting: x509. If you have not already enabled keyfiles in an existing MongoDB deployment, then you may need to take your shard offline in order to enable it. If you are using a separate certificate for X.509 authentication, then you will want to set the clientAuth EKU in the certificate.

The certificates used for inter-cluster authentication must have their X.509 subject (O, OU, DC, etc.) set exactly the same, except for the hostname in the CN. The CN or a Subject Alternative Name must match the hostname of the server. If you want the flexibility to move shards to new instances without reissuing certificates, you may want a secondary DNS infrastructure that allows you to remap static hostnames to different instances. When a cluster node is successfully authenticated to another cluster node, it will get admin privileges for the instance. The following settings will enable cluster authentication:

net:
   ssl:
      CAFile: "/etc/mongodb/rootCA.pem"
      clusterFile: "mongo-shard1-clientAuth.pem"
      clusterPassword: YourClusterFilePEMPasswordHere
      CRLFile: "YourCRLFileIfNecessary.pem"

security:
   clusterAuthMode: x509

Client authentication and authorization

MongoDB authorization supports a set of built-in roles and user-defined roles for those who want to split authorization levels across multiple users. However, authorization is not enabled by default. To enable authorization, you must specify the following in your config file:

security:
   authorization: enabled

There was a significant change in the authorization model between 2.4 and 2.6. If you are doing an upgrade from 2.4, be sure to read the release notes for all the details. The 2.4 model is no longer supported in MongoDB 3.0. Also, an existing environment may have downtime, because you have to synchronize changing your app to use the MongoDB password with enabling authentication in MongoDB.

For user-level account access, you will have a choice between traditional username and password, LDAP proxy, Kerberos, and X.509. For my isolated infrastructure, I had to choose between X.509 and username/password. Which approach is correct depends on how you interact with the server and how you manage secrets. While I had to use a username and password for the CRITs web application, I wanted to play with X.509 for the local shard admin accounts. The X.509 authentication can only be used with servers that have SSL enabled. While it is not strictly necessary to have local shard admin accounts, the documentation suggested that they would eventually be needed for maintenance. From the admin database, X.509 users can be added to the $external database using the following command:

   db.getSiblingDB("$external").runCommand(
      {
         createUser: "DC=org,DC=example, CN=clusterAdmin,OU=My Group,O=My Company,ST=California,C=US",
         roles: [
            { role: 'clusterAdmin', db: 'admin' }
         ]
      }
   )

The createUser field contains the subject from the client certificate for the cluster admin. Once added, the command line for a connection as the clusterAdmin would look like this:

       mongo --ssl --sslCAFile root_CA_public.pem --sslPEMKeyFile ./clusterAdmin.pem mongo_shard1:27018/admin

Although you provided the key in the command line, you still need to run the auth command that corresponds to the clusterAdmin.pem certificate in order to convert to that role:

   db.getSiblingDB("$external").auth(
      {
         mechanism: "MONGODB-X509",
         user: "DC=org,DC=example, CN=clusterAdmin,OU=My Group,O=My Company,ST=California,C=US"
      }
   );

The localhost exception allows you to create the first user administrator in the admin database when authorization is enabled. However, once you have created the first admin account, you should remember to disable the exception by specifying:

setParameter:
   enableLocalhostAuthBypass: false

Once you have the admin accounts created, you can create the application roles against the application database with more restricted privileges:

   db.createUser(
      {
         user: "crits_app_user",
         pwd: "My$ecur3AppPassw0rd",
         roles: [
            { role: "readWrite", db: "crits" }
         ],
         writeConcern: { w: "majority", wtimeout: 5000 }
      }
   )

At this stage, there are still other security options worth reviewing. For instance, there are some SSL settings I didn’t cover because they already default to the secure setting. If you are migrating from an older database, then you will want to check the additional settings, since some behavior may change. Hopefully, this post will help you to get started with secure communication, authentication, and authorization aspects of MongoDB 3.0.

 

Peleus Uhley
Lead Security Strategist

SAFECode Goes to Washington

On a recent trip to Washington, DC, I had the opportunity to participate in a series of meetings with policymakers on Capitol Hill and in the Administration to discuss SAFECode’s  (Software Assurance Forum for Excellence in Code) role in and commitment to improving software security.  If you’re not familiar with SAFECode, I encourage you to visit the SAFECode website to learn more about the organization. At a high level, SAFECode advances effective software assurance methods, and identifies and promotes best practices for developing and delivering more secure and reliable software, hardware, and services in an industry-led effort.

The visit to DC was set up to promote some of the work being done across our industry to analyze, apply, and promote the best mix of software assurance technology, process, and training. Along with some of my colleagues from EMC and CA Technologies, I spent the beginning of the trip at the Software and Supply Chain Assurance Working Group, where we presented on the topic of software assurance assessment. The premise of our presentation was that there is no one-size-fits-all approach to software assurance, and that a focus on the supplier’s software assurance process is the right way to assess the maturity of an organization when it comes to software security.

One of the other important aspects we discussed with policymakers was SAFECode’s role in promoting the need for security education and training for developers. We are considering ways to support the expansion of software security education in university programs and plan to add new offerings to the SAFECode Security Engineering training curriculum, a free program aimed at helping those looking to create an in-house training program for their product development teams as well as individuals interested in enhancing their skills.

Overall, this was a very productive trip, and we look forward to working with policymakers as they tackle some of the toughest software security issues we are facing today.

 
David Lenoe
Director of Adobe Secure Software Engineering
SAFECode Board Member

Updated Security Information for Adobe Creative Cloud

As part of our major release of Creative Cloud on June 16th, 2015, we released an updated version of our security white paper for Adobe Creative Cloud for enterprise. In addition, we released a new white paper about the security architecture and capabilities of Adobe Business Catalyst. This updated information is useful in helping I.T. security professionals evaluate the security posture of our Creative Cloud offerings.

Adobe Creative Cloud for enterprise gives large organizations access to Adobe’s creative desktop and mobile applications and services, workgroup collaboration, and license management tools. It also includes flexible deployment, identity management options including Federated ID with Single Sign-On, annual license true-ups, and enterprise-level customer support — and it works with other Adobe enterprise offerings. This version of the white paper includes updated information about:

  • Various enterprise storage options now available, including updated information about geolocation of shared storage data
  • Enhancements to entitlement and identity management services
  • Enhancements to password management
  • Security architecture of shared services and the new enterprise managed services

Adobe Business Catalyst is an all-in-one business website and online marketing solution providing an integrated platform for Content Management (CMS), Customer Relationship Management (CRM), E-Mail Marketing, E-Commerce, and Analytics. The security white paper now available includes information about:

  • Overall architecture of Business Catalyst
  • PCI/DSS compliance information
  • Authentication and services
  • Ongoing risk management for the Business Catalyst application and infrastructure

Both white papers are available for download on the Adobe Security resources page on adobe.com.

 

Chris Parkerson
Sr. Marketing Strategy Manager