Meeting Compliance Challenges with Adobe CCF

The Adobe Common Controls Framework (CCF) provides clear guidance to all of our product and services teams on how to secure our infrastructure and applications. We analyzed the criteria for the most common security certifications and found significant overlap. As a result, we were able to rationalize over 1,000 requirements from relevant cloud security frameworks and standards down to about 200 Adobe-specific controls. Control owners know exactly what is required to meet the expectations of stakeholders and customers when implementing those controls. The framework also supports more efficient implementation by allowing teams to inherit control capabilities as they are completed throughout the organization.

Watch as Abhi Pandit, our Senior Director for Governance, Risk, and Compliance (GRC), walks through the Adobe CCF, explains how it helps us meet the compliance challenges of adhering to multiple standards and regulations, and describes how you can use a framework like CCF in your organization to address your own compliance challenges. You can learn more about CCF and Adobe’s progress in meeting various standards and regulations across our product lines in our white paper.

Security Collaboration at Adobe

At Adobe we recognize that our customers benefit when we take a collaborative approach to vulnerability disclosure.  We pride ourselves on the symbiotic relationship we’ve cultivated with the security community and continue to value the contributions that security researchers of all stripes make to hardening our software.

As a measure of the value we place in external code reviews and security testing, Adobe interfaces with the security community through a spectrum of engagement models, including (but not limited to):

  • Traditional third-party code reviews and pen-tests
  • Crowd-sourced pen-tests
  • Voluntary disclosures to our Product Security Incident Response Team (PSIRT)
  • Submissions to our web application disclosure program on HackerOne

Code reviews and pen-tests

Before Adobe introduces a major upgrade or new product, feature or online service offering, a code review and pen-test is often performed by an external security company.  These traditional third-party reviews provide a layer of assurance to complement our internal security assessments and static code analysis that are part of our Secure Product Lifecycle (SPLC).

Crowd-sourced pen-tests

To benefit from a larger pool of security researchers, Adobe also uses crowd-sourced pen-tests in tightly scoped, time-bound engagements involving an elite group of pen-testers targeting a single service offering or web application. This approach has helped supplement traditional pen-tests against our online services by increasing code coverage and broadening testing techniques.

Disclosures to PSIRT

The Product Security Incident Response Team (PSIRT) is responsible for Adobe’s vulnerability disclosure program, and typically responds first to the security community’s submissions of vulnerabilities affecting an Adobe product, online service or web property.  In addition to its role as conduit with external researchers, PSIRT partners with both internal and external stakeholders to ensure vulnerabilities are handled in a manner that both minimizes risk to customers and encourages researchers to disclose in a coordinated fashion.

Disclosures via HackerOne

In March 2015, Adobe launched its web application vulnerability disclosure program on HackerOne.  This platform offers researchers the opportunity to build a reputation and learn from others in the community, while allowing vendors to streamline workflows and scale resources more effectively.

As new bug hunting and reporting platforms enable part-time hobbyists to become full-time freelance researchers, we look forward to continuing a constructive collaboration with an ever-widening pool of security experts.

 

Pieter Ockers
PSIRT Security Program Manager

Disha Agarwal
Product Security Manager

Join Us at these Upcoming Security Events


On September 24 – 25, 2015, at the Hyatt Regency San Francisco, meet members of the Adobe security team at AppSec USA 2015, presented by the Open Web Application Security Project (OWASP). Rohit Pitke, one of our security engineers, will be speaking on “Continuous Cloud Security Automation” from 3 – 4 p.m. on Thursday, September 24. Our team will be in the primary booth area near the conference track rooms with information about our key security initiatives. Stop by our booth to pick up several of our recent blog posts, informative brochures, and cool giveaways.

We are also sponsoring the upcoming Privacy.Security.Risk 2015 conference, presented by the International Association of Privacy Professionals (IAPP) and the Cloud Security Alliance (CSA), September 30 – October 1 at the Bellagio in Las Vegas. Our CSO Brad Arkin will be speaking in one of the breakout sessions on October 1 from 2:30 to 3:30 p.m. Make sure to join us for his informative talk.

In addition, Adobe is sponsoring the upcoming Information Security Executives (ISE) Northeast event at the Westin Times Square in New York City on October 8th. Members of our security team will be there to answer any questions you have about the overall security of our offerings and our efforts to meet important industry and regulatory standards. We will have information and brochures at our booth and will also be giving away an Xbox One game console during the final prize drawing at the end of the evening.

We hope to see you at these upcoming events.

Recap: BlackHat 2015 and r00tz@DefCon 2015

This year Adobe security team members were out in force attending BlackHat 2015 and – new this year – helping inspire the next generation of security professionals at the r00tz @ DefCon conference for kids. Adobe was a major sponsor of the r00tz conference this year, helping to set up and run a 3D printing workshop and hackfest for the young attendees.

BlackHat brings together the top experts in the security field to discuss and expose current issues in the information security industry. While there were a variety of talks covering a wide breadth of topics, here are some talks that stood out to us during our time there.

In the discussion panel “Is the NSA Still Listening to Your Phone Calls?” Mark Jaycox from the Electronic Frontier Foundation (EFF) and Jamil Jaffer, former member of the House Permanent Select Committee on Intelligence (HPSCI), talked about government surveillance and the tradeoffs between keeping our privacy and using surveillance to defend against current threats. It was interesting to see two people on opposite sides of the spectrum openly discussing this complex issue. By listening to the two parties discuss their points, I was able to walk away with a more informed opinion on the current state of government surveillance in the country today.

James Kettle from PortSwigger talked about server-side template injection and showed techniques to identify and exploit it in popular template engines such as FreeMarker, Velocity, Twig, and Jade. This vulnerability occurs when users are allowed to edit templates or when untrusted user input is embedded in a template. It was interesting to see how this vulnerability can be used to directly attack web servers and achieve remote code execution, rather than just cross-site scripting. The talk raised awareness of the damage one can do if an application is vulnerable to template injection. Our researchers and security champions will be able to apply the information from this talk to identify and mitigate template injection in Adobe products.
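The engines named in the talk are Java and JavaScript templating systems, but the underlying pattern can be sketched in a few lines of pure Python (an illustrative stand-in, not an example from the talk): when untrusted input becomes the template itself, rather than data rendered into a fixed template, the attacker gains access to the engine’s evaluation features.

```python
# Illustrative sketch of the template-injection pattern, using Python
# format strings as a stand-in for a real template engine.

class Account:
    def __init__(self, owner):
        self.owner = owner

def render_greeting(template, account):
    # Vulnerable pattern: the untrusted string IS the template, so any
    # syntax the attacker writes is interpreted by the engine.
    return template.format(account)

def render_greeting_safe(account):
    # Safe pattern: the template is fixed; user data is only a value.
    return "Hello, {}!".format(account.owner)

acct = Account("alice")
print(render_greeting("Hello, {0.owner}!", acct))       # intended use
print(render_greeting("{0.__class__.__name__}", acct))  # attacker walks object internals
```

In a real template engine, the same foothold can reach far more than a class name, which is how the talk escalated template injection all the way to remote code execution.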

In the talk “The Node.js Highway: Attacks are at Full Throttle,” Maty Siman and Amit Ashbel discussed security issues and demonstrated new attack techniques against Node.js applications. The attack on the pseudo-random number generator in Node.js, which allows an attacker to predict the next number given three consecutive numbers, was quite interesting. It means an application that generates passwords using the PRNG might reveal the passwords of all of its users. The talk educated our researchers and security champions on new vulnerabilities to look for while reviewing a Node.js application.
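The defensive lesson translates to any platform: passwords and tokens must come from a cryptographically secure source, not a general-purpose PRNG whose future output can be predicted from a few observed values. A minimal sketch of the distinction, in Python rather than Node.js purely for illustration:

```python
import random
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def insecure_token(length=16):
    # random uses a Mersenne Twister: an attacker who observes enough
    # outputs can reconstruct its state and predict future tokens.
    return "".join(random.choice(ALPHABET) for _ in range(length))

def secure_token(length=16):
    # secrets draws from the OS CSPRNG, so outputs are not predictable
    # from previously observed outputs.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(insecure_token())  # fine for simulations, never for credentials
print(secure_token())    # suitable for passwords, session IDs, reset links
```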

In addition to all of the great learnings and networking at BlackHat 2015, many from our team stayed around after BlackHat to attend DefCon and help out at the r00tz @ DefCon conference for kids. This was Adobe’s first year sponsoring the r00tz conference. With the help of our awesome Photoshop engineering teams, we were able to get kid-ready workstations set up with our creative tools and hooked up to cool MakerBot 3D printers. It was a lot of fun helping kids take all of the ideas in their heads and translate them into physical objects they could then take home with them – with, of course, a lot of hacking involved to get many of the ideas to work. In addition to our 3D printing workshop, there were other exercises including a capture the flag contest and robot building contest. It was very rewarding for all of us to sponsor and be a part of inspiring these kids to pursue careers in technology.

 

Tim Fleer
Security Program Manager

Karthik Thotta Ganesh
Web Security Researcher

Top 5 Things You Should Know About FedRAMP and Adobe’s Cloud Services for Government

In July, Adobe received FedRAMP Authorization for its Cloud Services for Government, covering Adobe Experience Manager Managed Services and Adobe Connect Managed Services. The Department of Health and Human Services (HHS) granted Adobe an Authority to Operate (ATO) for these specific cloud services run by Adobe Managed Services. Most importantly, this ATO can be leveraged government-wide, decreasing the time and cost for other agencies and organizations as they adopt Adobe’s technology. So what exactly does this mean and why is it important? Here are the top 5 things you need to know:

1. What is FedRAMP?
The Federal Risk and Authorization Management Program (FedRAMP) provides a cost-effective, risk-based approach for the adoption and use of cloud services. It is a collaboration among the Department of Homeland Security (DHS), the Department of Defense (DoD), and the General Services Administration (GSA), along with other working groups, to assist agencies in meeting FISMA requirements for cloud systems. It provides a single, standard approach to security assessment, authorization, and monitoring of cloud services.

2. Why Should I Care About FedRAMP?
According to the official FedRAMP site, FedRAMP is based upon the same set of security controls as documented in the Federal Information Security Management Act (FISMA) of 2002. These controls are outlined by the National Institute of Standards and Technology (NIST 800-53). Where FISMA exists as the approval process for on-premise programs, FedRAMP exists as the equivalent for cloud solutions. With recent legislation, all agencies seeking to use cloud services can only implement ones that are FedRAMP authorized. More information about FedRAMP can be found here.

3. What does Adobe offer?
Adobe is the first FedRAMP cloud service provider (CSP) to deliver this combination of solutions:
• Web Content Management (WCM)
• Electronic Forms with eSignatures
• Document Rights Management (DRM)
• Web-conferencing
• E-Learning (LMS)

These FedRAMP-authorized solutions are built on Adobe products, run by Adobe Managed Services, in a specific region of the Amazon Web Services infrastructure:
i. Adobe Experience Manager Managed Services on Amazon GovCloud
ii. Adobe Connect Managed Services on Amazon GovCloud

4. What’s the big deal about FedRAMP Authorization?
An agency-authorized Authority to Operate (ATO) is the FedRAMP stamp of approval for federal agencies. It allows government entities (as well as commercial organizations) to more easily adopt Adobe’s FedRAMP-authorized cloud solutions. Approval from one agency means approval for all agencies at the federal level – making an ATO extremely valuable for cloud service providers (CSPs).

Adobe partnered with the Department of Health and Human Services (HHS) to determine that Adobe’s approved cloud services comply with FedRAMP requirements. In working through the FedRAMP Security Assessment Framework (SAF), Adobe’s approved cloud services were first examined against FedRAMP standards and reviewed to ensure that the solutions were properly documented. They were then evaluated by the Veris Group, a third-party assessment organization (3PAO), to make sure the software performs as documented, and had to pass 328 separate security controls in order to become FedRAMP authorized. The approval process is very intensive and takes anywhere from one to three years to complete. Accordingly, Adobe’s investment is significant and further demonstrates how Adobe stays ahead of the curve in terms of security and compliance.

5. Benefits of FedRAMP Certification for Cloud-Based Solutions
In 2011 the U.S. Federal Government released the Federal Cloud Computing Strategy, which instituted a “Cloud First” policy emphasizing cloud services by requiring agencies to adopt a cloud solution if one exists. This strategy was driven by three main benefits of cloud services: fast deployment, minimal on-premise upkeep, and a constant stream of updates.
• Fast deployment – Hosted cloud solutions are typically already ‘up and running’ compared to on-premise solutions, which can take months to implement. The beautiful part of the cloud is its scalability – it can grow or shrink to suit the demands of the enterprise.
• Minimal on-premise upkeep – With on-premise solutions, the security staff of individual agencies must spend a lot of time setting up servers, installing software, managing patches and updates, performing backups, and troubleshooting problems. With cloud solutions, there typically aren’t on-site servers; software installation, patching, and backups are the responsibility of the cloud service provider. This saves federal agencies time and money and allows the agency’s security team to focus on its core job.
• Always the newest version – Cloud solutions are constantly updated to provide new features or services and keep up to date with the changing security landscape. Cloud service providers also learn from implementing the software for one agency and use those lessons to improve the product for other customers. These learnings help ensure that our customers are getting a secure and high-quality service.

The US Government has clearly identified cloud solutions as the way of the future. With its recent FedRAMP authorization, Adobe aims to cement its position as a leader in public-sector cloud solutions with its unique cloud service offerings.

You can learn more about how FedRAMP – and Adobe solutions – are helping to bring about the “consumerization of Government” in my other recent blog.

John Landwehr
Vice President & Public Sector CTO

Adobe Security Team @ BlackHat 2015

I am headed to BlackHat 2015 in Las Vegas this week with members of our Adobe product security teams. We are looking forward to connecting with the security community throughout the week. We also hope to meet up with some of you at the parties, at the craps tables, or just mingling outside the session rooms during the week. Make sure to follow our team on Twitter @AdobeSecurity. Feel free to follow me as well @BradArkin. We’ll be tweeting about our observations and happenings during the week. Look for the hashtag #AdobeBH2015.

This year we are also proud to sponsor the r00tz Kids Conference @ DefCon. Members of our teams will be helping out with 3D printing and other workshops during this great conference for future security pros. We hope you are able to bring your kids along to join our team at this fun event as part of your DefCon experience.

We are looking forward to a great week in Vegas.

Brad Arkin
VP and Chief Security Officer

Why Moms Can Be Great at Computer Security

As a new mom, I’ve come to a few realizations as to why I think moms can be really innovative and outright great when it comes to solving problems in computer security. I realize these anecdotes and experiences can apply to any parent, so please take this as purely from my personal “mom” perspective. This is not to say moms are necessarily better (my biases aside), but, I do think there are some skills we learn on-the-fly as new mothers that can become invaluable in our security careers. And vice-versa – there are many skills I’ve picked up throughout my security career that have come in really handy as a new mom. Here are my thoughts on some of the key areas where I think these paths overlap:

  • We are ok with not being popular. Any parent who has had to tell their kid “no,” ground them, or otherwise “ruin their lives” knows that standing firm in what is right is sometimes not easy – but, it is part of the job. Security is not all that different. We often tell people that taking unsafe shortcuts or not building products and services with security in mind will not happen on our watch. From time to time, product teams are mad when we have to go over their heads to make sure key things like SSL are enabled by default as a requirement for launching a new service. In incident response, for example, we sometimes have to make hard decisions like taking a service offline until the risk can be mitigated. And we are ok with doing all of this because we know it is the right thing to do. However, when we do it, we are kind but firm – and, as a result, we are not always the most liked person in a meeting, and we’re very OK with that.
  • We can more easily juggle multiple tasks and priorities. My primary focus has always been incident response, but it was not until I had a child that I realized how well my job prepared me for parenthood. A security incident usually has many moving pieces at once – investigate, confirm, mitigate, update execs, and a host of other things – and they all need to be done right now. Parents are often driving carpools while eating breakfast, changing diapers on a conference call while petting the dog with a spare foot (you know this is not an exaggeration), and running through Costco while going through math flash cards with our daughters. At the end of each workday, we have to prioritize dinner, chores, after-school activities, and bedtime routines. It all seems overwhelming. But, in a matter of minutes, a plan has formed and we are off to the races! We delegate, we make lists, and somehow it all gets done. Just like we must do with our security incident response activities.
  • We trust but verify. This is an actual conversation:

Mom: Did you brush your teeth?
Kid: Yes
Mom (knowing the kid has not been in the bathroom in hours): Are you sure? Let me smell your breath
Kid: Ugggghhhh… I’ll go brush them now…

I hear a similar conversation over and over in my head in security meeting after meeting. It usually is something like this:

Engineer: I have completed all the action items you laid out in our security review
Mom (knowing that the review was yesterday and it will take about 10 hours of engineering work to complete): Are you sure? Let’s look at how you implemented “X.”
Engineer: Oh, I meant most of the items are done
Mom: It is great you are starting on these so quickly. Please let me know when they are done.

Unfortunately, this does indeed happen sometimes – which is why I must be such a staunch guardian. Security can take time and is sometimes not as interesting as coding a new feature. So, like a kid who would rather watch TV than brush his teeth because brushing doesn’t seem like a big deal, we have to gently nudge and we have to verify.

  • We are masters at seeing hidden dangers and potential pitfalls. When a baby learns to roll, crawl, and walk, moms are encouraged to get down at “baby level” to see and anticipate potentially dangerous situations. Outlet covers are put on, dangerous chemical cleaners no longer live under the sink, and bookcases are mounted to the walls. As kids get older, the dangers we see are different, but we never stop seeing them. Some of this is just “mom worry” – and we have to keep it in check to avoid becoming dreaded “helicopter parents.” However, we are conditioned to see a few steps ahead and we learn to think about the worst case scenario. Seeing worst case scenarios and thinking like an attacker are two things that make security professionals good at their jobs. Many are seen as paranoid, and, quite frankly, that paranoia is not all that dissimilar to “mom worry.” Survival of the species has relied on protection of our young, and although a new release of software is not exactly a baby, you can’t turn off that protective instinct.

The similarities between work and parenthood really surprised me. Being a parent and being a security professional sound so dissimilar on the surface, but it is amazing how the two feed each other – and how my growth in one area has helped my growth in the other. It also shows how varying backgrounds can be your path to a successful security career.

 

Lindsey Wegrzyn Rush
Sr. Manager, Security Coordination Center

Securely Deploying MongoDB 3.0

I recently needed to set up an advanced, sharded MongoDB 3.0 database with all the best practices enabled for a deployment of the CRITs web application. This was an opportunity for me to get firsthand experience with the security guidance that I recommend to other Adobe teams. This post covers some of the lessons I learned along the way. It isn’t a replacement for reading the documentation. Rather, it is a story to bookmark for when one of your teams is ready to deploy a secure MongoDB 3.0 instance and is looking for real-world examples to supplement the documentation.

MongoDB provides a ton of security documentation and tutorials, which are invaluable. It is highly recommended that you read them thoroughly before you begin, since they capture a lot of important details. In particular, the tutorials often contain details that aren’t covered in the core documentation for a specific feature.

If you are migrating from an older version, you’ll quickly find that MongoDB has been very active in improving the security of its software. The challenge is that some of your previous work may now be deprecated. For instance, the password hashing functions have migrated from MONGODB-CR to SCRAM-SHA-1. The configuration file format switched in version 2.6 from name-value pairs to YAML. Oddly, when I downloaded the most recent version of MongoDB, it came with the name-value pair version by default. While name-value pairs are still supported, I decided to create the new YAML version from scratch to avoid a migration later. In addition, keyfile authorization between cluster servers has been replaced with X.509 certificates. These improvements are all things you will want to track when migrating from an older version of MongoDB.

In prepping for the deployment, there are a few things you will want to do:

  • Get a virtual notepad. A lot of MongoDB commands are lengthy to type, and you will end up pasting them more than once.
  • After reading the documentation and coming up with a plan for the certificate architecture, create a script for generating certificates. You will end up generating one to two certificates per server.
  • Anytime you deploy a certificate system, you should have a plan for certificate maintenance such as certificate expirations.
  • The system is dependent on a solid base. Make sure you have basic sysadmin tasks done first, such as using NTP to ensure hosts have consistent clocks for timestamps.
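For the certificate-expiration point above, even a small helper run from cron goes a long way. A sketch in Python using only the standard library (the date string format is the one returned by ssl.getpeercert()['notAfter']; the 30-day threshold is an arbitrary assumption):

```python
import ssl
import time

def days_until_expiry(not_after, now=None):
    """not_after is a certificate date string such as
    'Feb 12 09:34:43 2016 GMT', as returned by ssl.getpeercert()."""
    expires = ssl.cert_time_to_seconds(not_after)
    now = time.time() if now is None else now
    return (expires - now) / 86400.0

def needs_renewal(not_after, threshold_days=30, now=None):
    # Flag certificates that expire within the threshold window.
    return days_until_expiry(not_after, now) < threshold_days
```

Feeding this from a periodic sweep of each cluster member’s certificate gives early warning before an expired certificate breaks inter-node authentication.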

If you are starting from scratch, I would recommend getting MongoDB cluster connectivity established, followed by layering on security. At a minimum, establish basic connectivity between the shards. If you try to do security and a fresh install at the same time, you may have a harder time debugging.

Enabling basic SSL between hosts

I have noticed confusion over which versions of MongoDB support SSL, since support has changed over time and there were differences between the standard and enterprise versions. Some package repositories for open-source OSs are hosting older versions of MongoDB. The current MongoDB 3.0 page says, “New in version 3.0: Most MongoDB distributions now include support for SSL.” Since I wasn’t sure what “most” meant, I downloaded the standard Ubuntu version (not enterprise) from the MongoDB hosted repository, as described here: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/. That version did support SSL out of the box.

MongoDB has several levels of SSL settings, including disabled, allowSSL, preferSSL, and requireSSL. These can be useful if you are slowly migrating a system, are learning the command line, or have different needs for different roles. For instance, you may specify requireSSL for your shards and config servers to ensure secure inter-MongoDB communication. For your MongoDB router instance, you may choose a setting of preferSSL to allow legacy web applications to connect without SSL, while still maintaining secure inter-cluster communication.
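As a concrete sketch of that split, a hypothetical mongos router configuration might look like the following (file names are placeholders):

```yaml
# Hypothetical mongos configuration: legacy clients may connect without
# SSL, while the router still uses SSL toward shards and config servers.
net:
   ssl:
      mode: preferSSL
      CAFile: "root_CA_public.pem"
      PEMKeyFile: "mongo-router-serverAuth.pem"
```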

If you plan to also use X.509 for cluster authorization, you should consider whether you will also be using cluster authentication and, if so, whether you want to specify a separate certificate for clusterAuth. If you go with separate certificates, you will want to set the serverAuth Extended Key Usage (EKU) attribute on the SSL certificate and create a separate clientAuth certificate for cluster authorization. A final SSL configuration would look like this:

net:
   ssl:
      CAFile: "root_CA_public.pem"
      mode: requireSSL
      PEMKeyFile: "mongo-shard1-serverAuth.pem"
      PEMKeyPassword: YourPasswordHereIfNecessary

Enabling authentication between servers

The inter-cluster authentication method changed in version 2.6 from using keyfiles to leveraging X.509 certificates. Keyfile authentication was just a shared secret, whereas X.509 verifies approval from a known CA. To ease migration from older implementations, MongoDB lets you start at keyFile, then move through the hybrid modes sendKeyFile and sendX509, before finally ending at the X.509-only setting, x509. If you have not already enabled keyfiles in an existing MongoDB deployment, then you may need to take your shard offline in order to enable them. If you are using a separate certificate for X.509 authentication, then you will want to set the clientAuth EKU in that certificate.
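During such a rolling migration, a transitional configuration might carry both mechanisms at once. A hypothetical sketch (file paths are placeholders):

```yaml
# Hypothetical transitional config while stepping the cluster through
# keyFile -> sendKeyFile -> sendX509 -> x509.
security:
   keyFile: "/etc/mongodb/legacy.key"   # still honored by older members
   clusterAuthMode: sendX509            # send X.509, accept either
net:
   ssl:
      clusterFile: "/etc/mongodb/mongo-shard1-clientAuth.pem"
```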

The certificates used for inter-cluster authentication must have their X.509 subjects (O, OU, DC, etc.) set exactly the same, except for the hostname in the CN. The CN (or a Subject Alternative Name) must match the hostname of the server. If you want the flexibility to move shards to new instances without reissuing certificates, you may want a secondary DNS infrastructure that lets you remap static hostnames to different instances. When a cluster node successfully authenticates to another cluster node, it gets admin privileges for the instance. The following settings will enable cluster authentication:

net:
   ssl:
      CAFile: "/etc/mongodb/rootCA.pem"
      clusterFile: "mongo-shard1-clientAuth.pem"
      clusterPassword: YourClusterFilePEMPasswordHere
      CRLFile: "YourCRLFileIfNecessary.pem"

security:
   clusterAuthMode: x509
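The “identical subject except for CN” rule is easy to get wrong when generating certificates by script, so it is worth sanity-checking before deployment. An illustrative Python helper (naive DN parsing; a real implementation would handle escaping and multi-valued RDNs):

```python
def parse_subject(subject):
    # Split a flat subject string such as
    # "DC=org,DC=example,CN=mongo-shard1,OU=My Group,O=My Company,ST=California,C=US"
    # into (type, value) pairs. Naive: ignores escaped commas.
    return [tuple(part.strip().split("=", 1)) for part in subject.split(",")]

def cluster_compatible(subject_a, subject_b):
    """True if the two subjects match on every component except CN."""
    strip_cn = lambda rdns: [rdn for rdn in rdns if rdn[0] != "CN"]
    return strip_cn(parse_subject(subject_a)) == strip_cn(parse_subject(subject_b))
```

Running cluster_compatible over every pair of member certificates before startup catches subject mismatches that would otherwise surface as opaque authentication failures.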

Client authentication and authorization

MongoDB authorization supports a set of built-in roles as well as user-defined roles for those who want to split authorization levels across multiple users. However, authorization is not enabled by default. To enable it, you must specify the following in your config file:

security:
   authorization: enabled

The authorization model changed significantly between versions 2.4 and 2.6. If you are upgrading from 2.4, be sure to read the release notes for all the details, because the 2.4 model is no longer supported in MongoDB 3.0. Also, an existing environment may see downtime, because switching your app to use the MongoDB password must be synchronized with enabling authentication in MongoDB.

For user-level account access, you will have a choice among traditional username and password, LDAP proxy, Kerberos, and X.509. For my isolated infrastructure, I had to choose between X.509 and username/password. Which approach is correct depends on how you interact with the server and how you manage secrets. While I had to use a username and password for the CRITs web application, I wanted to try X.509 for the local shard admin accounts. X.509 authentication can only be used with servers that have SSL enabled. While it is not strictly necessary to have local shard admin accounts, the documentation suggests they will eventually be needed for maintenance. From the admin database, X.509 users can be added to the $external database using the following command:

   db.getSiblingDB("$external").runCommand(
      {
         createUser: "DC=org,DC=example,CN=clusterAdmin,OU=My Group,O=My Company,ST=California,C=US",
         roles: [
            { role: "clusterAdmin", db: "admin" }
         ]
      }
   )

The createUser field contains the subject from the client certificate for the cluster admin. Once added, the command line for a connection as the clusterAdmin would look like this:

       mongo --ssl --sslCAFile root_CA_public.pem --sslPEMKeyFile ./clusterAdmin.pem mongo_shard1:27018/admin

Although you provided the key on the command line, you still need to run the auth command corresponding to the clusterAdmin.pem certificate in order to assume that role:

   db.getSiblingDB("$external").auth(
      {
         mechanism: "MONGODB-X509",
         user: "DC=org,DC=example,CN=clusterAdmin,OU=My Group,O=My Company,ST=California,C=US"
      }
   );

The localhost exception allows you to create the first user administrator in the admin database when authorization is enabled. However, once you have created that first admin account, you should remember to disable the bypass by specifying:

setParameter:
   enableLocalhostAuthBypass: false

Once you have the admin accounts created, you can create the application roles against the application database with more restricted privileges:

   db.createUser(
      {
         user: "crits_app_user",
         pwd: "My$ecur3AppPassw0rd",
         roles: [
            { role: "readWrite", db: "crits" }
         ],
         writeConcern: { w: "majority", wtimeout: 5000 }
      }
   )

At this stage, there are still other security options worth reviewing. For instance, there are some SSL settings I didn’t cover because they already default to the secure setting. If you are migrating from an older database, you will want to check the additional settings, since some behavior may have changed. Hopefully, this post helps you get started with the secure communication, authentication, and authorization aspects of MongoDB 3.0.

 

Peleus Uhley
Lead Security Strategist

SAFECode Goes to Washington

On a recent trip to Washington, DC, I had the opportunity to participate in a series of meetings with policymakers on Capitol Hill and in the Administration to discuss SAFECode’s (Software Assurance Forum for Excellence in Code) role in and commitment to improving software security.  If you’re not familiar with SAFECode, I encourage you to visit the SAFECode website to learn more about the organization. At a high level, SAFECode advances effective software assurance methods, and identifies and promotes best practices for developing and delivering more secure and reliable software, hardware, and services in an industry-led effort.

The visit to DC was set up to promote some of the work being done across our industry to analyze, apply, and promote the best mix of software assurance technology, process, and training. Along with some of my colleagues from EMC and CA Technologies, we spent the beginning of the trip at the Software and Supply Chain Assurance Working Group, where we presented on the topic of software assurance assessment. The premise of our presentation was that there is no one-size-fits-all approach to software assurance, and that a focus on the supplier’s software assurance process is the right way to assess the maturity of an organization when it comes to software security.

One of the other important aspects we discussed with policymakers was SAFECode’s role in promoting the need for security education and training for developers. We are considering ways to support the expansion of software security education in university programs and plan to add new offerings to the SAFECode Security Engineering training curriculum, a free program aimed at helping those looking to create an in-house training program for their product development teams as well as individuals interested in enhancing their skills.

Overall, this was a very productive trip, and we look forward to working with policymakers as they tackle some of the toughest software security issues we are facing today.

 
David Lenoe, Director of Adobe Secure Software Engineering
SAFECode Board Member

Updated Security Information for Adobe Creative Cloud

As part of our major release of Creative Cloud on June 16th, 2015, we released an updated version of our security white paper for Adobe Creative Cloud for enterprise. In addition, we released a new white paper about the security architecture and capabilities of Adobe Business Catalyst. This updated information helps IT security professionals evaluate the security posture of our Creative Cloud offerings.

Adobe Creative Cloud for enterprise gives large organizations access to Adobe’s creative desktop and mobile applications and services, workgroup collaboration, and license management tools. It also includes flexible deployment, identity management options including Federated ID with Single Sign-On, annual license true-ups, and enterprise-level customer support — and it works with other Adobe enterprise offerings. This version of the white paper includes updated information about:

  • Various enterprise storage options now available, including updated information about geolocation of shared storage data
  • Enhancements to entitlement and identity management services
  • Enhancements to password management
  • Security architecture of shared services and the new enterprise managed services

Adobe Business Catalyst is an all-in-one business website and online marketing solution providing an integrated platform for Content Management (CMS), Customer Relationship Management (CRM), Email Marketing, eCommerce, and Analytics. The security white paper now available includes information about:

  • Overall architecture of Business Catalyst
  • PCI/DSS compliance information
  • Authentication and services
  • Ongoing risk management for the Business Catalyst application and infrastructure

Both white papers are available for download on the Adobe Security resources page on adobe.com.

 

Chris Parkerson
Sr. Marketing Strategy Manager