
Securely Deploying MongoDB 3.0

I recently needed to set up a sharded MongoDB 3.0 database, with all the best practices enabled, for a deployment of the CRITs web application. This was an opportunity to get first-hand experience with the security guidance I recommend to other Adobe teams. This post covers some of the lessons that I learned along the way. It isn't a replacement for reading the documentation. Rather, it is a story to bookmark for when one of your teams is ready to deploy a secure MongoDB 3.0 instance and is looking for real-world examples to supplement the documentation.

MongoDB provides extensive security documentation and tutorials, which are invaluable. It is highly recommended that you read them thoroughly before you begin, since they capture a lot of important details. In particular, the tutorials often contain details that aren't captured in the core documentation for a specific feature.

If you are migrating from an older version, you'll quickly find that MongoDB has been very active in improving the security of its software. The challenge is that some of your previous work may now be deprecated. For instance, the password hashing mechanism has migrated from MONGODB-CR to SCRAM-SHA-1. The configuration file format switched in version 2.6 from name-value pairs to YAML. Oddly, when I downloaded the most recent version of MongoDB, it came with a name-value pair configuration file by default. While name-value pairs are still supported, I decided to create the new YAML version from scratch to avoid a migration later. In addition, keyfile authorization between cluster servers has been replaced with X.509 certificates. These are all changes you will want to track when migrating from an older version of MongoDB.

In prepping for the deployment, there are a few things you will want to do:

  • Get a virtual notepad. A lot of MongoDB commands are lengthy to type, and you will end up pasting them more than once.
  • After reading the documentation and coming up with a plan for the certificate architecture, create a script for generating certificates. You will end up generating one to two certificates per server.
  • Anytime you deploy a certificate system, you should have a plan for certificate maintenance such as certificate expirations.
  • The system is dependent on a solid base. Make sure you have basic sysadmin tasks done first, such as using NTP to ensure hosts have consistent clocks for timestamps.
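The certificate-generation script mentioned above can be sketched with openssl as follows. This is only an illustration: all filenames, the subject fields, and the hostname are hypothetical placeholders, and a real deployment should protect the root CA key offline.

```shell
#!/bin/sh
# Hypothetical sketch: create a private root CA once, then issue one
# serverAuth (SSL) and one clientAuth (cluster auth) certificate per server.
set -e
HOST="mongo-shard1"   # placeholder hostname; run once per server

# One-time root CA (keep root_CA.key offline in a real deployment).
openssl genrsa -out root_CA.key 2048
openssl req -new -x509 -key root_CA.key -days 3650 \
    -subj "/C=US/ST=California/O=My Company/OU=My Group/CN=My Root CA" \
    -out root_CA_public.pem

# Per-server certificates, one per required EKU.
for eku in serverAuth clientAuth; do
    openssl genrsa -out "${HOST}-${eku}.key" 2048
    openssl req -new -key "${HOST}-${eku}.key" \
        -subj "/C=US/ST=California/O=My Company/OU=My Group/CN=${HOST}" \
        -out "${HOST}-${eku}.csr"
    printf "extendedKeyUsage=%s\n" "$eku" > "${eku}.ext"
    openssl x509 -req -in "${HOST}-${eku}.csr" \
        -CA root_CA_public.pem -CAkey root_CA.key -CAcreateserial \
        -days 365 -extfile "${eku}.ext" -out "${HOST}-${eku}.crt"
    # MongoDB expects the private key and certificate in a single PEM file.
    cat "${HOST}-${eku}.key" "${HOST}-${eku}.crt" > "${HOST}-${eku}.pem"
done
```

Only root_CA_public.pem needs to be distributed to the cluster members; the per-server PEM files stay on their respective hosts.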

If you are starting from scratch, I would recommend getting MongoDB cluster connectivity established, followed by layering on security. At a minimum, establish basic connectivity between the shards. If you try to do security and a fresh install at the same time, you may have a harder time debugging.

Enabling basic SSL between hosts

I have noticed confusion over which versions of MongoDB support SSL, since it has changed over time and there were differences between the standard and enterprise versions. Some package repositories for open-source OSs are hosting older versions of MongoDB. The current MongoDB 3.0 page says, "New in version 3.0: Most MongoDB distributions now include support for SSL." Since I wasn't sure what "most" meant, I downloaded the standard Ubuntu version (not enterprise) from the MongoDB hosted repository, as described here: http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/. That version did support SSL out of the box.

MongoDB has several levels of SSL settings, including disabled, allowSSL, preferSSL, and requireSSL. These can be useful if you are slowly migrating a system, are learning the command line, or have different needs for different roles. For instance, you may specify requireSSL for your shards and config servers to ensure secure inter-MongoDB communication. For your MongoDB router instance, you may choose a setting of preferSSL to allow legacy web applications to connect without SSL, while still maintaining secure inter-cluster communication.
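As a sketch of that split (filenames are placeholders), the shards and config servers would pin requireSSL, while the router relaxes to preferSSL:

```yaml
# mongod (shards and config servers): all connections must use SSL
net:
    ssl:
        mode: requireSSL
        CAFile: "root_CA_public.pem"
        PEMKeyFile: "mongo-shard1-serverAuth.pem"

# mongos (router): SSL is used between cluster members, but non-SSL
# legacy clients are still allowed to connect
net:
    ssl:
        mode: preferSSL
        CAFile: "root_CA_public.pem"
        PEMKeyFile: "mongo-router-serverAuth.pem"
```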

If you plan to also use X.509 for cluster authorization, you should consider whether you want to specify a separate certificate for clusterAuth. If you go with separate certificates, you will want to set the serverAuth Extended Key Usage (EKU) attribute on the SSL certificate and create a separate clientAuth certificate for cluster authorization. A final SSL configuration would be:

net:
    ssl:
        CAFile: "root_CA_public.pem"
        mode: requireSSL
        PEMKeyFile: "mongo-shard1-serverAuth.pem"
        PEMKeyPassword: YourPasswordHereIfNecessary

Enabling authentication between servers

The inter-cluster authentication method changed in version 2.6 from keyfiles to X.509 certificates. Keyfile authentication was just a shared secret, whereas X.509 verifies approval from a known CA. To ease migration from older implementations, MongoDB lets you start at keyFile, then move to hybrid support with sendKeyFile and sendX509, before finally ending at the X.509-only authentication setting: x509. If you have not already enabled keyfiles in an existing MongoDB deployment, you may need to take your shard offline in order to enable it. If you are using a separate certificate for X.509 cluster authentication, you will want to set the clientAuth EKU in that certificate.
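That staged migration corresponds to walking the clusterAuthMode setting through its values, with a rolling restart of every member between steps (a sketch; see the MongoDB upgrade tutorial for the full procedure):

```yaml
security:
    clusterAuthMode: keyFile      # step 1: all members still on keyfiles
# ...rolling restart of every member between each step...
security:
    clusterAuthMode: sendKeyFile  # step 2: send keyfiles, accept either
security:
    clusterAuthMode: sendX509     # step 3: send X.509, still accept keyfiles
security:
    clusterAuthMode: x509         # step 4: X.509 only
```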

The certificates used for inter-cluster authentication must have their X.509 subject attributes (O, OU, etc.) set exactly the same, except for the hostname in the CN. The CN, or a Subject Alternative Name, must match the hostname of the server. If you want the flexibility to move shards to new instances without reissuing certificates, you may want a secondary DNS infrastructure that allows you to remap static hostnames to different instances. When a cluster node successfully authenticates to another cluster node, it gets admin privileges on that instance. The following settings will enable cluster authentication:

net:
    ssl:
        CAFile: "/etc/mongodb/rootCA.pem"
        clusterFile: "mongo-shard1-clientAuth.pem"
        clusterPassword: YourClusterFilePEMPasswordHere
        CRLFile: "YourCRLFileIfNecessary.pem"

security:
    clusterAuthMode: x509
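One way to sanity-check the matching-subject requirement is to strip the CN from each member certificate's subject and diff what remains. A self-contained sketch (it generates two throwaway certificates whose subjects differ only in CN; all names are hypothetical):

```shell
#!/bin/sh
set -e
# Two throwaway self-signed certs that differ only in CN.
for host in mongo-shard1 mongo-shard2; do
    openssl req -x509 -newkey rsa:2048 -nodes -keyout "${host}.key" \
        -subj "/C=US/ST=California/O=My Company/OU=My Group/CN=${host}" \
        -days 1 -out "${host}.crt"
done

# Print each subject, drop the CN component, and compare the remainder.
openssl x509 -in mongo-shard1.crt -noout -subject | sed 's/CN *= *[^,\/]*//' > subj1.txt
openssl x509 -in mongo-shard2.crt -noout -subject | sed 's/CN *= *[^,\/]*//' > subj2.txt

# No diff output means the O/OU/ST/C fields match, as MongoDB requires.
diff subj1.txt subj2.txt && echo "subjects match apart from CN"
```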

Client authentication and authorization

MongoDB authorization supports built-in roles and user-defined roles for those who want to split authorization levels across multiple users. However, authorization is not enabled by default. To enable it, you must specify the following in your config file:

security:
    authorization: enabled

There was a significant change in the authorization model between 2.4 and 2.6. If you are upgrading from 2.4, be sure to read the release notes for all the details; the 2.4 model is no longer supported in MongoDB 3.0. Also, an existing environment may require downtime, because you have to synchronize switching your app to use the MongoDB password with enabling authentication in MongoDB.

For user-level account access, you will have a choice between traditional username and password, LDAP proxy, Kerberos, and X.509. For my isolated infrastructure, I had to choose between X.509 and username/password. Which approach is correct depends on how you interact with the server and how you manage secrets. While I had to use a username and password for the CRITs web application, I wanted to play with X.509 for the local shard admin accounts. The X.509 authentication can only be used with servers that have SSL enabled. While it is not strictly necessary to have local shard admin accounts, the documentation suggested that they would eventually be needed for maintenance. From the admin database, X.509 users can be added to the $external database using the following command:

db.getSiblingDB("$external").runCommand(
    {
        createUser: "DC=org,DC=example,CN=clusterAdmin,OU=My Group,O=My Company,ST=California,C=US",
        roles: [
            { role: "clusterAdmin", db: "admin" }
        ]
    }
)

The createUser field contains the subject from the client certificate for the cluster admin. Once added, the command line for a connection as the clusterAdmin would look like this:

mongo --ssl --sslCAFile root_CA_public.pem --sslPEMKeyFile ./clusterAdmin.pem mongo_shard1:27018/admin

Although you provided the key in the command line, you still need to run the auth command that corresponds to the clusterAdmin.pem certificate in order to convert to that role:

db.getSiblingDB("$external").auth(
    {
        mechanism: "MONGODB-X509",
        user: "DC=org,DC=example,CN=clusterAdmin,OU=My Group,O=My Company,ST=California,C=US"
    }
);

The localhost exception allows you to create the first user administrator in the admin database when authorization is enabled. However, once you have created the first admin account, you should remember to disable it by specifying:

setParameter:
    enableLocalhostAuthBypass: false

Once you have the admin accounts created, you can create the application roles against the application database with more restricted privileges:

db.createUser(
    {
        user: "crits_app_user",
        pwd: "My$ecur3AppPassw0rd",
        roles: [
            { role: "readWrite", db: "crits" }
        ]
    },
    { w: "majority", wtimeout: 5000 }
)

At this stage, there are still other security options worth reviewing. For instance, there are some SSL settings I didn’t cover because they already default to the secure setting. If you are migrating from an older database, then you will want to check the additional settings, since some behavior may change. Hopefully, this post will help you to get started with secure communication, authentication, and authorization aspects of MongoDB 3.0.

 

Peleus Uhley
Lead Security Strategist

Digital Signatures with PIV and PIV-I Credentials

In response to Homeland Security Presidential Directive (HSPD) 12, NIST created a program for improving the identification and authentication of Federal employees and contractors to Federal facilities and information systems.  This program is Federal Information Processing Standard (FIPS) 201, entitled Personal Identity Verification (PIV) of Federal Employees and Contractors, which as of September 2011 had issued over 5 million credentials.  PIV-I expands the interoperable secure PKI credentialing to Non-Federal Issuers (NFI) so that other organizations seeking identity federation can include their own employees.  Currently approved PIV-I providers include DigiCert, Entrust, Operational Research Consultants, VeriSign/Symantec, and Verizon Business.  The CertiPath bridge also supports PIV-I credential providers such as Citi and HID.

If you have a PIV or PIV-I card and are interested in digitally signing documents for consent/approval signatures or certified publishing, Adobe Acrobat and Adobe Reader will automatically validate digital signatures via the US Federal Common Policy. Through the Adobe Approved Trust List (AATL) program, the following trust anchors are included in version 9 and higher:

  • Common Policy — 2010 expiry — Common Hardware, Common High, Medium HW CBP
  • Common Policy — 2027 expiry — Common Hardware, Common High, Medium HW CBP
  • Federal Common Policy CA — 2030 expiry — Common Hardware, Common High, Medium HW CBP, SHA1 Hardware
To have the digital signature automatically validate for any recipient, whether or not they have a PIV/PIV-I credential, the signer's system must build a complete certificate chain for path validation to one of the supported trust anchors. If the signer's system only has the signer's certificate, the signature will not validate for anyone else automatically. To make this easier, store all of the issuing certificate authority public key certificates on the smartcard, available to the OS and applications. That way the card is truly portable and can sign documents on any system. Otherwise, the system administrator will need to ensure all of the certificates are installed into the OS and available to Adobe Acrobat/Reader.
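The path-validation requirement can be exercised with openssl before signing. Below is an illustrative sketch using a toy two-level hierarchy standing in for a real PIV-I chain (all names are hypothetical); validation succeeds only when the intermediate certificate is also available:

```shell
#!/bin/sh
set -e
# Toy hierarchy: root CA -> issuing CA -> signer (all names hypothetical).
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key \
    -subj "/CN=Toy Root CA" -days 2 -out root.pem

# Intermediate (issuing) CA, marked as a CA via basicConstraints.
printf "basicConstraints=critical,CA:TRUE\n" > ca.ext
openssl req -newkey rsa:2048 -nodes -keyout issuing.key \
    -subj "/CN=Toy Issuing CA" -out issuing.csr
openssl x509 -req -in issuing.csr -CA root.pem -CAkey root.key \
    -CAcreateserial -days 2 -extfile ca.ext -out issuing.pem

# End-entity signer certificate.
openssl req -newkey rsa:2048 -nodes -keyout signer.key \
    -subj "/CN=Toy Signer" -out signer.csr
openssl x509 -req -in signer.csr -CA issuing.pem -CAkey issuing.key \
    -CAcreateserial -days 2 -out signer.pem

# Path validation succeeds only because the intermediate is supplied;
# with just the signer's certificate, chain building would fail.
openssl verify -CAfile root.pem -untrusted issuing.pem signer.pem
```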
As an example, below is an overview of configuring digital signatures with the HID PIV-I service.
After the customer application is approved and credentials are being issued, the user will need to install the chain of certificates on their signing systems.  The certificates required are:
  1. HIDSigningCA1
  2. HIDRootCA1
  3. Federal Bridge CA
  4. CertiPath Bridge CA – G2

There are several ways these certificates can be installed.  The easiest is to open the attached file HID_PIV-I_AdobeConfiguration.pdf, which provides a simplified installation experience into Adobe Acrobat and Adobe Reader.  You can also download the FDF directly here:  HID-PIV-I-Certs-AdobeReader.fdf

Now you can sign a PDF file and it will automatically validate for anyone with Acrobat or Reader version 9.1 or higher.

A sample HID PIV-I signature document, digitally signed with a production HID PIV-I card, looks like this:

Here is the path that the digital signature follows for validation:

What is a Certified Document and when should you use it?

A Certified Document provides recipients of PDF documents and forms with added assurances of authenticity and integrity. Here are two frequent use cases for Certified Documents that illustrate these capabilities:

  1. You publish files and want the recipients to know that the files really did originate from you and they have not been accidentally or maliciously modified since you published them.
  2. You distribute electronic forms with pre-populated information, and want to make sure recipients are not accidentally or maliciously modifying your form data when returning them to you.

To certify a document, you can use Acrobat on the desktop or LiveCycle Digital Signatures as part of an automated process on a server. To verify the certification on a document, desktop users simply open the PDF with the free Adobe Reader or Adobe Acrobat. If you would like an automated process to verify certified documents on a server, LiveCycle Digital Signatures can also verify certified document status.

When a document has valid certification, a blue ribbon in a blue bar will show above the document in the viewer, like this:

In this case, the document originated from the United States Government Printing Office.  It was published as part of an automated Adobe LiveCycle process, and the source document is publicly available here (http://www.gpo.gov/fdsys/pkg/BILLS-106s761enr/pdf/BILLS-106s761enr.pdf) as part of their Federal Digital System which has very specific requirements on authentication when publishing official US Government documents to the public.  In 2008, the Executive Office of the President, Office of Management and Budget (OMB) stated the White House was no longer ordering hard copy paper versions of the US Federal budget, and instead has posted certified PDF documents online.

Certified documents are also implemented at Antwerp Port Authority for electronic invoices and at a number of higher education institutions for delivering student transcripts electronically, including Penn State, Northwestern, Stanford, and more.

In addition to static documents, certifying a document increases the level of security in electronic forms workflows.  Here is an example:
a) Organization generates a form for recipient to complete and return
b) Form contains some specific transactional information, like an interest rate (3%) and term (15yrs).
c) Recipient decides they will change the rate and term to be more favorable, and then digitally signs it and returns it.

Typically, the form publisher would have to manually review every completed form to look for such errors, and they can often be overlooked.  The better solution is to certify the form as it is published to the recipient.  The added assurances here are that the recipient knows it’s an official form that hasn’t been tampered with, and when the publishing organization receives a completed and signed form back – they know that what was sent out has not been changed along the way.  The certification also allows the form author/publisher to specify which fields and form elements are locked, and which can be filled in by the recipient.

Here is an example of a certified form:

The source PDF file is available here as a Sample.

In either of these cases, if an unauthorized change is made to a certified document, the blue ribbon indicator will turn into a red X.

More information on automating digital signatures for documents and forms is available in this previous post (LiveCycle Digital Signatures: Three Common Use Cases).

Certified documents utilize PKI and digital signatures to provide these assurances of authenticity and integrity. These capabilities are built into the ISO 32000 PDF standard as well as Adobe Acrobat, Reader, and LiveCycle. Adobe products use FIPS-certified implementations of RSA encryption and SHA hashing algorithms (up to RSA-4096 and SHA-512). The publisher/signer uses their private key to sign documents on the desktop (Acrobat) or server (LiveCycle), and recipients simply use Acrobat or Reader to view them.

Recommendations and best practices:

A. Make sure your signing certificate is trusted by your recipient community.  This can be accomplished in several ways:

1) Utilize the Adobe CDS/AATL program, where certificates are automatically trusted and the recipients have zero configuration to validate digital signatures.  You can either obtain a certificate from a registered Adobe provider, or if you meet the strict program requirements – have your certificate authority automatically trusted.  NOTE: If you are publishing documents to the general public, CDS/AATL is the only recommended option.

2) Utilize enterprise install and management capabilities to push out trust anchors in pre-configured installations, and maintain them on an internal server.

3) Utilize an enterprise desktop configuration setting to trust the existing certificate store in the operating system (e.g. Windows CAPI)

B. When certifying a document, make sure that all certificates from the trust chain are available on the signing system (desktop or server). This includes not only the end-entity signing certificate, but also any intermediate certificates up to the trust anchor. That way, the recipient only needs to have the trust anchor, as described in the previous section.

C. When publishing a certified document with a digital signature, make sure you are online and able to reach the revocation information published by the certificate authorities. That way, long-term validation (LTV) information is stored in the document. If this information is not included, the certified document will no longer validate after the signing certificate expires.

D. By default, certified documents use the system clock as the date/time indicator. If you have higher assurance needs for time, utilize an RFC 3161-based timestamp authority as part of the digital signature process.