Security misconceptions – Watermarks, Usage Rights and Rights Management

There is some confusion about what the features of Acrobat, and PDF in general, offer by way of securing documents. I would like to give a very cursory overview of the items that I have so far seen users consider “security.”

To be clear, by “security” I mean the ability or inability to access the contents of the PDF, thus safeguarding information from entering the wrong hands.

1) Not Security-Oriented

a) Watermarks

Unlike the watermark on your Dollar, Euro or Pound notes (etc), a PDF watermark is NOT a guarantee of integrity, veracity or anything at all.

In the PDF world, a visible watermark only exists as a notification mechanism. If a watermark says “Confidential,” it is only warning the viewer that the content is confidential, but will not otherwise try to make itself indelible.

It is meant to be a very visible mark on the page, with the added property of not completely obscuring the items underneath (allowing readability to be maintained).

b) Certification

A Certified PDF carries a digital signature certifying that certain things can and cannot be done with it. Namely:

-A PDF certified to run privileged scripts can run scripts requiring special privileges, such as writing to the hard drive.
-A PDF certified to be unmodified means that so long as the PDF is only modified within given parameters (fields filled in, for example), the certification will hold. If a visual aspect of the PDF changes, though, the certification will be broken, and Acrobat will report an error.

Certification covers a number of other use cases as well, but I hope the above illustrates sufficiently why this is not a security-related item, but rather a usability concern.

c) Reader Extensions Usage Rights

Acrobat and LiveCycle can extend the usability of PDFs to Adobe Reader, the free PDF viewing application. By extending usability features, you can allow Reader users to fill in forms and save that content, add comments and annotations, and use other functionality.

However, if the same extended form is opened in Acrobat, the user can do to the PDF pretty much anything that Acrobat has at its disposal.

REUR adds functionality to Reader. Any functionality it does not add remains restricted, just as it always was in Reader.

2) Security-Oriented

a) Password Protection

Using password protection, you can encrypt the PDF so it can only be opened by a person who has the password. You can also prevent the PDF from being used in certain ways, such as modifying the pages.

You cannot however track who has opened the PDF, when and at what IP. That is the domain of Rights Management.

b) LiveCycle Rights Management (aka Policy Server)

LiveCycle 7 introduced Policy Server, later renamed to LiveCycle Rights Management. Adobe LiveCycle/ADEP Rights Management protects your documents from being accessed by parties you have not authorized to do so.

This allows the document publisher to:
-protect with a user ID/password combination
-force the identification to go to a remote server
-restrict usage rights depending on the user’s group

With this in mind, you must be aware that ONLY persons that are trusted should be granted a login to the document. If, on a document that you want to protect, you have granted access to a person you do not entirely trust, you have opened the door to having your information stolen – be it via screen grab, or simply photographing the screen with a camera.

It’s like having the best vault to protect your secrets and giving the secretary the passcode for safekeeping. If the secretary is honest, they will leave your items well alone. But if you did not trust them in the first place, the vault, for all its technology and mechanisms, cannot protect your secrets – because you’ve willingly given the key to the intruder.

3) A note on Rights Management and SSL

To use Adobe LiveCycle Rights Management, you need to set up the server to serve SSL connections, and configure the callback URL appropriately in the LiveCycle/ADEP Rights Management service configuration.

Note that if the server’s SSL certificate specifies external CRLs, you must be able to grant the client application free network access to the CRL’s URL – otherwise the connection will fail.
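As a quick sanity check for that last point, you can probe the CRL distribution point from the client machine before troubleshooting deeper. The sketch below is my own (not part of any Adobe tooling), and the URL shown is hypothetical – take the real one from the certificate's CRL Distribution Points extension.

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class CrlProbe {
    /** Returns true if the CRL URL answers with a non-error HTTP status. */
    public static boolean reachable(String crlUrl) {
        try {
            HttpURLConnection c = (HttpURLConnection) new URL(crlUrl).openConnection();
            c.setRequestMethod("HEAD");
            c.setConnectTimeout(5000);
            c.setReadTimeout(5000);
            return c.getResponseCode() < 400;
        } catch (Exception e) {
            // Unreachable host, refused connection, timeout, etc.
            return false;
        }
    }

    public static void main(String[] args) {
        // Hypothetical CRL URL for illustration only
        System.out.println(reachable("http://crl.example.com/ca.crl"));
    }
}
```

If this returns false from the client machine, the SSL handshake with the Rights Management server will fail for the same reason.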

I hope that this article has allowed you to understand the subtle difference between perceived security tools and actual security features – and most importantly, the fact that if you suspect a user is likely to try to do Bad Things with your information, you should not give them the keys to the vault.

My own Rule Number One of security is: “don’t trust anyone, not even those you trust.” Then add exceptions, based on well-founded assumptions.

— Tai

Understanding the LiveCycle GDS – and freeing up disk space

LiveCycle, as a piece of Enterprise software, tends to assume that you may want to keep a quantity of data around for posterity. Long-lived processes can cause a lot of disk space bloat, and whilst this is fine for those who wish to archive extensively, it may not be ideal when running a lower-spec server.

In this article, I will point out the main areas where data accumulates and disk space gets used, and how to clean up.

1) About Short-Lived and Long-Lived processes

Processes (also known as “workflows”, or “orchestrations”) are created in LiveCycle Workbench. This tool allows you to create workflows, or processes, organized into Applications; each process can be either long-lived (“asynchronous”) or short-lived (“synchronous”).

When a short-lived process is invoked, the response is only returned once the whole process has run. For this reason, no short-lived process can have a step which requires human interaction – namely, a Workspace task.

When a long-lived process is invoked, the request returns immediately. The process will run, but you will need to get the result through a different request or action. Long-lived processes do not need to have a human-centric activity in them: you could use a long-lived process to send a document to the server for processing, without needing to know what status it ended up in.

Note that for any process that stalls, the associated documents will also be kept, ready for recovery, analysis and debugging.

2) About the Global Document Store

The Global Document Store, also known as “the GDS”, is a space on the hard drive or in the database (depending on your configuration in the Core System Configuration) where LiveCycle stores documents during the running of processes, and once long-lived processes are complete.

Note that whilst the GDS stores the files themselves, the references to them that processes need are stored in the database. For this reason, the GDS and the database must NEVER be out of sync. Should that happen, any running processes would fail, making data recovery difficult or even impossible.

In short-lived processes, when documents are larger than a certain size, they will be written to the GDS instead of being held in memory. This size is set in the Admin UI as the Document Max Inline Size. When a result document is produced, no matter what its size, it will be written to the GDS. Short-lived processes can return the document itself, or a URL to the document. Accessing this URL will cause LiveCycle to look up the document in the GDS and write it back to the client.

Documents from short-lived processes are removed after their time has passed. The Sweep setting (in the Admin UI, under Core System Configuration) determines how frequently the GDS is scanned for documents to delete, and the associated Document Disposal Timeout determines how long a document should be kept. If during a sweep of the GDS a new document from a short-lived process is found, it is marked for expiry by placing a similarly named document in the GDS, with a timestamp indicating the clock time after which the document should be deleted – this clock time is determined by the disposal timeout. Every subsequent sweep checks the timestamp, and if the current clock time is past the one recorded, the document is deleted. The URLs returned from short-lived processes need these documents to remain available for a window of time, between the moment the URL is returned to the user and the moment the user clicks it. It is good to set the Document Disposal Timeout to a value between 30 and 120 seconds, depending on the load expected on the server.
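As a toy illustration of the mark-then-delete interaction between the Sweep and the Document Disposal Timeout (this is my own model of the behaviour described above, not Adobe's implementation):

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Toy model: a sweep first stamps unmarked short-lived documents with an
// expiry time (now + disposal timeout); a later sweep deletes anything
// whose stamp has passed.
public class GdsSweepModel {
    private final Map<String, Long> expiry = new HashMap<>(); // doc -> expiry clock time (null = unmarked)
    private final long disposalTimeoutMs;

    public GdsSweepModel(long disposalTimeoutMs) {
        this.disposalTimeoutMs = disposalTimeoutMs;
    }

    /** A short-lived process writes a result document to the GDS. */
    public void add(String doc) {
        expiry.put(doc, null);
    }

    /** One sweep pass at the given clock time; returns how many docs were deleted. */
    public int sweep(long now) {
        int deleted = 0;
        for (Iterator<Map.Entry<String, Long>> it = expiry.entrySet().iterator(); it.hasNext(); ) {
            Map.Entry<String, Long> e = it.next();
            if (e.getValue() == null) {
                e.setValue(now + disposalTimeoutMs); // first sighting: mark for expiry
            } else if (now > e.getValue()) {
                it.remove();                         // stamp has passed: delete
                deleted++;
            }
        }
        return deleted;
    }

    public boolean contains(String doc) {
        return expiry.containsKey(doc);
    }
}
```

Note that a document always survives at least one full sweep interval after being marked, which is why the disposal timeout bounds how long a returned URL stays valid.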

Long-lived processes will write required documents to the GDS before assigning them to a human-centric task so that they can be obtained later when the user actually logs on to process them. At the end of the process, the final collaterals are kept in the GDS for posterity and later review if required.

Thus, for long-lived processes, the files are never disposed of. The default behaviour for the GDS then is to constantly grow, if long-lived processes are used. If you do not want this to happen, you must perform regular purges.

3) Purging Jobs

In LiveCycle ES, a command-line purge tool is provided to purge jobs that have either completed or been terminated. It still exists in ES2, should you ever need it.

In LiveCycle ES2, the Health Monitor was introduced to offer a graphical UI for performing purges.

In ES2 SP2, a purge scheduler was introduced to automate, at intervals, the purge of jobs.

a) If you are on ES2 SP2

Connect to Admin UI and go to Health Monitor > Job Purge Scheduler

Schedule a One Time Purge for records older than 1 day

b) If you are using ES2 pre-SP2

Connect to Admin UI and go to Health Monitor > Work Manager. Search with the following criteria:

-Category = Job Manager
-Status = Terminated
-Create Time: older than 2 weeks
-(iterate over time periods)

Delete any terminated processes that are found.

c) If you are on LiveCycle ES

The purge tool requires some knowledge of the contents of the LiveCycle database; for this reason I will not cover this in this article.

You can find most of the required information in the link below, however you would be best advised to operate under the guidance of the Enterprise Support service, if you can.

4) A note on process recordings

I would like to add a special note here concerning process recordings. These can be activated via Workbench by right-clicking on a process or on a process canvas, and selecting Process Recordings > Start Recording.

This will record the activity of every time the process is launched, including the contents of LiveCycle variables, branches followed, etc, at EVERY step of the process, for later review in Workbench.

Even processes not started in Workbench will be recorded.

For this reason, process recordings must be activated ONLY for debugging purposes.

Process recordings are heavy, and are not suitable for a production server, both in terms of performance and space used. They can easily be deleted via Workbench through the playback dialog.

Central’s !Replace! command requires a caret line

Central Pro Output Server has preamble syntax which allows you to replace lines in the field nominated data. There is, however, an additional requirement that is implied, but not explicitly stated:

!Replace! only works with lines that start with “^”

The following definitions:

^define group:!Replace!^bogus!content ^comment content
^define group:!Replace!xyz ^comment xyz
^define group:!Replace!^standalone ^standalone complex

With the following data:

^bogus content

Produce the following result:

^comment content
^standalone complex

If you want to have placeholder data, the data elements need to start with a caret “^” character.

A line to be replaced using !Replace! must be quoted verbatim. If you want to use replacement patterns, you will need to add your own pre-processing.
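Such pre-processing can be as simple as normalizing variants of a line into the one verbatim caret-line that your !Replace! definition expects, before the data reaches Central. The sketch below is my own; the field name and pattern are invented for illustration.

```java
import java.util.regex.Pattern;

// Hypothetical pre-processor: !Replace! only matches a caret-line quoted
// verbatim, so pattern-style substitutions must be done beforehand.
public class FnfPreprocessor {
    // Match any data line starting with "^bogus" (multiline mode)
    private static final Pattern BOGUS = Pattern.compile("(?m)^\\^bogus\\b.*$");

    public static String preprocess(String fnfData) {
        // Collapse every variant of a ^bogus line into the one literal line
        // that the ^define group:!Replace! entry expects
        return BOGUS.matcher(fnfData).replaceAll("^bogus content");
    }
}
```

After this pass, every matching line is the exact literal that Central's !Replace! definition will recognize.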

Central Migration Bridge: centralDataAccess requires lower case FNF commands

Central Migration Bridge allows you to use your Central Pro Output Server inputs with LiveCycle and XFA forms by converting Field Nominated Data to XML using the centralDataAccess operation.

A caveat, however, is that the field nominated data commands need to be in lower case for centralDataAccess to understand them. This will not suit some setups, which have data applications producing upper case commands.

You can work around this issue by adding an ExecuteScript activity before the centralDataAccess activity. Say the FND was loaded into a document variable called inFNF. The following script reads that data, processes it, and writes it back to the inFNF variable:

import java.util.regex.Pattern;
import java.util.regex.Matcher;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import com.adobe.idp.Document;

// XPath to the process variable in which you loaded the field nominated data
String procvar = "/process_data/inFNF";

StringBuffer buff = new StringBuffer();
String crlf = "\r\n";
Document indata = patExecContext.getProcessDataDocumentValue(procvar);
BufferedReader fnf_in = new BufferedReader(new InputStreamReader(indata.getInputStream()));

// Capture the command itself (caret, optional "$", upper-case word)
// separately from the rest of the line
Pattern p = Pattern.compile("^(\\^\\$?[A-Z]+?)($|\\s+.*)");

String line = fnf_in.readLine();
while (line != null) {
    Matcher m = p.matcher(line);
    if (m.find()) {
        // Lower-case only the command, leaving the rest of the line untouched
        line = m.group(1).toLowerCase() + m.group(2);
    }
    buff.append(line).append(crlf);
    line = fnf_in.readLine();
}

Document outdoc = new Document(buff.toString().getBytes());
patExecContext.setProcessDataDocumentValue(procvar, outdoc);


Using a regular expression isolates the command and sets it to lower case, rather than affecting the whole line.

Cannot install LiveCycle ES SP3 on Linux

There is a known issue with the LiveCycle ES SP3 installer wherein, on some Linux systems, running the installer produces an OutOfMemoryError despite there being more than 1 GB of free memory (you probably don’t need that much, but it’s always safe on a server to have some leeway…)

To get around this issue, you can use a Windows staging platform for a Linux target system: perform the installation on Windows, deploy manually, and remotely run the Configuration Manager. This is also useful for locked-down Linux systems where the app server is installed and accessible (for JBoss this means at least read/write permissions in JBoss’s deploy directory), but you do not have the administrative rights to run the LiveCycle installer.

1. What does the patch/SP installer do?

Unlike your average Windows installer or *nix package installer, simply running and completing the LiveCycle Service Pack or patch installer will not have applied it to your live system. What the installer does is copy a series of JAR files to the LiveCycle installation’s holding directory, ready to be packaged into EAR files for deployment – but it does not do the packaging for you.

The packaging is left to the LiveCycle Configuration Manager, which when launched will load the configuration from your previous installation run and package the EARs accordingly, taking into account local system paths and parameters.

2. What’s the staging platform?

A Windows staging platform allows you to run the patch installer on a mock LiveCycle installation. You can thus run a patch installer on Windows and compile EARs for *nix, then upload these files from the Windows box to the *nix box.

To set up a Windows staging platform, run the LiveCycle installer on a Windows machine and select a manual install. It will then ask whether you want a regular installation or a staging installation. Select the staging option for the target operating system of your choice and complete the install, using the same parameters as you would if installing natively on your target system.

Now you can run patch installations for your target LiveCycle live system from this “dummy” system.

3. Steps required for installing via a staging platform

  1. Copy the installer packages for LiveCycle and the Service Pack to a Windows machine (Win 2003 Server, Win XP Pro, Win7, Win 2008 Server)
  2. Run the LC installer – it is provided with both Windows and Linux executables, but the significant code is Java, so the same ZIP used for the Linux install can be used for the Windows install
  3. Select a manual install, and then select a staging platform for your target operating system
  4. Once the components are “installed”, run the configuration manager to compile the EAR files
  5. When prompted to deploy the EARs, deploy the EAR files to the application server and restart it
  6. Continue with the configuration manager pointing to the LiveCycle server’s IP address

Information is available from page 12 of the following guide:

Cannot import or export runtime configs from Admin UI

The LiveCycle ES2 documentation reads:

You can export the runtime configuration information for deployed applications.

1. In LiveCycle Administration Console, click Services > Applications and Services > Application Management.
2. Click the name of the application.
3. Click Export Runtime Config and save the configuration file (XML) that is produced.

But some people will not see the “Export Runtime Config” and “Import Runtime Config” buttons. This is unusual behaviour for which I have not yet identified a reason; but here is a workaround:

-Right-click the main area of the page, and choose to “show only this frame”
-Check the URL – replace “” with “” and hit enter

You will now be able to see the import and export buttons, fully functional.

Changing the JBoss multicast port for LiveCycle

You may find you need to change the multicast address and port of your LiveCycle cluster – specifically, if your LC cluster shares the same multicast configuration as other clusters on your intranet, they might find each other and try to combine into a super cluster.

This is a huge issue if, for example, both are LiveCycle clusters, but one is the test cluster and the other is a QA or even production cluster. This can cause all sorts of havoc, ending up with database inconsistencies, lost data, gridlocked requests, etc. If you have a communication or database issue on a cluster, you MUST check your multicast setup as a first step!

Multicasting in LiveCycle ES and LiveCycle ES2 is built in to the application server – WebSphere or WebLogic; or JBoss in our case.

To change the multicast port for a JBoss cluster:
– stop each of the JBoss nodes;
– edit for each node the run.bat (Windows) or (*nix) script as appropriate;
– identify in each the multicast port, and change it (all to the same port);
– save the changes, and start JBoss again

Remember: no two clusters on a same network/intranet may use the same multicast address + port. That is absolute.

(Class) cannot be cast to (interface) in a custom LiveCycle component


You are getting an error along the following lines when running a custom component for LiveCycle ES and ES2:

"[com.adobe.AChild] cannot be cast to [com.adobe.AParent]"

even though you know for sure that AChild is a child or implementor of AParent – for instance:

"com.adobe.idp.taskmanager.dsc.client.query.TaskRowImpl cannot be cast to com.adobe.idp.taskmanager.dsc.client.query.TaskRow"


Symptoms:
-You have included the JAR files from LiveCycle’s deployment directories in your component JAR
-If you remove the Adobe JAR from your custom component’s JAR, you get a NoClassDefFoundError


Solution:
-Remove the Adobe JARs from your custom component’s JAR
-Modify your component XML and add tags as appropriate to include the Adobe packages you need


This puzzling problem is simply to do with how different class loaders interact. To abstract away from the actual LiveCycle classes, I will talk about a fictitious Pan class.

1) About class loaders

A Pan is not a Pan when you have two different definitions of a Pan.
When you load a class called Pan with one class loader, and you load it again with a different class loader, they are different definitions in memory of a Pan.
So when you try to assign [a Pan instance from the first class loader] to [a Pan variable defined by the second class loader], the JVM will not match them up as being the same type of Pan – and you will get the unexpected ClassCastException.
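This effect can be demonstrated outside LiveCycle with a self-contained sketch of my own: compile a trivial Pan class at runtime, then load it through two independent class loaders. The two Class objects are distinct, so an instance from one loader is not an instance of the other – exactly the situation that produces the surprising ClassCastException.

```java
import javax.tools.ToolProvider;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class TwoPans {
    public static void main(String[] args) throws Exception {
        // Compile a minimal Pan class into a temp directory
        Path dir = Files.createTempDirectory("pans");
        Path src = dir.resolve("Pan.java");
        Files.write(src, "public class Pan {}".getBytes());
        int rc = ToolProvider.getSystemJavaCompiler().run(null, null, null, src.toString());
        if (rc != 0) throw new IllegalStateException("compile failed");

        URL[] urls = { dir.toUri().toURL() };
        // parent = null: each loader defines Pan itself instead of delegating
        try (URLClassLoader a = new URLClassLoader(urls, null);
             URLClassLoader b = new URLClassLoader(urls, null)) {
            Class<?> panA = a.loadClass("Pan");
            Class<?> panB = b.loadClass("Pan");
            Object pan = panA.getDeclaredConstructor().newInstance();

            // Same bytes on disk, but two distinct Class definitions in memory
            if (panA == panB) throw new AssertionError("expected distinct Class objects");
            // A cast of pan to panB's type would throw ClassCastException
            if (panB.isInstance(pan)) throw new AssertionError("expected cast to fail across loaders");
            System.out.println("distinct definitions; cross-loader cast would fail");
        }
    }
}
```

This requires a JDK (for the compiler), but needs no LiveCycle classes at all – the clash is a pure JVM behaviour.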

-When the JVM is started, it uses a class loader to start loading all the classes it needs, along with the web application server.
-When the app server starts loading LiveCycle, it creates several class loaders – one for each EAR, for example; and within the EARs, one for each WAR, etc.
-Many different components are loaded by many different class loaders

When you deploy your custom component, it gets loaded by its own class loader. Here’s where it gets tricky.

2) Delegating class loaders

If you ask LiveCycle to give you a Pan via some API call, it will give you a Pan that one of its class loaders defined. When in your code you declare a variable to hold that Pan, you’re using your class loader’s Pan definition. A definition clash ensues.

Obviously you don’t want to load your own Pan when LiveCycle already has a definition for a Pan – so remove the Adobe JARs that you included with your custom component. If you run the project now, you may get a NoClassDefFoundError. Why?

If your class loader cannot find a Pan of its own, it delegates finding the Pan up to the class loader that loaded it – the parent class loader. The parent class loader consults the list of packages you declared in your component XML, and searches the packages it is aware of for the class definition. If it cannot find what you are looking for, the search is deferred to its own parent class loader, and so on until the root class loader is reached.

At this point, one of two things can happen:
-if you specified a package in component XML that contains the Pan you seek, that class definition will be passed back to your class loader.
-if you did not specify a package containing the Pan you seek, NoClassDefFoundError will be thrown.
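As a toy model (my own sketch, not the real JVM algorithm), the lookup described above – resolve locally first, then delegate to the parent only for packages declared in the component XML – can be expressed as:

```java
import java.util.Set;

// Toy model of a component class loader's resolution order
public class DelegationModel {
    private final Set<String> localClasses;     // classes this loader defines itself
    private final Set<String> importedPackages; // packages listed in the component XML
    private final Set<String> parentClasses;    // classes the parent loader can supply

    public DelegationModel(Set<String> local, Set<String> imports, Set<String> parent) {
        this.localClasses = local;
        this.importedPackages = imports;
        this.parentClasses = parent;
    }

    /** Returns which loader supplies the class, or throws NoClassDefFoundError. */
    public String resolve(String className) {
        if (localClasses.contains(className)) {
            return "component"; // found locally, no delegation needed
        }
        String pkg = className.substring(0, className.lastIndexOf('.'));
        if (importedPackages.contains(pkg) && parentClasses.contains(className)) {
            return "parent";    // delegated: package was declared in component XML
        }
        // Package not declared (or class genuinely absent): outcome two
        throw new NoClassDefFoundError(className);
    }
}
```

Note the key asymmetry: declaring the package is what makes delegation possible at all; a class the parent could supply still fails to resolve if its package was never listed.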

The document has been changed since it was created and these rights are no longer valid.

As a viewing program, Adobe Reader cannot make any changes to plain PDFs – it is not made for file creation or editing.

However, if you have Adobe Acrobat or Adobe LiveCycle Reader Extensions ES/ES2, you can activate functionality in a PDF that allows even Adobe Reader to handle that given PDF with advanced features, such as filling in forms, saving data into forms, commenting and annotating. These PDFs are known as “Reader-Extended PDFs”.

Sometimes when opening such PDFs, you will see the following warning:

This document contained certain rights to enable special features in Adobe Reader. The document has been changed since it was created and these rights are no longer valid. Please contact the author for the original version of this document

This means one thing: some program opened the PDF and made changes to the structure of the PDF itself – this is different from form-filling or annotating.

The change could have been made by an Adobe program (although Reader 8.1.0 displayed the message erroneously until 8.1.2 fixed it), another program (typically other PDF writers), or even automated programs that analyze and modify files.

At this point, only Adobe Acrobat can fix the file and re-apply the special features. If Reader has detected a structural change, however, it is advised to obtain a fresh copy of the form, as the changed version might not behave properly. Acrobat can export data from the damaged form, which can then be imported (by Acrobat) into the fresh form.

The fresh form can then be saved with this data in it, after which the special features can be applied to it.
The extended PDF can then be returned to the user to be used in Adobe Reader.

Converting DOC(X) in LiveCycle PDF Generator yields error: ALC-PDG-019-060-Encountered error while importing the XMP file.

When PDFG ES2 converts Word documents, it extracts the metadata and compiles it as an XMP for the PDF.

There is a known problem however, when a file contains custom metadata: PDFG cannot compile the XMP properly for such files.

To check the metadata in the DOC, open the file in Word.
Use the MS Office menu (the big round one, top left): Prepare : Properties.
A small yellow info bar will appear above the editing area.
In that bar, click “DIP Properties – …” and select “Advanced properties…”.
Go to the “Custom” tab.

If there are items in the space at the bottom labelled “Properties”, then your DOCX is likely to fail to convert.

There are two solutions to this:

A) Workaround.
1/ Log on to the LiveCycle Admin UI
2/ Go to Services : PDF generator : File type Settings : [your file type settings]
3/ Open Word file type settings
4/ Uncheck “Convert document info”
5/ Save

This will prevent PDFG from trying to extract and convert the Word metadata – thus excluding the problematic custom information.

B) Obtain the patch

If you have an Adobe Platinum support agreement, you can contact technical support and request the patch.