Tag Archives: Workbench

Task Summary Page

Workspace in LiveCycle ES4 now includes a new tab called the “Summary” tab. As the name indicates, this tab can be used to summarize the details of the current task. By default the “Summary” tab has no content; the process developer creates the content for it. In general it is used to show extra information related to the current task, such as a task summary or driving directions for the address mentioned in the task.

Read the complete post here.

How to render xdp as HTML form in HTML Workspace

By default, xdp forms are rendered as PDF forms even in HTML Workspace. To render or submit an existing xdp/pdf as an HTML form in HTML Workspace, we need to add a new action profile called “HTML”, which uses “render new html form” as the render process and “submit new html form” as the submit process. Below are some simple steps to configure this:

Read the complete post here.

Unable to Create new application in workbench

- Pankaj Gakhar, Software Engineer @ Adobe

You may receive the following error while creating an application in Workbench:

Failed to create application, please check if application name contains invalid string.

This is because the service provider has been changed to a value other than RepositoryProviderService. Restore it and you’ll be able to create applications in Workbench.


Read the complete post at Adobe LiveCycle Blog.

LiveCycle ES 3 Release Now Available

Dave Welch, Senior Director – LiveCycle

We are pleased to announce the release of Adobe LiveCycle Enterprise Suite 3 (ES3). LiveCycle ES3 contains the document and data services capabilities, including electronic forms and business processes, which were formerly part of the Adobe Digital Enterprise Platform (ADEP), a brand that is being retired.

The new LiveCycle ES3 release incorporates:

  • Document services capabilities available with ADEP and the recent ADEP Document Services service pack 1
  • LiveCycle Data Services 4.6.1
  • Updates to LiveCycle Connectors for Microsoft® SharePoint® and IBM® FileNet

LiveCycle offers a number of components that help extend the value of existing back-end systems by better engaging users, streamlining processes, managing correspondence, and strengthening security.

Read the complete post at http://blogs.adobe.com/livecycle/2012/03/livecycle-es-3-release-now-available.html.

Solutions and the Application Context

– Saket Agarwal

Solutions over ADEP have introduced the concept of an application context (aka app context), a unique identifier that various server-side modules use to identify the execution context (from the current execution thread) and process requests in the context of that solution. For instance, when persisting content/assets onto the CRX repository, the platform’s persistence module (officially known as Platform Content) uses the current (invoking) app context to determine where to store the content/assets and which configurations to use (these would typically differ between solutions). See the snapshot below, indicating the solution-specific content areas.

Note that the storage location is /content/apps/cm for Correspondence Management, and /content/apps/icr for Integrated Content Review, which happen to be the app contexts for the two solutions.

Since it is essential for the server to identify the execution context, if you do not set or establish the application context before you make calls to the solution APIs, you will encounter a server error that says: “Unable to fetch application context”. To set the app context, use one of the two methods below:

App context in your Flex application

If you are invoking a solution API from a Flex application, ensure that you set the app context using:

var appToken:AsyncToken = com.adobe.ep.ux.content.services.search.lccontent.LCCQueryServiceFactory.getInstance().setAppContextService("/content/apps/cm"); // setting app context for the CM solution
appToken.addResponder(new mx.rpc.Responder(<your success handler>, <your fault handler>));

App context in your Java application

If you are invoking a solution API from a Java-based application, ensure that you set the app context using:

com.adobe.livecycle.content.appcontext.AppContextManager.setCurrentAppContext("/content/apps/cm"); // setting app context for the CM solution


The app context concept is also leveraged in other scenarios, such as driving solution-specific Building Block (BB) configurations. Since a Building Block is meant to be reusable across solutions, it exposes certain configurations that can differ between solutions. Hence, the BB needs to behave differently depending upon the context in which it is being used or invoked. Below is an example where the Data Dictionary BB is used by two solutions, CM and ICR, and has configurations specific to each solution, lying within that solution’s specific app context: /apps/cm for CM and /apps/icr for ICR.
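The context-keyed lookup described above can be sketched in plain Java. This is an illustrative model only; the class and method names are invented for this sketch and are not the actual Platform Content API. The app context acts as the key that selects the solution-specific storage area, and an unset or unknown context produces the “Unable to fetch application context” failure mentioned earlier.

```java
import java.util.Set;

// Illustrative model of context-keyed storage resolution. Names here
// are invented for this sketch; this is not the Platform Content API.
public class AppContextConfig {
    // App contexts known to the server (per the post: CM and ICR).
    private static final Set<String> KNOWN_CONTEXTS =
            Set.of("/content/apps/cm", "/content/apps/icr");

    // Resolve where to persist assets for the invoking app context.
    // The post notes that content is stored under the app context path itself.
    public static String storageRootFor(String appContext) {
        if (appContext == null || !KNOWN_CONTEXTS.contains(appContext)) {
            // Mirrors the server error seen when no context is established.
            throw new IllegalStateException("Unable to fetch application context");
        }
        return appContext;
    }

    public static void main(String[] args) {
        System.out.println(storageRootFor("/content/apps/cm"));
    }
}
```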

Original article at http://blogs.adobe.com/saket/solutions-and-the-application-context/.

Updating a LiveCycle Process on a Live System


Recently I have been seeing reports of stalled/failed processes, both short-lived and long-lived. It turns out that in some cases, process developers were updating processes on their live systems without taking due precautions. The following post explains what is actually happening under the hood, and why you can’t simply “hot change” a process.

A simple process

Imagine you have a process called MyNotify with two steps in it: a first task (not a start point) AssignUser, and a second task SendEmail. Pretty straightforward.

You have deployed this process, and process instances are merrily being kicked off. Great. Now you realize that you don’t want the AssignUser task; rather, you want a Workspace Start Point. You remove the AssignUser task and configure the Workspace Start Point to open the same form as the now deleted AssignUser task. Save and deploy.

You may now start getting calls from your users saying their form submission never resulted in an email. Upon investigating, you will likely find errors saying that “Job ID could not be found for Job […]”.

Oh no. What happened?

Tasks not found

Let’s wind back to when there were two steps. When a process instance is kicked off, an entry is made in the database saying that process MyNotify has been kicked off by user “Alice” (for example), and that it is assigned to Alice at the AssignUser task.

When Alice submits the form, LiveCycle goes to its database and checks the process – Alice just submitted the AssignUser step, so it checks MyNotify for an AssignUser step, and figures out what to do next from that.

If you have deleted the AssignUser task from MyNotify, LiveCycle will be unable to determine what to do next, and will stall the process instance. Future instances will work fine (they follow the new definition from the start), and anything that was already at the SendEmail point in the process will be fine as well (it is past the change point). It is specifically process instances in a state affected by the change that risk failure, and that failure happens without any possibility of roll-back or recuperation (bar performing some incantations in the database to recover the data, and then constructing some half-processes to mimic pre-change parts of the changed process, in the hope of pushing the data onwards: a lot of avoidable hassle, itself prone to failure).
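The stall can be modelled with a toy step-lookup. This is an illustrative sketch, not LiveCycle’s actual engine code: the instance’s saved step name is resolved against the current process definition when the task is submitted, and a deleted step simply cannot be found.

```java
import java.util.List;

// Toy model of resuming a process instance (illustrative only; not
// LiveCycle internals). The engine stores the *name* of the step an
// instance is waiting on and resolves it against the *current* version
// of the process definition when the task is submitted.
public class ProcessResume {
    // MyNotify after the edit: AssignUser was removed.
    static final List<String> MY_NOTIFY_STEPS = List.of("SendEmail");

    // Returns the step after the one just submitted, or "COMPLETE".
    static String nextStepAfter(String submitted) {
        int i = MY_NOTIFY_STEPS.indexOf(submitted);
        if (i < 0) {
            // The saved step no longer exists: the instance stalls.
            throw new IllegalStateException("Step not found: " + submitted);
        }
        return i + 1 < MY_NOTIFY_STEPS.size()
                ? MY_NOTIFY_STEPS.get(i + 1)
                : "COMPLETE";
    }

    public static void main(String[] args) {
        // Alice's in-flight instance submits the deleted AssignUser step:
        try {
            nextStepAfter("AssignUser");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```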

Staying safe

It is understandable that you may think that if the process were modified, already-running processes would continue with the older version. This is not so, or at least, not in this scenario.

When kicking off a new process instance, LiveCycle does not automatically make a copy of the process to be followed should a new edit of the process come into existence. Maintaining a copy of the full configuration of a process for each instance created would cause severe performance issues on the server, so a more de-coupled implementation serves best. Ensuring that a new copy of the process configuration is created is left up to the process developer.

Once a process is deployed, there are only two “safe” ways of updating the process.

Application Version

The first is to create a new Application version, and when all the new changes are made, deploy the new version.

  • All old instances will continue their course in the old version.
  • Workspace processes will kick off in the new version
  • Calls to the old version of the process will still call the old version

Once you are certain that NO more processes from the old version are running, and that no applications rely on the old version, you can safely undeploy it from the runtime, and delete it if you so wish.

When will references point to the new version automatically?

Having done this, all references to sub-processes inside the same application will be updated, as will all resources referenced relatively inside the application. For example, MyApplication/1.0/ProcessA can reference /FormA (in Assign User tasks, for example), which means “the current application version’s FormA”. A sub-process on the process-workflow canvas is referenced relatively if the sub-process is contained inside the same application.

When do you need to update references manually?

Take special heed, however: processes and resources that do not reside in the same application do not have their references updated; they are referenced absolutely. If MyApplication/1.0/ProcessA is upgraded, it will not change references to, for example, OtherApplication/1.0/OtherProcess. Similarly, if we upgrade to OtherApplication/2.0/OtherProcess, MyApplication/1.0/ProcessA will still point to OtherApplication/1.0/OtherProcess.

Finally, note that if you create a new version of an application, all resources and processes are copied over with new version numbers too, but any textual references to other resources remain the same. So if MyNotify/1.0/ProcessA has a variable referencing MyNotify/1.0/FormA, then MyNotify/2.0/ProcessA will also reference MyNotify/1.0/FormA; the same goes for resources such as XDPs, XSLTs, and any other resources stored in an application. The reason for this is that these references are stored as “strings” (pieces of text), and Workbench cannot, upon creating a new version, tell which snippets of text throughout the application are versions to update, which versions should not be updated, and which strings do not actually contain versions at all.
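The relative-versus-absolute behaviour can be summarized in a short sketch (illustrative names only; this is not Workbench code): a reference beginning with “/” resolves inside the current application version, while a fully qualified reference is an opaque string that survives version upgrades unchanged.

```java
// Illustrative sketch (not Workbench code) of how relative and absolute
// references behave across version upgrades.
public class RefResolver {
    // Resolve a stored reference for the given application and version.
    static String resolve(String ref, String currentApp, String currentVersion) {
        if (ref.startsWith("/")) {
            // Relative: "the current application's version's FormA".
            return currentApp + "/" + currentVersion + ref;
        }
        // Absolute: an opaque string, never rewritten on upgrade.
        return ref;
    }

    public static void main(String[] args) {
        // After upgrading MyApplication to 2.0:
        System.out.println(resolve("/FormA", "MyApplication", "2.0"));
        // An absolute reference still points at the old version:
        System.out.println(resolve("OtherApplication/1.0/OtherProcess",
                "MyApplication", "2.0"));
    }
}
```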

Down time (clean “brute force” update)

The other way is to plan downtime.

  • Prevent people from kicking off new processes
  • Let all processes complete
  • … and only then deploy the new edit of the process

You could choose the second method to avoid having to update references to resources, but doing so would defeat the purpose of having versioning at all, and it unavoidably incurs the overhead of downtime.

Original article at http://blogs.adobe.com/an_tai/archives/143.

Short-lived process and the utilization of native processes

LiveCycle uses native processes for Forms- and Output-related operations; more on this specific topic can be found here.

This blog post will go into detail about how short-lived processes utilize these native processes and how you can optimize this.

To demonstrate this, I am using the following process.

It has a setValue operation, then a renderPDFForm operation, and it ends with an executeScript.

The executeScript has the following contents:

System.out.println("After the render operation");

In this process, renderPDFForm uses the native process XMLForm. If no XMLForm process is running when you invoke this process, a new one is started. You can also kill the existing processes and then invoke the process again, and you will see that a new process is started.

By default you will have a maximum of 4 XMLForm processes running on your system, and these are allocated via a pooling mechanism.
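The pooling behaviour can be approximated with a counting semaphore. This is a toy model, not Adobe’s ResourcePooler class: at most four slots exist, allocation blocks when all are taken, and deallocation returns a slot to the pool.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Toy model of the native-process pool (not Adobe's ResourcePooler):
// at most `max` XMLForm processes, handed out and returned via permits.
public class XmlFormPool {
    private final Semaphore slots;

    public XmlFormPool(int max) {
        this.slots = new Semaphore(max);
    }

    // Allocate a native-process slot, waiting up to timeoutMs for one.
    public boolean allocate(long timeoutMs) throws InterruptedException {
        return slots.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS);
    }

    // Return the slot to the pool.
    public void deallocate() {
        slots.release();
    }

    public int available() {
        return slots.availablePermits();
    }

    public static void main(String[] args) throws InterruptedException {
        XmlFormPool pool = new XmlFormPool(4); // default max of 4
        pool.allocate(0);                      // a render holds a slot
        System.out.println(pool.available()); // one slot in use
        pool.deallocate();                     // render done, slot returned
        System.out.println(pool.available());
    }
}
```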

To get more information on this allocation/deallocation process, apply the following setting to your JVM:


When you now invoke the process, you will see something like this in the log:

[com.adobe.service.ResourcePooler] *****http- trying to obtain lock on com.adobe.service.ResourcePooler@1cf0bb for resource allocation
[com.adobe.service.ResourcePooler] *****http- allocated ProcessResource@f6b087(name=XMLForm.exe,pid=1392), Pool: {ProcessResource@f6b087(name=XMLForm.exe,pid=1392)=true}, initializing=false, poolsize=1, max=4
[STDOUT] After the render operation
[com.adobe.service.ResourcePooler] *****http- deallocated ProcessResource@f6b087(name=XMLForm.exe,pid=1392), Pool: {ProcessResource@f6b087(name=XMLForm.exe,pid=1392)=false}, initializing=false, poolsize=1, max=4

So in plain English, this would be:
1. Looking for an available XMLForm process
2. Allocating process with pid=1392 for this short-lived process
3. The message from the executeScript
4. Deallocating the process, and returning it back to the pool

At first sight you would say this is working as expected, and it is.

But the thing to look for is why the deallocation takes place AFTER the executeScript.

In a short-lived process, the allocation of the native process is maintained for the whole transaction; this means the native process is returned to the pool only when the short-lived process has completed.

Of course, in our example a simple executeScript will not extend the duration of the lock by much, but imagine you send out an email and afterwards execute a web-service call.

Having such steps in your process extends the lock beyond where it is needed.

What is the impact?

If you lock the native process for longer than needed, you can end up reaching the maximum number of native processes in use. When that happens, any subsequent operations that use the native process have to wait until one is returned to the pool.

In that case, you will see processes taking longer to complete, but you won’t see an increase in CPU usage on the machine.

This locking effect on native processes can also lead to a deadlock.  Consider the following process:

Here the process actually uses the XMLForm native process twice: once to render a form, and then, conditionally, an XMLForm native process may be used indirectly to carry out a flatten operation on the form. What can happen in this instance is that a lock is first obtained during the render; when the flatten operation is then performed indirectly by Assembler, it attempts to find another XMLForm native process that is not already locked. When multiple instances of this process run concurrently, all available native processes can be locked in the first step, leaving none available to carry out the second step, and a deadlock results. When this happens, the deadlock is eventually broken when the transaction times out.
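The deadlock pattern can be reproduced with the same semaphore model (a toy sketch, not LiveCycle code): each running instance holds one slot through the whole transaction and then asks for a second one, and when concurrent instances have already claimed every slot, the second request can only fail by timing out.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Toy reproduction of the deadlock described above (not LiveCycle code).
// A process holds one native-process slot through the transaction, then
// asks for a second; with every slot held by concurrent first steps,
// the second request can only be broken by a timeout.
public class NativePoolDeadlock {
    // Returns true if the second allocation succeeded within timeoutMs.
    static boolean renderThenFlatten(Semaphore pool, long timeoutMs)
            throws InterruptedException {
        pool.acquire();                 // render step locks a slot...
        try {
            // ...and keeps it while Assembler asks for a second slot.
            if (pool.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS)) {
                pool.release();         // flatten finished
                return true;
            }
            return false;               // transaction timeout breaks the deadlock
        } finally {
            pool.release();             // the slot held since the render
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // One concurrent instance per slot: the first step exhausts the pool.
        Semaphore pool = new Semaphore(1);
        System.out.println(renderThenFlatten(pool, 50)); // times out: false
    }
}
```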

What is the solution?

What we want to achieve is to return the native process to the pool as soon as possible, for example as soon as the render operation has completed.
The way to do this is to create a wrapper process around renderPDFForm; this wrapper process contains only one step and has the transaction setting “REQUIRES NEW”.

This wrapper is now called from the main process, instead of calling renderPDFForm directly.

If we use this subprocess in our example, you will see the following output in the log file:

[com.adobe.service.ResourcePooler] *****http- trying to obtain lock on com.adobe.service.ResourcePooler@1cf0bb for resource allocation
[com.adobe.service.ResourcePooler] *****http- allocated ProcessResource@f6b087(name=XMLForm.exe,pid=1392), Pool: {ProcessResource@f6b087(name=XMLForm.exe,pid=1392)=true}, initializing=false, poolsize=1, max=4
[com.adobe.service.ResourcePooler] *****http- deallocated ProcessResource@f6b087(name=XMLForm.exe,pid=1392), Pool: {ProcessResource@f6b087(name=XMLForm.exe,pid=1392)=false}, initializing=false, poolsize=1, max=4
[STDOUT] After the render operation

As you can now see in the output, the native process is returned to the pool BEFORE the next step starts.
The wrapper process here was created for renderPDFForm, but the same approach applies to the other operations of the FormsService and OutputService.
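What the REQUIRES NEW wrapper achieves can be sketched with the same semaphore stand-in (illustrative only; a real wrapper is a LiveCycle sub-process, not Java code): the render runs in its own scope, so its native-process slot is released as soon as the render returns, before the email and web-service steps run.

```java
import java.util.concurrent.Semaphore;

// Sketch of the effect of the REQUIRES NEW wrapper (illustrative only;
// the real wrapper is a LiveCycle sub-process). The render runs in its
// own transaction scope, so its native-process slot is released before
// the caller's later steps execute. Semaphore stands in for the pool.
public class RenderWrapper {
    // Wrapper sub-process: allocate, render, release, all inside its
    // own scope, mirroring a REQUIRES NEW transaction boundary.
    static void renderInNewTransaction(Semaphore pool) throws InterruptedException {
        pool.acquire();
        try {
            // renderPDFForm would run here
        } finally {
            pool.release(); // slot returned before the caller continues
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Semaphore pool = new Semaphore(4);
        renderInNewTransaction(pool);
        // Later steps (email, web-service call) no longer hold a slot:
        System.out.println(pool.availablePermits());
    }
}
```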


Original article at http://blogs.adobe.com/livecycle/2011/05/short-lived-process-and-the-utilization-of-native-processes.html.

Best Practice for Developing with LiveCycle Workbench ES2 – Application Structure, Iteration and Transition

Author: JianJun Jiao
Editor: Suhas Kakkadasam Sridhara Yogin

Customers using LiveCycle Workbench to develop solutions typically have the following requirements.
Note: To differentiate a customer’s LiveCycle application from an application in Workbench, we use “project” for the customer’s LiveCycle application.

1. Three environments (DEV for developing LiveCycle applications, TEST for testing them, and PRODUCTION for deploying and using them);
– The project can be moved across environments smoothly.

2. Multiple active development streams, i.e. parallel development with different developer roles;
– Roles can include process developer, form developer, Flex developer, schema developer, and so on.

3. An iterative development style;
– The project can be handed over for testing or production while development of later versions continues.

LiveCycle Workbench ES2 provides the fundamental functionality needed to meet these development requirements. The development model in LiveCycle ES2 is application-centric rather than the repository-centric model used in previous versions.

Here is a general solution:

1. Use multiple structured applications for parallel development and resource management;
2. Make use of application versions for iterations;
3. Use LiveCycle archives for transitions;

Here is an application structure sample:

In LiveCycle ES, all processes, forms, and other assets are stored in a flat structure called the repository. In LiveCycle ES2, an application is used to manage resources, which are called application assets. A LiveCycle application is simply a logical container of assets at design time.

In LiveCycle Workbench ES2, it is suggested that several applications be used to better manage project resources. That is to say, one project can comprise several LiveCycle applications in Workbench.

The sample application structure above has several advantages:

  1. Different types of assets are categorized into different applications. Forms and processes can be put in separate applications, making the project more logical and easier to administer. Subsequently, multiple developers in different roles can work on the project in parallel more effectively.
  2. Some resources, such as fragments and schemas, are grouped together so they can be better reused.
  3. The impact scope is reduced when developers perform operations such as deploying/undeploying applications.
  4. Parallel development is better supported. Generally, the more applications the team uses, the less team members affect each other.
  5. It leads to better performance. Operations such as synchronize/deploy take much more time on an application with a large number of assets; that is, performance degrades considerably on a large application. For LiveCycle ES2, it is recommended that one application contain fewer than 50 assets.

In LiveCycle Workbench ES2, every application version is composed of a major version and a minor version, and every application asset has revisions. LiveCycle developers can make use of these versions for iterations or even releases. Generally, a release can be a combination of different versions of applications.

For transitions between the development, testing, and production environments, LiveCycle archives can be used. In Workbench, you can create an archive for each application, or one archive for all the applications of your project. These archives can then be imported into other environments.

Original article at http://blogs.adobe.com/livecycle/2011/06/best-practice-for-developing-with-livecycle-workbench-es2-%E2%80%93-application-structure-iteration-and-transition.html.

Access custom Microsoft Office properties using LiveCycle services

– Samartha Vashishtha

Marcel van Espen, over at the Dr Flex and Dr LiveCycle blog, explains how you can create a LiveCycle process to access custom Office properties. His blog post also includes a useful example.

“Within LiveCycle Workbench ES, one of the services in the common category that you can use is ‘Export XMP’. This service will extract all the available metadata from a PDF document. If you have converted a MS-Office document to a PDF document, you will be surprised what metadata is also converted. All these properties now become accessible.”


Read the complete post here.

Original article at http://blogs.adobe.com/ADEPhelp/2011/04/access-custom-microsoft-office-properties-using-livecycle-services.html.