Posts in Category "General"

LiveCycle ES2: This scheduler instance (SchedulerName) is still active but was recovered by another instance in the cluster

Issue

If you are working with LiveCycle ES2 in a cluster you may notice the following message in the server log:

org.quartz.impl.jdbcjobstore.JobStoreSupport findFailedInstances "This scheduler instance (<SchedulerName>) is still active but was recovered by another instance in the cluster. This may cause inconsistent behavior."

Reason

This message typically appears when the clock times on the cluster nodes are not synchronized.  If the clocks on the cluster nodes are more than about 1.7 seconds out of sync, these Quartz messages start appearing in the log.

Solution

Synchronize the time on all cluster nodes and then restart the cluster.  The messages should no longer appear in the log.
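
To see how far a node's clock has drifted before restarting, you can query an NTP server from each node; a minimal check, assuming the standard ntpdate utility is installed (pool.ntp.org is a placeholder for your own time source):

ntpdate -q pool.ntp.org

The -q flag only queries and reports the offset without setting the clock, so it is safe to run on a live node.  To keep the clocks within tolerance permanently, run an NTP daemon (such as ntpd) on every node.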

reference: (1647846)


LiveCycle ES: java.net.BindException in server log following SuSE Linux update

Issue

You may see errors similar to the following in your server log:

WARN  [org.jboss.system.ServiceController] Problem starting service jboss:service=WebService
java.lang.Exception: Port 8083 already in use.
...
[org.jboss.system.ServiceController] Problem starting service jboss:service=Naming
java.rmi.server.ExportException: Port already in use: 1098; nested exception is:
Caused by: java.net.BindException: Address already in use
...
[org.jboss.system.ServiceController] Problem starting service jboss:service=invoker,type=jrmp
java.rmi.server.ExportException: Port already in use: 4444; nested exception is:
Caused by: java.net.BindException: Address already in use

Reason

These errors, and the related failure of the deployed LiveCycle components, are usually the result of an operating system update to the BIND name server.  These updates are provided by opensuse.org and may have been installed in the OS by your administrators.

Such an update breaks any running LiveCycle server because it affects the communication protocols the server relies on.

Solution

You will need to restart the entire Linux operating system so that the updates are fully propagated to all the relevant applications.
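
Before restarting, you can confirm which processes are holding the JBoss ports; a minimal check, assuming a standard Linux net-tools installation:

netstat -lnp | grep -E ':(8083|1098|4444)'

If the ports show as bound to stale or unexpected processes, the operating system restart described above clears them.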

reference: (182815638)


LiveCycle ES2: repeated messages from org.hibernate.util.NamingHelper in WebSphere log

Issue

If you are working with LiveCycle ES2 on WebSphere, you may notice the following informational message repeating every few milliseconds in the WebSphere SystemOut.log:

00000051 NamingHelper  I org.hibernate.util.NamingHelper getInitialContext JNDI InitialContext properties:{}

This message repeats so often that it fills up the log file, which makes it very hard to find actual error or warning messages.

Reason

This message is benign and can be ignored.

Solution

To prevent this message from filling up your log file, you can disable it in the WebSphere console (the equivalent log detail level string is shown after these steps):

  1. In the WebSphere navigation tree, click Servers > Server Types > WebSphere application servers.
  2. Click an application server listed in the right pane.
  3. Click Troubleshooting > Change log detail levels.
  4. Click the Runtime tab.
  5. Under General Properties, enable Save runtime changes to configuration as well.
  6. In the Components list, navigate to the org.hibernate.* package. Click the package, click Message and Trace Levels, and select warning from the list that appears.
  7. Click Apply.
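
These steps amount to changing the server's log detail level specification; a minimal sketch of the resulting string, assuming no other packages have been customized:

*=info: org.hibernate.*=warning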

reference: (182822947/2995147)


LiveCycle ES2: service version not updated when patching DSC components

Issue

If you are patching DSC components through Workbench and using version numbers to track your updates, you may notice that the version of the service does not get updated.  The service itself has actually been updated, and the new functionality should be available.

Reason

This is a product issue in LiveCycle ES2 where the service version number does not get updated correctly when deploying an updated component.  You can check the version numbers in Workbench after deploying the component, or in the AdminUI under Services > Applications and Services > Service Management.

The version numbers for the services can be updated in the component.xml file included with your DSC component JAR file.  The following line is used to define the service version number:

      <auto-deploy service-id="Log" minor-version="0" major-version="1" category-id="neo-onboarding"/>
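
For example, to publish the patched service as version 1.1, you would increment the minor version (illustrative values, based on the sample line above):

      <auto-deploy service-id="Log" minor-version="1" major-version="1" category-id="neo-onboarding"/>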

Solution

This issue is fixed in ES3 (LiveCycle 10).  There is a patch available for LiveCycle ES2 SP2 (9.0.0.2) and LiveCycle ES SP4 (8.2.1.4), so contact enterprise support if you require one of those patches.

With the fix, the service versions are updated correctly after each deployment.

reference: (182724548/2749526)


LiveCycle ES2: SECJ0305I error in WebSphere log when using an email endpoint

Issue

When using an email endpoint in LiveCycle ES2 on WebSphere, you may notice the following error in the server log each time the email endpoint is invoked, or scans for new emails:

00000020 RoleBasedAuth A   SECJ0305I: The role-based authorization check failed for naming-authz operation NameServer:bind_java_object. 
The user UNAUTHENTICATED (unique ID: UNAUTHENTICATED) was not granted any of the following required roles: CosNamingWrite, CosNamingDelete, CosNamingCreate.
00000020 EmailReaderIm E com.adobe.idp.dsc.provider.service.email.impl.EmailReaderImpl getEmailSourceLock NO_PERMISSION exception caught

Reason

This error is repeatedly sent to the server log due to missing permissions for the CORBA naming service groups in WebSphere.

Solution

You can resolve these errors with a small change in the WebSphere administration console: grant the "Cos Naming Write", "Cos Naming Delete", and "Cos Naming Create" privileges to the CORBA naming service groups.

  1. Open the WebSphere administration console.
  2. Go to Environment > Naming > CORBA naming service groups.
  3. Add the privileges "Cos Naming Write", "Cos Naming Delete", and "Cos Naming Create".
  4. Restart the WebSphere application server for the changes to take effect.

reference: (182724546/2998051)


LiveCycle ES: OutOfMemoryError: SystemFailure Proctor : memory has remained chronically below 1.048.576 bytes

Issue

The following exception appears repeatedly in the LiveCycle server log files after the server has been running for some time:

17.04.10 12:04:22:174 CEST 000000bc WorkPolicyEva W com.adobe.idp.dsc.workmanager.workload.policy.WorkPolicyEvaluationBackground
Task run Policies violated for statistics on ‘adobews__1730804381:wm_default’. 
Cause: com.adobe.idp.dsc.workmanager.workload.policy.WorkPolicyViolationException: Policy Violation. Will attempt to recover in 60000 ms

17.04.10 12:04:49:402 CEST 000002bc ExceptionUtil E CNTR0020E: EJB encountered an undefined exception calling the method “getObjectType” 
for Bean “BeanId(dms_lc_was#adobe-pof.jar#adobe_POFDataDictionaryLocalEJB, null)”. 
Exception details: java.lang.OutOfMemoryError: SystemFailure Proctor : memory has remained chronically below 1.048.576 bytes 
(out of a maximum of 1.073.741.824 ) for 15 sec.
 at com.gemstone.gemfire.SystemFailure.runProctor(SystemFailure.java:573)
 at com.gemstone.gemfire.SystemFailure$4.run(SystemFailure.java:536)
 at java.lang.Thread.run(Thread.java:811)

After these errors have repeated in the log for some time, the LiveCycle server usually stops processing new work items and goes into a kind of standby mode, waiting for the garbage collector to clean up the Java heap.  So far, this problem has been reported on WebSphere only.

Explanation

LiveCycle has a specific threshold for memory usage which, when reached, results in a call to the garbage collector to clean up the heap.  LiveCycle allows a certain amount of time for the garbage collector to react and clean the heap. If this does not happen in the specified time window, then LiveCycle goes into standby mode to prevent data loss due to a memory deficit.

Two subsystems are relevant to this out-of-memory condition: Adobe Work Manager and the LiveCycle cache subsystem.  Both use a similar approach to identifying a potential or imminent memory deficit.

1. Adobe Work Manager

Adobe Work Manager periodically checks the apparent available memory as a percentage of the maximum heap size. It uses this check as a metric of overall system load, which it then uses to plan the execution of background tasks. When it perceives a memory deficit, Work Manager generates the exception above and refrains from scheduling background tasks. Once a garbage collection resolves the deficit, Work Manager resumes normal operation.

Work Manager's memory-availability check can be disabled, and LiveCycle will continue to work as expected without it. To configure Work Manager to stop using memory availability as a factor in its scheduling, set the following JVM property:

-Dadobe.workmanager.memory-control.enabled=false

2. LiveCycle cache subsystem

The LiveCycle cache subsystem also periodically checks the apparent available memory in the same way as Work Manager. When the LiveCycle cache subsystem perceives a critical memory deficit, it enters an emergency shutdown mode and no longer operates properly. Consequently, LiveCycle functions that use the cache subsystem no longer operate properly. There is no means to recover other than restarting the applications.

The critical logic within the cache subsystem senses memory using Runtime.freeMemory(). It depends on being able to trigger a garbage collection via System.gc() within the configured wait period for assessing the memory deficit. Reviewing the circumstances in which System.gc() is called, this subsystem falls within the best practices described by IBM: a GC is requested only when memory is indeed reaching capacity and the JVM would, of its own accord, be scheduling a GC.

When System.gc() is disabled via -Xdisableexplicitgc, the GC trigger is ignored and the cache subsystem must wait for the JVM to schedule a GC of its own accord. When the system is lightly loaded or idle, memory grows slowly, so the interval between memory first exceeding the cache subsystem's tolerance and memory actually being exhausted can exceed the configured wait period for deficit resolution. That interval is then detected as a critical memory deficit, triggering an emergency shutdown.

The default configuration of the cache subsystem is:

• chronic_memory_threshold — 1 MB

• MEMORY_MAX_WAIT —  15 seconds

Solution

Adobe recommends preventing these emergency shutdowns by leaving System.gc() enabled and allowing the cache subsystem to operate as designed.

Adobe recognizes that this solution isn't feasible for all customers and has identified the following workaround configuration.  If you cannot enable the system garbage collector by removing the -Xdisableexplicitgc parameter, apply the three parameters described below.

-Dgemfire.SystemFailure.chronic_memory_threshold=0 (zero bytes)

With this parameter set to 0, the cache subsystem doesn't attempt to react to any memory deficit under any circumstances. This setting alleviates the problem reported above, but it also disables the cache subsystem's ability to react to an actual severe memory deficit. The likelihood of an actual system OOM is low, and the impact of such a situation extends well beyond the cache subsystem into other LiveCycle and customer applications, so the recovery activities of this one subsystem are unlikely to be critical to the overall state of system health.  Adobe therefore considers it reasonable to disable them.

-Dgemfire.SystemFailure.MEMORY_POLL_INTERVAL=360 (six minutes, or one tenth of an hour)
-Dgemfire.SystemFailure.MEMORY_MAX_WAIT=7200 (two hours)

With memory-deficit detection and the associated recovery activities disabled, it is neither necessary nor useful to monitor memory aggressively. Instead, set MEMORY_MAX_WAIT to a period approximating the longest likely duration between garbage collections on a lightly loaded system, about two hours, and set the poll interval to ten polls per hour.  The basic polling logic cannot be disabled entirely, but these long poll intervals minimize the already light impact of the polling thread.
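
Taken together, the workaround amounts to adding the following to the application server's JVM arguments (a sketch; the values are the ones recommended above, but adjust them to your environment):

-Dgemfire.SystemFailure.chronic_memory_threshold=0 -Dgemfire.SystemFailure.MEMORY_POLL_INTERVAL=360 -Dgemfire.SystemFailure.MEMORY_MAX_WAIT=7200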

Additional information

LiveCycle Cache Subsystem Configurable JVM Properties

For reference, the controlling parameters for the LiveCycle cache subsystem mentioned above are defined as follows:

gemfire.SystemFailure.chronic_memory_threshold = <bytes>

default=1048576 bytes

This setting is the minimum amount of free memory that the cache subsystem tolerates before attempting recovery activities.  The initial recovery activity is to attempt a System.gc().  If the apparent memory deficit persists for MEMORY_MAX_WAIT, the cache subsystem enters a system failure mode.  If this value is 0 bytes, the cache subsystem never senses a memory deficit and never attempts corrective action.

gemfire.SystemFailure.MEMORY_POLL_INTERVAL = <seconds>

default is 1 sec

This setting is the interval, in seconds, at which the cache subsystem thread wakes up and assesses free system memory.

gemfire.SystemFailure.MEMORY_MAX_WAIT = <seconds>

default is 15 sec

This setting is the maximum amount of time, in seconds, that the cache subsystem tolerates seeing free memory stay below the minimum threshold. After this interval is exceeded, it declares a system failure.

reference: (181544183/2607146)


LiveCycle ES: explanation of GDS directories

Introduction:

The global document storage (GDS) in LiveCycle ES is a directory used to store long-lived files, such as PDF files used within a process or DSC deployment archives. Long-lived files are a critical part of the overall state of the LiveCycle ES environment. If some or all long-lived documents are lost or corrupted, the LiveCycle ES server can become unstable. Input documents for asynchronous job invocation are also stored in the GDS and must be available in order to process requests. It is therefore important that the GDS is stored on a redundant array of independent disks (RAID) and backed up regularly.

In general, it is not recommended to make any manual changes in the GDS directory, as doing so can leave your LiveCycle server in an unstable state.  This information is provided only as an explanation of the directories within the GDS and should not be seen as a guide to making changes.

Directories:

The GDS can contain the following sub-folders:

1. audit -  This folder is used to store ‘Record and Playback’ data when recording is activated through Workbench.  ‘Record and Playback’ can be turned on/off at the orchestration (process) level.  If you turn it on for a particular process, it will save data in the /audit folder for every invocation of the process.

Running load tests on processes with recording turned on is therefore not recommended, as this can fill up your hard disk.  You can purge the audit folder without consequences (apart from losing your process recordings).  It is best to purge it by using Workbench to delete process recordings.

2. backup – This folder is used to store data related to using LC backup mode.  The folder is managed by LC and should not be modified manually.

3. docmXXX… folders – These folders contain data and files related to long-lived processes.  Do not manually modify the docm folders otherwise the GDS may get out-of-sync with the DB and you will have to restore from a backup.

Once process instances are completed or terminated, you can clean them up using the purge utility in the SDK, if it is OK for your organization to remove archived process data.  Once the completed or terminated process instances are purged, LC also cleans the related docm folders.  This can be useful for controlling the disk usage of the LC server.

4. removeOn.XXX… folders – These folders contain data related to the running services in LC.  They are managed by LC and cleaned automatically after the document disposal timeout.

5. sessionInvocation-adobews__XXX… – These folders are created by LC when performing document operations while the LC7 compatibility layer is installed.  The remaining empty folders can be cleaned up manually, even while the LC server is running, without breaking any dependencies.  This is important on some platforms, like AIX, where there is a kernel limit (~32,000) on the number of sub-folders in a folder (a folder-count sketch follows this list).  There is a patch available for LiveCycle ES2 to prevent these folders from being created when the LC7 compatibility layer is installed.  Contact enterprise support if you require this patch.

6. sessionJobManager-XXX – These folders are created by LC when using watched folders to perform document operations.  They are managed by LC and should not be modified manually.
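
Because of the kernel limit mentioned in item 5, it can be worth keeping an eye on how many session folders accumulate in the GDS; a minimal Java sketch, assuming a placeholder GDS path (adjust the path and folder prefix to your environment):

import java.io.File;

public class GdsFolderCount {
    public static void main(String[] args) {
        // Placeholder path: point this at your actual GDS directory
        File gds = new File("/opt/adobe/livecycle/gds");
        // Count the sub-folders created by the LC7 compatibility layer
        File[] sessionDirs = gds.listFiles(f ->
                f.isDirectory() && f.getName().startsWith("sessionInvocation-"));
        int count = (sessionDirs == null) ? 0 : sessionDirs.length;
        System.out.println("sessionInvocation folders: " + count + " (AIX limit is ~32,000)");
    }
}

This only reads directory names; it does not delete anything, so it is safe to run while the server is up.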


LiveCycle ES2: workload distribution of batch events in clustered environments

Description

If you are producing and consuming batch events in a clustered environment in LiveCycle ES2, you may notice that all events produced are consumed on the same cluster node.  This is the expected behaviour, even though it may seem as though LiveCycle is not making use of the distribution benefits of a clustered environment.

This behaviour depends on the run-time property of the processes involved, i.e. short-lived (synchronous) versus long-lived (asynchronous).

Explanation

Producer process    Consumer process    Result
Short-lived         Long-lived          Distributed
Short-lived         Short-lived         Same node
Long-lived          Short-lived         Same node
Long-lived          Long-lived          Distributed

When the consumer process is short-lived, the event is created on the first cluster node (server1), and a work item to handle the event is created and assigned to the workmanager on server1.  The workmanager handles the work item and synchronously invokes the short-lived consumer process.  The execution of that process happens in the same workmanager thread, so everything happens on server1 only.

When the consumer process is long-lived, the event is created on server1, and a work item to handle the event is created and assigned to the workmanager on server1.  The workmanager handles the work item and asynchronously sends a request to the JobManager to invoke the long-lived process.  At this point event handling is done and everything has happened on server1.  At some point after this, JobManager picks up the invocation request and starts the long-lived consumer process.  This can happen on either node (server1 or server2) in the cluster.

reference: (182264457)


LiveCycle ES: java.net.SocketException using WebServices

Issue

If you are using web services in your processes in LiveCycle ES or ES2, you may receive the following exception in the logs:

2011-04-20 17:30:44,072 ERROR [com.eviware.soapui.impl.wsdl.WsdlSubmit] Exception in request: java.net.SocketException: Software caused connection abort: recv failed
2011-04-20 17:30:44,072 ERROR [com.eviware.soapui.SoapUI] An error occured [Software caused connection abort: recv failed], see error log for details
2011-04-20 17:30:44,072 ERROR [soapui.errorlog] java.net.SocketException: Software caused connection abort: recv failed
java.net.SocketException: Software caused connection abort: recv failed
 at java.net.SocketInputStream.socketRead0(Native Method)
 at java.net.SocketInputStream.read(SocketInputStream.java:129)
 at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
 at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
 at org.apache.commons.httpclient.HttpParser.readRawLine(HttpParser.java:77)
 at org.apache.commons.httpclient.HttpParser.readLine(HttpParser.java:105)

This exception occurs when the web service WSDL cannot be reached (for example, because of network issues, incorrect port numbers, or a firewall).  It can also be related to database problems.

Troubleshooting

1. Check that the web service is reachable and responds as expected (see the sketch below).

2. Try to restart the database server.
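
For step 1, a minimal Java sketch that fetches the WSDL over HTTP and reports the status; the URL below is a placeholder, so substitute the WSDL address your process actually uses:

import java.net.HttpURLConnection;
import java.net.URL;

public class WsdlCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: replace with the WSDL configured in your process
        URL wsdl = new URL("http://example.com/services/MyService?wsdl");
        HttpURLConnection conn = (HttpURLConnection) wsdl.openConnection();
        conn.setConnectTimeout(10000); // fail fast on network or firewall problems
        conn.setReadTimeout(10000);
        conn.setRequestMethod("GET");
        System.out.println("HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}

Run it from the LiveCycle server host so that you exercise the same network path (ports, firewall rules) that the process uses.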

Additional information

http://forums.adobe.com/message/3203930

reference: (182264455)


LiveCycle ES2: ALC-FDI-001-305: Operation aborted: Malformed input PDF or data

Issue

When you use the FormDataIntegration Service in LiveCycle ES2 to merge XML data into a PDF file, you may encounter the following exception:

2010-03-29 14:58:31,885 ERROR [com.adobe.livecycle.formdataintegration.client.ImportFormDataException] 
ALC-FDI-001-305: Operation aborted: Malformed input PDF or data. 
2010-03-29 14:58:31,964 ERROR [com.adobe.idp.workflow.dsc.invoker.WorkflowDSCInvoker] 
An exception was thrown with name com.adobe.livecycle.formdataintegration.client.ImportFormDataException message: 
ALC-FDI-001-305: Operation aborted: Malformed input PDF or data. while invoking service FormDataIntegration 
and operation importData and no fault routes were found to be configured.

This exception occurs only with certain PDF files: other XML datasets cause the same error with the affected PDF, yet the same XML data can be used with other PDF files without error. There are no visible problems in the PDF file, and Preflight reports no problems.

Reason

This error occurs because of a bug in our Gibson library. Gibson doesn't correctly handle a rich text field that has an empty body element as its value.

Solution

This issue has been addressed in LiveCycle ES2 SP1 and later versions.

reference: (181504595/2591446)


LiveCycle ES2: “The embedded font program ‘PZSWZL+CustomSymbol’ cannot be read.”

Issue

When you convert PDF files to PDF/A using LiveCycle ES2, and then validate the files with a PDF validator, you receive the following error:

The embedded font program ‘PZSWZL+CustomSymbol’ cannot be read.

The font PZSWZL+CustomSymbol must be embedded.

Validation in LiveCycle ES2 and in Acrobat 8 or 9 reports no errors for the PDF/A.  Validation with current validators, such as Acrobat X Preflight or the 3-Heights PDF Validator, produces the error above.

Solution

This is a problem in LiveCycle ES2 (9.0.0.0, 9.0.0.1, and 9.0.0.2): it should also report that the PDF/A is invalid.  The issue is fixed in LiveCycle ES3 (LC10) and later.  There is a patch available for ES2 SP1 and SP2, so contact enterprise support if you require one of these patches.

Additional information

PDF 1.4 recognizes only the following cmaps for TrueType fonts:

  • cmap subtable with platform ID 3 and encoding ID 1 (Microsoft Unicode, also called [3,1])
  • cmap subtable with platform ID 1 and encoding ID 0 (Macintosh Roman, also called [1,0])

In this case, the PDF file used a custom font, CustomSymbol, that contained a (3,0) cmap subtable, which PDF 1.4 doesn't recognize.  Therefore, Acrobat X and the other PDF/A validators correctly reported an error when checking for PDF/A-1b compliance (PDF/A-1b is based on PDF 1.4).

reference: (181779161/2714061)


LiveCycle ES2: PDF file size bloat using FormDataIntegration importData method

Issue

When you export data with the exportData method of the FormDataIntegration Service and then import the data using importData, the PDF file size increases dramatically.  This problem does not occur if you use the import and export options in Acrobat, or if you use LiveCycle ES 8.2.1 SP3.

Solution

This is a bug in LiveCycle ES2 SP1.  Update to LiveCycle ES2 SP2 or later, or contact Enterprise Support if you require a patch for LiveCycle ES2 SP1 (9.0.0.1).

reference: (181664835/2660679)


LiveCycle ES2: java.net.ProtocolException using password protected WSDL

Issue

When you try to load a password-protected WSDL using the WebService service in LiveCycle ES2 (9.0.0.0), the following exception occurs:

java.net.ProtocolException thrown: Server redirected too many times (20)

Solution

This is a bug in LiveCycle ES2.  Update to LiveCycle ES2 SP1 or later, or contact Enterprise Support if you require a patch for LiveCycle ES2 (9.0.0.0).

reference: (2466491)


LiveCycle ES2: results missing when using a Search Template

Issue

There are no search results when using a process variable at the top of the sort order list in a Search Template in the adminui in LiveCycle ES2 SP1.

Solution

This is a bug in LiveCycle ES2 SP1.  Update to LiveCycle ES2 SP2 or later, or contact Enterprise Support if you require a patch for ES2 SP1 (9.0.0.1).

Additional information

To reproduce this issue:

  1. Create a search template (Services > LiveCycle Workspace ES2 > Search Template Definition) with process variables in the layout column.
  2. Put a process variable at the top of the sort order list in the Sort tab of the Search Template Definition.
  3. Go to Workspace; there are no search results.
  4. Move the process variable down in the sort order list in the Sort tab and save the Search Template.
  5. Refresh Workspace; the result rows now appear in the search template.

The search result rows should appear even when the process variable is at the top of the sort order list.

reference: (181660090/2651808)


LiveCycle ES2: duplicate process variables in Search Template definition

Issue

When you define search templates in the adminui, some process variables are duplicated in the Criteria and Layout tabs. The duplication occurs after you switch between template names in the Identification tab.

Solution

Use LiveCycle ES2 SP2 or ES3, or contact Enterprise Support to obtain a patch for ES2 SP1 (9.0.0.1).

Additional information

To reproduce this error:

  1. Create two search templates (Services > LiveCycle Workspace ES2 > Search Template Definition), each with process variables in the Layout column.
  2. Switch to the other search template name in the Identification tab, then switch back to the first, several times. The process variable is duplicated under the Process Variables heading in both the Criteria and Layout tabs.

reference: (181648653/2651801)
