Posts in Category "LiveCycle"

LiveCycle ES2: Unexpected exception. Failed to create directory: \\<cluster>\GDS

Issue

If you have configured the WebSphere node agents in your cluster to start as Windows services (instead of starting them with the batch files), you may experience the following behaviour:

  • in AdminUI > Services > Applications and Services > Application Management, you may receive an ALC-DSC-033-000 error
  • in AdminUI > Services > Applications and Services > Service Management, you will see many services marked Inactive, and you cannot stop/start them

You will also notice the following errors in the SystemOut.log on the individual cluster nodes when starting the servers:

E com.adobe.idp.dsc.registry.service.impl.ServiceStoreImpl createServiceConfigurationFromBOI
    COULD NOT PROCESS SERVICE: Process Management (system)/Queue Sharing VERSION: 1.0 MARKING AS INACTIVE
E com.adobe.idp.dsc.registry.service.impl.ServiceStoreImpl createServiceConfigurationFromBOI TRAS0014I:
    The following exception was logged java.lang.ClassNotFoundException: server.startup : 1Class name
    com.adobe.idp.workflow.dsc.descriptor.WorkflowDSCDescriptor from package com.adobe.idp.workflow.dsc.descriptor not found.
....
W com.adobe.idp.dsc.registry.service.impl.ServiceRegistryImpl getHeadActiveConfiguration
    Head active ServiceConfiguration for service: Process Management (system)/Queue Sharing version: 1.0 is not active as expected.
    The service was probably Marked INACTIVE due to an error loading the service.  Please look at the logs to see if this service was Marked INACTIVE.
...

The SystemOut.log will also contain the following error:

E com.adobe.idp.DocumentFileBackend getTimeSkewDelta DOCS001: Unexpected exception. Failed to create directory: \\es2cluster\GDS.
E com.adobe.idp.Document passivate DOCS001: Unexpected exception. While re-passivating a document for inline data resizing..
E com.adobe.idp.Document passivate TRAS0014I: The following exception was logged com.adobe.idp.DocumentError:
Failed to create directory: \\es2cluster\GDS
    at com.adobe.idp.DocumentFileBackend.getTimeSkewDelta(DocumentFileBackend.java:113)
    at com.adobe.idp.DocumentFileBackend.<init>(DocumentFileBackend.java:86)
    at com.adobe.idp.Document.getGlobalBackend(Document.java:3153)
    at com.adobe.idp.Document.writeToGlobalBackend(Document.java:2977)
    at com.adobe.idp.Document.doInputStream(Document.java:1698)
    at com.adobe.idp.Document.passivateInitData(Document.java:1457)
    at com.adobe.idp.Document.passivate(Document.java:1235)
    at com.adobe.idp.Document.passivateGlobally(Document.java:1205)
    at com.adobe.idp.dsc.management.impl.ArchiveStoreImpl.createNewArchive(ArchiveStoreImpl.java:636)
    at com.adobe.idp.dsc.management.impl.ArchiveStoreImpl._getArchive(ArchiveStoreImpl.java:438)

Reason

The Windows services that you have created do not have the correct privileges to access the LiveCycle global document storage (GDS) directory, which is required for proper operation of the LiveCycle server.

Solution

1. Open the Services view from the Control Panel in Windows.
2. Select the WAS NodeAgent service and go to Properties > Log On > Log on as > This account.
3. Browse for an administrator account with sufficient privileges to access the GDS share.
4. Restart the NodeAgent.
5. Restart the cluster from the WAS ND admin console.
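
If you prefer to script this change (for example, across many nodes), the built-in Windows sc.exe utility can set the log-on account from the command line. This is a minimal sketch; the service name and account are hypothetical placeholders for your environment, and note that sc requires the space after obj= and password=:

sc config "IBMWAS70Service - ctgNode01_nodeagent" obj= "MYDOMAIN\lcservice" password= "********"
net stop "IBMWAS70Service - ctgNode01_nodeagent"
net start "IBMWAS70Service - ctgNode01_nodeagent"

The account must also hold the "Log on as a service" right; the Services dialog grants this automatically, but sc does not.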

Additional Information

To create the nodeagent as a Windows service instead of using the batch files:

1. Change to the \bin directory where WebSphere is installed, for example:

C:\Program Files\IBM\WebSphere\AppServer\bin

2. Run the following command:

wasservice -add ctgNode01_nodeagent
          -servername nodeagent
          -profilePath "C:\Program Files\IBM\WebSphere\AppServer\profiles\ctgAppSrv01"
          -wasHome "C:\Program Files\IBM\WebSphere\AppServer"
          -logFile "C:\Program Files\IBM\WebSphere\AppServer\profiles\ctgAppSrv01\logs\nodeagent\startNode.log"
          -logRoot "C:\Program Files\IBM\WebSphere\AppServer\profiles\ctgAppSrv01\logs\nodeagent"
          -restart true
          -startType automatic

Repeat this for each cluster node. When Windows restarts, the nodeagent on each cluster node will then start automatically.
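
To verify or undo the registration (using the same hypothetical service name as above), the tool also supports status and remove operations:

wasservice -status ctgNode01_nodeagent
wasservice -remove ctgNode01_nodeagent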

reference: (1911394)


LiveCycle ES2: service version not updated when patching DSC components

Issue

If you are patching DSC components through Workbench and using version numbers to track your updates, you may notice that the version of the service does not get updated. The service itself has in fact been updated, and the new functionality should be available; only the displayed version number is stale.

Reason

This is a product issue in LiveCycle ES2 where the service version number does not get updated correctly when an updated component is deployed. You can check the version numbers in Workbench after deploying the component, or in AdminUI > Services > Applications and Services > Service Management.

The version numbers for the services can be updated in the component.xml file included with your DSC component JAR file.  The following line is used to define the service version number:

      <auto-deploy service-id="Log" minor-version="0" major-version="1" category-id="neo-onboarding"/>
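
For context, here is a minimal sketch of where that line sits within component.xml; the component id, version number, and surrounding elements are illustrative placeholders, not taken from a real component:

<component xmlns="http://adobe.com/idp/dsc/component/document">
  <component-id>com.example.logging.dsc</component-id>
  <version>1.1</version>
  <services>
    <service name="Log">
      <!-- bump minor-version/major-version here when patching the service -->
      <auto-deploy service-id="Log" minor-version="1" major-version="1" category-id="neo-onboarding"/>
      <!-- implementation-class, operations, etc. -->
    </service>
  </services>
</component>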

Solution

This issue is fixed in ES3 (LiveCycle 10).  There is a patch available for LiveCycle ES2 SP2 (9.0.0.2) and LiveCycle ES SP4 (8.2.1.4), so contact enterprise support if you require one of those patches.

With the fix, the service versions are updated correctly when the component is deployed.

reference: (182724548/2749526)


LiveCycle ES2: SECJ0305I error in WebSphere log when using an email endpoint

Issue

When using an email endpoint in LiveCycle ES2 on WebSphere, you may notice the following error in the server log each time the email endpoint is invoked or scans for new emails:

00000020 RoleBasedAuth A   SECJ0305I: The role-based authorization check failed for naming-authz operation NameServer:bind_java_object. 
The user UNAUTHENTICATED (unique ID: UNAUTHENTICATED) was not granted any of the following required roles: CosNamingWrite, CosNamingDelete, CosNamingCreate.
00000020 EmailReaderIm E com.adobe.idp.dsc.provider.service.email.impl.EmailReaderImpl getEmailSourceLock NO_PERMISSION exception caught

Reason

This error is repeatedly written to the server log because the CORBA naming service groups in WebSphere are missing the required naming permissions.

Solution

By making a small change in the WAS admin console you can resolve these errors. You need to add the privileges "Cos Naming Write", "Cos Naming Delete" and "Cos Naming Create" to the CORBA naming service groups.

1. Open the WebSphere administration console.

2. Go to Environment > Naming > CORBA Naming service groups.

3. Add the privileges "Cos Naming Write", "Cos Naming Delete" and "Cos Naming Create".

4. Restart the WebSphere application server for the changes to take effect.

reference: (182724546/2998051)


LiveCycle Forms ES: text-overlapping on page break using nested subforms

Issue

If you are using LiveCycle Forms ES to render complex XFA forms as PDF, and the forms contain subforms or tables that can split over multiple pages, you may notice text overlapping the field boundaries after a page break.

Solution

You can use processing instructions (PIs) in the XDP template to control how the page break affects the field content. The following two processing instructions fix the text-overlapping issue so that the contents no longer overlap the field boundaries:

<?layout allowDissonantSplits 1?>
<?layout allowJaggedRowSplits 1?>

You should add these PIs to your XDP templates only if you are encountering the issue described above. You can test them in LiveCycle Designer ES by using the PDF Preview tab.
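
For illustration, the PIs are typically placed as direct children of the root template element in the XDP source; this is a minimal sketch in which the schema version and subform name are placeholders:

<template xmlns="http://www.xfa.org/schema/xfa-template/2.8/">
  <?layout allowDissonantSplits 1?>
  <?layout allowJaggedRowSplits 1?>
  <subform name="form1">
    <!-- page sets, nested subforms, tables ... -->
  </subform>
</template>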

With these PIs added, the text-overlapping is resolved and the content renders correctly on the PDF form.

reference: (182616383/2959283)


LiveCycle ES2: Search template returns results outside of date filter

Issue

If you are using Search Templates in Workspace ES2 to search for tasks between certain dates for a particular process, you may notice that the results return tasks outside of the specified dates.

This problem occurs when using a localized Workspace and adding the Task Start date to the search template criteria, then specifying a date range when executing the search template in Workspace ES2.

The results returned by the localized Workspace versions are not filtered on date at all, whereas the English Workspace returns only the results between the given dates.

Reason

This is a bug in the localized versions of LiveCycle Workspace ES2 where the results are not filtered by the date criteria specified.

You can verify this by using database logging to track which SQL statements are sent to the database when the search template is executed. In the localized Workspace the SQL is as follows:

SELECT  DISTINCT T0.id, T0.status, T0.step_name, T0.route_list, T0.process_name, T0.process_instance_id, T0.action_instance_id, T0.update_time, T0.create_time, T1.id, T1.type, T2.id, T2.status, T2.complete_time, T3.id, T3.workflow_principal_id, T4.id, T4.commonname, T5.id FROM tb_task T0  INNER JOIN tb_assignment T1  ON (T0.current_assignment_id = T1.id) INNER JOIN tb_process_instance T2  ON (T0.process_instance_id = T2.id) INNER JOIN tb_queue T3  ON (T1.queue_id = T3.id) INNER JOIN EDCPRINCIPALENTITY T4  ON (T3.workflow_principal_id = T4.id) INNER JOIN tb_task_acl T5  ON (T0.id = T5.task_id) WHERE (T5.user_id = '9B365DDD-C7D4-102C-8DFF-00000A24CEC9'  AND T0.process_name = 'Task_Forwarding/Task_Forwarding_Test' )

whereas in the English Workspace the SQL contains the date criteria at the end:

SELECT  DISTINCT T0.id, T0.status, T0.step_name, T0.route_list, T0.process_name, T0.process_instance_id, T0.action_instance_id, T0.update_time, T0.create_time, T1.id, T1.type, T2.id, T2.status, T2.complete_time, T3.id, T3.workflow_principal_id, T4.id, T4.commonname, T5.id FROM tb_task T0  INNER JOIN tb_assignment T1  ON (T0.current_assignment_id = T1.id) INNER JOIN tb_process_instance T2  ON (T0.process_instance_id = T2.id) INNER JOIN tb_queue T3  ON (T1.queue_id = T3.id) INNER JOIN EDCPRINCIPALENTITY T4  ON (T3.workflow_principal_id = T4.id) INNER JOIN tb_task_acl T5  ON (T0.id = T5.task_id) WHERE (T5.user_id = '9B365DDD-C7D4-102C-8DFF-00000A24CEC9'  AND T0.process_name = 'Task_Forwarding/Task_Forwarding_Test'  AND T0.create_time >= '2011-08-14 00:00:00'  AND T0.create_time <= '2011-08-22 00:00:00' )

Solution

There is a patch available for Workspace ES2 SP2 (9.0.0.2); contact enterprise support if you require this patch. The issue will also be fixed in future versions of Workspace (ES2 SP3 and ADEP).

reference: (182579627/2951216)


LiveCycle ES: OutOfMemoryError: SystemFailure Proctor : memory has remained chronically below 1.048.576 bytes

Issue

The following exception appears repeatedly in the LiveCycle server log files after the server has been running for some time:

17.04.10 12:04:22:174 CEST 000000bc WorkPolicyEva W com.adobe.idp.dsc.workmanager.workload.policy.WorkPolicyEvaluationBackground
Task run Policies violated for statistics on 'adobews__1730804381:wm_default'. 
Cause: com.adobe.idp.dsc.workmanager.workload.policy.WorkPolicyViolationException: Policy Violation. Will attempt to recover in 60000 ms

17.04.10 12:04:49:402 CEST 000002bc ExceptionUtil E CNTR0020E: EJB encountered an undefined exception calling the method “getObjectType” 
for Bean “BeanId(dms_lc_was#adobe-pof.jar#adobe_POFDataDictionaryLocalEJB, null)”. 
Exception details: java.lang.OutOfMemoryError: SystemFailure Proctor : memory has remained chronically below 1.048.576 bytes 
(out of a maximum of 1.073.741.824 ) for 15 sec.
 At com.gemstone.gemfire.SystemFailure.runProctor(SystemFailure.java:573)
 At com.gemstone.gemfire.SystemFailure$4.run(SystemFailure.java:536)
 At java.lang.Thread.run(Thread.java:811)

When these errors have been repeating in the log for some time, the LiveCycle server usually stops processing new work items and goes into a kind of standby mode, waiting for the garbage collector to clean up the Java heap. So far, this problem has been reported only on WebSphere.

Explanation

LiveCycle has a specific memory-usage threshold which, when reached, results in a call to the garbage collector to clean up the heap. LiveCycle allows the garbage collector a certain amount of time to react and clean the heap; if this does not happen within the specified window, LiveCycle goes into standby mode to prevent data loss due to the memory deficit.

Two subsystems are involved in this out-of-memory condition: Adobe Work Manager and the LiveCycle cache subsystem. Both use a similar approach to identifying a potential or imminent memory deficit.

1. Adobe Work Manager

Adobe Work Manager periodically checks the apparent available memory as a percentage of the maximum allowable size of the heap. It uses this check as a metric of overall system load, which it then uses to plan the execution of background tasks. When encountering what it perceives as a memory deficit, the Work Manager generates the exception above and refrains from scheduling background tasks. Once a garbage collection event resolves the memory deficit, the Work Manager resumes normal operation.

You can disable the Work Manager's memory-availability checking and LiveCycle will continue to work as expected; that is, you can configure Work Manager to refrain from using memory availability as a factor in its scheduling. To do so, use the following property:

-Dadobe.workmanager.memory-control.enabled=false
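
On WebSphere, such a property is typically added to the server's generic JVM arguments (Servers > Application servers > <server> > Java and Process Management > Process definition > Java Virtual Machine). A sketch of the resulting arguments line, with the heap sizes shown purely as placeholders:

-Xms1024m -Xmx1024m -Dadobe.workmanager.memory-control.enabled=false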

2. LiveCycle cache subsystem

The LiveCycle cache subsystem also periodically checks the apparent available memory in the same way as Work Manager. When it perceives a critical memory deficit, it enters an emergency shutdown mode, and LiveCycle functions that depend on the cache subsystem no longer operate properly. There is no way to recover other than restarting the applications.

The critical logic within the cache subsystem senses memory using runtime.freeMemory(). It depends on being able to trigger a garbage collection via System.gc() within the configured wait period for assessing the memory deficit. Reviewing the circumstances under which System.gc() can be called, this subsystem falls within the best practices described by IBM: a GC is requested only when memory is indeed reaching capacity and the JVM would, of its own accord, be scheduling a GC.

When System.gc() is disabled via -Xdisableexplicitgc, the GC trigger is ignored and the cache subsystem must wait for a GC to occur through the JVM's own scheduling. When the system is lightly loaded or idle, memory grows slowly, and the time between memory exceeding the cache subsystem's tolerance and memory actually being exhausted can exceed the configured wait period for deficit resolution. That interval is then detected as a critical memory deficit, triggering an emergency shutdown.

The default configuration of the cache subsystem is:

• chronic_memory_threshold — 1 MB

• MEMORY_MAX_WAIT —  15 seconds

Solution

Adobe recommends avoiding these emergency shutdowns altogether by enabling System.gc() and allowing the cache subsystem to operate as designed.

Adobe recognizes that this solution is not feasible for all customers and has identified the following work-around configuration. If you cannot enable the system garbage collector by removing the -Xdisableexplicitgc parameter, apply the three parameters described below.

-Dgemfire.SystemFailure.chronic_memory_threshold=0 -- zero bytes

With this parameter set to 0, the cache subsystem does not attempt to react to any memory deficit under any circumstances. This setting alleviates the problem reported above, but it also disables the cache subsystem's ability to react to an actual severe memory deficit. The likelihood of a true system OOM is low, and the impact of such a situation extends well beyond the cache subsystem into other LiveCycle and customer applications, so the recovery activities of this one subsystem are unlikely to be critical to overall system health. Adobe therefore considers it reasonable to disable them.

-Dgemfire.SystemFailure.MEMORY_POLL_INTERVAL=360 -- six minutes or 1/10th hour
-Dgemfire.SystemFailure.MEMORY_MAX_WAIT=7200 -- two hours

With memory-deficit detection and its associated recovery activities disabled, there is no need to monitor memory aggressively with these timers. Instead, set MEMORY_MAX_WAIT to a period approximating the longest likely duration between garbage collections on a lightly loaded system, about two hours, and set the poll interval to ten times per hour. The basic polling logic cannot be entirely disabled, but these long poll intervals minimize the already light impact of the polling thread.
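
Putting the work-around together, the generic JVM arguments for each server would carry all three properties with the values discussed above:

-Dgemfire.SystemFailure.chronic_memory_threshold=0
-Dgemfire.SystemFailure.MEMORY_POLL_INTERVAL=360
-Dgemfire.SystemFailure.MEMORY_MAX_WAIT=7200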

Additional information

LiveCycle Cache Subsystem Configurable JVM Properties

For reference, the controlling parameters for the LiveCycle cache subsystem mentioned above are defined as follows:

gemfire.SystemFailure.chronic_memory_threshold = <bytes>

default=1048576 bytes

This setting is the minimum amount of free memory that the cache subsystem tolerates before attempting recovery activities. The initial recovery activity is to attempt a System.gc(). If the apparent memory deficit persists for MEMORY_MAX_WAIT, the cache subsystem enters a system failure mode. If this value is 0 bytes, the cache subsystem never senses a memory deficit or attempts corrective action.

gemfire.SystemFailure.MEMORY_POLL_INTERVAL = <seconds>

default is 1 sec

This setting is the interval, in seconds, at which the cache subsystem thread awakens and assesses system free memory.

gemfire.SystemFailure.MEMORY_MAX_WAIT = <seconds>

default is 15 sec

This setting is the maximum amount of time, in seconds, that the cache subsystem thread tolerates seeing free memory below the minimum threshold. Once this interval is exceeded, it declares a system failure.

reference: (181544183/2607146)


LiveCycle ES: explanation of GDS directories

Introduction:

The global document storage (GDS) in LiveCycle ES is a directory used to store long-lived files, such as PDF files used within a process or DSC deployment archives. Long-lived files are a critical part of the overall state of the LiveCycle ES environment: if some or all long-lived documents are lost or corrupted, the LiveCycle ES server can become unstable. Input documents for asynchronous job invocation are also stored in the GDS and must be available in order to process requests. It is therefore important that the GDS is stored on a redundant array of independent disks (RAID) and backed up regularly.

In general, you should not make manual changes in the GDS directory, as doing so can leave your LiveCycle server in an unstable state. The information below is provided only as an explanation of the directories within the GDS and should not be seen as a guide to making changes.

Directories:

The GDS can contain the following sub-folders:

1. audit - This folder is used to store 'Record and Playback' data when recording is activated through Workbench. 'Record and Playback' can be turned on/off at the orchestration (process) level. If you turn it on for a particular process, data is saved in the /audit folder for every invocation of the process.

Running load tests against processes with recording turned on is therefore not recommended, as this can fill up your hard disk. You can purge the audit folder without consequences (apart from losing your process recordings); it is best to do so by using Workbench to delete the process recordings.

2. backup - This folder is used to store data related to LC backup mode. The folder is managed by LC and should not be modified manually.

3. docmXXX… folders - These folders contain data and files related to long-lived processes. Do not modify the docm folders manually; otherwise the GDS may get out of sync with the database and you will have to restore from a backup.

Once process instances are completed or terminated, you can clean them using the purge utility in the SDK, if it is acceptable for your organization to remove archived process data. Once the completed/terminated process instances are purged, LC also cleans the related docm folders. This can be useful for controlling the disk usage of the LC server.

4. removeOn.XXX… folders - These folders contain data related to the running services in LC. They are managed by LC and are cleaned automatically after the document disposal timeout.

5. sessionInvocation-adobews__XXX… - These folders are created by LC when performing document operations while the LC7 compatibility layer is installed. The leftover empty folders can be cleaned up manually, even while the LC server is running, without breaking any dependencies; a cleanup sketch follows this list. This is important on some platforms such as AIX, where there is a kernel limit (~32,000) on the number of sub-folders in a folder. There is a patch available for LiveCycle ES2 to prevent these folders being created when the LC7 compatibility layer is installed; contact enterprise support if you require this patch.

6. sessionJobManager-XXX – These folders are created by LC when using watched folders to perform document operations.  They are managed by LC and should not be modified manually.
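
As a loose sketch of the manual cleanup mentioned in item 5, the following counts and then deletes the empty leftover session folders. The GDS path is a placeholder, and the syntax shown is GNU find; on AIX the find options may differ:

cd /opt/adobe/gds
find . -maxdepth 1 -type d -name 'sessionInvocation-adobews__*' | wc -l
find . -maxdepth 1 -type d -name 'sessionInvocation-adobews__*' -empty -delete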


LiveCycle ES2: UserM:GENERIC_WARNING: errorCode:12817 errorCodeHEX:0x3211 The user [user] is marked as Obsolete

Issue

If you are accessing any LiveCycle services and have problems getting a response, you may notice the following warning in the server logs:

WARN  [com.adobe.idp.common.errors.exception.IDPLoggedException] (Thread-21) 
UserM:GENERIC_WARNING: [Thread Hashcode: 1859008299] 
com.adobe.idp.common.errors.exception.IDPLoggedException| [AuthenticationManagerBean] 
errorCode:12817 errorCodeHEX:0x3211 message:The user <user> is marked as Obsolete

If you enable DEBUG level logging you will see the following DEBUG information in the log:

=========== Authentication failure detail report ==================
Scheme Type : Username/Password 
UserId : user 
Current Thread : ajp-0.0.0.0-11148-2
Following users were identified as per received authentication data. Details are (UserId, domain, oid)
     - user, DefaultDom, 7C5E5622-96A9-102F-AE67-00000XXXXXXX 
Following are the response details from various authProviders.
1 - com.adobe.idp.um.provider.authentication.LDAPAuthProviderImpl
- Authentication Failed : Exception stacktraces are avialable at TRACE level 
Messages collected for this AuthProvider are provided below
    - LDAP authentication failed for user [user] in Domain [corp.domain]
        - Unprocessed Continuation Reference(s)
2 - com.adobe.idp.um.provider.authentication.LocalAuthProviderImpl
- Authentication Failed : Exception stacktraces are avialable at TRACE level 
Messages collected for this AuthProvider are provided below
    - The user user is marked as Obsolete
    - No local user found with UserId [user] in Domain [DefaultDOM]

These warnings may also be accompanied by an Error 500 if you are attempting to call the LC services through a browser or web application.

Reason

This issue can occur when you attempt to access the services with a user account that has been marked obsolete in the LiveCycle database. This happens, for example, if the user was deleted from LDAP or from the local domain in LiveCycle.

If you have written applications that depend on this user account, you will encounter the problem outlined above when running or calling those applications.

Solution

Either re-create the user in your LDAP or local domain, or create a new user and change your application to reference the new user rather than the obsolete account.

reference: (183305926)


LiveCycle ES2: repeated ServiceNotFoundException in the server log

Description

If you are running LiveCycle ES2, you may notice the following exception message in the server log:

2011-04-20 18:03:54,291 INFO [com.adobe.idp.dsc.workmanager.impl.ExecutableUnitWrapper] No class loader found for service: Events. Cause: com.adobe.idp.dsc.registry.ServiceNotFoundException: Service: Events not found.

Explanation

This message on its own is benign: it does not indicate a problem with the LiveCycle server and can safely be ignored. It is logged at INFO level, not ERROR or WARN, which indicates its low priority.

If the message appears at ERROR or WARN level together with other exceptions, it should not be ignored; analyse the accompanying exceptions to find the root cause of the issue.

Additional information

The work manager in LiveCycle looks for a service called "Events" in order to obtain its class loader, which it needs to serialize/de-serialize objects (ExecutableUnit) whose definitions live in that service. When it cannot find the class loader for the Events service, LiveCycle falls back to the default class loader.

reference: (182264454/2853085)


LiveCycle Designer: XFAImageService error importing a Photoshop JPG into an XFA form

Issue

If you are trying to import JPEG images produced with Adobe Photoshop into an XFA-based form in LiveCycle Designer, you may notice the following error in the warning palette:

Error reading JPeg file : JFIF supports only 1 and 3 component streams. 

XFAImageService: Image cannot be resolved for node: StaticImage1

Solutions

1. Use Photoshop version 6 or earlier to produce the JPEG images

2. Re-save the JPEG images from Photoshop using the "Save for Web…" option in the File menu

3. Re-save the JPEG images using Paint in Windows, or any other image-editing tool that does not add the preview information described below

Additional information

This error occurs when Photoshop 7 or later was used to create the JPEG images. It is caused by a change in the JPEG format produced by Photoshop 7 and later, which embeds a thumbnail preview of the image in the JPEG profile. Such profile information conforms to the JPEG/JFIF specification; however, some applications, such as web browsers or, in this case, LiveCycle Designer, cannot handle the preview data correctly.

Further information can be found below:

http://photo.net/ps7-problems.html

reference: (182368249)


LiveCycle ES: java.lang.OutOfMemoryError deploying DSC components in LCM

Issue

If you are deploying the DSC components using LiveCycle Configuration Manager (LCM) on WebSphere (particularly WAS7), you may receive an error that not all components were deployed successfully. In the server log you may see the following error:

JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.

Reason

The default memory settings for the EJB deploy tool in IBM WebSphere are not sufficient for deploying the LiveCycle components and/or the EAR files.

Solution

To fix this issue, edit %WAS_HOME%\deploytool\itp\ejbdeploy.bat (or ejbdeploy.sh on UNIX) and change -Xms256M -Xmx256M to -Xms1024M -Xmx1024M. Then re-run LCM.

Note: There is no need to restart the app server instance.
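
To locate the heap settings before editing, a simple search of the batch file works (Windows findstr shown; adjust the path for your installation):

findstr Xmx "%WAS_HOME%\deploytool\itp\ejbdeploy.bat"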


LiveCycle ES: ALC-TTN-011-031: Bootstrapping failed for platform component [DocumentServiceContainer] | clustered environments

Issue

If you are attempting a clustered installation of LiveCycle ES, you may experience errors during the bootstrapping phase of LiveCycle Configuration Manager (LCM), with the following exception in the LCM log:

[5/25/11 8:09:12:768 EDT] 0000003f DSCBootstrapp E com.adobe.livecycle.bootstrap.bootstrappers.AbstractBoostrapper log ALC-TTN-011-031: 
Bootstrapping failed for platform component [DocumentServiceContainer]. The wrapped exception's message reads: See nested exception; nested exception is: 
java.lang.Exception: java.lang.NoClassDefFoundError: com.adobe.livecycle.cache.adapter.GemfireCacheAdapter (initialization failure)

[5/25/11 8:09:12:768 EDT] 0000003f DSCBootstrapp E com.adobe.livecycle.bootstrap.bootstrappers.AbstractBoostrapper log TRAS0014I: 
The following exception was logged com.adobe.livecycle.bootstrap.BootstrapException: ALC-TTN-011-031: 
Bootstrapping failed for platform component [DocumentServiceContainer]. The wrapped exception's message reads: See nested exception; nested exception is: 
java.lang.Exception: java.lang.NoClassDefFoundError: com.adobe.livecycle.cache.adapter.GemfireCacheAdapter (initialization failure)
 at com.adobe.livecycle.bootstrap.bootstrappers.DSCBootstrapper.bootstrap(DSCBootstrapper.java:73)
 at com.adobe.livecycle.bootstrap.framework.ManualBootstrapInvoker.invoke(ManualBootstrapInvoker.java:78)
 at com.adobe.livecycle.bootstrap.framework.BootstrapServlet.doGet(BootstrapServlet.java:156)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:743)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)

LCM may also display error dialogs reporting the bootstrap failure.

Reason

In a clustered environment, the cluster caching must be configured correctly and working before the database initialization can complete successfully.

Solution

The cluster caching should be configured using one of the options below, depending on whether you are using UDP or TCP caching:

A) UDP caching uses the following Java argument to set a port number: -Dadobe.cache.multicast-port=<port number>
The multicast port must be unique to the LiveCycle ES cluster (that is, the port must not be used by any other cluster on the same network). It is recommended that you configure the same <port number> on all nodes in the LiveCycle ES cluster.

B) TCP caching uses the following Java argument: -Dadobe.cache.cluster-locators=<IPaddress>[<port number>],<IPaddress>[<port number>]
Configure, as a comma-separated list, the locators for all nodes of the cluster. The value for <IPaddress> is the IP address of the computer running the locator, and the value for <port number> is any unused port between 1 and 65535 (the port must not be used by any other cluster on the same network). It is recommended that you configure the same <port number> for all locators.
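
For illustration, with placeholder values throughout, option A would add a line such as

-Dadobe.cache.multicast-port=33456

while option B would instead use two hypothetical locator hosts:

-Dadobe.cache.cluster-locators=10.30.1.1[22345],10.30.1.2[22345]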

Troubleshooting

Here are some further troubleshooting tips for LCM errors related to the DB initialization step:

  • ensure the DB privileges for the DB user match what is specified in the installation documentation
  • ensure the user that starts the application server has read/write access to the GDS and LC TEMP directories (bootstrapping will fail if the permissions are incorrect)
  • it may be necessary to drop the DB and re-create it if there was an error the first time
  • if using WebSphere, test the DB connection from the WAS admin console

LiveCycle ES: EventPolicyException on IDPSchedulerService during server startup

Issue

If you are running LiveCycle ES (8.x), you may notice the following exceptions in the server log when you restart the application server:

[05.05.11 10:40:43:809 CEST] 00000084 ExceptionCont W Scheduling subscription evaluation resulted in error. 
Reason: There is no active service configuration for service: IDPSchedulerService
[05.05.11 10:40:43:809 CEST] 00000084 EventManagerI E Event Data storage failed. 
Reason: com.adobe.idp.event.policy.EventPolicyException: There is no active service configuration for service: IDPSchedulerService
.....
Caused by: ALC-DSC-023-000: com.adobe.idp.dsc.registry.service.NoActiveServiceConfigurationException: 
There is no active service configuration for service: IDPSchedulerService

The same kind of exception can also refer to other services, such as EncryptionService, OutputService and so on, depending on the services deployed and used in your installation. Something has gone wrong during server startup and these services have not finished loading successfully. There can be many different causes for this behaviour, for example the application server being busy handling other requests or under load while LiveCycle was starting.

Solution

Restart the application server again, taking care not to send any requests or calls to the server if possible.

Troubleshooting

If other applications are deployed on the same application server as LiveCycle, try stopping or removing them to identify which one may be interrupting the LiveCycle startup.

reference: (182305661)


LiveCycle ES2: workload distribution of batch events in clustered environments

Description

If you are producing and consuming batch events in a clustered LiveCycle ES2 environment, you may notice that all events produced are consumed on the same cluster node. This is expected behaviour, even though it may seem that LiveCycle is not taking advantage of the distribution benefits of a cluster.

This behaviour is related to the run-time property of the processes involved, i.e. short-lived (synchronous) versus long-lived (asynchronous), as summarized in the table below.

Explanation

Producer process    Consumer process    Result
Short-lived         Long-lived          Distributed
Short-lived         Short-lived         Same node
Long-lived          Short-lived         Same node
Long-lived          Long-lived          Distributed

When the consumer process is short-lived, the event is created on the first cluster node (server1), and a work item to handle the event is created and assigned to the workmanager on server1. The workmanager handles the work item and synchronously invokes the short-lived consumer process. The execution of that process happens in the same workmanager thread, so everything happens only on server1.

When the consumer process is long-lived, the event is created on server1, and a work item to handle the event is created and assigned to the workmanager on server1.  The workmanager handles the work item and asynchronously sends a request to the JobManager to invoke the long-lived process.  At this point event handling is done and everything has happened on server1.  At some point after this, JobManager picks up the invocation request and starts the long-lived consumer process.  This can happen on either node (server1 or server2) in the cluster.

reference: (182264457)
