If you see messages such as the following in your J2EE application server logs, you might want to consider increasing the heap allocated to the appserver JVM hosting LiveCycle, or allocating more CPUs to the server instance:
WorkPolicyEva W com.adobe.idp.dsc.workmanager.workload.policy.WorkPolicyEvaluationBackgroundTask run Policies violated for statistics on ‘adobews__-1335901232:wm_default’. Cause: com.adobe.idp.dsc.workmanager.workload.policy.WorkPolicyViolationException: Work policy violated. Will attempt to recover in 60000 ms
MemoryPolicy W com.adobe.idp.dsc.workmanager.workload.policy.MemoryPolicy applyPolicy Memory threshold exceeded: maximum limit 95.0, current value 100.0
The Work Manager backing off for 60 seconds will not affect the performance of your short-lived orchestrations. However, if you have long-lived orchestrations deployed, their processing will be affected.
Although your mileage will vary, the following heap and garbage-collection settings for the Sun HotSpot JVM have proven effective in avoiding Work Manager policy violations. Please note that we have several customers currently in PRODUCTION with heap sizes of 8 GB or more:
-server (run the JVM in server mode)
-Xms2048m (minimum heap size is 2 GB)
-Xmx2048m (maximum heap size is 2 GB)
-XX:NewRatio=1 (allocate half of the heap to the young generation, which contains the “Eden” space where most new objects are created and then destroyed by minor garbage collection runs)
-XX:MaxTenuringThreshold=100 (let an object survive up to 100 minor garbage collections, being copied between the “Survivor” spaces, before incurring the cost of promoting it to the “Old” Generation)
-XX:PermSize=512m (minimum size of the “Permanent” Generation is 0.5 GB)
-XX:MaxPermSize=512m (maximum size of the “Permanent” Generation is 0.5 GB)
-XX:+UseParallelGC (Use the parallel garbage collector)
-XX:+UseParallelOldGC (Use the parallel garbage collector for the heap’s “Old” Generation)
-XX:ParallelGCThreads=4 (allocate 4 threads for the parallel garbage collector, assuming the server instance has at least 4 CPU cores; use a higher setting if the server instance has more CPU cores).
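Taken together, the flags above can be collected into a single JVM arguments string. Below is a minimal sketch using a JAVA_OPTS variable as it might appear in a JBoss-style run.conf; the variable name is an assumption (WebSphere and WebLogic each have their own generic JVM arguments field), and the 2 GB / 512 MB sizes should be adjusted to your hardware:

```shell
# Sketch of a consolidated JVM arguments string; JAVA_OPTS is a
# JBoss-style convention and is an assumption here -- other appservers
# expose an equivalent "generic JVM arguments" setting instead.
JAVA_OPTS="-server \
-Xms2048m -Xmx2048m \
-XX:NewRatio=1 \
-XX:MaxTenuringThreshold=100 \
-XX:PermSize=512m -XX:MaxPermSize=512m \
-XX:+UseParallelGC -XX:+UseParallelOldGC \
-XX:ParallelGCThreads=4"
echo "$JAVA_OPTS"
```

Keeping -Xms equal to -Xmx (and PermSize equal to MaxPermSize) avoids heap-resizing pauses at runtime, which is why both pairs are pinned to the same value.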
In the x64 world, make sure you deploy LiveCycle to servers with either Intel Xeon or AMD Opteron processors. In the Solaris SPARC world, deploy LiveCycle to servers with SPARC64 VII or UltraSPARC IIIi or IV CPUs. In the IBM AIX world, use POWER7 or POWER6 CPUs.
If deploying LiveCycle to hardware-virtualized environments, make sure you allocate at least two vCPUs at minimum 2 GHz clock speed, preferably 3 GHz or more, minimum 6 GB of RAM and 60 GB of storage to each of the VMs. If using SAN storage, insist on Tier I SAN storage (highest performance).
Micro-partitioning CPUs by clock ticks in POWER AIX environments is not economical. Since most LiveCycle components are licensed by the number of server CPUs (two CPU cores count as one LiveCycle CPU license), it is in your economic interest to deploy LiveCycle to the best-performing CPUs you can afford. LiveCycle performance is CPU-bound in most cases, and micro-partitioning will unnecessarily hamper your performance/price ratio.
Although HyperThreading (Intel) and Chip Multi-Threading (Sun/Oracle) trick the operating system into reporting additional CPU cores, these hardware threads provide only an incremental benefit (roughly 30%), not the 2x benefit that the doubled core count might suggest.
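You can observe this discrepancy directly by comparing the logical processor count the operating system reports with the physical core count. The sketch below assumes a Linux host with the coreutils `nproc` and util-linux `lscpu` tools available; on a HyperThreaded Xeon the logical count is typically double the physical one, which matters when sizing settings such as -XX:ParallelGCThreads:

```shell
# Sketch (Linux, assumes nproc and lscpu are installed): the OS-reported
# logical processor count includes SMT hardware threads, not just
# physical cores.
logical=$(nproc)   # logical processors visible to the OS (and the JVM)
threads_per_core=$(lscpu | awk '/^Thread\(s\) per core:/ {print $4}')
physical=$((logical / ${threads_per_core:-1}))
echo "logical=$logical physical=$physical"
```

Sizing GC threads against the physical core count rather than the inflated logical count avoids overcommitting the parallel collector on hardware-threaded CPUs.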