So you’ve just finished working on this amazing process, which works great thanks to the dozens of operations and sub-processes you’ve precisely put together in Workbench. But now that you’re about to go into production, a few load-testing runs reveal that the solution gets worryingly slow as the invocation volume grows…
The testers are probably complaining already, waiting several minutes to get a PDF rendered or to submit a simple task. But how can you determine what is taking so much time? Which one of the 150 steps that compose your main process is responsible for the performance degradation? Where is this incredibly annoying bottleneck?
The simple approach
Of course you could use some logging to get a detailed picture of how your process is running, something like this:
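The original screenshot is not reproduced here, but the idea is to bracket every operation with a pair of log statements recording timestamps. A minimal Java sketch of that pattern (the step name and the wrapper are hypothetical, standing in for Log steps you would add in Workbench):

```java
// Sketch of the "logging" approach: surround each operation with
// start/finish log lines so execution time can be read from the logs.
import java.util.function.Supplier;
import java.util.logging.Logger;

public class StepTimingDemo {
    private static final Logger LOG = Logger.getLogger(StepTimingDemo.class.getName());

    // Wraps a single step, logging when it starts and how long it took.
    static <T> T timed(String stepName, Supplier<T> step) {
        long start = System.currentTimeMillis();
        LOG.info(stepName + " started");
        try {
            return step.get();
        } finally {
            LOG.info(stepName + " finished in " + (System.currentTimeMillis() - start) + " ms");
        }
    }

    public static void main(String[] args) {
        // Placeholder for a real operation such as a PDF render step.
        String result = timed("renderPdf", () -> "fake-pdf-bytes");
        System.out.println(result);
    }
}
```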
And yes, this will give you the information you’re looking for, but let’s face it: this approach is far from ideal for profiling, for two main reasons:
- You’re effectively doubling the number of steps your process contains, just for logging!
- The amount of logs generated during a load test will be huge, and analyzing them to extract any useful information will be extremely time-consuming.
A more elegant approach, using JMX
A much nicer way to get this data is to use the JMX capabilities of your LiveCycle server. You might already be familiar with JMX, using it to monitor your JVM heap utilization (as described in this post), but it actually allows you to retrieve much more information than that!
In JConsole, if you click on the MBeans tab you can navigate through the services to find the one called AdobeService, where the complete list of services used by LiveCycle can be seen:
For each of these services you can retrieve, in the Attributes section, the average execution time, the number of invocations, and the minimum and maximum invocation times. The same attributes are available for the default LiveCycle services, but also for any specific process defined by the developers.
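These attributes can also be read programmatically through the standard JMX API, which is handy for scripting checks. A minimal sketch against the local platform MBean server (for a remote LiveCycle server you would connect with JMXConnectorFactory instead; the AdobeService pattern shown in the comment is only illustrative):

```java
// Sketch: reading MBean attributes programmatically, the same data
// JConsole displays in the Attributes section.
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ReadAttributes {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // Against a LiveCycle server the query pattern would look like
        // "WLS_LCES_821:Type=AdobeService,*" rather than java.lang:*.
        for (ObjectName name : mbs.queryNames(new ObjectName("java.lang:type=Runtime"), null)) {
            // Each attribute is fetched by name, just like a JConsole row.
            System.out.println(name + " uptime=" + mbs.getAttribute(name, "Uptime") + " ms");
        }
    }
}
```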
Clicking on Operations, you can reset the counters by hitting the Reset button; this is particularly useful before a one-off load-testing run.
Batch extraction of the JMX Counters
Obviously, going through each service manually can be painful, but there is a way to avoid it. The information contained in the MBeans page can also be retrieved in batch using a tool called jmxCounters.jar (available here; simply rename the .adobe extension to .jar), as follows:
- First copy the jmxCounters.jar file to the server, or to a machine with access to the server.
- Then, before running a load test, reset all the counters (note that the pattern might change depending on the server configuration; see below for how to determine it accurately):
java -classpath jmxCounters.jar jvmruntime.ResetCounters -host localhost -port 8888 -pattern "WLS_LCES_821:Type=AdobeService"
- Once the load test is complete, retrieve all the counters with the following command:
java -classpath jmxCounters.jar jvmruntime.GetCounters -host localhost -port 8888 -pattern "WLS_LCES_821:Type=AdobeService" > JMXResults.txt
- For easier interpretation of the results, import the text file into Excel, using commas as separators.
In these commands, the host and port should be adjusted to match the configuration of the server. The pattern can be a bit trickier to figure out, and will differ depending on the server configuration: the best way to determine it is to connect with JConsole (as described above), navigate to the first element of the AdobeService category, and copy the string up to the comma before the Name attribute.
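That "copy up to the comma before the Name attribute" step can also be done in code. A small sketch (the example ObjectName is hypothetical):

```java
// Sketch: deriving the -pattern value from a full MBean ObjectName
// by keeping everything before the ",Name=..." key.
public class DerivePattern {
    static String pattern(String fullObjectName) {
        int cut = fullObjectName.indexOf(",Name=");
        if (cut < 0) cut = fullObjectName.indexOf(",name=");
        return cut < 0 ? fullObjectName : fullObjectName.substring(0, cut);
    }

    public static void main(String[] args) {
        // Hypothetical full ObjectName as copied from JConsole.
        String full = "WLS_LCES_821:Type=AdobeService,Name=RenderPDFService";
        System.out.println(pattern(full)); // prints WLS_LCES_821:Type=AdobeService
    }
}
```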
If JMX is secured on your server, simply provide a username and a password:
java -classpath jmxCounters.jar jvmruntime.GetCounters -host localhost -port 8888 -pattern "WLS_LCES_821:Type=AdobeService" -username "viewer" -password "hello123" > JMXResults.txt
The results and how to interpret them
Once imported into Excel (Data -> Get External Data -> Import Text File…), you should end up with something like this:
As you can see, I’ve sorted the results by average invocation time, as this is the most relevant number for performance optimization.
Now all you have to do is run a few load-testing scenarios, preferably varying inputs such as the number of concurrent users, and consolidate the results to clearly identify where the bottlenecks are and how to get rid of them.
In a situation where LiveCycle is used alongside other systems (web services, databases, or complex interfaces), these numbers will also allow you to exonerate LiveCycle, or at least prove that the performance degradation is not entirely due to your piece of work.