LiveCycle Gemfire Distributed Cache Configuration

LiveCycle ES 8.2 comes with a new distributed cache from Gemstone called Gemfire (Gemstone has published a press release about this). In cluster deployments, it is critical that all members of a LiveCycle cluster be able to find one another so that they can keep their individual caches in sync.

Members of a LiveCycle cluster can find one another using either UDP (IP multicast) or TCP. UDP is simpler to configure than TCP because TCP requires that you run an additional TCP Locator service. However, UDP has limitations of its own, especially when cluster members are not all in the same IP subnet.

Please note that once cluster members discover one another, they establish peer-to-peer sockets and communicate over TCP. Therefore, if the TCP Locator service goes down, LiveCycle will not stop working. However, cluster member arrivals and departures will not be communicated to all of the cluster members until the TCP Locator service is restarted.

UDP

To override the default IP multicast address used by Gemfire, a JVM argument can be used. The following example sets the address as well as the port (35001):

-Dgemfire.mcast-address=239.255.2.10 -Dadobe.multicast-port=35001
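
How these flags are passed in depends on the application server. As a sketch only (assuming JBoss started via run.conf; on WebSphere or WebLogic the equivalent generic JVM arguments are set through the respective admin consoles), the arguments could be appended to JAVA_OPTS:

JAVA_OPTS="$JAVA_OPTS -Dgemfire.mcast-address=239.255.2.10 -Dadobe.multicast-port=35001"

Whichever mechanism you use, every member of the cluster must be started with the same multicast address and port, or the members will not find one another.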

Cisco publishes guidelines about allocating IP multicast addresses. The most restrictive scope ("Site-Local Scope") is the range 239.255.0.0 to 239.255.254.255.

To verify that the cache is working, navigate to the LiveCycle temporary folder (configured using LiveCycle Configuration Manager and changeable later via the LiveCycle Admin UI) and open Gemfire.log. On WebSphere, the path will be:
$LC_TEMP/adobews_yourcellname_yournodename_yourappserverinstancename/Caching/

If the cache is working, you should see entries such as these:

GemFire P2P Listener started on tcp:///10.10.20.22:41585
Starting distribution manager aix1:62840/41585
Initial (membershipManager) view = [aix2:42282/35386, aix1:62840/41585]
admitting member ; now there are 1 non-admin member(s)
admitting member ; now there are 2 non-admin member(s)
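
A quick sanity check (a sketch assuming a Unix shell, run from the Caching folder described above on each node) is to grep the log and confirm that the reported member count matches the number of nodes in your cluster:

grep -i "non-admin member" Gemfire.log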

TCP

In environments where UDP does not work, TCP has to be used. Copy the folder %LIVECYCLE_INSTALL_ROOT%\lib\caching\ and its contents to at least two of the horizontal cluster members; if only one TCP Locator instance is run, it becomes a single point of failure. Edit and then run startlocator.bat (or startlocator.sh on Unix) so that the last line looks something like the following:
java -cp .\gemfire.jar com.gemstone.gemfire.internal.SystemAdmin start-locator -port=22345 -Dgemfire.license-type=production -Dlocators=node1.company.com[22345]
where 22345 is the port you specified while running LiveCycle Configuration Manager (LCM) and added as a JVM argument on every cluster member
(-Dadobe.cache.cluster-locators=node1.company.com[22345],node2.company.com[22345]).

Before you run the startlocator batch or shell script, make sure that there are no files named .locator or PID_file in that folder. If they exist, delete them first.
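
On Unix, for example, the cleanup, start, and a quick verification might look like this (a sketch only; /opt/adobe/livecycle/caching is a placeholder for wherever you copied the caching folder, and 22345 is the locator port from the example above):

cd /opt/adobe/livecycle/caching
rm -f .locator PID_file
sh startlocator.sh
netstat -an | grep 22345

The netstat line should show the locator listening on the configured port.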

In 64-bit Windows environments, edit and use the file startlocator_win64.bat instead.

See the LiveCycle clustering guides for more details.

