
LDAP: error code 12 – Unavailable Critical Extension

The issue "LDAP: error code 12 – Unavailable Critical Extension" commonly occurs when a client asks an LDAP server to return paged results but the server doesn't support the PagedResultsControl extension (a sketch of such a paged request follows the list below).

  • SunOne 5.2 and 6.3 don’t support PagedResultsControl extension.
  • Active Directory and other LDAP servers support PagedResultsControl extension.
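
For context, here's a minimal JNDI sketch of the kind of paged request involved. The host, port, base DN, and filter are placeholders, and a complete client would also loop on the PagedResultsResponseControl cookie; this only requests the first page:

import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import javax.naming.ldap.Control;
import javax.naming.ldap.InitialLdapContext;
import javax.naming.ldap.LdapContext;
import javax.naming.ldap.PagedResultsControl;

public class PagedSearchSketch {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldapserver:389"); // placeholder host/port

        LdapContext ctx = new InitialLdapContext(env, null);
        // Request pages of 200 entries and mark the control CRITICAL; a server
        // without PagedResultsControl support rejects this with error code 12.
        ctx.setRequestControls(new Control[] {
                new PagedResultsControl(200, Control.CRITICAL) });

        SearchControls sc = new SearchControls();
        sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
        NamingEnumeration<SearchResult> results =
                ctx.search("dc=example,dc=com", "(objectClass=user)", sc); // placeholder base/filter
        while (results.hasMore()) {
            System.out.println(results.next().getNameInNamespace());
        }
        ctx.close();
    }
}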

Working of pagination during LiveCycle sync from an LDAP server
In LiveCycle, users and groups are synched from an LDAP server in batches of 200.
When the results returned from an LDAP server are 200 or more, an AutoDetectionLogic is automatically enabled.
If this AutoDetectionLogic sees that the LDAP server is SunOne, it automatically disables paging.
If it sees that the LDAP server is AD or any non-SunOne server, it automatically enables paging.

Issue
There have been cases where an Enterprise has a proxy server in between which acts as Active Directory, while the ultimate LDAP server running behind it is SunOne.
In such a scenario, the AutoDetectionLogic is forced to enable paging because the proxy server presents itself as Active Directory.
Hence, when the communication ultimately reaches SunOne, we get the error and the sync fails.

Fix
In such a scenario, the AutoDetectionLogic has to be turned off so that LiveCycle doesn't send any pagination requests.
One should follow the steps below to disable paging permanently (an illustrative config fragment follows the list).

  • Login to AdminUI with administrator credentials.
  • Navigate to Home > Settings > User Management > Configuration > Manual Configuration
  • Export the config.xml to file system.
  • Look for the tag entries starting with <entry key="enablePaging" value="…
  • This entry is present under nodes named LDAPUserConfig and LDAPGroupConfig for a particular Enterprise or Hybrid domain.
  • By default the entry is <entry key="enablePaging" value="true"/>
  • When using SunOne as the LDAP server, this entry should be modified to <entry key="enablePaging" value="false"/>
  • Save the config.xml and import it back into LiveCycle.
  • No restart of the application server is required.
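
For orientation, the relevant portion of the exported config.xml might look roughly like the fragment below. The surrounding structure is an assumption based on the node names mentioned above; only the entry line itself is authoritative:

<node name="LDAPUserConfig">
  <map>
    …
    <entry key="enablePaging" value="false"/>
    …
  </map>
</node>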

LifeCycle of LiveCycle Domain Synchronization

A LiveCycle Enterprise Domain is synchronized with users and groups from an external entity.
The external entity can be either of the following:

  • Custom SPI – A custom Service Provider Interface allows one to connect to an external system other than LDAP for fetching users into LiveCycle.
    One can create a custom SPI by implementing the DirectoryUserProvider and DirectoryGroupProvider interfaces.
    The following document details the step-by-step procedure for creating a Directory Provider: http://help.adobe.com/en_US/livecycle/9.0/programLC/help/index.htm?content=001505.html
  • LDAP – Integrating an LDAP server with an Enterprise Domain is very clearly detailed in this blog: http://blogs.adobe.com/livecycle/2009/02/integrating_livecycle_with_the_1.html
    As of LiveCycle ES3, the following LDAP servers are supported:

    • Sun ONE 5.2
    • Sun ONE 6.3
    • Microsoft Active Directory 2003
    • Microsoft Active Directory 2008
    • IBM Tivoli Directory Server 6.0
    • Novell eDirectory 8.7.3
    • Lotus Domino v8.x
    • ADAM 1.1.3790.2075
    • OpenLDAP 2.3.43-12.el5_5.3.i386

I'll talk about the following topics related to Domain Synchronization: how synchronization works with an LDAP server, the UniqueId it relies on, user identity in LiveCycle, and unique identifier migration.

Working of Domain Synchronization with an LDAP Server

An Enterprise Domain registered with an LDAP server works in three steps during a sync:

  • Synchronize Users from LDAP
  • Synchronize Groups from LDAP
  • Synchronize Group Members from LDAP

Let's take an example where an Enterprise Domain is registered with a Microsoft Active Directory with the following specifications:

  • The UniqueId for both Users and Groups is set to objectGUID.
  • The batch size is by default set to 200, which means that Users, Groups, and Group Members will be fetched from LDAP in batches of 200.
  • Let’s consider some test Users from the Active Directory LDAP,
    User 1
    dn: CN=foo,CN=users,DC=example,DC=com
    sAMAccountName: foo
    mail: foo@example.com
    memberOf: CN=baz,CN=Users,DC=example,DC=com
    objectGUID: 1
    User 2
    dn: CN=bar,CN=users,DC=example,DC=com
    sAMAccountName: bar
    mail: bar@example.com
    memberOf: CN=baz,CN=Users,DC=example,DC=com
    objectGUID: 2

In an Active Directory LDAP server, the objectGUID field is a binary field.
For simplicity's sake, I'm treating it as a numeric field here.

The user synchronization phase in this example involves the following steps (see the sketch after this list):

  • Fetch 200 users from LDAP (the batch size is configurable by editing config.xml).
  • Determine the value of the unique identifier for each user.
  • Look for a record in the LiveCycle user table where the canonicalName matches the user's unique identifier.
    • Case A – If the record exists, update the user properties.
    • Case B – If the record does not exist, create a new user record.
    • Case C – If the record exists in the LiveCycle Db but is marked Obsolete, mangle the userID of the previous record and continue creating the new record.
  • Once all the users have been fetched, the users present in the LiveCycle Db but not modified in the current sync cycle are marked OBSOLETE.
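
A self-contained sketch of that pass, using simplified stand-in data structures (maps keyed by the unique identifier) rather than LiveCycle's real schema:

import java.util.*;

public class UserSyncSketch {

    static class UserRecord {
        Map<String, String> props = new HashMap<>();
        boolean obsolete;
        boolean touchedThisCycle;
    }

    // LiveCycle user table stand-in, keyed by canonicalName (= unique identifier).
    static Map<String, UserRecord> userTable = new HashMap<>();

    static void synchronizeUsers(List<Map<String, String>> ldapUsers) {
        // In LiveCycle the users arrive in batches of 200; one flat list keeps
        // this sketch short.
        for (Map<String, String> ldapUser : ldapUsers) {
            String uniqueId = ldapUser.get("objectGUID");
            UserRecord record = userTable.get(uniqueId);
            if (record == null) {
                record = new UserRecord();                 // Case B: create a new record
                userTable.put(uniqueId, record);
            } else if (record.obsolete) {
                record = new UserRecord();                 // Case C: the old record's userID is
                userTable.put(uniqueId, record);           // mangled; a fresh record is created
            }
            record.props.putAll(ldapUser);                 // Case A: update the properties
            record.touchedThisCycle = true;
        }
        // Users present in the table but untouched this cycle are marked OBSOLETE.
        for (UserRecord r : userTable.values()) {
            if (!r.touchedThisCycle) r.obsolete = true;
            r.touchedThisCycle = false;
        }
    }
}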

A similar logic is used in the group synchronization phase.
The group membership phase comes into the picture only if both user and group synchronization are enabled for the Domain.
Group membership links the synced user and group in LiveCycle, mirroring the membership between the two principals at the LDAP end.

UniqueId used in LDAP Synchronization

In the synchronization process, the unique identifier attribute for users and groups plays a very important role.
It serves as the key attribute for helping migrate Principals registered with one LDAP to another.
The LDAP attribute used as the UniqueId should fulfill the following requirements:

  • Unique – The identifier should be unique across the whole user/group repository.
  • Immutable – It does not change for a given user/group.
  • Not recycled – The identifier once assigned to a user/group is never reused.

The following LDAP attributes are not good candidates for a unique identifier:

  • distinguishedName
    For example, consider that a user's distinguished name (dn) CN=foo,ou=finance,DC=bar,DC=com is used as the unique identifier.
    In this case, if foo moves to a different department, the dn will change.
  • loginId
    A user's login ID (samAccountName in Active Directory, uid in SunOne) is sometimes recycled when an old user leaves and a new user with the same name arrives.
    In this case, the new user is given the same userId.
  • email
    Email is a fairly good candidate. However, it might cause issues when emails are recycled or modified.

Therefore, by default, the synchronization logic uses the following LDAP attributes for the different LDAP servers:

  • objectGUID – for Microsoft Active Directory
  • objectGUID – for ADAM
  • nsuniqueId – for SunOne
  • guid – for Novell eDirectory
  • dominoUNID – for Lotus Domino
  • ibm-entryuuid – for IBM Tivoli

However, these attributes can be replaced with any other LDAP attribute that fulfills the above-mentioned requirements.

User Identity in LiveCycle

The user identity of a user in LiveCycle is governed by the following rules.
In a LiveCycle domain,

  • A user's loginid (edcprincipaluserentity.uidstring) is unique, but only within its own domain, i.e. the same loginid can co-exist in another domain.
  • A user's canonicalName (edcprincipalentity.canonicalname) is unique, but only within its own domain, i.e. the same canonicalName can co-exist in another domain.
  • Each user and group is assigned an oid (edcprincipalentity.id), which other systems in LiveCycle use to refer to the user.
    It's unique across all domains.
    Therefore, a process refers to a user using its Oid.
  • A user can be uniquely referred using:
    • User’s LiveCycle DomainName and LoginId
    • User’s LiveCycle DomainName and CanonicalName
    • User’s Oid

Unique Identifier Migration

There are times when an Enterprise needs to migrate its users to another LDAP server, or, say, an Enterprise has multiple LDAP servers and wants to consolidate the principals into a single domain.
In a broad sense, the migration works as follows (see the sketch after this list):

  • Modify the unique identifiers from the old ones to the new ones according to the new LDAP server.
  • The next Synchronization detects that the unique identifier has changed, and the Synchronization logic adapts as follows:
  • Fetch 200 users from LDAP.
  • For each user, determine the value for the old unique identifier.
  • Look for a record in LiveCycle user table, where the record’s canonicalName matches with the old unique identifier.
    • Case A – If the record exists, update the user properties and the canonicalName.
    • Case B – If the record does not exist, create a new user record with the new canonicalName.
  • Once all the users are fetched, find all the users who have not been modified in the current cycle, and mark them OBSOLETE.
  • In the end, the canonicalName (unique identifier) for all users is changed to the new Unique Id.
  • Ensure that during this Synchronization process, the value for old unique identifier is not changed at the Old LDAP’s end.
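
A hypothetical sketch of the modified lookup, with the user table simplified to a map keyed by canonicalName; the method and attribute names are illustrative, not LiveCycle internals:

import java.util.HashMap;
import java.util.Map;

public class UniqueIdMigrationSketch {
    // canonicalName -> user properties; a simplified stand-in for the
    // LiveCycle user table.
    static Map<String, Map<String, String>> userTable = new HashMap<>();

    static void migrateUser(Map<String, String> ldapUser,
                            String oldIdAttr, String newIdAttr) {
        String oldId = ldapUser.get(oldIdAttr);   // e.g. "nsuniqueId" or "mail"
        String newId = ldapUser.get(newIdAttr);   // e.g. "uid" or "objectGUID"
        // Look the user up by the OLD unique identifier (the current canonicalName).
        Map<String, String> record = userTable.remove(oldId);
        if (record == null) {
            record = new HashMap<>();             // Case B: create a new record
        }
        record.putAll(ldapUser);                  // Case A: update the properties...
        userTable.put(newId, record);             // ...and re-key on the new canonicalName
    }
}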

Scenario 1

An Enterprise has configured an LDAP DirectoryProvider in LiveCycle, which uses email as the unique identifier.
Due to various reasons mentioned above, an Enterprise may want to move to objectGUID, whereas other details, such as the LDAP server, domain, and so on, remain unchanged.
In this case, after the unique identifier has been changed, the canonical name for all active Users will be migrated to the new one.

Scenario 2

An Enterprise has configured a SunOne LDAP DirectoryProvider, which uses nsuniqueId as the unique identifier.
As a part of an IT exercise, all the users at the LDAP end have been migrated from SunOne to Active Directory.
This means that all the LiveCycle Users that are part of the Enterprise Domain registered with SunOne will have to migrate to Active Directory.
Strict care has to be taken to preserve each User's identity in such a way that the User's work doesn't get affected.

  • LDAP1 – Old LDAP server, SunOne
  • LDAP2 – New LDAP server, Active Directory

The user accounts will be migrated from LDAP1 to LDAP2.
While doing this, a user account will be active only in one of the LDAP servers.
After the migration, it will be marked inactive in the other one.
As the UniqueId will differ between LDAP1 and LDAP2, one needs to change the unique identifier in a way that maintains the user identity,
i.e. the user identity should be federated between both LDAPs based on a particular attribute which can also serve as the uniqueId during migration.

The following steps need to be performed to conclude a successful migration:

  • In the Directory Provider configured with LDAP1, change the unique identifier from nsuniqueId to uid.
    This assumes that the value of uid (from SunOne) and samAccountName (from Active Directory) remains the same between LDAP1 and LDAP2.
  • Run the Synchronization. It will change the unique identifier from nsuniqueId to uid, thereby updating the canonicalName for the Users in the LiveCycle database.
  • During the migration, users will be disabled in LDAP1 while the same users will be active in LDAP2.
    Now, one needs to configure LDAP2 as the new Directory Provider in the same LiveCycle domain, with the unique identifier set to samAccountName.
    For example, the user Bob is disabled in LDAP1 and enabled in LDAP2.
    In this case, when the Synchronization runs,

    • It fetches the user Bob from LDAP1, where he is inactive, and therefore he will be disabled.
    • It fetches the user Bob from LDAP2, where he is active. His unique identifier is used to look up the user Bob in the LiveCycle database. The record will be found (from the Synchronization by LDAP1) and updated.
  • After the migration is complete, the new Directory Provider can switch to the commonly used uniqueId attribute, i.e. objectGUID in this case, and synchronize again.


Capturing LDAP Traffic between LiveCycle server and LDAP server

In an Enterprise Domain, principals (users/groups) are synced into LiveCycle from an LDAP server via Directory Providers.
In an Enterprise or Hybrid Domain, the LiveCycle users might be authenticated against an LDAP server.
In both cases, it becomes necessary to sniff the traffic when one of the principals hasn't reached LiveCycle, the sync has broken off for some unexplained reason,
or authenticating a user through the LDAP server is failing.
The application server's logs detail only the issues occurring at its own end;
they can't trace issues that occur during the transfer of data or at the LDAP server's end.

In such cases, capturing the traffic right from the moment a principal is fetched until it is transported to the LiveCycle server gives clear insight into the workings of the LDAP protocol and the issue that occurred.

There are many ways of capturing LDAP traffic:

  • Wireshark is a great tool for capturing the LDAP traffic between an LDAP server and LiveCycle server.
  • Snoop is a utility used for capturing traffic on a Solaris machine.
  • TCPDump can be used to capture the LDAP traffic on a Linux machine.
  • LDAPDecoder can be used to capture the traffic by acting as a proxy between the LiveCycle server and the LDAP server.

I prefer using LDAPDecoder for the following reasons:

      • It's not always possible to install Wireshark on a customer's production system.
        Also, Wireshark needs root access, without which the LDAP traffic can't be captured,
        and it doesn't work on headless Unix/Linux servers.
      • Snoop and TCPDump help in capturing traffic on headless Unix/Linux servers, but they capture all of the traffic while one may be interested only in the LDAP traffic.
      • LDAPDecoder works well on both UI and non-UI systems.
        On non-UI systems, LDAPDecoder can work independently as well as in tandem with Snoop and TCPDump:
        the data captured by Snoop and TCPDump can be fed to LDAPDecoder to interpret the LDAP-specific information.
      • LDAPDecoder can be used to capture LDAP traffic between a remote LiveCycle server and remote LDAP server.

LDAPDecoder acts as a proxy between the LDAP server and the LiveCycle server.
The client forwards the request to LDAPDecoder, which decodes the request and then forwards it to the server.
Once LDAPDecoder gets a response from the LDAP server, it decodes it and passes it on to the LiveCycle server, thereby completing the whole flow of LDAP communication.

Okay, enough theory, let's get practical now:
1. Extract the LDAPDecoder.jar from LDAPDecoder.
2. Start the LDAPDecoder as follows on command line,
java -jar LDAPDecoder.jar -h ldapserver -p 389 -L 390 -f output.log

  • -h ldapserver – the host name or IP address of the LDAP server
  • -p 389 – the port of the LDAP server
  • -L 390 – the port on which the LDAPDecoder server listens and to which the LiveCycle server should send requests
  • -f output.log – the file in which the LDAP traffic is captured, i.e. the requests from the LiveCycle server and the responses from the LDAP server

Additionally, one can specify the -s parameter to run the communication between the LDAP server and the LiveCycle server in SSL mode.
For more such options, just type: java -jar LDAPDecoder.jar

3. Open the Enterprise or Hybrid domain in Edit mode.
4. In the server field, mention the hostname or the IP address of the machine on which the LDAPDecoder server is running.
5. In the port field, mention the port to which the LDAPDecoder server is listening, i.e. the port specified with -L parameter, e.g. 390.


6. Save the domain.
7. Sync the Domain in case of Enterprise or make authentication calls in case of either Enterprise or Hybrid domain.
8. Check the details of the LDAP request/response flow between the LiveCycle server and the LDAP server in the log file, i.e. the file specified with the -f parameter, e.g. output.log.

For more information on LDAPDecoder and how to interpret Snoop and TCPDump data, refer to the following doc:
http://fossies.org/unix/privat/slamd-2.0.1.zip:a/slamd/webapps/ROOT/documentation/tools_guide.odt


Using JMX-Console for configuring and debugging LiveCycle applications

What is JMX-Console?

JMX (Java Management Extensions) is a Java technology that provides a way to manage running applications via various utilities and tools.
The running services are registered as MBeans and can be accessed/controlled remotely via JMX-Console.

LiveCycle currently supports three application servers, namely JBoss, WebLogic and WebSphere.
While WebLogic and WebSphere don't provide a UI to connect to JMX managed bean instances,
JBoss provides a management console called JMX-Console to do the same.
In order to manage beans on WebLogic and WebSphere,
Java provides a generic utility called JConsole, which can be connected to any running application server to perform the relevant tasks.
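
A minimal programmatic equivalent, assuming the application server exposes a remote JMX connector at the URL shown (the host and port vary per app server and are placeholders):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxClientSketch {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://LCServer:9999/jmxrmi"); // assumed URL
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // List every registered MBean -- the same inventory JMX-Console shows.
            for (ObjectName name : conn.queryNames(null, null)) {
                System.out.println(name);
            }
        }
    }
}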

Why JMX-Console

  • There are times when the server gets stuck at some error, e.g. the server goes into a hang state. In such cases, the need arises to get insight into the running system so that the issue can be narrowed down.
  • At times one would like to get some information about a registered LiveCycle service, e.g. some metadata, server information, or kernel information.
  • Sometimes one needs to change some settings in the server at run time.
  • One may need to redeploy a web application.
  • One might need to change the logging level for a particular service or decrease the verbosity of the logs.
  • Most of these changes mandate restarting the server, which can be very slow with a LiveCycle bundled with many components.
  • There are times when the host machine is not accessible directly to debug the issue.

Some benefits of JMX-Console for debugging LiveCycle issues

I'll discuss the following topics that can be helpful while debugging a LiveCycle application/service.
a. Login into JMX-Console
b. Redeploying a LiveCycle war
c. Configuring the log levels for a specific package
d. Generating Thread Dump
e. Stopping LiveCycle Server Instance
f. Starting/Stopping the CRX server
g. Command line JMX management

NOTE: the changes made in JMX-Console remain active only while the server is running and vanish once the server is restarted.

Login into JMX-Console

Redeploying a LiveCycle war

Whenever one needs to change some settings in a war, JBoss doesn't allow doing so until the server is stopped.
Of course, one can always copy the war/ear elsewhere, edit it, and then hot-swap it with the existing running war.
But the following seems like a safer and more graceful way to modify a war without shutting down the whole running LiveCycle server.

  • Navigate to http://LCServer:LCPort/jmx-console
  • Under the section “Object Name Filter”, click on the link named jboss.web.deployment
  • To the right hand side under the section jboss.web.deployment, there will be a listing of context-roots to which various wars are associated.
  • Click on the context-root that is associated with the concerned war.
  • This will open the JMX Mbean view of the associated war.
  • The mbean names are of format “war=/context_root_associated_with_war”.
  • Now one can simply start/stop/redeploy the war by clicking on the related “Invoke” button.

Configuring log levels for package(s)

The log levels for a package are changed in JBoss via the Log4jService, configured at \\JBoss\server\server_instance\conf\jboss-log4j.xml,
or by changing the log4j.properties located in a particular war.
JMX-Console provides a remote way of doing the same (a programmatic sketch follows the list below).

  • Navigate to http://LCServer:LCPort/jmx-console
  • Under the section “Object Name Filter”, click on the link named jboss.system
  • Then click on link service=Logging,type=Log4jService under the section “jboss.system” on the right hand side.
  • There are various settings one can alter for logging as per need.
    • Custom Logging File – As pointed out before, the log4j settings are controlled via the file named jboss-log4j.xml.
      One can even change the settings of the server to point to a custom logging.xml file:
      replace the value of the attribute named "ConfigurationURL" from "resource:jboss-log4j.xml" with the custom logging.xml file.
      The custom file, however, needs to be placed at the same location, alongside jboss-log4j.xml.
    • Get specific Log Level details – Have a look at the jboss-log4j.xml.
      It consists of tags named “categories” with different priority values.
      In Log4j world, the category is called “Logger” and the priority value is called the “Log Level”.
      If one wants to know the log level for a particular category specified in the jboss-log4j.xml,
      then enter the name of the Logger for the Operation named “getLoggerLevel” and click on the Invoke button.
    • Set/Edit a new Logger and Log Level – One can create new categories/loggers on the fly and associate a particular package with a Log Level.
      Enter the name of the Logger and the Log Level for the Operation named "setLoggerLevel" and click Invoke.
      The log level for that package remains active as long as the server instance is up and running.
    • Set/Edit multiple Loggers and Log level – One can also set multiple Loggers with a particular Log Level under the Operation named “setLoggerLevels”.
    • Reconfigure Logging – If the logging is not getting reflected for some unknown reason,
      click on the Invoke button for the Operation named "reconfigure".
      This is analogous to editing jboss-log4j.xml, where saving the changes causes the logging to be reconfigured.
    • Edit the root threshold Log Level – When no categories are specified, the Log Level of Root takes control.
      This is specified as the value of the attribute named "DefaultJBossServerLogThreshold".
      Change the value to a required level and click on “Apply Changes”.
      This will activate the Log Level for all those packages which don’t have a category explicitly specified.
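
The same Invoke-button call can also be issued from code. A hedged sketch, assuming a remote JMX connector as in the earlier snippet; the package name com.adobe.idp is only an example:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class LogLevelSketch {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://LCServer:9999/jmxrmi"); // assumed URL
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            ObjectName log4j = new ObjectName(
                    "jboss.system:service=Logging,type=Log4jService");
            // Equivalent to entering the Logger name + level and clicking Invoke.
            conn.invoke(log4j, "setLoggerLevel",
                    new Object[] { "com.adobe.idp", "DEBUG" },
                    new String[] { "java.lang.String", "java.lang.String" });
        }
    }
}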

Generating Thread Dump

A Java thread dump is a way of finding out what every JVM thread is doing at a particular point in time.
This is very helpful in cases where the server has become unresponsive or gone into an unexplained hang state.
A thread dump allows one to get insight into the failing JVM process.
By analyzing the dump, one can tell where exactly the threads are stuck (a local alternative to the JMX-Console route is sketched after the steps below).

  • In order to get the Thread Dump via JMX-Console, navigate to http://LCServer:LCPort/jmx-console
  • Under the section “Object Name Filter”, click on the link named jboss.system
  • Then click on link type=ServerInfo under the section “jboss.system” on the right hand side.
  • Click on Invoke button of the “listThreadDump” operation to generate the ThreadDump.
  • The Thread Dump can then be saved to the file system for analysis.
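
Where JMX-Console isn't reachable, the standard ThreadMXBean gives the same insight from within the JVM; a minimal local sketch:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpSketch {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // true, true -> include lock and synchronizer details, which is
        // exactly what matters when hunting a hang.
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            System.out.print(info);
        }
    }
}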

Stopping LiveCycle Server Instance

The JBoss server instance can be stopped remotely via JMX-Console.
This is especially useful when one doesn't have direct access to the host machine.
i. Navigate to http://LCServer:LCPort/jmx-console
ii. Under the section “Object Name Filter”, click on the link named jboss.system
iii. Then click on the link type=Server under the section jboss.system on the right hand side.
iv. Click on the Invoke button of the "shutdown" operation to shut down the server instance.

Starting/Stopping the CRX server

Have a look at the second section of the blog entry: http://blogs.adobe.com/apugalia/restarting-crx-server-without-stopping-a-jboss-server-instance/

Twiddle
While JMX-Console provides a UI for debugging and changing settings,
the same can be achieved from the command line with another utility named twiddle, which comes bundled with JBoss and is located in its bin folder.
It can perform every task that the JMX-Console can do through the UI.
I found a nice article depicting the usage of Twiddle with examples. Have a look at,
http://weblogic-wonders.com/weblogic/2011/02/13/twiddle-utility-examples/
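
For instance, the thread dump from the earlier section could presumably be generated from the command line along these lines (syntax hedged; check the linked article for the exact options on your JBoss version):

twiddle.sh -s LCServer invoke "jboss.system:type=ServerInfo" listThreadDump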

Note: The jmx-console.war is not shipped with turnkey installations prior to LC ES3.
In such cases, it needs to be downloaded and deployed to the deploy folder of the related server instance.


Restarting CRX Server without stopping a JBoss Server Instance

Since the release of ES3, the CRX Server comes integrated with LiveCycle Document Services. (NOTE: The SA installer should be run on an existing LiveCycle Document Services installation in order to get the benefits of the CRX Server and the Correspondence Management Solution.)

The CRX wars, namely crx-explorer_crx.war and crx-launchpad.war, are bundled inside the core EAR, i.e. adobe-livecycle-jboss.ear.

There are times when a restart of the CRX Server is required due to changes in configuration/settings or issues related to bundle activation.
Since the wars are bundled inside the core ear, the whole JBoss server instance needs to be restarted to get things done.
This ultimately consumes time, and things on the doc-services side also get blocked due to tasks pending at the CRX end.

Let's handle the two issues separately:

i. Issues related to Bundle Activation

There is an easy way out: one can simply shut down the CRX server and restart it without interfering with the LiveCycle server instance.

Go to http://localhost:8080/lc/system/console/vmstat
It has two buttons:
a. Restart – Clicking on this will restart all the bundles and the framework.
b. Stop – Clicking on this will stop all the bundles and the framework.
In order to start the framework and the bundles again, just refresh the browser or hit the above-mentioned URL again.
The LiveCycle services running under the Jboss server instance won’t be affected at all.

The above setting solves the issue of bundle activation because it restarts the CRX launchpad. But it doesn't restart the whole of CRX, due to which changes in repository.xml or bootstrap.properties don't get reflected.

ii. Changes in Configuration/Settings like repository.xml, bootstrap.properties, etc.

For such changes to get reflected, the best way is to re-deploy the CRX war.
JBoss provides a seamless way of doing that through JMX Console.
a. Go to http://localhost:8080/jmx-console. (For more information on JMX-Console handling I’ll write a separate blog post)
b. Once inside JMX-Console, search for the jboss.web.deployment section.
c. Click on the link named war=/crx.
d. Now you can stop and start the war by clicking on the respective “Invoke” buttons to get the settings and configuration reflected. This won’t affect the other LiveCycle Document Services running in parallel.


Customizing ports on Jboss for running multiple server instances

Many a time we need to run multiple JBoss server instances on the same server, e.g. in vertical cluster setups. And when some other application has taken control of port 8080, the need arises to switch the JBoss server instance to run on some other port.

Mentioned below are the ways one can configure different ports on multiple JBoss server instances without changing tens of ports by hand, which is both time-consuming and hard to maintain.

JBoss 4.x.x

JBoss by default uses 8080 for HTTP connections and 8443 for SSL connections.

So, in case one needs to switch port 8080 and its set of related ports of a server instance to something else, JBoss provides another set of three pre-configured port bases to choose from, i.e. 8180, 8280, and 8380.

To change the default port settings:
a. Go to \\Livecycle_Installation\JBoss\server\Server_Instance_Name\conf\jboss-service.xml
b. Look for the following mbean instance:

<mbean code="org.jboss.services.binding.ServiceBindingManager" name="jboss.system:service=ServiceBindingManager">
<attribute name="ServerName">ports-01</attribute>
<attribute name="StoreURL">${jboss.home.url}/docs/examples/binding-manager/sample-bindings.xml</attribute>
<attribute name="StoreFactoryClassName">org.jboss.services.binding.XMLServicesStoreFactory</attribute>
</mbean>

c. The above-mentioned mbean configuration is commented out by default, which keeps the server on port 8080.
Uncomment it and the server instance will be ready to run on ports-01, which is 8180.
This also takes care of the other related ports, incrementing each value by 100; e.g. the SSL port for ports-01 will be 8543.

If one wants to use another port set, just change the value of the attribute "ServerName" to ports-02 (8280) or ports-03 (8380). These are ports pre-configured by JBoss.
For any new custom port other than the ones discussed above, one will have to add new configurations to the file specified in the path given for the attribute "StoreURL".

For more information, please refer to the following doc:
http://docs.jboss.org/jbossas/jboss4guide/r3/html/ch10.html

JBoss 5.x.x

JBoss 5 also comes with a set of four predefined port bases: 8080, 8180, 8280 and 8380.
By default, any server instance is bound to run with 8080 (for HTTP) and 8443 (for HTTPS).

To run with one of the other port sets mentioned above, start the server with the following command:

run -c server_instance_name -Djboss.service.binding.set=ports-01

The above command will start the JBoss server instance on port 8180.
The related port values will likewise be incremented by 100.

If one wants to configure a custom port other than the ones mentioned, one will have to edit the pre-configured bindings (or add new ones) in the following file:
\\LiveCycle_Installation\jboss\server\Server_Instance\conf\bindingservice.beans\META-INF\bindings-jboss-beans.xml

For more information, please refer to the following doc:
https://community.jboss.org/wiki/ConfigurePorts


BeanShell scripting via LiveCycle Workbench

Adobe LiveCycle Workbench provides a great tool for executing BeanShell scripts.

BeanShell scripts allow executing a piece of code on the fly. The best part of BeanShell scripting is that it doesn't add unnecessary syntactic sugar, which complicates things and increases the learning curve. Rather, one can simply use one's Java skills, write the code, and run it on the fly on a running LiveCycle server.

This mostly comes in handy when one wants to execute code quickly on the server but wants to avoid creating a project in an IDE and going through all the hassle of collecting the dependent jars on the classpath, instantiating a ServiceClientFactory, and providing server details to connect.

Since the script executes within the server's VM, it doesn't require prerequisites such as the dependent jars or server connection details.

The following example demonstrates a simple "Hello World" script written in BeanShell and executed on the LiveCycle server.
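
Since BeanShell accepts plain Java, a minimal script of that sort (to be entered in step 8 below) can be as small as:

// System.out is captured by the application server and logged at INFO level.
System.out.println("Hello World from a BeanShell script!");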

I've written a cookbook recipe for invoking various User Manager APIs through BeanShell scripting, which one can find at:

http://cookbooks.adobe.com/post_Accessing_User_Manager_API_through_BeanShell_Scrip-17405.html

1. Connect the Workbench to a running LiveCycle server.

2. Create a new Application. Specify the name of the Application and click on the Finish button.

3. Create a New Process under the Application.

4. Mention the Process name and click on the Finish button.

5. A Process SwimLane will be opened on the right side. Drag the Activity Picker and attach it to the “Default Start Point”.

6. Once done, a “Define Activity” pod comes up. Enter the Activity name and search for “execute script” in the Find box.

Alternatively, the Execute Script activity can be found by navigating to "Foundation -> Execute Script 1.0" under the Service tab.

Click OK.

7. Once done, a new window opens on the left side with the name "executeService Operation". Click on the button in the Input box.

This will open up a Text Area where the code to be executed can be written.

8. Enter the code with the proper imports and click on the OK button. Then click the Save button in the top left corner or press CTRL+S.

9. Go to the Applications tab in the top left corner. Right-click on the relevant process and click on "Invoke".

Click OK on the screen that asks to "Check in all files", and OK again on the next screen that notifies of "No Input Variables".

10. Look into the server logs of the application server on which the LiveCycle server is running. The output will be printed at INFO level.

The above was a simple example showing how BeanShell scripts work in LiveCycle through Workbench.

But in the Enterprise world, the script would interact with inputs from different processes and return processed results to subsequent processes. This can be achieved through the patExecContext object, which is made available to the script.
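
A hedged sketch of that interaction; the process variable names inName and greeting are assumptions for illustration:

// Read a process input via XPath and write a result back for downstream steps.
String name = patExecContext.getProcessDataStringValue("/process_data/@inName");
patExecContext.setProcessDataStringValue("/process_data/@greeting", "Hello " + name);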

An example can be found at:

http://help.adobe.com/en_us/livecycle/9.0/workbenchHelp/help.htm?content=000581.html
