(Class) cannot be cast to (interface) in a custom LiveCycle component

Problem

You are getting an error along the following lines when running a custom component for LiveCycle ES and ES2:

"[com.adobe.] cannot be cast to [com.adobe.]"

even though you know for sure that the first class is a child or implementor of the second – for instance:

"com.adobe.idp.taskmanager.dsc.client.query.TaskRowImpl cannot be cast to com.adobe.idp.taskmanager.dsc.client.query.TaskRow"

Additionally:

-You have included the JAR files from LiveCycle’s deployment directories in your component JAR
-If you remove the Adobe JARs from your custom component’s JAR, you get a NoClassDefFoundError.

Solution

-Remove the Adobe JARs from your custom component’s JAR
-Modify your component XML and add tags as appropriate to include the Adobe packages you need.

Reason

This puzzling problem comes down simply to how different class loaders interact. To abstract away from any actual LiveCycle classes, I will talk about a fictitious Pan class.

# 1) About class loaders

A Pan is not a Pan when you have two different definitions of a Pan.
When you load a class called Pan with one class loader, and you load it again with a different class loader, they are different definitions in memory of a Pan.
So when you try to assign [a Pan instance from the first class loader] to [a Pan variable defined by the second class loader], the JVM will not match them up as being the same type of Pan – and you will get the unexpected ClassCastException.
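The effect can be reproduced in a few lines of plain Java (a sketch only – Pan here is a local stand-in class, nothing LiveCycle-specific): loading the same class through two sibling loaders produces two distinct Class objects.

```java
import java.net.URL;
import java.net.URLClassLoader;

class Pan { }  // stands in for a real LiveCycle class such as TaskRow

public class LoaderDemo {
    public static void main(String[] args) throws Exception {
        // Where our own classes live on disk (classpath directory or JAR)
        URL here = Pan.class.getProtectionDomain().getCodeSource().getLocation();

        // Two sibling loaders that do NOT delegate to the application loader
        try (URLClassLoader a = new URLClassLoader(new URL[]{here}, null);
             URLClassLoader b = new URLClassLoader(new URL[]{here}, null)) {

            Class<?> panA = a.loadClass("Pan");
            Class<?> panB = b.loadClass("Pan");

            System.out.println(panA.getName().equals(panB.getName())); // true: same name
            System.out.println(panA == panB);       // false: two distinct definitions in memory
            System.out.println(panB.isAssignableFrom(panA)); // false: a cast between them would throw ClassCastException
        }
    }
}
```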

-When the JVM is started, it uses a class loader to start loading all the classes it needs, along with the web application server.
-When the app server starts loading LiveCycle, it creates several class loaders – one for each EAR, for example; and within the EARs, one for each WAR, etc.
-Many different components are loaded by many different class loaders

When you deploy your custom component, it gets loaded by its own class loader. Here’s where it gets tricky.

# 2) Delegating class loaders

If you ask LiveCycle to give you a Pan via some API call, it will give you a Pan that one of its class loaders defined. When in your code you declare a variable to hold that Pan, you’re using your class loader’s Pan definition. A definition clash ensues.

Obviously you don’t want to load your own Pan when LiveCycle already has a definition for a Pan – so remove the Adobe JARs that you included with your custom component. If you run the project now, you may get a NoClassDefFoundError. Why?

If your class loader cannot find a Pan of its own, it delegates the search for the Pan to the class loader that loaded it – the parent class loader. The parent class loader will ask your class loader which packages it should look in – this is specified in your component XML under the appropriate tag. The parent class loader will go through all the packages it is aware of that match the ones you specified in the component XML, and look in those for class definitions. If it can’t find what you are looking for, the search is deferred to its own parent class loader, and so on until the root class loader is reached.

At this point, one of two things can happen:
-if you specified a package in component XML that contains the Pan you seek, that class definition will be passed back to your class loader.
-if you did not specify a package containing the Pan you seek, NoClassDefFoundError will be thrown.
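The delegation step can be sketched in plain Java. This mirrors the standard parent-first logic of ClassLoader.loadClass; LiveCycle’s component loader adds the component-XML package filtering on top, which is not reproduced here:

```java
public class DelegationDemo {

    // A minimal parent-first loader, mirroring ClassLoader.loadClass's default logic
    static class DelegatingLoader extends ClassLoader {
        DelegatingLoader(ClassLoader parent) { super(parent); }

        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            Class<?> c = findLoadedClass(name);      // 1) already defined by this loader?
            if (c == null) {
                try {
                    c = getParent().loadClass(name); // 2) delegate up the parent chain
                } catch (ClassNotFoundException e) {
                    c = findClass(name);             // 3) last resort: this loader's own sources
                }
            }
            if (resolve) resolveClass(c);
            return c;
        }
    }

    public static void main(String[] args) throws Exception {
        DelegatingLoader loader = new DelegatingLoader(DelegationDemo.class.getClassLoader());
        // java.lang.String is found by delegating up to the bootstrap loader,
        // so every loader in the chain shares one definition of it
        Class<?> s = loader.loadClass("java.lang.String");
        System.out.println(s == String.class); // true
    }
}
```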

The document has been changed since it was created and these rights are no longer valid.

As a viewing program, Adobe Reader cannot make any changes to plain PDFs – it is not made for file creation or editing.

However, if you have Adobe Acrobat or Adobe LiveCycle Reader Extensions ES/ES2, you can activate functionality in a PDF that allows even Adobe Reader to handle that given PDF with advanced features, such as filling in forms, saving data into forms, commenting and annotating. These PDFs are known as “Reader-Extended PDFs”.

Sometimes when opening such PDFs, you will see the following warning:

This document contained certain rights to enable special features in Adobe Reader. The document has been changed since it was created and these rights are no longer valid. Please contact the author for the original version of this document.

This means one thing: some program opened the PDF and made changes to the structure of the PDF itself – this is different from form-filling or annotating.

The change could have been made by an Adobe program (although Reader 8.1.0 displayed the message erroneously until 8.1.2 fixed it), another program (typically other PDF writers), or even automated programs that analyze and modify files.

At this point, only Adobe Acrobat can read the file and re-apply the special features. If Reader has detected a structural change, however, it is advisable to obtain a fresh copy of the form, as the changed version might not behave properly. Acrobat can export data from the damaged form, which can then be imported (by Acrobat) into the fresh form.

The fresh form can then be saved with this data in it, after which the special features can be applied to it.
The extended PDF can be returned then to the user to be used in Adobe Reader.

Converting DOC(X) in LiveCycle PDF Generator yields error: ALC-PDG-019-060-Encountered error while importing the XMP file.

When PDFG ES2 converts Word documents, it extracts the metadata and compiles it as an XMP for the PDF.

There is a known problem, however, when a file contains custom metadata: PDFG cannot compile the XMP properly for such files.

To check the metadata in the DOC, open Word.
Use the MS Office menu (the big round one, top left) : Prepare : Properties
A small yellow info bar will appear above the editing area.
In that bar, click “DIP Properties – …” and select “Advanced properties …”
Go to the “Custom” tab

If there are items in the space at the bottom labelled “Properties”, then your DOCX is likely to fail to convert.
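You can also check for custom properties without opening Word: a DOCX file is a ZIP archive, and OOXML stores custom document properties in the docProps/custom.xml part. A quick Java sketch (the sample file built here is a hypothetical stand-in, not a real document):

```java
import java.io.*;
import java.nio.file.*;
import java.util.zip.*;

public class CustomPropsCheck {

    // OOXML keeps custom document properties in docProps/custom.xml;
    // if that part exists, the file carries custom metadata.
    static boolean hasCustomProps(Path docx) throws IOException {
        try (ZipFile zf = new ZipFile(docx.toFile())) {
            return zf.getEntry("docProps/custom.xml") != null;
        }
    }

    public static void main(String[] args) throws Exception {
        // Build a minimal stand-in "docx" (really just a zip) to demonstrate the check
        Path sample = Files.createTempFile("sample", ".docx");
        try (ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(sample))) {
            zos.putNextEntry(new ZipEntry("docProps/custom.xml"));
            zos.write("<Properties/>".getBytes("UTF-8"));
            zos.closeEntry();
        }
        System.out.println(hasCustomProps(sample)); // true: custom metadata present
        Files.deleteIfExists(sample);
    }
}
```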

There are two solutions to this:

A) Workaround.
1/ Log on to the LiveCycle Admin UI
2/ Go to Services : PDF generator : File type Settings : [your file type settings]
3/ Open Word file type settings
4/ Uncheck “Convert document info”
5/ Save

This will prevent PDFG from trying to extract and convert the Word metadata – thus excluding the problematic custom information.

B) Obtain the patch

If you have an Adobe Platinum support agreement, you can contact technical support and request the patch.

URI to reference an XDP in the LC Repository from LC Production Print

There are a few ways of accessing an XDP in the LiveCycle Repository.

One of them is to specify a URI at project design time, in which case the URI will look like this:

LC://servername:8080/Applications/AppName/1.0/hierarchy/yourfile.xdp

However, if you want to specify the URI at run time, you need to populate a variable.
The format of the URI changes to:

$varname = "repository:///Applications/AppName/1.0/hierarchy/yourfile.xdp"

The server name and port remain specified in the Select Template dialog, under Runtime Repository.

Some Central instances do not start on Windows 2008

With Central on Windows 2008, if you have multiple instances installed and start Central, some instances will fail to start, quoting “open of table (….\JFXXX.TMP) failed”.

The Central service in Windows Services is started, but only some of the instances are running. No specific instance consistently starts or fails; it is random.

This is due to Windows TMP file creation APIs, which changed slightly in Windows 2008. For this reason, any version prior to Central 5.7 is likely to be affected on Windows 2008 (32 and 64 bit) when multiple instances are present.

A patch for Central 5.7 can be obtained from Adobe’s Enterprise Support – you can find their contact details from your Support Agreement.

Internet Explorer crashes when using the Reader or Acrobat plug-in

Sometimes Internet Explorer (32-bit) will crash when using the Reader or Acrobat plug-in to display a PDF in a browser window.

The reason is that some third-party PDF creators use Acrobat’s DLLs but, rather than keeping them in their own folders, copy them to system32\ – which causes Acrobat to load defunct DLLs and fall over as a consequence.

To resolve the issue, do the following:

Open C:\Windows\system32

Move the following files somewhere else (for example, create a folder C:\bad_dlls):

ace.dll
AGM.dll
BIB.dll
BIBUtils.dll
JP2KLib.dll
cooltyp.dll

Of course, this means that whatever application installed those DLLs in system32 will stop working.
Identify it, and contact the software’s publisher to check if they have a solution.

If you’re feeling adventurous: you could tentatively open the Acrobat/Reader folder under Program Files\Adobe\Acrobat\Acrobat 9.0\ and COPY the corresponding DLLs to system32 – but that is of the “dirty hack” class and is generally inadvisable. The third-party PDF application would not necessarily be able to use these DLLs anyway.

Many bad pages are printed / data is missing (Central Pro, Output Designer)

When processing a template with data in one environment (in Output Designer, for example, or on the development server), the output seems fine; but when the same data and template are tested somewhere else (a different Designer, a different Central server), the result is that many pages are printed – some blank, some with data in the wrong place, some with data missing.

Solution #1 – the afxon option is not properly specified

This is most likely to be due to a single argument that has been omitted from your processing parameters: “-afxon”.

From the Print_Ag.pdf documentation file:

To discard data for unknown fields, start Print Agent with the -afxon option.
To place data for an unknown field in the next field on the template, start Print Agent with the
-afxoff option. This is the default.

In Output Designer, these options are set under menu [ Tools : Options : Print options ]. You can add here any number of command line arguments, separated by spaces, that will apply whenever Test Presentment is run.

In Central server, arguments can be placed in two different locations:

1) In the Job Management Database

You can access this under [Central install directory]/Server/jfserver.jmd

Note the syntax of the JMD: the list of arguments sits on a single line, enclosed in quote marks. Within this string of characters, double quotes must be escaped by typing two consecutive double quotes.

I posted previously about checking JMD validity – if you have Python 2.x installed, you might want to try out the following: Tamble JMD 0.2a

2) In the data file

You can add arguments in the JOB line of your data file.

If the first line of a data file starts with ^job, the Central engine (whether run using Central Control on Windows, or by launching jfdaemon or jfserver on *nix) will take the arguments listed and append them to the ones already present in the JMD. Since they are applied afterwards, they take precedence.

So if you have an argument -afxoff in your JMD, and you specify -afxon in your JOB line, -afxon will be applied when processing.

Solution #2 – you are using non-SAP compatible templates on a Central activated for SAP data

You may find that the above solution does not seem to apply (but please, test that one first as it is the most likely). In which case, there is a second potential explanation.

Central Pro can be obtained with an SAP adapter which needs to be used with templates specially compiled for this use. To activate this special compilation mode, you need to set the value of the following key to “0”

HKEY_CURRENT_USER\Software\Adobe\Output Designer 5.x\5.x\JfDesign\Compile\

Where ‘x’ should be replaced to correspond to your version of Output Designer. You can see in your problematic template whether the option is already active by looking at the template’s automatically generated preamble – if there are no !FldNotAvail events on the fields, the option is active.

“Invalid options on ^subform command” error (Adobe Central Pro)

Here’s a puzzler that looks strange but is easy to solve – once you understand a key difference between dynamic templates and static templates, and how Central loads and unloads templates.

Situation:
– You have multiple templates that you want to use for processing data
– You can process each template independently without trouble (the data only uses one template)
– When you call different templates (data uses the ^form command), with some combinations of files, you get the following error:
Invalid options on ^subform command

Reason:
You are mixing dynamic templates with static templates

Fix:
Ensure all your templates are fully dynamic by ensuring all fields exist inside subforms

Explanation:
Some of your forms are dynamic, some of them are static. Dynamic forms are created when fields are wrapped in subforms.

Dynamic forms, when compiled, create an embedded Preamble (see this in Output Designer with Ctrl+R, when a form is open)
Static forms do not.

The preamble is basically a set of variable definitions that get stored to a special global variable area.
Preamble variables call subform definitions, which are effectively the structural parts of the template.

When a template is loaded, its preamble data is executed, which references form structures – for example, if you have a field called myfield in a subform called mysub, you will get:

^comment -----------------------------------------------------------------
^comment Subform: (mysub)
^comment -----------------------------------------------------------------
^define group:myfield!FldNotAvail \groupG_mysub\fieldmyfield.
^define group:myfield!FldUsed \groupG_mysub\fieldmyfield.

When a new template is loaded, any pre-existing form structures are flushed from memory (the item referred to by the “\groupG_mysub\fieldmyfield” part) – but the global preamble variables (the “group:myfield!FldNotAvail” part) are not: they cannot easily be distinguished from regular data, so they are kept.

When you load a static form, no preamble data exists to overwrite previous preamble information – in this case, the definitions for the myfield field.

When your data then calls myfield for the static template, the dynamic definition is still in memory, causing Central to try to access a non-existent structure – hence the error.

A short-term solution would be to wrap myfield in a subform – this will cause a dynamic preamble to be created for it, overwriting the other template’s definition of myfield.

Long term, the recommendation is:
-when using multiple templates, avoid where possible having fields of the same name across all templates
-do not mix static templates with dynamic templates – if such a situation arises, convert the static forms to dynamic by grouping their fields in the appropriate subforms. Preferably, leave no field outside a subform.

2D Datamatrix barcodes in ZPL printing with Central Pro Output Server

You may find that when producing a 2D Datamatrix barcode for ZPL printers, the barcode is incorrectly produced. This is linked to the fact that Central’s escape character for the FNC1 control character is also part of ZPL’s “special characters” set. It is possible (and necessary) to customize the escape control character for the FNC1 byte. Configuration for 2D Datamatrix barcodes for ZPL is defined in …\Adobe\Output Designer 5.x\Config\barcode2.zpl

At the end of the file, the comments state:

; 2nd white value - escape control character
; Number representing the ASCII value of
; Zebra escape character used in the
; data.  Not currently supported.

Changing the value will, however, take effect correctly (ignore the fact that it says “unsupported”). To do this:

  1. In the barcode2.zpl file, edit the BarcodeWhite line of the 2D Datamatrix barcode section (at the end of the file, above the comments) appropriately, and save. For example, to use the dollar sign “$” as the escape character, edit as follows:
    BarcodeWhite 6 36 1 1 1

    Where 36 is the ASCII value for the dollar sign character. It is advisable to stick to 7-bit ASCII values.

  2. Open your form design in Output Designer
  3. Go to menu File > Presentment Targets
  4. Select the presentment target (be sure it is highlighted)
  5. Click on the Font Setup button under the list of targets
  6. Make sure the correct fonts are selected – in this case, Barcode2 should be selected. (If this does not work afterwards, try selecting all fonts – though this will result in a bigger MDF.) Click OK to cause the changes in the barcode2.zpl file to take effect.
  7. Re-compile the form to MDF; use this new MDF with your data.

2D Datamatrix barcodes should now come out fine.

Writing formatted text to a Text Field in LC Designer XFA forms

Text fields in XFA forms have limited support for XHTML markup, allowing rich-text fields in a fillable form.

You can populate these fields using JavaScript, but it requires a bit more trickery than a simple call to field.rawValue

Here’s a script that sets the value of a text field to contain styled text – so long as the text field is set to Rich Text.

// This is the envelope that the HTML-formatted data needs to be placed in
// remember to escape quote marks

var envel = '<?xml version="1.0" encoding="UTF-8"?><exData contentType="text/html" xmlns="http://www.xfa.org/schema/xfa-template/2.8/"><body xmlns="http://www.w3.org/1999/xhtml" xmlns:xfa="http://www.xfa.org/schema/xfa-data/1.0/" xfa:APIVersion="Acroform:2.7.0.0" xfa:spec="2.1"><p style="margin-top:0pt;margin-bottom:0pt;font-family:\'Myriad Pro\';font-size:10pt;text-decoration:none">PLACEHOLDER</p></body></exData>';

// This is the HTML-formatted data
var jsdata = '<b>Hello</b> <i>Stylish</i> <u>World</u>';

// put the HTML data in the envelope
jsdata = envel.replace(/PLACEHOLDER/g,jsdata);

// Load the XHTML into the field
// the …, 1, 1 arguments seem to be required for exData to understand the content

form1.sub1.TextField1.value.exData.loadXML(jsdata, 1, 1);

To extract the HTML afterwards, you will need to use “saveXML”. To get just the resulting plain text, you can access via rawValue as normal.

And as AnchorMan says: “Stay Stylish San Diego!” :-)