Archive for December, 2008

Transpromo for your forms

If you are in the business of producing statements for your customers, you are likely interested in adding some targeted advertising. Just adding an advertisement to a form is not hard – the hard part is figuring out how to add the advertisement without causing more pages to be added to your printed output, i.e., how do we place ads in the available whitespace?

There are a couple of different design patterns where you can use JavaScript to discover (and use) available whitespace on dynamic XFA/PDF forms. I will cover the first pattern in today’s post.

To see what the end result looks like, open this sample form. On open we have determined that there is enough room for a very large ad, and we place a “flyer” that is 8 inches tall. Try adding more detail rows by pressing the “+” button. This causes the available white space to shrink and forces us to choose smaller advertisement subforms. After adding two rows, the ad shrinks to a 7 inch flyer. This continues until there is no room for any ads. But then after adding one or two more rows we spill over to a new page. Now once again there is room for a large ad on the second page.

Let’s take a closer look at how this form was designed.

Do nothing until layout:ready

Rendering a dynamic form goes through several processing stages:

  1. Loading content
  2. Merging data with template (create Form DOM)
  3. Executing calculations and validations
  4. Layout (pagination)
  5. Render

There are a couple of script events that tell us when some of these processing stages are complete. The form:ready event fires when step 3 is complete. The layout:ready event fires after step 4.

Discovering whitespace is dependent on having a completed layout. Once all our content has been placed on pages we can examine the resulting placements. Consequently, we put the logic for placing our ad in a layout:ready event. But there is a problem: changing the form after layout:ready causes … another layout and another layout:ready event. Unless we are careful, we will cause an infinite loop of constant [re]layout. To guard against this, the form creates a form variable: vEvalTranspromo to indicate that we are ready for evaluation. Once we have placed our ad, we set it to false. Any time we change the structure of the form (add/remove a subform) we set it to true. (Note that normally we would do this on the server with a print form and would not worry about changes to the structure after the initial layout.)

Structuring the form

This example requires placing your ad content in your form at the spot where a page break will occur. At this location we place an ad container subform – a wrapper that holds a series of [optional] child subforms of varying sizes. These child subforms hold the ad content. Initially there is no content in the ad container, so it has a height of zero and does not impact layout. Our whitespace calculation takes the Y position of the ad container and subtracts it from the height of the content area.

When we have discovered how much space is available, we create the largest possible child ad subform to fit.
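The selection step can be sketched in plain JavaScript. The function name and the array-of-heights representation are mine, not the sample's:

```javascript
// Given the measured whitespace and the heights (in inches) of the
// candidate ad subforms, pick the tallest ad that still fits.
function chooseAd(vAvailableHeight, vAdHeights) {
  var vBest = null;
  for (var i = 0; i < vAdHeights.length; i++) {
    if (vAdHeights[i] <= vAvailableHeight &&
        (vBest === null || vAdHeights[i] > vBest))
      vBest = vAdHeights[i];
  }
  return vBest; // null means no ad fits
}
```

With 7.5 inches of whitespace and candidate flyers of 8, 7 and 6 inches, chooseAd(7.5, [8, 7, 6]) picks the 7 inch flyer; with only 2 inches available it returns null and no ad is placed.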

Next Steps

This form looks at only one aspect of the transpromo problem: discovery/usage of whitespace. Another key component is to select ads based on context. If this were a credit card statement, we would select the ads based on the transaction and customer data. This level of logic requires some sort of integration with a rules engine. This could happen as a pre-processing step or as a web-service request during layout.

Hopefully this has whetted your appetite for transpromo. The sample is not overly complex (less than 100 lines of JavaScript). There is another design pattern that involves more script, but also allows greater flexibility in ad placement. But that will have to wait until January.

Until then, enjoy your holidays. ‘Peace on earth, goodwill toward men’.

Object Expressions in XFA

Form authors tend to resolve SOM (Script Object Model) expressions inside calls to resolveNode() more often than they need to. As a result, their JavaScript code is less readable than it could be. There are alternative coding patterns I can recommend, but first some background.

Object expressions vs SOM expressions

If you wanted to get the value of a field in a purchase order form, there are a couple of ways to get it:

xfa.form.purchaseOrder.address.city.rawValue

xfa.resolveNode("xfa.form.purchaseOrder.address.city").rawValue

The first form is an object expression.  In the second form, the argument passed to resolveNode() is a SOM expression. These are not evaluated the same way, but they almost always return the same result.

Dotted expressions are evaluated by the JavaScript interpreter. Every time the JavaScript interpreter reaches a dot in an expression, it calls into the XFA processor to evaluate the token following the dot. De-constructing the first example:

  1. The JavaScript engine encounters “xfa”. It resolves this easily since “xfa” has been pre-registered with the engine as a known global object.
  2. JS encounters “.form”. It asks the XFA processor for the “form” property of the xfa object. XFA returns the form model object.
  3. JS encounters “.purchaseOrder”. It asks the XFA processor for the “purchaseOrder” property of the form object. XFA returns the purchaseOrder subform object.
  4. We continue to evaluate dots until we reach the “city” object. Then when JS asks XFA for the “rawValue” property of the “city” object, XFA returns a scalar value.

De-constructing the second example:

  1. The JavaScript engine encounters “xfa”, and returns the pre-registered xfa global object.
  2. JS encounters “resolveNode(…)”. It asks the xfa object to evaluate the resolveNode() method. XFA resolves the SOM expression and returns the city object.
  3. JS encounters “.rawValue” and asks XFA for the rawValue property of city.

You can see we take two very different paths, but end up in the same place.

The main difference is that the object expression yields more readable code, but is less efficient. Resolving SOM expressions internally is more efficient than the many more round-trips from JS to the XFA engine. However, in the vast majority of cases, the performance difference is negligible in the context of overall processing. My bias is toward using object expressions for the sake of code readability – unless we know we are in an area of performance critical code.

Note that FormCalc handles object expressions differently from JavaScript. The expression above is not evaluated dot-by-dot; the FormCalc engine passes the entire expression to XFA for evaluation as a SOM expression.

Using "#" in SOM

We use the "#" character to disambiguate properties (font, border, caption etc) from containers (fields, subforms etc). Using "#" tells the SOM parser to treat the following token as a className rather than as a named object. Take this example:

<subform name="Country">
    <field name="border"/>
</subform>

The expression “Country.border” is ambiguous. It could refer to either the <border> property or the field named “border”. The rule is that, by default, the named object takes precedence. If you want to explicitly get the border property you must use “Country.#border”.

Since the "#" character is illegal in JavaScript, any expressions that use it are usually wrapped in a call to resolveNode(). E.g.:

Country.resolveNode("#border").presence = "visible";

As it turns out, there is a more terse syntax available. JavaScript properties can be accessed by two different syntaxes. e.g. A.B can also be expressed as: A["B"]. So the expression above could also be coded as:

Country["#border"].presence = "visible";

But simpler yet, if we are not worried about ambiguity, we can code this as:

Country.border.presence = "visible";

(As I write this, I realize that some of the script in my samples is not careful enough. Any general purpose re-usable code that references the subform.border property really must use the #border notation.)

As I said, in the case where you are not worried about ambiguity, you do not need to qualify with the “#” symbol. In the case with field properties we don’t really need to worry about name conflicts. Unlike subforms, fields do not have children with user-defined names. (ok, that’s not true, but let’s pretend it is for now.) You should be able to code “field.border” without worrying about ambiguity.

Accessing field.property instead of field["#property"] is possible as long as the property does not have an unlimited number of occurrences. This means that we cannot refer to “field.event” – because fields can have any number of <event> elements.

We’ll work through this case with an example. I have seen code that changes the email address of a submit button that looks something like:

this.resolveNode("SubmitButton.#event[0].#submit").target =  
                                       "mailto:" + email.rawValue;

For starters, this could be expressed more tersely as:

SubmitButton["#event"].submit.target = "mailto:" + email.rawValue;

But do not do this.  This is fragile code. It assumes there is only one event, and addresses the first <event> element. To make this easier, Designer has started populating the name property on the <event> element:

<field name="SubmitButton">
  <event name="event__click" activity="click">
    <submit format="xml" textEncoding="UTF-8" … />
  </event>
</field>

As a result, the script can now reliably be coded as:

SubmitButton.event__click.submit.target = "mailto:" + email.rawValue;

Finally, to close the loop on what I said above: “fields do not have children with user-defined names”. As you see, events (and a couple of other properties) can have arbitrary names. But we shouldn’t have to worry about name collisions. There is no interface in Designer for naming these properties. When Designer names them, it generates unique names. As long as nobody has modified their XML source and changed their event to look like <event name="border">, then field.border remains unambiguous.

Transparent Subforms

You will often see references to "#subform[0]", as in:

xfa.form.form1.#subform[0].Button1[0]

The reason you see “#subform” in this expression is because the subform is nameless.

This is the most explicit possible SOM expression that leads us to Button1.

However, this expression also gets us there:

xfa.form.form1.#subform.Button1

We don’t need the [0] references because, by default, evaluation always returns the first occurrence.

The reason we can omit the #subform is because nameless subforms are treated as transparent.

In this example that means that all the children of the nameless subform may be referenced as if they were children of form1:

xfa.form.form1.Button1

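Outside of XFA, the transparency rule can be illustrated with a toy name resolver in plain JavaScript (the data structure and function here are mine, not part of the XFA engine):

```javascript
// Toy illustration of "transparent" nodes: when a child has no name,
// the search continues into its children as if they were one level up.
function findChild(node, name) {
  for (const child of node.children || []) {
    if (child.name === name) return child;
    if (!child.name) {                 // nameless → transparent
      const hit = findChild(child, name);
      if (hit) return hit;
    }
  }
  return null;
}

const form1 = { name: "form1", children: [
  { name: "", children: [ { name: "Button1" } ] }  // nameless subform
]};

findChild(form1, "Button1");  // found, as if Button1 were a direct child of form1
```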
As noted above, SOM parser evaluation is faster than JavaScript object evaluation.  If you have an expression that must be executed many, many times, then it is better to wrap it in resolveNode() or use FormCalc.  However, for most forms with modest amounts of script, the difference in overall form performance is negligible. Most of the time ease-of-scripting and code readability are more important.

However, whether using SOM or not, reducing evaluations is good practice.

Beware of coding patterns that look like this:

xfa.form.form1.header.address.Button1.presence = "hidden";
xfa.form.form1.header.address.Button2.presence = "hidden";
xfa.form.form1.header.address.text1.presence = "hidden";
xfa.form.form1.header.address.text2.presence = "hidden";
xfa.form.form1.header.address.text3.presence = "hidden";

In cases like these, it is more efficient (and readable) to resolve a variable to the common parent node e.g.:

var addr = xfa.form.form1.header.address;
addr.Button1.presence = "hidden";
addr.Button2.presence = "hidden";
addr.text1.presence = "hidden";
addr.text2.presence = "hidden";
addr.text3.presence = "hidden";
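The saving is easy to see outside of XFA with a plain-JavaScript illustration: wrap an object tree in a Proxy that counts property lookups, since each dot in a chain costs one lookup (the tree and counts below are mine, not measurements of the XFA engine):

```javascript
// Count property lookups: every dot in a chain triggers one "get".
let lookups = 0;
function countingProxy(obj) {
  return new Proxy(obj, {
    get(target, prop) {
      lookups++;
      const v = target[prop];
      return (v !== null && typeof v === "object") ? countingProxy(v) : v;
    }
  });
}

const form = countingProxy({
  header: { address: { Button1: { presence: "" }, Button2: { presence: "" } } }
});

// Un-cached: each statement walks the whole chain (4 lookups each).
lookups = 0;
form.header.address.Button1.presence;
form.header.address.Button2.presence;
const uncached = lookups;  // 8

// Cached: resolve the common parent once, then 2 lookups per field.
lookups = 0;
const addr = form.header.address;
addr.Button1.presence;
addr.Button2.presence;
const cached = lookups;    // 6
```

Two fields already save a quarter of the lookups; with five fields, as in the example above, the gap widens further.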

When resolveNode() is Necessary

Some parts of SOM syntax are not supported by the object evaluation of our script engines. I can think of two examples:

  • SOM predicates (see previous blog entry)
  • The ellipsis operator (..) that recurses down the hierarchy finding nodes. While it is handled by FormCalc, it is not legal in JavaScript.

Here is a sample script that shows how these can be combined using resolveNode() to make all hidden fields in a nested subform visible:

var vHiddenList = xfa.resolveNodes('xfa.form..address.#field.[presence == "hidden"]');
for (var i = 0; i < vHiddenList.length; i++)
  vHiddenList.item(i).presence = "visible";

The Deep End


When comparing object expressions to SOM evaluation I said: “These are not evaluated the same way, but they almost always return the same result.”

You might want to know under what circumstances they will return different results. The difference between object expressions and SOM evaluation is that SOM evaluation will take into account the relative occurrence numbers (indexes) of objects, while object expressions cannot. In the example below we have fields bound to this data:

<order>
    <price>4</price>
    <quantity>5</quantity>
    <subtotal/>
    <price>10</price>
    <quantity>6</quantity>
    <subtotal/>
</order>

The form looks like this:

<subform name="order">
   <field name="price"/>
   <field name="quantity"/>
   <field name="subtotal">
      <calculate>
         <script contentType="application/x-javascript">
            order.price.rawValue * order.quantity.rawValue
         </script>
      </calculate>
   </field>
   <field name="price"/>
   <field name="quantity"/>
   <field name="subtotal">
      <calculate>
         <script contentType="application/x-javascript">
            order.price.rawValue * order.quantity.rawValue
         </script>
      </calculate>
   </field>
</subform>


The calculations for the subtotal fields will both return the result “20.00”. This is because when we evaluate the expressions order.price and order.quantity from the context of the subtotal field, the JavaScript engine will always return the first occurrence of the price and quantity fields – for both occurrences of the subtotal field. However, we could change the calculation to:

subtotal.resolveNode("order.price").rawValue *
subtotal.resolveNode("order.quantity").rawValue;

In this case the second occurrence of the subtotal field would evaluate to “60.00”. This is because the occurrence number of the subtotal field causes the SOM evaluator to return the second occurrence of the price and quantity fields. If we were using FormCalc, then the expression order.price * order.quantity would return the expected result, since FormCalc uses SOM natively.  Fortunately, we don’t encounter this pattern very often, so most form authors can assume that there is no difference between SOM evaluation and object expression.

Naked Field references

Based on what I’ve described above, you might wonder how we are able to resolve “order.price” in JavaScript. How does JS resolve a naked reference to “order”? We can’t register all field names with the JavaScript interpreter. If you were used to coding in Acroforms, you used expressions like:

this.getField("order.price")

in order to find field objects.

The way the XFA processor works is that when JavaScript encounters an unrecognized object, it tries to resolve it as a property of “this”. In our example, “order” is really evaluated as “this.order”. Technically the subtotal field doesn’t have a property called “order”, but XFA does a check to see if there is an object called “order” that is in scope of subtotal.

While this is really great for keeping the code easy to write and readable, it has a side-effect that causes grief. When encountering an unknown token, the JavaScript engine first asks the XFA processor to resolve the token, and if not found, then checks JavaScript variables.  This order of evaluation causes issues for a calculation script that looks like:

var x = 42;

field.rawValue = x;

In this example, the field will get populated with the value of the x property (this.x).  This is one of the reasons why I normally prefix my variable names with a "v", in order to reduce the likelihood of a conflict with a property name.

Note that FormCalc allows variables to take precedence over properties.  In FormCalc the script above would behave as expected and populate the field with "42".  The evaluation order in JavaScript is due to a limitation with the engine when this feature was first implemented.  As far as we know, that limitation is now gone.  At some point we would like to change the order of evaluation for JavaScript — but would do so in a safe way so that older forms continue to behave as they do today.

Canadian/US Address Data Capture

When I fill out an interactive form that prompts for an address that could be Canadian or US, I am constantly disappointed with the data capture experience. Usually the form uses a single field to capture both state and province: State/Province:________ with a drop down that lists all states and provinces. And then a single field to capture both zip code and postal code: Zip/Postal Code:__________ . Or worse, the captions on the field are biased toward a US address, but allow you to enter values for a Canadian address. I.e. you get prompted for a zip code, but it allows you to type in a postal code.

The exercise for this blog entry is to come up with a data entry experience that is tailored according to country. The samples build on the work from the previous blog entry that dealt with validating a Canadian postal code.

Single Schema

The premise of the exercise is that you want to have only one place in your data where you store an address – whether Canadian or US. Both samples generate the same XML address data.


Validate a zip code

To be fair, I thought I should try to offer advanced validation for Zip codes.  After all, I did a whole blog entry on Canadian postal codes.  No offence to my American friends, but zip codes are not nearly as interesting as postal codes. When I poked around to see if I could do more advanced validation beyond the standard Zip or Zip+4, I was pretty disappointed. The only thing I found was that there is a correlation between the first digit and the state. For example, for zip codes starting with a “1”, the state must be one of: Delaware, New York or Pennsylvania. Better than nothing. The sample forms have a utility function to validate a zip code:

* validateZipCode() – validate whether a field holds a valid zip code
* @param vZip — the zip code field. If the validation fails, this field
* will be modified with an updated validation message
* @param vState (optional)– a corresponding field value holding the
* state abbreviation.  This method will make sure the first digit of
* the zip code is valid for this state.
* @return boolean — true if valid, false if not.

Keystroke validation

For the Canadian postal code validation, I introduced a change event that forced the entry to upper case. This time around, I have extended that concept to disallow keystrokes that would cause an invalid zip or postal code. A few words of explanation about some of the xfa.event properties that were used:

  • xfa.event.change – represents the delta between the previous change and the current one. Usually this is the content of a single keystroke. However it can also contain the contents of the clipboard from a paste operation. This property can be updated in the change event script to modify the user’s input. It can be set to an empty string to swallow/disallow user input.
  • xfa.event.newText – represents what the field contents will look like after the changes have been applied. Modifying this property has no effect.
  • xfa.event.selEnd – The end position of the changed text. Usually when the user is typing, we are positioned at the end of string, but the user could be inserting characters at any position.

Here is the change event script for the zip code:

Address.Address.USAddress.Zip::change – (JavaScript, client)
// restrict entry to digits and a dash
if (xfa.event.change.match(/[0-9\-]/) == null)
    xfa.event.change = "";

// Allow the hyphen at the 6th character only
if (xfa.event.change == "-" && xfa.event.selEnd != 5)
    xfa.event.change = "";

// If the 6th character is a digit, and they’re typing at the end, insert the hyphen
if (xfa.event.change.match(/[0-9]/) != null &&
    xfa.event.newText.length == 6 &&
    xfa.event.selEnd == 5) 
    xfa.event.change = "-" + xfa.event.change;

var vMax = 9;
if (xfa.event.newText.indexOf("-") != -1)
    vMax = 10;

// don’t allow any characters past 9 (10 with a hyphen)
if (xfa.event.newText.length > vMax)
    xfa.event.change = "";

In hindsight, I could have done a better job with this script.  It is still possible to enter invalid data.  e.g. after adding the hyphen at the 6th character, the user could cursor back and insert digits, forcing the hyphen beyond the 6th character.  A better approach might be to modify the validateZipCode() method so that it will validate partial zip codes.  Then block any user input that doesn’t result in a correct partial zip code.
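A partial-zip check along those lines might look like this sketch (mine, not the sample's code):

```javascript
// Accept any string that is a prefix of a valid Zip or Zip+4.
// A change event could then swallow any keystroke whose resulting
// newText fails this test.
function isValidPartialZip(vText) {
  return /^\d{0,5}$/.test(vText) ||       // up to the first 5 digits
         /^\d{5}-\d{0,4}$/.test(vText);   // 5 digits, hyphen, up to 4 more
}
```

For example, isValidPartialZip("12345-6") is true, while isValidPartialZip("1234-") is false – so inserting a digit before the hyphen of "12345-" would be blocked, since "123455-" is not a valid partial zip.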

There is a similar block of logic for the postal code change event.

Customizing Data Capture

The really hard part of this data capture problem is how to tailor the input by country. I have two samples that take different approaches.

Sample 1: Different subforms for each country

In this sample, the strategy is to use two subforms for data capture. One subform that has a province and postal code field for Canadian addresses. One that is tailored for capturing a US address.

To make this design work, we create two subforms (CanadianAddress and USAddress), set the binding of each subform to “none”. Then bind the individual fields to the address data. The reason for this approach is that we want both subforms to populate the same data. Multiple fields are allowed to bind to the same XML data element, but you cannot bind multiple subforms to the same XML data.

Show/hide logic. It is not enough to simply set the presence of the subforms to visible/invisible. A hidden field will still run its validation script. We want to make the subforms optional and add/remove them as appropriate. To make this exercise a little more interesting, I assumed that we were not in a flowed layout. Now the problem is that unless you’re in a flowed context, Designer does not allow you to make the subform optional (under the binding tab). However, the XFA runtime does not have this restriction. There are two workarounds: 1) Modify the source in the XML view 2) fix it in script. I chose the latter approach. Subform occurrences are managed by the <occur> element. By default, the address subforms will be defined as:

<occur initial="1" max="1" min="1"/>

We can change the minimum via script in order to make them optional:

Address.Address.Country::initialize – (JavaScript, client)
USAddress.occur.min = "0";
CanadianAddress.occur.min = "0";

Once the subforms are defined, simply place them on top of each other at the same page location. When changing country from the country drop down list, the subforms will toggle on/off accordingly:

Address.Address.Country::validate – (JavaScript, client)
// Choose which subform address block to use depending on the country
_USAddress.setInstances(this.rawValue == "U.S." ? 1 : 0);
_CanadianAddress.setInstances(this.rawValue == "Canada" ? 1 : 0);

Sample 2: One Subform, change the field properties

In this sample, the strategy is to create one set of dual-purpose fields. One field to capture either a postal code or a zip code and one field to capture either a state or a province. When the country changes, we modify the field definitions so that they behave appropriately. The changed properties included the caption text, the picture clauses and the contents of the state/province drop down lists. The validation that happens in the change event and in the validation script needs to branch to accommodate the appropriate context.

The logic to toggle the field definitions looks like:

Address.Address.Country::validate – (JavaScript, client)
if (this.rawValue == "U.S." && 
    ZipPostal.caption.value.text.value != "Zip Code:") {
    ZipPostal.caption.value.text.value = "Zip Code:";
    ZipPostal.ui.picture.value = "";
    ZipPostal.format.picture.value = "";

    StateProv.caption.value.text.value = "State:";

    StateProv.addItem("Alabama", "AL");
    StateProv.addItem("Alaska", "AK");
    StateProv.addItem("Arizona", "AZ");
  . . .
    StateProv.addItem("Wyoming", "WY");
} else if (this.rawValue == "Canada" &&
           ZipPostal.caption.value.text.value != "Postal Code:") {
    ZipPostal.caption.value.text.value = "Postal Code:";
    ZipPostal.format.picture.value = "text{A9A 9A9}";
    ZipPostal.ui.picture.value = "text{OOO OOO}";

    StateProv.caption.value.text.value = "Province:";

    StateProv.addItem("Alberta", "AB");
    StateProv.addItem("British Columbia", "BC");
    StateProv.addItem("Manitoba", "MB");
. . .
    StateProv.addItem("Yukon", "YT");
}

Comparing the approaches

  • Both samples work in Reader versions 7, 8 and 9
  • Sample 2 is easier to design, even though it requires more script.
  • Sample 1 is easier to extend in the event that you want your address block to handle more than just two countries.
  • Sample 1 requires dynamic forms.
  • Sample 2 could be modified to work for forms with fixed-pages. You would need to change the form design so that the caption is represented by a protected field (captions can be modified only on dynamic documents).

Adventures with JavaScript Objects

I have to start with an admission — I’m not a trained JavaScript programmer.  This became evident to me as I reviewed the latest version of the exclusion group sample form.  I was not happy with the performance, so I thought I would stick in a timer and see if I could measure and improve performance.  Adding the timer was no problem.  I changed the code on the subform remove button.  Note the time calculations at the beginning and end:

form1.#subform[0].NPSA3.NPTable.Row1.Cell14::click – (JavaScript, client)

var vTime1 = (new Date()).getTime(); // gets the time in milliseconds

// Removing a subform in the middle of a list can
// change the SOM expressions of subforms later in the list.
// This means that we have to clear the error list.

if (_Row1.count > _Row1.min)
// rebuild the error list

var vTime2 = (new Date()).getTime();
console.println(vTime2 - vTime1);


For my test I added 20 subform rows.  With a total of 21 rows there are 65 exclusion groups on the form.  Deleting a row causes all 65 groups to be re-evaluated.  (actually most of them get evaluated twice, but that is a bug behavior that I cannot control).  With the measurement in place I could see that deleting one row was taking 2753 milliseconds on my laptop.  As I deleted more, the deletions sped up — since there were fewer groups to evaluate each time.  By the time I was down to two rows, the delete took 454 milliseconds.  All much too slow.
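The same measurement can be wrapped in a small reusable helper (my own sketch, not from the sample form):

```javascript
// Time any function in milliseconds, the same way the click script does.
function timeIt(vFunc) {
  var vTime1 = (new Date()).getTime();
  vFunc();
  var vTime2 = (new Date()).getTime();
  return vTime2 - vTime1; // elapsed milliseconds
}
```

In Reader, something like console.println(timeIt(vRemoveRow)) would log the elapsed time without cluttering the click script itself.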

It did not take long for me to realize that my problem was the JavaScript objects I had defined to implement exclusion group behavior.  Here is where I expose my JavaScript naiveté.  I wanted to define an abstract group definition so that the implementation could be either based on a subform or on an array group.  When I set up the base class, I followed a C++ pattern.  I defined a base class with methods that the derived classes would override:

function exclusionGroup() {}
exclusionGroup.prototype.length = function() {return 0;}
exclusionGroup.prototype.item   = function(nItem) {return null;}
exclusionGroup.prototype.getMax = function() {return null;}
exclusionGroup.prototype.getMin = function() {return null;}

One of the derived classes:

// Implementation of exclusionGroup for a subform container
function subformGroup(vSubform)
{
    this.mSubform = vSubform;
}
subformGroup.prototype         = new exclusionGroup();

subformGroup.prototype.length  = function()
    {return this.mSubform.nodes.length;};

subformGroup.prototype.getMax  = function()
    {return this.mSubform.vMaxSelected.value;}

subformGroup.prototype.getMin  = function()
    {return this.mSubform.vMinSelected.value;}

subformGroup.prototype.selectedContainer = function()
{
    if (this.mSubform.vOldSelectedContainer.value == "")
        return null;
    return this.mSubform.resolveNode(this.mSubform.vOldSelectedContainer.value);
}

What I failed to take into account is that JavaScript is completely interpreted.  E.g. when the getMax() method on the derived class gets called, it does not matter that the method also exists in a base class.  The interpreter simply checks the class instance to see if it has the getMax() method.  The base class was just extra processing overhead with no benefit.  The base class might have had some benefit if we had shared implementations of some methods — but we didn’t. So I removed the base class.  I created two classes: subformGroup and arrayGroup that both define the same member variables and functions.  There is no type checking.  The onus is on the programmer to make sure the property names, methods and parameters all match.

The other revelation is that I did not need to extend the objects using prototype.  Using prototype impacts the class definition — i.e. all instances of a class.  You can extend a single instance of a class without bothering with prototype.  That seemed to improve performance as well.
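Here is a minimal plain-JavaScript illustration of both points (the names are mine): the interpreter checks the instance before the prototype chain, so a per-instance method shadows the base class, and an instance can be extended without touching prototype at all.

```javascript
function exclusionGroup() {}
exclusionGroup.prototype.getMax = function () { return null; };

// Derived "class" that sets its method directly on the instance.
function subformGroup(vMax) {
  this.getMax = function () { return vMax; };
}
subformGroup.prototype = new exclusionGroup();

var g = new subformGroup(3);
// The instance method wins; the base-class getMax() is never consulted.
g.getMax(); // 3
```

Removing the base class changes nothing about what g.getMax() returns, which is exactly why it was pure overhead.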

In the revised form, my object definitions looked like:

// Implementation of exclusionGroup for a subform container
function subformGroup(vSubform)
{
    this.mSubform          = vSubform;
    this.len               = vSubform.nodes.length;
    this.maximum           = vSubform.vMaxSelected.value;
    this.minimum           = vSubform.vMinSelected.value;
    this.selectedContainer = function()
    {
        if (this.mSubform.vOldSelectedContainer.value == "")
            return null;
        return this.mSubform.resolveNode(this.mSubform.vOldSelectedContainer.value);
    }
}

// Implementation of exclusionGroup for an array group
function arrayGroup(vHost)
{
    this.mHost             = vHost;
    this.len               = vHost.length.value;
    this.maximum           = vHost.max.value;
    this.minimum           = vHost.min.value;
    this.selectedContainer = function()
    {
        if (this.mHost.vOldSelectedContainer.value == "")
            return null;
        return this.mHost.parent.resolveNode(this.mHost.vOldSelectedContainer.value);
    }
}

Simplifying the object definitions had a big impact on performance.  Deleting the 21st subform went from 2753 milliseconds to 1236 milliseconds.  Well over a 2x improvement.  Still not fast enough for my liking, but good enough for one round of performance tuning.  Now maybe I’ll go out and buy a JavaScript book…

Exclusion Groups v4.0

December 5: Happy Sinterklaas! I hope your children did not find a lump of coal in their boots this morning.

I received some good feedback from the “build your own” exclusion groups introduced in a series of previous posts (part 1, part 2, part 3). These exclusion groups have enriched functionality not found in standard radio button groups.

Since then it has been pointed out that the strategy of using a subform to contain the group is not always practical. In particular there are two cases where using a subform does not work:

  1. PDFs with fixed pages (artwork PDFs). In this case, Designer does not allow you to add subforms to the form.
  2. The case where the exclusion group is spread across cells in a table. In this case, it is not possible to wrap the cells in a container subform. We cannot use the row subform, since there are other cells in the row that are not part of the exclusion group.

The new sample provided here allows you to create an exclusion group by providing an array of fields (or subforms) to include in the group. To make this possible, I have added two new methods to the scGroup script object:

* addArrayGroup – Create an exclusion group by providing an array of
* containers (fields/subforms).
* Call this method from the initialization script of a field/subform.
* Note that a single initialization script can create multiple
* groups.  Once the groups have been initialized, you need to call
* makeArrayExclusive() from the calculate event.
* @param vHost – The field or subform hosting the array group.
* @param vName – a name for the group (unique within this host)
* @param vList – An array of fields or subforms that will make up
*                the group
* @param vMin  – The minimum number of elements in the group that
*                need to be selected
* @param vMax  – The maximum number of elements in the group that
*                may be selected
* @param vValidationMessage – The message to use if the minimum
*    number of objects have not been selected
*    (and if the object does not already have a validation message))
function addArrayGroup(vHost, vName, vList, vMin, vMax, vValidationMessage)

* makeArrayExclusive – Enforce the min/max constraints of an
* array-based exclusion group.
* @param vHost — the container object that hosts the exclusion group
* definition.  
* Note that this will check all groups that this container is hosting
* In order to use this, you will need to call addArrayGroup() from
* the initialization event. This call needs to be placed in the
* calculation event of the same object.
* @return true
function makeArrayExclusive(vHost)

The first example places these calls in the initialize and calculate events of a subform.

The second example adds a hidden field to the table to act as the host. The initialize event of the field calls scGroup.addArrayGroup() to define three groups:

form1.NPSA3.NPTable.Row1.groupControl::initialize – (JavaScript, client)
scGroup.addArrayGroup(this, "secondLine",
                      new Array(secondLine_yes, secondLine_no),
                      0, 1);

scGroup.addArrayGroup(this, "newCustomer",
                      new Array(newCustomer_yes, newCustomer_no),
                      0, 1);

scGroup.addArrayGroup(this, "Bundle",
                      new Array(Bundle_yes, Bundle_no),
                      1, 1, "Must select yes or no.");

The calculate event calls scGroup.makeArrayExclusive() to enforce the exclusive behavior:

form1.NPSA3.NPTable.Row1.groupControl::calculate (JavaScript, client)
// Make sure the exclusion group behaviour is enforced
scGroup.makeArrayExclusive(this);
Note that if you create a lot of these on your form, it becomes expensive to clear the error list and recalculate everything. Fortunately you should not have to do that too often – just when removing subforms or when inserting subforms in the middle of a list. In these cases we need to clear/recalculate in order to update all the SOM expressions we have stored.  Have a look at the script in the subform remove buttons for an example.

The Deep End

The set of script objects that support validation and exclusion groups continues to grow in size and complexity.  By this point I expect most form developers would not be comfortable modifying them, but would want to treat them as a ‘black box’ library.  This is fine.  As far as I know, they continue to work back to Reader 7.0. 

To implement exclusion groups as arrays, I created a JavaScript base class with two derived implementations — one for cases where the group is hosted by a subform and one where the group is controlled by an array.  Along the way I also fixed a couple bugs that were uncovered once I started placing the groups inside repeating subforms.

This will not be the last time I enhance these libraries.  I have some more ideas for them.  But that would be a different blog post.