A Runtime Accessibility Checker

I continue to be interested in the challenges of developing forms that give a good experience for visually impaired users using screen readers.  Previously we have focussed on checking accessibility properties at design time. However, analysing the form design template doesn’t always fit the user workflow.  There are cases where accessibility properties are populated at runtime by script or by data binding. In this case, these properties are un-populated in the design version and consequently, the design-time accessibility checker will give warnings — yet at runtime the accessibility experience may be just fine.
In addition, we want to make sure that the accessibility experience adapts to changes in dynamic forms. We need to be able to test accessibility in various states — e.g. after subforms are added or removed.

The obvious answer is to perform accessibility checking at runtime — while the form is open in Reader or Acrobat.  Today we’ll look at a solution to that problem: a mechanism for instrumenting your forms so that you can analyse the state of accessibility information at runtime.

The Solution

For those of you who want to use the solution without necessarily understanding how it works, follow these steps:

  • Drag this fragment anywhere onto your form.  It will add a hidden subform with no fields (just script)
  • With the form open in Reader/Acrobat, paste the value “AccCheck” into any field on the form
  • From the resulting dialog, select which accessibility checks to run
  • Copy the results from the report dialog for future reference
  • All elements with accessibility issues will be re-drawn with a 2pt red border.  If the element is a field or draw, it will also be shaded light red and the accessibility error text will be assigned to the tooltip.
  • Close your form and go back to Designer to fix any identified issues

Try it out with this sample form. Note that this form dynamically populates some properties.  The Designer-based accessibility checker will report twice the number of errors.

Hopefully the embedded checker is innocuous enough that you could leave it in a production form.   But if not, you should be able to inject it during your testing phase and remove it when you’re ready to deploy your form.

The Deep End

For those who want to know how this works…

The challenge is to add something to your form without introducing an observer effect — i.e. analytics that can run without changes to your form UI and without impacting normal form processing.

The runtime accessibility checker is designed as a form fragment that you can drag onto any form.  The fragment is a single, hidden subform with no fields, no data binding — just script.
We want to trigger the analysis script without requiring the user to add a “check accessibility” button to their form.  In this case, I added a propagating change event to the fragment subform.  This script checks the xfa.event.change event property.  If the change text is equal to “AccCheck”, we launch the accessibility checker.
The checker uses JavaScript dialogs to prompt for checks to perform and has a second dialog to display the results.
The JavaScript performing the checks is adapted from the Designer version of the accessibility checker.  The main difference is that it analyses the Form DOM (the runtime) instead of the Template DOM (the design). 
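The flavour of the check itself can be sketched in plain JavaScript.  The real checker’s rules are richer and it walks the actual Form DOM with resolveNodes(); this simplified sketch uses mock field objects and flags fields that have no tooltip, caption, or custom screen-reader text (all the names here are stand-ins, not the checker’s real API):

```javascript
// Simplified pass/fail rule: a field should expose *some* text that a
// screen reader can announce.  Mock objects stand in for form fields.
function findUnnamedFields(fields) {
  return fields.filter(function (f) {
    return !f.toolTip && !f.caption && !f.speakText;
  }).map(function (f) { return f.name; });
}

var mockFields = [
  { name: "first", toolTip: "First name" },
  { name: "last" }                          // no accessibility text at all
];

console.log(findUnnamedFields(mockFields)); // -> ["last"]
```

At runtime the checker would then re-draw each flagged field with the red border and assign the error text to the tooltip, as described above.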

Code Sharing Example

Today I’d like to talk about code sharing within form definitions. We’ll start with the advantages of code sharing and then get specific about techniques to support it in form definitions.  For those of you with a programming background, this is review.  But bear with me:

  • Less code.  Once you have a library of shared code, you will write less code in your event scripts.
  • Faster development. Writing less code means faster code development.
  • Lower maintenance costs.  Less code means less code maintenance.
  • Consistent behaviours within a form and between forms.  Shared code means that the same action from different places in the same form or between different forms will behave consistently. And it means that changing the behaviour is done by modifying the centralized logic.
  • Simplify complex operations for use by novice form designers.  A shared code library can offer simple interfaces to complicated functions.

But … to get the benefits of code sharing you need to make an investment:

  • Recognize repeated patterns
  • Generalize the repeated operation
  • Isolate the variable parts of the operation into parameters

The generalized, shared version of functionality has higher up-front development costs.  However once the initial investment has been made in shared code, the ROI will more than compensate.

Ok, enough of talking about things you already knew.  Let’s walk through a specific example.  I reviewed a form recently that had lots and lots of code to copy field values between subforms.  The sort of thing you might do if, for example, you were copying a billing address to a shipping address.

Assuming subforms named S1 and S2 and fields named F1 – F5, the code initially looked like this:

S2.F1.rawValue = S1.F1.rawValue;
S2.F2.rawValue = S1.F2.rawValue;
S2.F3.rawValue = S1.F3.rawValue;
S2.F4.rawValue = S1.F4.rawValue;
S2.F5.rawValue = S1.F5.rawValue;

As written, this is not very good code.  It’s verbose and tedious.  And if any fields are added, removed or renamed, the code will need to be changed.

Now, let’s look at a progression of changes we can make to improve this code.  First of all, a little JavaScript tip. When you see an expression such as S2.F1, it can also be written as S2["F1"].  With that knowledge, we can re-write the script as:

var fields = ["F1", "F2", "F3", "F4", "F5"];

for (var i = 0; i < fields.length; i++) {
   S2[ fields[i] ].rawValue = S1[ fields[i] ].rawValue;
}

It’s a little less verbose, but still fragile.  Let’s change it so that our list of fields is not hard-coded:

var srcFields = S1.resolveNodes("$.#field[*]");
for (var i = 0; i < srcFields.length; i++) {
  var fieldName = srcFields.item(i).name;
  // if the same-named field exists in S2…
  if (S2.nodes.namedItem(fieldName)) {
     S2[fieldName].rawValue = srcFields.item(i).rawValue;
  }
}

Notice that the script starts by using resolveNodes() to get a list of fields from the source subform. It then checks if the same named field exists in the destination subform.  If it’s in both places, we copy a value over. This is a big improvement.  It means that if any fields are added, removed or renamed the script will continue to work.  But we’re still not sharing code.

The next step is to generalize the function:

function subformCopy(dst, src) {
  var srcFields = src.resolveNodes("$.#field[*]");
  for (var i = 0; i < srcFields.length; i++) {
    var fieldName = srcFields.item(i).name;
    // if the same-named field exists in dst…
    if (dst.nodes.namedItem(fieldName)) {
      dst[fieldName].rawValue = srcFields.item(i).rawValue;
    }
  }
}

subformCopy(S2, S1);

Here we’ve isolated the copy functionality into a function. The next step is to move that function into a script object.  Now our script is a one-liner:

utilities.subformCopy(S2, S1);

Great! We copy subform contents in one line of script.  Now, to really increase the value, take the script object and make it a fragment so it can be shared between forms.

Once the function has been shared, it can be enhanced or fixed and all forms using it will benefit.  In this case we might choose to make the subformCopy() method handle fields in nested subforms.
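Here is a sketch of that nested-subform enhancement.  The mock field() and subform() factories exist only so the snippet runs outside Reader; a real version would use resolveNodes() and className checks on actual form nodes:

```javascript
// Minimal stand-ins for form nodes so the sketch runs in plain JavaScript.
function field(name, value) {
  return { className: "field", name: name, rawValue: value };
}
function subform(name, children) {
  return { className: "subform", name: name, nodes: children };
}

// Recursive subformCopy: copy same-named fields, then recurse into
// same-named nested subforms.
function subformCopy(dst, src) {
  for (var i = 0; i < src.nodes.length; i++) {
    var srcChild = src.nodes[i];
    var dstChild = dst.nodes.filter(function (n) {
      return n.name === srcChild.name;
    })[0];
    if (!dstChild || dstChild.className !== srcChild.className) {
      continue;   // no same-named counterpart in the destination
    }
    if (srcChild.className === "field") {
      dstChild.rawValue = srcChild.rawValue;
    } else {
      subformCopy(dstChild, srcChild);   // descend into nested subforms
    }
  }
}

var S1 = subform("S1", [field("F1", "123 Main St."),
                        subform("inner", [field("F2", "Ottawa")])]);
var S2 = subform("S2", [field("F1", null),
                        subform("inner", [field("F2", null)])]);
subformCopy(S2, S1);
console.log(S2.nodes[1].nodes[0].rawValue); // -> "Ottawa"
```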

Up to this point I’ve talked about using script objects for code sharing.  There are a couple more things to say:

  1. I see people using execEvent() to share code.  e.g. they put script in a change event and use execEvent() to call it from the initialize event.  I don’t favour this pattern: the code readability is poor, and the performance is worse — there is much less overhead in calling a script object method than in calling execEvent().
  2. Propagating events offer another code sharing technique. Isolating functionality in a propagating event means that you write the code in only one script and it is reused in many fields.

And one last point — do you hard-code color values or border widths in your code?  Consider moving these hard-coded values into script objects or form variables. The impact isn’t as dramatic as with shared code, but it is good practice.


resolveNode vs. resolveNodes

Today I’d like to poke at a design pattern I see fairly often.  The code looks like this:

var sum = 0;
var numberOfDetails = po._detail.count;
for (var i = 0; i < numberOfDetails; i++) {
    var nodeName = "po.detail[" + i + "].subtotal";
    sum += this.resolveNode(nodeName).rawValue;
}

Notice how inside the loop we’re constructing a SOM expression to use for resolveNode(). The constructed SOM expression will look something like: po.detail[2].subtotal.

Let’s look at the alternative:

var sum = 0;
var details = this.resolveNodes("po.detail[*].subtotal");
for (var i = 0; i < details.length; i++) {
    sum += details.item(i).rawValue;
}

The second variation is easier to read, easier to code and will be processed more efficiently.  Why didn’t the author code it this way to start with? Likely either: a) they didn’t realize that a single SOM expression could find all their nodes or b) they didn’t realize there is both resolveNode() and resolveNodes().  And since I don’t know the reason, let’s unpack both of those topics.

SOM expression to return multiple nodes

Look again at the second SOM expression: po.detail[*].subtotal. As this SOM expression is processed, it will first find all the detail subforms under po, and then for each of those detail subforms it will check for a child named subtotal and add it to the returned list.  I suppose what’s surprising is the realization that the wildcard can go anywhere in the SOM expression.  In fact, it’s permissible to have multiple wildcard expressions.  Suppose I had a form with a list of family members who each had a list of pets.  To get a list of all the pet names, I could use this expression: family.member[*].pet[*].petName
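The multi-wildcard idea can be simulated with plain JavaScript arrays — the nested loops below collect pet names the same way family.member[*].pet[*].petName collects nodes (the data here is invented for illustration):

```javascript
// Two levels of "wildcard": iterate every member, then every pet of
// each member, collecting the petName leaves.
var family = {
  member: [
    { pet: [{ petName: "Rex" }] },
    { pet: [{ petName: "Tom" }, { petName: "Jerry" }] }
  ]
};

var names = [];
family.member.forEach(function (m) {
  m.pet.forEach(function (p) {
    names.push(p.petName);
  });
});

console.log(names); // -> ["Rex", "Tom", "Jerry"]
```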

resolveNode vs resolveNodes

The difference between these methods is in what result you’re expecting. resolveNode() expects to process a selection that returns a single node. If you pass it an expression that returns multiple nodes, it will cause an error.  The return value of resolveNode() is either a node or null.

resolveNodes() expects to return a list of nodes.  Its return value is a nodelist — which could be empty.  The nodelist has two members for traversal: list.length (returns the number of nodes in the list) and list.item(n) (returns the nth item).
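Since a nodelist is not a JavaScript array, a small adapter is sometimes handy.  The toArray() helper below is my own convenience function (not part of the object model), and the nodelist here is a mock with the same length/item(n) shape:

```javascript
// Copy a length/item(n)-style nodelist into a real JavaScript array so
// array methods (filter, map, etc.) become available.
function toArray(list) {
  var result = [];
  for (var i = 0; i < list.length; i++) {
    result.push(list.item(i));
  }
  return result;
}

// Mock nodelist standing in for the result of resolveNodes().
var mockList = { length: 3, item: function (n) { return ["a", "b", "c"][n]; } };
console.log(toArray(mockList)); // -> ["a", "b", "c"]
```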

And while I’m on the topic, there are a couple other interesting things about these methods.  They begin their search from the context of the node they’re called from.  That means the result of total.resolveNode() will be different from the result of xfa.resolveNode().  For example, suppose my form/data has the structure:

purchaseOrder
  po
    item[*]
      subtotal
    total
For the total calculation, we want a list of all the subtotal fields.  If we anchor the search from xfa, it looks like:

xfa.resolveNodes("xfa.form.purchaseOrder.po.item[*].subtotal")
If we anchor it from within the total calculation, it looks like:

this.resolveNodes("item[*].subtotal")
The second variation is preferred.  It’s easier to read, and since it references fewer nodes, it’s more durable — less susceptible to breaking if a node gets renamed or a subform added/removed. 

By the way, Niall O’Donovan (active contributor in the forums and a regular commenter) has written a blog post that does a great job of explaining SOM expressions: http://www.assuredynamics.com/index.php/2011/05/som-expressions/. Niall has a gift for explaining the basics of the technology. The post includes a very spiffy sample form that helps visualize SOM expressions. (His sample does construct some SOM expressions, but he’s certainly not the only one…)

The Deep End

There are a few more things that could be said to round out the picture for those of you who like to dive one layer deeper.

The .all property

The object model has a couple of properties that can be used in place of a wildcard SOM expression. The .all property will return a list of all sibling nodes that have the same name. I could also have used the .all property in my original sum calculation:

var sum = 0;
var details = po.detail.all;
for (var i = 0; i < details.length; i++) {
    sum += details.item(i).subtotal.rawValue;
}

Of course, .all is not as powerful as [*].  We can’t code po.detail.all.subtotal.  The other problem with this calculation is that it presumes there’s at least one instance of the detail subform.  If there are no instances, the script will fail.

We also have the .classAll property to return all sibling nodes of the same type. If you use .classAll on a field, you will get a list of all the sibling fields. For example, po.detail.classAll is the same as po.resolveNodes("$.#subform[*]").

Evaluation in scope

A relative SOM expression will search through the entire scope to find the result. In the example: total.resolveNode("item[*].subtotal") we will search for item among the children of all the ancestors in our hierarchy.  Specifically, the search will check the children of: total, po, purchaseOrder, form and xfa.  Of course, we stop searching as soon as we find a match. In this case we’d stop at po.  If you want to prevent this hierarchy search, then you can anchor the search with a reference to "$" (the current node).  In our example that would look like:


Rule of Thumb

If you find yourself constructing a SOM expression with string concatenation, I’d encourage you to have a second look and see if there is an easier way to get the result. I think it’s rare to need to construct an expression.  The one exception is if you’re building an expression that uses a predicate.  But other than that case there is almost always an alternative.

Propagating Events

Back to another of the 9.1 enhancements intended to help you write less code: propagating events. See here for an overview.  Again, now that we have a Designer capable of authoring these and now that the 9.1 Reader is more common, we should have a closer look. 

For starters, here is the sample form I’ll be referencing.  As you try the form you will notice a couple of behaviors:

  • as you move your mouse over fields, they are highlighted
  • invalid fields get a thick red border

The punchline here is that there are 14 text fields on the form, but only one each of the mouseEnter, mouseExit and validationState event scripts.  These events are defined on the root subform and apply to all descendant objects.  This is a great, highly recommended technique for minimizing the code in your form.

But now the bad news… So far, this functionality has not been easy to get at through Designer.  The ES2 Designer had a check box to make an event propagate, but the script dialog didn’t allow field events to be defined in the context of a subform.  Then, because of the confusion this functionality caused, the checkbox was removed from the Designer that shipped with Acrobat 10.  This functionality should re-emerge in a future version of Designer. 

In order to make it easier to get at this functionality, I have written a macro that will add propagating events to subforms.  The UI appears like this:

After you run the macro, the event and script will be added to your subform. Now, the tricky part — to edit the script you need to do two things: 1) close/re-open the form. Yeah, macros are still beta, and Designer doesn’t pick up on the changes made by the macro. 2) on the script dialog for the subform, choose to show: "events with scripts".

One interesting thing to note is that it is valid to have two or more of the same event defined on a subform. e.g. you could have both a propagating enter event as well as a non-propagating enter event.

The Deep End

When changing the display of an object to show an error state, it’s a bit of a challenge to revert the object back to its original state.  e.g. if you change the border color or thickness, you need to know the original values in order to restore the field’s original state.

Well, not really.  You’ll see the validationState script in the sample uses a different technique to restore the original display.  The script removes the border and assist elements from the field.  To understand why this works you need to understand the relationship between the form and the template.  The template is the definition of a form you see in Designer.  At runtime, we merge the template objects with data and create form objects.  A form field is then a sparsely defined object that has a pointer back to a template field. 

When we execute script that changes the color of a border, we modify the form definition of the border and override the default border properties found in the template. Removing the border from the form field means that we have removed the override and the field then reverts to the definition found in the template.
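This template/form relationship behaves much like JavaScript’s own prototype chain, which makes for a loose but runnable analogy (the property names below are just illustrative):

```javascript
// The "template" holds the default definition; the "form" object
// delegates to it and can override properties.
var template = { borderColor: "0,0,0" };
var formField = Object.create(template);   // form object points back at template

formField.borderColor = "255,0,0";         // like scripting the border: an override
console.log(formField.borderColor);        // -> "255,0,0"

delete formField.borderColor;              // like removing the border element
console.log(formField.borderColor);        // -> "0,0,0" -- template default again
```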


Border Control

In these times, many countries are interested in border control.  Today I’m interested in controlling field borders. Did you know that fields have two borders? There is a border around the entire field (including the caption) and there’s a border around just the content part — excluding the caption. These are edited in two different places in the Designer UI.  Field border is specified under the border tab:

The border around the content portion is specified in the "Appearance" selection of object/field:

Both of these borders can be manipulated in script.  Have a look at this sample form to see what I mean. 

You might notice that the scripts in the sample form do not use the field.borderColor or field.fillColor properties. These are shortcuts — convenience syntax that simplifies the underlying property structure. And while they’re convenient, they don’t give you full control. Most notably, they control only the outer border and give no access to the widget border. 

As you look at the script in the sample form, you will notice some interesting things:

1) script reflects syntax. 

The (simplified) XML definition of a border looks like this:

<border presence="visible | hidden">

    <edge presence="visible | hidden" thickness="measurement">
        <color value="r,g,b"/>
    </edge>  <!-- [0..4] -->

    <fill presence="visible | hidden">
        <color value="r,g,b"/>
    </fill>
</border>

A script that changes a border fill color (border.fill.color.value = "255,0,0";) is simply traversing the hierarchy of elements in the grammar.

2) Edge access is tricky

The syntax to get at the four edges uses the "getElement()" method, whose second parameter is the occurrence number of the edge.  Note that the sample always sets all 4 edges.  This is because edge definitions are inherited.  e.g. if only one edge is specified, then modifying the single edge property impacts all four edges.  The problem is that you don’t always know how many edges have been specified, so it’s safest to set all four explicitly.
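The set-all-four-edges pattern can be sketched as a small helper.  The border object here is a mock that only reproduces the getElement(name, n) accessor; in a real form you would pass field.border, and setEdgeColor() is a hypothetical helper name:

```javascript
// Mock border exposing the getElement(name, n) accessor shape.
function makeBorder() {
  var edges = [0, 1, 2, 3].map(function () {
    return { color: { value: "0,0,0" } };
  });
  return {
    edges: edges,
    getElement: function (name, n) { return this.edges[n]; }
  };
}

// Set all four edges explicitly: if the template defines fewer than four
// edge elements, the missing ones inherit, so touching a single edge can
// change them all.
function setEdgeColor(border, rgb) {
  for (var i = 0; i < 4; i++) {
    border.getElement("edge", i).color.value = rgb;
  }
}

var border = makeBorder();
setEdgeColor(border, "255,0,0");
console.log(border.edges.every(function (e) {
  return e.color.value === "255,0,0";
})); // -> true
```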

3) Show/hide with the presence property. 

You’re likely accustomed to using the presence property of fields. That same attribute applies on the components of a border:

border.presence = "hidden";      // hides entire border — all edges and fill
border.fill.presence = "hidden"; // makes the border fill transparent
border.getElement("edge", 0).presence = "hidden"; // hides the first edge

4) There’s more.

There is more to the border definitions than I’ve shown you here. Using script you can control the four corners, fill patterns, edge thickness, border margins and more.

5) There are other borders

Subforms, rectangles, draw elements and exclusion groups all have border definitions.

A better validation pattern for 9.1 forms

Today I’d like to go back again to revisit some functionality introduced in Reader 9.1 (XFA 3.0).  In 9.1 we added some enhancements specifically designed to improve the form validation user experience — and to allow validations to happen with less JavaScript code.  Specifically:

  • Message Handling options (described here)
  • The validation state change event (described here)

Prior to these enhancements, users were avoiding the use of validation scripts because they didn’t like the message boxes that popped up on every validation failure.  But now we have control over those message boxes, and, as we’ll see, there are lots of good reasons to like using validation scripts.

First things first, turn off the message boxes.  There’s a new Designer dialog for this:

Designer dialog box showing a setting where the author can specify that validations do not display message boxes.

Great! Now we don’t get those pesky dialogs showing up every time.  But now, of course, it’s up to you to find some other way to convey to the user that they have invalid fields in their form.  This is where the validationState event comes in.  It fires when the form opens, and it fires again every time a field validation changes from valid to invalid or vice versa.  The expected use of validationState is to change the appearance of the form to flag invalid fields. 

One more important piece of the puzzle: field.errorText. When a field is in a valid state, this text is an empty string.  When the field is invalid, it is populated with validation error text.

Take the example where we expect a numeric field to have a value greater than zero.  The validation script looks like this:

encapsulation.A1.B.C.D::validate-(JavaScript, client)
this.rawValue > 0;

The validationState event will set the border color and the toolTip text:

encapsulation.A1.B.C.D::validationState-(JavaScript, client)
this.borderColor = this.errorText ? "255,0,0" : "0,0,0";
this.assist.toolTip.value = this.errorText;

Setting the toolTip value is important for visually impaired or color-blind users who won’t notice a change in border color. (There’s another topic waiting on designing a good validation experience that works with assistive technologies).

Hopefully this looks pretty clean and easy to code.  It’s important to contrast this approach with the alternative — just to make sure you’re convinced.

The Alternative

The alternative to field-by-field validation is to centralize form validation in one big script.  The big validation script then gets invoked by some user action – usually a form submit or print or validation button. Working with the same example, this script would look like:

encapsulation.validate::click-(JavaScript, client)
var field = A.B.C.D;
var bValid = field.isNull || (field.rawValue > 0);
if (bValid) {
     field.borderColor = "0,0,0";
     field.assist.toolTip.value = "";
} else {
     field.borderColor = "255,0,0";
     field.assist.toolTip.value = "The value of D must be greater than zero";
}

Here is a sample form that has both styles of validation.  And here are some reasons why you should prefer field-by-field validation:

  • Less code.  Three lines of code vs. nine. In a form with hundreds of fields, this difference becomes compounded
  • Better user experience — immediate feedback when fields become valid or invalid
  • Form processor is aware of the invalid field and will automatically prevent form submission
  • External validation scripts need to also enforce null/mandatory fields.  In the field-by-field approach, mandatory is enforced without any script.
  • Encapsulation: A term from our object-oriented design textbooks.  In this context it means that the field definition (including business logic and messages) is self-contained.  The benefits of encapsulation include:
  • Notice that the second example references the field by its SOM expression: A.B.C.D.  There are any number of edit operations that could change that expression: moving a field, unwrapping a subform, renaming a subform or field, etc.  If any of these operations happen, the script will need to be updated.  In the first example, no SOM expression is needed and the script can remain the same through all those edit operations.
  • Use fragments.  If you want the field to be included in a form fragment, you need to make sure that the validation logic is included with the field.  When the logic is encapsulated, this is not a problem.  When the validation logic is outside the field, it’s much harder to find ways to have the logic accompany the field.

Not Ready for 9.1?

Assuming you’re now convinced that field-by-field validation is what you want, you might still be in a situation where you can’t assume your users have 9.1.  In that case, I’d encourage you to check out some of my older blog posts that included a script framework that allowed field-by-field validation without the 9.1 enhancements.  The most recent version of that framework was included in this blog post.

Understanding Field Values

Knowing a bit more about how field values are represented in JavaScript could make a difference in how you write script.  Today I’ll give an overview of how field values are processed.   For starters, here is a sample form that will illustrate some of the points I make below.

But first, we need to be mindful that picture clauses (patterns) impact our field values.  A brief recap of the various picture clauses (patterns) we use:

  • format pattern: Used to format field data for display
  • edit pattern: Used to format field data when the user has set focus in a field and is about to edit a value.
  • data pattern: Used to format field data when it is saved to XML.
  • validation pattern: Used for field validation — does not impact field value

With that background, let’s look at the properties available:

field.rawValue: the field value in its raw form, i.e. with no patterns applied, in a format that is consistent across locales and suitable for calculations.

field.editValue: The field value when formatted with the edit pattern. If there is no edit pattern you will get a reasonable default — usually the rawValue, except in the case of a date field where you’ll get some locale-sensitive short date format.  If a field value cannot be formatted using the edit pattern, then field.editValue will revert to be the same as field.rawValue.

field.formattedValue: Same as field.editValue except using the display pattern.

Some miscellaneous facts about field values:

  • field.editValue and field.formattedValue always return a string. If the field is null, these will return an empty string
  • rawValue returns a JavaScript type corresponding to the kind of field.  e.g. A text field is a string, a numeric/decimal field is a number
  • when a field is empty, rawValue will always be null
  • rawValue is what will be stored in the XML by default (assuming no data pattern has been specified)
  • There are two ways to check for a null value:
    • field.rawValue === null
    • field.isNull
  • The rawValue of a date field does not have a type date.  We chose to represent dates as a string — in the form that they will be saved.  The format used for the string is YYYY-MM-DD.
  • If you use JavaScript typeof to determine what kind of a value a field has, be aware that typeof(null) returns “object”
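The last bullet is easy to verify in any JavaScript console:

```javascript
// Why typeof alone can't identify an empty field: typeof null is "object".
var rawValue = null;              // an empty field's rawValue
console.log(typeof rawValue);     // -> "object", not "null"
console.log(rawValue === null);   // -> true -- the reliable check
```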

Knowing all this, there are some implications for the code you write:

  • If you need a JavaScript Date from a date field, you can use this function:
    function ConvertDate(sDate) {
       if (sDate === null) {
          return null;
       }
       var parts = sDate.split("-");
       // Convert strings to numbers by putting them in math expressions
       // Convert month to a zero-based number
       return new Date(parts[0] * 1, parts[1] - 1, parts[2] * 1);
    }
  • You should never code: field.rawValue.toString()
    This will result in a JavaScript error when the field is null.
  • If you want an expression that always returns a string, never null, use field.editValue or field.formattedValue
  • I often see code in the form:
    if (field.rawValue == null || field.rawValue == "") { … }
    This is unnecessary.  Use one of the following:
    if (field.isNull) { … }
    if (field.rawValue === null) { … }
  • If you prefer not to use a validation pattern, you can validate a field using the display picture.  Use this validation script:
    this.formattedValue !== this.rawValue;
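Going the other direction is just as common.  Here is a hypothetical companion to ConvertDate() — a helper of my own, not part of the object model — that formats a JavaScript Date back into the YYYY-MM-DD string that a date field’s rawValue expects:

```javascript
// Format a JavaScript Date as the YYYY-MM-DD string used by date
// field rawValues (month is one-based in the output, zero-based in Date).
function FormatDate(d) {
  function pad(n) { return n < 10 ? "0" + n : String(n); }
  return d.getFullYear() + "-" + pad(d.getMonth() + 1) + "-" + pad(d.getDate());
}

console.log(FormatDate(new Date(2011, 6, 5))); // -> "2011-07-05"
```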

Recover an Embedded Image

By now I’ve mentioned a bunch of times that it is better to link your images than to embed them.  Most notably in this blog entry.  Just to review the facts one more time:

  • Linked/embedded applies only to the form definition (the template). By the time you generate a PDF, the image is always embedded in the PDF.  But the mechanics of how it is embedded in the PDF differs from embedded to linked.
  • Images embedded in the template are stored in base64 format.  Linked images are stored in binary form.  Base64 is 4/3 bigger than binary
  • Embedded images cause very large form templates. Linked images are stored outside the form template.
  • Embedded images force Designer and Reader to keep the image in DOM memory, resulting in slower open times, higher memory footprint, slower performance in Reader and Designer
  • Same embedded image used <n> times is stored <n> times.  Same linked image used <n> times is stored once
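The 4/3 overhead in the second bullet is easy to check — base64 encodes each 3 bytes of binary as 4 output characters.  (Node.js is used here purely to check the arithmetic; the template itself is XML, not JavaScript.)

```javascript
// 3000 arbitrary binary bytes should encode to exactly 4000 base64
// characters (3000 / 3 * 4, with no padding needed).
var binary = Buffer.alloc(3000);
var encoded = binary.toString("base64");
console.log(encoded.length); // -> 4000
```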

Now, just in case all of that rationale was a bit too much tech-speak (what’s a DOM anyway?) let me simplify the message: do not embed images in your templates. When you see this palette, do not check the box.

Lost Images

I predict some of you will look at your forms and realize that you no longer have the original image that you embedded in your template.  It is possible to recover these images and extract them to an external file.  The process is a bit messy, but if you really need it, I’m sure you can figure it out.  Here are the steps:

  1. Find a web site that supports base64 decoding.  A google search for "base64 decode" led me to this site: http://webnet77.com/cgi-bin/helpers/base-64.pl
  2. In Designer, select the image object and then switch to XML source view.  Make note of the contentType.  In the example below, it is image/JPG.  This will tell you what file suffix to use when you download the file.
  3. Copy the base64 image data into your paste buffer.  i.e. select the base64 data and right-click to choose "Copy"
  4. Go to the web site and paste into the "Base64 to Decode" field.  Click the "Process now" button.
  5. Now click the "download binary file now" button.  Save the file using a suffix according to the content type — in this case, with a .jpg extension.

Once you have the image extracted, go back to your form design, re-specify the URL to point at the new image and de-select the embed option.  This would be a great tool to develop into a Designer macro, especially given that mx.utils has a Base64Decoder class.
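What such a macro would do at its core can be sketched in a few lines.  This uses Node.js rather than the Designer macro API, and the base64 string is placeholder text, not real image data:

```javascript
// Decode base64 text back into binary bytes -- the same transformation
// the web tool in step 4 performs.
var base64Data = "aGVsbG8sIHdvcmxk";            // placeholder, not image data
var bytes = Buffer.from(base64Data, "base64");  // decode to binary
console.log(bytes.length); // -> 12 bytes

// A real macro would then write `bytes` to a file whose suffix matches
// the contentType noted in step 2 (e.g. .jpg for image/jpg).
```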

Optional Sections of Forms

When Adobe first released Reader 9.1 I wrote a series of blog entries describing the new forms features.  But of course, at that time it would have been easy to overlook them because you didn’t have a designer that could target 9.1.  Not to mention that your user base would take some time to upgrade to Reader 9.1.

But now some time has passed and it’s a good idea to revisit some of the 9.1 features.  Today we’ll look at field.presence = “inactive”.  For background, you may want to re-read the original post here

The sample for today shows a scenario where the form requests a payment type.  If the payment type is “credit card”, we need to expose the credit card fields to fill.  Not only that, but the credit card fields need to be mandatory. With the inactive setting, this becomes a very simple script:

if (this.rawValue === "Credit Card") {
    CreditCardDetails.presence = "visible";
} else { 
    CreditCardDetails.presence = "inactive";
}

If you wanted to do the same without the inactive setting, the script would look something like:

if (this.rawValue === "Credit Card") {
    CreditCardDetails.presence = "visible";
    CreditCardDetails.cardExpiry.mandatory = "error";
    CreditCardDetails.cardNumber.mandatory = "error";
    CreditCardDetails.cardType.mandatory = "error";
} else {
    CreditCardDetails.presence = "hidden";
    CreditCardDetails.cardExpiry.mandatory = "disabled";
    CreditCardDetails.cardNumber.mandatory = "disabled";
    CreditCardDetails.cardType.mandatory = "disabled";
}
And this is with a very simple scenario where the optional section consists of three fields.  Imagine if the optional section had dozens of fields with mandatory settings, validation scripts, etc.

The presence=”inactive” setting was added to reduce the amount and complexity of script that form authors need to write.  If you make use of it, you should find your forms easier to code and easier to maintain.

Use the change event to filter keystrokes

If you’re wondering about the sudden flurry of blog posts, it’s because I spoke at the Ottawa Enterprise Developer User Group meeting last week.  I prepared a bunch of material for that presentation, and now I need to make that material generally available.

I spoke a lot about validation techniques in form design.  Today I’ll focus on using the change event to validate user input. 

User input should be validated as early as possible.  Compare the experience between:

  1. Wait until the user submits the form — then highlight all the validation errors
  2. Validate as the user exits the field
  3. Validate input as the user types into the field

The earlier the validation happens, the better the user experience. Let’s look at what is involved in validating input as the user types into the field.

The change event fires every time the user enters data into a field.  The change is normally a keystroke, but could also be a delete or a paste operation. When the change event fires, there is lots of useful information available in the xfa.event object.  I’ll describe the properties that are relevant for today’s discussion:

xfa.event.change: the contents of the data being entered.  Normally this is the keystroke.  But it could also be the contents of the paste buffer.  Or in the case where the user hits the delete or backspace keys, it is an empty string. You can modify the value of xfa.event.change in the change event.

xfa.event.selStart, xfa.event.selEnd: Tells us where the change event will happen. "sel" is short for "selection".  selStart and selEnd are character positions.  When the user has selected text, they describe the range of selected text.  When no text is selected, selEnd will be the same as selStart and text will be inserted at that position.  You can change the values of selStart and selEnd in the change event.

xfa.event.prevText: The contents of the field before the change is applied.

xfa.event.fullText: What the contents of the field will be after the change is applied.
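These properties fit together in a predictable way: replacing the selected range of prevText with the change text yields fullText. A plain JavaScript sketch of that relationship (the helper function name is mine, not part of the xfa.event API):

```javascript
// Simulate how Reader derives fullText from the other event properties.
// Hypothetical helper for illustration -- not part of the XFA object model.
function applyChange(prevText, change, selStart, selEnd) {
    // The selected range [selStart, selEnd) is replaced by the change text.
    // When nothing is selected, selStart === selEnd and this is an insert.
    return prevText.slice(0, selStart) + change + prevText.slice(selEnd);
}

// Typing "x" at the end of "abc":
// applyChange("abc", "x", 3, 3) === "abcx"
// Pasting "XY" over a selection covering the "b":
// applyChange("abc", "XY", 1, 2) === "aXYc"
// Backspacing over the last character:
// applyChange("abc", "", 2, 3) === "ab"
```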

Now, some practical examples of what you can do in the change event.  Here is a sample PDF containing all the examples.

Force upper case

If you want to make sure that the contents of your field will be upper case, then modify xfa.event.change like this:

xfa.event.change = xfa.event.change.toUpperCase();

Allow only numeric characters

If you create a field that is a numeric type, then Reader/Acrobat will automatically restrict users to valid numeric input.  But suppose you’re gathering a telephone number or a credit card number. These are normally text fields that hold numbers. In this case you want to "swallow" any changes that insert non-numeric characters. This script uses a regular expression to test the change contents and cancel if necessary:

if (xfa.event.change.match(/[^0-9]/) !== null) {
    // swallow the change
    xfa.event.change = "";
    // if the user has selected a range of characters,
    // then leave the range intact by re-setting the start/end
    xfa.event.selStart = 0;
    xfa.event.selEnd = 0;
}
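Outside of Reader, the same filter logic can be unit-tested in plain JavaScript. This hypothetical helper returns the change text to keep, or an empty string to indicate the change should be swallowed:

```javascript
// Return the text that should survive the change event:
// the original change if it is all digits, or "" to cancel it.
// Illustration only -- in a real form this logic lives in the change event.
function filterNumeric(change) {
    // Any non-digit character anywhere in the change cancels it,
    // which also handles multi-character paste operations.
    return change.match(/[^0-9]/) !== null ? "" : change;
}

// filterNumeric("5")    === "5"   -- keystroke kept
// filterNumeric("a")    === ""    -- keystroke swallowed
// filterNumeric("12-3") === ""    -- pasted text with a dash swallowed
```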

Visual Feedback

I once designed a form with a telephone number that accepted only digits.  I swallowed spaces, brackets, dashes and other formatting characters that the user entered.  Then I found out that a couple of the people filling in the form abandoned it because they couldn’t enter data in that field.  They needed some feedback that their keystrokes weren’t valid.  This next example temporarily sets the field border red and thick when the user enters an invalid key.  <deepEnd>The script uses the app.setTimeOut() method.  Notice that I call it from a script object.  If the return value of setTimeOut() gets garbage collected, the event will cancel. Variables declared outside script objects will be garbage collected.</deepEnd>

// If the user has entered invalid data, cancel the event and give some visual feedback
if (xfa.event.change.match(/[^0-9]/) !== null || xfa.event.fullText.length > 10) {
    // cancel the change
    xfa.event.change = "";
    xfa.event.selStart = xfa.event.selEnd = 0;
    // turn the border red and thick
    this.borderColor = "255,0,0";
    this.borderWidth = ".04in";
    // Turn the border back to black after one second
    var sRevert = 
        "var This = xfa.resolveNode('" + this.somExpression + "');\
        This.borderColor = '0,0,0';\
        This.borderWidth = '.02in';";
    helper.timer(sRevert, 1000);
}
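The helper script object itself is not shown above. A minimal sketch of what its timer method might look like, with the object and method names assumed from the call site (app.setTimeOut() is the Acrobat JavaScript call that schedules a script to run after a delay):

```javascript
// Sketch of a "helper" script object method -- names assumed from the
// helper.timer() call in the sample, not a documented API.
// Keeping the return value in a script-object variable prevents it from
// being garbage collected, which would silently cancel the pending event.
var gTimer = null;

function timer(sScript, nMilliseconds) {
    // app.setTimeOut() runs sScript after nMilliseconds (Acrobat JS API)
    gTimer = app.setTimeOut(sScript, nMilliseconds);
}
```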