Acrobat X and XFA 3.3

By now you will have noticed the announcements for Acrobat X — which promises to be a very nice upgrade on the Acrobat and Reader product lines.  Acrobat X includes XFA version 3.3.  In this blog post I will give you an overview of the new features that have been added to XFA forms.

Designer Version

As usual, LiveCycle Designer is bundled with Acrobat Pro.  This is the version of Designer that went out with the most recent release of LiveCycle, i.e. Designer 9 a.k.a. Designer ES2, plus some bug fixes.  This version of Designer is able to target the features introduced in XFA 3.0 (Acrobat/Reader 9.1).  If you need a refresher on exactly what features this includes, you can search for "XFA 3.0" on this blog and you will find them described in some entries back in April 2009.

Perhaps more importantly, this is the version of Designer where we introduced the experimental macro feature — first described here.

In order to target the new XFA features introduced in Acrobat X (XFA 3.3) you will have to wait for a new version of Designer to be released with LiveCycle.

Now on to the new features added to XFA 3.3 / Acrobat X.

Flash in XFA

Probably the most exciting new capability in this release is the ability to embed Flash objects in an XFA template.  This means your interactive forms may now share their data with charts.  You can embed instructional video.  You can introduce a slider widget or a pop-up widget selector dialog.  Have fun.


Surprisingly, we did not introduce any new markup to support embedded Flash objects.  We resurrected a part of the grammar that was not in active use: the exObject element.  Flash content will be defined in the context of a host field where the ui element is an exObject, e.g.:

    <exObject classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" 
              archive="<uri reference to primary SWF resource>"/>

The rest of the parameters (flashvars, playcount, speed, embedded/windowed, poster, etc.) will be defined as content below the exObject element.

The host field will determine the size and positioning of the flash object (when it’s not displayed in a floating window).

If you are embedding Flash in PDF today, you save and load data/state using ExternalInterface:"multimedia_saveSettingsString", state);
state ="multimedia_loadSettingsString");

For SWFs that are embedded in an XFA form, these calls will save/load the host field data – which will be persisted in the XML form data or form state when the field is not bound.

The XFA object model has 3 new methods:

field.ui.exObject.setState(state) // state is either "activate" or "deactivate"


// call an Actionscript method exposed via ExternalInterface
field.ui.exObject.invoke(methodName, arguments);

ActionScript can call into form JavaScript in this form:"", "parameter 1 data", "parameter 2 data");

This will call the bar() method in the script object "foo" and pass in the two provided parameters.

There are many more details you will want to know about eventually, but these are the basics.

RTL Subforms

In order to provide better support for right-to-left languages (Arabic, Hebrew), subforms may now flow in a right-to-left direction.  In addition, subforms and exclusion groups may now also specify a horizontal alignment so that content rendered top-to-bottom can be aligned on the right hand side of the parent subform.


The new grammar to support RTL functionality is on the subform element:

<subform layout="position | tb | lr-tb | rl-tb | table | row | rl-row" 
     colSpan="1" columnWidths="..." 
     hAlign="left | right | center | justify | justifyAll" >

Bulleted and Numbered Lists

The subset of XHTML markup supported in the XFA grammar has been expanded to include the <ol>, <ul> and <li> elements.

XML Encryption

There are now options to leverage the W3C XML Encryption standard with the form data.  The form author can choose to encrypt fragments of data or the entire data set.

Forms that submit XML data can encrypt their data before submission so that it is secure until decrypted by the recipient. 


Encryption is managed by a new <encryptData> element that can appear below the <event> and <submit> elements. 

Restrict Console

There are new viewer preferences that will disable the console and JavaScript debugger for specific forms.  There is new configuration grammar that will set these viewer preferences when a PDF is generated from an XFA definition.

Deprecate Legacy Rendering

Ok, this is not an enhancement, but something you ought to be aware of.  Forms that are authored with a target version of XFA 3.3 can no longer be rendered with legacy rendering.  Legacy rendering will continue to be supported for all previous versions.

Legacy rendering was used to bridge the transition of XFA 2.5 (Acrobat
7, 8) documents forward. Having extended the bridge to Acrobat 8.1, 9.0,
9.1 and 9.2 it is no longer needed for Acrobat 10.

Version Correlation

In case you needed a score card, here is the history of XFA and Acrobat/Reader versions:

XFA Version    Acrobat/Reader
2.1            6.0.2
2.2            7.0
2.3            7.0.1, 7.0.2
2.4            7.0.5, 7.0.7
2.5            8.0
2.6            8.1, 8.1.1
2.8            9.0
3.0            9.1, 9.2, 9.3
3.3            10.0


E4X in Form Design

Over the time that I’ve been writing blog entries I’ve made many references to E4X and have included lots of samples that make use of this technology. However I’ve recently come to understand that I probably should have done a better job of introducing E4X usage, since it isn’t necessarily well understood by the average form developer.

For those of you who are unaware, E4X is an extension to JavaScript (ECMAScript) for manipulating XML.  The reason it is not particularly well known among JavaScript developers is that few of the browsers have implemented E4X.  The notable exception is Mozilla / Firefox.  And since the JavaScript engine inside Acrobat/Reader is based on Mozilla's, E4X can be used reliably in Acrobat/Reader.

To learn more about E4X, the ECMA-357 specification (which defines E4X) and the Mozilla developer documentation are useful starting points.

The XMLData Object

Before E4X was invented, we had recognized the need to have generic XML processing in Acrobat / Reader. Consequently, we added the XMLData object to the Acrobat object model.

However, E4X was introduced shortly afterwards, and we ended up with two XML processing engines in Acrobat/Reader. In case there is any doubt, the right answer is to use E4X. You should no longer use the XMLData object. The XMLData object remains in the product for backward compatibility, but it does not get enhanced, it does not get bug fixes.  E4X is faster and uses less memory.

Some Usage Notes

There are several common learning curve issues you might encounter when using E4X:

Oddly, E4X refuses to process the xml processing instruction at the front of an XML fragment.  If it’s there, you will get a syntax error: “xml is a reserved identifier”.

The workaround is to remove leading processing instructions with a regular expression such as:

sXML = sXML.replace(/^[\s\S]*?(<[^\?!])/, "$1");
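The effect of this expression can be checked with plain string code (no E4X required); the sample input string here is my own:

```javascript
// Strip everything before the first "<" that does not begin a
// processing instruction (<?) or a comment/doctype (<!).
function stripLeadingPI(sXML) {
  return sXML.replace(/^[\s\S]*?(<[^\?!])/, "$1");
}

var s = '<?xml version="1.0" encoding="UTF-8"?>\n<!-- header -->\n<root><a/></root>';
console.log(stripLeadingPI(s)); // "<root><a/></root>"
```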

Processing Instructions and Comments

By default E4X ignores processing instructions and comments. You can change the default with:

XML.ignoreProcessingInstructions = false;
XML.ignoreComments = false;

If you stop ignoring comments and processing instructions and if you are processing an XML fragment that includes an outer comment or processing instruction, you need to treat it as an XMLList object instead of an XML object:

XML.ignoreProcessingInstructions = false;
// outer node is a processing instruction, so use XMLList
// (the PI content here is illustrative)
var xyz = new XMLList('<?instruction ?><data/>');


Since I just finished recommending JSLint to you, I should point out that JSLint doesn’t handle the E4X syntax extensions to JavaScript, e.g. XML literals.  It will stop processing once it encounters syntax specific to E4X.  In most cases you can keep JSLint happy by using function calls instead of the syntax extensions, e.g. new XML("<root/>") instead of the XML literal <root/>.
Mixed namespaces

Consider this example:

var xXML = <root><one><two>abc</two></one></root>;

The expression returns "abc".  However, if I introduce another namespace, this expression no longer works:

var xXML = <root><one><two xmlns="foo">abc</two></one></root>;

Instead, use:*::two.toString();

And for completeness, the JSLint friendly version:

var xXML = new XML('<root><one><two xmlns="foo">abc</two></one></root>');, "two")).toString();

JavaScript Lint check in Designer

Some time ago I distributed a sample form that used Douglas Crockford’s excellent JSLint script to check form JavaScript for common coding errors.  You can read about it here. The logical next step is to use macros to integrate this capability into Designer (macros introduced here).

Here is everything you need to install the macro.

One improvement I’ve made over the previous version is to automatically populate the list of global objects with all the subform and field names in the form.  Now it will flag far fewer global objects — and if it does flag a global, you really want to pay attention, since it is very likely an un-declared variable.

I really strongly recommend this tool to you. When you run this against your script you will very likely be surprised at some of the coding errors you’ve made.  I know I was.

XDP Size Matters

I have recently been involved in a couple of contexts where customers have worked hard to reduce the size of their XDP and resulting PDF files.

We know that size matters.  The smaller the better.  We want small PDF files for downloading.  We know that encryption-based technologies such as certificates and rights enabling perform faster when the form definition is smaller.  Today’s discussion is a couple of tips for reducing the size of your XDP and PDF.

When concerned about size, the first two places to look are fonts and images.  I’ve covered images a couple times.  Check out: Linked vs Embedded Images and also the sample in Parameterize your SOAP address.

Fonts are a simple discussion.  Your PDFs are much smaller when the fonts are not embedded. If you need to embed fonts, use as few as possible.  To see which fonts are referenced by your form, you can check out the form summary tool here.

But the main topic for today is syntax cleanup.  The elements and attributes in the XFA grammar all have reasonable default values.  We save lots of space by not writing out the syntax when the value corresponds to the default.  For example, the default value for an ordinate is "0". If your field is at (0,0), we will not write out the x and y attributes. 

However, there are a couple of places where the syntax cleanup could use some help. Specifically: borders (edges and corners) and margins. There are scenarios where these elements can be safely deleted from your form.

Today’s sample is a Designer macro (with the sample form) for eliminating extra syntax. If you don’t want to understand the gory details, you can just download and install the Designer macro.  (If you haven’t already, please read this regarding Designer macros).  After running the macro, look in Designer’s log tab for a summary of the cleanup.  Then double check the rendering of your form and make sure it hasn’t changed.

If you want to understand a bit more, I’ll explain what is going on.  There are three instances of syntax bloat that the macro will clean up:

Extra corners

In the attached form there are two fields: border1 and border2. 

The border definition for border1 looks like:

  <edge thickness="0.882mm"/>
  <corner thickness="0.882mm"/>
  <edge thickness="0.3528mm"/>
  <edge thickness="0.3528mm"/>
  <edge thickness="0.3528mm"/>
  <corner thickness="0.353mm"/>
  <corner thickness="0.353mm"/>
  <corner thickness="0.353mm"/>

When you modify edge properties, Designer will inject <corner> definitions. However, the only time a corner definition has any impact on the rendering is when the radius attribute is non-zero.  In this example, the <corner> elements may all be safely removed without changing the rendering of the form.

Hidden Borders

The border definition for border2 looks like:

  <edge thickness="0.882mm" presence="hidden"/>
  <corner thickness="0.882mm" presence="hidden"/>
  <edge thickness="0.3528mm" presence="hidden"/>
  <edge thickness="0.3528mm" presence="hidden"/>
  <edge thickness="0.3528mm" presence="hidden"/>
  <corner thickness="0.353mm" presence="hidden"/>
  <corner thickness="0.353mm" presence="hidden"/>
  <corner thickness="0.353mm" presence="hidden"/>

I’m not sure how I ended up with this configuration, but as you can see, all the edges and corners are hidden.  This would be represented more efficiently as <border presence="hidden"/> or, better yet, no border element at all.  But before cleaning up this syntax, bear in mind that there can be a useful purpose here.  If you have script that toggles the presence of border edges, it is useful to have the edge properties (e.g. thickness, color) defined in the markup.  If you remove the markup, you will need to set those properties via script.

Zero Margins

If you have edited object margins, you could end up in a situation where your margin element looks like:

<margin topInset="0mm" bottomInset="0mm" leftInset="0mm" rightInset="0mm"/>

Since the default margin insets are all zero, this element can be safely removed.
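A minimal sketch of the check such a macro could perform.  The attribute-map representation is an assumption for illustration; a real macro would read the margin node in the form DOM:

```javascript
// A <margin> element is removable when every inset is the default (zero).
function isRemovableMargin(attrs) {
  var insets = ["topInset", "bottomInset", "leftInset", "rightInset"];
  return insets.every(function (name) {
    var value = attrs[name];
    // an unspecified inset already takes the default value of zero
    return value === undefined || parseFloat(value) === 0;
  });
}

console.log(isRemovableMargin({ topInset: "0mm", bottomInset: "0mm",
                                leftInset: "0mm", rightInset: "0mm" })); // true
console.log(isRemovableMargin({ leftInset: "2mm" })); // false
```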

How frequently this syntax bloat occurs in your forms depends on your editing patterns. In my testing against several customer forms, I saw size reductions of 10 – 15% by removing these elements.


Editable Floating Fields V3

I had a user report a bug in the floating field sample  — the sample described in these blog entries: Version1   Version2

I’ve updated the code again.  The specific problems fixed were:

  1. The editing field can now be unbound (binding="none"). This is important in cases where you are using an XML schema and there isn’t data that you can bind to the editor field.
  2. There was a problem with preserving trailing spaces in edited values.  The easiest fix was to strip trailing spaces from the edited values.

Here is the updated form.  And the updated fragment.


Yesterday I discovered that we have an undocumented script object property that you might find useful.

The root xfa object has a property called “context”.  This property is a reference to the object that is hosting a calculation/validation or event.  i.e. it returns what is normally referenced by “this” in those scripts.

I recently came across a scenario where knowing the source context is very useful.  In my case, I wanted a function in a script object to know the calling context.  It worked well to use xfa.context.

I have attached a sample form where I have a set of expense report items.  I wanted to write a sum() function in JavaScript (rather than using FormCalc).  Of course, I wanted the sum function to be re-usable so I put it in a script object on the root subform.  You call the function with a SOM expression that returns the fields to be summed, e.g. something like sum("item[*].amount") (the field names here are illustrative).


The challenge here is that the SOM expression is relative to the field calculation that uses it.  That means this code will not work:

function sum(sExpression) {
  // "this" is at the form root
  // and won’t find the result
  var oFields = this.resolveNodes(sExpression);
  // ...
}

One workaround is to provide a fully explicit SOM expression:


But I don’t like this approach.  The calculation is now not encapsulated.  If the form author modifies the hierarchy in any way, the script will fail.  It also means that it’s very difficult to place this logic in a fragment where you don’t know the absolute context.

A second workaround is to resolve the nodes before calling the method:


I don’t really like this either – it reduces the readability of the code. 
Instead, I used this:

function sum(sExpression) {
  // resolve sExpression in the context
  // of the calling script
  var oFields = xfa.context.resolveNodes(sExpression);
  // ...
}

Note that xfa.context is a read/write property, but at this point I have not found a useful reason to assign a value to xfa.context.

Shared Data in Packages Part 2

Last week I started to outline a design pattern for sharing data in a package.  Again, the general problem we’re trying to solve is to allow multiple forms in a PDF package to exchange data, so that values entered in fields common to multiple forms are propagated.  Part 1 of the problem is establishing a communication mechanism between documents.  Once the documents have been disclosed to each other, any document in the package can modify any other document.  Today’s Part 2 entry describes how to do the sharing.

Data mapping strategies

Implementing data sharing means solving a mapping problem: How do we know which fields in one PDF correlate to which fields in other PDFs? There are several techniques we could choose, including:

  1. Use the same name for common fields
  2. Generate a manifest that explicitly correlates fields
  3. Base all forms in the package on a common data schema

For my solution, I’ve chosen the 3rd option: common schema.  The underlying assumptions are:

  • Field values get propagated by assigning values in the data dom — a data value from one form will have the same data node address in each of the other forms
  • The data dom in the package document will hold the aggregate data of all forms in the package — this is the ‘master copy’ of the data.  Note that the package document does not have to have fields correlating to all the data elements.
  • Attached forms will have their data synchronized from the master copy of the data when they are launched in Reader

Data Sharing Algorithm

With these assumptions in place, the actual algorithm is fairly simple:

  1. When any attachment opens, it registers with the package document
    (the topic of Part 1)
  2. At registration, all data values of the attachment are synchronized from the master copy of the data in the package
  3. While the attachments are open, detect when field values change.  The technique used to detect field value changes is to use propagating enter/exit events. We save the field value at field enter, and compare it to the result at field exit. 
  4. When a field value changes, send a synchronize message to the package document.
  5. When the package document gets a synchronizing message, it updates the master copy of the data and then propagates the change to all other open attached PDFs.
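The registration and synchronize steps above can be sketched in plain JavaScript.  PackageDoc, register, and setValue are hypothetical names for illustration, not the Acrobat object model:

```javascript
// Package-side master copy and fan-out logic (illustrative sketch).
function PackageDoc() {
  this.masterData = {};    // master copy: data-node address -> value
  this.attachments = [];   // attachment docs registered at docReady (Part 1)
}

// Step 2: on registration, push the master copy into the new attachment.
PackageDoc.prototype.register = function (doc) {
  this.attachments.push(doc);
  for (var address in this.masterData) {
    doc.setValue(address, this.masterData[address]);
  }
};

// Step 5: update the master copy, then propagate to all other attachments.
PackageDoc.prototype.synchronize = function (sender, address, value) {
  this.masterData[address] = value;
  this.attachments.forEach(function (doc) {
    if (doc !== sender) {
      doc.setValue(address, value);
    }
  });
};

// A stand-in for an open attachment, recording the values it is sent.
function FakeAttachment() { this.values = {}; }
FakeAttachment.prototype.setValue = function (address, value) {
  this.values[address] = value;
};

var pkg = new PackageDoc();
var a = new FakeAttachment(), b = new FakeAttachment();
pkg.register(a);
pkg.register(b);
pkg.synchronize(a, "form1.subform1.name", "Jones");
console.log(b.values["form1.subform1.name"]); // "Jones"
```

Because the fan-out lives in the package document, swapping in a different conflict or mapping policy means editing only this one script.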

One benefit of this approach is that the actual logic used to synchronize the data resides in the package document. This means you can customize your data sharing algorithm by modifying only the script in the package.

Design Experience

The really good news is that you can put this all together in a very simple design experience.  With today’s sample, there are two fragment subforms: packageSync and embeddedSync. You can probably figure out where they each go.  The fragment subforms have all the logic needed to register and synchronize the documents.  They contain propagating enter/exit events so that all fields in the form are automatically synchronized.  So the Design experience is as simple as:

  1. Drag the packageSync fragment on to a package subform
  2. Drag the embeddedSync fragment onto each attachment form
  3. Attach the embedded forms to the package form

Global Submit

Since the package document holds the aggregate of all the data, a global submit operation can be achieved by simply submitting the data from the package document.


There are some limits to the synchronization that you should be aware of:

  • No handling for complex field values: rich text, images, multi-select choice lists
  • Does not synchronize subform occurrences
  • Does not synchronize calculated values
  • Because we are relying on propagating events, the solution works only in Reader 9.1 or later

Each of these problems is solvable. I was just too lazy.

The Sample

The sample form and all fragments are in this zip file. Try opening PackageSample.pdf and one or both attachments.  Fill in fields and observe that they get propagated.  Note that any field that has focus will not get updated until you exit the field.

Shared Data in Packages Part 1

In a previous post I warned about using doc.disclosed = true.  There’s just too much risk from some rogue form that you might happen to open.  And yet, there are some very compelling applications we can develop if we can selectively disclose a document to another trusted PDF.  The application I have in mind is where we share access to the various documents that are open in a package.

I want to be able to propagate shared data between a package document and its attached forms.  When multiple forms capture the same data (e.g. name, address) you’d like to be able to capture it once and have the values automatically shared with each form in the package.

The shared data problem had two parts:

  1. Establishing a trusted connection between a package document and its
    embedded documents
  2. Propagating field values between documents

The first problem was very difficult.  It’s the topic of today’s post.
There will be a part 2 where I provide a solution for the data sharing problem.

Today? We’re pretty much in the deep end.

Opening an embedded document

A package could establish communications with its embedded documents by opening each of them using: doc.openDataObject().  But this solution doesn’t scale.  Your package could have dozens of embedded forms.  We can’t assume it works to open all of them.  Eventually we’ll run into performance problems.

What we want is to get a handle to the embedded documents that the user has launched.  And there is nothing in the Acrobat object model that will tell you which of your child documents are open.

Step 1.  Modify the app object

We need a technique where the host/package document can expose an API that selectively discloses itself — just to its children.  The way to expose an API that all open documents can share is to modify the app object.  e.g. if I write
this code in one document:

app.myfunc = function() { return "hello world"; };

then any other open document can call app.myfunc();

Step 2.  Add a disclosedPackages object

The code outline below shows how to add a disclosure API to the app object:

// At docReady, the host document/package adds a package
// disclosure function/object to the Acrobat app object
var fDiscloseObject = new function() {
    // private list of disclosed package functions
    var disclosedPackagesList = [];
    this.disclose = function(packageDoc) { ... };
    this.undisclose = function(packageDoc) { ... };
    this.findPackage = function(embeddedDoc) { ... };
};
if (typeof(app.disclosedPackages) === "undefined") {
  app.disclosedPackages = fDiscloseObject;
}

After executing this code, any open PDF can call the methods of app.disclosedPackages.  The specific usage:

On docReady, a package will disclose itself to its attachments by calling: app.disclosedPackages.disclose(;

On docReady an attached document will try to locate its package document by calling app.disclosedPackages.findPackage(;
Once it has a handle to its package document, it can call synchronize methods found in the package.

At docClose the package will call: app.disclosedPackages.undisclose(packageDoc);

Step 3. Validate an embedded child

The app.disclosedPackages object maintains a private
list of disclosed package documents.  We need to selectively disclose the package to any
of the children that calls app.disclosedPackages.findPackage(embeddedDoc);

The problem is: How do we determine whether the candidate document is
actually a child?

There are a couple of tests we apply:

1) Check whether the candidate path is consistent with an attachment.  The doc.path property of an embedded PDF is constructed from the path of the parent document plus the name of the embedded file.
Loop through the parent’s embedded objects (doc.dataObjects) and look for any that have the same path as the candidate object.

Once we’re satisfied that the paths match, we perform test 2.

2) The package opens the embedded object that has a path matching the candidate. The package now has two doc objects and needs to confirm that they are the same.  The technique we use is to modify one and see if the modification shows up in the other.
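The modify-and-observe check in test 2 is easy to sketch; the marker property name here is arbitrary:

```javascript
// Confirm two handles refer to the same underlying document by writing
// a marker through one handle and reading it back through the other.
function isSameDocument(docA, docB) {
  var marker = "marker_" + new Date().getTime();
  docA.__syncCheck = marker;                  // modify one handle...
  var same = (docB.__syncCheck === marker);   // ...observe through the other
  delete docA.__syncCheck;
  return same;
}

var d = {};
console.log(isSameDocument(d, d));   // true
console.log(isSameDocument({}, {})); // false
```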

Step 4. Spoof-proof the code

Unfortunately we live in a world where we have to take extra precaution to protect ourselves from hackers.  After all, that’s why we don’t use doc.disclosed in the first place.  In step 2 we added a JavaScript API accessible to every PDF open on our system.  How do we ensure that this API cannot be replaced by malicious JavaScript? After all, if we modified the app object, another document could overwrite our code with their own script.

Check your source

var fDisclose = new function() {
    // private list of disclosed package functions
    var disclosedPackagesList = [];
    this.disclose = function(packageDoc) { ... };
    this.undisclose = function(packageDoc) { ... };
    this.findPackage = function(embeddedDoc) { ... };
};
if (typeof(app.disclosedPackages) === "undefined") {
    app.disclosedPackages = fDisclose;
}
// Make sure the disclosedPackages function has not been
// replaced/spoofed
if (app.disclosedPackages.toSource() === fDisclose.toSource()) {
    // safe to disclose ourselves
}

Note that we disclose ourselves only after making sure the source is the same as the original.

The clever JavaScript coder will point out that the toSource() method can be overridden. Not so.  The Acrobat JavaScript implementation does not allow overriding the toSource() and toString() methods of objects.

Naming Conflicts

Something to be careful about is managing different versions of this code.
If there are two variations of this code in different PDFs and both use the same name (app.disclosedPackages), one of them will fail.  So if you write your own version of this, use a unique name.  Better yet, use a unique name that incorporates a version number so you can manage the code over time.

What? No Sample?

Next post.  I promise.

PaperForms Barcode Performance

We have had some feedback around the performance of paper forms barcodes on large forms.  Seems that when customers use multiple barcodes to process large amounts of data, their form slows down. 

The reason the form slows down is because the barcode recalculates every time a field changes.  The calculation is doing several things:

  1. Generate minimal, unique names for each data item. When the names are included in the output, we want them to be as terse as possible, while still uniquely identifying the data.  To do this, each name needs to be compared to all other names.
  2. Gather data values to be included in the output
  3. Format the result as delimited text

It’s the first item that takes the bulk of the time.  In order to find the minimal name, the script compares each data node name against all others and iteratively expands the names until they are unique.  The algorithm appears to have O(n²) performance, which means that it degrades quickly when the number of data elements grows large.

There are three techniques you can use to improve the performance:

1. Do the work in the prePrint event

Move the barcode calculation to the prePrint event. In the barcode properties, uncheck "Automatic Scripting" and move the script from the calculate event to the prePrint event. Now, instead of recomputing the barcode every time a field changes, we compute the barcode only once — just before it gets printed.

2. Use unique field names

When the script encounters duplicate field names, it does lots of extra work to resolve them.  So don’t ask the script to do so much work: use unique field names.  For example, rather than repeating the same field name in every row subform, give each field a distinct name.
Not only will the script complete more quickly, but the names written to the barcode value will be shorter.  When they are not unique, they get prefixed with their subform name.  When they’re unique, they are left unqualified.

3. Do not include names — and modify the script

Since the bulk of the work that the script does is to come up with minimal unique names, let’s not write out the names.  Uncheck the "Include Field Names" option.  Unfortunately, the script goes through the effort to produce unique names even when names are not included in the output. You need to modify the script to prevent it from calculating names. Uncheck "Automatic Scripting" and add the two if (includeFieldNames) guards shown below (the added lines appear without line numbers; remember to close the new braces after the guarded sections).

19 function encode(node)
20 {
21   var barcodeLabel = this.caption.value.text.value;
22   if (includeLabel == true && barcodeLabel.length > 0)
23   {
24     fieldNames.push(labelID);
25     fieldValues.push(barcodeLabel);
26   }
28   if(collection != null)
29   {
30     // Create an array of all child nodes in the form
31     var entireFormNodes = new Array();
       if (includeFieldNames) {
32       collectChildNodes(, entireFormNodes);
34     // Create an array of all nodes in the collection
35     var collectionNodes = new Array();
36     var nodes = collection.evaluate();
38     for(var i = 0; i < nodes.length; ++i)
39     {
40       collectChildNodes(nodes.item(i), collectionNodes);
41     }
43      // If the form has two or more fields sharing the …
44     // parents of these fields, as well as the subscript …
45     // their parents, will be used to differentiate …
46     // to take as little space in the barcode as possible, …
47     // data in the object names only when necessary …
       if (includeFieldNames) {
48       resolveDuplicates(collectionNodes, entireFormNodes,…);

If you implement this option, odds are you won’t bother with the first two methods.  The performance of the script is now O(n) and should work fine in the calculate event with non-unique names. And … is it just me?  I get a kick out of watching the 2D barcode re-draw itself every time I modify a field. 


Accessibility Bug Fix

Recently one of our customers discovered an accessibility bug that caused read order and tab order to diverge. When we investigated it, we discovered that the bug existed in the Reader runtime, but that we could work around the bug by changing the syntax that Designer generates. Fixing the problem in Designer is much preferable, since that doesn’t require upgrading Reader. Of course, waiting for a fix in Designer is not easy either. Fortunately, the macro capability offers us an alternative way to disseminate a fix.  First the usual caveat: macros are an experimental, unsupported product feature.

The Bug

The bug scenario is when a custom tab order exits a subform; the read order follows the next-in-geographic order, while the tab order follows the explicit custom order.  Described differently, imagine a form with fields named "One", "Two" and "Three" and a subform named "S".  The geographic order is: S.One, Three, Two, but the custom tab order is S.One, Two, Three. In the Reader runtime, the tab order correctly follows the custom order, but the read order gets confused on exiting the subform.

If you’re only interested in correct tab order you will not notice the problem.  However if you are using a screen reader and following read order, you will follow an incorrect order.  (If you examine the resulting form with tools like inspect32 or AccExplorer32 you’ll be able to see the result without actually running a screen reader.)

I have included a sample form that illustrates the problem, and a version of the form with the fix applied.

The Fix

Down below, I’ll discuss the technical details of the fix.  But first, the macro (rename it to FixReadOrder.js).  When you install and run the macro, it will scan the entire form searching for all the places where a custom tab order exits a subform.  Each of these places will be modified.

You will need to re-run the macro every time you make edits to the form that modify tab order.

The macro includes some logic so that it will stop functioning in the next version of Designer – given that we expect the bug to be fixed.

The Deep End

Custom tab order is controlled by the <traversal> and <traverse> elements.  The (stripped down) syntax of the original form looks like this:

 1 <subform>
 2  <subform name="S">
 3    <field name="One">
 4      <traversal>
 5        <traverse ref="Two[0]"/>
 6      </traversal>
 7    </field>
 8    <traversal>
 9      <traverse operation="first" ref="One[0]"/>
10    </traversal>
11  </subform>
12  <field name="Three"/>
13  <field name="Two">
14    <traversal>
15      <traverse ref="Three[0]"/>
16    </traversal>
17  </field>
18  <traversal>
19    <traverse operation="first" ref="Subform1[0]"/>
20  </traversal>
21 </subform>

There are two variations of the traverse element used here:

  1. operation="first": specifies the first child of a container (subform)
  2. operation attribute is unspecified (the default) — which means operation="next". Specifies the next element.

The runtime problem occurs when the traverse from field "One" at line 5 does not correctly navigate to field "Two".

Here is the fixed version:

 1 <subform>
 2   <subform name="S">
 3    <field name="One">
 4       <traversal>
 5         <traverse ref="Two[0]"/>
 6       </traversal>
 7     </field>
 8     <traversal>
 9       <traverse operation="first" ref="One[0]"/>
10       <traverse ref="Two[0]"/>
11     </traversal>
12   </subform>
13   <field name="Three"/>
14   <field name="Two">
15     <traversal>
16       <traverse ref="Three[0]"/>
17     </traversal>
18   </field>
19   <traversal>
20     <traverse operation="first" ref="Subform1[0]"/>
21   </traversal>
22 </subform>

The macro added a traverse element at line 10. Now we’ve explicitly told the subform which child to navigate to first and also where to navigate to when the children are finished.  In fact, we end up with two traverse elements pointing to field "Two".  Not surprisingly, the runtime correctly moves to "Two" in both tab order and read order.