Security misconceptions – Watermarks, Usage Rights and Rights Management

There is some confusion about what the features of Acrobat, and of PDFs in general, offer by way of securing documents. I would like to give a very cursory overview of the items that I have so far seen users consider “security.”

To be clear, by “security” I mean controlling the ability to access the contents of the PDF, thus safeguarding information from falling into the wrong hands.

1) Not Security-Oriented

a) Watermarks

Unlike on your Dollar, Euro or Pound notes (etc), the watermark is NOT a guarantee of integrity, veracity or anything at all.

In the PDF world, a visible watermark only exists as a notification mechanism. If a watermark says “Confidential,” it is only warning the viewer that the content is confidential, but will not otherwise try to make itself indelible.

It is meant to be a very visible mark on the page, with the added property of not completely obscuring the items underneath (allowing readability to be maintained).
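To illustrate that a watermark is nothing more than regular page content, here is a minimal sketch using the open-source iText 5 library (not an Adobe tool; the file names, placement and wording are hypothetical). It stamps a translucent “CONFIDENTIAL” over a page – and anything drawn this way could just as easily be covered or removed by another tool:

import java.io.FileOutputStream;
import com.itextpdf.text.Element;
import com.itextpdf.text.pdf.BaseFont;
import com.itextpdf.text.pdf.PdfContentByte;
import com.itextpdf.text.pdf.PdfGState;
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.PdfStamper;

PdfReader reader = new PdfReader("input.pdf");
PdfStamper stamper = new PdfStamper(reader, new FileOutputStream("watermarked.pdf"));
PdfContentByte over = stamper.getOverContent(1); // draw on top of page 1's content
PdfGState gs = new PdfGState();
gs.setFillOpacity(0.3f); // translucent, so the content underneath stays readable
over.setGState(gs);
over.beginText();
over.setFontAndSize(BaseFont.createFont(), 48); // defaults to Helvetica
over.showTextAligned(Element.ALIGN_CENTER, "CONFIDENTIAL", 300, 400, 45);
over.endText();
stamper.close();
reader.close();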

b) Certification

A Certified PDF carries a digital signature certifying that certain things can and cannot be done with it. Namely:

-A PDF certified to run privileged scripts can run scripts requiring special privileges, such as writing to the hard drive.
-A PDF certified to be unmodified means that so long as the PDF has only been modified within given parameters (fields filled in, for example), the certification will hold. If a visual aspect of the PDF changes, though, the certification will be broken, and Acrobat will report an error.

Certification covers a number of other use cases as well, but I hope the above illustrates sufficiently why this is not a security-related item, but rather a usability concern.

c) Reader Extensions Usage Rights

Acrobat and LiveCycle can extend the usability of PDFs to Adobe Reader, the free PDF viewing application. By extending usability features, you can allow Reader users to fill in forms and save that content, add comment annotations, and use other functionality.

However, if the same extended form is opened in Acrobat, the user can do pretty much anything to the PDF that Acrobat has at its disposal.

REUR adds functionality to Reader; any functionality it does not add is simply a restriction that Reader already had. In other words, it enables features – it does not protect anything.

2) Security-Oriented

a) Password Protection

Using password protection, you can encrypt the PDF so it can only be opened by a person who has the password. You can also prevent the PDF from being used in certain ways, such as modifying the pages.

You cannot, however, track who has opened the PDF, when, and from what IP. That is the domain of Rights Management.
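Incidentally, password protection can also be applied outside Acrobat. Here is a minimal sketch using the open-source iText 5 library (the file names and passwords are hypothetical), encrypting a PDF with AES-128 and restricting usage to printing only:

import java.io.FileOutputStream;
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.PdfStamper;
import com.itextpdf.text.pdf.PdfWriter;

PdfReader reader = new PdfReader("input.pdf");
PdfStamper stamper = new PdfStamper(reader, new FileOutputStream("protected.pdf"));
// the user password opens the document; the owner password governs the permissions
stamper.setEncryption("userpass".getBytes(), "ownerpass".getBytes(),
        PdfWriter.ALLOW_PRINTING, PdfWriter.ENCRYPTION_AES_128);
stamper.close();
reader.close();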

b) LiveCycle Rights Management (aka Policy Server)

LiveCycle 7 introduced Policy Server, later renamed to LiveCycle Rights Management. Adobe LiveCycle/ADEP Rights Management protects your documents from being accessed by parties you have not authorized to do so.

This allows the document publisher to:
-protect with a user ID/password combination
-force the identification to go to a remote server
-restrict usage rights depending on the user’s group

With this in mind, you must be aware that ONLY persons that are trusted should be granted a login to the document. If, on a document that you want to protect, you have granted access to a person you do not Trust Entirely, you have opened the door to having your information stolen – be it via screen grab, or simply photographing the screen with a camera.

It’s like having the best vault to protect your secrets and giving the secretary the passcode for safekeeping. If the secretary is honest, they will leave your items well alone. But if you did not trust them in the first place, the vault, for all its technology and mechanisms, cannot protect your secrets – because you’ve willingly given the key to the intruder.

3) A note on Rights Management and SSL

To use Adobe LiveCycle Rights Management, you need to set up the server to serve SSL connections, and configure the callback URL appropriately in the LiveCycle/ADEP Rights Management service configuration.

Note that if the server’s SSL certificate specifies external CRLs, you must be able to grant the client application free network access to the CRL’s URL – otherwise the connection will fail.

I hope that this article has allowed you to understand the subtle difference between perceived security tools and actual security features – and most importantly, the fact that if you suspect a user is likely to try to do Bad Things with your information, you should not give them the keys to the vault.

My own Rule Number One of security is: “don’t trust anyone, not even those you trust.” Then add exceptions, based on well-founded assumptions.

– Tai

Flat PDF vs. XFA Form PDFs

A frequent mistake is to assume that, since XFA Forms can be saved as PDFs, they will behave like any other PDF. The truth is: XFA PDFs and flat PDFs are entirely different beasts.

1) About PDF

“PDF” stands for Portable Document Format. Initially, PDFs were meant to be a digital counterpart to printable documents. You can open a PDF, and see the layout exactly as the page designer intended, with pictures and page breaks in the right places, ergonomic page margins, and most noticeably, with the original fancy fonts preserved. This is the original flat PDF.

Flat PDFs contain the page render data – a binary encoding of how the document should visually be drawn on screen or on paper (minus interactive items such as videos and flash animations).

PDF has come a long way since. It really has embraced the idea of “portable document,” the idea of the distribution of a published, polished document, and seeks to be all that printed documents could never be.

You can embed videos, flash animations and 3D spaces, protect them with encryption, limit their usage with DRM solutions (Adobe’s own is called LiveCycle Rights Management), annotate through comments and highlighting, measure elements, digitally sign them, make form fields to be filled in – and make forms that change according to the data inside them. Wow.

2) About forms – AcroForms

The first iteration of interactive form filling came as AcroForms.

At its most basic, an AcroForm is a flat PDF form that has some additional elements – the interactive fields – layered above the flat render, which allow users to enter information, and from which developers can extract data.

You can create these using Adobe Acrobat, or any third-party PDF creation application that allows creation of PDF forms.
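Because the fields sit in a well-defined layer over the render, third-party libraries can address them directly by name. As a hypothetical sketch using the open-source iText 5 library (the file and field names are made up):

import com.itextpdf.text.pdf.AcroFields;
import com.itextpdf.text.pdf.PdfReader;

PdfReader reader = new PdfReader("acroform.pdf");
AcroFields fields = reader.getAcroFields();
// each field is addressable by name, independently of the page render
String value = fields.getField("customerName");
System.out.println("customerName = " + value);
reader.close();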

Flat PDFs can be annotated (comments, highlighting, and various other scribbles as desired), as these annotations can be mapped to an {x,y} location on the page.

Flat PDFs can have their pages extracted, as each page is already defined in the render.

Flat PDFs can be linearly optimized for fast web viewing, which ensures that the data for the first page occurs before the data for the second page, which in turn comes before the data for the third page, and so on (see the qpdf example below).

Such features are not available to XFA PDFs.
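As an aside on linearization: it is not an Acrobat-only capability. The open-source qpdf command-line tool, for instance, can linearize a flat PDF and verify the result (file names hypothetical):

qpdf --linearize input.pdf optimized.pdf
qpdf --check optimized.pdf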

3) About forms – Dynamic XFA Forms

Dynamic forms are based on an XML specification known as XFA, the “XML Forms Architecture”. The information about the form (as far as PDF is concerned) is very vague – it specifies that fields exist, with properties, and JavaScript events, but does not specify any rendering. This allows them to redraw as much as necessary, with subforms repeating on the page, sections appearing and disappearing as appropriate, the ability to pull in form fragments stored in different files, and objects re-arranging as you (the developer) dictate.
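To give an idea, a dynamic form’s template boils down to XML along these lines – a simplified, hypothetical fragment (the real grammar is much richer): fields and subforms are declared with properties, occurrence rules and events, but with no pre-computed page render anywhere:

<template xmlns="http://www.xfa.org/schema/xfa-template/2.8/">
  <subform name="form1" layout="tb">
    <subform name="lineItem">
      <occur min="1" max="-1"/> <!-- may repeat as often as the data requires -->
      <field name="description" w="80mm" h="9mm">
        <ui><textEdit/></ui>
      </field>
      <field name="amount" w="30mm" h="9mm">
        <ui><numericEdit/></ui>
        <event activity="exit">
          <script contentType="application/x-javascript">
            // the layout may reflow after this script runs
          </script>
        </event>
      </field>
    </subform>
  </subform>
</template>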

This also means that some features of AcroForms and flat PDFs are lost.

XFA Forms cannot be annotated. Reader (or Acrobat) cannot know whether your custom code may change the layout. As such, with no render data, and a chance that the rendering may be drastically altered on the fly, local annotations cannot be implemented. An annotation only makes sense at an {x,y} location, but if the item you are annotating changes location, your annotation becomes meaningless, if not misleading.

XFA Forms cannot have their pages extracted. There is no render data to determine pages. Change some data in the form, and the layout of the pages may change drastically. You must “flatten” the PDF before extracting pages, thereby losing interactive properties.

XFA Forms cannot be optimized for fast web viewing. There is no render information. Data at the end of the document may affect displays on the first page of the document.

4) Acrobat is not a word processor

A final common misconception is that you can edit pages in Acrobat as you could in Word. This is not the case. Acrobat focuses on the integrity of the layout – as such, if you have a page with 2 paragraphs, each 5 lines long, that is what Acrobat’s PDF engine will commit to showing.

The editing tools available are provided for minor cosmetic changes – nothing more. It is essential to keep the original documents – Word, PowerPoint, OpenOffice, TIFF or otherwise – for your editing process. Once converted to flat PDF, the intent is that the overall layout should never change.

For the same reason, XFA Forms are in conflict with Acrobat’s editing tools – the latter operate on the render layer, which does not exist in a saved XFA PDF. Such editing tools lose their meaning when faced with XFA Forms.

I hope this helps clear some of the confusions around what XFA PDFs can and cannot do in Acrobat and other PDF manipulation tools.

Understanding the LiveCycle GDS – and freeing up disk space

LiveCycle, as a piece of enterprise software, tends to assume that you may want to keep a quantity of data around for posterity. Long-lived processes can cause a lot of disk space bloat; whilst this is fine for those who wish to archive extensively, it may not be ideal when running a lower-spec server.

In this article, I will point out the main areas where data accumulates and disk space gets used, and how to clean up.

1) About Short-Lived and Long-Lived processes

Processes (also known as “workflows” or “orchestrations”) are created in LiveCycle Workbench. This tool allows you to create workflows, or processes, organized into Applications; each process can be either long-lived (“asynchronous”) or short-lived (“synchronous”).

When a short-lived process is invoked, the response is only returned once the whole process has run. For this reason, no short-lived process can have a step which requires human interaction – namely, a Workspace task.

When a long-lived process is invoked, the request returns immediately. The process will run, but you will need to get the result through a different request or action. Long-lived processes do not need to have a human-centric activity in them: you could use a long-lived process to send a document to the server for processing, without needing to know what status it ended up in.
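For illustration, invoking a process synchronously through the LiveCycle Java SDK looks roughly like this – a sketch only, where the endpoint, credentials, process name and variable names are all hypothetical:

import java.io.FileInputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import com.adobe.idp.Document;
import com.adobe.idp.dsc.InvocationRequest;
import com.adobe.idp.dsc.InvocationResponse;
import com.adobe.idp.dsc.clientsdk.ServiceClient;
import com.adobe.idp.dsc.clientsdk.ServiceClientFactory;

Properties props = new Properties();
props.setProperty("DSC_DEFAULT_EJB_ENDPOINT", "jnp://localhost:1099");
props.setProperty("DSC_TRANSPORT_PROTOCOL", "EJB");
props.setProperty("DSC_SERVER_TYPE", "JBoss");
props.setProperty("DSC_CREDENTIAL_USERNAME", "administrator");
props.setProperty("DSC_CREDENTIAL_PASSWORD", "password");

ServiceClientFactory factory = ServiceClientFactory.createInstance(props);
ServiceClient client = factory.getServiceClient();

Map params = new HashMap();
params.put("inDoc", new Document(new FileInputStream("in.pdf")));

InvocationRequest request = factory.createInvocationRequest(
    "MyApplication/MyShortLivedProcess", // the process to call
    "invoke",                            // the operation name
    params,
    true);                               // true = synchronous: block until done
InvocationResponse response = client.invoke(request);
Object outDoc = response.getOutputParameter("outDoc");

Passing false instead of true requests an asynchronous invocation – the long-lived style – where the response carries an invocation ID rather than the results.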

Note that for any process that stalls, the associated documents will also be kept, ready for recuperation, analysis and debugging.

2) About the Global Document Store

The Global Document Store, also known as “the GDS”, is a space on the hard drive or in the database (depending on your configuration in the Core System Configuration) where LiveCycle stores documents during the running of processes, and once long-lived processes are complete.

Note that whilst the GDS stores the files themselves, the references to them that processes need are stored in the database. For this reason, the GDS and the database must NEVER be out of sync. Should that happen, any processes that are running would fail, making data recuperation difficult or even impossible.

In short-lived processes, when documents are larger than a certain size, they will be written to the GDS instead of being held in memory. This size is set in the Admin UI as the Document Max Inline Size. When a result document is produced, no matter what its size, it will be written to the GDS. Short-lived processes can return the document itself, or a URL to the document. Accessing this URL will cause LiveCycle to look up the document in the GDS and write it back to the client.

Documents from short-lived processes are removed after their time has passed. The Sweep setting (set in the Admin UI in the Core System Configuration) determines how frequently the GDS is scanned for documents to delete, and its associated Document Disposal Timeout determines how long a document should be kept. If, during a sweep of the GDS, a new document from a short-lived process is found, it is marked for expiry by placing a similarly named marker document in the GDS, with a timestamp indicating the clock time after which the document should be deleted – this clock time is determined by the disposal timeout. Every sweep checks the timestamp, and if the current clock time is past the one specified, the document is deleted. The URLs returned from short-lived processes need these documents to survive the window between the time the URL is returned to the user and the time the user clicks it. It is good to set the Document Disposal Timeout to a value between 30s and 120s, depending on the load expected on the server.
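To make the mechanics concrete, here is a conceptual sketch of that sweep in Java. To be clear, this is NOT LiveCycle’s actual implementation – just an illustration of the marker scheme described above, with a made-up GDS path and timeout, and exception handling omitted:

import java.io.File;
import java.io.PrintWriter;
import java.util.Scanner;

long disposalTimeoutMs = 60 * 1000L; // e.g. a 60s Document Disposal Timeout
File gds = new File("/opt/livecycle/gds"); // hypothetical GDS location

for (File doc : gds.listFiles()) {
    if (doc.getName().endsWith(".expiry")) continue; // skip the markers themselves
    File marker = new File(gds, doc.getName() + ".expiry");
    if (!marker.exists()) {
        // first sweep that sees this document: record its disposal deadline
        try (PrintWriter out = new PrintWriter(marker)) {
            out.print(System.currentTimeMillis() + disposalTimeoutMs);
        }
    } else {
        // later sweeps: delete the document once the deadline has passed
        try (Scanner in = new Scanner(marker)) {
            long deadline = in.nextLong();
            if (System.currentTimeMillis() > deadline) {
                doc.delete();
                marker.delete();
            }
        }
    }
}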

Long-lived processes will write required documents to the GDS before assigning them to a human-centric task so that they can be obtained later when the user actually logs on to process them. At the end of the process, the final collaterals are kept in the GDS for posterity and later review if required.

Thus, for long-lived processes, the files are never disposed of. The default behaviour of the GDS, then, is to grow constantly if long-lived processes are used. If you do not want this to happen, you must perform regular purges.

3) Purging Jobs

In LiveCycle ES, a command-line purge tool is provided to purge jobs that have either completed or been terminated. This still exists in ES2, should you ever need it.

In LiveCycle ES2, the Health Monitor was introduced to offer a graphical UI for performing purges.

In ES2 SP2, a purge scheduler was introduced to automate the purging of jobs at regular intervals.

a) If you are on ES2 SP2

Connect to Admin UI and go to Health Monitor > Job Purge Scheduler

Schedule a One Time Purge for records older than 1 day

b) If you are using ES2 pre-SP2

Connect to Admin UI and go to Health Monitor > Work Manager. Search with the following criteria:

-Category = Job Manager
-Status = Terminated
-Create Time = 2 Weeks
-(iterate over time periods)

Delete any terminated processes that are found.

c) If you are on LiveCycle ES

The purge tool requires some knowledge of the contents of the LiveCycle database; for this reason I will not cover it in this article.

You can find most of the required information at the link below; however, you would be best advised to operate under the guidance of the Enterprise Support service, if you can.

http://www.adobe.com/content/dam/Adobe/en/devnet/livecycle/pdfs/purging_processes_jobs.pdf

4) A note on process recordings

I would like to add a special note here concerning process recordings. These can be activated via Workbench by right-clicking on a process or on a process canvas, and selecting Process Recordings > Start Recording.

This will record the activity of every run of the process – including the contents of LiveCycle variables, the branches followed, etc., at EVERY step of the process – for later review in Workbench.

Even processes not started in Workbench will be recorded.

For this reason, process recordings must be activated ONLY for debugging purposes.

Process recordings are heavy, and are not suitable for a production server, both in terms of performance and space used. They can easily be deleted via Workbench through the playback dialog.

Updating a LiveCycle Process on a Live System

Recently I have been seeing reports of stalled/failed processes, both short-lived and long-lived. It turns out that in some cases, the process developers were updating processes on their live systems without taking due precautions. The following post explains what is actually happening under the hood, and why you can’t simply “hot change” a process.

A simple process

Imagine you have a process called MyNotify with two steps in it: a first task (not a start point) AssignUser, and a second task SendEmail. Pretty straightforward.

You have deployed this process, and process instances are merrily being kicked off. Great. Now you realize that you don’t want the AssignUser task; rather, you want a Workspace Start Point. You remove the AssignUser task, configure the Workspace Start Point to open the same form as you had in the now-deleted AssignUser task, then save and deploy.

You may now start getting calls from your users saying their form submission never resulted in an email. Upon investigating, you will likely find errors saying that “Job ID could not be found for Job [...]”.

Oh no. What happened?

Tasks not found

Let’s wind back to when there were two steps. When a process instance is kicked off, an entry is made in the database saying that process MyNotify has been kicked off by user “Alice” (for example), and that it has been assigned to Alice, at the AssignUser task.

When Alice submits the form, LiveCycle goes to its database and checks the process – Alice just submitted the AssignUser step, so it checks MyNotify for an AssignUser step, and figures out what to do next from that.

If you have deleted the AssignUser task from MyNotify, LiveCycle will be unable to determine what to do next – and will stall the process instance. Future instances should work fine (they are created after the change), and anything that was already at the SendEmail point in the process will be fine as well (it is past the change point). It is specifically process instances in a state affected by the change that risk failure – and that failure happens without any possibility of roll-back or recuperation (bar performing some incantations in the database to recuperate the data, and then constructing some half-processes to mimic the pre-change parts of the process, in the hope of pushing the data onwards – a lot of avoidable hassle, itself prone to failure).

Staying safe

It is understandable to think that if the process were modified, already-running instances would continue with the older version. This is not so – or at least, not in this scenario.

When kicking off a new process instance, LiveCycle does not automatically make a copy of the process definition to be followed should a new edit of the process come into existence. Maintaining a copy of the full configuration of a process for each instance created would cause severe performance issues on the server, so a more de-coupled implementation serves best. The work of ensuring that a new copy of the process configuration is created is left to the process developer.

Once a process is deployed, there are only two “safe” ways of updating the process.

Application Version

The first is to create a new Application version, and when all the new changes are made, deploy the new version.

  • All old instances will continue their course in the old version.
  • Workspace processes will kick off in the new version.
  • Calls to the old version of the process will still call the old version.

Once you are certain that NO more instances of the old version are running, and that no applications rely on the old version, you can safely undeploy it from the runtime, and delete it if you so wish.

When will references point to the new version automatically?

Having done this, all references to sub-processes inside the same application will be updated, as will all resources referenced relatively inside the application. For example, MyApplication/1.0/ProcessA can reference /FormA (in Assign User tasks, for example), which means “the current application version’s FormA”. A sub-process on the process canvas will be referenced relatively if it is contained inside the same application.

When do you need to update references manually?

Take special heed, however – processes and resources that do not reside in the same Application do not see their references updated; they are absolutely referenced. If MyApplication/1.0/ProcessA is upgraded, it will not change its references to, for example, OtherApplication/1.0/OtherProcess. Similarly, if we upgrade to OtherApplication/2.0/OtherProcess, then MyApplication/1.0/ProcessA will still point to OtherApplication/1.0/OtherProcess.

Finally, note that if you create a new version of an application, all resources and processes are copied over with new version numbers too, yet any textual references to other resources remain the same. So if MyNotify/1.0/ProcessA has a variable referencing MyNotify/1.0/FormA, then MyNotify/2.0/ProcessA will also reference MyNotify/1.0/FormA; the same goes for resources such as XDPs, XSLTs, and any other resources stored in an Application. The reason for this is that these references are stored as “strings” (pieces of text), and Workbench cannot, upon creating a new version, differentiate which snippets of text throughout the Application are version references to update, which versions should not be updated, and which strings do not actually contain versions at all.

Down time (clean “brute force” update)

The other way is to plan downtime.

  • Prevent people from kicking off new processes
  • Let all processes complete
  • … and only then deploy the new edit of the process

You could choose the second method to avoid having to update references to resources, but it would defeat the purpose of having versioning at all, and does, unavoidably, incur the overhead of downtime.

Central’s !Replace! command requires a caret line

Central Pro Output Server has preamble syntax which allows you to replace lines in the field nominated data. There is, however, an additional requirement that is implied but not explicitly stated:

!Replace! only works with lines that start with “^”

The following definitions:

^define group:!Replace!^bogus!content ^comment content
^define group:!Replace!xyz ^comment xyz
^define group:!Replace!^standalone ^standalone complex

With the following data:

^bogus content
xyz
^standalone

Produce the following result:

^comment content
xyz
^standalone complex

If you want to have placeholder data, the data elements need to start with a caret “^” character. If you are passing in XML, for example, it could look like this:
<node>^placeholder</node>

A line to be replaced using !Replace! must be quoted verbatim. If you want to use replacement patterns, you will need to add your own pre-processing.
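For example, if your incoming data contains placeholder lines that vary (say “^placeholder-001”, “^placeholder-002”, and so on) and you want them all handled by a single verbatim !Replace! definition, you could normalize them before handing the data to Central. A minimal, hypothetical Java sketch (the pattern and file names are made up, and exception handling is omitted):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.PrintWriter;
import java.util.regex.Pattern;

Pattern placeholder = Pattern.compile("^\\^placeholder-\\d+$");

BufferedReader in = new BufferedReader(new FileReader("data.fnf"));
PrintWriter out = new PrintWriter("data-normalized.fnf");
String line;
while ((line = in.readLine()) != null) {
    // collapse every variant onto the one caret line !Replace! knows verbatim
    if (placeholder.matcher(line).matches()) {
        line = "^placeholder";
    }
    out.print(line + "\r\n");
}
out.close();
in.close();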

Central Migration Bridge: centralDataAccess requires lower case FNF commands

Central Migration Bridge allows you to use your Central Pro Output Server inputs with LiveCycle and XFA forms by converting Field Nominated Data to XML using the centralDataAccess operation.

A caveat, however, is that the field nominated data commands need to be in lower case for centralDataAccess to understand them. This will not suit some setups, which have their data applications producing upper case commands.

You can work around this issue by adding an ExecuteScript activity before the centralDataAccess activity. Say the field nominated data was loaded into a document variable called inFNF. The following script reads that data, processes it, and writes it back to the inFNF variable:

import java.util.regex.Pattern;
import java.util.regex.Matcher;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import com.adobe.idp.Document;

// XPath to the process variable in which you loaded the field nominated data
String procvar = "/process_data/inFNF";

StringBuffer buff = new StringBuffer();
String crlf = "\r\n";
Document indata = patExecContext.getProcessDataDocumentValue(procvar);
BufferedReader fnf_in = new BufferedReader(new InputStreamReader(indata.getInputStream()));

String line = fnf_in.readLine();
// group 1 captures the leading "^" (plus optional "$") and the upper-case command;
// group 2 captures the rest of the line (or the end of line), left untouched
Pattern p = Pattern.compile("^(\\^\\$?[A-Z]+?)($|\\s+.*)");

while (line != null) {
    Matcher m = p.matcher(line);
    if (m.find()) {
        // lower-case only the command itself, not the whole line
        line = m.group(1).toLowerCase() + m.group(2);
    }
    buff.append(line).append(crlf);
    line = fnf_in.readLine();
}

// write the converted data back into the same process variable
Document outdoc = new Document(buff.toString().getBytes());
patExecContext.setProcessDataDocumentValue(procvar, outdoc);

The use of regular expressions helps isolate the command and set it to lower case, rather than affecting the whole line.

Cannot install LiveCycle ES SP3 on Linux

There is a known issue with the LiveCycle ES SP3 installer wherein, on some Linux systems, running the installer produces an OutOfMemoryError despite there being more than 1 GB of free memory (you probably don’t need that much, but it’s always safe on a server to have some leeway…)

To get around this issue, you can use a Windows staging platform for a Linux target system: perform the installation there, deploy manually, and remotely run the Configuration Manager. This is also useful for locked-down Linux systems where the app server is installed and accessible (for JBoss this means at least RW permissions on JBoss’s deploy directory), but you do not have the administrative rights to run the LiveCycle installer.

1. What does the patch/SP installer do?

Unlike your average Windows installer or *nix package installer, simply running the LiveCycle Service Pack or patch installer to completion will not have applied it to your live system. What the installer does is copy a series of JAR files to the LiveCycle installation’s holding directory, ready to be packaged into EAR files for deployment – but it does not do the packaging for you.

The packaging is left to the LiveCycle Configuration Manager, which, when launched, will load the configuration from your previous installation run and package the EARs accordingly, taking into account local system paths and parameters.

2. What’s the staging platform?

A Windows staging platform allows you to run the patch installer against a mock LiveCycle installation. You can thus run a patch installer on Windows and compile EARs for *nix, and then upload these files from the Windows box to the *nix box.

To setup a Windows staging platform, run the LiveCycle installer on a Windows machine and select a manual install. It will then ask you whether you want a regular installation or a staging installation. Select the staging option for the target operating system of your choice and complete the install, using the same parameters as you would use if installing natively on your target system.

Now you can run patch installations for your target LiveCycle live system from this “dummy” system.

3. Steps required for installing via a staging platform

  1. Copy the installer packages for LiveCycle and the Service Pack to a Windows machine (Win 2003 Server, Win XP Pro, Win7, Win 2008 Server)
  2. Run the LC installer – it is provided with both Windows and Linux executables, but the significant code is Java, so the same ZIP used for a Linux install can be used for a Windows install
  3. Select a manual install, and then select a staging platform for your target operating system
  4. Once the components are “installed”, run the configuration manager to compile the EAR files
  5. When prompted to deploy the EARs, deploy the EAR files to the application server and restart it
  6. Continue with the configuration manager, pointing it to the LiveCycle server’s IP address

Information is available from page 12 of the following guide:

http://help.adobe.com/en_US/livecycle/es/install_jboss.pdf

Cannot import or export runtime configs from Admin UI

The LiveCycle ES2 documentation reads:

You can export the runtime configuration information for deployed applications.

1. In LiveCycle Administration Console, click Services > Applications and Services > Application Management.
2. Click the name of the application.
3. Click Export Runtime Config and save the configuration file (XML) that is produced.

But some people will not see the “Export Runtime Config” and “Import Runtime Config” buttons. This is unusual behaviour for which I have not yet identified a reason; but here is a workaround:

-Right-click the main area of the page, and choose to “show only this frame”
-Check the URL – replace “selectArchive.do” with “selectApplication.do” and hit enter

You will now be able to see the import and export buttons, fully functional.

Changing the JBoss multicast port for LiveCycle

You may find you need to change the multicast address and port of your LiveCycle cluster – specifically, if your LC cluster shares the same multicast configuration as other clusters on your intranet, they might find each other and try to combine into a super-cluster.

This is a huge issue if, for example, both are LiveCycle clusters, but one is the test cluster and the other is a QA or even production cluster. This can cause all sorts of havoc, ending up with database inconsistencies, lost data, gridlocked requests, etc. If you have a communication or database issue on a cluster, you MUST check your multicast setup as a first step!

Multicasting in LiveCycle ES and LiveCycle ES2 is built into the application server – WebSphere or WebLogic; or JBoss, in our case.

To change the multicast port for a JBoss cluster:
- stop each of the JBoss nodes;
- edit for each node the run.bat (Windows) or run.sh (*nix) script as appropriate;
- identify in each the multicast port, and change it (all to the same port);
- save the changes, and start JBoss again
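For illustration, the multicast settings typically surface as JVM system properties in the run script, along the lines of the excerpt below. The property names here are assumptions and vary with the JBoss version – verify them against your own run script and cluster-service.xml/JGroups configuration before changing anything:

# excerpt from run.sh – property names are assumptions, check your JBoss version
JAVA_OPTS="$JAVA_OPTS -Djboss.partition.name=MyLCCluster"
JAVA_OPTS="$JAVA_OPTS -Djboss.partition.udpGroup=239.255.42.42"
JAVA_OPTS="$JAVA_OPTS -Djboss.hapartition.mcast_port=45577"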

Remember: no two clusters on the same network/intranet may use the same multicast address + port. That is absolute.

Central Pro loops during processing

Sometimes you will find that after some edits in your form, or on inclusion of certain data, Central will enter an infinite loop whilst processing pages.

This can have a number of obscure causes, but the most common is a combination of: incorrect placement of the $POSITION field; inappropriate subform size; and Page Full settings configured to conditionally start new pages.

The $POSITION field

When a subform is drawn on the page, the metaphorical drawing head returns to the leftmost part of the printing area, and slightly lower, ready to draw the next subform.

If the subform contains a $POSITION field, however, the writing head will start drawing where the position field is, as if that position field were its {0,0} coordinate.

If a $POSITION field is in a JFMAIN page (which serves as the template page for other pages), then it will determine where drawing starts.

The Page Full settings

As subforms are placed on the page, the page fills up, eventually reaching the bottom. Page Full settings can be used to determine when to skip to a new page – for example, when there is no more room for a new subform instance plus certain extra subforms (for example, footers, defined “At End-Of-Page”).

When the page is full, a new page starts, and the subform that was going to be placed on the previous page is instead placed on the new page. It is important to note that when this is done, the page-full conditions are run again.

The subform size

If a subform is too big for the printing area, it will be truncated at the bottom (printing off the bottom of the page) before a new page is started, unless the “Reserve space for the current subform plus these subforms” option is ticked in the subform’s properties, under Page Full.

If the Reserve Space option is ticked, a new page will be started, and the subform will again be laid down on the page.

Now imagine these scenarios:

a) There is no room on the page for the subform, so a new page is started. The $POSITION field is low enough on the page so that there is still no room for the subform. So a new page is started, where the $POSITION field is again too low.
–> Infinite loop

This often happens when a $POSITION field in a JFMAIN has been moved too low for some reason. Bring the $POSITION field back to the top left corner of the printable area.

b) A large subform whose vertical size exceeds that of the page is called. A new page is started for the subform, but the page is still too small. So a new page is started, but the page is still too small.
–> Infinite loop

This often happens when either the $POSITION field is set too low, or the form design was made for the default PDF presentment target (which makes use of the whole page space), but the processing is done for a physical target (for example, a printer), where physical margins reduce the printable area. To correct this, set the default presentment target to the one with the most restrictive margins (these will show in the design area as dark gray), only allow items to be as tall as that space (remembering to also account for header and footer spaces), and adjust the $POSITION field accordingly.