Posts tagged programming

Changing application fonts for the ICR solution interface

Philomena Dolla

This blog post is part of the series on customizing the Adobe Integrated Content Review solution.

***

The applications shipped as part of the Integrated Content Review (ICR) Solution interface use certain default fonts to display text. You can customize your application and change the default font to suit your requirements. The CampaignPortal Flex application, for example, uses the MyriadPro font.

The default font files for the CampaignPortal project are located at campaign_portal\src\main\flex\assets\fonts. The fonts used by the application are defined in the style sheet file, icr.css.

To change the font, create new font files for the project and update the style sheet, icr.css, with the path to these new font files:

    1. In Flash Builder, open the CampaignPortal project in the Package Explorer view.
    2. Copy the font files that you want the application to use.
    3. Navigate to CampaignPortal > src > main > flex > assets > fonts.
    4. Right-click fonts and select Paste to paste the font files to the fonts directory.
    5. Navigate to CampaignPortal > src > main > flex > css > icr.css.
    6. Open the style sheet, icr.css, in the editor.
    7. Update the paths to the font files to point to the new font files.
    8. Rebuild and redeploy the package. See this blog post for more information on building and deploying.

For background information about setting up the ICR development environment, refer to this blog post.

——-
Original article at http://blogs.adobe.com/ADEPhelp/2011/09/changing-application-fonts-for-the-icr-solution-interface.html.

Changing skins and styles of ICR UX components

- Philomena Dolla

This blog post is part of the series on customizing the Adobe Integrated Content Review solution.

***

The look and feel of a UX component in the Integrated Content Review (ICR) solution can be customized to suit your business needs. You can choose to apply specific formatting and coloring styles, or even change the skin of the component.

The Asset Details pod of the solution interface, for example, can be modified to change the way the asset attributes are displayed. In the existing layout, the Asset Details pod displays all the attributes in a single pane. You need to scroll through the pane when the attributes extend beyond the display area.

To minimize scrolling and for better accessibility, you can change the single pane format to use an accordion menu such that the system defined attributes and the custom attributes are displayed in two different panels. (This example assumes that a user is allowed to define any number of custom attributes for any asset at design time.)

To modify the Asset Details pod:

  1. Make a copy of the existing AssetDetailsPodSkin skin (named CustomAssetDetailsPodSkin.mxml here) and edit it.
  2. Modify the CSS (icr.css) to use the new skin.
  3. Specify the styling changes as part of the code for the skin itself, or update the CSS and apply it to the skin. For example, you can center and underline the panel labels by defining a custom style in the CSS and then associating that style with the accordion header.
  4. Save and deploy the customized solution interface. See this blog post for more information on saving and deploying.
    The resulting Asset Details pod displays the attributes in two accordion panels: System Attributes and Custom Attributes.

You can view the CustomAssetDetailsPodSkin.mxml file here. See UX components for more information on UX components. For background information about setting up the ICR development environment, refer to this blog post.

 

——-
Original article at http://blogs.adobe.com/ADEPhelp/2011/09/changing-skins-and-styles-of-icr-ux-components.html.

ICR: Add custom asset attributes

This blog post is part of the series on customizing the Adobe Integrated Content Review solution.

***

Integrated Content Review lets you define custom asset attributes that are listed in the solution interface Asset Details pod as well as the Task Details area in the Adobe Creative Suite Task List Extension for ICR.

In this blog post, we’ll learn how to perform the following tasks:

  • Create custom attributes using Adobe CRXDE
  • Modify the appropriate orchestration in Workbench to ensure that custom attributes are displayed in the Task List extension
  • Test the newly added attributes

For detailed information about each of these tasks, refer to this PDF document (download).

Optional background reading

——-
Original article at http://blogs.adobe.com/ADEPhelp/2011/09/icr-add-custom-asset-attributes.html.

ICR: Update the campaign portal SWF without redeploying the package

This blog post is part of the series on customizing the Adobe Integrated Content Review solution.

***

If you’re working with just the Flex project that ships with Integrated Content Review, you may find it convenient to update the campaign portal SWF directly without redeploying the entire package (template-integratedcontentreview-pkg.zip).

You can generate the SWF in one of the following ways:

  • Clean the solution interface project from within Flash Builder:
    • In Flash Builder, select Project > Clean. The contents of the default project output folder, ICR_SOURCE\integratedcontentreview\[CampaignPortal]\bin-debug, are updated.
    • Rename the ICR.swf file in this folder to campaign_portal.swf.
  • Run build.xml:
    • Run the build.xml in the ICR_SOURCE\integratedcontentreview\[CampaignPortal] folder. The contents of this folder, including campaign_portal.swf, are updated.
Once you have the campaign_portal.swf file available, follow these steps to update it in CRX:
  1. Navigate to http://localhost:4502/crx/index.jsp and log in using admin credentials.
  2. Click Content Loader.
  3. Click Browse and select /content/icr.
  4. Click Choose File and select the campaign_portal.swf file that you just generated.
  5. Click Import.
For background information about setting up the ICR development environment, refer to this earlier blog post.

——-
Original article at http://blogs.adobe.com/ADEPhelp/2011/09/update-the-campaign-portal-swf-without-redeploying-the-package.html.

Setting up the ICR development environment

This is the first blog post in the series on customizing the Adobe Integrated Content Review solution.

***

The Integrated Content Review solution ships with a solution interface and building blocks that you can customize as per your organization’s requirements. Before you set out to customize these components, you must first set up your development environment. Setting up the ICR development environment involves the following broad steps:

  1. Set up prerequisites
  2. Locate the solution interface and required dependencies
  3. Understand available projects
  4. Set up available projects in Flash Builder
  5. Set up Java projects in Eclipse
  6. Build and deploy the solution interface

For detailed information about each of these steps, refer to this PDF document (download).

Watch this space for more ICR customization scenarios!

——-
Original article at http://blogs.adobe.com/ADEPhelp/2011/08/setting-up-the-icr-development-environment.html.

Extending LiveCycle ES 2.5 for Java Developers

The full courseware that Scott MacDonald, Gary Gilchrist and I delivered during MAX 2010 is now available online. Anyone may use this material as a self-paced tutorial to understand how Java developers can extend the native capabilities of Adobe LiveCycle ES.

The course is available as a ZIP file here (right-click, then choose "Save Target As…"):

http://www.web2open.org/courses/LCES4JavaDevs-CourseArchive.zip

This course covers the following topics:

Extending LiveCycle ES for Java Developers

TABLE OF CONTENTS
OVERVIEW
EXERCISE 1: UNDERSTANDING CUSTOM COMPONENTS

EXERCISE 2: DEVELOPING THE CUSTOM COMPONENT
Task 2‐1: Start Eclipse and create a new project
Task 2‐2: Add the required Java library files
Task 2‐3: Defining the service interface
Task 2‐4: Defining the service implementation
Task 2‐5: Defining the component XML file

EXERCISE 3: DEPLOYING YOUR COMPONENT
Task 3‐1: Package your component into a JAR file
Task 3‐2: Importing the component using Workbench ES2

EXERCISE 4: USING THE COMPONENT WITHIN A PROCESS
Task 4‐1: Create the EncryptManyDocuments/EncryptManyDocuments process and invoke it from Workbench.
Task 4‐2: Programmatically invoking the EncryptManyDocuments process.

Solution code is provided, along with extra notes on the code for the programmatic invocation.
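
The courseware's solution code is the authoritative reference. As a rough sketch of what invoking the process programmatically typically looks like with the LiveCycle ES Java client SDK, the fragment below assumes a local turnkey JBoss install, administrator credentials, and an input parameter named inDocuments; all of those values are placeholders that you would adjust to match your own process and server.

```java
import java.io.FileInputStream;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import com.adobe.idp.Document;
import com.adobe.idp.dsc.InvocationRequest;
import com.adobe.idp.dsc.InvocationResponse;
import com.adobe.idp.dsc.clientsdk.ServiceClient;
import com.adobe.idp.dsc.clientsdk.ServiceClientFactory;
import com.adobe.idp.dsc.clientsdk.ServiceClientFactoryProperties;

public class InvokeEncryptManyDocuments {
    public static void main(String[] args) throws Exception {
        // Connection settings for a local turnkey install; adjust the endpoint,
        // server type, and credentials for your environment.
        Properties props = new Properties();
        props.setProperty(ServiceClientFactoryProperties.DSC_DEFAULT_SOAP_ENDPOINT, "http://localhost:8080");
        props.setProperty(ServiceClientFactoryProperties.DSC_TRANSPORT_PROTOCOL,
                ServiceClientFactoryProperties.DSC_SOAP_PROTOCOL);
        props.setProperty(ServiceClientFactoryProperties.DSC_SERVER_TYPE, "JBoss");
        props.setProperty(ServiceClientFactoryProperties.DSC_CREDENTIAL_USERNAME, "administrator");
        props.setProperty(ServiceClientFactoryProperties.DSC_CREDENTIAL_PASSWORD, "password");

        ServiceClientFactory factory = ServiceClientFactory.createInstance(props);
        ServiceClient client = factory.getServiceClient();

        // Build the process input. The parameter name "inDocuments" is a placeholder;
        // use the input variable name defined on the process in Workbench.
        List<Document> docs = new ArrayList<Document>();
        docs.add(new Document(new FileInputStream("input.pdf")));
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("inDocuments", docs);

        // Invoke the EncryptManyDocuments/EncryptManyDocuments service synchronously.
        InvocationRequest request = factory.createInvocationRequest(
                "EncryptManyDocuments/EncryptManyDocuments", // service (process) name
                "invoke",                                    // operation name
                params,
                true);                                       // synchronous invocation
        InvocationResponse response = client.invoke(request);
        System.out.println("Output parameters: " + response.getOutputParameters());
    }
}
```

Compiling such a client requires the LiveCycle client SDK JAR files on the classpath, similar to the library setup described in Task 2-2.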

—-
Original article at http://technoracle.blogspot.com/2011/04/extending-livecycle-es-25-for-java.html.

The LCES Pet Store & Process Oriented Application Development!

Shone Sadler

This is one of three demos that I did at MAX 2008. Unfortunately, I did not make it through all the demos due to technical issues (i.e. I should have come earlier to test out the gear). Enough of the excuses though, hopefully people enjoyed what I could show and now here is the source ;-)

The primary purpose of this demo was to show a) a “traditional” enterprise app being built solely on top of LCES and b) the divergence from typical Data Oriented Applications that interact directly with the underlying DB to Process Oriented Applications that leverage long-lived processing to build a richer end-to-end experience.

Click HERE to download the source code.

Note the download is a zip file (LCESPetStore.zip) containing 3 files:

  1. PetStore.zip (My Flex Project) – This App is currently hardwired to talk to localhost.
  2. petstore-dsc.jar – The LiveCycle Data Management Services Assembler that Creates, Reads, Updates, and Deletes Pets from the DB, along with the Java source. This DSC also creates the underlying DB table when it is installed; however, the DDL is currently generated for MySQL only.
  3. PetStore.lca – The LiveCycle Application Archive that contains the Pet Verification Process and XFA Form used in the Application

The Architecture
Below is a slide of the overall architecture.

LCES PetStore Architecture

Note that only the highlighted boxes are complete in the demo (sorry, I didn't get to the rest ;-( ).
A brief description of each highlighted box:

  1. The LCES PetStore AIR application
  2. The Pet Verification Process – A long lived process that generates a form/workitem that is routed to the store clerk (Tony Blue)
  3. The Pet Detail Form – the one that is rendered to Tony Blue
  4. The User Service – An out of the box service used to make User Assignments as part of a process
  5. LiveCycle Workspace – An operational UI provided out of the box for users to manage workitems and participate in long-lived processes.
  6. The PetService – A Custom service that implements the CRUD operations necessary to manage Pets in the Database and to push them to clients via LiveCycle Data Management Services.
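
To give a rough idea of the CRUD side of such a service, here is a hypothetical, heavily simplified Java sketch. The class, method, and table names are invented for illustration, and the DSC packaging and the LiveCycle Data Management Services push to clients are omitted; the actual implementation ships in petstore-dsc.jar.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

// Hypothetical, simplified CRUD service for pets backed by JDBC.
public class PetService {

    private final DataSource dataSource;

    public PetService(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Create: insert a new pet row.
    public void createPet(String name, double price) throws SQLException {
        try (Connection c = dataSource.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "INSERT INTO pets (name, price) VALUES (?, ?)")) {
            ps.setString(1, name);
            ps.setDouble(2, price);
            ps.executeUpdate();
        }
    }

    // Read: look up a pet's price by name, or null if it does not exist.
    public Double readPrice(String name) throws SQLException {
        try (Connection c = dataSource.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "SELECT price FROM pets WHERE name = ?")) {
            ps.setString(1, name);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getDouble("price") : null;
            }
        }
    }

    // Update: change the price of an existing pet.
    public void updatePrice(String name, double price) throws SQLException {
        try (Connection c = dataSource.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "UPDATE pets SET price = ? WHERE name = ?")) {
            ps.setDouble(1, price);
            ps.setString(2, name);
            ps.executeUpdate();
        }
    }

    // Delete: remove a pet row.
    public void deletePet(String name) throws SQLException {
        try (Connection c = dataSource.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "DELETE FROM pets WHERE name = ?")) {
            ps.setString(1, name);
            ps.executeUpdate();
        }
    }
}
```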

For the purposes of this demo I decided to use Mate. I was originally motivated by the excellent presentation that I saw from Laura Arguello at the Atlanta Flash & Flex User Group back in September. This is my first time using Mate, so hopefully I did it some justice here. At MAX 2008 I laid out the following slides to show how MVC relates to LCES and how Mate relates to LCES, respectively.

MVC & LCES
Mate & LCES

Anyway, I have two more LCES demos to post over the weekend (the Zillow App and UDDI Browser), so keep an eye out!

—-
Original article at http://lostintentions.com/2008/11/21/the-lces-pet-store-process-oriented-application-development/.

LCES Pet Store Walkthrough

Shone Sadler

OK, so I already did a posting on the LCES Pet Store, but I hadn't learned how to do a screencast yet, so here it goes.

In this screencast I walk through the LCES PetStore application fictitiously selling my own high maintenance dog, Thor (Hey, we can all dream…).

You can see my previous/more detailed post at LCES Pet Store & Process Oriented Application Development.

To see Thor (and the LCES Pet Store) at his best see LCESPetStore at screencast.com

 

—-
Original article at http://lostintentions.com/2008/11/23/lces-pet-store-walkthrough/.

A Look Into Domain Specific Languages

Shone Sadler

Not too long ago I had the opportunity to hear Charles Simonyi present on the topic of Intentional Programming (IP). In his presentation he discussed many underlying objectives and concepts that I found similar to those we have in Business Process Management (BPM), the area where most of my experience lies. Reflecting the intentions of the developer (i.e. the business process) is the holy grail in terms of what we want to achieve in BPM, but where the “developer” is a domain expert rather than your traditional programmer. Unfortunately, the IP platform from Intentional Software, the company that Charles founded, was not yet publicly available. As a result, I recently took a look at several other projects out there with similar objectives and concepts, including Meta Programming System (MPS), Whole Platform, and XMF. Looking at these products naturally led me to think much more about DSLs, their architectures, and their role in enterprise application development. At this point I'll describe some of my high-level findings to set the stage for some later postings and list some references for those interested in the topic.

1. Domain Specific Languages

A Domain Specific Language (DSL) is a specialized language engineered with the goal of implementing solutions for a particular problem domain. This is in contrast to General Purpose Languages (GPL), which are aimed at solving any type of problem. The key value proposition for a DSL is that it abstracts away the underlying complexities that are unnecessary for the targeted developer(s) in implementing a solution.

2. Horizontal vs. Vertical Domains

A domain is a problem space with a set of interrelated concepts. Domains can be further classified as horizontal or vertical. In the context of DSLs, horizontal domains are those that are both technical and broad in nature, with concepts that apply across a large group of applications. Examples of horizontal DSLs include the following (Note: I am biased towards Adobe technologies ;-) ).

  • Flex: An open-source framework and DSL that targets the user interface domain by enabling Rich Internet Applications (RIA) that are portable across multiple browsers.
  • ColdFusion: A language for creating data-driven web sites.
  • Document Description XML (DDX): A declarative language used to describe the intended structure of one or more PDFs based on the assembly of one or more input documents.
  • LiveCycle Workflow: Similar to workflow languages such as BPEL, LC Workflow is an imperative language used for specifying the interaction of a process across one or more services.
  • SQL & DDL: Structured Query Language and Data Definition Language are standard languages used for querying and defining data structures, respectively, in relational databases.
  • Pixel Bender: A toolkit made up of a kernel language, a high-performance graphics programming language intended for image processing, and a graph language, an XML-based language for combining individual pixel-processing operations (kernels) into more complex filters.

Vertical domains on the other hand are more narrow by nature, pertain to an industry, and contain concepts that can only be re-used across a relatively small number of similarly typed applications. Good examples of vertical DSL(s) can be found in [3] and are listed below:

  • IP Telephony & Call Processing: A modeling language created to enable the easy specification of call processing services using telephony service concepts.
  • Insurance Products: A modeling language that enables insurance experts to specify the static structure of various insurance products from which a J2EE web portal can be generated.
  • Home Automation: A modeling language for controlling low-level sensors and actuators within a home environment.
  • Digital Watches: A wristwatch language for a fictitious digital watch manufacturer that enables building a product family of watches with varying software requirements.

Two important points with respect to horizontal vs. vertical DSL(s) are:

  • It's much easier to find examples of horizontal DSLs than vertical ones. This is in part because developers are the primary contributors to new languages, and creating DSLs is a technical activity in itself. It's often a natural next step to look towards DSLs after establishing a successful API and/or framework that itself provides a level of abstraction.
  • There is overlap between horizontal and vertical DSL(s) that must be managed. In his thesis Anders Hessellund describes the general problem of managing overlapping DSL(s) as the coordination problem [2].
Figure 1: Horizontal vs. Vertical DSLs

3. Architectures

In this section we group DSLs into three broad classifications based on architecture. The three classifications, Internal, External, and Language Workbench, were first termed by Martin Fowler [4][8].

Internal DSL(s)

In our spoken languages, experts typically do not invent entirely new languages to support their domain. Similarly, many argue that DSLs should be built on top of an existing language so that their usage can be embedded in the host environment. Rather than remain true to the original syntax of the host programming language, DSL designers attempt to formalize a syntax or API style within the host language that best expresses the domain of the language.

Figure 2: Internal DSL Architecture

The syntax of a host language largely constrains how expressive an internal DSL can be. Languages with a rigid syntax structure such as Java or C# have difficulty supporting expressive DSLs, whereas flexible (and dynamic) languages such as Ruby, Python, Groovy, and Boo are designed for such support. Projects such as JMock have shown that even syntactically rigid languages such as Java can still provide fairly expressive DSLs by implementing method chaining and builder patterns to give the illusion of a more natural language [5]. Fowler refers to DSLs that employ such patterns to work within the confines of a rigid syntax as fluent interfaces [6].
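
To make the fluent-interface idea concrete, here is a small, purely illustrative Java sketch; the class and method names are invented and are not drawn from JMock or any Adobe API. The chained builder methods are designed so a call site reads close to a domain sentence.

```java
// A toy "document assembly" builder whose chained methods read like a sentence.
public class AssemblyDsl {

    public static Assembly assemble(String outputName) {
        return new Assembly(outputName);
    }

    public static class Assembly {
        private final String outputName;
        private final java.util.List<String> sources = new java.util.ArrayList<String>();
        private boolean numbered;

        private Assembly(String outputName) {
            this.outputName = outputName;
        }

        // Each step returns 'this' so calls can be chained fluently.
        public Assembly from(String source) {
            sources.add(source);
            return this;
        }

        public Assembly withPageNumbers() {
            numbered = true;
            return this;
        }

        public String describe() {
            return "Assemble " + outputName + " from " + sources
                    + (numbered ? " with page numbers" : "");
        }
    }

    public static void main(String[] args) {
        // Reads close to the domain vocabulary: assemble X from A and B with page numbers.
        String plan = assemble("report.pdf")
                .from("cover.pdf")
                .from("body.pdf")
                .withPageNumbers()
                .describe();
        System.out.println(plan);
    }
}
```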

External DSL(s)

Unlike internal DSLs, external DSLs exist outside the confines of an existing language. Examples of such languages are SQL, XPath, BPEL, and Regular Expressions. Building an external DSL can be a complex and time-consuming endeavour. An external DSL developer is essentially starting from a blank slate, and while that may empower them to an extent, it also means that they must handle everything themselves.

So what exactly does “everything” entail? Similar to implementing a GPL, an external DSL developer must implement a multistage pipeline that analyzes or manipulates a textual input stream, i.e. the language's concrete syntax. The pipeline gradually converts the concrete syntax, as a stream of one or more input tokens, into an internal data structure, the Intermediate Representation (IR).

Figure 3: The multi-stage pipeline of an external DSL [7]

The architecture of an external DSL can still vary significantly based on its runtime requirements. The four broad subsets of the multi-stage pipeline above, as described in [7], are:

  • Reader: readers build internal data structures from one or more input streams. Examples include configuration file readers, program analysis tools, and class file loaders.
  • Generator: generators walk an internal data structure (e.g. Abstract Syntax Tree) and emit output. Examples include object-to-relational database mapping tools, object serializers, source code generators, and web page generators.
  • Translator: A translator reads text or binary input and emits output conforming to the same or a different language. It is essentially a combined reader and generator. Examples include translators from extinct programming languages to modern languages, wiki to HTML translators, refactorers, pretty printers, and macro preprocessors. Some translators, such as assemblers and compilers, are so common they warrant their own sub-categories.
  • Interpreter: An interpreter reads, decodes, and executes instructions. They range from simple calculators up to full blown programming language implementations such as for Ruby, JavaScript, and Python.
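
As a minimal illustration of the interpreter end of this spectrum, the following Java sketch reads a tiny, invented calculator-style concrete syntax and evaluates it with a recursive-descent parser. For brevity it evaluates directly while parsing instead of building an explicit IR, but the token-by-token reading of the concrete syntax is the same first stage described above.

```java
// A minimal reader/interpreter for an invented calculator-style DSL:
// numbers, + - * /, and parentheses.
public class CalcInterpreter {
    private final String src;
    private int pos;

    public CalcInterpreter(String src) {
        this.src = src;
    }

    public static void main(String[] args) {
        // "(2 + 3) * 4" evaluates to 20.0
        System.out.println(new CalcInterpreter("(2 + 3) * 4").parseExpression());
    }

    // expression := term (('+' | '-') term)*
    double parseExpression() {
        double value = parseTerm();
        while (true) {
            if (consume('+')) value += parseTerm();
            else if (consume('-')) value -= parseTerm();
            else return value;
        }
    }

    // term := factor (('*' | '/') factor)*
    private double parseTerm() {
        double value = parseFactor();
        while (true) {
            if (consume('*')) value *= parseFactor();
            else if (consume('/')) value /= parseFactor();
            else return value;
        }
    }

    // factor := number | '(' expression ')'
    private double parseFactor() {
        if (consume('(')) {
            double value = parseExpression();
            consume(')');
            return value;
        }
        int start = pos;
        while (pos < src.length()
                && (Character.isDigit(src.charAt(pos)) || src.charAt(pos) == '.')) pos++;
        return Double.parseDouble(src.substring(start, pos));
    }

    // Skips whitespace and consumes the expected character if it is next.
    private boolean consume(char expected) {
        while (pos < src.length() && Character.isWhitespace(src.charAt(pos))) pos++;
        if (pos < src.length() && src.charAt(pos) == expected) {
            pos++;
            return true;
        }
        return false;
    }
}
```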

Language Workbench & Platform

In [8] Fowler describes the benefits of DSLs versus the cost of building the necessary tools to support them effectively. While implementing internal rather than external DSLs can reduce the overall tool cost, the constraints on the resulting DSL can greatly reduce the benefits, particularly if you are limited to statically typed languages that traditionally have a more rigid syntax. An external DSL gives you the most potential to realize benefits, but comes at a greater cost of designing and implementing the language and its respective tooling. Language Workbenches are a natural evolution that provide both the power and flexibility of external DSLs as well as the infrastructure (IDE, frameworks, languages, etc.) to greatly facilitate the implementation of the necessary tooling for the DSL. In the picture below we expand the concept of a Workbench to a Language Platform, which includes not only the Workbench but also the runtime for the implemented languages.

Figure 4: Language Workbench & Platform

The concept of a Language Workbench (or Platform) is relatively new. Much of the initial theoretical work in this area was started by Charles Simonyi with the concept of Intentional Programming, which he began at Microsoft [10] and continues through his later company, Intentional Software. For purposes of this posting we will discuss two important aspects of Language Workbenches shown in the illustration above.

Multi-level Projection Editors

A projection editor is an editor for the concrete syntax of a language, whether that syntax is textual or graphical. These editors are tightly bound to their respective languages and offer the assistance capabilities, such as code completion, that we now expect from modern editors. Unlike traditional free-format text editors, projection editors can:

  • work directly with an underlying abstract representation (i.e. an Abstract Syntax Tree)
  • bypass the scanning and parsing stages of the compiler pipeline
  • direct users to enter only valid grammatical structures
  • be textual or graphical
  • offer multi-level representations that abstract out various views of the program for various users, enabling multi-level customizations of the end solution [9]

Active Libraries

It's long been realized that compilers and interpreters are typically unable to take advantage of domain-specific context encoded within the source code of a GPL. Moreover, as discussed previously, DSLs, both internal and external, often lack tooling support or acquire it at significant cost. Active Libraries enable DSL developers to overcome the overall lack of domain-specific tooling by taking an active role throughout the language tool chain, from the IDE, through the compiler, to the runtime. There are three types/levels of Active Libraries [11]:

  • Compiler Extensions: libraries that extend a compiler by providing domain-specific abstractions with automatic means of producing optimized target code. They may compose and specialize algorithms, automatically tune code for a target machine and instrument code. Czarnecki et al provided Blitz++ and the Generative Matrix Computation Library as two examples of libraries that extended a compiler with domain specific optimizations.
  • Domain-Specific Tool Support: libraries that extend the programming environment by providing domain-specific debugging support, domain-specific profiling, code analysis capabilities, and so on. The interaction between Tuning and Analysis Utilities (TAU), a set of tools for analyzing the performance of C, C++, Fortran, and Java programs, and Blitz++ is a good example.
  • Extended Meta-programming: libraries that contain meta-code which can be executed to compile, optimize, adapt, debug, analyze, visualize, and edit the domain-specific abstractions. Czarnecki et al [11] discuss active libraries generating different code depending on the deployment context providing the example that they may query the hardware and operating system about their architecture.

For my purposes I was primarily interested in the domain-specific tool support afforded by active libraries (i.e. #2 above). In the case of Language Workbenches, we are looking for domain-specific support for rendering, editing, refactoring, reduction (i.e. compiling), debugging, and versioning capabilities.

4. Role of the Language Engineer

While the concepts of DSLs and Unix “small languages” have been around for some time, the focus has been on slowly evolving General Purpose Languages (GPL). As a result, the industry currently lacks the developers, patterns and best practices, and tooling that would facilitate the design, implementation, and evolution of languages on a larger scale. Language Engineering is a more complex activity than typical software and system engineering. In order for the benefits of DSLs to be fully realized, a new discipline needs to evolve that focuses on the engineering of domain-specific languages. Similar to others [18], this posting asserts that there will be a paradigm shift that demands the creation of a new role, the Language Engineer, in the software development process. Furthermore, the creation of this role will significantly reduce the workload of application developers in building a given solution, in a similar fashion to the introduction of 3rd generation languages and subsequently platforms (e.g. Java/J2EE and .Net).

Figure 5: Developer Roles

While we would not expect to see the dramatic increase in productivity that we saw with the introduction of 3rd generation languages, we would still expect the productivity increases to be substantial once effective tooling is in place for both the Language Engineer and the Application Developer as the consumer of any given DSL.

5. Benefits & Costs

In this section we review & summarize many of the costs and benefits associated with creating and using domain specific languages.

Benefits of DSL(s):

  • Domain Knowledge: Domain-specific functionality is captured in a concrete form that is more easily and readily available to both application developers, who are direct language users, and domain experts. The benefit of capturing domain knowledge can be realized throughout an application's life cycle.
  • Domain Expert Involvement: DSLs should be designed to focus on the relevant details of the domain while abstracting away the irrelevant details, making them much more accessible to domain-level experts with varying levels of programming expertise. In cases where there is already an existing notation understood by the domain expert, the DSL should be designed with a concrete form that matches that notation. For example, for users familiar with the Business Process Modeling Notation (BPMN), a related Workflow DSL should enable users to implement workflow solutions using familiar BPMN constructs.
  • Expressiveness: DSL’s that are tailored to a specific domain can more concisely & precisely represent the formalisms for the specified domain. Furthermore, DSLs tend to be declarative in nature, enabling the developer to focus on the “what” while eliminating the need for them to understand or over specify their program with the “how.”
  • Compilation & Runtime Optimizations: As discussed with Active Libraries, the additional context afforded to DSLs can provide for domain-specific optimizations at compile time and/or runtime. Adobe's Pixel Bender language is an example of a language designed to provide optimizations by leveraging parallelization on multi-core machines. Similarly, there are many use cases in the scientific community where DSLs have been leveraged for abstraction and optimization to handle sparse arrays, automatic differentiation, interval arithmetic, adaptive mesh refinement, etc. [1]. The expressiveness of DSLs not only eliminates the need for developers to over-specify their code, but consequently provides more opportunity for the infrastructure to intelligently optimize where and when possible.
  • Re-Use: Due to the additional costs of designing and supporting DSLs mentioned previously, DSLs should not be created for isolated solutions. However, in cases where a problem in a particular domain (horizontal or vertical) recurs, there is ample benefit to be gained by re-using the expressiveness of a DSL and the other benefits that follow.
  • Reliability: Like libraries, once tested, DSLs provide a higher level of reliability when re-used across projects.
  • Toolability: By associating a concrete and abstract syntax with a language, as is the case with external DSLs or DSLs defined in a Language Workbench, tools are better able to analyze the language and provide guidance through assistance or verification tooling.

Costs of DSLs:

  • Tool Support: One of the primary benefits of creating DSLs vs. application libraries is the ability to add tooling based on the concrete syntax of the DSL; however, today such tooling does not come for free and must be built as part of the cost of creating the DSL. Tooling features include code guidance/assistance, refactoring, and debugging.
  • Integration: The successful development of most applications today requires the convergence of multiple views. Business analysts, domain experts, interaction designers, database experts, and developers with different types of expertise all take part in the process of building such applications. Their respective work products must be managed, aligned and integrated to produce a running system [2].
  • Training Cost: As opposed to mainstream languages, DSL users will likely not have an established specification to look to for guidance. However, this cost can be offset by the degree to which the DSL narrowly reflects and abstracts the domain concepts.
  • Design Experience: DSL platforms are not yet widely adopted in the software industry. As a result, there is an evident lack of experience in the field around language engineering, language design patterns, prescriptive guidelines, mentors, and/or research.

Conclusion

Anyway, I hope to have some more postings related to DSLs in the not too distant future. At this point I am attempting to pull together a lot of good ideas from various areas. Which reminds me, here is the list of docs referenced throughout this posting.

References

[1] T. Veldhuizen and D. Gannon, “Active Libraries: Rethinking the role of compilers and libraries,” page 2

[2] A. Hessellund. “Domain-Specific Multimodeling,” Thesis. IT University of Copenhagen, Denmark. page 15.

[3] S. Kelly & J. Tolvanen, “Domain-Specific Modeling,” John Wiley & Sons, Inc. 2008, Hoboken, New Jersey.

[4] M. Fowler. “Domain Specific Language,” http://www.martinfowler.com/bliki/DomainSpecificLanguage.html

[5] S. Freeman, N. Pryce. “Evolving an Embedded Domain-Specific Language in Java,” OOPSLA, Oct 22-26, 2006, Portland, Oregon, USA.

[6] M. Fowler. “Fluent Interface,” http://www.martinfowler.com/bliki/FluentInterface.html

[7] T. Parr, “Language Design Patterns, Techniques for implementing Domain Specific Languages, ” Pragmatic Bookshelf, 2009. pages 14-16.

[8] M. Fowler. “Language Workbench,” http://www.martinfowler.com/articles/languageWorkbench.html

[9] K. Czarnecki, M. Antokiewicz, and C. Kim. “Multi-Level Customization in Application Engineering,” Communications of the ACM, Vol I, December 2006. pages 61-65

[10] C. Simonyi. “Intentional Programming – Innovation in the Legacy Age, ” Presented at IFIP WG 2.1 meeting. Jun 4, 1996

[11] K. Czarnecki, U. Eisenecker, R. Gluck, D. Vandevoorde, and T. Veldhuizen. “Generative programming and active libraries (extended abstract). ” In M. Jazayeri, D. Musser, and R. Loos, editors, Generic Programming ’98. Proceedings, volume 1766 of Lecture Notes in Computer Science, pages 25–39. Springer-Verlag, 2000.

[12] Business Process Modeling Notation, http://www.bpmn.org/

[13] Meta Programming System, http://www.jetbrains.com/mps/index.html

[14] T. Clark, P. Sammut, J. Willans. “Applied Metamodeling A Foundation For Language Development, ” Ceteva 2008.

[15] R. Solmi. “Whole Platform, ” Thesis for Department of Computer Science at University of Bologna. Italy. March 2005

[16] B. Langlois, C. Jitia, E. Jouenne. “DSL Classification,” In 7th OOPSLA Workshop on Domain-Specific Modeling, 2007.

[17] OMG, Meta Object Facility (MOF) 2.0 Core Proposal, ad/2003-04-07, 7 Apr 2003.

[18] A. Kleppe. “Software Language Engineering.” Addison-Wesley Professional. 12/09/2008. Page 19.

—-
Original article at http://lostintentions.com/2009/08/15/a-look-into-domain-specific-languages/.

LiveCycle ES2's application model best practice – Modularity

Darren Melanson

In LiveCycle ES2 the development model has changed from the repository-centric approach that was used in previous versions. The new development model is application-centric rather than relying on loose assets in the repository. When developing applications it is critical that proper modular design patterns are used. There are three primary factors that should be considered:

  • Number of processes in the application
  • Physical size of the application (the number of bytes)
  • Actions executed by the application’s process during deployment

There is no set limit for the number of assets within an application; however, larger applications could lead to issues during deployment due to transactional constraints. LiveCycle was tested with significantly sized applications; however, a point will be reached where deployments will suffer from performance issues if applications become too large or complex. In addition, it should be noted that manageability and maintenance of an application will become more difficult as the number of assets in an application grows. Also note that the use of slow-performing systems and network latency will exacerbate deployment problems with large applications.

Modularity should be a primary development objective when designing applications. Putting all assets into a single monolithic application does not take advantage of the application model and the modularity that it allows. Modular development will allow for greater control over versioning, better performance, and much more flexibility in application design. Modularity does not mean breaking up application assets simply to reduce the number of items in an application. Assets should be grouped so they target a specific use case or area of functionality, or because they are maintained in a similar way or by a similar group of developers.

Consider the case of Java and creating applications; it is possible to put all of your Java classes into one JAR file. You could even take third-party libraries, unjar them, and then rejar them into your application's JAR file. If you took this approach, you would quickly reach a point where your application is unmaintainable. Application development in LiveCycle should be thought of in a similar fashion, taking advantage of the capabilities that are present to provide modularity.

How to separate existing applications will depend on the interrelationship of the assets held inside the application. In most cases, when an asset is moved to another application, the other assets that depend on it will need to be modified to reflect the new location of the asset. For example, a process that uses a form would need to be modified to use the new form asset in its new location.

Ultimately, as there are no set guidelines for the actual modularity of any given application, the developer will need to exercise reasonable common sense when developing in the ES2 application model. Having said that, if one begins to see dozens of processes or assets accumulating in an application, this should be a signal to review the application's modularity. Another warning sign is if an application deployment begins to take a prolonged time (over a few minutes) or actually times out.

For more information on the LiveCycle ES2 application model, please see:

Craig Randall’s blog post on the subject:

http://craigrandall.net/archives/2010/05/livecycle-es2-app-model/

 

——-
Original article at http://blogs.adobe.com/livecycle/2011/03/livecycle-es2s-application-model-best-practice-modularity.html.
