Posts in Category "Data Services"

Server-side typing for Flash Builder data access tools (PHP)

Flash Builder 4 introduces a rich set of tools for configuring access to data services. The Flash Builder documentation and tutorials can help you get started with writing PHP and ColdFusion services as well as configuring your applications to access these services.

However, the Flash Builder documentation examples and tutorials use client-side typing. You can also write PHP and ColdFusion services that implement server-side typing. Server-side typing simplifies the workflow in Flash Builder, and also provides server code that is easier to understand and maintain.

Client-side typing

Flash Builder uses client-side typing for services that do not specify data types for arguments or return values. To implement access to these services, Flash Builder needs to know the data types of the arguments and return values. The Flash Builder tools introspect the services and prompt you to configure the necessary custom data types.

Here are links to the Flash Builder tutorials that use client-side typing:

This article provides documentation and an example for server-side typing in PHP. Later, I’ll provide a separate example for server-side typing in ColdFusion.

Server-side typing

Both PHP and ColdFusion allow you to define custom data types in server code. Flash Builder recognizes the custom data type during introspection of services. This simplifies the access to the service – you do not have to walk through wizard screens to configure custom data types.

This blog post contains an example for PHP that shows how to implement server-side typing. This basic application lists employees from a database. It also includes an input form to add new employees. This blog post contains the full source listing for both the client and server, plus a mini-tutorial.

[Screenshot: StrongTypePHPApp.jpg]


LiveCycle Data Services 3 and doc available

LiveCycle Data Services ES2 version 3 is now available. Download the free developer edition.

LiveCycle Data Services documentation is available online:

* Using LiveCycle Data Services HTML | PDF
* Application Modeling Technology Reference HTML | PDF
* ActionScript Language Reference HTML
* Installing LiveCycle Data Services HTML
* Javadoc HTML
* Release Notes HTML
* Quick Starts HTML

LiveCycle Data Services 3 Beta Available

LiveCycle Data Services 3 Beta 1 is now available on Adobe Labs.

The new model-driven development features in LiveCycle Data Services 3 offer a huge leap in productivity and ease-of-use for end-to-end applications. You start an application by creating a data model (a simple XML file) in the new “Modeler” editor that plugs into Flash Builder. From that model, you automatically generate data access logic on the server and Flex client code for working with the server code.

You can even generate much of a model by dragging existing SQL database tables into the Modeler editor. When you save the model, client code is automatically generated. When you deploy the model to the server, a fully functional Data Management Service destination is automatically generated on the LiveCycle Data Services server. You can support even the most advanced Data Management Service features just by creating and deploying a model.

Using Flash Builder with LiveCycle Data Services, you can now build simple or complex data-driven applications without writing any server-side code or configuration files. You can also take full advantage of the new Flash Builder 4 features for building the client side of data-driven applications.

We would love to get your feedback on this release and the documentation. To learn more:

Calling remoting destinations from Flash or Java applications

The legacy NetConnection API of Flash Player provides a way to call remoting destinations from a standard (non-Flex) Flash application or from ActionScript in a Flex application if desired. The new Java AMF Client in BlazeDS gives you a Java API patterned on the NetConnection API but for calling remoting destinations from a Java application. You can use either of these APIs with BlazeDS, LiveCycle Data Services, or third-party remoting implementations.

Call a remoting destination from a Flash application

You can use the Flash Player flash.net.NetConnection API to call a BlazeDS remoting destination from a Flash application. You use the NetConnection.connect() method to connect to an endpoint and the NetConnection.call() method to call the service.
The following MXML code example shows this legacy way of making remoting object calls with NetConnection instead of RemoteObject:

<?xml version="1.0"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" width="100%" height="100%"
    creationComplete="creationCompleteHandler();">

    <mx:Panel id="mainPanel" height="100%" width="100%">
        <mx:HBox>
            <mx:Label text="Enter a text for the server to echo"/>
            <mx:TextInput id="ti" text="Hello World!"/>
            <mx:Button label="Send" click="echo()"/>
            <mx:Button label="Clear" click='ta.text = ""'/>
        </mx:HBox>
        <mx:TextArea id="ta" width="100%" height="100%"/>
    </mx:Panel>

    <mx:Script>
        <![CDATA[
            import flash.net.NetConnection;
            import flash.net.ObjectEncoding;
            import flash.net.Responder;

            private var nc:NetConnection;

            // Create the NetConnection and connect to the AMF endpoint.
            private function creationCompleteHandler():void
            {
                nc = new NetConnection();
                nc.objectEncoding = ObjectEncoding.AMF0;
                nc.connect("http://[server]:[port]/yourapp/messagebroker/amf");
            }

            // Call the echo operation on the remoting_AMF destination.
            private function echo():void
            {
                nc.call("remoting_AMF.echo", new Responder(resultHandler, faultHandler), ti.text);
            }

            private function resultHandler(result:Object):void
            {
                ta.text += "Server responded: " + result + "\n";
            }

            private function faultHandler(fault:Object):void
            {
                ta.text += "Received fault: " + fault + "\n";
            }
        ]]>
    </mx:Script>
</mx:Application>

Call a remoting destination from a Java application

The Java AMF Client is a new Java client API in the BlazeDS flex-messaging-core.jar file that makes it simple to work with remoting destinations from a Java application. The Java AMF Client is similar to the Flash Player flash.net.NetConnection API, but uses typical Java coding style rather than ActionScript coding style.

The Java AMF Client classes are in the flex.messaging.io.amf.client package in the flex-messaging-core.jar file. The primary class of the Java AMF Client is the AMFConnection class. You connect to remote URLs with the AMFConnection.connect() method and call the service with the AMFConnection.call() method. You catch ClientStatusException and ServerStatusException exceptions when errors occur.
Here’s a simple example of how you can use AMFConnection to call a Remoting Service destination from a method in a Java class:

public void callRemoting()
{
    // Create the AMF connection.
    AMFConnection amfConnection = new AMFConnection();

    // Connect to the remote URL.
    String url = "http://[server]:[port]/yourapp/messagebroker/amf";
    try
    {
        amfConnection.connect(url);
    }
    catch (ClientStatusException cse)
    {
        System.out.println(cse);
        return;
    }

    // Make a remoting call and retrieve the result.
    try
    {
        Object result = amfConnection.call("remoting_AMF.echo", "echo me1");
        System.out.println("Server responded: " + result);
    }
    catch (ClientStatusException cse)
    {
        System.out.println(cse);
    }
    catch (ServerStatusException sse)
    {
        System.out.println(sse);
    }

    // Close the connection.
    amfConnection.close();
}

The Java AMF Client automatically handles cookies similarly to the way in which web browsers do, so there is no need for custom cookie handling.

LiveCycle Data Services ES 2.6 documentation

We are happy to announce that LiveCycle Data Services ES 2.6 was just released and includes many documentation improvements. Much of the documentation has been revised and reorganized, and there are completely new sections on:
Getting Started
- Introduction
- Building and deploying

Architecture
- General architecture
- Channels and endpoints
- Managing session data

You can get the documentation here:
http://www.adobe.com/support/documentation/en/livecycledataservices/

The main product page is here:
http://www.adobe.com/products/livecycle/dataservices/

Thanks go out to the LiveCycle Data Services development and quality engineering teams, who put a great effort into shaping and reviewing this content. Special thanks go to Mete Atamel, Seth Hodgson, Ed Solovey, and Jeff Vroom for their contributions.

Master-Detail Flex Application

I’m an Adobe writer assigned primarily to Flex Builder and the Flex SDK. I joined Adobe in October of 2007 and have spent the first few months learning and using Flex and Flex Builder.

I’ve recently completed my first Flex application, and I’m using this blog to write about my learning experience and to describe some of the concepts that make the application work. Actually, it consists of two applications that work together: vRadio and RadioLoginDB.

These applications illustrate how to use Flex to create a master-detail application that accesses data from a PHP server, and also incorporates PHP sessions.

At the end of this posting is a list of documentation sources I used to learn how to create the applications. I also have links to the source files.

vRadio and RadioLoginDB applications

I have always been a fan of Community Radio, and in the past I’ve Googled “Community Radio” to find new stations to listen to over the web. I created the vRadio application to provide a Flex alternative that lists Community and Talk radio stations, providing details about each station including station name and location, station graphic, and clickable links to open the station. RadioLoginDB is a CRUD application to update the database of radio stations used by vRadio.

vRadio is a master-detail application, and it is typical of many kinds of applications that present stored data in a variety of presentation formats and also provide forms for updating the data.

What is interesting about the vRadio application is the way Flex handles XML data using E4X. The application takes an XML feed and displays the XML data in a tree view. When the user clicks a node in the tree, details for that station are displayed.


Measuring Message Processing Performance in BlazeDS

One place to examine application performance is in the message processing part of the application. To help you gather this performance information in BlazeDS, you can enable the gathering of message timing and sizing data.

When enabled, information about message size, server processing time, and network travel time is available to the client that pushed a message to the server, to a client that received a pushed message from the server, and to a client that received an acknowledge message from the server in response to a pushed message. A subset of this information is also available on the server.
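As a minimal sketch, you enable this data gathering per channel in services-config.xml through the record-message-times and record-message-sizes properties described in the chapter; the channel id and URL below are placeholders, not values from this post:

<channel-definition id="my-amf" class="mx.messaging.channels.AMFChannel">
    <endpoint url="http://[server]:[port]/yourapp/messagebroker/amf" class="flex.messaging.endpoints.AMFEndpoint"/>
    <properties>
        <!-- Record server processing time and network travel time for each message. -->
        <record-message-times>true</record-message-times>
        <!-- Record message sizes; this adds some measurement overhead. -->
        <record-message-sizes>true</record-message-sizes>
    </properties>
</channel-definition>

The chapter also describes how to read the recorded values on the Flex client (through the MessagePerformanceUtils class) and what subset is available on the server.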

Download the new chapter’s PDF: Measuring Message Processing Performance

BlazeDS Beta 1 Documentation

Adobe has made a very exciting announcement: a new open source data services project called BlazeDS. BlazeDS Beta 1 is now available on Adobe Labs.

BlazeDS contains the latest versions of the Message Service, Remoting Service, and Proxy Service that were previously only available as part of Adobe® LiveCycle® Data Services ES. BlazeDS is a server-based Java remoting and messaging technology that lets developers easily connect to back-end distributed data and push data in real time to Adobe Flex™ and Adobe AIR™ applications.

BlazeDS usage documentation is available in HTML format on LiveDocs and as a PDF file:

BlazeDS ActionScript and Java reference documentation is available in these ZIP files:

Information about setting up a BlazeDS project in Flex Builder 3 is available here:

http://learn.adobe.com/wiki/display/Flex/Using+Flex+Builder+with+your+J2EE+server

Key features and benefits of BlazeDS are highlighted in the release notes. A great way to get started quickly with BlazeDS is the Test Drive application. The Test Drive is one of the sample applications included in the BlazeDS installation.

Real-Time Messaging with LiveCycle Data Services ES

This article describes the variety of configurations and the considerations for how to use each mechanism. The following members of the LiveCycle Data Services development and quality engineering teams contributed to this content: Jeff Vroom, Seth Hodgson, and Kumaran Nallore.

LiveCycle Data Services ES supports a variety of deployment options that offer real-time messaging from Flex clients to Java 2 Enterprise Edition (J2EE) servers. Although efficient real-time messaging is difficult when the client and the application server are both behind firewalls, LiveCycle Data Services ES attempts to maximize the real-time experience while keeping the load on server resources manageable.

The following issues prevent a simple socket from being used to create a real-time connection between client and server in a typical deployment of a rich Internet application:

  • The client’s only access to the Internet is through a proxy server.
  • The application server on which LiveCycle Data Services ES is installed can only be accessed from behind the web tier in the IT infrastructure.

LiveCycle Data Services ES offers a few approaches for creating efficient real-time connections in these environments, but in the worst case it falls back to simple polling. The main disadvantages of polling are increased overhead on both client and server machines and reduced response times for pushed messages.

One design goal for LiveCycle Data Services ES is to create an environment where J2EE code can be written in a channel-independent manner. Your back-end server code can be used either in a traditional J2EE context or with protocols that offer scalable real-time connectivity not currently available in the J2EE platform. For example, LiveCycle Data Services ES provides the FlexSession, which mirrors the functionality of the HttpSession for both the Flash Player RTMP (Real Time Messaging Protocol) channels and the standard HTTP protocol that J2EE supports.

Clients are configured with a list of channels that they can use to connect to the server. A client attempts to use the first channel in the list; if that fails, it tries the next channel, and so on, until it finds one it can connect to. You can use this mechanism to provide different protocol options, so that a client first tries to connect using RTMP and, if that fails, falls back to HTTP. You could also use it to provide redundant server infrastructures for high reliability.
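As an illustration of such an ordered channel list, a destination's channel configuration in the service configuration file (for example, messaging-config.xml) might look roughly like the following sketch; the destination and channel ids are placeholders, not values from this article:

<destination id="chat">
    <channels>
        <!-- Tried first: real-time RTMP channel. -->
        <channel ref="my-rtmp"/>
        <!-- Fallback: AMF over HTTP with polling. -->
        <channel ref="my-polling-amf"/>
    </channels>
</destination>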

RTMP channels

The LiveCycle Data Services ES Real Time Messaging Protocol (RTMP) server is embedded in the J2EE application server process. RTMP is a protocol that is primarily used to stream data, audio, and video over the Internet to Flash Player clients. RTMP maintains a persistent connection with an endpoint and allows real-time communication. The protocol is a container for data packets that can be in Adobe Action Message Format (AMF) or raw audio or video data.

The server uses application-server-specific extensions when necessary, so that RTMP requests are handled with managed J2EE threads that support the use of the Java Transaction API (JTA) transactions. This lets RTMP requests use your existing J2EE service code.

The LiveCycle Data Services ES RTMP server is implemented using Java NIO facilities, which allows it to decouple threads from connections. This approach is very scalable when dealing with idle connections. A single RTMP server can support 10,000 or more simultaneous RTMP connections as long as the load generated by those clients does not exceed the server’s computational capacity.

The RTMP server supports a session that mirrors HTTP session functionality. It does not, however, support failover the way many J2EE application servers do when an application server process dies. This is often not required in Flash applications because Flash Player itself can conveniently store state, which makes seamless failover possible in many cases.

An RTMP connection can be made in one of three ways, depending on the network setup of the clients:

  1. Direct socket connection.
  2. Tunnelled socket connection through a proxy server using the "connect" method.
  3. Active, adaptive polling using the browser’s HTTP implementation. This mechanism does place additional computational load on the RTMP server and so reduces the scalability of the deployment. A given server can handle roughly 500-1000 poll requests per second, so you should ensure the number of polling clients does not exceed the number that a given server can handle comfortably.

RTMP Direct socket connection

RTMP in its most direct form uses a simple TCP socket made from the client machine directly to the RTMP server.

RTMPT Tunnelling with Proxy Server Connect

This acts very much like a direct connection but uses the proxy server’s support for the HTTP CONNECT method, if available. If CONNECT is not supported, the channel falls back to using HTTP requests with polling.

RTMPT Tunnelling with HTTP requests

In this mode, the client uses an adaptive polling mechanism to implement bidirectional communication between the browser and the server. This mechanism is not configurable and is designed to use at most one connection between the client and the server at a time, to ensure that Flash Player does not consume all of the connections the browser allows to one server (Internet Explorer imposes a limit of two simultaneous connections to a server that provides HTTP 1.1 support).

The client issues an HTTP POST to the server with any messages that the client needs to send. The server holds the connection until it has a message to push to that client or until its wait time expires; in either case, the response is then written to the socket. The client immediately issues another POST if it has additional messages to send. If not, it waits for its poll time to expire and then polls again.

The server gives the client the next interval to use as its poll time. If no messages have come in for that client since the last poll, the server doubles the previous interval. The server keeps doubling the poll time until it hits a limit of four seconds. At that point, the client polls at a four-second interval until a message arrives for that client, at which point the interval is reduced to a very short timeout.

Similarly, the server doubles the wait time each time it receives a poll request that does not include any client-to-server messages. Once the wait time reaches the four-second limit, the server stops increasing it.

Connecting RTMP from the web tier to the application server tier

Currently all RTMP connections must be made from the browser, or the browser’s proxy server, directly to the RTMP server. Because the RTMP server is embedded in the application server tier with access to the database, in some server environments it is not easy to expose it directly to web clients. There are a few ways to address this problem in your configuration:

  1. You can put a load balancer that is exposed to web clients and that implements a TCP pass-through mode to the LiveCycle Data Services ES server.
  2. You can put a reverse proxy server on the web tier that supports the HTTP CONNECT method so it can proxy connections from the client to the RTMP server. This provides real-time connection support while passing through the web tier. The proxy server configuration can control which servers and ports can use this mechanism, so it is more secure than opening up the RTMP server directly to clients.
  3. You can put a reverse proxy server on the web tier that proxies simple HTTP requests to the RTMP server. This approach only works in RTMPT polling mode.

HTTP and AMF channels

The HTTP channel uses XML over an HTTP connection. The AMF channel uses the AMF format over the HTTP protocol. Both channels support simple polling mechanisms that clients can use to request messages from the server. You can use these channels to get pushed messages to the client when the other, more efficient real-time mechanisms are not suitable. The polling interval is configurable, or polling can be controlled more flexibly from ActionScript code on the client. This mechanism uses the application server’s normal HTTP request processing logic and works with typical J2EE deployment architectures.

When you use the HTTP channel or AMF channel, the FlexSession maps directly to the underlying application server’s HTTP session, so the application server’s session failover can be used to provide fault-tolerant session storage. You typically should ensure that your application server is configured to use sticky sessions so that one client’s requests go to the same application server process instance.
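For example, a simple polling AMF channel definition in services-config.xml might look roughly like the following sketch; the id, endpoint URI, and interval are placeholders, and the property names should be checked against the configuration reference for your release:

<channel-definition id="my-polling-amf" class="mx.messaging.channels.AMFChannel">
    <endpoint uri="http://{server.name}:{server.port}/{context.root}/messagebroker/amfpolling" class="flex.messaging.endpoints.AMFEndpoint"/>
    <properties>
        <!-- Clients poll the server for queued messages every 8 seconds. -->
        <polling-enabled>true</polling-enabled>
        <polling-interval-seconds>8</polling-interval-seconds>
    </properties>
</channel-definition>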

In the default configuration for a polling HTTP or AMF channel, the channel does not wait for messages on the server. When a poll request is received, the server checks whether any messages are queued up for the polling client and, if so, delivers them in the response to the HTTP request.

In LiveCycle Data Services ES, a new configuration option provides the ability to support more real-time behavior using these channels. You configure a poll-wait-interval on the channel, which indicates the amount of time the server thread should wait for pushed messages. If that interval is set to -1, the server waits indefinitely, for as long as the client can maintain the connection to the server. As soon as any messages are received for that client, the server pushes them in the response to the request. The poll-interval on the client is then used to determine when to poll the server next. If you set poll-interval to 0 and poll-wait-interval to -1, you get real-time behavior from this channel.

There are two additional caveats for using the poll-wait-interval with LiveCycle Data Services ES. The first is utilization of available application server threads. Because this channel ties up one of the application server’s request handling threads for each waiting client connection, this mechanism is not nearly as scalable as RTMP. Each application server thread requires a certain amount of memory and may add some overhead to thread scheduling. Modern JVMs can typically support a couple of hundred threads comfortably if given enough memory. You should check the maximum thread stack size (often 1 MB or 2 MB per thread) and make sure you have enough memory for the number of application server threads you configure.

To ensure that Flex clients using channels with a poll-wait-interval do not lock up your application server, LiveCycle Data Services ES requires that you set an additional configuration attribute specifying the maximum number of waiting connections it should manage. This is configured through the max-waiting-poll-requests setting. This value must be smaller than the number of HTTP request threads your application server is configured to use. For example, you might configure the application server to have at most 200 threads and allow at most 170 waiting poll requests. This ensures that you have at least 30 application server threads for handling other HTTP requests. The number of free application server threads should be large enough to maximize opportunities for parallel computation. Applications that are I/O heavy may need a large number of threads to ensure that all I/O channels are utilized completely. Examples of work that requires application server threads include simultaneously writing responses to clients behind slow network connections, executing database queries or updates, and performing computation on behalf of user requests.
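Putting the wait interval and the waiting-request limit together, a long-polling AMF channel definition might look roughly like the following sketch. The element names (polling-interval-seconds, wait-interval-millis, max-waiting-poll-requests) follow the shipped configuration files, but verify them against the configuration reference for your release; the numbers continue the 200-thread example above:

<channel-definition id="my-longpoll-amf" class="mx.messaging.channels.AMFChannel">
    <endpoint uri="http://{server.name}:{server.port}/{context.root}/messagebroker/amflongpolling" class="flex.messaging.endpoints.AMFEndpoint"/>
    <properties>
        <polling-enabled>true</polling-enabled>
        <!-- Poll again immediately after each response (the client-side poll-interval). -->
        <polling-interval-seconds>0</polling-interval-seconds>
        <!-- Hold each poll request until a message arrives; -1 means wait indefinitely. -->
        <wait-interval-millis>-1</wait-interval-millis>
        <!-- Keep well below the application server's HTTP request thread count (200 threads here). -->
        <max-waiting-poll-requests>170</max-waiting-poll-requests>
    </properties>
</channel-definition>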

The other consideration for using the poll-wait-interval is that LiveCycle Data Services ES must avoid monopolizing the connections the browser will allocate for communicating with a server. The HTTP 1.1 specification mandates that browsers allocate at most two connections to the same server when the server supports HTTP 1.1. To avoid using more than one connection from a single browser, LiveCycle Data Services ES allows only one waiting thread for a given application server session at a time. If a new polling request comes in over the same HTTP session as an existing waiting request, the waiting request returns and the new request starts waiting in its place.

If you have an application that has more than one Flash Player instance on the same page, be aware that only one of those instances can have a real-time HTTP channel connection open to the server at a time.

RTMP deployment scenarios

The following scenarios show the channel configuration to use with RTMP channels, depending on whether the RTMP protocol is allowed and whether the RTMP port is open:

RTMP protocol allowed, port open:
Default configuration.

RTMP protocol allowed, port not open:
1) The endpoint URI port should be set to an open port.
2) The <bind-port> property in the RTMP channel configuration should be set. The RTMP server listens on this port.
3) RTMP traffic from the endpoint URI’s port should be redirected to the port set in the <bind-port> property.

RTMP protocol not allowed, port open:
1) Default configuration. The RTMP requests fall back to RTMPT.
2) For non-secure channels only, the protocol in the endpoint URI can be hardcoded to use RTMPT directly.

RTMP protocol not allowed, port not open:
1) The endpoint URI port should be set to an open port.
2) The <bind-port> property should be set. The RTMP server listens on this port.
3) RTMPT traffic from the endpoint URI port should be redirected to the port set in the <bind-port> property.
4) For non-secure channels only, the protocol in the endpoint URI can be hardcoded to use RTMPT directly.
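For reference, an RTMP channel definition that sets <bind-port> might look roughly like the following sketch; the id and ports are placeholders, and the channel and endpoint class names are the usual RTMP ones, so verify them against your own configuration files:

<channel-definition id="my-rtmp" class="mx.messaging.channels.RTMPChannel">
    <!-- Clients connect to the open port in the endpoint URI (port 80 here). -->
    <endpoint uri="rtmp://{server.name}:80" class="flex.messaging.endpoints.RTMPEndpoint"/>
    <properties>
        <!-- The RTMP server actually listens on this port; traffic arriving on the endpoint URI port is redirected to it. -->
        <bind-port>2037</bind-port>
    </properties>
</channel-definition>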

When a client-side RTMPChannel instance internally falls back to tunneling, it sends its HTTP requests to the same domain and port used for regular RTMP connections, and these requests must be sent over a standard HTTP port so that they make it through any external firewalls that clients could be behind. For the fallback to work, you must therefore bind your server-side RTMP endpoint, which serves both regular RTMP connections and tunneled connections, to port 80. When the client channel falls back to tunneling, it always goes to the domain and port specified in the channel’s endpoint URI.

If you have a switch or load balancer that supports virtual IPs, you can use the deployment shown in the following table, which uses a virtual IP (and no additional NIC on the LiveCycle Data Services ES server); client requests first hit a load balancer that routes them to a backing LiveCycle Data Services ES server:

Public Internet | Load balancer | LiveCycle Data Services ES
Browser client makes HTTP request | my.domain.com:80 | Servlet container bound to port 80
Browser client uses RTMP/T | rtmp.domain.com:80 (virtual IP address) | RTMPEndpoint bound to any port; for example, 2037

The virtual IP/port in this example (rtmp.domain.com:80) is configured to route back to port 2037 (or whatever port the RTMPEndpoint is configured to use) on the backing LiveCycle Data Services ES server. In this approach, the RTMPEndpoint does not need to bind to a standard port because the load balancer publicly exposes the endpoint via a separate virtual IP address on a standard port.

If you do not have a switch or load balancer that supports virtual IPs, you need a second NIC so that the RTMPEndpoint can bind to port 80 on the same server as the servlet container, which binds to port 80 on the primary NIC, as the following table shows:

Public Internet | LiveCycle Data Services ES NICs | LiveCycle Data Services ES
Browser client makes HTTP request | my.domain.com:80 | Servlet container bound to port 80 on the primary NIC using the primary domain/IP address
Browser client uses RTMP/T | rtmp.domain.com:80 | RTMPEndpoint bound to port 80 on a second NIC using a separate domain/IP address

In this approach the physical LiveCycle Data Services ES server has two NICs, each with its own static IP address. The servlet container binds to port 80 on the primary NIC and the RTMPEndpoint binds to port 80 on the secondary NIC. This allows traffic to and from the server to travel over a standard port (80) regardless of the protocol (HTTP or RTMP). The key takeaway is that for the RTMPT fallback to work smoothly, the RTMPEndpoint must be exposed to Flex clients over port 80, the standard HTTP port that generally allows traffic (unlike nonstandard RTMP ports such as 2037, which are generally closed by IT departments).

To bind the RTMPEndpoint to port 80, one of the approaches above is required. For secure RTMPS (direct or tunneled) connections, the information above is the same, but you would use port 443. If an application requires both secure and non-secure RTMP connections with tunneling fallback, you do not need three NICs; you need only two. The first NIC services regular HTTP/HTTPS traffic on the primary IP address at ports 80 and 443, and the second NIC services RTMP and RTMPS connections on the secondary IP address at ports 80 and 443. If a load balancer is in use, the setup is the same as the first option above, where LiveCycle Data Services ES has a single NIC and the different IP/port combinations are defined at the load balancer.