Device Proliferation and the Cost of Software

I purchased my first application from the Mac App Store this week; it cost $99. Before I hit the purchase button, though, I wondered: was I going to have to spend that $99 three times? Unlike my phone, of which I have only one, I have a Mac at work, a Mac at home, and a laptop. Multiplying every purchase by three would make this a much more expensive proposition.

Being in the business of writing software for a living, I try to be conscientious about software license terms. To do that, I actually read the license agreements. Or at least, the parts of them that deal with how many computers the software can be installed on.

In my experience with such terms, there has been a gradual evolution away from licenses tied to a specific computer and toward accommodating the desktop/laptop combination that’s common these days. Still, that typically left the limit at two machines per license, and one of them had to be a laptop.

The Mac App Store licensing terms are much more generous: Each purchase can be used on any Mac that you own or control. From the Mac App Store FAQ:

Can I use apps from the Mac App Store on more than one computer?

Apps from the Mac App Store may be used on any Macs that you own or control for your personal use.

Reassured that I’d only have to spend my $99 once, I proceeded with the purchase.

Does this represent a shift towards lower software costs for consumers? That $99 license from the Mac App Store cost the same as buying the software directly from its publisher, but the publisher’s license was more restrictive: I would have had to purchase two copies for twice the price.

Yes and no. The kicker is that the Mac App Store works only for my Macs. Want the same app for your iPad? That’s another $40. And for your iPhone? That’s another $20. So yes, this change in licensing terms brings down the cost of software for desktop and laptop computers. But at the same time, I’m spending more on software than ever before, because I’m also buying it for different types of devices. Device proliferation is driving costs right back up.


Application Structure and Composition

Recent mobile application models impose some surprising straitjackets that defeat the ability to compose applications from constituent parts.

On the desktop, composition support is de rigueur. On Windows it’s possible because there simply aren’t many rules. Want to use a DLL that uses another DLL that loads some external resources? There’s an API for that. Well, APIs, anyway. You can string them together by solving for units, which, by the way, also works wonders in high school physics.

On Mac OS, composition is by design. Which is fortunate, since otherwise it probably wouldn’t be allowed. Depend on a framework? There’s a place to put it. Framework depends on another framework? Yes, there’s a place for that, too. All places are rationally named, everything is a bundle, and the layout is standardized.

iOS? Not so much. Mostly because dynamic code loading isn’t allowed, at least not outside of OS frameworks. We have to resort to linking everything together into one big blob. And then all linker symbols appear in a common namespace, so we’re denied the pleasure of embedding multiple copies of the PNG library into the same application without modifying any of them to use unique prefixes.

The real surprise for me, though, was Android. Surely an open platform would support composition? But alas, resources on Android get compiled into one big block of unique IDs. No way to use this mechanism to create a library beforehand with its own set, because there’s only one file to rule them all.

No doubt there’s a method to the madness that I just can’t see. Maybe they can break it down into pieces for me.

Class Declarations as a Transformation of Closures

In JavaScript, one can go about creating objects with private state by hiding it in a closure made at object creation time. Something along the lines of:

var myObject = (function() {

    var privateVariable = ...;

    return {
        publicGetter: function() { return privateVariable; }
    };
})();

Here, shielding the private state occurs at execution time; that is, it’s a result of the execution of the program.

Here’s the equivalent using class declarations in ActionScript:

class MyClass {
    private var privateVariable = ...;
    public function publicGetter():* { return privateVariable; }
}

Here we can take the “declarations” part of “class declaration” literally: The private state is now private because it’s declared that way, rather than as a side effect of program execution.

Thus, class declarations provide a declarative transform of one particular thing that can be achieved via closures. As a mechanism, closures are of course more powerful and widely applicable. For purposes of writing and reading maintainable code, it’s preferable to have declarative support for common patterns like this.
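The same transformation is now available within JavaScript itself: private class fields, a later addition to the language, let the private state be declared rather than captured. A minimal sketch (names mirror the examples above):

```javascript
// Private state declared with '#' syntax rather than hidden in a closure.
class MyClass {
  #privateVariable;

  constructor(value) {
    this.#privateVariable = value;
  }

  publicGetter() {
    return this.#privateVariable;
  }
}

const myObject = new MyClass(42);
// Accessing myObject.#privateVariable outside the class is a syntax error;
// only the declared public method can reach it.
```

As with the ActionScript version, the privacy is a consequence of the declaration, not of anything that happens at execution time.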

Perspective on the AIR 2.7 Release

Yesterday we shipped the AIR 2.7 release. Although a relatively minor release in the grand scheme of things, this one is an important milestone for the Flash Runtime. It marks the culmination of an enormous effort over the past two years to bring AIR to mobile devices.

It has not been a straightforward path. Our first work targeted the first-generation iPhone, which presented a couple of major challenges. First, we had to craft an alternate compilation strategy for ActionScript, since just-in-time compilation, our usual strategy, was not permitted. Then we had to achieve reasonable performance on what was a slow device.

Further bumps along the way included scrambling to support the iPad, being forced off of iOS entirely, and switching our focus to the quickly evolving Android platform. For all their similarities, Android and iOS often take different approaches under the covers, which made, and continues to make, the creation of cross-platform APIs a real challenge.

Of course, bringing AIR to iOS and Android wasn’t simply a matter of porting the code, either. We also added a variety of new APIs to accommodate the different features found in mobile devices. That list includes major additions like Geolocation, Accelerometer, CameraUI, CameraRoll, and StageWebView. Smaller enhancements supported screen orientation, soft keyboards, and other mobile-specific capabilities.

For me, the 2.7 release brings us to an important milestone. AIR is now clearly a viable platform for building applications across the PlayBook, Android, and iOS, with thousands of AIR-based applications available in various marketplaces. We’ve rounded out a basic set of mobile-specific APIs, and, with the iOS performance enhancements in this release, eliminated performance as a barrier to adoption. Moving to mobile was a mad scramble; now we’re there.

With 2.7 out the door, we’re shifting our attention from merely moving to mobile to pushing the envelope. That AIR is a cross-platform runtime is no reason for us to compromise on capabilities, performance, or tools, and you’ll see that in our upcoming releases. The technologies that will drive the next generation of AIR are already in the works, running here behind closed doors. We look forward to sharing these with you in the months to come.

Platform Parity and Scalability

Confession: We’ve made a bit of a mess for ourselves in the application descriptor.

(For those not familiar with the application descriptor, it’s a short XML document that’s a required part of any AIR application. It provides essential information like the application’s unique ID and version, plus a variety of optional settings covering everything from screen orientation behavior to application marketplace filtering. It’s roughly analogous to Info.plist on Mac OS and iOS, and AndroidManifest.xml on Android.)

In AIR 1.0, we strived to keep this descriptor purely cross-platform. We almost achieved that goal, but compromised a bit with the <programMenuFolder> setting. That setting allows an AIR app to control, on Windows, where it appears in the Start Menu. Customers told us it was essential.

When we added iOS support in AIR 2.0, we realized that there was a host of options, accessible via the iOS Info.plist file, that we wanted developers to have access to and yet didn’t have any cross-platform analogue. That was hardly surprising at the time, since iOS was the first mobile platform we supported.

So, we decided to add an escape hatch: the <iPhone> element. (Why not <iOS>? Because this was before iPhone OS became iOS.) It contains iOS-specific settings, including arbitrary additions to the Info.plist file.
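For illustration, the escape hatch looks roughly like this (the UIStatusBarStyle key is just an example of an Info.plist addition; consult the AIR documentation for the exact schema):

```xml
<iPhone>
    <InfoAdditions><![CDATA[
        <key>UIStatusBarStyle</key>
        <string>UIStatusBarStyleBlackOpaque</string>
    ]]></InfoAdditions>
</iPhone>
```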

When we added Android support, we extended this in the obvious way, adding an <android> element with similar capabilities.

It was about this time that we realized we had a problem. Our list of supported platforms continues to grow, but adding new elements to the descriptor for each one isn’t a scalable approach. In particular, it requires that Adobe modify the descriptor schema each time a new platform comes online, rather than enabling our platform partners to make these additions on their own. Our partners don’t want to wait for us to make such a change, and we don’t want to make them wait, either.

As we worked with RIM to bring AIR to the PlayBook, we asked RIM to help us fix this and store PlayBook-specific settings in a separate file, outside the application descriptor. This approach is easily scalable, as it’s trivial for each platform to add its own file. And it’s easier to use than open-ended extensions in the application descriptor itself, which can get tricky when storing XML in one schema inside XML in another schema.

At the moment, the unfortunate result of this mess is that PlayBook might appear to be a second-class citizen, in that it doesn’t get its own element in the descriptor. This is not at all the case. On the contrary, PlayBook is the first platform to move to our preferred mechanism. It’s iOS and Android that are stuck with the older, more awkward mechanism.

Although I can’t speak to the timing, as it’s not yet determined, we will be moving platform-specific settings for all platforms, including iOS and Android, to external files in the future. Then we’ll finally be where we should have been heading from the beginning: Parity between platforms, in a scalable fashion.

AIR 2.6 Extended Migration Signature Grace Periods

Returning to the ever-popular subject of the AIR desktop signing mechanism, I want to point out that the grace period for applying a migration signature with your old certificate has been extended from six months (AIR 2.5 and earlier) to one year (AIR 2.6 and beyond).

Those familiar with the details will recall that updating to a new certificate for a desktop AIR application goes something like this:

  1. Sign the new version of the app with your new certificate.
  2. Sign the new version of the app a second time with your old certificate.

The second signature is referred to as the migration signature.

The new certificate, not surprisingly, must be valid when used to sign. For the old certificate, however, we allow a grace period after the certificate expires. This avoids the need to have both certificates valid at the same time, which can be difficult to arrange. This period was initially six months, but this proved a bit too short in some situations. As of AIR 2.6, the grace period is now one year.
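The rule can be thought of like this (a hypothetical helper for illustration; the packaging tools perform the actual check at signing time):

```javascript
// Hypothetical sketch: can the old certificate still be used for a
// migration signature? A valid certificate always can; an expired one
// can be used within the grace period (one year as of AIR 2.6).
function canApplyMigrationSignature(certExpiry, signingDate, graceMonths = 12) {
  const deadline = new Date(certExpiry.getTime());
  deadline.setMonth(deadline.getMonth() + graceMonths);
  return signingDate.getTime() <= deadline.getTime();
}

// Example: a certificate that expired 2011-01-01 still works for a
// migration signature in late 2011, but not in 2013.
const expiry = new Date('2011-01-01');
canApplyMigrationSignature(expiry, new Date('2011-11-15')); // within grace
canApplyMigrationSignature(expiry, new Date('2013-02-01')); // too late
```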

For more background in this area, you might want to read:

Unique Device Identifiers on AIR Mobile

Application developers frequently want a mechanism by which they can track a device’s identity. Such a value is often used in conjunction with copy-protection schemes to, for example, limit the total number of devices on which a user can view purchased content.

AIR notably does not include an API for obtaining such an identifier. Such an API would be straightforward to implement on iOS, since iOS itself provides an identifier that can be used for this purpose. On Android, however, no such facility is provided. This surprises some, but even the Android team agrees.

How to work around it? I recommend using Math.random() to generate an ID on first launch and save it locally. It’s portable, and does a better job of protecting user privacy than the iOS device identifier does. Of course it can be reset, but then users also sometimes lose or dispose of devices, so any scheme needs a mechanism to flush old device IDs from the system as “no longer in use.”

EncryptedLocalStore and Storage Reliability

Recently I’ve fielded yet another question about the reliability of the EncryptedLocalStore API, which is usually a good indication that a topic deserves a post.

The key thing to understand about all storage options, encrypted or otherwise, is that there are no guarantees. Hardware failures, software failures, user error, and even deliberate user action can all lead to data disappearance and data corruption.

Applications should always be prepared to deal with these scenarios. This doesn’t necessarily mean you need fancy data-recovery options, although it might. It does mean your application shouldn’t be rendered useless upon encountering, for example, a malformed configuration file.
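That defensive posture can be sketched as follows (the configuration keys and defaults here are purely illustrative):

```javascript
// Illustrative built-in defaults the application can always fall back on.
const DEFAULT_CONFIG = { theme: 'light', retries: 3 };

// Parse stored configuration text defensively: missing or malformed data
// degrades to the defaults instead of rendering the application useless.
function loadConfig(text) {
  try {
    const parsed = JSON.parse(text);
    if (parsed && typeof parsed === 'object') {
      return { ...DEFAULT_CONFIG, ...parsed };
    }
  } catch (e) {
    // Corrupt data: fall through to the defaults.
  }
  return { ...DEFAULT_CONFIG };
}
```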

Does EncryptedLocalStore provide reliable storage? Probably no more so than storing data anywhere else on a given device, as all of the same failure modes apply. Probably a bit less so, in fact, as the complexity of the encryption mechanisms means there are that many more things that can go wrong.

Note that this doesn’t excuse the presence of defects in those mechanisms. Rather, the point here is that more complex mechanisms have more points of failure, and sooner or later something will go wrong. If you’re concerned with reliability, your only realistic option is to plan for that scenario.

If you use EncryptedLocalStore to cache confidential information, such as passwords, then this issue is probably not serious, since users can generally re-enter that information. In effect, the user serves as the backup for this data.

If, however, you use EncryptedLocalStore to maintain encryption keys, or other items that the user is not aware of or cannot reproduce, then you should explore providing other backup mechanisms. If the encrypted data is valuable, you should probably back up the data and the key. If the data is a cache and is encrypted only locally, then logic to abandon and re-fill the cache may be sufficient. Whatever your scenario, an appropriate plan can no doubt be devised.

Deployment Configuration and AIR Applications

All but the simplest of applications accept, or even require, configuration data that affects their behavior. These settings might affect the appearance of a UI element, a parameter in some calculation, or the location of a service, such as a database, required by the application.

AIR developers are often tempted to embed this configuration data into the application package. This appears to be a convenient option because standard methods for deploying AIR applications, whether on the desktop or on mobile devices, don’t provide any alternative way of packaging this data with the application. On the other hand, it is inconvenient if more than one configuration is required, because then a separate package must be created, signed, and managed for each set of configuration options.

I encourage you to avoid packaging configuration options with your application. In addition to creating a package management problem that is otherwise avoided, it makes it impossible to move a copy of the application between different environments, be they test, staging, production, or something else. (Jez Humble and David Farley do an excellent job of covering this issue in their book, Continuous Delivery.)

Instead, I encourage you to deploy configuration data via another mechanism. The simplest approach is to use a file deployed to a well-known location. This approach works well in the enterprise, where it has a well-established history (think /etc on Unix), makes the data easily accessible to AIR applications, and is supported by most deployment tools. If you’re in a closed environment your platform of choice might offer other options; Windows, for example, offers Active Directory.

It can be useful, by the way, to have a built-in default set of values, especially in the consumer space. This approach makes it easier for your application to be tested against a mocked up service provider in house and then, when deployed, automatically target the real service. For example, an application that talks to Facebook might use your mocked Facebook service in-house, but when deployed by customers always connect directly to Facebook.

As implied above, Adobe does not provide tools that directly support the deployment of configuration data. The AIR team elected not to enter this space because (a) it’s populated by existing players, such as IBM and Microsoft, with deep expertise in this area, and (b) a significant number of customers we speak with already have an existing solution in place. Our approach, then, has been to integrate with existing deployment tools and their capabilities. For more on this, see Distributing AIR in the Enterprise, by Peter Albert and Michael Labriola.

Approaches to Modular AIR Applications

As applications grow in size and complexity, it becomes necessary to break them up into manageable pieces. To some degree this can be handled at development time by maintaining a carefully-structured source code base.

Compiling the application into a single, monolithic deliverable may itself become a roadblock. For example, it may be desirable instead to compile and validate these pieces separately, and then assemble the resulting binaries. The pieces may be developed and shipped on different schedules, and thus different portions of the application updated independently. Or, the pieces may even be developed by other parties, as is often the case for applications supporting plugins. Regardless, it often becomes necessary to defer loading some of these pieces until runtime.

AIR currently supports two basic approaches for this kind of runtime assembly: Loading code into a network sandbox, and loading code into the application sandbox. The two approaches have different capabilities and different limitations. When designing large applications, it’s good to be aware of these trade-offs.

Network Sandbox

Code is loaded into a network sandbox via the Loader.load() API by specifying the URL of the target code. Code in this sandbox is restricted from accessing AIR-specific APIs, but has access to the same set of APIs available to content running in Flash Player. The network sandbox is, essentially, Flash Player.

Interestingly, because the loaded content is restricted to the Flash Player API, it can run in either Flash Player or AIR. This may be especially useful if your application sometimes runs in the browser and sometimes as an installed application.

The API restrictions also mean that you don’t have to trust the code you load. The network sandbox prevents access to potentially dangerous APIs, such as the filesystem API. You can selectively open up access via the sandbox bridge capability.

Because the runtime knows where the code was loaded from, the code can use relative addressing to access other network resources, including additional network-hosted code. This allows for such code to easily be moved between different web servers, including between test and production environments.

The network sandbox is typically not suitable when the code you’re loading is supposed to function as an integral part of the application, with full access to other parts of the application and to the AIR APIs. For that, you want the application sandbox.

Application Sandbox

Code is loaded into the application sandbox via the Loader.loadBytes() API. Code in this sandbox runs as if it were installed along with your application, having full access to all of the AIR APIs. This facility provides the basic underpinnings for a complete plugin model. You can use that model to modularize your own application, or even open it up to third-party components.

Code loaded via this method is granted full API access, so it is essential that the application validate that it trusts the code before loading it. This can be accomplished in a secure fashion by using code signing, and validating the signature before loading the code. It may also be sufficient to obtain the code via a secure URL, but note that this protects the code only while in transit, and not after it is stored locally. To encourage secure use of this API, it loads the code, as the method name suggests, directly from a ByteArray and not from a URL.

Code loaded via this API does not retain its origin (URL), and therefore cannot access other code using relative URLs. Again, this was done to encourage secure use of the API. If this code can implicitly load additional code off the network, it is not sufficient to validate just the first SWF; each link of that chain requires careful validation.

Note that, because the origin of the code is removed, you can’t combine granting access to the application sandbox with the deployment ease of loading a SWF and associated RSLs from the network. If one module depends on another, some other referencing mechanism must be used.


It’s easy to imagine extending these models in a variety of ways. For example, while it is currently possible to create a plugin mechanism for an AIR application, the validation work has to be done by the application. This in turn prevents it from integrating with other runtime features, like RSLs. By expanding runtime support, we could potentially allow these features to work together.

We are actively exploring how AIR applications are using these techniques today, and how we might enhance our support for building large applications in the future. If you have feedback in this area, please let us know via a comment or via


For more in this area, you might want to read: