Posts in Category "AIR"

My Thoughts on Flash and HTML (as Expressed in an Email to “Tech News Today”)

I’m a big fan of the video and podcast Tech News Today. It’s one of the best technology shows I know of, and I seldom miss an episode. As some of you know, I sent them an email yesterday about our recent announcements around Flash and HTML, and they were kind enough to read some of it on-air. It was way too long for them to read in its entirety, so I figured I’d post the whole thing here.

As someone who has worked on the Flash Platform at Adobe for the last nine years, I just wanted to provide some additional context around yesterday’s announcement. Your coverage was very good, so no complaints, but I feel like it’s worth emphasizing a few things.

Part of Adobe’s story is enabling cross-platform solutions, but since Flash has never been supported on iOS, we weren’t able to deliver on that vision in the context of mobile browsers. With mobile browsers as good as they are now (the ICS browser looks amazing, and mobile Safari has always been awesome), it just makes more sense to use HTML.

In the context of installed applications, however, our story is stronger than ever. We recently released AIR 3 which is an extremely solid option for delivering installed applications through app stores across devices. Installed mobile applications are an area where we have been very successful delivering on our cross-platform vision, so that’s where we’re going to invest. Additionally, I think that model more closely matches the way we use our devices; I think mobile browsers are primarily used for accessing content, and the tendency is to use installed apps for more interactive content like applications and games.

Another point I want to make is in response to Sarah’s comment yesterday about Flash working better on some devices than others. That’s true. Getting Flash to work consistently across all the chipsets that we support (and with all the different drivers out there — some of which are better implemented than others) is a huge amount of work, and requires a lot of engineering resources. At some point, we had to ask ourselves if we wanted to be the kind of company that continues to use resources to pursue something we weren’t convinced made sense just because it’s what we’ve always done, or if we wanted to be more forward thinking. I think we absolutely made the right decision.

It’s also worth pointing out that we’re still investing heavily in Flash in the areas where it makes more sense like installed mobile and desktop applications, and the desktop browser. Specifically, the Stage3D APIs we introduced in AIR 3 are going to provide an in-browser gaming experience like nobody has ever seen (look for videos of Unreal running in the browser), and the new APIs for hardware accelerated video are going to mean higher quality video that uses less CPU and battery. These are areas that HTML5 has not yet fully addressed, so Flash can lead the way. We will continue to use Flash for doing things that HTML can’t, and for the things that HTML can do, we will embrace it.

That brings me to my last point: I think there’s this perception out there that Adobe dislikes HTML, and that yesterday was somehow a bitter concession. As someone inside the company, I can tell you that there are a lot of us who are very excited about what we can do with HTML5. Personally, I’ve been researching and working on HTML projects for quite some time at Adobe, and I’ve been working with a lot of very smart people who are as passionate about it as I am. There are definitely people out there (both inside Adobe and outside) who are passionate just about Flash, but I think it’s more accurate to say that the overwhelming majority of us are simply passionate about the web, and about building awesome experiences. Flash has always been about providing functionality that HTML couldn’t, however now that HTML5 can provide a lot of that functionality, we’re going to have a lot of fun seeing what we can do with it.

So in summary, look for Adobe to continue to push Flash forward in areas that HTML doesn’t yet address, to push HTML forward with contributions to WebKit and tooling, and to provide cross-platform solutions in whatever technology makes the most sense.

If you want to hear it read on-air, it’s at the 45:00 mark in the video below.

Secure Data Persistence with AIR 3

AIR 3 offers two new capabilities that can make data persistence much more secure than what was possible with previous versions of AIR. If your AIR application persists information like credentials or sensitive relational data that must remain secure, this article is for you.


The new flash.crypto.generateRandomBytes function provides a way to generate far more random data than was previously possible. As you probably already know, functions like Math.random are known as pseudorandom number generators because they rely on relatively small sets of initial data (like timestamps). Bytes generated in such a manner are insufficient for use as encryption keys because if the random number generation algorithm is known (and you should always assume that it is), and the mechanism for seeding it is known and constrained, it isn’t hard for an attacker to guess the encryption key and gain access to your data.

The new generateRandomBytes function uses a much more random (and therefore, much more secure) method for generating random bytes. Rather than implementing our own algorithm, the runtime uses the underlying operating system’s random byte generation function. The table below shows which function is used on each platform:

Operating System    Function
Windows             CryptGenRandom()
Mac OS X            /dev/random
Linux               /dev/random
Android             /dev/urandom
iOS                 SecRandomCopyBytes()
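Using the function is a one-line call. Here’s a minimal sketch of generating 16 cryptographically secure bytes:

```actionscript
import flash.crypto.generateRandomBytes;
import flash.utils.ByteArray;

// Generate 16 unguessable bytes (128 bits), suitable for use as an
// encryption key, session identifier, or salt:
var bytes:ByteArray = generateRandomBytes(16);
trace(bytes.length); // 16
```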

The natural question at this point is what makes the operating systems’ random byte generation implementations so secure — or, put another way, how is the OS able to generate random data which is more cryptographically secure than Math.random? The answer is something called an entropy pool. An entropy pool is essentially a collection of randomness. When a process needs random bytes, they can be taken from the entropy pool through one of the functions listed above, and when a process believes it has random data, it can choose to add those bytes to the entropy pool. Processes can obtain random data in several ways:

  • Hardware jitter (deviations caused by things like electromagnetic interference or thermal noise).
  • Mouse movements and/or the time between keystrokes.
  • Unpredictable hardware events like disk head seek times, or network traffic.
  • A hardware random number generator.

(Now is a good time to state that it’s probably more correct to say that an entropy pool contains unguessable data as opposed to truly random data, but in the world of encryption, the two amount to essentially the same thing.)

So now that you have access to unguessable bytes, what can you do with them to make your data more secure? The most obvious use for the new generateRandomBytes function is probably the creation of encryption keys. The database implementation embedded inside of the AIR runtime supports encrypted database files, and the quality of the encryption is largely based on the quality of your encryption key. The generateRandomBytes function gives you a cryptographically secure way to generate an encryption key, and the EncryptedLocalStore API gives you a secure method for storing it (more on this below). Other uses of the generateRandomBytes function include generating session identifiers, GUIDs, and salt to be used in hashes (salt is additional data added to the data you want to hash which makes looking the hash up in a rainbow table much less feasible).
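Putting those pieces together, a sketch of creating an encrypted database with a randomly generated key might look like this (the file name and storage key are illustrative):

```actionscript
import flash.crypto.generateRandomBytes;
import flash.filesystem.File;
import flash.utils.ByteArray;

// The encryption key for an encrypted AIR database must be exactly 16 bytes:
var key:ByteArray = generateRandomBytes(16);

// Persist the key securely so the database can be reopened later:
EncryptedLocalStore.setItem("dbKey", key);

// Open (or create) the encrypted database file:
var conn:SQLConnection = new SQLConnection();"secure.db"),
          SQLMode.CREATE, false, 1024, key);
```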

Mobile Encrypted Local Store

Update (11/3/2011): If you’re having trouble getting ELS to work in the iOS emulator or in “fast packaging” mode, add the following to your compiler arguments: -swf-version=13

A second feature in AIR 3 which can make data persistence more secure is the introduction of the mobile EncryptedLocalStore (ELS) API. ELS has been available for the desktop profile since 1.0, but AIR 3 is the first version which makes it available on mobile.

ELS provides a secure storage mechanism implemented by the underlying operating system. On Windows, ELS uses DPAPI, and on OS X, ELS uses Keychain services.

The new mobile implementations of ELS require some additional explanation. On iOS, ELS uses Keychain services just as it does on OS X, however Android does not provide an encrypted storage service for the runtime to leverage. Data is therefore "sandboxed" rather than encrypted. In other words, it is stored in a secure, isolated location (inaccessible to other processes), but since the data isn’t actually encrypted, if an attacker has access to the physical device, or if a user has rooted his or her device, it is possible for the data to be compromised. According to the internal ELS specification:

On Android, ELS can guarantee that on a non-rooted device in the hands of an authorized user, ELS data of an AIR application will not be accessible/modifiable by any other application (AIR or native) running on the device.

It’s possible that Android will provide a more secure storage service in the future, and we can simply swap out our implementation, however for the time being, it’s worth noting that the Android implementation lacks the additional step of encryption on top of isolation.

So what kinds of data should mobile developers store with ELS? The same data that you would store in desktop applications. Namely:

  • Encryption keys (the sequence of random bytes obtained through the generateRandomBytes function).
  • Account credentials. If you want to store users’ credentials so they don’t have to enter them every time they use your app, ELS is the correct API to use.
  • Salts for hashes. Once you’ve salted and hashed something like a password, you need to be able to securely persist the salt for future comparisons.
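Using ELS is straightforward. Here’s a minimal sketch of storing and retrieving a credential (the item name is just an example):

```actionscript
import flash.utils.ByteArray;

// Store a password securely. ELS persists ByteArray values by String key:
var data:ByteArray = new ByteArray();
EncryptedLocalStore.setItem("password", data);

// Read it back later:
var stored:ByteArray = EncryptedLocalStore.getItem("password");
trace(stored.readUTFBytes(stored.length));
```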


If you just need to generate random numbers for something like a game, you should stick to using Math.random. In fact, on some platforms, the generateRandomBytes function could temporarily block if the entropy pool has been entirely drained, which means you don’t want to pull large amounts of data from it in an endless loop. Instead, generateRandomBytes should be reserved for generating unguessable cryptographically secure random data for things like encryption and session keys, GUIDs, and salts for hashes.

Whenever you need to store sensitive information like credentials, encryption keys, or salts, you should always use the EncryptedLocalStore APIs. Since they are now supported everywhere, you can use them in a completely cross-platform manner.

Whether you’re updating an existing application that needs to persist data securely, or building an entirely new one, I strongly recommend that you use the new best practices made possible in AIR 3.

Providing Hints to the Garbage Collector in AIR 3

AIR 3 has a new API to allow developers to provide the runtime’s garbage collector with hints. The API is called System.pauseForGCIfCollectionImminent(), and it requires a little explanation to understand both its utility and its value.

Before I get into how the function works, let’s start with why you’d want to give the garbage collector pointers (no pun intended) to help it decide when to reclaim memory. Running the garbage collector can cause the runtime to pause, and while the pause is typically imperceptible, if you’re doing something like playing a fast-paced game, you might notice it. In this particular scenario, a better time to garbage collect might be between game levels. Or, if you’re building something like an e-reader, it would be best to reclaim memory while the user is reading as opposed to during your super slick and smooth page-flip animation.

The System.gc() function can force the runtime to garbage collect, however it doesn’t always have 100% predictable results across all platforms, and forcing garbage collection on a regular basis is usually overkill. System.pauseForGCIfCollectionImminent(), on the other hand, does not force garbage collection, but rather serves as a mechanism to tell the runtime that a particular place in your application would be a good place to reclaim memory, and to specify how likely it is that runtime will take that advice. The function takes a single parameter called imminence which is a Number indicating the strength of your recommendation. From the spec:

The GC will synchronously finish the current collection cycle (complete the marking and perform finalization) if the current allocation budget has been exhausted by a fraction greater than the fraction indicated by the parameter imminence.

If it’s still not entirely clear, here are some examples:

  • imminence = .25: If more than 25% of the allocation budget has been used, finish the current collection cycle. (More likely to result in a collection.)
  • imminence = .75: If more than 75% of the allocation budget has been used, finish the current collection cycle. (Less likely to result in a collection.)

In the former case, you’re essentially saying, "I really want you to garbage collect right now because a lot of animation is about to happen which I want to be as smooth as possible," and the latter case is essentially saying, "If you were going to garbage collect pretty soon anyway, now would be a pretty good time."
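In code, those two hints look like this (a minimal sketch):

```actionscript
import flash.system.System;

// Strong hint: a lot of animation is about to happen, so collect now
// if more than 25% of the allocation budget has been used:
System.pauseForGCIfCollectionImminent(0.25);

// Weak hint: only collect if a collection was coming soon anyway:
System.pauseForGCIfCollectionImminent(0.75);
```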

If you’re currently using System.gc() to force garbage collection in your AIR applications, I highly recommend you take a look at this new API.

Slides, Links, and Questions From my MAX 2011 Presentation

I wanted to post some additional information from my sessions at MAX 2011 for those of you who weren’t able to make it. Below you can find my slides, all the links contained in my presentation, and some of the questions (with answers) I received.


The recording of my presentation is now available, but you can also just download the slides here (PDF).


Accessing Compass Data With AIR 3

I just finished writing a simple compass application for AIR 3 that uses an ANE (AIR Native Extension) to get orientation data from the Android operating system. All the code (AIR application, Java native extension, and "glue" library code) is available on GitHub if you want to see how it works.

New Audio Capabilities in AIR 3

AIR 3 has two important new audio capabilities:

  1. Device speaker selection. Play audio through a phone’s speaker, or through its earpiece.
  2. Background audio playback on iOS. Keep audio playing in the background when a user switches away from your application, and even when the screen is switched off.

Device Speaker Selection

Toggling between a phone’s speaker and earpiece for playing audio is as easy as setting the audioPlaybackMode property on SoundMixer like this:

// Play audio through the speaker:
SoundMixer.audioPlaybackMode = AudioPlaybackMode.MEDIA;

// Play audio through the earpiece:
SoundMixer.audioPlaybackMode = AudioPlaybackMode.VOICE;

Background Audio

Background audio playback has always worked on Android because of the nature of Android’s multi-tasking implementation. Starting with AIR 3, it works on iOS, as well. To allow audio to continue playing when your application is closed — and even when the device’s screen is turned off — add the following to your application descriptor:
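On iOS, background audio is enabled through the UIBackgroundModes key in the iPhone InfoAdditions section of the application descriptor, along these lines:

```xml
<iPhone>
  <InfoAdditions>
    <![CDATA[
      <key>UIBackgroundModes</key>
      <array>
        <string>audio</string>
      </array>
    ]]>
  </InfoAdditions>
</iPhone>
```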


I wrote a sample application that demonstrates both these concepts called DeviceSpeakerExample. The code is available on github, and/or you can download the Flash Builder project.

Writing a Cross-platform and Cross-device Application That Browses for Images

If you’re writing a cross-platform, cross-device AIR application that allows users to pick an image from their local computer/device, there’s a relatively simple technique you can use to make sure the app does the right thing in all contexts (desktop, phones, and tablets). Specifically, I define "the right thing" as:

  • On phones and Android tablets, bring up the full-screen image picker.
  • On iPads, bring up the photo gallery panel and associate it with the button or other UI element that invoked it.
  • On Macs, open a file picker that defaults to ~/Pictures.
  • On Windows, open a file picker that defaults to c:/Users/<username>/My Pictures.

The function below can be used in any AIR application on any supported device, and it should always do the right thing:

private function onOpenPhotoPicker(e:MouseEvent):void
{
    // We're on mobile. Should work well for phones and tablets.
    if (CameraRoll.supportsBrowseForImage)
    {
        var crOpts:CameraRollBrowseOptions = new CameraRollBrowseOptions();
        crOpts.height = this.stage.stageHeight / 3;
        crOpts.width = this.stage.stageWidth / 3;
        // On iPads, the origin rectangle anchors the photo gallery panel;
        // anchoring it at the click location is an illustrative choice.
        crOpts.origin = new Rectangle(e.stageX, e.stageY, 0, 0);
        var cr:CameraRoll = new CameraRoll();
        cr.browseForImage(crOpts);
    }
    else // We're on the desktop
    {
        var picDirectory:File;
        if (File.userDirectory.resolvePath("Pictures").exists)
            picDirectory = File.userDirectory.resolvePath("Pictures");
        else if (File.userDirectory.resolvePath("My Pictures").exists)
            picDirectory = File.userDirectory.resolvePath("My Pictures");
        else
            picDirectory = File.userDirectory;
        picDirectory.browseForOpen("Choose a picture");
    }
}

For more information on CameraRoll changes in AIR 3, see How to Correctly Use the CameraRoll API on iPads.

Native Text Input with StageText

One of the major new features of AIR 3 (release candidate available on Adobe Labs) is the introduction of the StageText API. StageText allows developers to place native text inputs in their mobile AIR applications rather than using the standard flash.text.TextField API. StageText is essentially a wrapper around native text fields, so the resulting input is identical to the text input fields used in native applications. (Note that StageText is only for the AIR mobile profile; the StageText APIs are simulated on the desktop, but they do not result in native text inputs on Mac or Windows.)

Advantages of Using StageText

There are several advantages to using StageText over flash.text.TextField in mobile applications:

  • Both iOS and Android have sophisticated auto-correct functionality that StageText allows AIR applications to take advantage of.
  • The autoCapitalize property of StageText allows you to configure how auto-capitalization is applied to your text field. Options are specified as properties of the AutoCapitalize class and include ALL, NONE, SENTENCE, and WORD.
  • StageText allows you to control the type of virtual keyboard that is displayed when the text field gets focus. For example, if your text field is requesting a phone number, you can specify that you want a numeric keypad, or if you’re asking for an email address, you can request a soft keyboard with an "@" symbol. Options are specified as properties of the SoftKeyboardType class, and include CONTACT, DEFAULT, EMAIL, NUMBER, PUNCTUATION, and URL.
  • In addition to controlling the type of virtual keyboard, you can also configure the label used for the return key. Options are specified as properties of the ReturnKeyLabel class, and include DEFAULT, DONE, GO, NEXT, and SEARCH.
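A minimal sketch of configuring a StageText instance with these properties (the coordinates are arbitrary):

```actionscript
import flash.geom.Rectangle;
import flash.text.AutoCapitalize;
import flash.text.ReturnKeyLabel;
import flash.text.SoftKeyboardType;
import flash.text.StageText;

var st:StageText = new StageText();
st.softKeyboardType = SoftKeyboardType.EMAIL; // show an email-friendly keyboard
st.returnKeyLabel = ReturnKeyLabel.DONE;      // label the return key "Done"
st.autoCapitalize = AutoCapitalize.NONE;      // don't auto-capitalize addresses
st.stage = this.stage;                        // attach it to the stage...
st.viewPort = new Rectangle(20, 20, 280, 40); // ...and position and size it
```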

The Challenges of Using StageText

On mobile devices, StageText provides much more powerful text input capabilities than flash.text.TextField, however there are some challenges to adding native visual elements to AIR applications.

StageText is Not on the Display List

StageText is not the first time we’ve exposed native functionality directly in AIR. The StageVideo class (mobile and Adobe AIR for TV) allows video to be rendered in hardware at the OS level, and StageWebView (mobile only) allows HTML to be rendered with the device’s native HTML engine. All three of these features have similar benefits (native performance and functionality), but they also share the same unusual characteristic: although they are visual elements, they are not part of the Flash display list. In other words, like StageVideo and StageWebView, StageText does not inherit from DisplayObject, and therefore cannot be added to the display list by calling the addChild() function. Instead, StageText is positioned using its viewPort property which accepts a Rectangle object specifying the absolute (that is, relative to the stage) coordinates, and the size of the native text field to draw. The text field is then rendered on top of all other Flash content; even if Flash content is added to the display list after a StageText instance, the StageText instance will always appear to have the highest z-order, and will therefore always be rendered on top.

In order to help address this behavior, StageText has a function called drawViewPortToBitmapData. The drawViewPortToBitmapData function provides similar functionality to the IBitmapDrawable interface when used in conjunction with the BitmapData.draw function. Specifically, you can pass a BitmapData instance into the drawViewPortToBitmapData function to create a bitmap identical to the StageText instance. You can then toggle the visible property of your StageText instance, add the bitmap copy to the display list at the same coordinates as the viewPort of your StageText, and then you can place other Flash content directly on top without your StageText instance showing through. It seems like a lot of work, but it’s actually very easy to implement (especially when nicely encapsulated), and provides a good work-around for the fact that native visual elements do not render through the Flash pipeline. (Note that StageWebView also has the drawViewPortToBitmapData function which allows for the same work-around.)
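Sketched out as an illustrative helper (not a complete solution), the swap might look like this:

```actionscript
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.display.Sprite;
import flash.geom.Rectangle;
import flash.text.StageText;

// Replace a StageText with a bitmap copy so other Flash content can
// render on top of it. Returns the bitmap so it can be removed later.
function freeze(st:StageText, container:Sprite):Bitmap
{
    var vp:Rectangle = st.viewPort;
    var bmd:BitmapData = new BitmapData(vp.width, vp.height);
    st.drawViewPortToBitmapData(bmd); // copy the native field's pixels
    var bmp:Bitmap = new Bitmap(bmd);
    bmp.x = vp.x;
    bmp.y = vp.y;
    st.visible = false;               // hide the native field...
    container.addChild(bmp);          // ...and show the copy in its place
    return bmp;
}
```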

StageText Does Not Have Borders

Something else that sets StageText apart from flash.text.TextField is the fact that it is not rendered with a border. There were two primary reasons for this decision:

  1. Developers will frequently want to customize their StageText borders.
  2. Native text inputs in native applications are sometimes rendered without any borders, especially on iOS. (For an example, compose a new email in the Mail application and note how the "To" and "Subject" fields have dividers between them, but no borders around them.)

Unfortunately, drawing borders around native text fields isn’t as easy as it sounds. In order to perfectly vertically center text between horizontal borders, and to constrain the text perfectly within vertical borders, developers need access to sophisticated font metrics APIs which are available for the two different text engines in Flash (flash.text and flash.text.engine), but they are not available for StageText. Drawing a perfectly sized and positioned border around a StageText instance so that it will work with different fonts and font properties, and work consistently across platforms, is somewhat challenging.


Now that I’ve pointed out the challenges of working with StageText, it’s only fair that I provide some solutions.

For Flex Developers

Flex will address the two challenges of working with StageText described above (the fact that it’s not part of the display list, and the drawing of borders) by encapsulating StageText inside of spark.components.TextInput. The TextInput component will worry about drawing a border around the StageText instance, and as long as you use the Flex APIs for manipulating the display list (as opposed to manipulating the display list at the Flash API level), Flex will also manage the viewPort property for you. In other words, Flex will give you all the advantages of StageText without you having to manage it yourself.

For ActionScript Developers

Developers who are writing in ActionScript directly rather than using Flex have a few options:

  1. Use the StageText APIs directly. This is probably the least practical approach since you will likely find that trying to manage the display list, and trying to manage visual elements not on the display list in a completely different way, will be complex and error-prone.
  2. Encapsulate StageText. You can always do what the Flex team did, and encapsulate StageText to make it easier to work with. This is a fair bit of work, but in the end, it will make it much easier to manage, and will probably prove to be a good time investment.
  3. Use the NativeText class. While validating the encapsulation of StageText, I wrote a pretty comprehensive wrapper which makes working with StageText much easier (details below).

Introducing NativeText

NativeText is a wrapper around StageText which makes adding native text input functionality to your ActionScript application much easier. NativeText has the following advantages:

  • It makes StageText feel like it’s part of the display list. You can create an instance of NativeText, give it x and y coordinates, and add it to the display list just like you would any other display object. All the management of the viewPort is completely encapsulated.
  • It’s easy to add borders. NativeText does the work of drawing borders such that the StageText instance is vertically centered and horizontally bound. You can even change the border’s thickness, color, and corner radius.
  • It’s easy to add content on top because it manages the bitmap swapping for you. NativeText has a function called freeze which does all the work of drawing the StageText to a bitmap (including the border), hiding the StageText instance, and positioning the bitmap exactly where the StageText was. Therefore, if you want to place content on top of a NativeText instance (for instance, an alert box), just call the freeze function first, and everything will work perfectly. When you’re done, call unfreeze, and the bitmap is replaced with the StageText instance and disposed of.
  • Support for multi-line text inputs. NativeText supports text fields of one or more lines. Just pass the number of lines of text you want into the constructor (this is the one property that cannot be changed dynamically), and both the border and viewPort will be properly managed for you.
  • Cross platform. The trickiest part of writing NativeText was making sure it rendered properly on all our reference platforms (Mac, Windows, iOS, and Android) which use different fonts, have different sized cursors, etc. NativeText lets you use any font, font size, and border thickness you want, and your text input should render properly and consistently on all supported platforms.

NativeText is included in a project called StageTextExample. It is open-source, and the code is available on GitHub (or you can download the ZIP file for importing into Flash Builder). This is something I worked on independently, and is not an officially supported Adobe solution, so although I’ve found that it works pretty well, it’s certainly possible that you could discover bugs or use cases it does not take into account. If that’s the case, let me know and/or feel free to hack away at the code yourself.

Socket Improvements in AIR 3

In AIR 3 (currently in beta, available on Adobe Labs), we added a frequently requested feature to the Socket class: an output progress event. The Socket class has always dispatched a ProgressEvent which is designed to let you know when data is ready to read from the socket, however there was no event indicating how much data had been written from the socket’s write buffer to the network. In most cases, at any given moment, it doesn’t really matter how much data has been passed to the network and how much is left in the write buffer since all of the data eventually gets written before the socket is closed (which usually happens very quickly), however that’s not always the case. For example, if your application is writing a large amount of data and the user decides to exit, you might want to check to see if there is still data in the write buffer which hasn’t been transferred to the network yet. Or you might want to know when data has finished being transferred from the write buffer to the network so you can open a new socket connection, or perhaps de-reference your socket instance in order to make it eligible for garbage collection. Or you might just want to show an upload progress bar indicating how much data has been written to the network, and how much data is still pending.

All of these scenarios are now possible in AIR 3 with the OutputProgressEvent. An OutputProgressEvent is dispatched whenever data is written from the write buffer to the network. In the event handler, developers can check to see how much data is still left in the buffer waiting to be written by checking the bytesPending property. Once the bytesPending property returns 0, you know all the data has been transferred from the write buffer to the network, and it is consequently safe to do things like remove event handlers, null out your socket reference, shut down your application, start the next upload in a queue, etc.

The code below (also available on Github, and as an FXP file) is a simple example of safeguarding your application from being closed while data is still in the write buffer. The application opens a socket connection to the specified server, writes the data from the input textarea, and then writes the response in the output textarea. It’s coded in such a way that if the user tries to close the application before all the data has been written from the write buffer to the network, it will stop the application from closing, then automatically close it later once it can verify that all the data has successfully been written. (Note that this isn’t a very realistic example since the data being written to the socket is just text, and will probably be limited enough that it will all be written from the socket to the network layer in a single operation, however you can easily imagine a scenario where megabytes of data are being written which could take several seconds or even minutes, depending on the quality of the client’s network connection.)

<?xml version="1.0" encoding="utf-8"?>
<s:WindowedApplication xmlns:fx="" xmlns:s="library://" xmlns:mx="library://" showStatusBar="false" creationComplete="onCreationComplete();">

  <fx:Script>
    <![CDATA[
      import flash.utils.ByteArray;

      private var socket:Socket;
      private var readBuffer:ByteArray;
      private var socketOperationInProgress:Boolean;
      private var closeLater:Boolean;

      private function onCreationComplete():void
      {
        this.nativeWindow.addEventListener(Event.CLOSING, onClosing);
      }

      private function onClosing(e:Event):void
      {
        if (this.socketOperationInProgress)
        {
          // Data is still in the write buffer, so veto the close and
          // finish it later from the OutputProgressEvent handler.
          e.preventDefault();
          this.closeLater = true;
        }
      }

      private function sendData():void
      {
        this.socketOperationInProgress = true;
        this.readBuffer = new ByteArray();
        this.socket = new Socket();
        this.socket.addEventListener(Event.CONNECT, onConnect);
        this.socket.addEventListener(ProgressEvent.SOCKET_DATA, onSocketData);
        this.socket.addEventListener(OutputProgressEvent.OUTPUT_PROGRESS, onOutputProgress);
        this.socket.connect(this.server.text, Number(this.port.text));
      }

      private function onConnect(e:Event):void
      {
        // Write the contents of the input textarea and flush it to the network:
        this.socket.writeUTFBytes(this.input.text);
        this.socket.flush();
      }

      private function onSocketData(e:ProgressEvent):void
      {
        this.readBuffer.clear();
        this.socket.readBytes(this.readBuffer, 0, this.socket.bytesAvailable);
        this.output.text += this.readBuffer.toString();
      }

      private function onOutputProgress(e:OutputProgressEvent):void
      {
        if (e.bytesPending == 0)
        {
          // The write buffer is empty, so it's now safe to close.
          this.socketOperationInProgress = false;
          if (this.closeLater)
          {
            this.close();
          }
        }
      }
    ]]>
  </fx:Script>

  <s:VGroup width="100%" height="100%" verticalAlign="middle" horizontalAlign="center" paddingBottom="10" paddingTop="10" paddingLeft="10" paddingRight="10">
    <s:HGroup width="100%">
      <s:TextInput id="server" prompt="Host Address" width="80%"/>
      <s:TextInput id="port" prompt="Port" width="20%"/>
    </s:HGroup>
    <s:TextArea id="input" prompt="Input" width="100%" height="50%"/>
    <s:Button width="100%" label="Open Socket and Send Data" click="sendData();"/>
    <s:TextArea id="output" prompt="Output" width="100%" height="50%"/>
  </s:VGroup>
</s:WindowedApplication>

Keep in mind that AIR 3 is still in beta, so you might find bugs. If you do, here’s how to file them.

Native JSON Support in AIR 3

One of the many new features in AIR 3 (in beta, available on Adobe Labs) is the new native JSON parser. It has always been possible to parse JSON with ActionScript, but AIR 3 provides native JSON support which is faster than ActionScript implementations, and more efficient in terms of memory usage.

The two main things the JSON class can do are:

  1. Parse JSON strings.
  2. Turn ActionScript objects into JSON ("stringify").
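Both are static methods on the top-level JSON class. For example:

```actionscript
// 1. Parse a JSON string into an ActionScript object:
var post:Object = JSON.parse('{"title": "AIR 3", "comments": 12}');
trace(post.title); // AIR 3

// 2. Turn an ActionScript object into a JSON string ("stringify"):
var json:String = JSON.stringify({title: "AIR 3", comments: 12});
```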

To learn more about JSON support in AIR 3, check out this short sample news reader (you can also download the FXP file) which uses JSON rather than RSS. The code is so simple, I’ll include the entire thing below:

<?xml version="1.0" encoding="utf-8"?>
<s:WindowedApplication xmlns:fx="" xmlns:s="library://" xmlns:mx="library://" applicationComplete="onApplicationComplete();">

  <fx:Script>
    <![CDATA[
      import flash.filesystem.File;
      import flash.filesystem.FileMode;
      import flash.filesystem.FileStream;
      import flash.globalization.DateTimeFormatter;
      import flash.globalization.DateTimeStyle;
      import flash.globalization.LocaleID;

      private var df:DateTimeFormatter;

      private function onApplicationComplete():void
      {
        // Feed data is stored as a JSON file...
        var f:File = File.applicationDirectory.resolvePath("feeds.json");
        var fs:FileStream = new FileStream();, FileMode.READ);
        var feedJson:String = fs.readUTFBytes(fs.bytesAvailable);
        fs.close();
        var feeds:Object = JSON.parse(feedJson);
        this.feedTree.dataProvider = feeds;
        this.postHtmlContainer.htmlLoader.navigateInSystemBrowser = true;
        df = new DateTimeFormatter(LocaleID.DEFAULT, DateTimeStyle.MEDIUM, DateTimeStyle.NONE);
      }

      private function onTreeClick(e:MouseEvent):void
      {
        if (!this.feedTree.selectedItem || ! return;
        // Request the selected feed. (This wiring is reconstructed; it assumes
        // each leaf node's url property points at a JSON feed endpoint.)
        this.feedService.url =;
        this.feedService.send();
      }

      private function onFeedLoaded(e:ResultEvent):void
      {
        var result:String = e.result.toString();
        var feedData:Object = JSON.parse(result);
        var s:String = '<html><body>';
        s += '<h1>Posts for ' + feedData.responseData.feed.title + '</h1>';
        for each (var post:Object in feedData.responseData.feed.entries)
        {
          s += '<p class="postTitle"><a href="' + + '">' + post.title + '&nbsp;&nbsp;&#0187;</a></p>';
          s += '<p>' + df.format(new Date(post.publishedDate)) + '</p>';
          s += '<p>' + post.content + '</p><hr/>';
        }
        s += '</body></html>';
        this.postHtmlContainer.htmlText = s;
      }
    ]]>
  </fx:Script>

  <fx:Declarations>
    <s:HTTPService id="feedService" result="onFeedLoaded(event)" resultFormat="text" contentType="application/x-www-form-urlencoded" method="GET" showBusyCursor="true"/>
  </fx:Declarations>

  <mx:HDividedBox width="100%" height="100%">
    <mx:Tree id="feedTree" width="200" height="100%" click="onTreeClick(event);"/>
    <mx:HTML width="100%" height="100%" id="postHtmlContainer"/>
  </mx:HDividedBox>
</s:WindowedApplication>

Keep in mind that AIR 3 is still in beta, so you might find bugs. If you do, here’s how to file them.