Posts in Category "ActionScript"

Secure Data Persistence with AIR 3

AIR 3 offers two new capabilities that can make data persistence much more secure than what was possible with previous versions of AIR. If your AIR application persists information like credentials or sensitive relational data that must remain secure, this article is for you.

flash.crypto.generateRandomBytes()

The new flash.crypto.generateRandomBytes function provides a way to generate far more random (that is, far less predictable) data than was previously possible. As you probably already know, functions like Math.random are known as pseudorandom number generators because they rely on relatively small sets of initial data (like timestamps). Bytes generated in such a manner are insufficient for use as encryption keys: if the random number generation algorithm is known (and you should always assume that it is), and the mechanism for seeding it is known and constrained, it isn’t hard for an attacker to guess the encryption key and gain access to your data.

The new generateRandomBytes function uses a much more random (and therefore, much more secure) method for generating random bytes. Rather than implementing our own algorithm, the runtime uses the underlying operating system’s random byte generation function. The table below shows which function is used on each platform:

Operating System    Function
Windows             CryptGenRandom()
Mac OS X            /dev/random
Linux               /dev/random
Android             /dev/urandom
iOS                 SecRandomCopyBytes()

The natural question at this point is what makes the operating systems’ random byte generation implementations so secure — or, put another way, how is the OS able to generate random data which is more cryptographically secure than Math.random? The answer is something called an entropy pool. An entropy pool is essentially a collection of randomness. When a process needs random bytes, they can be taken from the entropy pool through one of the functions listed above, and when a process believes it has random data, it can choose to add those bytes to the entropy pool. That random data can come from several sources:

  • Hardware jitter (deviations caused by things like electromagnetic interference or thermal noise).
  • Mouse movements and/or the time between keystrokes.
  • Unpredictable hardware events like disk head seek times, or network traffic.
  • A hardware random number generator.

(Now is a good time to state that it’s probably more correct to say that an entropy pool contains unguessable data as opposed to truly random data, but in the world of encryption, the two amount to essentially the same thing.)

So now that you have access to unguessable bytes, what can you do with them to make your data more secure? The most obvious use for the new generateRandomBytes function is probably the creation of encryption keys. The database implementation embedded inside of the AIR runtime supports encrypted database files, and the quality of the encryption is largely based on the quality of your encryption key. The generateRandomBytes function gives you a cryptographically secure way to generate an encryption key, and the EncryptedLocalStore API gives you a secure method for storing it (more on this below). Other uses of the generateRandomBytes function include generating session identifiers, GUIDs, and salt to be used in hashes (salt is additional data added to the data you want to hash, which makes looking the hash up in a rainbow table much less feasible).
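
For example, generating a key for an encrypted database takes a single call. Here is a minimal sketch (the variable name is mine; the 16-byte length matches the 128-bit key expected by AIR’s encrypted database support):

import flash.crypto.generateRandomBytes;
import flash.utils.ByteArray;

// Generate 16 cryptographically secure random bytes (a 128-bit key).
var encryptionKey:ByteArray = generateRandomBytes(16);

// The key can then be passed to SQLConnection.open() or openAsync() via the
// encryptionKey parameter, and persisted with the EncryptedLocalStore API
// (covered below).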

Mobile Encrypted Local Store

Update (11/3/2011): If you’re having trouble getting ELS to work in the iOS emulator or in “fast packaging” mode, add the following to your compiler arguments: -swf-version=13

A second feature in AIR 3 which can make data persistence more secure is the introduction of the mobile EncryptedLocalStore (ELS) API. ELS has been available for the desktop profile since 1.0, but AIR 3 is the first version which makes it available on mobile.

ELS provides a secure storage mechanism implemented by the underlying operating system. On Windows, ELS uses DPAPI, and on OS X, ELS uses Keychain services.

The new mobile implementations of ELS require some additional explanation. On iOS, ELS uses Keychain services just as it does on OS X; however, Android does not provide an encrypted storage service for the runtime to leverage. Data is therefore "sandboxed" rather than encrypted. In other words, it is stored in a secure, isolated location (inaccessible to other processes), but since the data isn’t actually encrypted, if an attacker has access to the physical device, or if a user has rooted his or her device, it is possible for the data to be compromised. According to the internal ELS specification:

On Android, ELS can guarantee that on a non-rooted device in the hands of an authorized user, ELS data of an AIR application will not be accessible/modifiable by any other application (AIR or native) running on the device.

It’s possible that Android will provide a more secure storage service in the future, in which case we can simply swap out our implementation. For the time being, though, it’s worth noting that the Android implementation lacks the additional step of encryption on top of isolation.

So what kinds of data should mobile developers store with ELS? The same data that you would store in desktop applications. Namely:

  • Encryption keys (the sequence of random bytes obtained through the generateRandomBytes function). A brief usage sketch follows this list.
  • Account credentials. If you want to store users’ credentials so they don’t have to enter them every time they use your app, ELS is the correct API to use.
  • Salts for hashes. Once you’ve salted and hashed something like a password, you need to be able to securely persist the salt for future comparisons.
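
Here is a minimal sketch of persisting and retrieving an encryption key with ELS (the item name "dbKey" is arbitrary, and error handling is omitted):

import flash.crypto.generateRandomBytes;
import flash.data.EncryptedLocalStore;
import flash.utils.ByteArray;

// Look up the key; if this is the first run, generate and persist one.
var key:ByteArray = EncryptedLocalStore.getItem("dbKey");
if (key == null)
{
    key = generateRandomBytes(16);
    EncryptedLocalStore.setItem("dbKey", key);
}
// "key" now holds either the previously stored key or a brand new one,
// and can be used to open the encrypted database.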

Summary

If you just need to generate random numbers for something like a game, you should stick to using Math.random. In fact, on some platforms, the generateRandomBytes function could temporarily block if the entropy pool has been entirely drained, which means you don’t want to pull large amounts of data from it in an endless loop. Rather, generateRandomBytes should be reserved for generating unguessable, cryptographically secure random data for things like encryption and session keys, GUIDs, and salts for hashes.

Whenever you need to store sensitive information like credentials, encryption keys, or salts, you should always use the EncryptedLocalStore APIs. Since they are now supported everywhere, you can use them in a completely cross-platform manner.

Whether you’re updating an existing application that needs to persist data securely, or building an entirely new one, I strongly recommend that you use the new best practices made possible in AIR 3.

Parsing Exif Data From Images on Mobile Devices


If you’re interested in parsing Exif data from images on devices using AIR, you’re in the right place. But first, a little background to put this example in context.

Someone asked me a really good question at MAX about getting metadata about images on mobile devices. I don’t recommend using the file property of MediaPromise to get a reference to images because it doesn’t work across platforms (it works fine on Android, but not on iOS — for more information, see How to Use CameraUI in a Cross-platform Way), which means you don’t have access to certain properties on File like creationDate, size, etc. The best way to get information about images on mobile devices, therefore, is to parse out the image’s Exif data.

Exif stands for "Exchangeable image file format," and it provides a way for digital cameras and applications to encode information about images directly in the images themselves. In other words, Exif is to images what ID3 is to MP3 files. Since the data is image-specific, it is typically much richer than information associated with a file reference. Of course, it’s also a little bit harder to get to, so I figured I would write a sample application to demonstrate one way of doing it.

ExifExample (Github project, Flash Builder project) is a simple application which allows the user to either take a photo, or browse to a photo already on the device, and then uses the ExifInfo library to parse the Exif data out of the image. The code is pretty simple, but there are a few things to pay special attention to:

  1. I don’t use the file property of MediaPromise to access the image. As I mentioned above, that would work on Android, but it doesn’t work on iOS, and it’s not guaranteed to work on other platforms in the future. Instead, I use a MediaPromise to either synchronously or asynchronously get at the bytes of the image (see the sketch after this list).
  2. I only read the first 64K of the image before parsing out the Exif data. Exif data has to be within the first 64K of an image, which makes dealing with very large files much easier. If you’re downloading them, it saves time and bandwidth, and in the context of a mobile application, it saves memory. Now that the resolution of cameras on mobile devices is getting so high, you might not want to read entire images into RAM. (When mobile applications run out of memory, they tend to be shut down with little or no warning, so keeping your memory footprint as low as possible is always a good idea.) This is especially important if you want to display the thumbnails of multiple photos in a grid or list.
  3. Although the application supports both taking a picture using CameraUI and selecting a picture using CameraRoll, all the code that deals with the MediaPromise (both synchronous and asynchronous) is shared. If you have an application that supports both paths, I would recommend using this type of architecture in order to simplify your codebase.
  4. Even though this application is designed to run on mobile devices, it also supports selecting an image from a hard drive using the File.browseForOpen() function. If there’s an opportunity to add desktop functionality into a mobile application, I will usually do it since the more I can use and debug the application on the desktop before running it on a mobile device, the faster and smoother the development process goes. In order to support selecting an image from the desktop, I only had to add an additional 20 lines of code, and it definitely saved me much more time in testing and debugging than it cost me.

I should point out that this sample is not intended to highlight any particular Exif parsing library. I used the ExifInfo library because of its favorable license (MIT), but there are other libraries out there, as well (none of which I tried, so I can’t vouch for them). I should also point out that I had to modify the library according to Joe Ward’s instructions here in order to get it to work properly on iOS.

Providing Hints to the Garbage Collector in AIR 3

AIR 3 has a new API to allow developers to provide the runtime’s garbage collector with hints. The API is called System.pauseForGCIfCollectionImminent(), and it requires a little explanation to understand both its utility and value.

Before I get into how the function works, let’s start with why you’d want to give the garbage collector pointers (no pun intended) to help it decide when to reclaim memory. Running the garbage collector can cause the runtime to pause, and while the pause is typically imperceptible, if you’re doing something like playing a fast-paced game, you might notice it. In this particular scenario, a better time to garbage collect might be between game levels. Or, if you’re building something like an e-reader, it would be best to reclaim memory while the user is reading as opposed to during your super slick and smooth page-flip animation.

The System.gc() function can force the runtime to garbage collect; however, it doesn’t always have 100% predictable results across all platforms, and forcing garbage collection on a regular basis is usually overkill. System.pauseForGCIfCollectionImminent(), on the other hand, does not force garbage collection, but rather serves as a mechanism to tell the runtime that a particular place in your application would be a good time to reclaim memory, and to specify how likely it is that the runtime will take that advice. The function takes a single parameter called imminence which is a Number indicating the strength of your recommendation. From the spec:

The GC will synchronously finish the current collection cycle (complete the marking and perform finalization) if the current allocation budget has been exhausted by a fraction greater than the fraction indicated by the parameter imminence.

If it’s still not entirely clear, here are some examples:

  • Imminence 0.25: if more than 25% of the allocation budget has been used, finish the current collection cycle (collection is more likely).
  • Imminence 0.75: if more than 75% of the allocation budget has been used, finish the current collection cycle (collection is less likely).

In the former case, you’re essentially saying, "I really want you to garbage collect right now because a lot of animation is about to happen which I want to be as smooth as possible," and the latter case is essentially saying, "If you were going to garbage collect pretty soon anyway, now would be a pretty good time."
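
In code, the hint is a single call. A minimal sketch (the 0.25 and 0.75 values correspond to the two examples above):

import flash.system.System;

// Between game levels: strongly suggest collecting now, before the next
// burst of animation starts.
System.pauseForGCIfCollectionImminent(0.25);

// At a less critical moment: only finish a collection that was imminent anyway.
System.pauseForGCIfCollectionImminent(0.75);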

If you’re currently using System.gc() to force garbage collection in your AIR applications, I highly recommend you take a look at this new API.

Accessing Compass Data With AIR 3

I just finished writing a simple compass application for AIR 3 that uses an ANE (AIR Native Extension) to get orientation data from the Android operating system. All the code (AIR application, Java native extension, and "glue" library code) is available on Github if you want to see how it works.

New Audio Capabilities in AIR 3

AIR 3 has two important new audio capabilities:

  1. Device speaker selection. Play audio through a phone’s speaker, or through its earpiece.
  2. Background audio playback on iOS. Keep audio playing in the background when a user switches away from your application, and even when the screen is switched off.

Device Speaker Selection

Toggling between a phone’s speaker and earpiece for playing audio is as easy as setting the audioPlaybackMode property on SoundMixer like this:

// Play audio through the speaker:
SoundMixer.audioPlaybackMode = AudioPlaybackMode.MEDIA;

// Play audio through the earpiece:
SoundMixer.audioPlaybackMode = AudioPlaybackMode.VOICE;

Background Audio

Background audio playback has always worked on Android because of the nature of Android’s multi-tasking implementation. Starting with AIR 3, it works on iOS, as well. To allow audio to continue playing when your application is switched to the background — and even when the device’s screen is turned off — add the following to your application descriptor:

<iPhone>
    <InfoAdditions>
        <![CDATA[
            <key>UIBackgroundModes</key>
            <array>
                <string>audio</string>
            </array>
        ]]>
    </InfoAdditions>
</iPhone>

I wrote a sample application that demonstrates both these concepts called DeviceSpeakerExample. The code is available on github, and/or you can download the Flash Builder project.

Writing a Cross-platform and Cross-device Application That Browses for Images

If you’re writing a cross-platform, cross-device AIR application that allows users to pick an image from their local computer/device, there’s a relatively simple technique you can use to make sure the app does the right thing in all contexts (desktop, phones, and tablets). Specifically, I define "the right thing" as:

  • On phones and Android tablets, bring up the full-screen image picker.
  • On iPads, bring up the photo gallery panel and associate it with the button or other UI element that invoked it.
  • On Macs, open a file picker that defaults to ~/Pictures.
  • On Windows, open a file picker that defaults to c:/Users/<username>/My Pictures.

The function below can be used in any AIR application on any supported device, and it should always do the right thing:

private function onOpenPhotoPicker(e:MouseEvent):void
{
    // We're on mobile. Should work well for phones and tablets.
    if (CameraRoll.supportsBrowseForImage)
    {
        var crOpts:CameraRollBrowseOptions = new CameraRollBrowseOptions();
        crOpts.height = this.stage.stageHeight / 3;
        crOpts.width = this.stage.stageWidth / 3;
        crOpts.origin = new Rectangle(e.target.x, e.target.y, e.target.width, e.target.height);
        var cr:CameraRoll = new CameraRoll();
        cr.browseForImage(crOpts);
    }
    else // We're on the desktop
    {
        var picDirectory:File;
        if (File.userDirectory.resolvePath("Pictures").exists)
        {
            picDirectory = File.userDirectory.resolvePath("Pictures");
        }
        else if (File.userDirectory.resolvePath("My Pictures").exists)
        {
            picDirectory = File.userDirectory.resolvePath("My Pictures");
        }
        else
        {
            picDirectory = File.userDirectory;
        }
        picDirectory.browseForOpen("Choose a picture");
    }
}

For more information on CameraRoll changes in AIR 3, see How to Correctly Use the CameraRoll API on iPads.

Native Text Input with StageText

One of the major new features of AIR 3 (release candidate available on Adobe Labs) is the introduction of the StageText API. StageText allows developers to place native text inputs in their mobile AIR applications rather than using the standard flash.text.TextField API. StageText is essentially a wrapper around native text fields, so the resulting input is identical to the text input fields used in native applications. (Note that StageText is only for the AIR mobile profile; the StageText APIs are simulated on the desktop, but they do not result in native text inputs on Mac or Windows.)

Advantages of Using StageText

There are several advantages to using StageText over flash.text.TextField in mobile applications:

  • Both iOS and Android have sophisticated auto-correct functionality that StageText allows AIR applications to take advantage of.
  • The autoCapitalize property of StageText allows you to configure how auto-capitalization is applied to your text field. Options are specified as properties of the AutoCapitalize class and include ALL, NONE, SENTENCE, and WORD.
  • StageText allows you to control the type of virtual keyboard that is displayed when the text field gets focus. For example, if your text field is requesting a phone number, you can specify that you want a numeric keypad, or if you’re asking for an email address, you can request a soft keyboard with an "@" symbol. Options are specified as properties of the SoftKeyboardType class, and include CONTACT, DEFAULT, EMAIL, NUMBER, PUNCTUATION, and URL.
  • In addition to controlling the type of virtual keyboard, you can also configure the label used for the return key. Options are specified as properties of the ReturnKeyLabel class, and include DEFAULT, DONE, GO, NEXT, and SEARCH. (A short configuration sketch follows this list.)
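
Here is a minimal sketch of configuring these options on a StageText instance meant to collect an email address (the coordinates are arbitrary, and the code is assumed to run in a display object that is already on the stage; positioning via the viewPort property is covered below):

import flash.geom.Rectangle;
import flash.text.AutoCapitalize;
import flash.text.ReturnKeyLabel;
import flash.text.SoftKeyboardType;
import flash.text.StageText;

var emailInput:StageText = new StageText();
emailInput.softKeyboardType = SoftKeyboardType.EMAIL; // keyboard with an "@" key
emailInput.returnKeyLabel = ReturnKeyLabel.GO;
emailInput.autoCapitalize = AutoCapitalize.NONE;      // don't capitalize email addresses
emailInput.stage = this.stage;                        // attach to the stage...
emailInput.viewPort = new Rectangle(20, 20, 280, 40); // ...and position in stage coordinates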

The Challenges of Using StageText

On mobile devices, StageText provides much more powerful text input capabilities than flash.text.TextField; however, there are some challenges to adding native visual elements to AIR applications.

StageText is Not on the Display List

StageText is not the first time we’ve exposed native functionality directly in AIR. The StageVideo class (mobile and Adobe AIR for TV) allows video to be rendered in hardware at the OS level, and StageWebView (mobile only) allows HTML to be rendered with the device’s native HTML engine. All three of these features have similar benefits (native performance and functionality), but they also share the same unusual characteristic: although they are visual elements, they are not part of the Flash display list. In other words, like StageVideo and StageWebView, StageText does not inherit from DisplayObject, and therefore cannot be added to the display list by calling the addChild() function. Instead, StageText is positioned using its viewPort property which accepts a Rectangle object specifying the absolute (that is, relative to the stage) coordinates, and the size of the native text field to draw. The text field is then rendered on top of all other Flash content; even if Flash content is added to the display list after a StageText instance, the StageText instance will always appear to have the highest z-order, and will therefore always be rendered on top.

In order to help address this behavior, StageText has a function called drawViewPortToBitmapData. The drawViewPortToBitmapData function provides similar functionality to the IBitmapDrawable interface when used in conjunction with the BitmapData.draw function. Specifically, you can pass a BitmapData instance into the drawViewPortToBitmapData function to create a bitmap identical to the StageText instance. You can then toggle the visible property of your StageText instance, add the bitmap copy to the display list at the same coordinates as the viewPort of your StageText, and then you can place other Flash content directly on top without your StageText instance showing through. It seems like a lot of work, but it’s actually very easy to implement (especially when nicely encapsulated), and provides a good work-around for the fact that native visual elements do not render through the Flash pipeline. (Note that StageWebView also has the drawViewPortToBitmapData function which allows for the same work-around.)
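
A minimal sketch of that swap, assuming st is a StageText instance with a non-empty viewPort and the code runs inside a sprite on the display list:

import flash.display.Bitmap;
import flash.display.BitmapData;

// Capture the StageText's current appearance into a bitmap of the same size
// as its viewPort.
var snapshot:BitmapData = new BitmapData(st.viewPort.width, st.viewPort.height);
st.drawViewPortToBitmapData(snapshot);

// Swap: show the bitmap copy where the native field was, and hide the native field.
var proxy:Bitmap = new Bitmap(snapshot);
proxy.x = st.viewPort.x;
proxy.y = st.viewPort.y;
addChild(proxy);
st.visible = false;

// Other Flash content can now be layered on top of "proxy" normally. To restore
// the native field: st.visible = true; removeChild(proxy); snapshot.dispose();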

StageText Does Not Have Borders

Something else that sets StageText apart from flash.text.TextField is the fact that it is not rendered with a border. There were two primary reasons for this decision:

  1. Developers will frequently want to customize their StageText borders.
  2. Native text inputs in native applications are sometimes rendered without any borders, especially on iOS. (For an example, compose a new email in the Mail application and note how the "To" and "Subject" fields have dividers between them, but no borders around them.)

Unfortunately, drawing borders around native text fields isn’t as easy as it sounds. In order to perfectly vertically center text between horizontal borders, and to constrain the text perfectly within vertical borders, developers need access to sophisticated font metrics APIs which are available for the two different text engines in Flash (flash.text and flash.text.engine), but they are not available for StageText. Drawing a perfectly sized and positioned border around a StageText instance so that it will work with different fonts and font properties, and work consistently across platforms, is somewhat challenging.

Solutions

Now that I’ve pointed out the challenges of working with StageText, it’s only fair that I provide some solutions.

For Flex Developers

Flex will address the two challenges of working with StageText described above (the fact that it’s not part of the display list, and the drawing of borders) by encapsulating StageText inside of spark.components.TextInput. The TextInput component will worry about drawing a border around the StageText instance, and as long as you use the Flex APIs for manipulating the display list (as opposed to manipulating the display list at the Flash API level), Flex will also manage the viewPort property for you. In other words, Flex will give you all the advantages of StageText without you having to manage it yourself.

For ActionScript Developers

Developers who are writing in ActionScript directly rather than using Flex have a few options:

  1. Use the StageText APIs directly. This is probably the least practical approach since you will likely find that trying to manage the display list, and trying to manage visual elements not on the display list in a completely different way, will be complex and error-prone.
  2. Encapsulate StageText. You can always do what the Flex team did, and encapsulate StageText to make it easier to work with. This is a fair bit of work, but in the end, it will make it much easier to manage, and will probably prove to be a good time investment.
  3. Use the NativeText class. While validating the encapsulation of StageText, I wrote a pretty comprehensive wrapper which makes working with StageText much easier (details below).

Introducing NativeText

NativeText is a wrapper around StageText which makes adding native text input functionality to your ActionScript application much easier. NativeText has the following advantages:

  • It makes StageText feel like it’s part of the display list. You can create an instance of NativeText, give it x and y coordinates, and add it to the display list just like you would any other display object. All the management of the viewPort is completely encapsulated.
  • It’s easy to add borders. NativeText does the work of drawing borders such that the StageText instance is vertically centered and horizontally bound. You can even change the border’s thickness, color, and corner radius.
  • It’s easy to add content on top because it manages the bitmap swapping for you. NativeText has a function called freeze which does all the work of drawing the StageText to a bitmap (including the border), hiding the StageText instance, and positioning the bitmap exactly where the StageText was. Therefore, if you want to place content on top of a NativeText instance (for instance, an alert box), just call the freeze function first, and everything will work perfectly. When you’re done, call unfreeze, and the bitmap is replaced with the StageText instance and disposed of.
  • Support for multi-line text inputs. NativeText supports text fields of one or more lines. Just pass the number of lines of text you want into the constructor (this is the one property that cannot be changed dynamically), and both the border and viewPort will be properly managed for you.
  • Cross platform. The trickiest part of writing NativeText was making sure it rendered properly on all our reference platforms (Mac, Windows, iOS, and Android) which use different fonts, have different sized cursors, etc. NativeText lets you use any font, font size, and border thickness you want, and your text input should render properly and consistently on all supported platforms.

NativeText is included in a project called StageTextExample. It is open-source, and the code is available on GitHub (or you can download the ZIP file for importing into Flash Builder). This is something I worked on independently, and is not an officially supported Adobe solution, so although I’ve found that it works pretty well, it’s certainly possible that you could discover bugs or use cases it does not take into account. If that’s the case, let me know and/or feel free to hack away at the code yourself.

Socket Improvements in AIR 3

In AIR 3 (currently in beta, available on Adobe Labs), we added a frequently requested feature to the Socket class: an output progress event. The Socket class has always dispatched a ProgressEvent, which is designed to let you know when data is ready to read from the socket; however, there was no event indicating how much data had been written from the socket’s write buffer to the network. In most cases, it doesn’t really matter how much data has been passed to the network and how much is left in the write buffer at any given moment, since all of the data eventually gets written before the socket is closed (which usually happens very quickly). But that’s not always the case. For example, if your application is writing a large amount of data and the user decides to exit, you might want to check to see if there is still data in the write buffer which hasn’t been transferred to the network yet. Or you might want to know when data has finished being transferred from the write buffer to the network so you can open a new socket connection, or perhaps de-reference your socket instance in order to make it eligible for garbage collection. Or you might just want to show an upload progress bar indicating how much data has been written to the network, and how much data is still pending.

All of these scenarios are now possible in AIR 3 with the OutputProgressEvent. An OutputProgressEvent is dispatched whenever data is written from the write buffer to the network. In the event handler, developers can check to see how much data is still left in the buffer waiting to be written by checking the bytesPending property. Once the bytesPending property returns 0, you know all the data has been transferred from the write buffer to the network, and it is consequently safe to do things like remove event handlers, null out your socket reference, shut down your application, start the next upload in a queue, etc.

The code below (also available on Github, and as an FXP file) is a simple example of safeguarding your application from being closed while data is still in the write buffer. The application opens a socket connection to the specified server, writes the data from the input textarea, and then writes the response in the output textarea. It’s coded in such a way that if the user tries to close the application before all the data has been written from the write buffer to the network, it will stop the application from closing, then automatically close it later once it can verify that all the data has successfully been written. (Note that this isn’t a very realistic example since the data being written to the socket is just text, and will probably be limited enough that it will all be written from the socket to the network layer in a single operation; however, you can easily imagine a scenario where megabytes of data are being written which could take several seconds or even minutes, depending on the quality of the client’s network connection.)

<?xml version="1.0" encoding="utf-8"?>
<s:WindowedApplication xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" showStatusBar="false" creationComplete="onCreationComplete();">

  <fx:Script>
    <![CDATA[

      private var socket:Socket;
      private var readBuffer:ByteArray;
      private var socketOperationInProgress:Boolean;
      private var closeLater:Boolean;

      private function onCreationComplete():void
      {
        this.nativeWindow.addEventListener(Event.CLOSING, onClosing);
      }
      
      private function onClosing(e:Event):void
      {
        if (this.socketOperationInProgress)
        {
          this.closeLater = true;
          e.preventDefault();
        }
      }

      private function sendData():void
      {
        this.socketOperationInProgress = true;
        this.readBuffer = new ByteArray();
        this.socket = new Socket();
        this.socket.addEventListener(Event.CONNECT, onConnect);
        this.socket.addEventListener(ProgressEvent.SOCKET_DATA, onSocketData);
        this.socket.addEventListener(OutputProgressEvent.OUTPUT_PROGRESS, onOutputProgress);
        this.socket.connect(this.server.text, Number(this.port.text));
      }
      
      private function onConnect(e:Event):void
      {
        this.socket.writeUTFBytes(this.input.text);
      }

      private function onSocketData(e:ProgressEvent):void
      {
        // Clear the buffer first so a smaller read doesn't leave stale bytes
        // from a previous, larger read in the output.
        this.readBuffer.clear();
        this.socket.readBytes(this.readBuffer, 0, this.socket.bytesAvailable);
        this.output.text += this.readBuffer.toString();
      }
      
      private function onOutputProgress(e:OutputProgressEvent):void
      {
        if (e.bytesPending == 0)
        {
          this.socketOperationInProgress = false;
          if (this.closeLater)
          {
            this.nativeApplication.exit();
          }
        }
      }
    ]]>
  </fx:Script>

  <s:VGroup width="100%" height="100%" verticalAlign="middle" horizontalAlign="center" paddingBottom="10" paddingTop="10" paddingLeft="10" paddingRight="10">
    <s:HGroup width="100%">
      <s:TextInput id="server" prompt="Host Address" width="80%"/>
      <s:TextInput id="port" prompt="Port" width="20%"/>
    </s:HGroup>
    <s:TextArea id="input" prompt="Input" width="100%" height="50%"/>
    <s:Button width="100%" label="Open Socket and Send Data" click="sendData();"/>
    <s:TextArea id="output" prompt="Output" width="100%" height="50%"/>
  </s:VGroup>
</s:WindowedApplication>

Keep in mind that AIR 3 is still in beta, so you might find bugs. If you do, here’s how to file them.

How to Correctly Use the CameraRoll API on iPads

Now that iPads have built-in cameras, we have to start thinking about how to use camera-related APIs in a more cross-platform way. In particular, the CameraRoll API requires some consideration. On most devices, you can just call browseForImage on a CameraRoll instance, and trust that the right thing will happen. On iPads, however, that’s not enough.

On the iPhone and iPod touch, the image picker takes up the full screen. On the iPad, however, the image picker is implemented as a kind of floating panel which points to the UI control (usually a button) that invoked it. That means browseForImage has to be a little smarter so that it knows:

  1. How big to make the image picker.
  2. Where to render the image picker.
  3. What part of the UI the image picker should point to.

AIR 3 (in beta, available on Adobe Labs) allows developers to solve this problem with the introduction of the CameraRollBrowseOptions class. CameraRollBrowseOptions allows you to specify the width and height of the image picker as well as the origin (the location of the UI component that invoked it). On platforms whose image pickers fill the entire screen, the CameraRollBrowseOptions argument is simply ignored.

Below are screenshots of a simple AIR sample application that uses the new CameraRollBrowseOptions class to tell the OS where and how to draw the image picker:

The code for the application is available on Github (or you can download the Flash Builder project file), but I’ll include the important parts here:

package
{
  import flash.display.Sprite;
  import flash.display.StageAlign;
  import flash.display.StageScaleMode;
  import flash.events.Event;
  import flash.events.MouseEvent;
  import flash.geom.Rectangle;
  import flash.media.CameraRoll;
  import flash.media.CameraRollBrowseOptions;

  public class iPadCameraRollExample extends Sprite
  {

    private static const PADDING:uint = 12;
    private static const BUTTON_LABEL:String = "Open Photo Picker";

    public function iPadCameraRollExample()
    {
      super();
      this.stage.align = StageAlign.TOP_LEFT;
      this.stage.scaleMode = StageScaleMode.NO_SCALE;
      this.stage.addEventListener(Event.RESIZE, doLayout);
    }

    private function doLayout(e:Event):void
    {
      this.removeChildren();

      var topLeft:Button = new Button(BUTTON_LABEL);
      topLeft.x = PADDING; topLeft.y = PADDING;
      topLeft.addEventListener(MouseEvent.CLICK, onOpenPhotoPicker);
      this.addChild(topLeft);

      var topRight:Button = new Button(BUTTON_LABEL);
      topRight.x = this.stage.stageWidth - topRight.width - PADDING; topRight.y = PADDING;
      topRight.addEventListener(MouseEvent.CLICK, onOpenPhotoPicker);
      this.addChild(topRight);

      var bottomRight:Button = new Button(BUTTON_LABEL);
      bottomRight.x = this.stage.stageWidth - bottomRight.width - PADDING; bottomRight.y = this.stage.stageHeight - bottomRight.height - PADDING;
      bottomRight.addEventListener(MouseEvent.CLICK, onOpenPhotoPicker);
      this.addChild(bottomRight);

      var bottomLeft:Button = new Button(BUTTON_LABEL);
      bottomLeft.x = PADDING; bottomLeft.y = this.stage.stageHeight - bottomLeft.height - PADDING;
      bottomLeft.addEventListener(MouseEvent.CLICK, onOpenPhotoPicker);
      this.addChild(bottomLeft);
      
      var center:Button = new Button(BUTTON_LABEL);
      center.x = (this.stage.stageWidth / 2) - (center.width / 2); center.y = (this.stage.stageHeight / 2) - (center.height/ 2);
      center.addEventListener(MouseEvent.CLICK, onOpenPhotoPicker);
      this.addChild(center);
    }

    private function onOpenPhotoPicker(e:MouseEvent):void
    {
      if (CameraRoll.supportsBrowseForImage)
      {
        var crOpts:CameraRollBrowseOptions = new CameraRollBrowseOptions();
        crOpts.height = this.stage.stageHeight / 3;
        crOpts.width = this.stage.stageWidth / 3;
        crOpts.origin = new Rectangle(e.target.x, e.target.y, e.target.width, e.target.height);
        var cr:CameraRoll = new CameraRoll();
        cr.browseForImage(crOpts);
      }
    }
  }
}

Keep in mind that AIR 3 is still in beta, so you might find bugs. If you do, here’s how to file them.

Native JSON Support in AIR 3

One of the many new features in AIR 3 (in beta, available on Adobe Labs) is the new native JSON parser. It has always been possible to parse JSON with ActionScript, but AIR 3 provides native JSON support which is faster than ActionScript implementations, and more efficient in terms of memory usage.

The two main things the JSON class can do are:

  1. Parse JSON strings.
  2. Turn ActionScript objects into JSON ("stringify"). Both operations are shown in the brief sketch below.
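
A minimal sketch (the sample object is made up purely for illustration):

// Parse a JSON string into an ActionScript object...
var post:Object = JSON.parse('{"title":"Native JSON Support in AIR 3","comments":42}');
trace(post.title);    // Native JSON Support in AIR 3
trace(post.comments); // 42

// ...and turn an ActionScript object into a JSON string.
trace(JSON.stringify({read: true})); // {"read":true}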

To learn more about JSON support in AIR 3, check out this short sample news reader (you can also download the FXP file) which uses JSON rather than RSS. The code is so simple, I’ll include the entire thing below:

<?xml version="1.0" encoding="utf-8"?>
<s:WindowedApplication xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" applicationComplete="onApplicationComplete();">

  <fx:Script>
    <![CDATA[

      import flash.globalization.DateTimeFormatter;
      import flash.globalization.DateTimeStyle;
      import flash.globalization.LocaleID;

      import mx.rpc.events.ResultEvent;

      private var df:DateTimeFormatter;

      private function onApplicationComplete():void
      {
        // Feed data is stored as a JSON file...
        var f:File = File.applicationDirectory.resolvePath("feeds.json");
        var fs:FileStream = new FileStream();
        fs.open(f, FileMode.READ);
        var feedJson:String = fs.readUTFBytes(fs.bytesAvailable);
        fs.close();
        var feeds:Object = JSON.parse(feedJson);
        this.feedTree.dataProvider = feeds;
        this.postHtmlContainer.htmlLoader.navigateInSystemBrowser = true;
        df = new DateTimeFormatter(LocaleID.DEFAULT, DateTimeStyle.MEDIUM, DateTimeStyle.NONE);
      }

      private function onTreeClick(e:MouseEvent):void
      {
        if (!this.feedTree.selectedItem || !this.feedTree.selectedItem.data) return;
        feedService.send({v:'1.0',num:'20',q:this.feedTree.selectedItem.data});
      }

      private function onFeedLoaded(e:ResultEvent):void
      {
        var result:String = e.result.toString();
        var feedData:Object = JSON.parse(result);
        var s:String = '<html><body>';
        s += '<h1>Posts for ' + feedData.responseData.feed.title + '</h1>';
        for each (var post:Object in feedData.responseData.feed.entries)
        {
          s += '<p class="postTitle"><a href="' + post.link + '">' + post.title + '&nbsp;&nbsp;&#0187;</a></p>';
          s += '<p>' + df.format(new Date(post.publishedDate)) + '</p>';
          s += '<p>' + post.content + '</p><hr/>';
        }
        s += '</body></html>';
        this.postHtmlContainer.htmlText = s;
      }

    ]]>
  </fx:Script>

  <fx:Declarations>
    <s:HTTPService id="feedService" url="https://ajax.googleapis.com/ajax/services/feed/load" result="onFeedLoaded(event)" resultFormat="text" contentType="application/x-www-form-urlencoded" method="GET" showBusyCursor="true" />
  </fx:Declarations>

  <mx:HDividedBox width="100%" height="100%">
    <mx:Tree id="feedTree" width="200" height="100%" click="onTreeClick(event);"/>
    <mx:HTML width="100%" height="100%" id="postHtmlContainer"/>
  </mx:HDividedBox>
</s:WindowedApplication>

Keep in mind that AIR 3 is still in beta, so you might find bugs. If you do, here’s how to file them.