Archive for August, 2011

Socket Improvements in AIR 3

In AIR 3 (currently in beta, available on Adobe Labs), we added a frequently requested feature to the Socket class: an output progress event. The Socket class has always dispatched a ProgressEvent to let you know when data is ready to be read from the socket, but there was no event indicating how much data had been written from the socket’s write buffer to the network. In most cases, it doesn’t really matter at any given moment how much data has been passed to the network and how much is left in the write buffer, since all of the data eventually gets written before the socket is closed (which usually happens very quickly). That’s not always the case, however. For example, if your application is writing a large amount of data and the user decides to exit, you might want to check whether there is still data in the write buffer which hasn’t been transferred to the network yet. Or you might want to know when data has finished being transferred from the write buffer to the network so you can open a new socket connection, or perhaps de-reference your socket instance in order to make it eligible for garbage collection. Or you might just want to show an upload progress bar indicating how much data has been written to the network, and how much is still pending.

All of these scenarios are now possible in AIR 3 with the OutputProgressEvent. An OutputProgressEvent is dispatched whenever data is written from the write buffer to the network. In the event handler, developers can check how much data is still waiting to be written by inspecting the bytesPending property. Once bytesPending returns 0, you know all the data has been transferred from the write buffer to the network, and it is consequently safe to do things like remove event handlers, null out your socket reference, shut down your application, start the next upload in a queue, etc.
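
Distilled to its essence, the pattern is just a listener on the socket. Here is a minimal sketch, assuming a connected Socket instance named socket:

```actionscript
// Sketch: wait for the Socket's write buffer to drain before cleaning up.
socket.addEventListener(OutputProgressEvent.OUTPUT_PROGRESS, onOutputProgress);

function onOutputProgress(e:OutputProgressEvent):void
{
    // bytesPending is the number of bytes still waiting in the write buffer.
    if (e.bytesPending == 0)
    {
        // Everything has been handed off to the network; safe to clean up.
        socket.removeEventListener(OutputProgressEvent.OUTPUT_PROGRESS, onOutputProgress);
        socket.close();
    }
}
```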

The code below (also available on Github, and as an FXP file) is a simple example of safeguarding your application from being closed while data is still in the write buffer. The application opens a socket connection to the specified server, writes the data from the input textarea, and then writes the response to the output textarea. It’s coded in such a way that if the user tries to close the application before all the data has been written from the write buffer to the network, it will stop the application from closing, then automatically close it later once it can verify that all the data has been written successfully. (Note that this isn’t a very realistic example since the data being written to the socket is just text, and will probably be small enough that it is all written from the socket to the network layer in a single operation. However, you can easily imagine a scenario where megabytes of data are being written, which could take several seconds or even minutes depending on the quality of the client’s network connection.)

<?xml version="1.0" encoding="utf-8"?>
<s:WindowedApplication xmlns:fx=""
                       xmlns:s="library://"
                       xmlns:mx="library://"
                       showStatusBar="false" creationComplete="onCreationComplete();">

  <fx:Script>
    <![CDATA[
      import flash.utils.ByteArray;

      private var socket:Socket;
      private var readBuffer:ByteArray;
      private var socketOperationInProgress:Boolean;
      private var closeLater:Boolean;

      private function onCreationComplete():void
      {
        this.nativeWindow.addEventListener(Event.CLOSING, onClosing);
      }

      private function onClosing(e:Event):void
      {
        if (this.socketOperationInProgress)
        {
          // Data is still in the write buffer; veto the close and finish later.
          e.preventDefault();
          this.closeLater = true;
        }
      }

      private function sendData():void
      {
        this.socketOperationInProgress = true;
        this.readBuffer = new ByteArray();
        this.socket = new Socket();
        this.socket.addEventListener(Event.CONNECT, onConnect);
        this.socket.addEventListener(ProgressEvent.SOCKET_DATA, onSocketData);
        this.socket.addEventListener(OutputProgressEvent.OUTPUT_PROGRESS, onOutputProgress);
        this.socket.connect(this.server.text, Number(this.port.text));
      }

      private function onConnect(e:Event):void
      {
        this.socket.writeUTFBytes(this.input.text);
        this.socket.flush();
      }

      private function onSocketData(e:ProgressEvent):void
      {
        this.socket.readBytes(this.readBuffer, 0, socket.bytesAvailable);
        this.output.text += this.readBuffer.toString();
      }

      private function onOutputProgress(e:OutputProgressEvent):void
      {
        if (e.bytesPending == 0)
        {
          // The write buffer has been fully transferred to the network.
          this.socketOperationInProgress = false;
          if (this.closeLater) this.close();
        }
      }
    ]]>
  </fx:Script>

  <s:VGroup width="100%" height="100%" verticalAlign="middle" horizontalAlign="center"
            paddingBottom="10" paddingTop="10" paddingLeft="10" paddingRight="10">
    <s:HGroup width="100%">
      <s:TextInput id="server" prompt="Host Address" width="80%"/>
      <s:TextInput id="port" prompt="Port" width="20%"/>
    </s:HGroup>
    <s:TextArea id="input" prompt="Input" width="100%" height="50%"/>
    <s:Button width="100%" label="Open Socket and Send Data" click="sendData();"/>
    <s:TextArea id="output" prompt="Output" width="100%" height="50%"/>
  </s:VGroup>

</s:WindowedApplication>

Keep in mind that AIR 3 is still in beta, so you might find bugs. If you do, here’s how to file them.

How to Correctly Use the CameraRoll API on iPads

Now that iPads have built-in cameras, we have to start thinking about how to use camera-related APIs in a more cross-platform way. In particular, the CameraRoll API requires some consideration. On most devices, you can just call browseForImage on a CameraRoll instance, and trust that the right thing will happen. On iPads, however, that’s not enough.

On the iPhone and iPod touch, the image picker takes up the full screen. On the iPad, however, the image picker is implemented as a kind of floating panel which points to the UI control (usually a button) that invoked it. That means browseForImage has to be a little smarter so that it knows:

  1. How big to make the image picker.
  2. Where to render the image picker.
  3. What part of the UI the image picker should point to.

AIR 3 (in beta, available on Adobe Labs) allows developers to solve this problem with the introduction of the CameraRollBrowseOptions class. CameraRollBrowseOptions allows you to specify the width and height of the image picker as well as the origin (the location of the UI component that invoked it). On platforms whose image pickers fill the entire screen, the CameraRollBrowseOptions argument is simply ignored.

Below are screenshots of a simple AIR sample application that uses the new CameraRollBrowseOptions class to tell the OS where and how to draw the image picker:

The code for the application is available on Github (or you can download the Flash Builder project file), but I’ll include the important parts here:

  import flash.display.Sprite;
  import flash.display.StageAlign;
  import flash.display.StageScaleMode;
  import flash.geom.Rectangle;

  public class iPadCameraRollExample extends Sprite
  {
    private static const PADDING:uint = 12;
    private static const BUTTON_LABEL:String = "Open Photo Picker";

    public function iPadCameraRollExample()
    {
      this.stage.align = StageAlign.TOP_LEFT;
      this.stage.scaleMode = StageScaleMode.NO_SCALE;
      this.stage.addEventListener(Event.RESIZE, doLayout);
    }

    // Button is a simple custom component included with the sample project.
    private function doLayout(e:Event):void
    {
      while (this.numChildren > 0) this.removeChildAt(0);

      var topLeft:Button = new Button(BUTTON_LABEL);
      topLeft.x = PADDING; topLeft.y = PADDING;
      topLeft.addEventListener(MouseEvent.CLICK, onOpenPhotoPicker);

      var topRight:Button = new Button(BUTTON_LABEL);
      topRight.x = this.stage.stageWidth - topRight.width - PADDING; topRight.y = PADDING;
      topRight.addEventListener(MouseEvent.CLICK, onOpenPhotoPicker);

      var bottomRight:Button = new Button(BUTTON_LABEL);
      bottomRight.x = this.stage.stageWidth - bottomRight.width - PADDING; bottomRight.y = this.stage.stageHeight - bottomRight.height - PADDING;
      bottomRight.addEventListener(MouseEvent.CLICK, onOpenPhotoPicker);

      var bottomLeft:Button = new Button(BUTTON_LABEL);
      bottomLeft.x = PADDING; bottomLeft.y = this.stage.stageHeight - bottomLeft.height - PADDING;
      bottomLeft.addEventListener(MouseEvent.CLICK, onOpenPhotoPicker);

      var center:Button = new Button(BUTTON_LABEL);
      center.x = (this.stage.stageWidth / 2) - (center.width / 2); center.y = (this.stage.stageHeight / 2) - (center.height / 2);
      center.addEventListener(MouseEvent.CLICK, onOpenPhotoPicker);

      this.addChild(topLeft); this.addChild(topRight); this.addChild(bottomRight);
      this.addChild(bottomLeft); this.addChild(center);
    }

    private function onOpenPhotoPicker(e:MouseEvent):void
    {
      if (CameraRoll.supportsBrowseForImage)
      {
        var button:Button = as Button;
        var crOpts:CameraRollBrowseOptions = new CameraRollBrowseOptions();
        crOpts.height = this.stage.stageHeight / 3;
        crOpts.width = this.stage.stageWidth / 3;
        // Point the picker at the button that invoked it.
        crOpts.origin = new Rectangle(button.x, button.y, button.width, button.height);
        var cr:CameraRoll = new CameraRoll();
        cr.browseForImage(crOpts);
      }
    }
  }

Keep in mind that AIR 3 is still in beta, so you might find bugs. If you do, here’s how to file them.

Native JSON Support in AIR 3

One of the many new features in AIR 3 (in beta, available on Adobe Labs) is the new native JSON parser. It has always been possible to parse JSON with ActionScript, but AIR 3 provides native JSON support which is faster than ActionScript implementations, and more efficient in terms of memory usage.

The two main things the JSON class can do are:

  1. Parse JSON strings.
  2. Turn ActionScript objects into JSON ("stringify").
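
Both operations mirror the ECMAScript JSON object. A minimal sketch (the sample data here is made up):

```actionscript
// Parse a JSON string into ActionScript objects...
var data:Object = JSON.parse('{"title":"AIR 3","features":["JSON","Sockets"]}');
trace(data.title);        // AIR 3
trace(data.features[0]);  // JSON

// ...and turn an ActionScript object back into a JSON string ("stringify").
var json:String = JSON.stringify({title: "AIR 3", beta: true});
```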

To learn more about JSON support in AIR 3, check out this short sample news reader (you can also download the FXP file) which uses JSON rather than RSS. The code is so simple, I’ll include the entire thing below:

<?xml version="1.0" encoding="utf-8"?>
<s:WindowedApplication xmlns:fx=""
                       xmlns:s="library://"
                       xmlns:mx="library://"
                       applicationComplete="onApplicationComplete();">

  <fx:Script>
    <![CDATA[
      import flash.filesystem.File;
      import flash.filesystem.FileMode;
      import flash.filesystem.FileStream;
      import flash.globalization.DateTimeFormatter;
      import flash.globalization.DateTimeStyle;
      import flash.globalization.LocaleID;


      private var df:DateTimeFormatter;

      private function onApplicationComplete():void
      {
        // Feed data is stored as a JSON file...
        var f:File = File.applicationDirectory.resolvePath("feeds.json");
        var fs:FileStream = new FileStream();, FileMode.READ);
        var feedJson:String = fs.readUTFBytes(fs.bytesAvailable);
        fs.close();
        var feeds:Object = JSON.parse(feedJson);
        this.feedTree.dataProvider = feeds;
        this.postHtmlContainer.htmlLoader.navigateInSystemBrowser = true;
        df = new DateTimeFormatter(LocaleID.DEFAULT, DateTimeStyle.MEDIUM, DateTimeStyle.NONE);
      }

      private function onTreeClick(e:MouseEvent):void
      {
        if (!this.feedTree.selectedItem || ! return;
        // Request the selected feed. This reconstruction assumes the service
        // takes the feed URL as a "q" parameter; adjust for the service you use.
        this.feedService.send({q:});
      }

      private function onFeedLoaded(e:ResultEvent):void
      {
        var result:String = e.result.toString();
        var feedData:Object = JSON.parse(result);
        var s:String = '<html><body>';
        s += '<h1>Posts for ' + feedData.responseData.feed.title + '</h1>';
        for each (var post:Object in feedData.responseData.feed.entries)
        {
          s += '<p class="postTitle"><a href="' + + '">' + post.title + '&nbsp;&nbsp;&#0187;</a></p>';
          s += '<p>' + df.format(new Date(post.publishedDate)) + '</p>';
          s += '<p>' + post.content + '</p><hr/>';
        }
        s += '</body></html>';
        this.postHtmlContainer.htmlText = s;
      }
    ]]>
  </fx:Script>

  <fx:Declarations>
    <s:HTTPService id="feedService" url="" result="onFeedLoaded(event)" resultFormat="text" contentType="application/x-www-form-urlencoded" method="GET" showBusyCursor="true" />
  </fx:Declarations>

  <mx:HDividedBox width="100%" height="100%">
    <mx:Tree id="feedTree" width="200" height="100%" click="onTreeClick(event);"/>
    <mx:HTML width="100%" height="100%" id="postHtmlContainer"/>
  </mx:HDividedBox>

</s:WindowedApplication>

Keep in mind that AIR 3 is still in beta, so you might find bugs. If you do, here’s how to file them.

Front-facing Camera Support in AIR 3

Adobe AIR for mobile has had consistent camera support since AIR 2.6, and now in AIR 3 (in beta, available on Adobe Labs), we’ve introduced support for front-facing cameras, as well. The introduction of the new position property on Camera, along with the constants in the new CameraPosition class (BACK, FRONT, and UNKNOWN), allows you to choose which camera you want to get a reference to before attaching it to a Video object.

To be honest, the API is not quite as elegant as I’d like because making it consistent across platforms (mobile and desktop) meant having to compromise a little intuitiveness, but it’s easy enough to figure out, and most importantly, it’s entirely cross-platform. The code below shows a simple function that will return the requested camera:

// Get the requested camera. If it cannot be found,
// return the device's default camera instead.
private function getCamera(position:String):Camera
{
  for (var i:uint = 0; i < Camera.names.length; ++i)
  {
    // Camera.getCamera() takes the camera's index as a string.
    var cam:Camera = Camera.getCamera(String(i));
    if (cam.position == position) return cam;
  }
  return Camera.getCamera();
}
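
For instance, assuming the function above, selecting the front-facing camera and attaching it to a Video object might look like this (a sketch, not part of the original example):

```actionscript
// Prefer the front-facing camera; getCamera() falls back to the default.
var camera:Camera = getCamera(CameraPosition.FRONT);
var video:Video = new Video(camera.width, camera.height);
video.attachCamera(camera);
addChild(video);
```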

For a full working example, check out the demo application called FrontCameraExample on Github (you can also download the FXP file). Keep in mind that AIR 3 is still in beta, so you might find bugs. If you do, here’s how to file them.

How to File Adobe AIR Bugs

I just wanted to post a quick reminder that we now have a public bug base for Adobe AIR. It currently supports both AIR and Flash Player (along with a few other products).

In case you’re not accustomed to filing bugs, it’s hugely important to be as thorough and precise as possible. The absolute best way to file a bug is to provide a simple use case (with code) that reproduces the issue along with detailed instructions on how to run it. The first step in fixing a bug is being able to reliably reproduce it.

The bugs entered through the public bug base go directly into our internal bug tracking system and our review queue, so the better the bug report, the faster it gets in front of an engineer. To anyone who has taken the time to file a bug: we really appreciate your contribution!

How to Tell Which Flex Components Have Been Mobile Optimized

Even when you work for Adobe, it’s hard to keep up with everything we’re doing — especially when it comes to AIR, Flex, and Flash Builder. I started building a new Flex Mobile application yesterday, and I realized that I didn’t know exactly which Flex components had been mobile optimized to date. Rather than just asking someone for a list, I asked around to find out the best way to stay up to date.

Piotr Walczyszyn gave me the excellent suggestion of checking the documentation for mobile skins. If a component has a mobile skin, it has been mobile optimized, and since the docs are kept up to date, this seems like as good a way as any to know which components are safe to use in a mobile app and which to avoid.

Ultimately, I would like to see our docs allow filtering by mobile optimization, but until that happens, this will work almost as well.

How to Use the AIR 3 Beta SDK

The AIR 3 runtime has been out in beta for some time, and now the AIR 3 SDK is available, as well. If you’re interested in checking it out, you can find it over on Adobe Labs.

If you’ve downloaded the SDK and want to give it a try, you’ll need to know how to set it up. There are really only two things you need to know:

  1. How to overlay the SDK. Although these instructions are a little old, I just reviewed them, and they’re still valid. Of course, SDK versions are different now, but in general, the instructions are still accurate.
  2. How to access the new AIR 3 APIs. Once you have the AIR 3 SDK properly overlaid, you’ll need to make one simple change in Flash Builder in order to access the new APIs. Once you’ve created a new project and selected the correct SDK (the one you just created), go to "Project Properties," then "Flex Compiler," and add "-swf-version=13" to the "Additional compiler arguments" box. 13 is the SWF version that corresponds to AIR 3, so this tells the compiler which version of the APIs you want to target.

That’s about all you need to do to get started building AIR 3 (beta) applications.