Pew Pew for Metro

After almost three long months of writing about cross-compiling ActionScript to JavaScript I would like to start a new series of blog posts that is perhaps a little bit more entertaining but also contains information that you will hopefully find useful.

Yesterday Microsoft successfully launched the Consumer Preview of the upcoming Windows 8 operating system. As part of this launch Microsoft also revealed the eight winners of its First Apps Contest. Back in December Microsoft had invited developers to write a Metro style app for the Windows Store:

“You may recall we issued a challenge to developers to be among the very first to be featured in the Windows Store. Under tight deadlines, and limited only by their own imagination and creativity, they responded!”
Source:  Introducing the winners of the First Apps Contest

If you go to the Windows 8 First Apps Contest website and scroll down a little bit you will find an app called Pew Pew with my name next to it. Yep, that’s me. I somehow managed to be one of the eight winners. In the following weeks I am going to tell you about the crazy journey that eventually led to Pew Pew becoming one of the first apps in the Windows Store.

 

Pew Pew Chronicles

  1. December 2011
  2. Hello, Metro!
  3. From Flash To Flex
  4. Planning A Death March
  5. Ready, Steady, Go!
  6. Submitting to the App Store
  7. You are a winner!
  8. The Pew Pew Manifesto

 

 

Testing cross-compiled ActionScript

After discussing optimizing and debugging cross-compiled ActionScript I would like to take a look at testing cross-compiled ActionScript. The short answer I can offer is: testing cross-compiled ActionScript is not much different from testing JavaScript. But of course there is more to it. There is always more…

 

Testing your patience

Be warned, this is a long post. I thought about splitting this post up as I have done in previous posts. But in this case I think everything should stay together. That said, let’s dive in!

In my opinion there are two different testing scenarios:

  1. Testing your cross-compiler.
  2. Testing client code.

In both scenarios we test JavaScript, but as we will see the goals and methods differ. Before we jump right in let me also point out that there are two types of testing: automated and manual (interactive). Of course we prefer automated testing, because it is cheaper, less error-prone, and can be integrated into Continuous Integration (CI) loops.

For running the Tamarin Acceptance Tests, which test the cross-compiler, I use V8’s d8 shell, and for testing client code and running regression tests I use the default browser (usually Safari on OSX) with Google’s js-test-driver.

 

Correctness versus Consistency

As mentioned above the goals for testing your cross-compiler and testing client code are different. When you test your cross-compiler you want to ensure that all language constructs in the source language are correctly cross-compiled to the target language. The term “correct” refers to preserving the internal consistency of the source code in the target code, not correctness in an abstract sense. I tried to explain this idea in a previous post, where in ActionScript and JavaScript you’ll get a mathematically incorrect result when adding 0.2 to 0.1:

// yields 0.30000000000000004, and not 0.3 in AS3 and JS.
trace( 0.2 + 0.1 );

I argued that in the context of cross-compiling one language into another it is very important to produce the same results in both worlds (here 0.30000000000000004) even if the results are incorrect. Our test suites need to follow that principle as well. As wrong as it may seem we should include a unit test that tests against 0.2 + 0.1 = 0.30000000000000004.
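As a minimal sketch of such a consistency test (plain JavaScript; the test function name is made up for illustration), deliberately asserting the "wrong" value both VMs agree on:

```javascript
// Consistency test: assert the IEEE-754 result that both AS3 and JS
// produce, not the mathematically "nice" value 0.3.
function testFloatAdditionConsistency() {
  var result = 0.2 + 0.1;
  // Deliberately expect the "incorrect" value: it is what both VMs yield.
  if (result !== 0.30000000000000004) {
    throw new Error("expected 0.30000000000000004, got " + result);
  }
  return true;
}

console.log(testFloatAdditionConsistency()); // true
```

A test against `0.3` would fail in both worlds, which is exactly why the suite must encode consistency rather than abstract correctness.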

 

Missing Language Specification

Testing implies that you know the correct and expected results of your tests. When testing your cross-compiler the expected results are defined by the grammar and semantics of the source and target languages. What we need in our case are the language specifications of ActionScript and JavaScript. Herein lies a big problem. While JavaScript is specified in ECMA-262, you won’t find a language specification for ActionScript. Just to be clear, a “language specification” is a document that describes the language grammar (usually formulated in Backus-Naur Form) precisely enough that developers can build a compiler for that language. A language specification is not a tutorial, FAQ, or online help about that language. For example Dart has a language spec; CoffeeScript’s language reference, on the other hand, does not qualify as a specification.

Neither does ActionScript’s online help. The closest thing to a language specification that Adobe currently offers is a section in the online help called Learning ActionScript 3.0. You might argue that an ActionScript language specification is not really necessary, because the syntax of ActionScript is by now well known and documented. I am afraid you cannot assume that everything about ActionScript is well documented. Just try to find the exact syntax for Vectors in Learning ActionScript 3.0. I found this snippet under Array/Advanced Topics (you can also find other bits and pieces under Basics of Arrays):

Note: You can use the technique described here to create a typed array. However, a better approach is to use a Vector object. A Vector instance is a true typed array, and provides performance and other improvements over the Array class or any subclass. The purpose of this discussion is to demonstrate how to create an Array subclass.

Even though it’s not really my fault, I truly regret that Adobe has never properly published a language specification for ActionScript. One could even argue that all computer languages can be divided into “hobby languages” and “professional languages”. The latter have language specifications.

 

Tamarin Acceptance Tests

The lack of a language specification for ActionScript has another nasty side effect. Without a language specification you also won’t find a corresponding language validation suite, which tests a compiler against the language specification. But that’s exactly what we need: a test suite that tells us that our cross-compiler complies with the language specification.

The next best thing to a language validation suite are currently the Tamarin Acceptance Tests. Those tests are grouped into different sections:

  • abcasm
  • as3
  • e4x
  • ecma3
  • misc
  • mmgc
  • mops
  • recursion
  • regress
  • spidermonkey
  • versioning

Of those sections the most important ones are as3 and ecma3. The ecma3 section covers about 40,000 unit tests while as3 only has about 650. I suspect the name "ecma3" refers to the 3rd edition of ECMA-262. In reality, though, the tests seem to validate against the latest edition, which is the 5.1 edition of ECMA-262. Unfortunately we can’t really tell what the ecma3 folder is testing against, because there is no ActionScript language specification, and the Tamarin Acceptance Tests don’t claim to be a language validation suite.

 

Oberflächenverdoppelung

In fact, the Tamarin Acceptance Tests only seem to care that the behavior of the ActionScript VM in the Flash Player does not change in new versions of the Tamarin VM. The definition “ActionScript is what passes the Tamarin Acceptance Tests” unfortunately does not work, as we will see in the following chapters.

There is a wonderful, but rarely used German philosophical term for this kind of circular reasoning (ActionScript is what passes the Tamarin Tests, which validate Tamarin VM results, which reflect ActionScript language rules). It is called “Oberflächenverdoppelung“, which I would translate as “surface duplication“. By creating tests that just confirm the results of an existing VM you only reiterate the knowledge of your own belief system. Instead of probing for truth you merely duplicate the surface of what you already know.

 

typeof Number

Assuming that the "ecma3" section of the Tamarin Acceptance Tests validates against ECMA-262 (even the 3rd edition), there is a cluster of tests which are in my opinion incorrect. To be precise, I believe ActionScript does not always comply with ECMA-262, and the Tamarin tests just bend to the status quo of ActionScript. For example, test e15_4_2_2_2, which is under ecma3/Array, would fail in every browser because of this tiny difference between ActionScript 3 and JavaScript:

// ActionScript
typeof(new Number()) == "number"
// JavaScript
typeof(new Number()) == "object"

I asked our specialists here at Adobe why there is this difference and this was one of the replies:

In javascript, new Number(…) evaluates to an Object that wraps a number value. In AS3, new Number(…) return a number value. In both languages the Array constructor creates an Array of length 1 if given an object and an array of length n if given a number, where n is the value of the number.

But does ECMA-262 really allow both points of view? Is it allowed to return a number value for “new Number(…)” without fragmenting the language specified by ECMA-262? The ECMA-262 Language Overview chapter says (highlights added by me):

ECMAScript defines a collection of built-in objects that round out the definition of ECMAScript entities. These built-in objects include the global object, the Object object, the Function object, the Array object, the String object, the Boolean object, the Number object, the Math object, the Date object, the RegExp object, the JSON object, and the Error objects Error, EvalError, RangeError, ReferenceError, SyntaxError, TypeError and URIError.

Number is clearly defined as an Object, and typeof(new Number()) == "object" seems to be the correct result according to ECMA-262. There is also a whole chapter on Number Objects in ECMA-262. But I couldn’t find any supporting arguments for typeof(new Number()) == "number".
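The JavaScript side of this difference is easy to verify; the distinction is between calling Number as a constructor (which yields a wrapper object) and calling it as a function (which yields a primitive):

```javascript
// Per ECMA-262: `new Number(...)` creates a Number *object* (a wrapper),
// while calling Number as a plain function yields a number *primitive*.
var wrapped = new Number(5);
var primitive = Number(5);

console.log(typeof wrapped);   // "object"
console.log(typeof primitive); // "number"

// The wrapper still unwraps for arithmetic via valueOf():
console.log(wrapped + 1); // 6
```

ActionScript 3 collapses both cases to a primitive, which is exactly the divergence the Tamarin test bakes in.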

As mentioned earlier it is important to preserve the behavior of the source language in the target language. But I decided that in this case ActionScript (and the bending Tamarin tests) are incorrect, if you assume that "ecma3" really does validate against ECMA-262. This would be similar to, say, getting 0.2 + 0.1 = 0.30000000000000004 in AS3 and 0.2 + 0.1 = 0.3 in JS. If AS3 and JS disagree and JS reports the correct result, I usually decide against AS3. You can’t rely on the Tamarin Acceptance Tests. Their definition of correctness only preserves behavior exhibited in pre-existing C++ implementations of the ActionScript VM.

In either case, this was one of the rare exceptions where I decided to stick with ECMA-262 and not with what the Tamarin tests say ActionScript is (who really knows what ActionScript is without a language spec?). As we just learned, one nice thing about language specifications is that there are almost no ambiguities (Number is an Object). I’ll get to the “almost” in a second.

 

function.toString()

There are many more Tamarin tests that I find questionable and sometimes even absurd. This one is particularly annoying, because the test could have been phrased to serve a broader definition of what function.toString() may return. This code yields different results in ActionScript 3.0 and in JavaScript in browsers:

(Array(1,2)).toString.toString();
// Result in ActionScript
"function Function() {}"
// Result in JavaScript
"function toString() { [native code] }"

It turns out that ECMA-262 does not specify what exactly function.toString() should return. But we do know the returned value is of type String and should start with “function ”. Instead of validating against “function ” the Tamarin Acceptance Tests unfortunately check for what the Tamarin VM returns, which is “function Function() {}”, and therefore trigger numerous false negatives in browser environments.
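The broader test the Tamarin suite could have used is simple to write. A sketch (the helper name is made up) that only asserts what ECMA-262 actually pins down:

```javascript
// A lenient check: ECMA-262 leaves the exact text of function.toString()
// implementation-defined, so only assert that the value is a String that
// starts with the keyword "function".
function isPlausibleFunctionSource(s) {
  return typeof s === "string" && s.indexOf("function") === 0;
}

// Both "function Function() {}" (Tamarin) and
// "function toString() { [native code] }" (browsers) would pass.
var src = Array.prototype.join.toString();
console.log(isPlausibleFunctionSource(src)); // true
```

Phrased this way, the same test passes in the Tamarin VM and in every browser, with no glue code needed.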

What shall we do about it? Shall we perhaps decide against ActionScript like we did for “typeof Number” above? This is a tough one, and I don’t think I am allowed to implement the JavaScript version and ignore the ActionScript version as I did with “typeof Number”, because ECMA-262 is somewhat ambiguous in regard to function.toString(). (At least I try to be consistent in the way I am fragmenting the ActionScript language!)

So here is the compromise that I came up with. I added a new compiler option to the cross-compiler called “-js-extends-dom”, which defaults to “true”. If you compile the Tamarin tests, -js-extends-dom will be true and the cross-compiler will emit this ugly glue code at the top of your generated JavaScript:

// Additions to DOM classes necessary for passing Tamarin's acceptance tests.
// Please use -js-extends-dom=false if you want to prevent changes to DOM core objects.
if( typeof(__global["Math"].NaN) != "number" )
{
  /**
  * @const
  * @type {function()}
  */
  var fnToString = function() { return "function Function() {}"; };
  /** @type {Object} */
  var proto = {};
  // Additions to Math
  __global["Math"].NaN = NaN;
  // Additions to Array (see e15_4_1_1, e15_4_2_1_1, e15_4_2_1_3, e15_4_2_2_1)
  proto = __global["Array"].prototype;
  proto.toString.toString = fnToString;
  proto.join.toString = fnToString;
  proto.sort.toString = fnToString;
  proto.reverse.toString = fnToString;
  if( Object.defineProperty )
  {
    proto.unshift = (function ()
    {
      /** @type {function()} */
      var f = proto.unshift;
      return function() { return f.apply(this,arguments); };
    })();
    Object.defineProperty( proto, "unshift", {enumerable: false} );
  }
  // Additions to Error (see e15_11_2_1)
  proto = __global["Error"].prototype;
  proto.getStackTrace = proto.getStackTrace || function() { return null; };
  proto.toString = (function ()
  {
    /** @type {function()} */
    var f = proto.toString;
    return function ()
    {
      return (this.name == this.message || this.message == "") ?
      this.name : f.call(this);
    };
  })();
  proto = __global["Object"].prototype;
  // Additions to Object (see e15_11_2_1)
  proto.toString = (function ()
  {
    /** @type {function()} */
    var f = proto.toString;
    return function ()
    {
      return (this instanceof Error) ?
      ("[object " + this.name + "]") : f.call(this);
    };
  })();
}

 

The glue code above contains all the hacks for passing the Tamarin tests. In October 2011 my cross-compiler passed 40,930 of the ecma3 Tamarin tests, which is about 95%.

In case you are wondering, of course I tell all my clients to run my cross-compiler with -js-extends-dom=false.

 

Validating versus Testing

In my test suite I differentiate between validating and testing. Validating unit tests generate JavaScript that I diff against a known, correct JavaScript version of the same test. All other tests are just run in the browser or in d8, as mentioned earlier. If the generated JavaScript differs during validation, I get an error. If I accept the newly generated JavaScript as the correct version, I copy it over the saved version. If I detect a bug in the delta of the newly generated JavaScript, I reject the build. This method has been very successful and has helped me keep my bug regression rate at an astonishingly low level.

 

Testing Client Code

I have a few test apps that I always cross-compile as part of my unit tests. One of them is SpriteExample, which is one of the first examples that I ever cross-compiled to JavaScript. You can find that example at the bottom of the Flex online help for flash.display.Sprite.

The SpriteExample is actually not that trivial, because it uses the class name as a conversion function (a style I very much dislike):

var sprite:Sprite = Sprite(event.target);

In order to get this example to work you would also have to solve event dispatching and drag and drop for Sprites. The drag and drop part can only be tested manually, though.
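How a cast like `Sprite(event.target)` could be emitted is an implementation choice; one hypothetical sketch (the helper name `as3Cast` is invented for illustration) is a runtime type check that mirrors AS3's cast semantics:

```javascript
// Hypothetical runtime support for AS3-style casts: the line
//   var sprite:Sprite = Sprite(event.target);
// could cross-compile to
//   var sprite = as3Cast(event.target, flash.display.Sprite);
function as3Cast(value, type) {
  if (value === null || value === undefined) {
    return null; // AS3 casts pass null through
  }
  if (value instanceof type) {
    return value;
  }
  // AS3 throws TypeError #1034 on a failed cast.
  throw new TypeError("Error #1034: Type Coercion failed.");
}
```

The check is cheap, but it is still a function call per cast, which is one more reason to dislike this style in hot code paths.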

 

The End

That’s it, I hope! Now I have told you almost everything I know about cross-compiling ActionScript to JavaScript. There are still many topics that I could discuss in more detail. But after almost three months since the first post of this series, I think we all deserve a break.

Next week, I am starting a new series about something completely different. Well, maybe not completely different…

 

 

Debugging cross-compiled ActionScript

This could be a short post: Just don’t introduce any bugs to your code, then you won’t ever have to debug. Clearly, that’s easier said than done. But seriously, how do bugs come into this world? I might address this philosophical question in a different post if there is any interest. Today I will assume that developers are not perfect beings and occasionally make mistakes, which don’t get caught by the tools they use. That’s why they have to debug their code. So how do you debug cross-compiled ActionScript?

 

Optimized versus Unoptimized Code

This might be obvious, but if you can, try to debug your unoptimized code. If your bug only occurs in the optimized version, use the Closure Compiler’s -formatting=PRETTY_PRINT option, which generates somewhat readable optimized code. If that doesn’t help, use Closure’s -create-source-map option and the Closure Compiler Inspector. In very rare cases none of those methods helps and all that’s left is “printf debugging”, which translates to “console.info debugging” in the browser world.

 

Debugging with the Browser

At this point I always debug my cross-compiled ActionScript directly in the browser, mostly Safari on OSX. Sometimes I switch browsers depending on the task at hand.

These days every modern browser comes with a built-in debugger that allows source-level debugging and offers an interactive console. I have noticed that Safari and Chrome both have difficulties when you jump out of the last expression in a function (tail calls). The debugger seems to get confused and does not stop at that last expression but jumps out to the parent scope. Microsoft’s IE10 does not have that problem. If you have to debug, e.g., jQuery’s library code you might want to switch to IE10. It will probably save you a lot of time.

 

Debugging with Flash Builder

In a previous post about my “dreamed up” Flex SDK for JavaScript I proposed that everything should be transparent, even Flash Builder’s source-level debugging of ActionScript running as JavaScript in the browser should just work. JavaScript and HTML would just be features delivered through a specialized Flex SDK.

Now, this is also easier said than done. But it can definitely be implemented, and I will tell you how in a second. I just find it hard to justify adding source-level debugging of cross-compiled JavaScript to Flash Builder while existing browsers already offer excellent debug tools.

So here would be my rough idea how you could implement source-level debugging for Flash Builder…

 

FDB

Not many of you might know that every Flex SDK comes with a command line debugger called FDB. In fact, you can study the source code at Adobe’s Flex Open Source trunk. That’s actually what you would probably have to do first: study how FDB works and transfer that knowledge over to the browser world. As far as I remember from the days when I was implementing debugging for Packager for iPhone apps, FDB uses a custom protocol for exchanging data between the debugger (FDB) and the running app. This is all done using sockets on some port reserved by Adobe. I think you could cook something up with WebSockets, where you would use that protocol and transmit debugging data from your JavaScript code.

One thing you have to keep in mind, though, is that your cross-compiler needs to emit extra debug code pretty much after each line of code of the emitted JavaScript, in order to allow the debugger, which is the driver of your JavaScript program, to intercept execution. Some features like call stacks might not be available through the JavaScript API of your browser.

If everything goes well, you will be able to hit the debug button in Flash Builder, which triggers the launch of your default browser with the HTML page associated with your project. Flash Builder will then wait patiently up to 2 minutes for your JavaScript to connect with the debugger that is baked into Flash Builder, which uses the same protocol as FDB.
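Purely as a sketch (the message layout, field names, and URL below are all invented for illustration; the real FDB wire protocol would have to be studied first), the browser side of such a debug channel might look like:

```javascript
// Hypothetical browser-side debug channel for an FDB-style debugger.
// Encode a debug event (breakpoint hit, variable dump, ...) as JSON;
// the real FDB protocol is binary, so this is illustrative only.
function encodeDebugEvent(kind, payload) {
  return JSON.stringify({ kind: kind, payload: payload, ts: Date.now() });
}

// Connect to the debugger baked into Flash Builder (hypothetical URL).
function connectDebugger(url) {
  if (typeof window === "undefined" || typeof WebSocket === "undefined") {
    return null; // not running in a browser; nothing to connect
  }
  var socket = new WebSocket(url);
  socket.onopen = function () {
    socket.send(encodeDebugEvent("handshake", { protocol: "fdb-over-ws" }));
  };
  return socket;
}
```

The cross-compiler would then emit calls like `debugHook(lineNumber)` after each generated statement, and `debugHook` would block on instructions arriving over this socket.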

Frankly, I wouldn’t do all that work just to be able to debug from within Flash Builder. This might turn out to be a feature that requires  one or more full-time developers.

 

 

 

 

About Humans, Horses, and Taxes

This morning I reread my previous post about optimizing cross-compiled JavaScript and I noticed that I didn’t quite tell the whole story. There is a different, almost philosophical, angle that I would like to share with you. Many readers might have walked away with the impression that I am just a “static type-guy” while the future will be dominated by “dynamic” languages like JavaScript and Dart. It is true that at this point I prefer static types – but really solely for pragmatic reasons. My explanation will take you to some strange places…

 

The Illusion of Life

Let me state the obvious: You are now staring at your computer screen. What you are looking at are not real buttons and the objects moving in your game are not really alive. You have been tricked and you like it that way. It’s all an illusion! A good app keeps those illusions alive and sucks you in, a bad app ruins your buzz and makes you painfully aware of the fact that you are just staring at a computer screen. But what is the secret sauce of those illusions? I think, that the main ingredient of that sauce is processing velocity, which needs to be just powerful enough to satisfy a few “human constants”.

 

Acceptable Frame Rates

I know from my own experience that a game crawling at 6 frames per second (fps) does not do the trick. It does not suck you in; instead you look at the game and think: oh yeah, this is some game on my screen. The illusion is broken. So what is the “acceptable frame rate” at which moving pictures come to life? Here at Adobe we aim for 20fps, but sometimes that’s just not doable. It seems that 16fps is the absolute minimum. Everything under 16fps ruins your buzz. Perhaps we can then just say:

20 frames per second = 1/20 seconds per frame = 50ms per frame
50ms >= processing velocity

 

Acceptable Response Time

I couldn’t find any literature on “acceptable frame rates” but there is a lot of material out there that discusses “acceptable response times” for web applications. According to this article  Miller (1968) and Card et al. (1991) summarize acceptable response times as follows:

  • 100 ms is about the limit for having the user feel that the system is reacting instantaneously.
  • 1.0 second is about the limit for the user’s flow of thought to stay uninterrupted, even though the user will notice the delay.
  • 10 seconds is about the limit for keeping the user’s attention focused on the dialogue.

In my opinion users these days become impatient if they don’t get any visual feedback with some kind of progress report after the 1.0 second mark. If your desktop app is not up and running after 10 seconds you have lost the user, or made her angry at your product and, by extension, at your company. Also, I estimate that the average attention span for launch times is much shorter for mobile apps. I am afraid your app needs to be up in about 1 second on mobile devices or the buzz is ruined. It seems that for button clicks we can assume:

100 ms >= processing velocity

and for launch times, which includes load times, we should be on the safe side with:

1 s >= processing velocity

 

Three of a Perfect Pair

(UPDATE, 2/13/2012: I corrected the formula. It’s SW/HW not HW/SW. The bigger the tax, the greater the expression, which needs to be smaller than the Human Constant on the left hand side.)

It seems that there are some human constants (20 fps frame rate, 100 ms click response, 1 s launch time) that set the performance rules every app has to obey. I would define “processing velocity” as the hardware’s ability to process software as fast as possible. If you adopt this definition then hardware has the horsepower while software is the sticky part you throw into the blender. We could generalize this relationship as follows:

Human Constant >= (Software Tax) / (Hardware Horsepower)

This crude formula perfectly describes for me that equilibrium, which must stay in balance, or your app will suck. Your JavaScript can be inefficient and bloated (high Software Tax). But if the hardware is strong enough (lots of Hardware Horsepower) then it can easily pick up the tab and everything may stay under those mysterious Human Constant values. As long as that equilibrium is balanced nobody cares.
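The formula can be turned into a trivial sanity check. The numbers below are illustrative only; "tax" is a made-up cost in work units and "horsepower" a made-up rate in units per millisecond:

```javascript
// Crude check of the equilibrium: does a frame's software cost, divided
// by the hardware's speed, stay under the human constant (in ms)?
function staysUnderBudget(humanConstantMs, softwareTax, hardwareHorsepower) {
  return humanConstantMs >= softwareTax / hardwareHorsepower;
}

// 400 units of work on hardware doing 10 units/ms = 40ms per frame: fine.
console.log(staysUnderBudget(50, 400, 10)); // true
// Same work on hardware half as fast = 80ms per frame: illusion broken.
console.log(staysUnderBudget(50, 400, 5)); // false
```

Halving the horsepower (think desktop versus mobile) flips the result, which is the whole story of the next section in two lines.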

But why am I telling you all this? I want to make the point that the Software Tax is the part developers have the most control over, and it is also the most flexible part. I don’t expect humans to change their perception much in the next 10 years. If anything, we will most likely get more impatient. Hardware gets faster every year, but during a typical 12-18 month production cycle of a hardware generation you have to treat the Hardware Horsepower as a constant.

 

Pirates Love Daisies

That leaves the Software Tax as the component that will have to bend in order to keep our equilibrium in balance. As you well know there are countless ways of writing web apps (apps that run in browsers). I would argue that on desktop platforms the hardware is currently strong enough to run almost any JavaScript in browsers. But the picture seems very different on mobile devices. For example, try playing Pirates Love Daisies on the iPad. You will probably quit the app while it is still loading (1 s < SW/HW). If you do wait until you can play the game, you will notice that the game itself is way too slow and runs at around 6fps (SW/HW = 6fps = 167ms per frame; 50ms < 167ms). Pirates Love Daisies is as far as I know a project sponsored by Microsoft, perhaps to promote their modern versions of Internet Explorer (IE9, IE10, IE++). I don’t think that running on Apple devices was a requirement for this game. But if it were, they would have to lower the Software Tax, at least by switching their Canvas-based architecture to SVG, which is hardware-accelerated.

 

La Cuenta, Por Favor

So what does this all have to do with “dynamic” versus “static languages”? Why do I prefer static types for practical reasons? Isn’t “dynamic” versus “static languages” a question of religion?

This is how I would like to tie the different threads together: ActionScript is a dynamic language like Dart and JavaScript and allows you to use no types at all if you wish. From my experience, using Object and Array all over the place comes with a hefty Software Tax, because dynamic language expressions are extremely hard to analyze at compile time. As a result, untyped code will be less efficient in size and performance than typed code once cross-compiled to JavaScript and optimized with the Closure Compiler. Now, on desktop platforms you can probably get away with a “dynamic coding style”. But the hardware on mobile platforms is still not quite powerful enough to process unoptimized and less efficient JavaScript. I am afraid that you simply cannot afford dynamic language expressions if your target platform is iOS or Android. You can always sit it out and wait until the hardware catches up, if dynamic is that important to you. It is not for me. In these days, the so-called Dark Ages, where my Smalltalk app refuses to launch on my high-tech toaster, I would rather stick with typed code.

 

That said, I would like to end this post. My wife wants to go down to the market here in Oaxaca, Mexico. She is eager to buy a musician-skeleton wearing a sombrero (calavera).

Optimizing cross-compiled JavaScript

There are three topics left that I would like to touch on next:

  • optimizing
  • debugging
  • testing

Many of you probably think that optimizing JavaScript is mostly about minifying the code thus keeping code size small and load times short. Minifying is a big part of optimizing cross-compiled ActionScript but there is also a little bit more to it.

 

Picking the right Optimizer

There are many code minifiers out there, but I think everybody will agree with me when I say that Google’s Closure Compiler is the best. The Closure Compiler offers three different optimization modes and it is worth studying what does what:

  • WHITESPACE_ONLY
  • SIMPLE_OPTIMIZATIONS
  • ADVANCED_OPTIMIZATIONS

SIMPLE_OPTIMIZATIONS is the default and it always works. What you want is ADVANCED_OPTIMIZATIONS. But here is the thing: Google does not guarantee that your code will still run after optimizing with ADVANCED_OPTIMIZATIONS. In order to successfully optimize with ADVANCED_OPTIMIZATIONS you need to follow a list of rules that you should perhaps follow anyway.

 

Ninja in, Turtle out

For example calling member functions by name is a no-no for ADVANCED_OPTIMIZATIONS:

// ActionScript
var sprite : Sprite = new Sprite();
sprite["startDrag"].apply(sprite, []);

Who would do that kind of stuff? I guess you can call them ActionScript Ninjas, but you will also find this kind of style in Adobe’s Flex libraries. I don’t recommend this coding style, and it will certainly limit your optimization options to SIMPLE_OPTIMIZATIONS and WHITESPACE_ONLY. In general, trying to write “cute” ActionScript code will almost always backfire. Either your cross-compiler will misinterpret what you want to do, or the Closure Compiler will generate optimized code that does not run. Trust me: reality will bend every ActionScript Ninja. You will be faced with the choice between being right (“This is allowed in ActionScript!”) and being fast (“Hmm, I like 60 fps!”). You pick!
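The underlying gotcha is worth spelling out: in ADVANCED_OPTIMIZATIONS the Closure Compiler renames properties it sees through dot access but never rewrites string literals, so two spellings of the same property silently drift apart. A small sketch:

```javascript
// Why member-access-by-name breaks under ADVANCED_OPTIMIZATIONS:
// the Closure Compiler may rename dot-accessed properties but leaves
// string literals untouched.
var sprite = { width: 100 };

var a = sprite.width;    // may be renamed, e.g. to sprite.a
var b = sprite["width"]; // stays "width": after renaming, a missing property

// Unoptimized, both reads see the same property:
console.log(a === b); // true (only before optimization!)
```

After optimization the bracket access would read an undefined property, which is exactly the class of "optimized code that does not run" mentioned above. Pick one spelling per property and stick to it.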

Just to give you an idea what ADVANCED_OPTIMIZATIONS is capable of if you don’t feed it ninja code: a year ago I cross-compiled Mike Chambers’s Pew Pew game to JavaScript and recently beefed it up a little bit (by also removing some ninja parts). The uncompressed, cross-compiled JavaScript ended up being about 1.5 MB. The optimized version using ADVANCED_OPTIMIZATIONS eventually brought the size down to 116 KB. I would be willing to give up ninja-style programming for that kind of improvement!

 

Honey, I shrunk the code

But how? And why was Pew Pew so big to begin with (1.5 MB)? I am planning to write a series of posts about cross-compiling Pew Pew to JavaScript shortly, but this seems like the right time to explain what I think you should consider in the design of your cross-compiler. Simply put, your cross-compiler should support ADVANCED_OPTIMIZATIONS as best as it can. One of the ADVANCED_OPTIMIZATIONS rules is that you have to annotate your code with type information (technically you don’t have to, but you want to, as explained above):

// ActionScript
package goog
{
  class Baz
  {
    public function query(groupNum:Number, term:Object=null):void { ... }
  }
}
// JavaScript
/**
 * Queries a Baz for items.
 * @param {number} groupNum Subgroup id to query.
 * @param {string|number|null} term An itemName,
 *     or itemId, or null to search everything.
 */
goog.Baz.prototype.query = function(groupNum, term) {
  ...
};

(Source: http://code.google.com/closure/compiler/docs/js-for-compiler.html)

I am proposing that an ActionScript to JavaScript compiler should always emit all type annotations to the JavaScript code in order to enable optimizers like Google’s Closure Compiler to use those type annotations for enhanced optimizations.

 

Boring is beautiful

You might not always succeed with your noble plan, though. For example it is extremely difficult to determine at compile time what the set of types will be for query’s “term” parameter. So I suspect instead of:

* @param {string|number|null} term An itemName,

You will probably only be able to emit:

* @param {Object|null} term

But that’s better than nothing! To me this example is just a variation of the “ninja-in, turtle out” problem. May I ask you: Do you really have to use Object as the type for “term”? How about this “less cute” version of query():

// ActionScript
package goog
{
  class Baz
  {
    public function queryByNumber(groupNum:Number, term:Number=NaN):void { ... }
    public function queryByString(groupNum:Number, term:String=null):void { ... }
  }
}

That’s right: disambiguate your API by using separate methods for string terms and number terms. I hear some ninjas protesting loudly and arguing that my chatty coding style will result in larger code. Trust me, it will not. The ADVANCED_MODE will take care of it. But if you use Object and Array (instead of Vector) all over the place, the Closure Compiler might not be able to make any sense of your code and you will end up with large chunks of half-optimized code. Or consider this example: maybe nobody uses queryByString()? In my chatty version the Closure Compiler would strip out that function, which it couldn’t do for the do-it-all-in-one query() method. In this case, I am afraid, boring is beautiful.
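To make that concrete, here is a hedged sketch of what cross-compiled, annotated JavaScript for the disambiguated API might look like. Only the names goog, Baz, queryByNumber, and queryByString come from the example above; the item fields (group, id, name) and the method bodies are invented purely for illustration. Because queryByNumber and queryByString are separate prototype properties with precise @param annotations, ADVANCED_MODE can simply delete whichever one your application never calls:

```javascript
var goog = {};

/** @constructor */
goog.Baz = function() {
  /** @type {Array} */
  this.items = [];
};

/**
 * @param {number} groupNum Subgroup id to query.
 * @param {number} term An itemId, or NaN to match every item.
 * @return {number} Number of matching items.
 */
goog.Baz.prototype.queryByNumber = function(groupNum, term) {
  return this.items.filter(function(it) {
    return it.group === groupNum && (isNaN(term) || it.id === term);
  }).length;
};

/**
 * @param {number} groupNum Subgroup id to query.
 * @param {?string} term An itemName, or null to match every item.
 * @return {number} Number of matching items.
 */
goog.Baz.prototype.queryByString = function(groupNum, term) {
  return this.items.filter(function(it) {
    return it.group === groupNum && (term === null || it.name === term);
  }).length;
};
```

If your app never calls queryByString, the whole second prototype assignment is dead code that the optimizer can drop, something it could never prove for a single query() that accepts Object.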

 

Connecting the dots

So far I have only been talking about strategies for minifying generated JavaScript. At this point I might have also convinced you to not worry too much about the code size of your generated, unoptimized JavaScript. We expect it to be large and verbose because of the extra type annotations and unused methods that the ADVANCED_MODE will hopefully take care of later.

But there is one type of optimization that you have to take care of yourself. Consider this example from a previous post:

// ActionScript:
package flash.display
{
    public class Sprite
    {
    }
}
// JavaScript:
var flash = {};
flash.display = {};
flash.display.Sprite = function() {};

Creating a Sprite should result in this generated JavaScript code:

// ActionScript:
var sprite : Sprite = new Sprite();
// JavaScript:
var sprite = new flash.display.Sprite();

What could possibly be wrong with that translation? There is nothing wrong with it, it’s just slow. The Closure Compiler will optimize the JavaScript snippet above roughly to this:

var a=new b.c.d();

Do you see the problem? You would like to get this instead, don’t you?

var a=new b();

With our current implementation of the cross-compiler, using too many packages and classes results in code that unnecessarily dereferences object parts, where the parts will be optimized but not the expression as a whole. Bummer. Here is my proposal to address this problem (which is unfortunately not easy to implement): introduce a package separator constant that is a string and that defaults to “.” in debug mode. In release mode use something like “$”. What the Closure Compiler will receive from your cross-compiler will then look like this (without the type annotations):

// JavaScript:
var flash = {};
var flash$display = {};
var flash$display$Sprite = function() {};
var sprite = new flash$display$Sprite();

Since “flash$display$Sprite” is a single name literal, the Closure Compiler will now happily optimize your JavaScript to:

var a=new b();
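The flattening itself is mechanical. Here is a tiny, hypothetical helper of the kind a cross-compiler might use to turn a qualified ActionScript name into a single JavaScript identifier; the separator would be “.” in debug builds and “$” in release builds, as proposed above:

```javascript
// Flatten a dotted, package-qualified name into one identifier using the
// configured package separator. This is an illustration of the idea, not
// FalconJS internals.
function flattenName(qualifiedName, separator) {
  return qualifiedName.split(".").join(separator);
}

// Release mode: one identifier, so the Closure Compiler can rename it whole.
var releaseName = flattenName("flash.display.Sprite", "$");

// Debug mode: keep the readable dotted form.
var debugName = flattenName("flash.display.Sprite", ".");
```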

Here are some numbers to finish up this post: cross-compiled Pew Pew with “.” as the package separator yielded about 205 KB of optimized JavaScript, while the “$” version made the app faster (thanks to less dereferencing) and reduced the size further, down to 116 KB.

 

 

What is in flash.swc?

In this part of our ongoing series about cross-compiling ActionScript to JavaScript I will talk about flash.swc, which is the last of the three FlashRT SWCs of my “dreamed up” Flex SDK for JavaScript:

  • frameworks/libs/browser/browserglobal.swc
  • frameworks/libs/browser.swc
  • frameworks/libs/flash.swc

The Flash Runtime API has about 450 classes, and flash.swc covers about 200 of them. Some of the APIs, e.g. for accessing the camera, will probably never make it into flash.swc, because browsers don’t support them. One important thing to point out right from the start is that I recommend implementing the Flash Runtime API in ActionScript in terms of browser.swc and browserglobal.swc, as I did in my previous post with the implementation of trace().

There is no way I can cover all 200 classes of flash.swc. Instead I would like to focus on a few classes that were particularly hard. Those are:

  • Proxy
  • Dictionary
  • XML, XMLList
  • EventDispatcher
  • DisplayObject
  • BitmapData
  • ByteArray
  • MovieClip
  • Stage

Let’s get started!

 

Proxy

You might recall that I explained Proxy in a previous post in December. It can be done, but supporting Proxy requires a lot of work on the cross-compiler.

 

Dictionary

I have also talked in detail about Dictionary in a previous post. To summarize, Dictionaries can be implemented as a class derived from Proxy. The only ActionScript feature that we cannot support is weak references. For that reason it is probably a good idea to throw an error if somebody passes true for useWeakKeys when creating a Dictionary:

// ActionScript:
public dynamic class Dictionary extends Proxy
{
    public function Dictionary( useWeakKeys : Boolean = false )
    {
        if( useWeakKeys )
            throw new Error("Weak references are not supported!");
    }
}
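The post's real Dictionary sits on top of the cross-compiler's Proxy support, but the core trick, object identity as a key in pre-ES6 JavaScript (before Map existed), can be sketched with parallel key/value arrays. Everything below is an illustration; the method names setValue/getValue are invented:

```javascript
// Minimal sketch of object-keyed storage: indexOf gives us identity
// comparison, matching ActionScript's === key semantics. Weak keys cannot
// be emulated, so we throw, just like the ActionScript version above.
function Dict(useWeakKeys) {
  if (useWeakKeys) {
    throw new Error("Weak references are not supported!");
  }
  this.keys = [];
  this.values = [];
}

Dict.prototype.setValue = function(key, value) {
  var i = this.keys.indexOf(key);
  if (i < 0) {
    this.keys.push(key);
    this.values.push(value);
  } else {
    this.values[i] = value;
  }
};

Dict.prototype.getValue = function(key) {
  var i = this.keys.indexOf(key);
  return i < 0 ? undefined : this.values[i];
};
```

Note that this linear scan is O(n) per lookup; it only demonstrates the semantics, not a production-quality strategy.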

XML, XMLList

ActionScript supports built-in XML (E4X), but no browser except Firefox supports E4X, and even Mozilla has hinted that they might drop it. For that reason I suggested in a previous post dropping support for E4X. But in practice you’ll find a lot of projects that do use XML and E4X. Over time I realized that we probably need to offer some sort of compromise, so I created two classes, AS3XML and AS3XMLList, that emulate a subset of XML features on top of DOM Element and NodeList.

The details are pretty boring except for maybe the part where the constructor parses a string to an Element:

package flash.xml
{
  import adobe;
  import org.w3c.dom.DOMParser;
  import org.w3c.dom.Element;
  import org.w3c.dom.Node;
  import org.w3c.dom.NodeList;
  ...
  // E4X definitions, based on ECMA-357
  public final dynamic class AS3XML extends Object
  {
    public function AS3XML(value:Object=null)
    {
      if( adobe.jsInstanceOf(value, Node) )
      {
        const valueAsNode : Node = value as Node;
        setNode(valueAsNode);
      }
      else if( typeof(value) == "string" )
      {
        const str:String= value as String;
        setNode( parseFromString(str) );
      }
      else
      {
        throw new Error("XML: incorrect parameter");
      }
    }
    public static function parseFromString( str : String ) : Element
    {
      if( !str )
        return null;

      const parser : DOMParser = new DOMParser();
      const elem : Element = parser.parseFromString(str, "text/xml") as Element;
      return elem;
    }
    public function getNode() : Node
    {
      return m_node;
    }
    private function setNode( val : Node ) : void
    {
      m_node = val;
    }
    ...
  }
}

 

EventDispatcher

In the Flash Runtime, dispatching and observing events is probably one of the most important mechanisms for communicating between instances. The browser’s DOM supports event propagation, too. But some features like event bubbling are either different or missing. You could roll your own event model like Google did in its Closure Library. If you do so, you run the risk of missing out on some optimizations that only the native browser code offers. I went back and forth on this topic until I decided to base flash.events.EventDispatcher on jQuery’s event implementation. My hope was that I might get the good parts of both worlds. Later I abstracted the event model core into IFramework in order to allow other framework implementations, including Google’s Closure Library.
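For readers who have not used Flash, here is the shape of the API being emulated, reduced to a minimal, framework-agnostic sketch (no bubbling or capturing, and the internal field names are invented). The real flash.swc version delegates the plumbing to IFramework so jQuery or Closure can supply it:

```javascript
// Flash-style EventDispatcher: listeners keyed by event type.
function EventDispatcher() {
  this.listeners = {};   // type -> array of listener functions
}

EventDispatcher.prototype.addEventListener = function(type, listener) {
  (this.listeners[type] = this.listeners[type] || []).push(listener);
};

EventDispatcher.prototype.removeEventListener = function(type, listener) {
  var list = this.listeners[type] || [];
  var i = list.indexOf(listener);
  if (i >= 0) list.splice(i, 1);
};

// Dispatches to a snapshot of the listener list; returns whether anyone
// was listening, loosely mirroring Flash's boolean return value.
EventDispatcher.prototype.dispatchEvent = function(event) {
  var list = (this.listeners[event.type] || []).slice();
  for (var i = 0; i < list.length; i++) list[i](event);
  return list.length > 0;
};
```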

 

DisplayObject

“Do you use SVG, or Canvas?” That’s what people ask me all the time. My answer has always been: “Both”. But my flash.swc has a bias towards SVG. DisplayObject uses an SVGLocatableElement. Here is an example:

public function set alpha(value:Number):void
{
  m_alpha = value;
  m_element.setAttribute( "opacity", value.toString() );
}
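The pattern behind that setter, cache the value and forward it to an attribute on the underlying SVG element, can be sketched in plain JavaScript. A stub element stands in for SVGLocatableElement here so the sketch runs outside a browser; the stub and the setAlpha name are my inventions, only the opacity mapping comes from the code above:

```javascript
// A fake element with just enough of the DOM attribute API for the demo.
function makeStubElement() {
  var attrs = {};
  return {
    setAttribute: function(name, value) { attrs[name] = value; },
    getAttribute: function(name) { return attrs[name]; }
  };
}

function DisplayObjectSketch(element) {
  this.m_element = element;
  this.m_alpha = 1;
}

// Mirrors the ActionScript setter: cache the value, push it to SVG.
DisplayObjectSketch.prototype.setAlpha = function(value) {
  this.m_alpha = value;
  this.m_element.setAttribute("opacity", value.toString());
};
```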

 

BitmapData

As far as I remember BitmapData is the only class that uses Canvas within flash.swc. But how do I integrate Canvas elements with the rest of my SVG tree? Well, SVG has this fantastic feature called ForeignObjectElement. You can mix Canvas into SVG, but not the other way around. If you are interested in more details on how you can use Canvas in SVG, please read this article.

 

ByteArray

I probably spent more than a week on ByteArray alone. These were the painful areas:

  • read/writeFloat
  • read/writeDouble
  • read/writeObject
  • compress/uncompress, deflate/inflate

read/writeFloat is about serializing IEEE 754 single-precision (32-bit) floating-point numbers, while read/writeDouble is about IEEE 754 double-precision (64-bit) floating-point numbers. You might think that by now there should be tons of libraries and examples out there that serialize according to IEEE 754. That is indeed the case, but I could only find one implementation that worked. The rest all had bugs, which I found with the fantastic IEEE-754 Analyzer.
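For reference, in today's JavaScript the single-precision round-trip that had to be hand-rolled back then can be sketched with typed arrays, which were not broadly available at the time. This is a modern stand-in, not the implementation described above:

```javascript
// Serialize an IEEE 754 single-precision float to 4 big-endian bytes.
function writeFloat(value) {
  var buf = new ArrayBuffer(4);
  new DataView(buf).setFloat32(0, value, false);  // false = big-endian
  return new Uint8Array(buf);
}

// Deserialize 4 big-endian bytes back into a number.
function readFloat(bytes) {
  var buf = new ArrayBuffer(4);
  new Uint8Array(buf).set(bytes);
  return new DataView(buf).getFloat32(0, false);
}
```

Values like 1.5 and -0.25 are exactly representable in single precision, so they round-trip without loss; most decimals will come back slightly rounded.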

In order to support read/writeObject I had to implement Action Message Format (AMF) serialization. I did that by manually converting BlazeDS’s Java code to ActionScript. The compress/uncompress and deflate/inflate methods require zlib compression code, which I implemented using the old FZlib library that is still floating around on the Web.

The code for AMF and FZlib is so large that I moved it into amf.swc and fzlib.swc. The current implementation of ByteArray doesn’t require amf.swc and fzlib.swc and will throw an error if you use, e.g., read/writeObject without linking amf.swc:

public function readObject():*
{
  // s_defaultDeserializer is implemented by amf.swc
  if( s_defaultDeserializer )
    return s_defaultDeserializer.readObject(this);
  throw new Error( "ByteArray: no default deserializer. " +
                   "Did you forget to link amf.swc?" );
}

 

MovieClip

As a Flex developer you rarely see or use MovieClip. But if you come from the Flash Pro world you constantly deal with MovieClips. The biggest pain about MovieClip is that it is a read-only interface. As a developer you are supposed to only play a MovieClip but never programmatically add any frames to it. When I started working more seriously with MovieClip I quickly realized that the API is pretty much useless if you want to, for example, cross-compile a game like Pew Pew.

A MovieClip is conceptually a Sprite with a Timeline. So here is what MovieClip looks like in flash.swc (and in my opinion that is how MovieClip should have been designed):

public dynamic class MovieClip extends Sprite
{
  protected var m_timeline : ITimeline = null;
  public function MovieClip(tl:ITimeline = null, element : SVGLocatableElement=null)
  {
    super(null,element);
    if(tl)
      m_timeline = tl;
    else
      m_timeline = new Timeline();

    ...
  }
  ...
}

The ITimeline interface allows you to add frames and retrieve TimelineObjects from the timeline. This change allowed me to cross-compile ActionScript code that integrates with Adobe Edge’s runtime.
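To show the writable-timeline idea in miniature, here is an invented sketch in JavaScript. The post only states that ITimeline lets you add frames and fetch TimelineObjects, so the method names addFrame, gotoFrame, and totalFrames are my assumptions, not the actual interface:

```javascript
// A timeline you can append frames to and step through, using Flash's
// 1-based frame numbering.
function Timeline() {
  this.frames = [];
  this.currentFrame = 0;   // becomes 1-based once frames exist
}

Timeline.prototype.addFrame = function(frame) {
  this.frames.push(frame);
  if (this.currentFrame === 0) this.currentFrame = 1;
};

// Jump to frame n (1-based) if it exists; return the current frame object.
Timeline.prototype.gotoFrame = function(n) {
  if (n >= 1 && n <= this.frames.length) this.currentFrame = n;
  return this.frames[this.currentFrame - 1];
};

Timeline.prototype.totalFrames = function() {
  return this.frames.length;
};
```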

 

Stage

There is really only one Stage per SWF. For that reason I made Stage a singleton:

public static function instance() : Stage
{
  if( !s_instance )
  {
    ...
    s_instance = createStage(w,h,null,bgcolor);
    ...
  }
  return s_instance;
}
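The lazy-initialization pattern above translates directly to JavaScript. In this sketch the createStage arguments are elided and StageImpl is a made-up stand-in; only the instance() shape comes from the code above:

```javascript
// Stage singleton: the private s_instance lives in a closure, and the
// first call to instance() creates it.
var Stage = (function() {
  var s_instance = null;

  function StageImpl() {
    this.children = [];
  }

  return {
    instance: function() {
      if (!s_instance) {
        s_instance = new StageImpl();
      }
      return s_instance;
    }
  };
})();
```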

 

What is next?

We finally made it through the long list of painful flash.swc classes!

So what’s next? These are the topics that I have in mind:

  • debugging
  • testing
  • optimizing

Please let me know if there are any topics you would like me to talk about.

 

Wrapping native JavaScript libraries in ActionScript

This post is about the second SWC of my “dreamed up” Flex SDK for JavaScript, browser.swc. In a nutshell, I want to put the main framework class, core JavaScript library class wrappers, and browser DOM API wrappers into browser.swc.

 

Framework class

When you start building a Flex SDK in ActionScript you will quickly realize that there are a bunch of utility functions that you will use over and over again. I have already shown you a few in this series about cross-compiling ActionScript to JavaScript:

  • as3.getProperty/setProperty/callProperty
  • as3.isProxy
  • as3.instanceOf
  • as3.addToUint
  • as3.vectorFilter
  • as3.implements

You might now recall that I put all of those utility functions into the as3 namespace. Instead of “as3” I am now using “adobe” as my main framework class, which also contains those low-level utility functions. I have only one rule for the main framework class: all members and methods have to be static. But you can tell FalconJS which framework class should be used.

My adobe framework class is not tied to a specific core JavaScript library like jQuery or Google Closure Library. Instead it uses a static member that points to an IFramework interface:

public static var m_framework : IFramework = null;

IFramework is my attempt to create an abstraction layer that works for most core JavaScript libraries.

 

IFramework

This is roughly what is in IFramework:

package browser
{
    import org.w3c.dom.Element;
    public interface IFramework
    {
       function isFunction( obj:Object ) : Boolean;
       function isArray( val: * ) : Boolean;
       function bind( fn:Function, selfObj:Object ) : Function;
       function exportSymbol(publicPath:String, 
            object : *, opt_objectToExportTo : Object = null ) : void;
       // Style interface
       function setStyle( element : Element, name : String, value : String ) : void;
       function getStyle( element : Element, name : String) : String;
       // Event interface
       function createEvent( type : String ) : Object;
       function listenToObject(src:Element, type:String, listener:Function,
               opt_capt:Boolean = false,
               opt_handler : Object = null ) : uint;
       function unlistenToObject(src:Element, type:String, listener:Function,
               opt_capt:Boolean = false,
               opt_handler : Object = null ) : Boolean;
       function dispatchEvent(eventTarget:Element, e:IEvent) : Boolean;
    }
}

Most core JavaScript libraries come with a bunch of utility functions for compensating for browser quirks, binding functions to instances, and event handling. For browser.swc I implemented JQueryFramework and ClosureFramework based on the IFramework interface above, and that has worked fine for me.
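The adapter idea behind IFramework can be shown in miniature. Here is a hypothetical backend implemented with nothing but built-ins (NativeFramework and isArrayViaFramework are invented names; only the isFunction/isArray signatures mirror the interface above), plugged into an "adobe"-style holder:

```javascript
// One abstraction, swappable backends: code written against the framework
// never cares whether jQuery, Closure, or plain JavaScript sits underneath.
function NativeFramework() {}

NativeFramework.prototype.isFunction = function(obj) {
  return typeof obj === "function";
};

NativeFramework.prototype.isArray = function(val) {
  return Object.prototype.toString.call(val) === "[object Array]";
};

// The main framework class holds a single pluggable framework instance.
var adobe = { m_framework: null };

function isArrayViaFramework(val) {
  return adobe.m_framework.isArray(val);
}
```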

I will explain later what org.w3c.dom.Element is all about.

 

Core JavaScript Library Classes

JQueryFramework implements IFramework but it uses jQuery functions. This is all done in ActionScript:

import com.jquery.jQuery;
public class JQueryFramework implements IFramework
{
   public function isFunction(obj:Object):Boolean
   {
      return jQuery.isFunction(obj);
   }
   ...
}

Okay, the next question is: What’s in com.jquery.jQuery?

package com.jquery
{
   public class jQuery implements IExtern
   {
      // http://api.jquery.com/jQuery.isFunction
      public static native function isFunction( obj:Object ) : Boolean;
   }
}

IExtern is something I came up with in order to let FalconJS know that an interface or class represents a JavaScript implementation. I could have used metadata tags, but at the time I wrote those wrappers FalconJS did not support metadata tags. Below I will show you a cleaner declaration of external classes.

As you can see I use the same trick as Tamarin does for declaring atomic classes and functions. Declaring isFunction() as a native function makes sure that FalconJS’s MXMLC (the JavaScript application compiler) will assume that the host environment (in our case the browser) provides those “native” implementations. That’s exactly what we want for all native JavaScript functions!

Let’s have a quick look at the emitted JavaScript for JQueryFramework.isFunction:

JQueryFramework.prototype.isFunction = function(obj)
{
   return jQuery.isFunction(obj); // not com.jquery.jQuery.isFunction(obj);
}

Wait a second, shouldn’t that be “return com.jquery.jQuery.isFunction(obj);”, because isFunction is a static method?

This is a little bit iffy, but because jQuery implements IExtern, FalconJS knows that there is no real com.jquery package. This assumption works for most of the JavaScript libraries I know. A much cleaner solution would use metadata tags, perhaps like this:

package com.jquery
{
 [Extern(name="jQuery")]
   public class jQuery
   {
      // http://api.jquery.com/jQuery.isFunction
      public static native function isFunction( obj:Object ) : Boolean;
      ...
   }
}

Switching over from IExtern to the Extern metadata tag is one of the clean-up tasks I have on my list for FalconJS.

Alternatively I could throw everything into the default package namespace just like the browser does. But I like using ActionScript’s package names. That way I can keep my projects better organized.
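What the [Extern(name="...")] mapping boils down to inside a compiler can be sketched as a lookup table from qualified ActionScript names to the bare identifiers the host provides. The table contents and the emitReference name below are examples of mine, not FalconJS internals:

```javascript
// Extern classes collapse to the host's global name; everything else keeps
// its package-qualified (or flattened) form.
var externTable = {
  "com.jquery.jQuery": "jQuery",
  "org.w3c.dom.Document": "Document"
};

function emitReference(qualifiedName) {
  return externTable.hasOwnProperty(qualifiedName)
      ? externTable[qualifiedName]
      : qualifiedName;
}
```

This is why the emitted code reads `jQuery.isFunction(obj)` rather than `com.jquery.jQuery.isFunction(obj)`.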

 

Browser DOM API

The jQuery example above used the “native” keyword for declaring functions that are part of external code that we expect the host environment to provide at runtime. There is a second way of declaring external functions that I prefer. In FlashRT most of the browser DOM APIs are defined as interfaces. This works pretty well, because in the browser you can get to all APIs through one root object called DOMWindow. For example, from DOMWindow you can get the Document, and from the Document you can get or create other DOM elements.

In my adobe framework class I added a static variable pointing to a DOMWindow:

import org.w3c.dom.DOMWindow;
... 
public static var globals : DOMWindow = null;

The startup code will set DOMWindow, which you can use to retrieve other browser DOM interfaces:

// ActionScript
package org.w3c.dom
{
    // map org.w3c.dom.DOMWindow to DOMWindow
    [Extern(name="DOMWindow")]
    public interface DOMWindow
    {
        ...
        function get document() : Document;
        function get console() : Console;
        function get XMLHttpRequest() : Class;
        function get navigator() : Navigator;
        function setTimeout(closure:Function, delay:uint, ...args) : uint;
        ...
    }
}

In the code snippet above I am only showing a small section of what is in DOMWindow. For a more complete list please see Mozilla’s documentation, or this neat DOM Reference Manual. I don’t know why, but the W3C specs don’t seem to define DOMWindow. Other interfaces like Document are described in their own interface definition language (IDL):

interface Document : Node {
  readonly attribute DocumentType     doctype;
  readonly attribute DOMImplementation  implementation;
  readonly attribute Element          documentElement;
  Element            createElement(in DOMString tagName)
                                        raises(DOMException);
  DocumentFragment   createDocumentFragment();
  Text               createTextNode(in DOMString data);
  Comment            createComment(in DOMString data);
  CDATASection       createCDATASection(in DOMString data)
                                        raises(DOMException);
  ProcessingInstruction createProcessingInstruction(in DOMString target,
                                                    in DOMString data)
                                        raises(DOMException);
  Attr               createAttribute(in DOMString name)
                                        raises(DOMException);
  EntityReference    createEntityReference(in DOMString name)
                                        raises(DOMException);
  NodeList           getElementsByTagName(in DOMString tagname);
};

It’s tedious but pretty easy to create corresponding ActionScript interfaces for those IDL snippets manually. Alternatively you could write a custom IDL compiler (like Google’s Dart team seems to use). But I don’t find it necessary to develop an IDL compiler for W3C specs. I would change my mind if W3C started pumping out 300 important specs a year that I have to write wrappers for.

 

Without a trace

Let me show you how all those pieces come together in this implementation of trace(), which is also in browser.swc:

package
{
    import adobe;
    import org.w3c.dom.Console;
    import org.w3c.dom.DOMWindow;

    /**
     * Displays expressions, or writes to log files, while debugging. A single trace
     * statement can support multiple arguments. If any argument in a trace statement
     * includes a data type other than a String, the trace function invokes the associated
     * <code>toString()</code> method for that data type. For example, if the argument is
     * a Boolean value the trace function invokes
     * <code>Boolean.toString()</code> and displays the return value.
     * @param arguments One or more (comma separated) expressions to evaluate.
     *                  For multiple expressions, a space is inserted between each
     *                  expression in the output.
     * @playerversion Flash 9
     * @langversion 3.0
     * @includeExample examples\TraceExample.as -noswf
     * @playerversion Lite 4
     */
    [DebugOnly]
    public function trace(...arguments) : void
    {
        const domWindow : DOMWindow = adobe.globals;
        const console : Console = domWindow.console;
        if( console != null )
        {
            var s : String = "";
            for( var i:uint = 0; i < arguments.length; i++ )
            {
                if( i > 0 )
                    s += " ";  // insert a space between expressions, as documented
                s += arguments[i];
            }
            if( s.length > 0 )
                console.info( s );
        }
    }
}

The [DebugOnly] metadata tag tells your cross-compiler that this function and any calls to it can be stripped out in release builds.

 

Who is in and who is not?

Here are a few files you would find in my browser.swc:

  • adobe.as – main framework class.
  • goog.as – root class with static native methods wrapping base.js functions.
  • trace.as – package function that calls adobe.globals.console.info().
  • browser/IFramework.as – interface used by adobe framework.
  • browser/JQueryFramework.as – implements IFramework in terms of jQuery.
  • browser/ClosureFramework.as – implements IFramework in terms of Google’s base.js.
  • com/jquery/$.as – wrapper for jQuery’s $ function.
  • com/jquery/Event.as – wrapper for jQuery’s Event object.
  • com/jquery/fn.as – wrapper for jQuery.fn.
  • com/jquery/jQuery.as – wrapper for jQuery’s root object.
  • goog/* – wrapper for Google’s Closure Library.
  • org/w3c/dom/* – wrapper for the browser DOM API.

You could argue that the wrappers for jQuery, the Google Closure Library, and perhaps even org.w3c.dom should go into their own separate SWCs. I wouldn’t disagree with you. The separation into browserglobal.swc, browser.swc, and flash.swc I have been describing so far just turned out to be the most practical one for me. I did try to merge browserglobal.swc with browser.swc. But as mentioned in my previous post I have not been successful so far.

 

Wrapping everything up

browser.swc contains the main framework class, core JavaScript library class wrappers, and browser DOM API wrappers.

When creating wrapper classes and interfaces for JavaScript libraries you can either use the “native” keyword or interfaces. I prefer interfaces over classes with “native” functions. But sometimes “native” functions are unavoidable, because ActionScript interfaces don’t support static functions.

I also recommend taking advantage of ActionScript’s package namespace feature. This requires you to do a little extra work, because browsers usually declare all classes in the default package (i.e. Document instead of org.w3c.dom.Document).

It turns out that it is beneficial to add extra information to wrapper classes and interfaces for JavaScript libraries that indicates that those are implemented by external code. I propose adding Extern metadata tags to those wrapper classes and interfaces, which could also be used to map package names to names used by the host environment (e.g. [Extern(name="jQuery")] would map the native class “com.jquery.jQuery” to “jQuery”).

 

In the beginning there was Object

After diving into the details of cross-compiling ActionScript language features to JavaScript we are now looking at runtime features. In my last post I suggested sorting all runtime features into groups that correspond to SWCs in the Flex SDK’s framework/libs folder. There are three SWCs that you would find in my “dreamed up” Flex SDK for JavaScript:

  • frameworks/libs/browser/browserglobal.swc
  • frameworks/libs/browser.swc
  • frameworks/libs/flash.swc

To keep things simple I will assume that you believe me that JavaScript can be embedded into SWCs using a custom COMPC.jar for JavaScript and later extracted with FalconJS’s MXMLC and baked into JavaScript applications. That should allow us to focus on the classes we could put into those three SWCs.

I’ll make the portions even smaller. Today I just want to look at what goes into browserglobal.swc.

 

Smashing Atoms

A reasonable goal for designing a Flex SDK for JavaScript starts in my opinion with the bare minimum set of classes and functions, the “atoms” of the runtime. Those atom classes should go into browserglobal.swc.

The best place to find our atom classes is the Tamarin source depot. In case you have never heard of Tamarin, it is Adobe’s open source version of the virtual machine that runs compiled ActionScript. Mozilla’s Tamarin project page looks a little bit like an abandoned space ship (“The Mozilla engineering team currently estimates that Tamarin will be incorporated into shipping versions of SpiderMonkey and Firefox in 2008”). But as far as I know Tamarin is still an active project. If you go to the Tamarin depot you’ll find under core a file called builtin.as, which includes 12 other ActionScript files:

include "Object.as"
include "Class.as"
include "Function.as"
include "Namespace.as"
include "Boolean.as"
include "Number.as"
include "Float.as"
include "String.as"
include "Array.as"
include "actionscript.lang.as"
include "Vector.as"
include "DescribeType.as"

I would like to prune that list even further and only pick the classes and functions that are supported by JavaScript in every browser.

Note that I am allowed to add the functions and variables from actionscript.lang.as to that list if I remove isXMLName(), which is okay, because I said earlier that we don’t want to support E4X. Error is a bit tricky, because ActionScript’s Error constructor signature differs from JavaScript’s, but that can easily be adjusted to match JavaScript’s Error class. Number.as contains the definitions for Number, int, and uint. I’ll move int and uint out, because only Number is supported by JavaScript.
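ActionScript's Error takes a message and a numeric id (exposed as errorID), while JavaScript's Error takes only a message. The adjustment can be sketched like this; makeAS3Error is a name I made up for illustration, not what browserglobal.swc actually does:

```javascript
// Adapt ActionScript's Error(message, id) signature onto JavaScript's
// one-argument Error, defaulting message to "" and the id to 0 like AS3.
function makeAS3Error(message, id) {
  var e = new Error(message || "");
  e.errorID = id || 0;
  return e;
}
```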

Here are my suggestions what we should do with the rest of the ActionScript classes in Tamarin’s core folder:

  • Boolean – should be emulated
  • Class – should be emulated
  • Number – int and uint need to be implemented as IntClass and UIntClass in browser.swc
  • Float – should be mapped to Number
  • Vector – should be mapped to Array
  • Namespace – should be mapped to package names
  • XML – should be mapped to browser.AS3XML in browser.swc
  • DescribeType – should go into avmplus of browser.swc
  • Proxy – should go into flash.utils of flash.swc
  • ByteArray – should go into flash.utils of flash.swc

Now that we have our list of classes that we picked as the bare minimum set for browserglobal.swc how shall we proceed?

 

browserglobal.swc

You compile them into a SWC using COMPC. But which COMPC? This might surprise you, but I compile those atomic classes with the “standard” COMPC and not with FalconJS’s COMPC. Why? Because all of those classes are defined as native classes with native methods, which means that MXMLC (the application compiler) will assume that the host environment (in our case the browser) will provide the implementations. In other words, when cross-compiling ActionScript to JavaScript we will not need JavaScript implementations for those atomic classes, and it doesn’t matter whether we use the “standard” COMPC or FalconJS’s COMPC.

Actually, it does seem to matter, because I have never been able to successfully build browserglobal.swc with FalconJS or in Flex. The only method that has worked for me is using a modified version of the Tamarin build scripts with the “standard” COMPC.

 

 

Dreaming up a Flex JavaScript SDK

We finally made it through the long list of language features and those odd three features (Weak References, Proxy, and Dictionary) that are language and runtime features at the same time. We are now with both feet in runtime feature land! A few days ago I thought that perhaps the best way to introduce cross-compiling ActionScript runtime features to JavaScript is by describing a fictional Flex JavaScript SDK.

 

Science Fiction?

This may sound a little bit like science fiction to you, but what if you launched Flash Builder, hit File/New/Flex Project and checked “Use a specific SDK” under Flex SDK Version. From the pop-up menu you would then select “Flex x.x for JavaScript” (x.x being the version number) and that would be everything needed to get you started on writing your first JavaScript application in ActionScript. Or even better: You open an old Flex Project, navigate to File/Properties/Flex Compiler/Flex SDK Version, change the SDK to “Flex x.x for JavaScript” and recompile your project to an HTML app.

That’s how easy creating HTML apps in Flash Builder should be in my opinion. Everything should be transparent, even Flash Builder’s source-level debugging of ActionScript running as JavaScript in the browser should just work. JavaScript and HTML would just be features delivered through a specialized Flex SDK.

 

Flex SDK’s Blue Print

Let’s have a look at the current structure of Flex SDKs. Flex SDKs are usually located in Flash Builder’s …/sdks/x.x.x version folders. For example you would find Flex SDK 4.6 in …/sdks/4.6.0. To me it would make sense to do the same with our Flex JavaScript SDK and use a folder like …/sdks/4.6.0-js.

Every Flex SDK has pretty much the same structure:

  • ant – support for ant (flex tasks).
  • asdoc – support for generating documentation from ActionScript source files.
  • bin – platform dependent launcher
  • frameworks – runtime libraries and skins
  • lib – Java JAR files used by the launchers in bin and by the Flash Builder IDE.
  • runtimes – native AIR framework for OSX.
  • samples
  • templates

Our Flex JavaScript SDK would probably use the same structure where it made sense. Only a few files and folders under lib and frameworks would need to be replaced.

 

What’s in the lib folder?

As mentioned earlier, the SDK’s lib folder contains Java JAR files that are used by the platform-dependent launch utilities in the bin folder and by the Flash Builder IDE. There are three JARs that I would like to focus on for now:

  • asc.jar – ActionScript compiler
  • mxmlc.jar – MXML and ActionScript compiler for creating applications (SWFs).
  • compc.jar – MXML and ActionScript compiler for creating libraries (SWCs).

Those are the main three JARs we would need to replace with our cross-compiler versions:

  • asc.jar (a.k.a. jsc.jar) – ActionScript to JavaScript cross-compiler.
  • mxmlc.jar (a.k.a. mxmljsc.jar) – MXML and ActionScript to JavaScript cross-compiler for creating HTML applications (JS).
  • compc.jar (a.k.a. compjsc.jar) – MXML and ActionScript to JavaScript cross-compiler for creating libraries (SWCs).

Those three JARs form FalconJS. Our Flex JavaScript SDK should probably use the original names asc.jar, mxmlc.jar, and compc.jar. Instead of lib/optimizer.jar FalconJS currently uses Google’s Closure Compiler, which is copied into the SDK at …/lib/google/closure-compiler/closure.jar.

 

What’s under frameworks?

There is a lot of stuff under Flex SDK’s frameworks folder. In order to keep things simple I will only look at two sub-folders, which I think are the most important ones.

  • frameworks/javascript currently only contains one folder called “fabridge”, which contains glue code for DOM/SWF interactions.
  • frameworks/libs is the folder where you can find almost all of the SDK’s SWCs.

The frameworks/javascript folder seems like a good place to put JavaScript libraries like jQuery and Google Closure Library. The fabridge folder does not make any sense in the browser and should be removed.

In the “regular” Flex SDK you'll find a SWC called player/x.x/playerglobal.swc within the frameworks/libs folder. It contains the implementations of the ActionScript core classes (Object, Array, etc.) as well as the complete Flash Runtime API (flash.display.*, etc.).

For our Flex JavaScript SDK I recommend splitting what traditionally goes into playerglobal.swc into two pieces: browserglobal.swc (for the ActionScript core classes) and flash.swc (for the Flash Runtime API). Those two SWCs form FlashRT (a.k.a. “runtime-js” in earlier versions). The following list probably describes the bare minimum set of files you need in order to cross-compile ActionScript that uses the Flash Runtime API to JavaScript:

  • frameworks/javascript/jquery-1.5.1.js
  • frameworks/javascript/goog/base.js
  • frameworks/javascript/…
  • frameworks/libs/browser/browserglobal.swc
  • frameworks/libs/browser/browserglobal.abc
  • frameworks/libs/flash.swc

The Flex-specific SWCs (Peter Flynn's FlexJS) are:

  • frameworks/libs/framework.swc
  • frameworks/libs/mobilecomponents.swc
  • frameworks/libs/mobiletheme.swc
  • frameworks/libs/mx.swc
  • frameworks/libs/spark.swc
  • frameworks/libs/sparkskin.swc

Please note that those SWCs are very special: they are created by FalconJS's compc.jar. You can use them in “regular” Flash Builder projects and even compile SWCs and SWFs with them. But the resulting SWCs and SWFs created by “regular” Flex SDKs will never run in the Flash Player. Only the FalconJS versions of asc.jar, mxmlc.jar, and compc.jar are able to extract JavaScript from the SWCs listed above and create JavaScript applications.

In the following posts I will explain FlashRT and its two main components, browserglobal.swc and flash.swc.

 

Dictionary

After discussing Weak References and Proxies there is only one feature left in that group of odd features that are language and runtime features at the same time: Dictionary.

ActionScript Dictionaries are essentially object (key) to object (value) maps. Since JavaScript only supports string (key) to object (value) maps we have to do a little bit of work.

 

String to Object Maps

Why can’t we just implement Dictionary as an ordinary Object like this?

// ActionScript:
public dynamic class Dictionary extends Object
{
}

Everything would work if we only used strings for keys:

// ActionScript:
var d : Dictionary = new Dictionary();
d["a"] = 1;
d["b"] = 2;
trace( "a=" + d["a"] ); // "a=1"
trace( "b=" + d["b"] ); // "b=2"

// JavaScript:
var d = new Dictionary();
d["a"] = 1;
d["b"] = 2;
trace( "a=" + d["a"] ); // "a=1"
trace( "b=" + d["b"] ); // "b=2"

But things wouldn’t go so well if we used objects instead of strings for keys, as we will see next.

 

Object to Object Maps

Consider this example:

// JavaScript:
var d = new Dictionary();
var a = {};
var b = {};
d[a] = 1;
d[b] = 2;
trace( "a=" + d[a] ); // WRONG: "a=2"
trace( "b=" + d[b] ); // "b=2"

Why are we seeing “a=2” instead of “a=1”? It’s a bit sneaky: a and b are forcefully converted to strings when used as keys of object maps. Since a.toString() and b.toString() both yield the same string “[object Object]”, the second expression d[b] = 2 overwrites the effect of the first expression d[a] = 1. Both keys a and b are really the same key once converted to strings. This is what’s really happening under the hood:

// JavaScript:
var d = new Dictionary();
d["[object Object]"] = 1; // a.toString()
d["[object Object]"] = 2; // b.toString()
trace( "a=" + d["[object Object]"] ); // "a=2"
trace( "b=" + d["[object Object]"] ); // "b=2"
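
This string coercion is easy to verify in plain JavaScript with an ordinary object, no Dictionary involved:

```javascript
// Object keys are coerced to strings (via toString()) when used as
// property names on a plain JavaScript object.
var a = {};
var b = {};

console.log(String(a));               // "[object Object]"
console.log(String(a) === String(b)); // true - both keys collide

var m = {};
m[a] = 1;
m[b] = 2; // overwrites m[a], because both keys stringify identically

console.log(m[a]);                  // 2
console.log(Object.keys(m).length); // 1 - only one entry survives
```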

This looks like a tough problem: How can we emulate object to object maps in JavaScript, which only supports string to object maps? Is this another nightmare?

It is not. The solution might surprise you.

 

Dictionary extends Proxy

The title says it all: a Dictionary with support for object to object maps can be implemented by deriving from Proxy. As you might recall, Proxy allows you to intercept access to properties. That’s exactly what we need! What if we derived Dictionary from Proxy and overrode its getter and setter? In the setter we could add a uid to the key object, which we would then use in the getter to retrieve values.

// ActionScript:
public dynamic class Dictionary extends Proxy
{
    private var map : Object = {};
    private static var uid : uint = 0;

    override flash_proxy function setProperty(name:*, value:*):void
    {
        // Tag the key object with a unique id and use that id
        // as the internal (string) map key.
        name.uid = ++uid;
        map[name.uid] = value;
    }

    override flash_proxy function getProperty(name:*):*
    {
        return map[name.uid];
    }
}

This rough first version of Dictionary actually works. FlashRT’s current implementation is a little more sophisticated: it optimizes string keys and also supports for-in and for-each loop iteration. But the core idea of deriving Dictionary from Proxy turned out to be a successful strategy.
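
For comparison, the core uid-tagging trick can be sketched in plain JavaScript without Proxy. ObjectMap is a hypothetical name, and explicit get/set methods stand in for ActionScript's property interception, which plain JavaScript of that era could not emulate:

```javascript
// A plain JavaScript sketch of the uid-tagging technique: tag each key
// object with a unique id and use that id as the string key internally.
var nextUid = 0;

function ObjectMap() {
  this.map = {};
}

ObjectMap.prototype.set = function (key, value) {
  if (key.__uid === undefined) {
    key.__uid = ++nextUid; // tag the key object exactly once
  }
  this.map[key.__uid] = value;
};

ObjectMap.prototype.get = function (key) {
  return this.map[key.__uid];
};

var d = new ObjectMap();
var a = {};
var b = {};
d.set(a, 1);
d.set(b, 2);
console.log("a=" + d.get(a)); // "a=1" - distinct object keys stay distinct
console.log("b=" + d.get(b)); // "b=2"
```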

 

Semi Weak References

We already know that Weak References are bad news. But here is another weird discovery: our Proxy based implementation of Dictionary kind of supports weak references, because the Dictionary does not keep any references to its keys. The values are still referenced, though. Hence “semi” weak references.

If this is confusing, look at it from this angle: we actually have to put in extra work if we don’t want any weak references:

// ActionScript:
public dynamic class Dictionary extends Proxy
{
    private var map : Object = {};
    private var keys : Object = null;
    private static var uid : uint = 0;

    public function Dictionary( useWeakKeys : Boolean )
    {
        if( !useWeakKeys ) { keys = {}; }
    }

    override flash_proxy function setProperty(name:*, value:*):void
    {
        name.uid = ++uid;
        map[name.uid] = value;
        // Holding on to the key object makes the reference strong.
        if( keys ) { keys[name.uid] = name; }
    }

    override flash_proxy function getProperty(name:*):*
    {
        return map[name.uid];
    }
}

Unfortunately “semi” weak references are not the weak references supported by ActionScript. That means we still have to pick one of the workarounds I came up with in my post about Weak References. The irony is that we ended up adding support for strong references to our Dictionary class.
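
The difference between the two modes can also be illustrated in plain JavaScript. This is a hypothetical sketch mirroring the ActionScript above, not the real FlashRT code: the “weak” variant stores nothing but the uid, while the strong variant keeps a reference to each key object.

```javascript
// Sketch of the strong vs. "semi weak" key behavior: the keys table is
// the only thing that makes the Dictionary reference its key objects.
var uidCounter = 0;

function Dict(useWeakKeys) {
  this.map = {};
  this.keys = useWeakKeys ? null : {};
}

Dict.prototype.set = function (key, value) {
  if (key.__uid === undefined) {
    key.__uid = ++uidCounter;
  }
  this.map[key.__uid] = value;
  if (this.keys) {
    this.keys[key.__uid] = key; // strong reference to the key object
  }
};

Dict.prototype.get = function (key) {
  return this.map[key.__uid];
};

var key = {};

var weak = new Dict(true);
weak.set(key, 1);
console.log(weak.get(key)); // 1
console.log(weak.keys);     // null - no reference to the key is kept

var strong = new Dict(false);
strong.set(key, 2);
console.log(strong.keys[key.__uid] === key); // true - key is retained
```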