Posts in Category "Adobe"

My Google I/O Presentation: New Web Tools and Advanced CSS/HTML5 Features from Adobe & Google

I had the good fortune of doing a presentation this year at Google I/O along with my coworker Vincent Hardy and Alex Danilo from Google. The topic is essentially everything Adobe is doing (or at least everything we’re ready to talk about) in the HTML space including tooling, services, and direct contributions to HTML5/CSS. We really had to struggle to fit everything into a single hour, but I think it’s a very comprehensive overview with a lot of great demos.


Parsing Exif Data From Images on Mobile Devices


If you’re interested in parsing Exif data from images on devices using AIR, you’re in the right place. But first, a little background to put this example in context.

Someone asked me a really good question at MAX about getting metadata about images on mobile devices. I don’t recommend using the file property of MediaPromise to get a reference to images because it doesn’t work across platforms (it works fine on Android, but not on iOS — for more information, see How to Use CameraUI in a Cross-platform Way). That means you don’t have access to certain properties on File like creationDate, size, etc. The best way to get information about images on mobile devices, therefore, is to parse out the image’s Exif data.

Exif stands for "Exchangeable image file format," and it provides a way for digital cameras and applications to encode information about images directly in the images themselves. In other words, Exif is to images what ID3 is to MP3 files. Since the data is image-specific, it is typically much richer than information associated with a file reference. Of course, it’s also a little bit harder to get to, so I figured I would write a sample application to demonstrate one way of doing it.

ExifExample (Github project, Flash Builder project) is a simple application which allows the user to either take a photo or browse to a photo already on the device, and then uses the ExifInfo library to parse the Exif data out of the image. The code is pretty simple, but there are a few things to pay special attention to:

  1. I don’t use the file property of MediaPromise to access the image. As I mentioned above, that would work on Android, but it doesn’t work on iOS, and it’s not guaranteed to work on other platforms in the future. Instead, I use a MediaPromise to either synchronously or asynchronously get at the bytes of the image.
  2. I only read the first 64K of the image before parsing out the Exif data. Exif data has to be within the first 64K of an image, which makes dealing with very large files much easier. If you’re downloading them, it saves time and bandwidth, and in the context of a mobile application, it saves memory. Now that the resolution of cameras on mobile devices is getting so high, you might not want to read entire images into RAM. (When mobile applications run out of memory, they tend to be shut down with little or no warning, so keeping your memory footprint as low as possible is always a good idea.) This is especially important if you want to display the thumbnails of multiple photos in a grid or list.
  3. Although the application supports both taking a picture using CameraUI and selecting a picture using CameraRoll, all the code that deals with the MediaPromise (both synchronous and asynchronous) is shared. If you have an application that supports both paths, I would recommend using this type of architecture in order to simplify your codebase.
  4. Even though this application is designed to run on mobile devices, it also supports selecting an image from a hard drive using the File.browseForOpen() function. If there’s an opportunity to add desktop functionality into a mobile application, I will usually do it since the more I can use and debug the application on the desktop before running it on a mobile device, the faster and smoother the development process goes. In order to support selecting an image from the desktop, I only had to add an additional 20 lines of code, and it definitely saved me much more time in testing and debugging than it cost me.

I should point out that this sample is not intended to highlight any particular Exif parsing library. I used the ExifInfo library because of its favorable license (MIT), but there are other libraries out there, as well (none of which I tried, so I can’t vouch for them). I should also point out that I had to modify the library according to Joe Ward’s instructions here in order to get it to work properly on iOS.
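The MediaPromise handling described in points 1 and 2 can be sketched roughly as follows. This is a hedged outline rather than the sample’s actual code: these methods would live inside your document class, and the parseExif() hand-off is a hypothetical stand-in for whatever entry point the Exif library you choose exposes.

```actionscript
// Handler for MediaEvent.SELECT dispatched by CameraUI or CameraRoll.
private function onMediaSelect(e:MediaEvent):void
{
    var promise:MediaPromise = e.data;
    if (promise.isAsync)
    {
        // On iOS, the data source is asynchronous: open it, then
        // wait for the bytes to become available.
        var input:IDataInput = promise.open();
        IEventDispatcher(input).addEventListener(Event.COMPLETE,
            function(event:Event):void { readImageBytes(input); });
    }
    else
    {
        // On Android, the data source can be read synchronously.
        readImageBytes(promise.open());
    }
}

private function readImageBytes(input:IDataInput):void
{
    // Exif data must appear within the first 64K of the image,
    // so there's no need to read more than that into memory.
    var bytes:ByteArray = new ByteArray();
    input.readBytes(bytes, 0, Math.min(input.bytesAvailable, 64 * 1024));
    parseExif(bytes); // hypothetical hand-off to the Exif parsing library
}
```

Because both CameraUI and CameraRoll dispatch the same MediaEvent.SELECT with a MediaPromise, a single handler like this covers both paths.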

How to Correctly Use the CameraRoll API on iPads

Now that iPads have built-in cameras, we have to start thinking about how to use camera-related APIs in a more cross-platform way. In particular, the CameraRoll API requires some consideration. On most devices, you can just call browseForImage on a CameraRoll instance, and trust that the right thing will happen. On iPads, however, that’s not enough.

On the iPhone and iPod touch, the image picker takes up the full screen; on the iPad, however, the image picker is implemented as a kind of floating panel which points to the UI control (usually a button) that invoked it. That means browseForImage has to be a little smarter so that it knows:

  1. How big to make the image picker.
  2. Where to render the image picker.
  3. What part of the UI the image picker should point to.

AIR 3 (in beta, available on Adobe Labs) allows developers to solve this problem with the introduction of the CameraRollBrowseOptions class. CameraRollBrowseOptions allows you to specify the width and height of the image picker as well as the origin (the location of the UI component that invoked it). On platforms whose image pickers fill the entire screen, the CameraRollBrowseOptions argument is simply ignored.

Below are screenshots of a simple AIR sample application that uses the new CameraRollBrowseOptions class to tell the OS where and how to draw the image picker:





The code for the application is available on Github (or you can download the Flash Builder project file), but I’ll include the important parts here:

package
{
  import flash.display.Sprite;
  import flash.display.StageAlign;
  import flash.display.StageScaleMode;
  import flash.events.Event;
  import flash.events.MouseEvent;
  import flash.geom.Rectangle;
  import flash.media.CameraRoll;
  import flash.media.CameraRollBrowseOptions;

  public class iPadCameraRollExample extends Sprite
  {

    private static const PADDING:uint = 12;
    private static const BUTTON_LABEL:String = "Open Photo Picker";

    public function iPadCameraRollExample()
    {
      super();
      this.stage.align = StageAlign.TOP_LEFT;
      this.stage.scaleMode = StageScaleMode.NO_SCALE;
      this.stage.addEventListener(Event.RESIZE, doLayout);
    }

    private function doLayout(e:Event):void
    {
      this.removeChildren();

      var topLeft:Button = new Button(BUTTON_LABEL);
      topLeft.x = PADDING; topLeft.y = PADDING;
      topLeft.addEventListener(MouseEvent.CLICK, onOpenPhotoPicker);
      this.addChild(topLeft);

      var topRight:Button = new Button(BUTTON_LABEL);
      topRight.x = this.stage.stageWidth - topRight.width - PADDING; topRight.y = PADDING;
      topRight.addEventListener(MouseEvent.CLICK, onOpenPhotoPicker);
      this.addChild(topRight);

      var bottomRight:Button = new Button(BUTTON_LABEL);
      bottomRight.x = this.stage.stageWidth - bottomRight.width - PADDING; bottomRight.y = this.stage.stageHeight - bottomRight.height - PADDING;
      bottomRight.addEventListener(MouseEvent.CLICK, onOpenPhotoPicker);
      this.addChild(bottomRight);

      var bottomLeft:Button = new Button(BUTTON_LABEL);
      bottomLeft.x = PADDING; bottomLeft.y = this.stage.stageHeight - bottomLeft.height - PADDING;
      bottomLeft.addEventListener(MouseEvent.CLICK, onOpenPhotoPicker);
      this.addChild(bottomLeft);
      
      var center:Button = new Button(BUTTON_LABEL);
      center.x = (this.stage.stageWidth / 2) - (center.width / 2); center.y = (this.stage.stageHeight / 2) - (center.height/ 2);
      center.addEventListener(MouseEvent.CLICK, onOpenPhotoPicker);
      this.addChild(center);
    }

    private function onOpenPhotoPicker(e:MouseEvent):void
    {
      if (CameraRoll.supportsBrowseForImage)
      {
        var crOpts:CameraRollBrowseOptions = new CameraRollBrowseOptions();
        crOpts.height = this.stage.stageHeight / 3;
        crOpts.width = this.stage.stageWidth / 3;
        crOpts.origin = new Rectangle(e.target.x, e.target.y, e.target.width, e.target.height);
        var cr:CameraRoll = new CameraRoll();
        cr.browseForImage(crOpts);
      }
    }
  }
}

Keep in mind that AIR 3 is still in beta, so you might find bugs. If you do, here’s how to file them.

Screencast of CSS Regions

One of the most popular demos at the Adobe booth at Google I/O this year was CSS Regions. CSS Regions represents Adobe’s attempt to bring magazine-level production to the web in a simple and standard way. Here’s a quick demo:


These samples are available on Adobe Labs along with documentation. For more details, check out Arno Gourdol’s article, CSS3 regions: Rich page layout with HTML and CSS3.

A Simple Zip Utility for Adobe AIR

I threw together a simple application for zipping and unzipping files in AIR using the FZip project. The application isn’t earth-shattering, but it’s a nice Flex 4 and AIR sample project, and all the source code is available. Screenshot below.

Continue reading…

A Third Kind of Orientation: Head-to-Head

Digital board games are going to be big. If you have any doubt, just check out Scrabble on the iPad. But to get the most out of digital board games, I think we’re going to need a third orientation mode in addition to portrait and landscape: head-to-head, or simply flat.

Of course, you don’t have to wait for devices or operating systems to support a third orientation mode. As long as you have access to the accelerometer, you can program head-to-head mode yourself. Below is an example of why a "flat" orientation makes sense (as well as a demo of iReverse running on a Lenovo X201 multi-touch laptop).
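One way to approximate a "flat" orientation yourself is to watch the accelerometer: when the device lies flat, gravity registers almost entirely on the z axis, with x and y near zero. Here’s a minimal sketch using AIR’s Accelerometer API; the 0.15 and 0.85 thresholds are arbitrary values picked for illustration, not tuned numbers.

```actionscript
var accel:Accelerometer = new Accelerometer();
if (Accelerometer.isSupported)
    accel.addEventListener(AccelerometerEvent.UPDATE, onAccelUpdate);

private function onAccelUpdate(e:AccelerometerEvent):void
{
    // Readings are in g. Flat means ~1g on z, ~0g on x and y.
    var flat:Boolean = Math.abs(e.accelerationX) < 0.15 &&
                       Math.abs(e.accelerationY) < 0.15 &&
                       Math.abs(e.accelerationZ) > 0.85;
    if (flat)
    {
        // Switch the UI into head-to-head mode: for instance, mirror
        // one player's controls so each player reads the board
        // right-side up from opposite sides of the device.
    }
}
```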

Continue reading…

Which Storage Devices Are Considered Removable?

AIR 2 has the ability to detect the mounting and un-mounting of storage volumes like flash drives, hard drives, some types of digital cameras, etc. (to see this in action, see A Demonstration of the New Storage Volume APIs in AIR 2). This feature basically piggybacks off of the operating system’s detection of storage devices. In other words, if the OS thinks something is a mass storage device, AIR will also recognize it as such and throw a StorageVolumeChangeEvent. If the OS does not recognize the device as a storage volume, AIR will not react to it. (Note: it is possible to detect and communicate with any type of peripheral in AIR 2 using external processes launched with the new NativeProcess API; the StorageVolume APIs are only for, well, storage volumes.)
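Listening for mounts and unmounts looks roughly like this in AIR 2 — a minimal sketch (see the linked demonstration for a full example). Note that on unmount, the event’s storageVolume property is null, so only the root directory is reported here.

```actionscript
var volumeInfo:StorageVolumeInfo = StorageVolumeInfo.storageVolumeInfo;
volumeInfo.addEventListener(StorageVolumeChangeEvent.STORAGE_VOLUME_MOUNT, onMount);
volumeInfo.addEventListener(StorageVolumeChangeEvent.STORAGE_VOLUME_UNMOUNT, onUnmount);

private function onMount(e:StorageVolumeChangeEvent):void
{
    // The OS recognized a new mass storage device.
    trace("Mounted: " + e.storageVolume.name +
          " at " + e.rootDirectory.nativePath);
}

private function onUnmount(e:StorageVolumeChangeEvent):void
{
    // e.storageVolume is null on unmount; only the path is available.
    trace("Unmounted: " + e.rootDirectory.nativePath);
}
```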

Continue reading…

Adobe Feeds Update

The new launch of Adobe Feeds (MXNA) has gone well, but there are two issues I’m seeing people report:

  1. Many people have been getting "Header Length Too Large" errors. Interestingly, this comes from cookie-related code that CF7 tolerated, but CF8 doesn’t. Anyway, the problem has been fixed. If you’re still seeing the error message, clear your feeds.adobe.com cookies, and it will never come back.
  2. It seems the web services are broken. This is probably the result of the query optimizations I made; I didn’t test all the web services against those changes. Oops. Sorry about that. I’ll get this fixed in the next couple of days, and report back when they’re working again.

We’ll get Feeds back to 100% in the next week or so. Please be patient with us!

The new MXNA (AXNA?)

First, I personally apologize for the downtime. We’ve been meaning to find MXNA a new home for quite a while now, and we finally decided to make the time to do it. Ironically, as we were working on moving the code over to the Adobe cluster, the old weblogs server went down in a big way. We’re still not sure what happened, but for some reason, Java was core dumping a few minutes after starting JRun. Rather than spending too much time fixing the old server, we decided to look ahead, put up an "under construction" page, and focus on the new platform.

Mike Chambers and I wrote MXNA five years ago, thinking we would aggregate a few dozen popular blogs. 100 at the most. We initially put it on our own server which we expensed every month. When we outgrew that, we moved it to a single Macromedia server which Mike and I managed entirely ourselves. That worked out well for a couple of years until we outgrew it, as well. By that time, we were Adobe — a much larger company with more infrastructure — so moving it over to the cluster was a fairly involved task.

But we didn’t just spread MXNA across a few more servers. As we began approaching 2,000 feeds, it became clear that the same code that managed 100 feeds wasn’t doing such a good job managing 1,800. So I finally set aside a day, installed CF8, imported the production database, and with some pointers from Ben Forta (I’d never even used CF8 before — I’ve been focusing on AIR for the last two years), started optimizing.

I spent most of my time rewriting queries, and working on reducing the number of queries per request. The most dramatic change I made was optimizing the search query which went from about 30 seconds to one or two. Be sure to give it a try.

Again, sorry not just for the recent downtime, but for all the intermittent downtime over the last year or so. Hopefully we’re past all that, and MXNA (AXNA?) will become a valuable community resource again.

More Information on Adobe’s $100 Million in Venture Capital

I’ve been getting tons of requests for more information on the Adobe $100 million venture capital investment plan. We now have a page up on the site, so if you’re interested, check it out. Don’t be put off by the last paragraph which talks about non-disclosure agreements. That’s a pretty standard investor disclaimer, and nothing to worry about from a reputable company. Good luck!