Web Platform Team Blog


Making the web awesome

Regions feature support matrix revisited – keeping it clean

Welcome to the Real World™

A while ago, I wrote a blog post on how we used Browserscope to create a feature support matrix for tracking the level of support for CSS Regions in different browsers. The initial plan was to have both the feature detection tests and the submission mechanism available to the public so that anyone could run the tests and submit their results.
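
At its core, such a feature detection test boils down to probing for the flow-into property on an element’s style object. Here’s a minimal sketch – illustrative only, not the actual test suite – using the vendor prefixes relevant at the time:

    // Minimal sketch of a CSS Regions support check (illustrative only).
    function supportsCSSRegions() {
        var style = document.createElement('div').style;
        return 'flowInto' in style ||       // unprefixed, per the spec
               'webkitFlowInto' in style || // WebKit nightly / Chrome
               'msFlowInto' in style;       // IE 10
    }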

Fast-forward a couple of months and the initial plan didn’t hold up very well to the rigors of the real-life Internet: submissions from irrelevant browsers (like the one running on the Nokia 72), false negatives from supported browsers (as the feature is behind a runtime flag), and plain empty results as people used our test key for personal experimentation. In this context it became apparent that we needed to revise both the process and the data we were exposing.

Fixing the process

Revising the process was fairly easy. We simply moved the code responsible for submitting results to an internal server, while leaving all the other bits – running the tests and viewing the aggregated results – in place.

The tricky part was cleaning up the existing data without re-running the tests on the whole list of browsers – some of which were already unavailable due to the fast-paced release cycle of modern browsers.

Fixing the data

The reason we initially chose Browserscope for the feature support matrix was that it did almost all the heavy lifting for us: user-agent parsing, results storage and aggregation, all through a very simple API. The downside, however, is that one has almost no control over the data once it gets inside Browserscope. The only exposed operations are reordering or removing test categories (basically, columns of the results table).

Browserscope’s way of keeping results relevant – in spite of the unavoidable mis-reports – relies on the large number of submissions, as the suites are usually public. For abuse protection, it caps the number of results pushed from any given IP address, while protection against outliers comes from the statistical processing of the submitted results.

In our case, the fact that WebKit nightly is by no means a mainstream browser and that Chrome keeps the regions implementation behind a run-time flag meant that we were more likely to get inaccurate results as more people ran the tests. So besides “hiding” the submit code from the general public, we also had to clean up the results ourselves.

The main challenge was removing irrelevant browsers from the results list (basically, rows of the results table) – there’s simply no nice way to do it through the system. You could hack your way around it by manually listing the user-agent strings you want to see/aggregate over, but that involves updating the list every time you land results from a new user agent. Ugh!

Delete as selective copy

Instead of actually deleting results, we resorted to creating a new results table – one that would contain only the relevant results/browsers. Once the copy was complete, we would switch to using the new table from then on.

However, what started as an “I know, I’ll use Python” moment turned into bitter reconsideration once I realized that:

  1. currently there’s no way to retrieve all the test results in a machine-friendly format, like CSV;
  2. Browserscope relies solely on the User-Agent HTTP header to aggregate results, so I had to use some API that would let me fiddle with that header, too;
  3. and last, but not least: the “API” for submitting results relies on executing some JavaScript code from the Browserscope server that includes, among other things, some CSRF protection magic. So I had to actually execute that code in a real JavaScript environment, which limited my options quite a bit.
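
For reference, Browserscope’s documented submission pattern is a page that defines a global _bTestResults object and then includes the beacon script, which picks up that global and records the results (the test key and test names below are placeholders):

    <script>
      // 1 = pass, 0 = fail; the test names are just examples
      var _bTestResults = {
        'regions-basic': 1,
        'regions-cssom': 0
      };
    </script>
    <script src="http://www.browserscope.org/user/beacon/YOUR_TEST_KEY"></script>

It’s the beacon script that carries the CSRF protection magic mentioned above, which is why the submission has to run in a real JavaScript environment instead of being replayed as a plain HTTP request.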

Most of these were things I knew in a corner of my mind, but they only became apparent upon closer inspection of the problem.
What I needed was basically an environment where I could:

  1. retrieve some HTML, scraping it for useful data;
  2. filter said data; and then
  3. execute some JavaScript in a browser-like environment whose user-agent string I could change at will.

After a short initial moment of panic :) I realized that a platform allowing me to do all of those things does exist, and that I’m quite familiar with it, it being an Adobe-built technology: Adobe AIR. One of the cool things about AIR is that it embeds a WebKit port, allowing developers to write desktop apps using just HTML/CSS and JavaScript.

HTMLLoader to the rescue

[Screenshot: the Browserscope submitter helper app]

Armed with this new-found hope, I created a helper app (pictured above) that does just a little more than the three steps I previously listed as requirements for the solution. More specifically:

  • it takes as input: the key of the source results table, the maximum number of results to fetch, the key of the destination results table, and the sandbox ID for the destination table
    • the sandbox ID is needed to bypass the per-IP cap on submissions, since we’ll most likely be pushing a lot of results from the same IP in a short period of time
  • based on the key of the source table, it fetches the table’s raw (non-aggregated) contents, nicely formatted as HTML (something like this); this is done via an AJAX call, and the response is parsed with jQuery to extract the list of tests and the individual test results
  • based on the list of tests and the individual results previously extracted, an interactive table is displayed that allows excluding irrelevant results
  • the non-excluded results are submitted to Browserscope using the AIR-specific air.HTMLLoader object
    • for each result, the userAgent property of an HTMLLoader instance is set to the user-agent string reported by the browser that submitted the result; then, using the loadString() method, a small piece of JavaScript is injected and executed to perform the actual submission (see the sketch after this list)
    • results are submitted one by one, asynchronously, using a pool of HTMLLoader instances
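
To make the mechanics concrete, here’s a minimal sketch of the submission step – illustrative, not the actual app’s code – assuming AIRAliases.js is loaded; buildSubmitPage() and the shape of the result object are hypothetical, following the beacon pattern shown earlier:

    // Sketch of submitting a single result (illustrative only).
    // result = { userAgent: '...', tests: { 'regions-basic': 1, ... } }
    function submitResult(result, destKey, sandboxId) {
        var loader = new air.HTMLLoader();
        // Browserscope aggregates purely on the User-Agent header, so report
        // under the browser that originally produced this result.
        loader.userAgent = result.userAgent;
        loader.addEventListener(air.Event.COMPLETE, function () {
            air.trace('Submitted: ' + result.userAgent);
            // in the real app, the loader would go back into the pool here
        });
        // loadString() parses and runs the page in a full WebKit environment,
        // so Browserscope's own beacon code (CSRF magic included) just works.
        loader.loadString(buildSubmitPage(result.tests, destKey, sandboxId));
    }

    // Hypothetical helper: wraps one result in a page that follows
    // Browserscope's beacon pattern.
    function buildSubmitPage(tests, destKey, sandboxId) {
        return '<html><body>' +
               '<script>var _bTestResults = ' + JSON.stringify(tests) +
               ';<\/script>' +
               '<script src="http://www.browserscope.org/user/beacon/' +
               destKey + '?sandboxid=' + sandboxId + '"><\/script>' +
               '</body></html>';
    }

Each HTMLLoader is a full WebKit instance, so a small pool keeps memory usage in check while still allowing several submissions to be in flight at once.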

Wrapping it up

Now that the cleanup is complete, you can see what CSS Regions features are supported in which browsers and run the tests yourself, just as before.

If there’s one important thing this clean-up/refactor taught me, it’s that when you know you’re (ab)using a technology, you’d better be prepared for the moment you hit the hard wall of limitations you knew existed from the very beginning. Also, as long as it gets the job done, no technology should be discarded just because it doesn’t seem appropriate at first. Oh, and most of the time, the path you already know is the shortest.
