Adobe Web Platform Team Blog

Making the web awesome

Crowdsourcing a feature support matrix using QUnit and Browserscope


This article is loosely based on @razvancaliman’s awesome post.

The idea

While we were working on the CSS Regions feature, one of the things people asked for, quite early on, was a way of telling which CSS Regions features were supported in which versions of the different browsers out there. In the beginning, “Get the latest WebKit nightly” was all nice and simple, but when the code got into Chrome – which has no fewer than three official channels – and Microsoft started working on their own Internet Explorer 10, things got more complicated. Especially since it was still not very easy to say what and how much of the spec was implemented at any given time.

This is how the CSS Regions support matrix was born. We decided early on that we wanted the support matrix out in the open, free for anybody to both check out the code for the tests and run them on their own setups – and eventually submit results.

So, what if you want to implement a similar feature support matrix yourself, maybe for another CSS/HTML spec? Well, you could go straight to the code on GitHub, or you could read through this post for a walkthrough and then go check out the code.

The “hidden” stuff

As the title says, under the hood the support matrix is powered by QUnit, Browserscope and a suite of feature detection tests we wrote. There’s also a sprinkle of Twitter Bootstrap and jQuery, but that’s for the shiny UI, more on which later on.

Why QUnit?

We chose QUnit for a number of reasons, mainly having to do with its flexibility and how easy it is to write tests. We’ll start with the latter.

Writing QUnit tests is merely a matter of including qunit.js and calling test() with a function callback containing your assertions. By default, QUnit automatically runs all the tests once they’re loaded, but if you want to postpone running the tests, there’s a switch for that (QUnit.config.autostart) and a function to call later (QUnit.start()).

All in all, a very simple feature detection test for the hypothetical, yet oh-so-useful ‘sparkle’ CSS property might look like this:

<script type="text/javascript" src="qunit.js"></script>
<script type="text/javascript">
//Override tests autostart default
QUnit.config.autostart = false;

test("sparkle: rainbow;", function() {
    //Since QUnit is based on jQuery we’re getting it for free
    var $div = $('<div/>').css('sparkle', 'rainbow');
    //Any well-behaved browser will throw away a CSS declaration it doesn’t support.
    //As such, only browsers that support 'sparkle' will pass this assertion
    equal($div.css('sparkle'), 'rainbow', 'This browser cannot sparkle');
});

//Run the tests at a button’s click - useful if there’s more than just QUnit on the page
$('#mybutton').click(function() {
    QUnit.start();
});
</script>

Like most unit testing frameworks out there, QUnit provides hooks that let you integrate it with different build or report systems. As we’ve seen, tests can be auto-run or deferred to a later time. There are also hooks to execute an action after every test and after all the tests have run. These are useful, for instance, for collecting data across tests and then sending it somewhere else for processing. This, again, is fairly simple to do:

<script type="text/javascript">
//All the magic from above comes here plus some more,
//say another test that checks support for sparkle: 'fake'
var passedTests = 0;

//This gets called after each test
QUnit.testDone = function(t) {
    //t is an object containing details about the test that just ran
    //  t.name is the name of the test
    //  t.failed is the number of assertions failed
    //  t.passed is the number of assertions passed
    //  t.total is the total number of assertions
    //If the test passes, just increase the number of passed tests
    if (!t.failed && t.total == t.passed)
        passedTests++;
}

//This gets called once all the tests have run
QUnit.done = function(r) {
    alert(passedTests + ' have passed, submitting results');
    window.location = 'http://www.example.com/dashboard?passed=' + passedTests;
}
</script>

The snippet above uses the testDone() hook to count the passing tests and the done() hook to report the results to a dashboard hosted at example.com. If that looks too simple, it’s because it really is. The next section will show you neater things that can be done using these tricks.

Why Browserscope?

So now you have a bunch of feature detection tests. How do you collect, store and present the results without a headache and writing too much code? That’s where Browserscope takes the spotlight.

In a nutshell, Browserscope is an open-source distributed testing platform designed with browser profiling and feature testing in mind. What this means is that it allows anyone with a browser and an internet connection to run your tests, provided they’re hosted on a publicly available site. It then takes care of collecting and grouping the results, based on the users’ browsers.

Your tests and the results people get by running them are associated with an API key you can get by logging in to Browserscope (you can find extensive details here). You can generate more than one key – basically, each key represents a test suite and its results.

While all this talk about tests might get you thinking about testing frameworks, or at least some APIs or helpers, the reality is a lot simpler. The only thing Browserscope cares about is that you fill in the _bTestResults object with key-value pairs. The keys represent the names of the tests in the suite and the values represent the score each test achieved. As long as the values are numbers, they can be as simple as a 1 for pass and a 0 for fail, or as complex as percentages and fractions. Once the _bTestResults object is filled in, all you need to do is dynamically load a script from Browserscope and it will automagically send your results to the cloud for processing.

Too much talking and too little code? Here’s how you would go about sending the test results to Browserscope for our beloved ‘sparkle’ CSS property.

var _bTestResults = {};

//Let’s assume that fake sparkle is supported, but rainbow sparkle isn’t
//In Real Life(TM) you would fill in this object after doing some testing,
//but here we’ll pretend we just know

//"sparkle: fake" and "sparkle: rainbow" are the names of the tests
_bTestResults["sparkle: fake"] = 1;
_bTestResults["sparkle: rainbow"] = 0;

//We have the results object, let’s call the script!
var testKey = 'CHANGE-THIS-TO-YOUR-TEST-KEY',
    newScript = document.createElement('script'),
    firstScript = document.getElementsByTagName('script')[0];
newScript.src = 'http://www.browserscope.org/user/beacon/' + testKey;
firstScript.parentNode.insertBefore(newScript, firstScript);

Congratulations, that’s it! Run this code in your browser (after filling in your test key) then head to http://www.browserscope.org/user/tests/table/CHANGE-THIS-TO-YOUR-TEST-KEY?v=3&layout=simple to check for your results.

Putting it all together

If you have followed along in the previous sections, it should be pretty clear how the two work together and how they can be used to deploy a minimally functional feature support matrix. For the sake of completeness, the snippet below has pretty much everything you need to get going. Just add some markup and CSS of your liking and you’re good to go.

One final note, though: the example below automatically sends the results to Browserscope. In practice, the nice thing to do is to ask your users before sending their results to Browserscope (e.g. via an opt-in checkbox, a confirmation dialog, etc.).

var _bTestResults = {};

//Override tests autostart default;
QUnit.config.autostart = false;

//Start the tests upon mouse-click on a button
$('#mybutton').click(function() {
    QUnit.start();
})

//After each test, save its results in the browserscope results object
QUnit.testDone = function(t) {
    //Very simple reporting, just 1 or 0 when the test passes or fails
    if (!t.failed && t.total == t.passed)
        _bTestResults[t.name] = 1;
    else
        _bTestResults[t.name] = 0;
}

//Once all the tests have run, send the results to browserscope
QUnit.done = function(r) {
    var testKey = 'CHANGE-THIS-TO-YOUR-TEST-KEY',
        newScript = document.createElement('script'),
        firstScript = document.getElementsByTagName('script')[0];

    newScript.src = 'http://www.browserscope.org/user/beacon/' + testKey;
    firstScript.parentNode.insertBefore(newScript, firstScript);
}

//Now, our two tests for the 'sparkle' property
test("sparkle: rainbow;", function() {
    var $div = $('<div/>').css('sparkle', 'rainbow');
    equal($div.css('sparkle'), 'rainbow', 'This browser cannot sparkle');
})

test("sparkle: fake;", function() {
    var $div = $('<div/>').css('sparkle', 'fake');
    equal($div.css('sparkle'), 'fake', 'This browser cannot even fake sparkle');
})
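As mentioned earlier, the nice thing to do is to ask your users before beaconing their results. A minimal sketch of that opt-in, assuming a hypothetical checkbox with the id send_results (the id is our invention, not something Browserscope requires):

```javascript
// Decide whether to beacon, given a checkbox element (or any object with
// a 'checked' flag). Returns false when the element is missing or unticked.
function shouldSendResults(checkbox) {
    return !!(checkbox && checkbox.checked);
}

// In the browser, the QUnit.done hook from the snippet above would then start with:
// QUnit.done = function(r) {
//     if (!shouldSendResults(document.getElementById('send_results'))) return;
//     /* ...load the Browserscope beacon script as before... */
// };
```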

If you want some tips on how to improve the feature support matrix with a UI that… sparkles, read on!

Making it readable

OK, so now that you’ve got your feature detects running and the results comfortably aggregated in Browserscope, how do you show the world the level of support for your new feature? The simplest way is to use Browserscope’s ready-made HTML widget. Adding a line like this

<script type="text/javascript" src="http://www.browserscope.org/user/tests/table/CHANGE-THIS-TO-YOUR-TEST-KEY?o=js"></script>

to your code will load and display a table with the test results, grouped by “top browsers”. Different URL parameters can be used to customize both the data that’s being sent to the client and the format of this data. The most important URL parameters are:

  • v – specifies how the data is aggregated and what browsers are included. You can choose to group data by predefined categories of browsers (top, top-d, top-d-e, top-m) or by the browsers for which there are actual test results stored (0, 1, 2, 3). For instance v=3 will return all browser versions, while v=top will return the top browsers list (regardless of whether your tests were run on them or not).
  • o – specifies the format of the data. o=html is actual HTML code, o=js loads JavaScript code that will render the table and o=json will return the test results encoded as a JavaScript object.
  • w and h – set the width and height of the HTML widget, when using o=js or o=html.
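To avoid typos when combining these parameters, you could build the widget URL with a small helper like the one below (a sketch of ours, not part of the Browserscope API):

```javascript
// Build the Browserscope results-table URL from the parameters described above.
function browserscopeTableUrl(testKey, options) {
    options = options || {};
    var base = 'http://www.browserscope.org/user/tests/table/' + testKey,
        params = [];
    // v: aggregation level ('top', 'top-d', 'top-m', or 0-3)
    if (options.v !== undefined) params.push('v=' + options.v);
    // o: output format ('html', 'js' or 'json')
    if (options.o) params.push('o=' + options.o);
    // w/h: widget dimensions, only meaningful for o=js and o=html
    if (options.w) params.push('w=' + options.w);
    if (options.h) params.push('h=' + options.h);
    return params.length ? base + '?' + params.join('&') : base;
}

// For example:
browserscopeTableUrl('MY-TEST-KEY', { v: 3, o: 'json' });
// → 'http://www.browserscope.org/user/tests/table/MY-TEST-KEY?v=3&o=json'
```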

To handle the actual rendering of the results yourself, choose JSON as the data format. Doing so will return a JavaScript object that might look like this:

{
    "category": "usertest_YOUR_TEST_KEY_HERE",
    "category_name": "CSS sparkle",
    "v": "1", /*stats grouped by major browser version*/
    "results": { /*results stored as a dictionary with the browser names as the keys*/
        "Chrome 20": {
            "count": "3", /*total number of times the test suite was run on this browser (group)*/
            "summary_score": "100", /*percent of tests that pass*/
            "results": {
                "sparkle: rainbow": { /*sparkle:rainbow supported*/
                    "result": "1"
                },
                "sparkle: fake": { /*sparkle:fake supported*/
                    "result": "1"
                },
                "sparkle: none": { /*sparkle:none supported*/
                    "result": "1"
                }
            }
        },
        "Firefox 13": {
            "count": "2", /* notice the tests were only run twice in Firefox;
                this could explain why there’s no test for "sparkle:none" */
            "summary_score": "0",
            "results": {
                "sparkle: rainbow": {
                    "result": "0"
                },
                "sparkle: fake": {
                    "result": "0"
                }
            }
        }
    }
}
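For instance, here is one way to flatten an object like the one above into PASS / FAIL / N/A rows (a sketch of ours, not code from the actual support matrix):

```javascript
// Flatten a Browserscope JSON payload into rows of PASS / FAIL / N/A markers,
// one row per test, one column per browser.
function resultsToMatrix(data) {
    var browsers = Object.keys(data.results),
        testNames = {},
        rows = [];

    // Collect the union of test names across all browsers, since some
    // browsers may be missing results for some tests (like Firefox above)
    browsers.forEach(function(browser) {
        Object.keys(data.results[browser].results).forEach(function(name) {
            testNames[name] = true;
        });
    });

    Object.keys(testNames).forEach(function(name) {
        var row = { test: name };
        browsers.forEach(function(browser) {
            var r = data.results[browser].results[name];
            // No stored result for this browser/test combination means N/A
            row[browser] = r ? (r.result === "1" ? 'PASS' : 'FAIL') : 'N/A';
        });
        rows.push(row);
    });
    return rows;
}
```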

This object can then be turned into a matrix-like table with PASS / FAIL / N/A markers for each browser and test combination, or into a bar chart showing the overall level of support for each browser. You can use Bootstrap to easily give your page a modern look: just download and unzip it, then include the provided CSS and JS files in your HTML – both come in vanilla and minified versions, if you want to snoop around or just drop them in. In the end, the skeleton of your support matrix will likely look like something along these lines:

<!DOCTYPE html>
<html>
<head>
    <title>CSS Sparkle feature support matrix</title>
    <script type="text/javascript" src="bootstrap/js/bootstrap.js"></script>
    <script type="text/javascript" src="qunit/jquery.js"></script>
    <script type="text/javascript" src="qunit/qunit.js"></script>
    <script type="text/javascript">
        $(function(){
            /* Logic for loading & displaying Browserscope results */
            /* Bind button click for test running and other events */
        });
    </script>
    <script type="text/javascript">
        /* Your feature detects */
    </script>
</head>
<body>
    <h1>CSS Sparkle feature support matrix</h1>
    <section>
        <h1>Test my browser</h1>
        <form>
            <button id="start_button">Run tests</button>
            <label for="send_results">
                <input id="send_results" type="checkbox" name="send_results" />
                Send my results to Browserscope
            </label>
        </form>
    </section>
    <section>
        <h1>Browser support</h1>
        <div id="table_container"><!-- The results table will be added here via JavaScript --></div>
    </section>
</body>
</html>

In the end

If you want to dive deeper into the code for the CSS Regions support matrix UI, here are a couple of tips to get you up to speed faster: it relies on Twitter’s Bootstrap for general styling and enhanced forms, while more specific styling is done using Sass. The logic for processing and displaying the Browserscope results is in the assets/js/results.js file, while the actual feature detects are in the assets/js/testregions.js file. The index.html file is a good starting point to get an idea of how things flow and are tied together.

Last but certainly not least, feel free to fork this project and write your own feature support matrices for your favorite bleeding-edge specs. Also, bug reports and pull requests are most welcome. Go sparkle!
