In my last post, I discussed audience measurement – unique visitors, page views, time spent on site, and impressions – and why I believe time spent on site is not, contrary to what some are saying in the trade press, the best metric for measurement. Today, more details on why page views are not necessarily the best measure of visitor engagement…

As with other advertising media, the online audience measurement industry was born out of the need to provide publishers with a common currency by which they could market and qualify their sites to prospective advertisers.

If I’m an advertiser looking to reach 1 million people with a new movie promotion, how would I know which media sites I should advertise on? If I had a common metric – or a common set of metrics – I could quickly scan that list to find sites that reach 1 million people, and then buy that space to reach my audience.

For simplified audience measurement like this, Nielsen is the de facto standard in the offline world, but at the outset of the Internet, there was no online equivalent.

In the late 1990s, audience measurement firms sought to be this common currency, offering up metrics like unique visitors, page views, and time-spent-on-site. Nielsen was one of the first to throw its hat in the ring. They created a service that projected these metrics for major Internet sites, based on a panel they maintained of several thousand users.

This is nearly identical to their offline approach, and why not: if it worked in the offline world, why not give it a shot online? The challenge is that this panel-based approach is easily skewed and only useful at a very high level. As I’ve talked about in the past, that’s because the Internet offers the potential to successfully reach people in extremely narrow niches of interest. If you’re a knitter who also likes to quilt but who hates to crochet, there’s probably a website for you and others with the same likes and dislikes. On the other hand, it is highly unlikely that, even with several thousand members, an audience measurement panel will include many knitting, quilting, crochet-haters.

Audience measurement firms will likely then struggle to measure the niche-y craft site, when in reality, that site may see tens of thousands of visitors per month. A yarn company looking for places to advertise, but who goes only by panel responses, may miss out on the site completely, never knowing there was a small but important group of crafts enthusiasts potentially eager to see the yarn company’s ads. And as many folks know, loyal customers can be 7x more valuable than new customers, so tapping into this niche customer segment is critical.

Along these same lines, targeted direct marketing initiatives like email campaigns, paid search, new microsites, etc., can also be understated by such panel services. Again, those initiatives are likely to hit only a handful of the panelists, and a “handful” is generally viewed as not being statistically significant enough to surface as a meaningful trend or change.

Similarly, when sites add new content – new articles, special editions, etc. – these can be understated or undetected. By how much? There’s no way to tell for sure, unless you use web analytics, which is arguably the most accurate way to measure the success of these initiatives.

Still in doubt? Run a simple test. If you’re a retailer, look at how many orders you have on a given day as reported by your commerce engine. Now check your web analytics platform. The orders, generally speaking, should be within 2-3% – if not perfectly in line. Now, check with an audience measurement firm – what are they reporting for the day? I’ve done this multiple times and never seen anything close to accurate. If you’re not a retailer, pick something else – like leads, applications, etc. – that you can validate not only with web analytics but also with a back-end system. The key to this exercise is triangulation, so you need at least one more data source beyond your web analytics and audience measurement services.
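The triangulation test above boils down to a simple percent-difference check. Here is a minimal sketch with invented numbers (all figures and source names are hypothetical, just to show the arithmetic):

```python
# Hypothetical daily order counts for the same day, pulled from three
# sources: the commerce engine (system of record), a site-side web
# analytics tool, and a panel-based audience measurement service.
commerce_orders = 1000   # back-end system of record
analytics_orders = 975   # site-side web analytics
panel_orders = 640       # projected from a measurement panel

def pct_diff(observed, reference):
    """Percent difference of an observed count from the reference."""
    return abs(observed - reference) / reference * 100

# Site-side analytics should land within roughly 2-3% of the back end;
# panel projections, in my experience, land nowhere close.
print(f"analytics vs commerce: {pct_diff(analytics_orders, commerce_orders):.1f}%")
print(f"panel vs commerce:     {pct_diff(panel_orders, commerce_orders):.1f}%")
```

Whichever source you treat as the reference, the point is the same: you need a third, independent count before you can say which of the other two is off.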

Of course, site-side analytics has historically offered very little to advertisers in evaluating competing sites, so I readily acknowledge that audience measurement can be a useful proxy for comparative traffic levels (as I’ve written about in the past).

Still, in the late 1990s, when audience measurement firms introduced these panels, advertisers were understandably excited, because at least they could compare one site to another with the same metrics. In fact, for some time, venture capitalists and investment bankers often used these same services to estimate valuations for pre-IPO internet companies, using unique visitors as the measure of “eyeballs” the site could presumably monetize into paying customers some day.

Around the same time, page views also came to be viewed as a measure of engagement. Folks began to realize that not all unique visitors are created equal: two sites that each have 1 million visitors can be very different from each other in terms of reach, if most of the visitors to one of the sites come to the home page and then leave immediately, while visitors to the other site stay and browse.

Page views, then, became the check and balance against unique visitors, and ideally the two taken together could provide a rounded assessment of site engagement and revenue potential.
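That check-and-balance is usually expressed as page views per visitor. A quick sketch with made-up numbers (site names and figures are illustrative only) shows how two sites with identical unique visitors can tell very different engagement stories:

```python
# Two hypothetical sites with the same unique visitors but very
# different browsing depth. All numbers are invented for illustration.
sites = {
    "site_a": {"unique_visitors": 1_000_000, "page_views": 1_200_000},
    "site_b": {"unique_visitors": 1_000_000, "page_views": 8_500_000},
}

for name, stats in sites.items():
    pages_per_visitor = stats["page_views"] / stats["unique_visitors"]
    print(f"{name}: {pages_per_visitor:.1f} page views per visitor")
```

Site A’s visitors bounce after roughly one page; Site B’s visitors browse deeply – a distinction unique visitors alone would hide.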

The challenge with page views is that they are actually not standardized. Nielsen and other audience measurement firms could control unique visitors because they managed the panels themselves. They paid or otherwise compensated each member of the panel so that uniqueness was fairly well preserved.

But audience measurement firms do not and cannot control page views, because page views originate from the sites themselves. Pages come in all different shapes and sizes, some with dynamic content and some that are completely static. Not all page views are created equal – and audience measurement firms are faced with the impossible task of trying to create a common standard. And let’s pretend for a second that this was achievable…that audience measurement firms had picked apart every web page from every site and classified each as a page view. Well, Web content can change multiple times per day per site, so while the utopian standard could have theoretically been accurate, it would have quickly become inaccurate as content and layout changed.

For example, there is the standard HTML page that we all know. That’s fairly easy to standardize across sites.

But then there are generated pages with dynamic URLs, such as those on retail websites that create new URLs on the fly for each product. What do you do with that? On top of that, you have dynamic pages that do not change the URL at all (see my example about the GAP in my previous post).
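To make the dynamic-URL problem concrete, here is a minimal sketch of one common workaround: collapsing per-product URLs into a single logical page template before counting. The domain, URL patterns, and bucket names below are all invented for illustration:

```python
from collections import Counter
from urllib.parse import urlparse

# A retail site might mint a distinct URL for every product, so a naive
# count treats each product page as a different "page". Collapsing them
# into a template keeps the count comparable over time.
raw_views = [
    "https://shop.example.com/product/12345-red-sweater",
    "https://shop.example.com/product/67890-blue-scarf",
    "https://shop.example.com/about",
]

def to_template(url):
    """Collapse per-product URLs into one 'product detail' bucket."""
    path = urlparse(url).path
    if path.startswith("/product/"):
        return "/product/:id"
    return path

# Three raw URLs collapse into two logical page types.
print(Counter(to_template(u) for u in raw_views))
```

Of course, every site needs its own normalization rules – which is exactly why no third party can impose a single page-view standard across the Web.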

And streaming media and widgets are not even pages – they are complete experiences in and of themselves.

As the Web has evolved, these “non-traditional” pages have become increasingly prevalent, because in many cases, they provide a superior customer experience.

So while page views emerged as an early measure of engagement, it was never really fair to compare them across sites – whether they were tracked by a panel or otherwise.

In my next post, I’ll discuss audience measurement firms’ “new” metric, time-spent-on-site.


I still like to measure pageviews per visitor, and I'm curious as to the industry average. I have never been able to find an accurate number. I guess by reading the article above, you really can't compare that either?

Rob Blakeley

Some thoughts: If advertisers want to pay for time-spent, they will get it, or they may go elsewhere. That aside, what's the goal of engagement, time-spent, or any other metric or combination of metrics? Follow the money. If the metric trend line matches the profit trend line, then you have validated the metric. In fact, there is a good chance that the metric would be predictive. If not, stop using it. Whose money? If you are a retailer, it's your money. If you are an advertiser, then it's sort of your money. They pay you to drive their profit. Demonstrating that with your metric would require more cooperation and risk than most companies are willing to undertake.