In an update to part 2 of this series, I decided to reference the “Click, Baby, Click” ad that was launched on October 8, 2013, as part of an Adobe campaign aimed at helping marketers understand the need for more intelligent optimization tools. While comical in nature, it correctly teaches the dangers of using clicks as a primary measure of success.


As a quick reminder, these posts discuss how click tracking should be used only as a small part of a site’s measurement strategy. “Click Candy,” as I call it, should be treated much like real candy in a balanced diet: used sparingly. My four main reasons are:

  1. Click tracking is difficult and costly to implement, maintain, and report on.
  2. Heat maps and other click measurement tools rarely tell the whole story and have inherent flaws.
  3. Accurate conclusions are reached by measuring KPIs, not clicks.
  4. Clicks are not a measure of site usability or navigation effectiveness.

The last post covered the first two points and argued that the tools and methodology around measuring clicks are inherently difficult and produce questionable data. I’ll now address the last two reasons and offer a quick tip to help implement an appropriate replacement for click tracking.

Accurate conclusions are reached by measuring KPIs, not clicks.

This section comes from the concept that clicks are not a KPI (in most cases). Start with an example: after a site redesign, the click metrics report that the new “watch video” button on article pages is red hot. Users love the video content, right? There is no way to tell, because we’re looking at the wrong metric. It’s quite possible that video ad views (the actual KPI) are down significantly because users start videos but don’t watch enough to generate more ad views than they did in the previous design. Focusing on the clicks instead of the video ad views misses the fact that the new design is not helping generate revenue.

Let’s also consider what happens if click data simply points you in the wrong direction. Using the same example, what if a heatmap reports that the new “watch video” button is not being clicked very often? The first impulse might be to test different variations of the button. Because it has received so few clicks, surely it is the culprit, right? Maybe…but maybe not. The site was redesigned, so it could be any number of things. Is the “watch video” button even valuable? Is it just in the wrong place on the page? Is there something else on the page that prevents it from getting attention? The point is that the click data incorrectly anchors one into thinking that the button is the problem when the actual problem is low ad views, which may have nothing to do with the button at all.

Clicks are not a measure of site usability or navigation effectiveness.

I’ve saved this argument for last because it is probably the most frequently stated reason for wanting to measure clicks. My disagreement stems from one concept: people are not robots. For whatever reason, many feel that users browse their sites in a defined order, with a singular purpose: click from page A to page B, then place an order on page C, with no deviation; therefore, the number of clicks on key page elements means success. Is that how you browse a site? Of course not! You open multiple tabs on the same site, compare what the site offers to another site (also probably open in another tab), find side avenues of interest, and so on. If the number of times something is clicked is the measure of how useful it is, then real optimization opportunities are being missed.

Let’s go with another example. Say your site has internal search functionality. Any heatmap, overlay, or table report is going to tell you it is one of the most clicked places on any page. The real question remains: is it helping the customer convert? Will measuring the number of clicks on the search results tell you that? No. Those clicks report what users think is the answer to their search based on what is presented to them, not what actually answers their search and, ultimately, helps them convert. Millions of clicks on search results do not mean that the internal search tool is effective.

The same holds true for that special widget that Marketing and IT worked on together to aid customers in finding what they need. Like internal search results, millions of clicks on it don’t mean anything if the users who used it didn’t actually convert or go on to view more content.

What about navigation? The same argument applies. The number of clicks on a navigation element does not mean it was effective at getting users where they wanted to go. The only thing clicks tell us is that the element made users think it would take them where they wanted to go.

What should be measured and how?

The real measure of usability is whether the tools, layout, and menus provided help the customer convert, regardless of the path they took, what they clicked on, or how many times they clicked it. If a site tool, navigation menu element, or the like is often used in the same visit as a conversion, or used by the same visitors who actually converted or viewed the most content on your site, then you can begin to identify patterns, trends, and correlations.

How does one measure this with SiteCatalyst? When users land on key pages or complete key actions (not clicks), capture the tool used to get them to that page/action in a prop and an eVar, along with an event. To take this a step further, treat internal tool usage as you would external campaigns to the site: stack them, run advanced attribution modeling, see in what order they are used, and so on. Find out which key tools, menus, or modules are used to find content and convert, and then use those reports as part of a testing program.
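As a rough illustration (my own sketch, not from the original post), here is how that capture might look in an H-code style SiteCatalyst implementation. The variable slots (prop5, eVar5, event10) and the internal tracking parameter name (icid) are placeholder assumptions; use whatever is free in your report suite.

```ts
// A minimal sketch, assuming an H-code style SiteCatalyst implementation.
// prop5/eVar5/event10 and the "icid" query parameter are hypothetical slots;
// substitute whatever is available in your report suite.
declare const s: any; // the global SiteCatalyst measurement object

// Tag internal tool links the way you would campaign links, e.g.
// /article/123?icid=related-stories-widget
const icid = new URLSearchParams(window.location.search).get("icid");

if (icid) {
  // Credit the internal tool when the user lands on the key page, not on the click.
  s.prop5 = icid;       // traffic variable for correlation and pathing reports
  s.eVar5 = icid;       // conversion variable that persists for attribution
  s.events = "event10"; // counter: key page reached via an internal tool
}
s.t(); // the normal page-view call picks up the variables set above
```

Because the eVar persists, it can later be lined up against success events in conversion reports, which is what makes the campaign-style stacking and attribution described above possible.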

Note that the actual question of causality is still open, because correlation is not causality. This is where the testing program comes into play. With common themes and trends beginning to emerge, you can sniff out areas to begin testing and determine what is useful and what is not.
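To make the correlation half of that concrete, here is a toy sketch (again my own illustration, not from the post) of the kind of check meant here, run against visit-level data exported from your analytics tool. The Visit shape, field names, and sample values are all hypothetical.

```ts
// A toy correlation check on exported visit-level data. The Visit shape and
// sample values are hypothetical; a real export would come from a data feed.
interface Visit {
  toolsUsed: string[]; // internal tools touched during the visit
  converted: boolean;  // did the visit hit the actual KPI?
}

function conversionRateByTool(visits: Visit[], tool: string) {
  const rate = (vs: Visit[]) =>
    vs.length === 0 ? 0 : vs.filter((v) => v.converted).length / vs.length;
  return {
    withTool: rate(visits.filter((v) => v.toolsUsed.includes(tool))),
    withoutTool: rate(visits.filter((v) => !v.toolsUsed.includes(tool))),
  };
}

// Example: a large gap between the two rates flags a candidate worth testing.
const visits: Visit[] = [
  { toolsUsed: ["internal search"], converted: true },
  { toolsUsed: ["internal search", "nav menu"], converted: true },
  { toolsUsed: ["nav menu"], converted: false },
];
console.log(conversionRateByTool(visits, "internal search"));
```

A gap like that is still only a correlation; the testing program is what turns a suspect into a verdict.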

Eat your veggies!

Echoing what your mother likely told you: eat your veggies, and then you can have some candy! In analytics land, be sure all of your major KPIs are measured and reported properly, proper analytics governance is in place, and a well-thought-out optimization plan/organization is running smoothly before investing any time or money in measuring clicks. And who knows? You may just find that you’re “healthier” without all that “candy.”
