Frank’s Folly
After 12 long years of managing his plant’s material inventory, Frank was rewarded with a long-awaited promotion to general manager, where he would oversee all plant operations including sales. Just two months into his tenure as general manager, Frank returned from lunch to find the company’s global sales manager waiting in Frank’s office. The sales manager wasted no time with polite conversation. He was there to let Frank know that profits were sagging and Frank would need to increase the plant’s net income by 15% in the 4th quarter if he wanted to still be employed in January.

The sales manager left the office abruptly and Frank felt his pulse racing through his temples. As he pondered the situation he drew upon his long experience managing the plant’s inventory. Frank remembered that in the months when profits were highest, inventory purchases also tended to be high. Instantly Frank knew what he had to do. He spent the afternoon on the phone with his top suppliers, directing each of them to deliver 20% more materials than forecast for the 4th quarter. Frank sat back and waited for profitability to surge.

After reading this example, two things are likely apparent. One, Frank should start polishing his resume, and two, correlation is not causation. Unfortunately, the fundamental assumption Frank makes here is based on the same fallacy that plagues web analytics and web optimization analysts every day. Analytics and web optimization are distinct disciplines and they should not be approached in the same way.

The Difference between Analytics and Optimization
Analytics is very much about identifying correlations or trends in your data. Here is an example: When page depth during a single visit is greater than three, visitors are 40% more likely to return two or more times per month and also 20% more likely to purchase an offline subscription.
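
To make the distinction concrete, here is a minimal sketch of the kind of segmented report that surfaces a correlation like the one above. It assumes Python with pandas, and the visit-level data and column names are made up for illustration; the output only describes how the segments differ, nothing more.

```python
import pandas as pd

# Hypothetical visit-level data; the column names and values are illustrative only.
visits = pd.DataFrame({
    "visitor_id":         [1, 2, 3, 4, 5, 6],
    "page_depth":         [2, 5, 1, 7, 4, 3],
    "returns_per_month":  [2, 3, 0, 4, 2, 1],
    "bought_offline_sub": [False, True, False, True, False, True],
})

# Segment visits the way an analytics report would: deep (more than three pages) vs. shallow.
deep = visits["page_depth"] > 3

return_rate_deep    = (visits.loc[deep,  "returns_per_month"] >= 2).mean()
return_rate_shallow = (visits.loc[~deep, "returns_per_month"] >= 2).mean()
sub_rate_deep       = visits.loc[deep,  "bought_offline_sub"].mean()
sub_rate_shallow    = visits.loc[~deep, "bought_offline_sub"].mean()

# These lifts describe a correlation between page depth and the outcomes; nothing
# here says that pushing visitors deeper into the site would *cause* more returns
# or more subscriptions.
print(f"Deep visits: {return_rate_deep / return_rate_shallow - 1:.0%} more likely to return 2+ times/month")
print(f"Deep visits: {sub_rate_deep / sub_rate_shallow - 1:.0%} more likely to buy an offline subscription")
```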

Optimization is much more about cause and effect. By changing X, we saw a measured response in Y. With optimization it is not enough that a change to the layout of the article page template merely coincided with an increase in a visitor’s time on the site. Time on site may generally correlate directionally with page consumption, but that correlative leap is not causation. With correlation you will never know whether increased time on site causes visitors to consume more page views, or whether the people who are looking at more pages just happen to spend more time. As a result, you cannot easily conclude that a change in one will cause a corresponding change in the other.
What if the layout change to the template made the article page more difficult to navigate? If so, it may take visitors longer to find what they are looking for, increasing time on site, even though more visitors end up leaving the site after fewer pages, driving page consumption and ad consumption down. This highlights the need to go beyond correlation in testing and actually prove causation. Proving causation requires you to pick your metrics carefully.
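
Establishing cause and effect instead calls for a controlled experiment: randomly assign visitors to the old or new article page layout, then compare the metric you actually care about across the two groups. The sketch below assumes Python with NumPy and statsmodels, and the conversion numbers are simulated rather than taken from any real test; it simply shows the shape of that comparison using a two-proportion z-test.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(42)

# Simulated randomized experiment: visitors are assigned to control or to the new
# article page layout before exposure, so the only systematic difference between
# the groups is the layout itself.
n_control, n_variant = 10_000, 10_000
true_rate_control, true_rate_variant = 0.040, 0.046  # illustrative conversion rates

conversions_control = rng.binomial(n_control, true_rate_control)
conversions_variant = rng.binomial(n_variant, true_rate_variant)

# Two-proportion z-test on the metric that matters (conversion), not on a proxy
# such as time on site.
stat, p_value = proportions_ztest(
    count=[conversions_variant, conversions_control],
    nobs=[n_variant, n_control],
)

print(f"Control conversion: {conversions_control / n_control:.2%}")
print(f"Variant conversion: {conversions_variant / n_variant:.2%}")
print(f"p-value: {p_value:.4f}")
# Because assignment was randomized, a small p-value supports a causal claim:
# the layout change itself moved conversion, not some confounding visitor trait.
```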

Pick Metrics that Truly Represent Revenue
If you pick optimization success metrics simply because they line up with your analytics dashboard, you are dangerously close to repeating Frank’s folly. Measuring bounce rate or click-through rate to evaluate a test is a prime example. While bounce rate has a key role in the correlation-driven analytics world, it is rarely a good fit for determining the winner of a test when causation is what matters. Decreasing the number of visitors that bounce on their initial page load may correlate with visitors who ultimately purchase a product or consume more content, but that is merely correlation.

The problem arises from using a correlative measure as a proxy for something you can and should measure directly. This is no different from looking up the weather by zip code on your phone to see if it is raining outside instead of simply looking out the window. Why rely on a proxy when you can track the real thing? In this case, bounce rate is being used as a proxy for downstream page consumption (pages consumed after seeing the test experience). It may be a good proxy, but it may also be a terrible proxy. It could be that a tested change produces a higher bounce rate because it does a better job of filtering out less-qualified visitors. As a result you have fewer visitors that remain in the path, but they are much more qualified and conversion ultimately improves. If you stop at bounce rate you could be misjudging the test.
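
A small worked example makes the trap plain. The numbers below are purely illustrative, but they show how a variant can look worse on bounce rate while clearly winning on the monetized metrics you could have measured directly.

```python
# Toy numbers (purely illustrative) for a test where the variant filters out
# less-qualified traffic: bounce rate rises, yet the monetized metrics improve.
control = {"visitors": 10_000, "bounces": 4_000, "orders": 300, "revenue": 15_000.0}
variant = {"visitors": 10_000, "bounces": 4_800, "orders": 360, "revenue": 19_800.0}

for name, g in (("control", control), ("variant", variant)):
    bounce_rate     = g["bounces"] / g["visitors"]
    conversion      = g["orders"] / g["visitors"]
    rev_per_visitor = g["revenue"] / g["visitors"]
    print(f"{name}: bounce rate {bounce_rate:.1%}, "
          f"conversion {conversion:.2%}, revenue per visitor ${rev_per_visitor:.2f}")

# Judged on bounce rate alone the variant looks worse (48.0% vs. 40.0%), but judged
# on revenue per visitor it is the clear winner ($1.98 vs. $1.50).
```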

Do not focus on metrics that correlate with revenue. Instead, pick metrics that are revenue.

If you are lucky enough to find yourself among the rising ranks of analysts with responsibilities in both the analytics and optimization spheres, be careful not to repeat Frank’s folly. Do not let a correlative metric take the place of a truly monetized metric in your testing. Just as correlation is not causation, there is a fundamental difference between the disciplines of analytics and optimization. Take care not to confuse the two.
