That’s right, you read correctly — GUN CONTROL. No matter what side of the issue you are on, there are plenty of parallels between changing gun legislation and changing a website.

Gun Control
The internet is saturated with correlative facts supporting both sides of the debate on gun control. For example, one compelling argument against gun control states, “In 1976, Washington, D.C. enacted one of the most restrictive gun control laws in the nation. Since then, the city’s murder rate has risen 134% while the national murder rate has dropped 2%” [1]. While there is a correlation between strict regulations and increased violence, causation is more difficult to prove. On the other hand, a compelling argument for gun control states, “Roughly 16,272 murders were committed in the United States during 2008. Of these…67% were committed with firearms” [2]. Again, correlation is not necessarily causation. In fact, the optimal solution to the gun control debate will never be reached by focusing on correlative metrics alone. Like the issue of gun control, online optimization requires more than correlative data to reach optimal outcomes.

My Background
I’ve learned many important lessons about optimization as I consult for some of the top brands on the internet. I’ve learned from my clients’ successes as well as from their mistakes. I’ve seen many techniques that work well and even more that do not. I’ve watched some of the brightest optimization managers and testing strategists succumb to common pitfalls. My goal is to share some insights so that you can run better optimization programs. While there are many ideas I would love to discuss, my first three posts are dedicated to three important principles of optimization that will help you get started.

Principles of Optimization

  1. Do not base success on correlative metrics
  2. Focus on the best possible metric(s)
  3. Create experiments that focus on causal results

1. Do not base success on correlative metrics

In the heated debate between gun rights and gun control, I believe the core desire on both sides is to mitigate the risk of violence and promote safety. If a change is made in gun legislation, whether it favors gun rights or gun control, the real measure of success is not whether gun-related violence goes down. Rather, success occurs when violence in general goes down. If gun-related violence drops slightly but violence in general increases threefold, neither side is happy with the outcome. The same critical distinction must be made when looking at correlative metrics in the optimization world. If I add a new module to a page, my concern is not the degree to which people interact with the module, but whether that module helps or hurts my site’s overall success. This perspective may require a complete rethinking of the way you operate online, but it can mean the difference between success and failure for your optimization efforts.

Take, for example, a client email I received last week: “When I look at the link clicks, the second version is winning, but it’s the opposite when we look at conversions.” It’s not uncommon for substitute metrics, such as clicks, to trend in a different direction from the true success metric (in this case, leads). My answer is the same every time: Which metric is most important?
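
To make the module example concrete, here is a minimal sketch in Python. The visitor, click, and order counts are made up for illustration, but the pattern is one I see constantly: the new module “wins” on interaction while the page loses on the metric that actually matters.

    # Hypothetical test results: does the new module help the site overall?
    variants = {
        # visitors, clicks on the module, and completed orders (all made up)
        "control":    {"visitors": 10000, "module_clicks": 800,  "orders": 420},
        "new_module": {"visitors": 10000, "module_clicks": 1300, "orders": 388},
    }

    for name, v in variants.items():
        interaction_rate = v["module_clicks"] / v["visitors"]
        conversion_rate = v["orders"] / v["visitors"]
        print(f"{name}: interaction {interaction_rate:.1%}, conversion {conversion_rate:.1%}")

    # Interaction jumps from 8.0% to 13.0%, a win on the correlative metric,
    # while conversion slips from 4.2% to 3.9%. Judged on overall success,
    # the new module is a loss.

Judged on the proxy, this test is a clear winner; judged on overall success, it should never launch.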

Consider the following scenario:
A test cell shows a 30% increase in email opens with a 12% decrease in site-wide revenue.

Is this success? No. Email opens are only important as they relate to site-wide success. I cannot think of any company that directly monetizes the act of opening an email. Companies tend to look at email opens, see the correlation with site-wide success, and make terrible assumptions like, “If I can increase email opens by X%, revenue should increase by Y%.” That would be the same as assuming that if you could decrease the amount of medicine consumed by 30%, sickness would go down by 15%. After all, there is a high correlation between medicine consumed and sickness. Would anyone do that? Probably not. Yet every day, people set up test after test focusing on things like bounce rate or engagement rate with a banner. While these may be interesting and may even correlate with success, they don’t equal success. Why focus on a proxy when you can focus on the real thing? Separately consider each of the following results from online testing and decide whether you deem the outcome to be a success (a sketch after the list makes the decision rule explicit):

  • Interaction with the navigation goes up, but overall site engagement goes down
  • Clicks on a subscription banner increase, but overall subscriptions go down
  • Number of people who make it to step 3 in the subscription funnel increases, but overall subscription revenue plummets
  • Rate of people coming from an email and clicking on module X increases, but site revenue drops
  • Bounce rate decreases, but site revenue decreases
  • Revenue from people who clicked on a recommendation goes up, but site-wide revenue falls
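
If it helps to see the rule spelled out in code, here is a sketch of how I would score each of those scenarios. The scenario names and lift numbers are hypothetical; the point is that the proxy lift is printed for context but never consulted for the verdict.

    # The verdict rests on the primary metric alone; the proxy is context only.
    def verdict(primary_lift):
        return "success" if primary_lift > 0 else "failure"

    scenarios = [
        # (description, proxy lift, primary-metric lift) -- all hypothetical
        ("nav interaction up, site engagement down",          +0.15, -0.04),
        ("banner clicks up, subscriptions down",              +0.22, -0.07),
        ("step-3 reach up, subscription revenue down",        +0.10, -0.18),
        ("recommendation revenue up, site-wide revenue down", +0.09, -0.05),
    ]

    for name, proxy, primary in scenarios:
        print(f"{name}: proxy {proxy:+.0%}, primary {primary:+.0%} -> {verdict(primary)}")

Every one of them fails, no matter how impressive the proxy lift looks.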

Optimization requires making tough decisions, and one of those is deciding that the correlative metric is only relevant as it relates to the final success metric. If your goal is to increase revenue, focus on revenue. If your goal is to increase content consumption in paid areas, focus on content consumption in paid areas. More often than people realize, a positive change in a proxy metric can negatively impact your true measure of success. Some of the most common proxy metrics that I see derail testing programs are bounce rate, click rate, interaction rate, and email opens.
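
In practice, “focus on revenue” means running your significance test on the success metric itself. Below is a minimal sketch, assuming your primary metric is purchase conversion, using a standard two-proportion z-test; the counts are hypothetical, and plain-library math stands in for whatever stats package you prefer.

    from math import sqrt, erf

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        """Z-score and two-sided p-value for the difference in conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Judge the test cell on conversions -- not on clicks, opens, or bounces.
    z, p = two_proportion_z(conv_a=420, n_a=10000, conv_b=510, n_b=10000)
    print(f"z = {z:.2f}, p = {p:.4f}")

Whatever the banner’s click rate did, the launch decision hangs on this number.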

The metrics above are fine to track with your analytics package but probably have little or no place in an optimization program. In analytics, we want to know these metrics so that we can attempt to understand any correlative behavior. We observe their historical correlation to success. However, in the optimization sphere, the degree to which any of these metrics is important can fundamentally change the moment we introduce a new user experience.

So the next time you run a test and your organization is pushing for you to observe bounce rate, work to educate key stakeholders on the principles you’ve learned. As an analyst, testing guru, optimization director, marketing manager—or whatever your role may be—one of your many challenging tasks must be to educate the organization. It requires transforming the way people view online success. Just because you have observed the correlative metrics through analytics does not mean those same numbers should be your focus in optimization. Analytics is about observing patterns over time. Optimization is about changing behavior, which often results in entirely new patterns.

Stay tuned for my next posts about focusing on the best possible metric(s) and creating experiments that focus on causal results.
