In my nearly 9 years working in testing and data, I have worked with or evaluated close to 300 different sites and their testing programs. While I wish that I had all these great stories of amazing programs left and right and the amazing results they are all receiving, the sad truth is that there is no perfect program, and there are very few you would even want to take something from as a starting point. The reasons that programs get into this state are legion, but there are 7 common "sins" that destroy programs. I want to go through each of these 7 deadly sins and look at how they manifest and how to fight them. What you will find is that all of these sins come from the same place: a lack of understanding, willful or not, of how to think about testing, or of what the difference is between being the hero and the villain. What you choose to do about these sins is up to you, as there can be no greater retribution than evaluating your own actions, finding your own weakness, and then turning that into a strength.

The first of these sins, and by far the most evil and damaging, is failure to align your program on a single success metric. So many programs fail because they optimize to their KPIs, to the concept of the test, or, even worse, to whatever the group running the test feels it has control over. They optimize to improve their concepts, not to improve the site. What makes this sin especially dangerous is that it will make it look like you are greatly successful, as you will get a return, and because the thing you are tracking is not a site-wide revenue metric, you will often find the magnitude of change is dramatically higher.

The reason this is a sin is that you are mistaking the concept or the area for the end result. You are ignoring the unintentional consequences of the test in order to focus on what you want to find out, not what you need to find out. You are assuming that the world works exactly how you think it does, and you are abusing the data to prove your point. A classic example of this is testing to improve "bounce rate" or clicks. In both cases, you are mistakenly thinking in a linear fashion, assuming that the rate of an action is the same as the value of the action. Only in cases where the rate is the same as the value would you see the two tie together, and you will not know the value unless you look at the global impact. To put it more simply: if the reduction of bounce rate or the increase of clicks matters, it will impact the bottom line. If it does not, you will see that in the bottom line as well. In both cases, the intermediary action, the bounce or the click, is simply a means to an end, but we forget this in order to make the concept easier for us to understand. In all cases, you can see a massive increase, but because it is not tied to the end result, you have no clue if it is helping or hurting your site's ability to make additional revenue or be more efficient.
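To make the gap between an intermediary metric and the bottom line concrete, here is a minimal sketch using made-up numbers for a single test. The counts, the metric names, and the size of the gap are all hypothetical; the only point is that a large lift in clicks can coexist with a drop in revenue per visitor.

```python
# Hypothetical numbers for illustration only: made-up counts for a
# control and a variant, not data from any real test.
control = {"visitors": 10_000, "clicks": 1_200, "revenue": 52_000.00}
variant = {"visitors": 10_000, "clicks": 1_500, "revenue": 49_500.00}

def per_visitor(group, metric):
    """Rate of a metric per visitor (works for clicks or revenue)."""
    return group[metric] / group["visitors"]

def lift(metric):
    """Relative change of the variant over the control for one metric."""
    return per_visitor(variant, metric) / per_visitor(control, metric) - 1

print(f"Click-through lift:       {lift('clicks'):+.1%}")   # +25.0%, looks like a big win
print(f"Revenue-per-visitor lift: {lift('revenue'):+.1%}")  # -4.8%, the site actually lost money
```

Judged on clicks alone, this hypothetical test is a clear winner; judged on the site-wide result, it is a loss.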

In way too many cases, when you go through and evaluate global impact after the fact, you find that the increase you are shooting for comes at the cost of higher-value actions, meaning that improving the click-through rate to your section is costing your site total revenue. When you don't do the work to find out, you will continue to waste money and decrease performance, while at the same time having many impressive large lifts to talk about to your boss.

What is especially frustrating about this sin is that there are many different groups and "experts" out there that are more than happy to propagate the myth or to abuse it to make themselves look good. Agencies are especially notorious for this behavior. They let you pick a sub-metric and optimize to that, which has the double advantage of feeding your ego and avoiding dealing with the core issues that will define your success. Even worse, they will talk you into, or let you pick, multiple metrics, and if the first one doesn't show how amazing they are, they will find one deeper in to show how big an impact they had for you. This is their fail-safe to make you feel better about your program while simultaneously sucking more money out of your pocket.

For many groups, figuring out what the purpose of their site is, or what defines success site-wide (almost always revenue), is a difficult and time-consuming task. It is also the single greatest marker of whether you will receive any value from your testing. I refuse to work with a group unless they have figured out what they are trying to do for the entire site, and then will only run a test if they agree to make decisions only off the impact to that bottom line. The results can then tell you so much. If you find that you are not impacting it much, that means you are only testing what you want to test, not what will actually improve the site. If you find that promoting item X increases revenue for that item or group, but the site loses money, you should re-evaluate your priorities for merchandising. If you find that getting more people to your cart doesn't increase revenue, then you are not optimizing to value. In all cases, the actual reason why things are happening is almost completely irrelevant; the value derived comes from acting in a meaningful way only on site-wide goals.
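Here is one way that decision rule could look in practice, as a sketch under assumptions: the site-wide metric (revenue per visitor here), the helper name `decide`, and all of the numbers are hypothetical, but the key idea is that every test, whatever it changed, is judged the same way.

```python
# Sketch of the decision rule described above: every test is judged on
# the same site-wide metric, regardless of what the test changed.
# Metric choice, threshold, and figures are illustrative assumptions.

def decide(test_name, control, variant):
    """Ship or reject based only on site-wide revenue per visitor."""
    control_rpv = control["site_revenue"] / control["visitors"]
    variant_rpv = variant["site_revenue"] / variant["visitors"]
    relative_lift = variant_rpv / control_rpv - 1
    verdict = "ship" if relative_lift > 0 else "do not ship"
    print(f"{test_name}: site-wide RPV lift {relative_lift:+.1%} -> {verdict}")

# Promoting item X: its own revenue went up, but the site as a whole lost money.
decide("promote item X",
       control={"site_revenue": 100_000, "visitors": 20_000},
       variant={"site_revenue": 97_000, "visitors": 20_000})

# A cart change: more people reached the cart, and site-wide revenue also rose.
decide("streamline cart",
       control={"site_revenue": 100_000, "visitors": 20_000},
       variant={"site_revenue": 103_500, "visitors": 20_000})
```

Notice that the rule never asks why the item-X promotion lost money or why the cart change won; it only asks what happened to the one metric the whole program agreed to act on.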

Look at what you are doing and ask whether you are committing this sin. Are you tracking different success metrics for different tests? Do you look at dependent metrics, such as a limited product set or only conversions from people who clicked on something? Are you looking at metrics that have no tie to any site-wide success, like bounce rate or clicks? If you are doing any of these, then you are committing the greatest sin of testing. You are wasting your time and energy to sub-optimize and are ensuring that you can never know the real impact of your tests.

Finding your single success metric can be difficult and can cause a lot of headaches getting buy-in, but unless you are willing to do the hard work, then what is the purpose of your program? You have no chance of finding real value, and the best you can do is make someone think you are having a much greater impact than you really are. There are many bricks on the path into and out of the darkness; it is up to you which direction you are traveling on them.
