You have finally gotten everyone to agree on what the single success metric for your program is, and thanks to your amazing leadership you have gotten buy-in from product, senior management, IT, and design to test. You start to talk to each group, and they have this amazing idea for a redesigned new product page. They want to incorporate three new pieces of technology, and they say they will make their decision off of RPV, but they really want to know who is interacting with each module, what the flow is, and how often people on iPhones come back to the page. You take all that information, and then you set out to tackle the job of putting together such a large test and getting everyone what they want…

You have just committed the third deadly sin of testing: you have ignored efficiency. The fundamental challenge of any program is to achieve the greatest return for the lowest cost. By ignoring those factors, you are allowing bad design to lower your possible outcome, and you are dramatically increasing the cost of achieving those results in time and energy, while making it far more difficult to make the right decision. A successful program knows how it is going to act beforehand and filters everything through the lens of what is going to allow the best outcome, not what people want or what will make people happy.

You are allowing others to dictate the way things are tested, and you are only testing the giant things that someone else was already doing. It is extremely easy to just do what is asked of you, but it is equally important that you are doing the right things. Testing sounds easy to everyone, but the reality is that it is an extremely different way of thinking about actions, and it requires you to really focus on what you should and shouldn't do. If you do not have clear rules of action, and if you are not deconstructing ideas and doing what is the most efficient action, then you are never going to achieve the value that you are capable of. The more you focus on the giant idea or the big promised redesign, the less likely you are to achieve results, because you are looking at the wrong end of the system.

If we care about efficiency, then what wins is irrelevant, and who came up with the idea even more so. What matters is how quickly we can measure as many feasible alternatives as possible, and whether we can act in a meaningful way. It isn't about which idea is better; it is about not being happy with a 3% lift when you could have had 10% for half the effort. Focus as much energy as you can on making sure that you have as many great inputs to the system as possible, and make sure that the system does only the minimum amount of work necessary to make decisions. This is why a single success metric is so important, and also why you measure alternatives against each other. This is why you need that great leadership, as you have to allow people to be better than they would otherwise be. This is why all those human biases that can ruin a program are important to combat, because each one makes it harder to get great inputs or to act decisively. This is why trusting just a great test idea from your agency or from some “expert” is a pointless endeavor, and why no test should ever be run with just one alternative.
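To illustrate measuring many alternatives against a single success metric, here is a minimal sketch; the variant names and all the numbers are hypothetical, and RPV (revenue per visitor) stands in for whatever one metric your program decides on:

```python
# Hypothetical test results: every alternative is judged by a single
# success metric (revenue per visitor), not by whose idea it was.
variants = {
    "control":  {"revenue": 5000.0, "visitors": 2000},
    "redesign": {"revenue": 5150.0, "visitors": 2000},
    "stripped": {"revenue": 5600.0, "visitors": 2000},
    "new_copy": {"revenue": 4900.0, "visitors": 2000},
}

def rpv(v):
    """Revenue per visitor -- the one metric the decision is made on."""
    return v["revenue"] / v["visitors"]

# Rank all alternatives by the single success metric.
ranked = sorted(variants, key=lambda name: rpv(variants[name]), reverse=True)

baseline = rpv(variants["control"])
for name in ranked:
    lift = (rpv(variants[name]) - baseline) / baseline
    print(f"{name:10s} RPV={rpv(variants[name]):.2f} lift={lift:+.1%}")
```

The point of the sketch is the shape of the decision, not the numbers: with one metric and many inputs, the ranking falls out mechanically, and there is nothing left to debate.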

The worst part is that you will not see the direct impact, because you will be so caught up in just accomplishing the task of making so many moving parts fit together. You will only be measuring the things you do act on; all the missed opportunities will be lost to the graveyard of knowledge. A 10% lift that takes 6 months and 400 man-hours is worth a lot less than the same result that takes 2 weeks and 4 man-hours. In order to succeed, efficiency must always be the top priority for how you allocate resources, what you allow into a test, what you track in a test, how you test things, and of course what you test. You must make sure that every test is prioritized and set up in a way that accomplishes both tasks. You must make sure that you are challenging assumptions, and that you care not about what wins, but about how many different ideas go into the system.
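To make the man-hours comparison above concrete, here is a back-of-the-envelope sketch. The 10% lift, 400 vs. 4 man-hours, and 6 months vs. 2 weeks come from the text; the revenue base and hourly cost are hypothetical:

```python
# Back-of-the-envelope value of a result: the same 10% lift is worth far
# less when it consumes 400 man-hours over 6 months than when it takes
# 4 man-hours over 2 weeks, because the slow test pays labor up front
# and collects the lift for less of the year.
annual_revenue = 1_000_000.0   # hypothetical revenue base
lift = 0.10                    # the 10% lift from the text
hourly_cost = 100.0            # hypothetical loaded cost per man-hour

def net_value(man_hours, months_to_ship):
    """Value of the lift over the rest of the year, minus labor cost."""
    months_live = 12 - months_to_ship
    gross = annual_revenue * lift * (months_live / 12)
    return gross - man_hours * hourly_cost

slow = net_value(man_hours=400, months_to_ship=6)    # big, heavy test
fast = net_value(man_hours=4, months_to_ship=0.5)    # small, quick test

print(f"slow test: ${slow:,.0f}  fast test: ${fast:,.0f}")
```

Under these assumed figures the quick test returns roughly nine times the net value of the heavy one, even though both "won" with an identical lift.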

Everything starts with educating people about what makes a good test. Work with people to make them understand what makes a good test, why you need multiple inputs, and why it isn't about validating their ideas. The system works when everyone works together and gets over their egos to achieve the greatest return. Great ideas can come from anywhere, and the act of thinking outside the box frees you up to try even greater ideas. Make them understand that it isn't about who is right, but instead about finding the best way to prove everyone wrong. In almost all cases, the greatest returns come from the things that no one thought of, or that go against everything you ever thought mattered.

Equally, you must be educating people on how you act on data. Make them aware that "want to know" is not the same as "need to know." Make them understand that action is more important than comfort, and that the worst thing you can do is get caught up trying to answer inquiries that have nothing to do with the success of the site. It won't happen overnight, but it will happen if you make it your top priority and focus on that part over the simple execution of any idea. People will not always be happy, and you will take them out of their comfort zone, but if you are not enforcing efficiency, then who will?

The small details still count for a lot and are often the hardest to change. The devil is in the details, as they say. Are you overly complicating set-up for the sake of making people happy? Are you only testing things because your agency or a senior VP promises you they will work? Do you have 40 metrics on a campaign just because someone might ask about them? Are you looking at segments so small that the cost of exploiting them would never be worth your time? Having the discipline to set up the test for what you need and nothing else greatly decreases the time, effort, and technical set-up of each test. It also greatly increases your ability to act in a meaningful and timely way after the test. Never add something to a test unless you are 100% sure of how it will be used and how it will drive change whether the result is positive or negative. Never wait to see what you will get. Optimization is a completely different discipline than analytics, and educating yourself and others on how to think and act is what will greatly increase the efficiency of each effort.

It is not uncommon for a simple test that would normally go live in an hour to be sidetracked for weeks or months by the need to track additional information that has little to no value. It is also not uncommon for massive, simple opportunities to be missed because you are not willing to explore, or not willing to take time away from the massive projects that are on top of everyone's mind. Just as common is a test with a clear result being debated needlessly for weeks or months before it is finally acted on, or, in some cases, never acted on at all. All these things happen because it is easier to make people happy than it is to do the right things. They happen when the act of getting a test live becomes all we focus on and we forget to do the small things that ensure we are being efficient. If you frame everything with the need to be efficient, and you do the hard things that no one likes but everyone needs, then you will never have these problems. Never let this happen to you. Allowing these things to happen is a fundamental sin when it comes to achieving real, lasting value with your test program.