It’s Not Just About Lift

I often see the definition of success in a test get whittled down to a single goal: lift. Whether it’s lift in conversion rate, revenue per visitor, time spent on site, or some other KPI, I get the sense that many companies think testing is lift or die. While showing lift is certainly important, I don’t believe it should be the single measurement by which we deem a test a success or failure. I would argue that it’s equally, if not more, important to be able to present learnings from a test.
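For concreteness, here’s a minimal sketch of what lift typically means in a conversion-rate test and how you might check that it’s more than noise. All numbers are hypothetical, and the function is my own illustration, not any particular testing tool’s API:

```python
from math import sqrt

def lift_and_significance(conv_a, n_a, conv_b, n_b):
    """Relative lift of B over A, plus a two-proportion z-score.

    conv_a / conv_b: conversions in each variant
    n_a / n_b: visitors in each variant
    """
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    lift = (rate_b - rate_a) / rate_a

    # Pooled standard error under the null hypothesis of equal rates.
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return lift, (rate_b - rate_a) / se

# Hypothetical numbers: B converts at 4.6% vs. A's 4.0%.
lift, z = lift_and_significance(conv_a=400, n_a=10_000, conv_b=460, n_b=10_000)
print(f"relative lift: {lift:.1%}, z-score: {z:.2f}")  # |z| > 1.96 ~ 95% confidence
```

That number is where most test reports stop, and it’s exactly what the rest of this piece argues is not enough on its own.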

The Art of Testing

Designing a good test requires a little bit of art. By art, I don’t just mean creative. Yes, the creative should absolutely be great; otherwise you run the risk of garbage in, garbage out. However, the question is also vital.

Start With a Question

Every test should naturally answer a question. If you’re having a hard time finding inspiration, think back to the ideas you’ve often debated in your marketing meetings.

For example, is the form more effective on the right or left? Do users respond to a green button more than a red button? Should my subscription process be 1 page or 5 pages long? Do models really make a difference in images?
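Each of those questions isolates a single variable. As a minimal sketch of how a single-question test might be wired up (the function name and hashing scheme are illustrative, not a specific tool’s API), everything else stays constant while one element changes:

```python
import hashlib

def assign_variant(user_id: str, test_name: str) -> str:
    """Deterministically bucket a user into A or B for one named test.

    Hashing the user ID together with the test name keeps each user's
    assignment stable across visits and independent across tests.
    """
    digest = hashlib.md5(f"{test_name}:{user_id}".encode()).hexdigest()
    return "B" if int(digest, 16) % 2 else "A"

# One test, one question, one variable changed:
variant = assign_variant(user_id="u-12345", test_name="button-color")
button_color = "green" if variant == "B" else "red"
```

When the test only moves one lever, whatever happens to the metric can be traced back to that lever.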

Too often, I find that the question isn’t even present. A company recently gave me a user scenario of their ideal A/B test, and it consisted of dramatically changing an entire checkout process from start to finish. I lost count of how many variables had changed between their A and B versions. The test would also require extensive developer resources to implement because it involved a lot of back-end integration. I asked them what they were trying to understand through this test, and I got a lot of blank expressions and averted eyes.

The best-case scenario in running this type of test is that you, as the marketer, find a lot of lift, everybody applauds how great the test was, and then it’s back to business as usual.

Can You Answer ‘Why?’

The more likely scenario is that you find some lift in the test, and the first question you get back after presenting the results to management is: why? Why did version B generate lift? If you don’t have a firm grasp on what questions you’re asking in your test, you may find yourself at a loss to answer. Sure, you can always hypothesize that the user experience was much improved in the alternative, or that refreshing the site was impactful, but wouldn’t it be nice to know that removing the left navigation and consolidating the billing and shipping address pages were most influential?

The worst-case scenario is that version B performs poorly, and again everyone is asking you why, but now they’re also talking about how wasteful and unsuccessful testing has proven to be. Where do you go from here? The odds of getting the technical and political support necessary to continue testing are probably slim at this point.

But imagine if you had instead designed your test with your questions forming its foundation. Now, regardless of the outcome in lift, you could still present learnings and next steps to keep things moving forward.

Which of the following statements sounds better?

• “We learned that removing the left navigation entirely was not effective, so we’re going to move on to testing a shortened navigation along with the promotional banner and call-to-action.”

• “Version B performed worse, so we’re going to test something dramatically different from both A and B next time.”

I’d take the first one any day.

The Zen of Testing

Patience truly is a virtue, and it’s much easier said than done. I know from personal experience. I recently broke my #1 rule of “Do it right the first time” for a client because we were both rushing to hit some milestones. We ended up in that “likely scenario” bucket, where we didn’t see the home run and were left wondering what we truly got out of the test.

Did we understand the impact of featuring specific products vs. including a red free-shipping banner at the top? No — we had decided to forgo the multivariate test in favor of the A/B test because we weren’t sure we had the traffic and time to support the MVT. Did we understand how people progressed through the funnel in each version? No — we skipped tagging the funnel because it would take too much time.

In hindsight, I very much regret not taking the incremental time to do things right the first time so that we could avoid making up for it the second time around. However, it’s a lesson learned, and hopefully it means you won’t have to learn it through experience as well!
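To make that traffic trade-off concrete, here’s a rough back-of-envelope sketch using the standard two-proportion sample-size approximation (all numbers hypothetical) of why an MVT demands so much more traffic than an A/B test:

```python
from math import ceil

def visitors_per_variant(base_rate, min_relative_lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a relative lift.

    Defaults correspond to roughly 95% confidence and 80% power.
    """
    p1 = base_rate
    p2 = base_rate * (1 + min_relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

n = visitors_per_variant(base_rate=0.04, min_relative_lift=0.10)
print(f"~{n:,} visitors per variant")        # A/B test: 2 variants
print(f"~{4 * n:,} visitors for a 2x2 MVT")  # MVT: one cell per factor combination
```

Even a modest 2x2 grid quadruples the traffic requirement, which is exactly the kind of math worth doing before committing to a test design.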

So while it’s tempting to run your boil-the-ocean test all at once, it really is worth the time to break that test down into different components. Ask yourself what you are trying to understand by running that test. That should naturally lead you to the questions. From there, try to construct an iterative approach that knocks out these questions in waves. This approach allows you to get learnings faster and also breaks up the development resources you may need into smaller, bite-sized chunks.

A lot of people think testing is all about luck, but I find that the more frequently and intelligently you test, the luckier you get.
