Take a deep breath. Before you start thinking about the implementation and what you can or cannot do with the platform, begin with your data. Look for pages with the highest bounce rate, steps in your funnel with the greatest fallout, landing pages with the lowest conversion rate. These should be red flags helping you prioritize where to start testing. For example, in the report below, check out the highlighted conversion rate that's significantly lower than the other search terms:

Get key stakeholders who are on board with testing in a room together and throw out hypotheses you draw from the data, along with the ideas you've debated in circles forever without ever coming to a decisive answer. Never underestimate how political testing can be in an organization, so make sure you don't have naysayers in the room on day 1. They'll come into play later on :)

Once you've got all the ideas on the wall, it's time to start slicing! Let's say that the list below is what we have to work with.

Here are some key questions you want to ask yourself when evaluating the quality of an idea:

1) What is my hypothesis? If you don't have a hypothesis, it's game over.

2) What are my goals? Sounds like a no-brainer, but I can't tell you how many times I've had clients want to try testing colors or layouts just for the heck of it. You must have a clear goal that you are working towards! It can be more than one goal, such as locating a dealer or requesting a quote in the case of an automaker, but just make sure you've got it clear in your mind so your design aligns with it.

3) How much traffic do I get? This one's important. If you only get 50 conversions a day, you will never be able to run a test with 100 different variations. The amount of time it would take to run that test would make it useless. We're talking months, if not years. To highlight the point: at 50 conversions a day, half of the variations tested wouldn't even have a single conversion after day 1.

4) Which segments do I want to drill into? This one's directly related to the amount of traffic you get. Let's say you wanted to run a test with 100 different variations, AND you have 4 key segments you're interested in, such as paid search, organic search, email, and affiliates. Now you are most definitely talking about a test that will take over a year (a rough duration estimate is sketched after this list). The point is not that you can't technically run a test for that long, but as an online marketer competing against other agile marketers, what would be the point?

5) What will I learn if the alternative is the winner? It's always great to find a winning alternative in a test. But if you just grab 20 different call-to-actions you found on the internet, and it turns out the one from T-Mobile is the winner, what did you learn? Maybe you found that magenta really does rock, but maybe you just stumbled your way onto a winner. The upside of that discovery is nice, but how do you actually build on that learning to evolve your marketing practice?

6) What will I learn if there is no winner? The dreaded lack of statistical confidence…dum dum dum. This one's going to hurt, no doubt about it. But there are ways to find grace in this result. The first is to make sure you had great answers to all of the questions above. Oh, offering a 20% discount on subscriptions didn't increase overall revenue? Well, now we know that we don't need to cut our price, and pricing isn't as impactful as we expected. Great, let's move on! Or how about those segments that we set up? If you drill down, you may find that the Mac users really behaved differently than the Windows users, resulting in a cancel-out effect at the top (a toy illustration of this appears after the list). Now we've got 2 separate winners, and more learnings to build upon!
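To make the traffic math in questions 3 and 4 concrete, here is a minimal back-of-the-envelope sketch. The 100-conversions-per-variation threshold is only an illustrative rule of thumb (not a platform requirement), and the other numbers are the hypothetical ones from above:

```python
# Rough estimate of how long a heavily sliced test would need to run.
# All thresholds here are illustrative assumptions, not platform rules.

conversions_per_day = 50        # what the site converts today
variations = 100                # alternatives you want to test
segments = 4                    # e.g. paid search, organic, email, affiliates
min_conversions_per_cell = 100  # rough rule of thumb for a stable read

cells = variations * segments                          # 400 buckets to fill
conversions_needed = cells * min_conversions_per_cell  # 40,000 conversions
days = conversions_needed / conversions_per_day        # 800 days

print(f"{cells} cells x {min_conversions_per_cell} conversions each = {conversions_needed:,} total")
print(f"At {conversions_per_day}/day, that's roughly {days:,.0f} days ({days / 365:.1f} years)")
```

Even if you loosen that threshold considerably, the arithmetic points the same way: at 50 conversions a day, a 100-variation, 4-segment test is a multi-year project.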

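To illustrate the cancel-out effect from question 6, here is a toy example with made-up numbers: each segment has a clear winner, but they point in opposite directions, so the aggregate looks like a dead heat:

```python
# Hypothetical (visitors, conversions) per segment for control vs. alternative.
segments = {
    "Mac":     {"control": (1000, 50), "alternative": (1000, 80)},  # alternative wins
    "Windows": {"control": (1000, 80), "alternative": (1000, 50)},  # control wins
}

# Segment-level conversion rates: each one has a decisive winner.
for name, data in segments.items():
    control_rate = data["control"][1] / data["control"][0]
    alt_rate = data["alternative"][1] / data["alternative"][0]
    print(f"{name:8s} control {control_rate:.1%}  alternative {alt_rate:.1%}")

# Roll the segments up and the opposing lifts cancel each other out.
for arm in ("control", "alternative"):
    visitors = sum(d[arm][0] for d in segments.values())
    conversions = sum(d[arm][1] for d in segments.values())
    print(f"Overall {arm}: {conversions / visitors:.1%}")
```

A flat result at the top level is not necessarily a dead test; it may simply be two segment-level wins hiding each other.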
Once you've asked yourself these questions and pared down your list of ideas, the next step is to prioritize. I like to calculate a score because it forces people to detach emotionally from their favorites.

There are two factors you want to consider when rating a potential test idea:

1) How difficult will this be to implement (both technically and politically)? Again, never underestimate the political impact of a test idea. For example, say you just want to test different treatments of women's clothing on the homepage. That shouldn't be a problem, right? Well, how do the merchandisers for accessories and children's clothing feel about that? Or what about changing the link to your sign-up form from small-font text to a big flashing button? The homepage design team might have a problem with that even if it's technically super easy to implement. I would recommend saving that test for a later date, once the homepage design team has had a chance to see the extent of your testing prowess.

2) What is the potential ROI (whether in learnings and/or conversion)? It's hard to estimate the ROI of a test; if it were easy, we probably wouldn't need to test. But there are still ways to get closer to a good estimate. For example, testing a page that has a low conversion rate but only 100 visitors a day might be an interesting opportunity to learn, but the resulting revenue bump is probably not going to be impactful. Testing the first page of your checkout funnel, on the other hand, could be huge in potential ROI, even if you're only talking about a 3% lift in revenue per visitor or total orders (a rough comparison is sketched after this list).
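One way to keep that ROI judgment honest is a quick back-of-the-envelope comparison. The traffic and revenue figures below are purely hypothetical; the point is the order-of-magnitude gap between the two pages:

```python
# Hypothetical back-of-the-envelope ROI comparison for two candidate test pages.

def annual_revenue_lift(daily_visitors, revenue_per_visitor, expected_lift):
    """Extra revenue per year if the test delivers the expected lift."""
    return daily_visitors * revenue_per_visitor * expected_lift * 365

# Low-traffic landing page: interesting to learn from, small dollar impact.
landing_page = annual_revenue_lift(daily_visitors=100, revenue_per_visitor=2.00, expected_lift=0.10)

# First page of the checkout funnel: modest 3% lift, big dollar impact.
checkout_page = annual_revenue_lift(daily_visitors=20_000, revenue_per_visitor=2.00, expected_lift=0.03)

print(f"Low-traffic landing page at a 10% lift: ${landing_page:,.0f} per year")
print(f"Checkout funnel entry page at a 3% lift: ${checkout_page:,.0f} per year")
```

Both tests might be worth running eventually; only one of them belongs at the front of the roadmap.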

Here's an example of how you might rate the previous list of ideas.

I think the two factors are pretty equal in weight, so I do a straight sum of both to get my idea score. I've color-coded and sorted the different tiers of scores here.
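If a spreadsheet feels like overkill, a few lines of code do the same job. The idea names and ratings below are placeholders; the scoring logic is just the straight sum described above:

```python
# Hypothetical test ideas rated 1-5 on ease of implementation and potential ROI.
ideas = [
    {"idea": "Checkout call-to-action wording",  "ease": 5, "roi": 4},
    {"idea": "Homepage hero image treatments",   "ease": 2, "roi": 4},
    {"idea": "Shorter lead-gen form",            "ease": 4, "roi": 3},
    {"idea": "20% subscription discount offer",  "ease": 3, "roi": 5},
]

# The idea score is a straight sum of both factors, weighted equally.
for idea in ideas:
    idea["score"] = idea["ease"] + idea["roi"]

# Highest scores first: easy, high-value ideas rise to the top of the roadmap.
for idea in sorted(ideas, key=lambda i: i["score"], reverse=True):
    print(f"{idea['score']:>2}  {idea['idea']}")
```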

A lower score doesn't mean the idea falls off the board, by any means. It just means you allow more time to get that test idea implemented. However, a low score driven by both high difficulty and low value/ROI should be grounds for removal. Hopefully you have a platform that empowers you to run the easy stuff faster and with more control, meaning you're able to leave IT alone to deal with the hard stuff. We know that makes everybody happier!

The last step is constructing the testing roadmap. Taking that list of ideas we've put together, we can now create a much more intelligent roadmap that takes into account the strengths and constraints of your organization, infrastructure and platform. Behold!

This testing roadmap will now inform how you implement your platform and build a knowledge base of learnings and best practices. One piece of advice, though: stay flexible. This roadmap can and should be fluid, based on what you see in the test results. If you find that testing call-to-actions is yielding significant lift early on, it could be worthwhile to keep digging in that direction for more conversion nuggets. Similarly, if you find that your early attempts at shortening form length really aren't doing much, you might want to push other tests in that vein further out in the timeline. Ultimately, testing is about listening to your customers, even when they don't know they're being asked the question. Don't forget to remain open enough to truly hear their answers!

If you found this post helpful, re-tweet it with the link or text below! My next post will cover Day 7: We've Got Our Testing Roadmap. Now What?

RT @sflily Day 1: I Just Got a Testing Platform. Now What? http://is.gd/mZuf #omniture


4 comments
Andrea

I enjoyed this blog. Did you ever post 'We've got our testing roadmap. Now what'?

Brian Hawkins

Hi Sebastian, I just followed up on your blog posting but wanted to follow up here as well. In terms of our partnership with Optimost, we partner with them even though we offer Test&Target largely because we recognize that some of our analytics customers use other vendors’ solutions for things like customer feedback, site search, A/B testing, etc. We’re committed to offering integrated solutions with these vendors through Genesis which is an open platform, in addition to providing our own solutions in these areas. So, we just give our SiteCatalyst customers the ability to integrate analytics with their Optimost campaigns through Genesis if they so choose. That’s really the long and short of it. Feel free to email me directly if you’d like to discuss further.

Sebastian Robinson

Hi, I'm perhaps one step behind you, currently reviewing tools for a client here in Europe. I see the main players as being you guys, Maxymiser and Optimost. Optimost are a partner of yours, I see; can you post any info on what that integration involves? Those guys have some great features, so being able to select between them and T&T whilst still using SiteCatalyst will be a benefit, I think. Thanks in advance, will watch the blog for posts. Sebastian

Jason Egan

Great points on planning, Lilly. I think that most people's approach to testing is "Ready, Fire, Aim." In most cases, testing seems to be initiated by someone just asking a designer to simply create another version of a page, without ever considering existing data or creating a hypothesis.