Take a deep breath. Before you start thinking about the implementation and what you can or cannot do with the platform, begin with your data. Look for pages with the highest bounce rates, steps in your funnel with the biggest drop-off, and landing pages with the lowest conversion rates. These are the red flags that help you prioritize where to start testing. For example, in the report below, check out the highlighted conversion rate that’s significantly lower than those of the other search terms:

Get key stakeholders who are on board with testing in a room together and throw out hypotheses drawn from the data, along with the ideas you’ve debated in circles forever without ever coming to a decisive answer. It’s hard to overstate how political testing can be in an organization, so make sure you don’t have naysayers in the room on day one. They’ll come into play later on :)

Once you’ve got all the ideas on the wall, it’s time to start slicing! Let’s say that the list below is what we have to work with.

Here are some key questions you want to ask yourself when evaluating the quality of an idea:

1) What is my hypothesis? – If you don’t have a hypothesis, it’s game over.

2) What are my goals? – Sounds like a no-brainer, but I can’t tell you how many times I’ve had clients want to test colors or layouts just for the heck of it. You must have a clear goal you’re working toward! It can be more than one goal (for an automaker, say, locating a dealer and requesting a quote), but make sure you’ve got it clear in your mind so your design aligns with it.

3) How much traffic do I get? – This one’s important. If you only get 50 conversions a day, you will never be able to run a test with 100 different variations. The amount of time it would take to run that test would make it useless; we’re talking months, if not years. To highlight the point: at 50 conversions a day spread across 100 variations, half of the variations wouldn’t even have a single conversion after day one. (The sketch after this list puts rough numbers on this.)

4) Which segments do I want to drill into? – This one’s directly tied to the amount of traffic you get. Say you wanted to run that test with 100 different variations, AND you have four key segments you’re interested in, such as paid search, organic search, email, and affiliates. Now you are most definitely talking about a test that will take over a year (again, see the sketch after this list). The point is not that you can’t technically run a test for that long, but as an online marketer competing against other agile marketers, what would be the point?

5) What will I learn if the alternative is the winner? – It’s always great to find a winning alternative in a test. But if you just grab 20 different calls-to-action you found on the internet, and it turns out the one from T-Mobile is the winner, what did you learn? Maybe you found that magenta really does rock, but maybe you just stumbled your way onto a winner. The upside of that discovery is nice, but how do you actually build on that learning to evolve your marketing practice?

6) What will I learn if there is no winner? – The dreaded lack of statistical confidence…dum dum dum. This one’s going to hurt, no doubt about it. But there are ways to find grace in this result. The first is to make sure you had great answers to all of the questions above. Oh, offering a 20% discount on subscriptions didn’t increase overall revenue? Well, now we know we don’t need to cut our price, and that pricing isn’t as impactful as we expected. Great, let’s move on! Or how about those segments we set up? If you drill down, you may find that Mac users really did behave differently than Windows users, resulting in a cancel-out effect at the top level (the second sketch below shows how this plays out). Now we’ve got two separate winners, and more learnings to build upon!
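To put rough numbers behind questions 3 and 4, here’s a minimal back-of-the-envelope sketch in Python. The 100-conversions-per-variation floor is an illustrative assumption, not a real sample-size calculation, and the even traffic split is assumed too:

```python
# Back-of-the-envelope estimate of how long a test would take.
# Assumption: each variation needs some minimum number of conversions
# before you can read a result; 100 is an illustrative floor, not a
# substitute for a proper power calculation.

def days_to_complete(conversions_per_day, num_variations,
                     num_segments=1, min_conversions_per_variation=100):
    """Days until every variation in every segment hits the floor,
    assuming traffic splits evenly across variations and segments."""
    per_variation_per_day = conversions_per_day / num_variations / num_segments
    return min_conversions_per_variation / per_variation_per_day

# 50 conversions a day across 100 variations: ~200 days at a bare minimum.
print(days_to_complete(50, 100))                   # 200.0
# Add the 4 segments you want to drill into: ~800 days, i.e. over 2 years.
print(days_to_complete(50, 100, num_segments=4))   # 800.0
```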
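And here’s a toy illustration of the cancel-out effect from question 6, with hypothetical numbers: each segment has a clear winner, yet the blended results look like a dead heat.

```python
# Hypothetical results per segment: variation -> (visitors, conversions).
segments = {
    "mac":     {"A": (400, 48), "B": (400, 32)},
    "windows": {"A": (600, 42), "B": (600, 58)},
}

# Blended at the top, the two variations are tied at 9.0% apiece...
for variation in ("A", "B"):
    visitors = sum(seg[variation][0] for seg in segments.values())
    conversions = sum(seg[variation][1] for seg in segments.values())
    print(f"overall {variation}: {conversions / visitors:.1%}")

# ...but Mac prefers A (12.0% vs 8.0%) and Windows prefers B (7.0% vs 9.7%).
for name, seg in segments.items():
    for variation in ("A", "B"):
        visitors, conversions = seg[variation]
        print(f"{name} {variation}: {conversions / visitors:.1%}")
```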

Once you’ve asked yourself these questions and pared down your idea list, it’s time to prioritize what goes first. I like to calculate a score because it forces people to emotionally detach from their favorites.

There are two factors you want to consider when rating a potential test idea:

1) How difficult will this be to implement (both technically and politically)? – Again, you cannot overstate the political impact of a test idea. Say you just want to test different treatments of women’s clothing on the homepage. That shouldn’t be a problem, right? Well, how do the merchandisers for accessories and children’s clothing feel about it? Or what about changing the link to your sign-up form from small-font text to a big flashing button? The homepage design team might have a problem with that even if it’s technically super easy to implement. I’d recommend saving that test for a later date, once the homepage design team has had a chance to see the extent of your testing prowess.

2) What is the potential ROI (whether in learnings, conversions, or both)? – It’s hard to estimate the ROI of a test; if it were easy, we probably wouldn’t need to test. But there are still ways to get closer to a good estimate. For example, testing a page that has a low conversion rate but only 100 visitors a day might be an interesting opportunity to learn, but the resulting revenue bump probably won’t be impactful. Testing the first page of your checkout funnel, though, could be huge in potential ROI, even if you’re only talking about a 3% lift in revenue per visitor or total orders. (The sketch below puts rough numbers on that comparison.)
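For instance, here’s a quick comparison in Python. All inputs are hypothetical, and annualizing a lift this way assumes the effect holds, which only the test itself can confirm:

```python
# Rough annual revenue impact of a hypothetical winning test.
def annual_revenue_impact(visitors_per_day, revenue_per_visitor, lift):
    return visitors_per_day * revenue_per_visitor * lift * 365

# Low-traffic page: even a 20% lift barely moves the needle.
print(annual_revenue_impact(100, 2.00, 0.20))     # $14,600 per year
# Checkout funnel entry: a modest 3% lift on heavy traffic is huge.
print(annual_revenue_impact(20_000, 2.00, 0.03))  # $438,000 per year
```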

Here’s an example of how you might rate the previous list of ideas.

I think the two factors are actually pretty equal in weight, so I do a straight sum of the two ratings to get my idea score. I’ve color-coded and sorted the different tiers of scores here.
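Here’s a minimal sketch of that scoring, with hypothetical ideas and made-up 1-to-5 ratings. Note that difficulty is expressed as ease, so higher is better for both factors and a straight sum works:

```python
# Each idea gets two 1-5 ratings: ease of implementation (technical and
# political combined; 5 = easiest) and potential ROI (5 = highest).
# The idea score is a straight, equally weighted sum of the two.
ideas = [
    ("Homepage call-to-action wording",  5, 3),
    ("Checkout step 1 form length",      3, 5),
    ("Sign-up link as a big button",     4, 2),
    ("20% subscription discount offer",  2, 4),
    ("100-variation landing page test",  1, 3),
]

for name, ease, roi in sorted(ideas, key=lambda i: i[1] + i[2], reverse=True):
    score = ease + roi
    tier = "top" if score >= 8 else "middle" if score >= 6 else "lower"
    print(f"{score:>2}  {tier:<6}  {name}")
```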

A lower score doesn’t mean the idea falls off the board, by any means; it just means you allow more time to get that test implemented. A lower score plus a low ROI rating, however, should be grounds for removal. Hopefully you have a platform that empowers you to run the easy stuff faster and with more control, leaving IT free to deal with the hard stuff. We know that makes everybody happier!

The last step is constructing the testing roadmap. Taking that list of ideas we’ve put together, we can now create a much more intelligent roadmap that takes into account the strengths and constraints of your organization, infrastructure and platform. Behold!

This testing roadmap will now inform how you implement your platform and build a knowledge base of learnings and best practices. One piece of advice, though: stay flexible. This roadmap can and should be fluid, based on what you see in the test results. If you find that testing calls-to-action is yielding significant lift early on, it could be worthwhile to keep digging in that direction for more conversion nuggets. Similarly, if your early attempts at shortening form length really aren’t doing much, you might want to push other tests in that vein further out in the timeline. Ultimately, testing is about listening to your customers, even when they don’t know they’re being asked the question. Don’t forget to remain open enough to truly hear their answers!

If you found this post helpful, re-tweet it with the link or text below! My next post will cover Day 7: We’ve Got Our Testing Roadmap. Now What?

RT @sflily Day 1: I Just Got a Testing Platform. Now What? http://is.gd/mZuf #omniture


4 comments
Andrea

I enjoyed this blog. Did you ever post 'We've got our testing roadmap. Now what'?

Brian Hawkins

Hi Sebastian, I just followed up on your blog posting but wanted to follow up here as well. In terms of our partnership with Optimost, we partner with them even though we offer Test&Target largely because we recognize that some of our analytics customers use other vendors’ solutions for things like customer feedback, site search, A/B testing, etc. We’re committed to offering integrated solutions with these vendors through Genesis which is an open platform, in addition to providing our own solutions in these areas. So, we just give our SiteCatalyst customers the ability to integrate analytics with their Optimost campaigns through Genesis if they so choose. That’s really the long and short of it. Feel free to email me directly if you’d like to discuss further.

Sebastian Robinson

Hi, I'm perhaps one step behind you, currently reviewing tools for a client here in Europe. I see the main players as being you guys, Maxymiser and Optimost. Optimost is a partner of yours, I see; can you post any info on what that integration involves? Those guys have some great features, so being able to select between them and T&T whilst still using SiteCatalyst will be a benefit, I think. Thanks in advance, will watch the blog for posts. Sebastian

Jason Egan

Great points on planning, Lilly. I think that most people's approach to testing is "Ready, Fire, Aim." In most cases, testing seems to be initiated by someone just asking a designer to simply create another version of a page, without ever considering existing data or creating a hypothesis.