Continual iterative testing on high-value pages is a powerful optimization strategy for a website. This strategy generally starts with testing large content changes on high-traffic pages and then drills down into testing smaller content sections as value and influence on your primary metric are found. A generalized example of this strategy follows these steps:

  1. Choose the test’s success metric
  2. Identify high-traffic and other important pages
  3. Test the layout by rearranging large content sections
  4. Test the amount of content on the page by testing the inclusion vs. exclusion of different content blocks
    a. If excluding a content section increases conversion, then leave it off.
    b. If excluding a section of content doesn’t hurt or help your primary success metric, then consider changing it to different content or making further testing in this section a low priority.
    c. If excluding a section of content decreases conversion, then it has influence on your primary success metric, so keep this content on the page and move to the next step below.
  5. Break content blocks into smaller pieces of content and test each of these smaller sections. If a content block contains a headline, bullet points, an image, and a call-to-action link, then test each of those pieces separately.
  6. Repeat — create new tests and strive to beat the previous test’s winning version
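The decision logic in step 4 can be sketched in code. The sketch below assumes you already have visitor and conversion counts for the control (section included) and the variant (section excluded), and it uses a plain two-proportion z-test at roughly 95% confidence; the function name and thresholds are illustrative, not part of any testing tool's API.

```python
from math import sqrt

def decide_next_step(control_conv, control_n, variant_conv, variant_n, z_crit=1.96):
    """Decide what to do with a content section after an inclusion-vs-exclusion
    test. The variant excludes the section; 1.96 is roughly 95% confidence."""
    p1 = control_conv / control_n          # conversion rate with the section
    p2 = variant_conv / variant_n          # conversion rate without the section
    pooled = (control_conv + variant_conv) / (control_n + variant_n)
    se = sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
    z = (p2 - p1) / se
    if z > z_crit:        # excluding the section significantly helped: step 4a
        return "remove section"
    if z < -z_crit:       # excluding it significantly hurt: step 4c
        return "keep section and test its smaller pieces"
    return "swap in different content or deprioritize"   # step 4b

# Example: the exclusion variant converts noticeably worse, so the section
# carries value and is worth breaking into smaller pieces (step 5).
print(decide_next_step(control_conv=540, control_n=10000,
                       variant_conv=430, variant_n=10000))
# → keep section and test its smaller pieces
```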

Testing programs are often very resource constrained, making it hard to get the most value out of this strategy. The ideal solution is to staff your testing program to handle the strategy above, but that may not be possible right away. Companies often require programs (optimization programs included) to prove their worth over time before additional headcount or resources are granted. Here are some strategies you can implement as you build the testing program into an ongoing series of iterative tests across the site.

Mix in ‘easy’ tests
Testing programs can often grind to a halt when IT needs to delay a code release or when the design team is too busy to create the deliverables for your next test. When these delays occur, mix in a few tests that can be launched with zero or minimal effort from those other teams. In your test roadmap, mark tests that can be launched quickly (e.g., tests that simply change headlines, messaging, or images on major pages). Plan ahead and have this content wrapped in an mbox so you can launch the test on short notice. It’s worth noting that ‘easy’ tests don’t necessarily translate to low value. They can be quite powerful too.
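One lightweight way to keep ‘easy’ tests visible in a roadmap is to tag each test idea with the teams it depends on and filter for the ones with no blockers. The data structure, test names, and field names below are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class TestIdea:
    name: str
    page: str
    expected_value: int                             # e.g., 1 (low) to 5 (high)
    depends_on: set = field(default_factory=set)    # teams needed to launch

# Hypothetical roadmap entries
roadmap = [
    TestIdea("Homepage hero redesign", "/", 5, {"design", "IT"}),
    TestIdea("Headline copy swap", "/", 4),            # content already mbox-wrapped
    TestIdea("Checkout layout rearrange", "/checkout", 5, {"IT"}),
    TestIdea("Product image variant", "/product", 3),  # assets already on hand
]

def easy_tests(roadmap):
    """Tests launchable now with no dependency on other teams, ordered by
    expected value -- 'easy' does not mean low value."""
    return sorted((t for t in roadmap if not t.depends_on),
                  key=lambda t: t.expected_value, reverse=True)

for t in easy_tests(roadmap):
    print(t.name)
```

When IT or design is blocked, you pull from this filtered list instead of letting the program stall.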

Test site changes that are already in the works
If the testing program is so resource constrained that there is no time to create even a test roadmap, at a minimum you can test changes that are already queued up by other teams. Typically, new code for these changes has to be created by a developer anyway, so it is not much more effort to run them as an A/B test. This is a defensive strategy because it will reveal when those changes hurt conversions and/or revenue. It is better to discover this loss during a test than to push it out to all site traffic. For positive changes, you will also be able to measure the scope of the improvement, which could inform future site changes.
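To measure the scope of an improvement (or a loss) from an already-planned change, one simple approach is a confidence interval on the difference in conversion rates between the planned change and the current page. The sketch below uses the normal approximation for two proportions; the function name and numbers are hypothetical:

```python
from math import sqrt

def lift_confidence_interval(ctrl_conv, ctrl_n, var_conv, var_n, z=1.96):
    """Approximate 95% confidence interval for the absolute difference in
    conversion rate (planned change minus current page)."""
    p1, p2 = ctrl_conv / ctrl_n, var_conv / var_n
    se = sqrt(p1 * (1 - p1) / ctrl_n + p2 * (1 - p2) / var_n)
    diff = p2 - p1
    return diff - z * se, diff + z * se

# Hypothetical counts: current page 500/10000, planned change 560/10000
lo, hi = lift_confidence_interval(500, 10000, 560, 10000)
print(f"lift between {lo:+.4f} and {hi:+.4f}")
# If the whole interval sits below zero, the planned change is losing
# conversions and should not be pushed to all traffic as-is.
```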

Good luck and keep striving toward continual iterative testing across your site.