In my last post, I talked about the anatomy of a landing page: what it should contain, what has no place there, and what the page is supposed to do. Today, I'll delve into why A/B testing, or split testing, is one of the best investments you can make in a highly successful landing page.

There's more to this strategy than figuring out whether red or yellow works better for your call-to-action (CTA) button. Smart A/B testing is an ongoing process that looks at a number of behaviors and elements, and results in a high-performing page with the capacity for continual improvement.

In A/B testing, a marketer splits incoming traffic between two (or more) versions of a digital property and then analyzes the results. This type of testing can be done for campaigns that route traffic to your landing page, as well as for the landing page itself.
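To make the mechanics concrete, here's a minimal, tool-agnostic sketch of how a split might work behind the scenes. The visitor_id is a hypothetical identifier (such as a cookie value), and this isn't any particular product's implementation:

```python
import hashlib

def assign_variant(visitor_id: str, variants=("A", "B")) -> str:
    """Deterministically assign a visitor to a landing page variant.

    Hashing the visitor ID keeps the split roughly even across variants
    and ensures the same visitor always sees the same version.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    index = int(digest, 16) % len(variants)
    return variants[index]

# Example: route a visitor, then serve the matching landing page.
variant = assign_variant("visitor-12345")
print(f"Show landing page {variant}")
```

Deterministic assignment matters because a visitor who bounces between versions on repeat visits muddies the comparison you're trying to make.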

Funneled A/B Testing: From Broad to Fine-Tuned

Applying the principles of a marketing funnel to your split testing is the best way to get accurate results. Keep in mind that you're looking for ways to optimize your landing page so that it converts more visitors, and A/B testing will highlight each successive change that proves more effective.

Begin with broad strokes. Create two completely different landing pages, and monitor your results to learn which version captures more conversions. Once you have an effective foundational design, you can start testing smaller changes, some of which can deliver big results.

The key to fine-tuned A/B testing is knowing exactly what you're testing and why. Have a hypothesis, and then test to prove or disprove it. For example, you might test a color change to see if visitors stay longer on the page, or alter the wording on your CTA to find out whether one version gets more clicks.
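For the CTA wording example, one common way to judge the result is a standard two-proportion z-test on click-through rates. The sketch below uses made-up counts and isn't tied to any specific testing tool:

```python
from math import sqrt, erf

def two_proportion_z_test(clicks_a, visitors_a, clicks_b, visitors_b):
    """Compare click-through rates of two CTA versions.

    Returns the z statistic and a two-sided p-value; a small p-value
    suggests the difference is unlikely to be random noise.
    """
    p_a = clicks_a / visitors_a
    p_b = clicks_b / visitors_b
    p_pool = (clicks_a + clicks_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Hypothetical counts: CTA version A vs. version B.
z, p = two_proportion_z_test(clicks_a=120, visitors_a=2400,
                             clicks_b=165, visitors_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With numbers like these, a p-value well under 0.05 would support keeping the new wording; anything marginal is a sign to keep collecting data before declaring a winner.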

A few examples of small changes you can measure with A/B testing include:

  • Images
  • Button colors
  • Background shades
  • Headlines
  • Layout
  • CTAs

As you test various elements of your landing page, remember that you're running an experiment, and you need a "control" to measure results effectively. Make sure you're testing changes against a static version of your landing page for best results.

This type of testing is useless without analytics that provide detailed results. Optimal A/B testing involves analyzing your results as far down your sales funnel as possible, from visits to actual sales.

Set up your analytics to demonstrate how different versions of your landing page, or aspects of your page, affect metrics such as click-through rates, signups, traffic-to-lead conversions, demo requests, contacts for more information, and, ultimately, sales. When you use Adobe Analytics as the reporting source for your Adobe Target tests, you'll be able to filter results based on any specific metric or target audience that's already contained in the Analytics tool.
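Whatever tool records the data, the underlying idea is simply to roll funnel events up by variant. The sketch below is a generic illustration with a hypothetical event log and made-up stage names; it is not Adobe Analytics or Adobe Target code:

```python
from collections import Counter

# Hypothetical event log: (variant, funnel_stage) pairs exported from
# whichever analytics tool records your test.
events = [
    ("A", "visit"), ("A", "signup"), ("A", "visit"), ("A", "sale"),
    ("B", "visit"), ("B", "visit"), ("B", "signup"), ("B", "signup"),
]

FUNNEL = ["visit", "signup", "sale"]

def funnel_report(events):
    """Count each funnel stage per variant and report conversion
    from the top of the funnel (visits) to each later stage."""
    counts = Counter(events)
    for variant in sorted({v for v, _ in events}):
        visits = counts[(variant, "visit")] or 1  # avoid divide-by-zero
        for stage in FUNNEL:
            n = counts[(variant, stage)]
            print(f"{variant} {stage:>6}: {n:3d}  ({n / visits:.0%} of visits)")

funnel_report(events)
```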

Rinse and Repeat: Keep Testing for Best Results

In A/B testing, the results are not always clear-cut. Many marketers give up on the strategy after an unsuccessful run, or when several tests deliver marginal results. But the most effective way to use A/B testing is to view it as a journey, not a destination.

Online marketing is constantly evolving, and your split tests should reflect the changing landscape as well as your own efforts to fine-tune your landing page. Continue to experiment with A/B testing best practices, and you'll end up with a well-optimized landing page that earns its keep.