I won’t overstate the obvious. You have to test, test, and test some more. There’s a reason “always be testing” has become the new “ABCs” (ABTs, more accurately) for digital marketers. Whether it’s a pay-per-click (PPC) campaign, email messaging, a promotional offer, page design, or countless other aspects of your site, your app, and your various extensions and marketing platforms, there’s always something that can, and should, be tested, reviewed, adjusted, and optimized. It’s that simple. So simple, in fact, that you’re almost definitely already doing it, even if it’s just some basic incidental testing.

Once you begin testing, there’s no doubt you’ll have a few wins, even if they’re incidental, and often with surprising results! So you’ve had a few victories with “red or green,” “image or no image,” “20 percent off,” or “buy one, get one.” To elevate testing like this into optimization territory, you’ve got to think about the various steps in your site’s purchase flow. How do consumers end up on your site? Where do they go next? How, during the process, are they responding and reacting to your content, offers, and touch points? Maybe it’s an emotional reaction or a completely intuitive response, something you’d never have arrived at through A/B testing alone. And sometimes it’s not as obvious as you might think. Sometimes you can’t get there without looking over the consumer’s shoulder. Literally.

Here’s where I often see another stumbling block to moving from incidental testing to full-blown optimization: integrating usability testing. For organizations, the testing process should be an iterative, multidimensional, data-driven cycle. Based on the results from your tests, you optimize, make more hypotheses, and test again. Much of this naturally emerges from the numbers, but sometimes it goes deeper; that’s where user testing comes in. The ability to peek over consumers’ shoulders, watch their paths and real-time decision-making, and hear their unfiltered, untrained feedback is priceless. Traditionally, it has also carried a heavy price tag, both financially and temporally. But now is not the time to slow the cycle down.

One company helping to bridge that gap is UserTesting, an on-demand, online usability testing service that lets organizations like yours peek at target consumers while they’re actively engaged with your website’s content and services. It’s quick, it’s cost effective, it works, and everyone’s using it. UX pros are tapping into its innovative methods to test wireframes and prototypes, product managers are watching prospective customers initiate a search for their category to observe the natural path in, and even app and game developers are getting in on the action. Does their target market think the game is fun? Cool? Is the app easy enough to use on the go? Would customers pay $1.99 for it? Think of it as a virtual one-on-one focus group, one that delivers results in about an hour from the time you initiate your test.

A good example? OpenTable, the popular restaurant reservation portal, was already highly committed to testing and optimization. In 2009, the company started tapping into UserTesting for quicker usability feedback. Results that once took two weeks now took about an hour. (Side note: about 79 percent of UserTesting partners get their results in 60 minutes or less; you can’t get a pizza that fast.) What were they looking for? Plenty, including the reasons users abandon the site before making a reservation, how reviews influence bookings, and what information potential diners need to choose a particular restaurant. By “sitting in” with target users via video recording, OpenTable was able to test several site sections, including its search feature, info pages, and even the sign-up process. The result was a smoother, more seamless experience for diners with less development downtime and, ultimately, higher conversion rates based on these insights.

And it’s as cost efficient as it is time efficient. A simple test can be set up in 3–5 minutes, and programs start around $100 for a three-person test. So check that user testing box: even if you can’t roll out the red carpet for your users personally, you can still pulse-check a critical mass and feed those results into your testing cycle. User testing can really move the needle, helping marketers understand the whys behind the numbers, better informing what, when, where, and how to test, and making those results matter even more.

Now that you’ve got user testing in place, let’s talk about those wins again. Another way to put them to work is to broadcast your user-based victories (shout them from the rooftop, more accurately) to get the kind of buy-in you need for the next and equally important step: strategic alignment. Despite its importance, even some of the most digitally savvy organizations still struggle to get their testing ducks in a row. Strategic alignment across all testing platforms, practices, and executions ensures a well-informed, well-constructed campaign with more meaningful, actionable results. Testing in a vacuum is better than no testing at all, but it just can’t compete with this multichannel, cross-departmental approach.

We see this over and over at Adobe. As testing programs evolve, it’s more crucial than ever that the overall organization and its marketing arms pull in resources from all key facets of the business. It’s not just knowing what to test (you likely already have that down, or at least a well-informed lead); it’s making sure you’ve got all hands on deck, pulling in the same direction and ensuring more germane results. Or maybe you don’t know what to test: what’s getting in the way, what’s keeping customer conversion low, why people just aren’t buying. Rally the troops, take the temperature of the extended crew, gauge their sentiments, and see what’s keeping them up at night.

Testing is critical. I’ve said it over and over, and I’ll continue to drive that point home, though I don’t think anyone’s arguing. And usability testing provides a critical piece of the testing and optimization equation for organizations that don’t have the time or the budget, or that simply want to be more real time in their decision making and optimization strategies.

Whether you’re looking for ideas on where to start your testing program or are actively testing and looking to kick it up a notch, this is a great place to start. Give it a whirl and tell me what you’re implementing, and whether the results surprise you. We’re working on some great case study assessments with a few truly innovative companies, so there’s plenty more to come on the topic.