One of my lifetime aspirations is to be a better multitasker. Although I can be very efficient at swiftly checking off tasks in my daily workload, I’m not always as good as I would like to be at balancing one activity while focusing on another. In general, we all confront certain drawbacks when trying to juggle multiple tasks, and those drawbacks play out in the world of marketing just as they do in our daily lives.

For example, I recently saw a joint Columbia/Stanford University study that found that having too many choices can be demotivating. A large segment of the shopping population would prefer to find a single option in the sock section of a department store rather than be overwhelmed by the multitude of variations and array of products available in most stores today. This is a large part of the motivation behind optimization: determining the most effective rules for whittling down the various content, product, and experience options on your digital channels to a more personalized display. The goal is to reduce the friction or disassociation that a digital visitor can feel when presented with an overwhelming and seemingly generic number of options.

In contrast to the preferences of a consumer, as test optimization heroes, we would love to be multitaskers. The more well-designed tests we can get out the door, the more reporting data we’ll have to interpret to determine the most accurate (and the most inaccurate) targeting strategies and opportunities for increasing return on investment (ROI). However, most current testing solutions limit the number of tests or testing activities that can run at the same time or in the same location.

This is the result of inflexible implementation and a lack of customization within the test design process. These “solutions” provide no adjustment or customization within their single-line-of-code implementation, which prohibits the creation of multiple tests in one location without test collisions. Another issue is that their segmentation is not granular or customizable enough to distinctly place key subsegments of the population into discrete test or targeted experiences. There are also no built-in controls to help regulate the purity of a test’s results or protect against one test impacting the performance of another.

Many solutions today also provide only a handful of basic, out-of-the-box success metrics or key performance indicators (KPIs) that can be measured within the reports. That does not accurately gauge the relative impact of a test variation on subsequent experiences, which inhibits understanding of the factors leading to a main conversion event. Because of an inflexible implementation and lack of options, users of these solutions are confined to a full-service-only model, where the vendor must code and execute the test design in the back end of the customer’s site, with only limited shared development resources on the vendor side that cannot scale to support a rapid, concurrent testing and targeting velocity. Worse still, in many cases the tool itself simply puts an arbitrary cap on the number of active tests. A tool’s capacity to enable efficient, high-volume concurrent testing should be a primary factor and litmus test for choosing the right tool for growing your program, scaling your operation, and sharing your optimization success across your organization. The solution must work for you, not the other way around.

The Adobe Target engineering and product teams have spent a substantial amount of development time on safeguards that preserve accuracy in concurrent testing. I’m proud to say that our customers scale to hundreds of concurrent tests in more advanced programs, and in some rare cases even into the thousands, with a focus still placed on effective, accurate test design within our guided linear workflow infused with best practices. Companies using our solution have confidence in their testing because of the levels of prioritization, segmentation, customization, and alerts that we provide to ensure that our reporting is as accurate as possible.

To begin, Target offers a priority setting that can be configured when creating a test, so if an audience falls into multiple tests, one will take precedence. There are also activity collision alerts that inform the test designer, before a test is pushed live, if it could impact other active tests. Our targeting ability within the test setup process exists at three levels (campaign, experience, and location) to more effectively orchestrate traffic through distinct experiences or series of experiences, and our ability to custom-set the percentage of traffic delivered within a split test or multiple-variation scenario means that traffic can be more discretely funneled through the right selection of experiences to avoid collisions.
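To make the idea concrete, here is a minimal, hypothetical sketch of how priority-based resolution and custom traffic splits can work in principle. The interfaces, names, and numbers are my own illustrations, not Adobe Target’s actual API or implementation.

```typescript
// Hypothetical model of priority-based activity resolution and custom
// traffic splits. Names and structures are illustrative only.

interface Activity {
  name: string;
  priority: number;                       // higher value wins when activities overlap
  location: string;                       // the placement the activity targets
  trafficSplit: Record<string, number>;   // experience name -> share of traffic (sums to 1)
}

// When a visitor qualifies for several activities at the same location,
// deliver only the highest-priority one, avoiding a collision.
function resolveActivity(qualified: Activity[], location: string): Activity | undefined {
  return qualified
    .filter((a) => a.location === location)
    .sort((a, b) => b.priority - a.priority)[0];
}

// Funnel a visitor into one experience according to the configured split.
// A real system would derive visitorBucket from a stable visitor ID hash.
function assignExperience(activity: Activity, visitorBucket: number): string {
  let cumulative = 0;
  for (const [experience, share] of Object.entries(activity.trafficSplit)) {
    cumulative += share;
    if (visitorBucket < cumulative) return experience;
  }
  return Object.keys(activity.trafficSplit)[0]; // fallback for rounding drift
}

// Example: two activities both targeting the same homepage hero location.
const activities: Activity[] = [
  { name: "hero-copy-test", priority: 10, location: "homepage-hero",
    trafficSplit: { control: 0.5, variantA: 0.5 } },
  { name: "hero-offer-test", priority: 5, location: "homepage-hero",
    trafficSplit: { control: 0.34, variantA: 0.33, variantB: 0.33 } },
];

const winner = resolveActivity(activities, "homepage-hero");
if (winner) {
  console.log(winner.name, assignExperience(winner, Math.random()));
}
```

In this sketch, only the higher-priority activity is ever delivered at the contested location, which is the basic mechanism that keeps overlapping tests from contaminating one another’s results.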

In addition, the granular customization of audiences to target, based on all available first-, second-, and third-party data, or even persistent aggregate profiles, means that the rules of engagement within concurrent tests can be more specific to deflect collisions. We even provide an option to include or exclude audience members who took part in previous tests, which allows for multipage and cross-channel testing options and scalability. There is also basic and advanced filtering of the results, including the ability to apply synchronized (no-variance) analytics audience segments and success metrics to more effectively drill down into the reporting. This gives you a better understanding of your customer segments and lets you exclude extreme results that might skew winning experiences. Finally, the flexible implementation lets you efficiently design advanced test and targeting scenarios without the development resources required by other, inflexible products with rigid single-line-of-code implementations (which require unscalable vendor maintenance and administration behind the scenes).
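The include/exclude idea is simple to express in code. The sketch below shows one way a visitor’s prior test participation could gate entry into a new concurrent test; the profile shape and helper names are assumptions for illustration, not Adobe Target’s actual data model.

```typescript
// Hypothetical sketch: include or exclude visitors from a new test based on
// which previous tests they have already seen.

interface VisitorProfile {
  id: string;
  pastActivities: Set<string>;   // names of tests the visitor has already entered
}

interface AudienceRule {
  requireAll?: string[];         // visitor must have seen all of these tests
  excludeAny?: string[];         // visitor must have seen none of these tests
}

// Decide whether a visitor is eligible for a new concurrent test without
// being contaminated by (or missing a prerequisite from) earlier activities.
function isEligible(profile: VisitorProfile, rule: AudienceRule): boolean {
  const sawAllRequired = (rule.requireAll ?? []).every((t) => profile.pastActivities.has(t));
  const sawNoneExcluded = (rule.excludeAny ?? []).every((t) => !profile.pastActivities.has(t));
  return sawAllRequired && sawNoneExcluded;
}

// Example: a follow-on checkout test that should only include visitors who
// already went through the homepage test, but never those in a pricing test.
const visitor: VisitorProfile = {
  id: "abc-123",
  pastActivities: new Set(["hero-copy-test"]),
};

console.log(
  isEligible(visitor, { requireAll: ["hero-copy-test"], excludeAny: ["pricing-test"] }),
); // true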

When considering a test optimization solution, be sure you’re evaluating its ability to handle rapid concurrent test scenarios, and ask the right questions about the solution’s flexibility. Ask to speak to other customers, to visit other customers’ websites, or to review published case studies where scalable concurrent testing is in full practice with these solutions, so you’re making the best decision for your organization. Don’t find yourself in a rut when you’re ready to ramp up your program and you don’t have the right tools, strategy, or support to execute at the depth and speed required to expand your program and success rapidly.
