In my last article, we examined what accuracy in digital marketing is and why it is important for a successful marketing optimization program. We also looked at how the quality of your data and analysis can affect the accuracy of your results. In this post we’ll examine some of the safeguards that can be put in place to ensure the accuracy of your tests, and look at ways to convey the information gained from your targeting and testing.

Anomaly or Extreme Order Filtering

Few tools on the market today leverage the benefits of anomaly or extreme order filtering for the accuracy of results. Anomaly or extreme order filtering is the ability to automatically identify and remove outlier results that can severely skew result accuracy. This is a critical feature, particularly in the retail space, where extreme orders can significantly change which content appears to be winning, and can dramatically inflate or deflate the apparent conversion or revenue lift in your results.

Tools like Adobe Target use built-in filters that automatically identify and eliminate data outliers that may skew your results and sabotage your efforts. Eliminating these anomalies affords a more efficient and accurate view of your test results, allowing you to further pinpoint key customer segments and the content that resonates most with them.

As a basic rule, it is a good idea to remove outliers that are more than two standard deviations from the mean, but these thresholds can also be customized based on your business requirements. It is important to understand the effect that removing certain data points can have on your results.

There is a fine line between removing outliers from your data and deleting legitimate test results, and it is important to understand where that line lies. This type of filtering can be difficult to accomplish in software packages without this built-in capability.
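To make the two-standard-deviation rule concrete, here is a minimal sketch of extreme order filtering. The function name, the threshold parameter, and the sample order values are all hypothetical; real tools apply this kind of filter automatically and let you tune the cutoff.

```python
import statistics

def filter_extreme_orders(order_values, num_std=2.0):
    """Remove orders more than `num_std` standard deviations from the mean.

    A minimal sketch only; the 2.0 default mirrors the basic rule above,
    but the threshold should be customized to your business requirements.
    """
    mean = statistics.mean(order_values)
    stdev = statistics.stdev(order_values)
    low = mean - num_std * stdev
    high = mean + num_std * stdev
    kept = [v for v in order_values if low <= v <= high]
    removed = [v for v in order_values if not (low <= v <= high)]
    return kept, removed

# A single $5,000 order among typical $40-60 orders dominates the mean
# and could make the wrong experience look like the winner:
orders = [42, 55, 48, 51, 39, 60, 44, 5000]
kept, removed = filter_extreme_orders(orders)
```

Note that the extreme order itself drags both the mean and the standard deviation upward, which is exactly why a fixed dollar cutoff chosen per business can be preferable to a purely statistical rule.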

Mutual Exclusivity

Another important tactic, mutual exclusivity, lets you easily pick and choose which customers to include in or exclude from specific tests or programs. People who participate in multiple tests may be influenced by the content of one, and it is important to be able to identify the effect this may have on your test results. In addition to excluding participants from your tests, you can compare their responses to others’ in order to incorporate the information gained from their activities without skewing the entire program. You can also run a test-within-a-test scenario, which reveals further correlations between your hypotheses and your results. This allows you to better understand how one test might affect another, and how customers might respond to a series of marketing campaigns run in tandem.
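One common way mutual exclusivity is implemented under the hood is deterministic bucketing: each visitor is hashed into exactly one test, so no visitor ever sees two at once. The sketch below assumes hypothetical test names and visitor IDs; testing tools typically expose this as a configuration option rather than asking you to bucket visitors yourself.

```python
import hashlib

def assign_test(visitor_id, tests):
    """Deterministically assign a visitor to exactly one test.

    Hashing keeps the assignment stable across visits, and because each
    visitor maps to a single bucket, the test populations stay mutually
    exclusive. A sketch only, not any specific tool's mechanism.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(tests)
    return tests[bucket]

# Hypothetical concurrent tests that should not share participants:
tests = ["homepage_hero_test", "checkout_button_test", "nav_menu_test"]
assignment = assign_test("visitor-12345", tests)
```

The same visitor ID always yields the same assignment, which also makes it easy to later compare the responses of each isolated population against the others.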

Targeting at the campaign, location, and individual experience level lets you clearly define the rules for including or excluding particular visitors within a test. Campaign-level targeting lets you target the entire test to a specific group based on available segments or variables, so that only people who meet certain criteria are included within a particular campaign or series of tests. Location-level targeting lets you show content in a particular location only when the visitor meets certain real-time conditions, limiting when certain offers are displayed based on the criteria dictated at that location. Experience-level (or offer-level) targeting lets you serve particular content to particular visitor segments within the same test, and is immensely useful in landing page campaigns. This level of fine-tuning and specification in the test design and setup process contributes to the richness and accuracy of results, even within a single test.
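The layering of these targeting levels can be sketched as nested rule checks: a campaign-level rule decides whether the visitor enters the test at all, then experience-level rules decide which content that visitor sees. The rule names, visitor attributes, and offer names below are hypothetical, for illustration only.

```python
def choose_experience(visitor, experiences, campaign_rule):
    """Pick the first experience whose rule the visitor satisfies,
    but only after the campaign-level rule admits them at all."""
    if not campaign_rule(visitor):
        return None  # visitor is excluded from the whole campaign
    for name, rule in experiences:
        if rule(visitor):
            return name
    return None

# Campaign level: only returning visitors enter this test at all.
campaign_rule = lambda v: v.get("returning", False)

# Experience level: different content per segment within the same test.
experiences = [
    ("loyalty_offer", lambda v: v.get("loyalty_member", False)),
    ("default_offer", lambda v: True),  # fallback for everyone else
]

visitor = {"returning": True, "loyalty_member": True}
shown = choose_experience(visitor, experiences, campaign_rule)
```

A location-level rule would slot in the same way, as one more predicate evaluated against real-time conditions before an offer is displayed at that spot on the page.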

Reporting Capabilities

The ability to manipulate or filter results based on different groups or variables within your test population is also a huge factor in the accuracy of your analyses. With solutions like Adobe Target, you can easily view the results of your testing based on different control groups or timeframes, or in a visual, graphical context. You can also set and change your control group. The control is what you would define as your baseline for the test, or the expected result from your efforts. By changing which group serves as the control in your testing, you can look at your results in a different way based on the expected behavior of that group. Making these dials accessible and easy to adjust allows for more efficiency and confidence in your findings, and the ability to share them with key stakeholders faster.
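What "changing the control group" does to your numbers can be shown with a small lift calculation: every experience's lift is measured relative to whichever group you designate as the baseline, so swapping the control re-baselines every figure in the report. The experience names and counts below are made up for illustration.

```python
def conversion_lift(results, control):
    """Compute each experience's relative conversion lift over the control.

    `results` maps experience name -> (conversions, visitors). A sketch of
    the arithmetic only; reporting tools compute this for you.
    """
    control_conv, control_vis = results[control]
    control_rate = control_conv / control_vis
    lifts = {}
    for name, (conv, vis) in results.items():
        rate = conv / vis
        lifts[name] = (rate - control_rate) / control_rate
    return lifts

results = {
    "control":   (50, 1000),  # 5.0% conversion rate
    "variant_a": (65, 1000),  # 6.5% conversion rate
}
lifts = conversion_lift(results, control="control")
```

Passing `control="variant_a"` instead would flip the sign and scale of every lift, which is exactly the "different way of looking at results" that an adjustable control group provides.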

And, of course, a final, and often overlooked, piece of ensuring accuracy within your program as a whole is your ability to convey your results accurately to stakeholders. You need robust custom reporting functionality that allows you to quickly generate reports providing a comprehensive view of your tests and their results. Good data and analysis are worthless if they are not delivered in an understandable way, and many software packages lack the ability to effectively create summaries of tests, instead opting for unwieldy CSV files to convey results. Custom lists and reports and robust data visualization increase the flexibility with which you can accurately report your results.

So what do you think? How does your organization ensure accurate results from your testing efforts? What other factors play into accuracy, and how do you account for them in your testing?