Just as in life, uncertainty in online marketing can make our best plans worthless. In my February column for Search Engine Land, I discuss volatility, a measure of uncertainty. Volatility, which can be defined as fluctuation in performance due to unpredictable events in the marketplace, is often overlooked by digital marketers when testing new strategies, and it can derail new experiments in marketing and advertising. In a world where measurement, ROI and accountability are becoming ever more important, nuances like volatility must be understood and factored into test and campaign results. After showing how volatility can affect campaigns through a thought experiment, I offer ways that marketers can anticipate volatility and factor it in ahead of time.

–Dr. Siddharth Shah
Sr. Director, Business Analytics

Designed To Fail: Why Many Tests Give You Meaningless Results

You built out your new ad copy, tested a bidding strategy, and tracked web and store sales to measure the online-to-offline effect; however, in the end you got the worst outcome possible – inconclusive results.

A negative result would have been better; at least you would have known that your hypothesis was wrong or that your strategy was not effective. But an inconclusive result tells you nothing, which can be incredibly frustrating for a marketer.

There are many reasons why a well-designed test might fail. For example, seasonal effects might be ignored, the dataset might be too small, or the marketplace might change during the test.

However, a very common error in test design is not accounting for volatility – fluctuations in performance due to unpredictable events in the marketplace.

In this post, I shall delve into the issue of volatility: how it can lead to inconclusive test results and, finally, how you can mitigate its effect on your test.

A Thought Experiment

To understand the issue better, let us assume that you want to test the hypothesis that online SEM spending leads to offline store sales. To test this hypothesis, you ramp up your online budgets in increments every week.

Your plan is to run the test for 5 weeks, collect the data, do a regression analysis and answer the question, "What does one dollar spent online lead to in offline sales?" Now let us put some real numbers into this thought experiment.

Daily offline store revenue = $550,000

Daily baseline online SEM spend = $10,000

Your plan is to spend $10,000, $15,000, $20,000, $30,000 and $40,000 per day on SEM in weekly increments, i.e. you spend $10,000 per day in week 1, $15,000 per day in week 2, and so on.

Taking the thought experiment further, let us assume that one dollar spent online leads to $3 in offline revenue. If this were the case, then your offline store revenue would look as follows:

Week   Daily Store Revenue   Daily Online-Attributable Store Revenue   Daily Total Store Revenue
1      $550,000              $30,000                                   $580,000
2      $550,000              $45,000                                   $595,000
3      $550,000              $60,000                                   $610,000
4      $550,000              $90,000                                   $640,000
5      $550,000              $120,000                                  $670,000

If we were to plot these numbers on a chart of total store revenue against daily SEM spend, the points would fall on a straight line with a slope of 3, telling us that one dollar spent online leads to $3 in offline revenue.
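As a quick sanity check, here is a minimal sketch in Python (numpy assumed; all figures are the hypothetical numbers from the thought experiment above) showing that a regression on these noiseless numbers recovers the slope of 3 exactly:

```python
# Minimal sketch: fit the noiseless thought-experiment data.
# All figures are the hypothetical numbers from the example above.
import numpy as np

daily_sem_spend = np.array([10_000, 15_000, 20_000, 30_000, 40_000])
baseline_revenue = 550_000          # average daily offline store revenue
offline_per_dollar = 3              # assumed $3 offline per $1 of SEM spend

total_revenue = baseline_revenue + offline_per_dollar * daily_sem_spend

# Ordinary least-squares fit; with no noise the slope comes back as 3.0
slope, intercept = np.polyfit(daily_sem_spend, total_revenue, deg=1)
print(f"slope = {slope:.2f}, intercept = ${intercept:,.0f}")
# -> slope = 3.00, intercept = $550,000
```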

Volatility & Its Effect On Your Experiments

Volatility means that your store revenue will never be exactly $550,000 every day. Instead, it will be a number close to the $550,000 average that fluctuates daily.

It also means that the online contribution will never be exactly $3 for every dollar spent, but rather a number that fluctuates around $3. Let us assume that the daily volatility in the offline store revenue is 15% of the average. The experimental results will now look very different.

Plotting total revenue against spend now, it is unclear whether there is any relationship at all. Further, the goodness-of-fit measure (R-squared) is just 5.25%, indicating that we cannot be confident the regression is meaningful.
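To reproduce the problem, here is a hedged sketch of the same experiment with 15% daily volatility added to store revenue (15% of the $550,000 average is roughly $82,500 per day). The exact R-squared varies from run to run – the article's run came out at 5.25% – but it is consistently too low to trust:

```python
# Sketch: the same experiment with 15% daily volatility on store revenue.
import numpy as np

rng = np.random.default_rng(0)      # fixed seed so the run is repeatable

weekly_spend = [10_000, 15_000, 20_000, 30_000, 40_000]
baseline, true_slope, volatility = 550_000, 3.0, 0.15

spend = np.repeat(weekly_spend, 7)  # 5 weeks x 7 daily observations
noise = rng.normal(0, volatility * baseline, size=spend.size)
revenue = baseline + true_slope * spend + noise

slope, intercept = np.polyfit(spend, revenue, deg=1)
r = np.corrcoef(spend, revenue)[0, 1]
print(f"estimated slope = {slope:.2f}, R^2 = {r**2:.1%}")
# The true slope of 3 is buried: estimates swing widely and R^2 stays low.
```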

So why did this happen? The 15% volatility in offline store revenue – daily swings on the order of ±$82,500 on a $550,000 base – masked any effect that the online SEM spend had. Note that these swings are comparable to, or larger than, the online-attributable revenue in every week of the test.

For instance: if the SEM spend contributed $60,000 in store revenue but store revenue was $60,000 lower on the same day due to volatility, the two effects would cancel each other out and you would see no change in total revenue.

Clearly, this would be an expensive, time-consuming experiment leading to inconclusive results. Moreover, this could happen with any experiment, including an ad-copy test, a landing page test, a promotion, etc.

What Can Advertisers Do To Prevent This?

  • Before running the experiment, measure the volatility of the variable you plan to measure. In our example, we would measure the volatility of total store revenue.
  • Check the minimum impact your test would need to have in order to be measurable. In our example, we would need to estimate the minimum impact online spend can have on store revenue in order for us to measure the effect above the noise.
  • Another parameter to experiment with is the number of days you run the experiment. Experiment duration is always a trade-off, and there are always conflicting issues to be considered. Running the experiment for a longer time period might give you a more robust answer, but would you be willing to wait longer for the result? Further, would marketplace forces such as CPC inflation and seasonality lead to more or less volatility over the longer duration?
  • Test your assumptions before running an experiment. You can build a simple experimental simulation in Excel or a statistical package like R and check whether your test will give you meaningful results; a sketch of such a simulation appears after this list.
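To make the last bullet concrete, here is a minimal sketch of such a pre-test simulation in Python rather than Excel or R. The detection_rate helper is illustrative, and all figures are the hypothetical numbers from the thought experiment. It reruns the noisy experiment many times and reports how often an ordinary least-squares regression detects the effect at 95% confidence, which also illustrates the duration trade-off from the third bullet:

```python
# Sketch of a pre-test simulation: how often would this experiment
# actually detect the effect, given the assumed volatility?
import numpy as np
from scipy import stats

def detection_rate(weeks_per_level=1, volatility=0.15, runs=2_000,
                   baseline=550_000, true_slope=3.0, alpha=0.05):
    """Fraction of simulated experiments with a significant slope."""
    rng = np.random.default_rng(42)
    spend_levels = [10_000, 15_000, 20_000, 30_000, 40_000]
    spend = np.repeat(spend_levels, 7 * weeks_per_level)  # daily points
    hits = 0
    for _ in range(runs):
        noise = rng.normal(0, volatility * baseline, size=spend.size)
        revenue = baseline + true_slope * spend + noise
        hits += stats.linregress(spend, revenue).pvalue < alpha
    return hits / runs

# Duration trade-off: more weeks per spend level, better odds of a
# conclusive result -- at the cost of a longer, pricier test.
for weeks in (1, 2, 4):
    print(f"{weeks} week(s) per spend level -> effect detected "
          f"{detection_rate(weeks_per_level=weeks):.0%} of the time")
```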

Following these steps will help you avoid the heartache of expensive, time-consuming and inconclusive tests.

