With a new year comes the opportunity to review what we accomplished the previous year and ponder what we hope to achieve in the new year. In an effort to reconcile that difference, we often make resolutions for what we will change.

“I will read one book each month.”
“I will go to the gym twice a week.”
“I will call my mother once a week.”

While we start strong in January, by the end of March we’ve read the prologue to Moby Dick, the sight of the gym on the way to work reminds us of the expensive membership we’re not using, and mom won’t stop calling to ask why we’re not calling.

In an effort to help you make resolutions you can keep, I have compiled a short list of testing and optimization resolutions. After working with dozens of testing organizations, I have had the opportunity to think about where I’ve seen testers succeed and fail. Below are six easy resolutions you can keep that will make a difference to your testing program.

1. “I won’t focus on tracking clicks. Instead I will focus on revenue.”

If revenue comes from a subscription confirmation, I will focus on confirmations. If I have a CPM ad-based model, I will focus on driving page views and ad impressions. I will not let my focus shift to secondary metrics that have only correlative relationships with revenue.

2. “I will focus on tests that generate learning instead of just tests that produce winners.”

Finding a winning test recipe is great in that it elevates you from point A to point B, but some learning tests will help you understand how the change impacts the page. This learning may help create a roadmap that can move you beyond point B to a point E or F you haven’t considered.

3. “I will not waste my time trying to get my testing numbers to match my analytics numbers.”

I’ve said quite a bit about this in the past so I won’t dwell on it for long. While it’s comforting to see your visitor counts in a testing campaign line up well with your analytics data during the same period, that’s not what matters. What matters is that your testing data allows you to see comparative rates of change between the test cells. A 10% lift is the same rate of change regardless of whether the base conversion rate is 1.4% or 14%.
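
To make that arithmetic concrete, here is a minimal sketch in Python using the rates mentioned above (the helper function name is mine, not from any testing tool):

```python
def relative_lift(control_rate, variant_rate):
    # Relative (percentage) lift of the variant over the control.
    return (variant_rate - control_rate) / control_rate

# A 10% lift reads the same whether the base conversion rate is 1.4% or 14%.
low_base = relative_lift(0.014, 0.0154)   # 1.4%  -> 1.54%
high_base = relative_lift(0.14, 0.154)    # 14%   -> 15.4%
print(round(low_base, 3), round(high_base, 3))  # 0.1 0.1
```

Both comparisons report the same 10% relative change, which is what lets you compare test cells without worrying about matching absolute counts.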

4. “I will add segments to all my tests.”

A test with no segments gives you a limited, one-dimensional view of how the test cells do in aggregate. When you add segments, you can slice and dice the data to see what resonates with key groups of visitors. Does traffic from my top five referrers respond differently than traffic from everywhere else? Are there specific page layouts that perform better for paid search visitors than for organic search visitors? Can browser segments help me catch a slight technical glitch in Internet Explorer early on before I let that glitch ruin my entire test? In Test&Target, segments are quick and easy to set up, but without them, all of your data goes into one pile and there is no way to split it back out again.
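
As a rough illustration of what segmenting buys you, this Python sketch splits made-up visitor records by traffic source before computing conversion rates; the record fields and segment names are assumptions for the example, not Test&Target output:

```python
from collections import defaultdict

# Made-up visitor records; "segment" and "converted" fields are assumptions.
visitors = [
    {"segment": "paid_search", "converted": True},
    {"segment": "paid_search", "converted": False},
    {"segment": "paid_search", "converted": False},
    {"segment": "organic_search", "converted": True},
    {"segment": "organic_search", "converted": True},
    {"segment": "organic_search", "converted": False},
]

# Tally conversions and visitor counts per segment instead of one big pile.
tallies = defaultdict(lambda: [0, 0])  # segment -> [conversions, visitors]
for v in visitors:
    tallies[v["segment"]][0] += v["converted"]
    tallies[v["segment"]][1] += 1

for segment, (conversions, count) in sorted(tallies.items()):
    print(f"{segment}: {conversions}/{count} = {conversions / count:.0%}")
```

The aggregate conversion rate here is 50%, but the per-segment view shows the two traffic sources behaving very differently, which is exactly the kind of split you lose if segments aren’t captured up front.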

5. “I will not run multiple campaigns on the same page at the same time.”

When a visitor falls into two campaigns on the same page or in the same flow, and those campaigns share the same success metrics, you create an attribution problem. Was it the changes in campaign A or campaign B that really had the impact? This kind of crossover creates undesirable noise in the data. It can be avoided by making the campaigns mutually exclusive: run them at different times, or split your audience into exclusive testing groups such that a visitor can only land in one campaign or the other.
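
One common way to build exclusive testing groups is to hash a stable visitor ID into exactly one bucket. This is a hypothetical sketch (the campaign names and the even split are my assumptions), not how Test&Target assigns visitors internally:

```python
import hashlib

CAMPAIGNS = ["campaign_A", "campaign_B"]  # hypothetical campaign names

def assign_campaign(visitor_id: str) -> str:
    # Hash the visitor ID so each visitor lands in exactly one campaign,
    # and always the same one on repeat visits.
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return CAMPAIGNS[int(digest, 16) % len(CAMPAIGNS)]

# The assignment is deterministic: no visitor ever sees both campaigns.
print(assign_campaign("visitor-123") == assign_campaign("visitor-123"))  # True
```

Because the assignment is a pure function of the visitor ID, each campaign’s results can be read without wondering whether the other campaign contaminated them.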

6. “I won’t ask my Test&Target consultant how to measure bounce rate.”

This is a discussion I have had with many testers. I suppose it is a result of so many people shifting into a testing role having come from an analytics role. While bounce rate is a key metric in the correlation-driven analytics world, its value is questionable in the testing world, where the cause-and-effect relationship between a change and the desired behavior is key. A tested change to a key landing page could do a better job of filtering out unqualified traffic while increasing the volume of qualified traffic through to your ultimate goal. In this scenario your conversion rate may improve even as bounce rate increases. Resolve not to use bounce rate as a proxy when you can measure something directly tied to revenue.
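
That scenario is easy to show with numbers. The figures below are entirely made up for illustration: a landing-page change filters out unqualified visitors, so bounce rate rises while conversion rate improves.

```python
# Illustrative, made-up traffic numbers for a tested landing-page change.
before = {"visitors": 1000, "bounces": 500, "conversions": 50}
after = {"visitors": 1000, "bounces": 600, "conversions": 80}

for label, stats in (("before", before), ("after", after)):
    bounce_rate = stats["bounces"] / stats["visitors"]
    conversion_rate = stats["conversions"] / stats["visitors"]
    print(f"{label}: bounce rate {bounce_rate:.0%}, "
          f"conversion rate {conversion_rate:.0%}")

# Bounce rate rose from 50% to 60%, yet conversion rate rose from 5% to 8%.
```

If you had optimized against bounce rate alone, this winning change would have looked like a loser.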

Testing is a learning process. Take some time and ponder how you can incorporate these resolutions in your optimization program. Just as your personal New Year’s resolutions provide an often-needed kick start to the year, you may find these testing resolutions breathe new life into your testing program, and they don’t require a 2-year gym membership commitment. Keep at it and don’t give up in February.