Internal programs/teams that are all competing for the same prime website real estate often create a roadblock for A/B and multivariate testing. Fortunately, there is a solution for this.

Have you found yourself saying one of these?

  • “We can’t test changing or removing that piece of content because team Z will be upset.”
  • “The primary success metric increased, but we can’t push the ‘winning’ experience live because another team/program had two secondary metrics that were adversely impacted by that experience.”
  • “We can’t A/B/n test the homepage because team X controls this section, team Y controls that section, and team Z controls the rest. There is no way to coordinate this across the three teams.”

Scenario
Companies are often structured so that different programs are managed by different teams, and each team has its own goals/metrics. These various programs/teams are all demanding space on the site. One program/team is focused on its metric of newsletter sign-ups, another is focused on its metric of purchased subscriptions, and yet another is focused on its metric of generating page views in the sports section. How do we balance these sometimes competing interests? Testing is the process that will help you optimize, but there’s more to it.

Many times, in this scenario of numerous goals/metrics, the opposite happens and testing is reduced. Teams are too focused on their own program’s goals and metrics and are unwilling to change their web space – even temporarily – for a test. This internal struggle can push website testing away from high-visibility page placements to smaller, less-viewed pages or sections. That retreat lowers the overall success of the testing program.

Solution To This Specific Scenario
Let’s go one layer deeper to get to the solution. Often annual reviews and bonuses are tied to a team member’s specific metric, so a test that increases one team’s metric while reducing another team’s metric is an inevitable source of conflict. The solution has two parts. First, agree on a single overall success metric for the business and use that as your primary test metric to push winning versions live. Second, senior management needs to update each person/program’s performance goals if a winning test reduces their specific secondary metric. For example, say generating subscription revenue is the primary metric and winning versions are pushed live based on it. If a winning version reduces the number of newsletter sign-ups (a secondary metric), the team whose annual performance goals are tied to newsletter sign-ups should have its objectives adjusted accordingly. When people know their bonuses and annual reviews are safe, internal resistance to testing drops and their focus shifts to the overall success of the company.
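To make the decision rule concrete, here is a minimal sketch in Python of how a test readout could apply it: the ship/no-ship call is made on the single primary metric, and any secondary metrics that regressed are simply flagged so the affected teams’ goals can be adjusted rather than blocking the launch. The metric names and numbers are hypothetical, and real evaluation would also include significance testing, which is omitted here.

# A sketch of the two-part rule described above, with hypothetical metric names.
PRIMARY_METRIC = "subscription_revenue_per_visitor"   # the single overall success metric
SECONDARY_METRICS = ["newsletter_signups", "sports_pageviews"]  # per-team metrics

def evaluate_test(control: dict, variant: dict) -> dict:
    """Decide whether to ship the variant and list secondary metrics it hurt."""
    ship = variant[PRIMARY_METRIC] > control[PRIMARY_METRIC]
    regressions = [m for m in SECONDARY_METRICS if variant[m] < control[m]]
    return {"ship_variant": ship, "adjust_goals_for": regressions if ship else []}

# Example: the variant wins on subscription revenue but lowers newsletter sign-ups,
# so that team's annual objectives get adjusted instead of vetoing the winner.
control = {"subscription_revenue_per_visitor": 0.42, "newsletter_signups": 120, "sports_pageviews": 5400}
variant = {"subscription_revenue_per_visitor": 0.51, "newsletter_signups": 95, "sports_pageviews": 5600}
print(evaluate_test(control, variant))
# -> {'ship_variant': True, 'adjust_goals_for': ['newsletter_signups']}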

Achieving the solution above is obviously easier said than done. Regardless, by attempting to find a solution that works across teams you are almost guaranteed to arrive at a position that is better than ignoring the issue altogether. Once everyone has bought into a single overall success metric and understands they will not be indirectly penalized for working toward it, a testing program can really take shape.
