Internal programs and teams competing for the same prime website real estate often create a roadblock for A/B and multivariate testing. Fortunately, there is a solution.
Have you found yourself saying one of these?
- “We can’t test changing or removing that piece of content because team Z will be upset.”
- “The primary success metric increased, but we can’t push the ‘winning’ experience live because another team/program had two secondary metrics that were adversely impacted by that experience.”
- “We can’t A/B/n test the homepage because team X controls this section, team Y controls that section, and team Z controls the rest. There is no way to coordinate this across the three teams.”
Companies are often structured so that different programs are managed by different teams, each with its own goals and metrics. These programs all demand space on the site: one team is focused on its metric of newsletter sign-ups, another on purchased subscriptions, and yet another on generating page views in the sports section. How do we balance these sometimes competing interests? Testing is the process that will help you optimize, but there is more to it.
In this scenario of numerous goals and metrics, the opposite often happens and testing is reduced. Teams become so focused on their own program’s goals and metrics that they are unwilling to change their web space, even temporarily, for a test. This internal struggle can push website testing away from high-visibility page placements to smaller, less-viewed pages or sections, lowering the overall success of the testing program.
Solution To This Specific Scenario
Let’s go one layer deeper to get to the solution. Annual reviews and bonuses are often tied to a team member’s specific metric, so if a test increases one team’s metric while reducing another’s, conflict is inevitable. The solution has two parts. First, agree on a single overall success metric for the business and use it as the primary test metric for deciding which versions go live. Second, senior management must update each person’s or program’s performance goals when a winning test reduces their specific secondary metric. For example, suppose subscription revenue is the primary metric and winning versions are pushed live based on it. If a winning version reduces newsletter sign-ups (a secondary metric), the team whose annual performance goals are tied to newsletter sign-ups should have its objectives adjusted accordingly. When people know their bonuses and annual reviews are safe, internal resistance to testing drops and focus shifts to the overall success of the company.
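The two-part rule above can be sketched as a simple decision function: the agreed primary metric alone decides whether a variant ships, while negative movements in secondary metrics are reported for goal adjustments rather than used as vetoes. This is a minimal illustration; the function name, metric names, and threshold are all assumptions for the example, not part of any particular testing platform.

```python
# Illustrative sketch (all names and thresholds are hypothetical):
# ship based only on the agreed primary metric; flag, but never veto on,
# secondary-metric declines so management can adjust the affected
# teams' annual goals.

def decide_ship(primary_lift, secondary_lifts, min_lift=0.0):
    """Return a ship decision plus any secondary metrics needing goal adjustments.

    primary_lift: relative change in the primary metric (e.g. 0.04 = +4%)
    secondary_lifts: dict mapping secondary metric name -> relative change
    """
    ship = primary_lift > min_lift
    # Secondary metrics never block the launch; negative ones are flagged
    # so the owning team's performance goals can be updated.
    flags = {name: lift for name, lift in secondary_lifts.items() if lift < 0}
    return {"ship": ship, "goal_adjustments_needed": flags}


result = decide_ship(
    primary_lift=0.04,  # subscription revenue up 4%
    secondary_lifts={"newsletter_signups": -0.10, "sports_pageviews": 0.02},
)
# result["ship"] is True; newsletter_signups is flagged for a goal adjustment
```

In this example the variant ships because subscription revenue (the primary metric) improved, and the 10% drop in newsletter sign-ups is surfaced as a goal-adjustment item instead of a blocker, which is exactly the trade the two-part solution asks the organization to make.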
Achieving the solution above is obviously easier said than done. Even so, by attempting to find a solution that works across teams, you are almost guaranteed to end up in a better position than by ignoring the issue altogether. Once everyone has bought into a single overall success metric and understands they will not be indirectly penalized for working toward it, a testing program can really take shape.