That’s right, you read correctly — GUN CONTROL. No matter what side of the issue you are on, there are plenty of parallels between changing gun legislation and changing a website.

Gun Control
The internet is saturated with correlative facts supporting both sides of the debate on gun control. For example, one compelling argument against gun control states, “In 1976, Washington, D.C. enacted one of the most restrictive gun control laws in the nation. Since then, the city’s murder rate has risen 134% while the national murder rate has dropped 2%” [1].  While there is a correlation between strict regulations and increased violence, causation is more difficult to prove. On the other hand, a compelling argument for gun control states, “Roughly 16,272 murders were committed in the United States during 2008. Of these…67% were committed with firearms” [2].  Again, correlation is not necessarily causation. In fact, the optimal solution to the gun control debate will never be reached by focusing on correlative metrics alone. Like the issue of gun control, online optimization requires more than correlative data to reach optimal outcomes.

My Background
I’ve learned many important lessons about optimization as I consult for some of the top brands on the internet. I’ve learned from my clients’ successes as well as from their mistakes. I’ve seen many techniques that work well and even more that do not. I’ve watched some of the brightest optimization managers and testing strategists succumb to common pitfalls. My goal is to share some insights so that you can run better optimization programs. While there are many ideas I would love to discuss, my first three posts are dedicated to three important principles of optimization that will help you get started.

Principles of Optimization

  1. Do not base success on correlative metrics
  2. Focus on the best possible metric(s)
  3. Create experiments that focus on causal results

1. Do not base success on correlative metrics

In the heated debate between gun rights and gun control, I believe the core desire on both sides is to mitigate the risk of violence and promote safety. If a change is made in gun legislation, whether it favors gun rights or gun control, the real measure of success is not whether gun-related violence goes down. Rather, success occurs when violence in general goes down. If gun-related violence drops slightly but violence in general triples, neither side is happy with the outcome. The same critical distinction must be made when looking at correlative metrics in the optimization world. If I add a new module to a page, my concern is not the degree to which people interact with the module, but whether that module helps or hurts my site’s overall success. This perspective may require a complete rethinking of the way you operate online, but it can mean the difference between the success and failure of your optimization efforts.

Take, for example, a client email I received last week: “When I look at the link clicks, the second version is winning but it’s the opposite when we look at conversions.” It’s not uncommon for substitute metrics, such as clicks, to trend in a different direction from the true success metric (in this case, leads). My answer is the same every time: which metric is most important?
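
To make the distinction concrete, here is a minimal sketch in Python (all counts are invented for illustration) that scores two hypothetical variants on both the proxy metric and the true success metric:

```python
# Hypothetical A/B test results; every count here is made up for illustration.
variants = {
    "A": {"visitors": 10_000, "clicks": 900, "conversions": 180},
    "B": {"visitors": 10_000, "clicks": 1_200, "conversions": 140},
}

for name, v in variants.items():
    click_rate = v["clicks"] / v["visitors"]       # proxy metric
    conv_rate = v["conversions"] / v["visitors"]   # true success metric
    print(f"{name}: click rate {click_rate:.1%}, conversion rate {conv_rate:.1%}")

# B "wins" on clicks (12.0% vs. 9.0%) yet loses on conversions (1.4% vs. 1.8%).
# If conversions are what the business actually monetizes, A is the real winner.
```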

Consider the following scenario:
A test cell shows a 30% increase in email opens with a 12% decrease in site-wide revenue.

Is this a success? No. Email opens matter only insofar as they relate to site-wide success. I cannot think of any company that directly monetizes the act of opening an email. Companies tend to look at email opens, see the correlation with site-wide success, and make terrible assumptions like, “If I can increase email opens by X%, revenue should increase by Y%.” That would be the same as assuming that if you could decrease the amount of medicine consumed by 30%, sickness would go down by 15%. After all, there is a high correlation between medicine consumed and sickness. Would anyone do that? Probably not.

Yet every day, people set up test after test focused on things like bounce rate or engagement rate with a banner. While these may be interesting, and may even correlate with success, they don’t equal success. Why focus on a proxy when you can focus on the real thing? Consider each of the following testing results separately and decide whether you would deem the outcome a success (a short sketch after the list works through the second case):

  • Interaction with the navigation goes up, but overall site engagement goes down
  • Clicks on a subscription banner increase, but overall subscriptions go down
  • Number of people who make it to step 3 in the subscription funnel increases, but overall subscription revenue plummets
  • Rate of people coming from an email and clicking on module X increases, but site revenue drops
  • Bounce rate decreases, but site revenue decreases
  • Revenue from people who clicked on a recommendation increases, but site-wide revenue falls
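
Every item above has the same shape: a proxy moves up while the success metric moves down. As a rough way to check whether both movements are real, here is a minimal sketch of a pooled two-proportion z-test applied to the second scenario; the counts are invented, and |z| > 1.96 is the usual 5% significance cutoff:

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Pooled two-proportion z-statistic for rate(B) - rate(A)."""
    p_pool = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return ((hits_b / n_b) - (hits_a / n_a)) / se

n = 20_000  # hypothetical visitors per test cell

# Proxy metric: clicks on the subscription banner.
z_clicks = two_proportion_z(1_000, n, 1_200, n)

# True success metric: completed subscriptions.
z_subs = two_proportion_z(400, n, 330, n)

print(f"banner clicks:  z = {z_clicks:+.2f}")  # about +4.4: a real lift
print(f"subscriptions:  z = {z_subs:+.2f}")    # about -2.6: a real drop
# Both movements are statistically real, but only the second should decide
# the test: the variant loses, no matter how good the clicks look.
```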

Optimization requires making tough decisions, and one of them is deciding that a correlative metric is relevant only insofar as it relates to the final success metric. If your goal is to increase revenue, focus on revenue. If your goal is to increase content consumption in paid areas, focus on content consumption in paid areas. More often than people realize, a positive change in a proxy metric can negatively impact your true measure of success. The most common proxy metrics I see derail testing programs are bounce rate, click rate, interaction rate, and email opens.

The metrics above are fine to track with your analytics package, but they probably have little or no place in an optimization program. In analytics, we want to know these metrics so that we can attempt to understand correlative behavior; we observe their historical correlation to success. In the optimization sphere, however, the degree to which any of these metrics matters can fundamentally change the moment we introduce a new user experience.

So the next time you run a test and your organization pushes you to watch bounce rate, work to educate key stakeholders on the principles you’ve learned. As an analyst, testing guru, optimization director, marketing manager, or whatever your role may be, one of your many challenging tasks is to educate the organization. It requires transforming the way people view online success. Just because you have observed the correlative metrics through analytics does not mean those same numbers should be your focus in optimization. Analytics is about observing patterns over time. Optimization is about changing behavior, which often results in entirely new patterns.

Stay tuned for my next posts about focusing on the best possible metric(s) and creating experiments that focus on causal results.