I was having a conversation with a colleague about the best way to assist a customer with their 2012 planning, and it brought up the third bias I deal with on a regular basis: Hyperbolic Discounting, or “the tendency for people to have a stronger preference for more immediate payoffs relative to later payoffs, where the tendency increases the closer to the present both payoffs are.” Besides being the basis for the entire credit card industry, in optimization this mostly shows up as better-versus-best testing.
When groups decide on a roadmap, or the list of tests that they mistakenly refer to as a roadmap, they will often prioritize by whatever is top of mind, or whatever their boss wants. You have all been sitting around thinking that the button should say Buy instead of Buy Now, or you really want to change the background on the promotional images on your front door, or you are convinced the copy on your landing page needs to say X… All of this leads to the desire to test your theory and prove yourself right. You shut down, ignore discipline, and simply try to see if one idea is better than the other. The entire concept of hypothesis testing feeds this failure of human cognition. It works if the only goal is to prove a single point, but it is really inefficient if the goal is to measure the relative value of an action. We forget that we are dealing with more than the day-to-day issues in front of us, and we try to solve a problem today instead of pushing to get the most from our entire program. Even worse, it leads to prioritizing these tests over more efficient, discipline-based tests because of the immediate payoff in the “I am right” reward they offer.
The reason this fails is that the focus is on short-term gain; we often think it will take too much effort, both in resources and especially politically, to try to learn from our efforts. Instead, you end up down the path of simply trying to figure out whether version B is better than version A. It isn’t about figuring out what you should be doing; it is about trying to prove your idea better than the other person’s idea. In some cases, groups will add a few smaller tweaked versions of A or B (which is still better testing), but at the end of the day, your test only answers, “Which is better, version A or version B?” If the concept is bad, it is an inefficient test. If the section you are optimizing is not that important, version B can be massively better than version A yet still be worth less than the least “better” version of a different section of the same page (or a different type of change to the same item); it is still an inefficient test. It is far more valuable to know that if I spend $5, I can get $15, $7, $20, or $30 than to just know I got $7 and be happy with it. True optimization is figuring out the best path, not just measuring the one you are already on. Yes, you get a result, but it is a really inefficient result, and it often leads to further inefficient results as you continue down that path. Even worse is when you force an MVT to throw multiple “better” tests together and simply increase the speed of your suboptimal outcome. There is no way to make this type of testing efficient unless you get ahead of the problem and deconstruct the question of the test.
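The $5 arithmetic above can be made concrete. Using the hypothetical payoffs from the text ($15, $7, $20, or $30 for a $5 spend, with invented action names purely for illustration), a quick sketch shows how much a “better” test that only measures the path you are already on can leave on the table:

```python
# Hypothetical payoffs from the text: spending $5 on one of four possible
# actions could return $15, $7, $20, or $30. Action names are invented.
cost = 5
payoffs = {"copy": 15, "button": 7, "hero_image": 20, "layout": 30}

# "Better" testing: you only measure the one idea you brought to the table.
better_result = payoffs["button"]  # you learn you made $7, and stop there

# "Best" testing: you value the feasible actions against each other first,
# then pursue the one with the highest payoff.
best_action = max(payoffs, key=payoffs.get)
best_result = payoffs[best_action]

opportunity_cost = best_result - better_result
print(best_action, best_result, opportunity_cost)  # layout 30 23
```

The point is not the specific numbers (which are made up), but that the “better” test returns a real, positive result while silently forfeiting the difference to the best available action.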
Best testing takes a different approach. It asks much more fundamental questions, such as “Does my copy matter the most on the page?”, “Which factor of the button is most influential?”, “What are the feasible options?”, or even “Is changing the button the best place to put my resources?” Best testing is about figuring out where the best places are to focus your energy and what the best feasible alternative is, democratizing the entire process so that your outcomes are less biased by any individual or idea. Best testing becomes about the system you have in place to make decisions, not about any individual idea that goes into it. Any system is only as good as the quality of what goes into it, but it is designed to maximize the returns that come out of it. It also makes you deconstruct your questions, one of the single most important skills for a successful program, and challenge the core assumptions and biases that you hold. The key is to figure out which places and which types of changes are BEST relative to each other, so that you can align resources and internal thought toward those points while eliminating waste on things that do not matter. The act of valuing possible actions against each other with causal information is the single greatest way to maximize resources rather than spend them unnecessarily. It challenges you to look at the page and user experience as a holistic item, to treat each component as part of that process, and then to weigh the value of each one AGAINST the others. It isn’t about your idea, or even any individual idea, any more; it is about creating a system by which you can learn and figure out where to focus your energy to get even better concepts and even better outcomes. Who cares if the copy improves the page if it only does so at 1/10th the scale of changing the background, or the main image, or that small section of the page that you never expected?
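One simple way to operationalize “weigh each change type against the others” is to rank candidate changes by their measured relative impact and let that ranking drive where resources go. The lift figures below are invented purely to illustrate the mechanic:

```python
# Hypothetical measured lifts (relative improvement) for different parts
# of the page, as they might come out of a single comparative "best" test.
measured_lift = {
    "promo_copy": 0.01,          # the idea everyone argued about
    "background": 0.10,
    "main_image": 0.06,
    "overlooked_section": 0.08,  # the section no one expected to matter
}

# Rank change types against each other so energy goes where it matters most.
priorities = sorted(measured_lift.items(), key=lambda kv: kv[1], reverse=True)
for name, lift in priorities:
    print(f"{name}: {lift:.0%}")
```

Under these assumed numbers the copy everyone wanted to debate lands at the bottom of the list, which is exactly the kind of resource-allocation answer a better-versus-better test can never give you.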
What is amazing is that you are left with two options from the BEST testing:
1) You were correct — In which case you have confirmation and you still get the same lift.
2) You are wrong — You learn valuable information, and you get MORE lift than what you had before. Even better, it might send you down a path that you weren’t even considering before.
What happens when you are stuck doing better testing is that you get the short-term return, both on your idea and politically by showing that you were correct. You are ignoring the long-term opportunity cost of figuring out what matters most and building on it. It gives you a shiny object to show off to anyone who will listen, but the real question is: what are you giving up by going down that path?
The irony is that everyone always thinks this will take too much time or too many resources, when for the large majority of sites out there it is almost always the exact opposite. Being efficient means testing based on your current resources in a way that maximizes speed, whereas better testing often forces an overly technical solution in order to execute the one idea you are willing to consider. What is true, however, is that you have to shift how people think, and challenge them to understand some new disciplines, for them to accept and execute in this way. It takes a culture, and a person, willing to be “wrong,” and willing to let go of ego for the sake of the site. To paraphrase Kathryn Schulz, “you are already wrong; the difference is simply that you will know that you are wrong.”
What you get out of your optimization program is all about the system you put in place and how much you are willing to challenge yourself and others to find the best thing you could be doing. It is easy to want to take the immediate payout of being right or proving a HiPPO right, but at the end of the day, you will always be better off focusing on the disciplines that make you successful. The best thing testing can do is challenge or eliminate human biases. It can create a level playing field so that you can correctly know the causal value of an item and measure the EFFICIENCY of improving it. It allows all ideas to be measured against each other, and, more importantly, it stops groups from spending hours talking about ways to improve things that don’t matter, while teaching what does matter and how best to impact it. It doesn’t eliminate the value of good input; it just filters and refines it so that it is spent on the right things, not just on what person A thinks is most important.