You have finally gotten everyone to agree on the single success metric for your program, and thanks to your amazing leadership you have buy-in from product, senior management, IT, and design to test. You start to talk to each group: they have this amazing idea for a redesigned product page, they want to incorporate three new pieces of technology, they assure you they will make their decision off of RPV, but they really want to know who is interacting with each module, what the flow is, and how often people on iPhones come back to the page. You take all that information, and then you set out to tackle the job of putting together such a large test and getting everyone what they want…

You have just committed the third deadly sin of testing: you have ignored efficiency. The fundamental challenge of any program is to achieve the greatest return for the lowest cost, and by ignoring those factors you are allowing bad design to lower your possible outcome and dramatically increasing the cost of achieving results in time, in energy, and in how much harder it becomes to make the right decision. A successful program knows how it is going to act beforehand and filters everything through the lens of what will allow the best outcome, not what people want or what will make people happy.

You are allowing others to dictate the way things are tested, and you are only testing the giant things that someone else was already doing. It is extremely easy to just do what is asked of you, but it is equally important that you are doing the right things. Testing sounds easy to everyone, but the reality is that it is a very different way of thinking about actions, and it requires you to really focus on what you should and shouldn't do. If you do not have clear rules of action, and if you are not deconstructing ideas and taking the most efficient action, then you are never going to achieve the value you are capable of. The more you focus on the giant idea or the big promised redesign, the less likely you are to achieve results, because you are looking at the wrong end of the system.

If we care about efficiency, then what wins is irrelevant, and who came up with the idea even more so. What matters is how quickly we can measure as many feasible alternatives as possible and whether we can act on them in a meaningful way. It isn't about which idea is better; it is about not being happy with a 3% lift when you could have had 10% for half the effort. Focus as much energy as you can on making sure that you have as many great inputs to the system as possible, and make sure that the system does only the minimum amount of work necessary to make decisions. This is why a single success metric is so important, and why you measure alternatives against each other. This is why you need that great leadership, because you have to allow people to be better than they would otherwise be. This is why all those human biases that can ruin a program are important to combat, because each one makes it harder to get great inputs or to act decisively. This is why trusting a single great test idea from your agency or from some “expert” is a pointless endeavor, and why no test should ever be run with just one alternative.
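To make that concrete, here is a minimal sketch of what "measuring alternatives against each other" looks like in practice. Everything in it is hypothetical: the RPV numbers and variant names are made up for illustration. The point is simply that every alternative is ranked against the one success metric, not judged on whose idea it was.

```python
# Hypothetical results for one test with several alternatives, all measured
# against the single success metric (RPV, as in the scenario above).
# The goal is comparing alternatives against each other, not validating one idea.

results = {
    "control":           52.10,
    "redesigned page":   53.40,   # the "big idea" everyone wanted
    "simpler layout":    55.75,   # the alternative no one championed
    "reordered modules": 54.20,
}

baseline = results["control"]
for name, rpv in sorted(results.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:>18}: {rpv:6.2f} RPV ({(rpv / baseline - 1):+.1%} vs control)")
```

The big idea can still win, but it has to win on the metric, sitting next to alternatives that nobody championed.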

The worst part is that you will not see the direct impact, because you will be so caught up in the task of making so many moving parts fit together. You will only be measuring the things you do act on; all the missed opportunities will be lost to the graveyard of knowledge. A 10% lift that takes 6 months and 400 man-hours is worth a lot less than the same result that takes 2 weeks and 4 man-hours. To succeed, efficiency must always be the top priority in how you allocate resources, what you allow into a test, what you track in a test, how you test things, and of course what you test. You must make sure that every test is prioritized and set up in a way that delivers the greatest return for the lowest cost. You must make sure that you are challenging assumptions, and that you care not about what wins, but about how many different ideas go into the system.
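A rough back-of-the-envelope version of that comparison, using the hypothetical numbers above and treating efficiency crudely as lift per hour of effort:

```python
# Hypothetical comparison: the same 10% lift, bought at very different costs.
# "Lift per hour" is a deliberately crude efficiency proxy.

efforts = {
    "big redesign": {"lift": 0.10, "hours": 400, "weeks": 26},
    "simple test":  {"lift": 0.10, "hours": 4,   "weeks": 2},
}

for name, e in efforts.items():
    per_hour = e["lift"] / e["hours"]
    print(f"{name:>12}: {e['lift']:.0%} lift in {e['weeks']} weeks "
          f"and {e['hours']} hours -> {per_hour:.3%} lift per hour")

# The simple test is roughly 100x more efficient, and the months it
# frees up can be spent running the next dozen tests.
```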

Everything starts with educating people about what makes a good test. Work with people until they understand why you need multiple inputs and why testing isn't about validating their ideas. The system works when everyone works together and gets past their egos to achieve the greatest return. Great ideas can come from anywhere, and the act of thinking outside the box frees you up to try even greater ideas. Make them understand that it isn't about who is right, but about finding the best way to prove everyone wrong. In almost all cases, the greatest returns come from the things that no one thought of or that go against everything you ever thought mattered.

Equally, you must be educating people on how you act on data. Make them aware that "want to know" is not the same as "need to know." Make them understand that action is more important than comfort, and that the worst thing you can do is get caught up trying to answer inquiries that have nothing to do with the success of the site. It won't happen overnight, but it will happen if you make it your top priority and focus on that part over the simple execution of any idea. It will not always make people happy, and it will take them out of their comfort zone, but if you are not enforcing efficiency, then who will?

The small details still count for a lot and are often the hardest to change. The devil is in the details, as they say. Are you overcomplicating set-up for the sake of making people happy? Are you only testing things because your agency or a senior VP promises they will work? Do you have 40 metrics on a campaign just because someone might ask about them? Are you looking at segments so small that the cost of exploiting them would never be worth your time? Having the discipline to set up the test for what you need and nothing else greatly decreases the time, effort, and technical set-up of each test. It also greatly increases your ability to act in a meaningful and timely way after the test. Never add something to a test unless you are 100% sure of how it will be used and how it will change your actions whether the result is positive or negative. Never add something just to wait and see what you get. Optimization is a completely different discipline than analytics, and educating yourself and others on how to think and act is what will greatly increase the efficiency of each effort.

It is not uncommon for a simple test that would normally go live in an hour to be sidetracked for weeks or months by the need to track additional information that has little to no value. It is also not uncommon for massive, simple opportunities to be missed because you are not willing to explore, or not willing to take time away from the big projects that are at the top of everyone's mind. Just as common is a test with a clear result being debated needlessly for weeks or months before it is finally acted on, or in some cases never acted on at all. All of these happen because it is easier to make people happy than it is to do the right things. They happen when getting a test live becomes all we focus on and we forget to do the small things that ensure we are being efficient. If you frame everything with the need to be efficient, and you do the hard things that no one likes but everyone needs, then you will never have these problems. Never let this happen to you. Allowing these things to happen is a fundamental sin when it comes to achieving real lasting value with your testing program.
