After your optimization program has been running for a while, there will naturally come times when you get to share the amazing successes that running your program correctly has generated. Oftentimes groups are so happy to get to brag that they don’t realize they are causing the program long-term problems.
There are many keys to sharing results correctly, but the most important thing to remember is why you are sharing them. Yes, you are talking to others about how great you are, but ultimately results are not about the past; they are about the future. Should others invest in the program? Should they expand where and what you optimize? How do others think about and interact with optimization, and most importantly, why should they listen to you and your team when it comes to how and why you run certain tests? The critical moments come not from the sharing of results, but from the framing of why and how you got those results, and what it can do for other parts of the organization. Focus on the wrong parts, and you are asking for far more trouble than most can imagine.
Some of the worst moments for programs come after big results presentations, where they have gotten buy-in from others and a massive expansion of the program, but fail to share the right message and to help others understand that testing is often not what they think it is. These moments inevitably lead to large resource drains, negative impressions, and massive time sinks for the original testing group, leaving them frustrated and generating less total revenue than before the influx of resources. Looking back 6 and 12 months later, you can easily see the moment that things went south.
With all of those concerns in mind, here are the keys to successfully sharing results within your organization.
DO – Focus on what you got by proving assumptions wrong
It is so fun to talk about getting an 8% lift, or a 20% lift across multiple tests, but oftentimes the lift is secondary to the discipline that led you to it. If you are testing to find the optimal use of resources, then it is inevitable that you will find many times when popular assumptions have been proven wrong. As you share your results, it is important that this is the primary part of the message. It is not about “we got a 15% lift“; it is about “everyone wanted to do X, which would have generated a 3% lift, but we found that of these 5 feasible alternatives, doing Y was dramatically better and generated a 15% lift, 12 points more than we would have gotten if all we did was test what people thought was going to win.”
DON’T – Report test results as a single revenue number
It is so fun and easy to report results as, “the test generated 6.2 million in additional revenue.” The problem is that there is absolutely no way to know that specific figure with accuracy, and you will lose trust if the P&L later fails to show that exact gain. I understand how impressive it is to point to a single number and how much credit it can get you, but ultimately it is far more damaging than the temporary good it might generate.
Instead, even if we ignore all the real-world problems with confidence, it is important to understand that confidence and most such measures only express the likelihood of a pattern, not the actual outcome. If I have a 10% lift at 96% confidence, that does not mean I am 96% confident of getting a 10% lift, only 96% confident that the measured experience will beat control. Confidence intervals can also be tricky because of the many assumptions behind the Gaussian bell curves they are based on.
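A quick Monte Carlo sketch makes the distinction concrete. All the numbers here are made up for illustration (a hypothetical 10% conversion rate, a 10% true lift, and arbitrary sample sizes): across many replays of the same test, the variant beats control far more often than the measured lift lands anywhere near the headline 10% figure.

```python
import random

# Hypothetical figures for illustration only.
random.seed(42)
TRIALS, VISITORS = 500, 2000
BASE_RATE, TRUE_LIFT = 0.10, 0.10  # 10% conversion rate, 10% true lift

beats_control = sees_ten_pct = 0
for _ in range(TRIALS):
    # Simulate one A/B test: measured conversion rates for control and variant.
    a = sum(random.random() < BASE_RATE for _ in range(VISITORS)) / VISITORS
    b = sum(random.random() < BASE_RATE * (1 + TRUE_LIFT) for _ in range(VISITORS)) / VISITORS
    if b > a:
        beats_control += 1
    if a > 0 and 0.08 <= (b - a) / a <= 0.12:  # measured lift landed near 10%
        sees_ten_pct += 1

print(f"Variant beat control in {beats_control / TRIALS:.0%} of replays")
print(f"Measured lift fell near 10% in only {sees_ten_pct / TRIALS:.0%} of replays")
```

“Confident the variant wins” and “confident in the specific number” are very different claims, and only the first is what the confidence figure supports.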
Instead, report tests as a range, based on a preset band. What that band is is somewhat arbitrary, as long as it is large enough to convey the wide range of possible outcomes. If I have not done deep analysis of past results, I will often report test results in a 50% – 200% range, so that the 6.2 million becomes an expected outcome of 3.1 to 12.4 million dollars. Ultimately the band is arbitrary, though there are ways to look back at results over time and derive an expected range. Express everything in a large but relevant range and you will avoid the massive credibility problems that false precision creates.
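The arithmetic of that band is trivial, which is part of the point; a minimal sketch, using the 50% – 200% default and the 6.2 million example from above:

```python
def report_range(point_estimate, low_factor=0.5, high_factor=2.0):
    """Convert a single revenue estimate into a reporting range.

    The 50%-200% band is an arbitrary default; tighten it only after
    comparing past projections against actual outcomes.
    """
    return point_estimate * low_factor, point_estimate * high_factor

low, high = report_range(6_200_000)
print(f"Expected outcome: {low / 1e6:.1f} to {high / 1e6:.1f} million")
# → Expected outcome: 3.1 to 12.4 million
```

The value is not in the math but in the habit: every number that leaves the team goes out as a range, never as a point estimate.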
DO – Report all tests based on revenue impact
While you can’t report an absolute number, that does not mean you should not report the fiscal impact of a test. Translating all tests to a revenue figure gives you the ability to express your efforts as they impact the bottom line, while also giving you the ability to rationally compare results amongst tests. Being able to look at tests as having bottom-line impact, or not, allows others to see the scale of change and the efficiencies that testing can bring to the rest of the organization. Focusing on other things like clicks, funnels, opinions, and the like will distract from the core message and devalue current and future efforts.
Even if you are not a retail site, you can translate leads or page views to an average value or a CPM. Revenue also serves the purpose of making you evaluate your single success metric to ensure that it is tied to the purpose of your site and that you are not getting caught up on side goals that do not impact the bottom line.
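Those translations are simple multiplications; a sketch with hypothetical per-lead values and CPM figures (neither comes from the article):

```python
def leads_to_revenue(leads, avg_lead_value):
    # Translate a lead count into a revenue figure via average lead value.
    return leads * avg_lead_value

def pageviews_to_revenue(pageviews, cpm):
    # CPM is revenue per 1,000 page views.
    return pageviews / 1000 * cpm

# Hypothetical numbers for illustration only.
print(leads_to_revenue(500, 120.0))           # 500 leads at $120 each → 60000.0
print(pageviews_to_revenue(2_000_000, 4.50))  # 2M views at a $4.50 CPM → 9000.0
```

The specific per-lead value matters less than applying the same conversion consistently, so tests remain comparable to one another.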
DON’T – Forget that you are measuring gross revenue, not net revenue
Except in rare circumstances, most groups end up measuring gross revenue when it comes to impact on the business. While this makes numbers seem much larger than they really are, it also leads groups to overestimate their impact on the business as a whole. If you cannot express impact in terms of net revenue generation, at least make clear what numbers and assumptions you are using and what you expect the entire program to deliver to the bottom line. Nothing kills credibility like numbers that no rational executive can believe.
DO – Report on the scale of impact of various tests
So much is missed if we do not look at patterns across tests. One of the critical things for groups to understand is that lift by itself does not tell you revenue. A smaller population with a very large lift is often worth far less than a larger population with a much smaller lift. If you translate all tests to revenue, then you can easily see where you have been able to generate the most revenue, not necessarily the most lift. You have a direct measure for saying that this type of test produces 4x the revenue of another type. This active data acquisition is what allows you to plan ahead and increase resource efficiency, and it becomes vital for the long-term growth of a program. Often the lessons learned here reshape how people look at the impact of various channels. This type of analysis also helps people start to understand the difference between revenue allocation and revenue generation.
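The small-population-big-lift trap is easy to show with arithmetic. In this sketch the populations, per-conversion values, and lifts are all invented for illustration: a 20% lift on a small page loses to a 2% lift on a large one.

```python
def revenue_impact(population, conversion_value, lift):
    # Revenue impact scales with the audience and value, not just the lift.
    return population * conversion_value * lift

# Hypothetical numbers: a big lift on a small page vs. a small lift on a large one.
small_page = revenue_impact(population=10_000, conversion_value=50.0, lift=0.20)
large_page = revenue_impact(population=500_000, conversion_value=50.0, lift=0.02)
print(small_page)  # → 100000.0
print(large_page)  # → 500000.0, 5x the revenue from a tenth of the lift
```

Ranking tests this way, rather than by headline lift, is what makes the 4x-style comparisons between test types possible.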
DON’T – Forget that the most valuable result from tests is not the lift
It is vital that over time you develop a deep causal understanding of your ability to influence various parts of the user experience, as well as various user groups, and what it costs to do so. While it is fun to talk about the revenue impact, knowing that in 4 out of 5 places changing content had little impact compared to spatial changes completely changes how other parts of the business operate. These lessons, about where you have been able to make an impact, how, and what it took to do it, can help shape entire product roadmaps and help drive exponential revenue generation in the future.
There is no better time to express that testing is not just a list of actions but an active acquisition of knowledge than when and how you talk about results. Failing to look at these patterns across tests, and failing to use them as a way to filter your other data, can lead to massively inefficient uses of resources. Your program is worth far more than the individual actions you take, so why would you allow others to overly focus on tests when it is the act of optimization that drives the largest value opportunities? Make the focus what you learned, how, why, and what the impact was, and you will be able to make others see what testing can really do for them.
There is no better way to see where a program is than how it communicates results. You can tell how efficient it has been, how it works with other groups, and most of all how much the personal ego of the people on both ends of the presentation gets in the way of real, meaningful results. If you think about and focus on the right parts of expressing results, you will be able to move forward and really change your organization. Nothing drives others to invest in and expand a program’s impact like showing that it improves every other part of the business. Focus on just the lift and just the numbers, and you are setting yourself and others up for failure.