Return on investment, or ROI, is one of the most critical yet challenging metrics to measure during an optimization campaign. ROI not only validates the results of your testing; it also proves the value of the investment in the optimization program to the stakeholders in the business. However, calculating ROI from testing activities is often difficult, and we frequently “leave money on the table” by failing to account for all of the forms of ROI that optimization produces.
Although ROI is most often calculated as a financial return on a budgetary investment made by the business, several other factors play into the overall economic benefit of an optimization program. To truly understand the impact of your program on your company’s bottom line, it is important to take into account all of the benefits that result from both the efficiency and the effectiveness of the program, as well as the increases it drives in customer engagement, conversion, and loyalty.
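At its core, the financial calculation is simple: net gain divided by the cost of the investment. The sketch below illustrates it with entirely hypothetical figures (the dollar amounts and the `roi` helper are illustrative, not drawn from any real program):

```python
def roi(total_return: float, investment: float) -> float:
    """Basic ROI: net gain divided by cost, expressed as a ratio."""
    return (total_return - investment) / investment

# Hypothetical program: $50,000 invested in tooling and staff time,
# $80,000 in incremental revenue attributed to winning test variants.
print(f"ROI: {roi(80_000, 50_000):.0%}")  # → ROI: 60%
```

The sections that follow argue that this headline number understates the true return, because it omits efficiency gains, support-cost savings, and risk mitigation.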
A cost-benefit analysis can not only provide a valuable way to prioritize your testing efforts (i.e., rating test ideas based on what will take the shortest amount of time to execute with the greatest return), it can also help uncover ROI in terms of cost savings relative to employee time. For instance, use of products such as Adobe Target reduces the amount of time it takes to go from test design to execution to analysis, reducing the investment needed to scale and support your testing efforts. Being able to filter and drill down deeply into the results by key segments and success metrics also helps you identify more quickly which content resonates with consumers, saving creative resources and time. Not only are these savings in hours and expense important to evaluate and add as ROI to your reporting, but they also free employees to spend more time on other projects, or on improving the program as a whole.
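The prioritization rule described above (shortest time to execute with the greatest return) can be sketched as a simple return-per-hour score. The test ideas, hours, and lift estimates here are invented purely for illustration:

```python
# Hypothetical test ideas: (name, estimated hours of effort,
# expected monthly revenue lift in dollars).
ideas = [
    ("Homepage hero copy", 8, 2_400),
    ("Checkout button color", 2, 500),
    ("Full funnel redesign", 120, 12_000),
]

# Rank ideas by expected return per hour of effort, highest first:
# the quickest wins with the greatest payoff float to the top.
ranked = sorted(ideas, key=lambda idea: idea[2] / idea[1], reverse=True)
for name, hours, lift in ranked:
    print(f"{name}: ${lift / hours:,.0f} per hour of effort")
```

A real prioritization would weigh additional factors (traffic, risk, strategic value), but even a crude score like this makes the cost-benefit trade-off explicit.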
Another source of ROI generated from optimization is savings in customer service, call centers, and other offline processes that support the online business. For example, properly designed tests that improve the customer experience mean less time spent by a call center or support team responding to customers’ needs (or more accurately, complaints and confusion). Smoother interfaces mean happier users, and often result in increased sales. Those customers now spend less time calling support to work around bugs or to clarify content on poorly designed Web and mobile experiences, and more time consuming your content, engaging with your brand, and buying your products. This is a win-win for the company, driving revenue while reducing overhead costs.
A final way to look at ROI is in terms of risk mitigation. A new website, mobile site, or mobile app is a risky and heavy investment. Risk mitigation is of enormous interest to executives because it buffers against potentially bad decisions, ensuring that the potential for lost conversion is kept to an acceptable level. In fact, this can be an important learning tool when you’re educating executives on why they should trust the data over instinct. When a test result shows that one of their initiatives needed adjustment to optimize the customer experience, quantifying the loss the business might have suffered helps to prove why testing your assumptions, however educated, is important and pragmatic. As the optimization program grows, the business matures and becomes more trusting of the test results; optimization becomes an embedded part of the culture and helps to drive business strategy by validating different approaches on a smaller sample before they are pushed to visitors as a whole. This confidence grows as risk mitigation is measured and the business can be confident that the content it puts out on the Web is the most effective content for its current goals. This goes a long way toward growing what we call a culture of optimization within the business, where all people and processes are at least in part driven by the results of optimization.
So why then is ROI such an elusive target? One reason is that many providers of optimization software and services are afraid to expose ROI within their tools. In a sense, ROI is the final determination of the success of a program, and the software it runs on, and exposure of potential shortcomings in the software is a risk for the providers.
Another reason many programs don’t include concrete ROI estimates is that they do not employ valid methodologies for calculating ROI. Often ROI is based on a series of projections and assumptions rather than calculated in a more scientific manner; the unreliability of those projections therefore discourages developers from including the functionality within the software.
With Adobe Target, we seek to expose the ROI metric with more rigor than other programs. We allow for the design of concrete metrics within the test design that can then be assigned a certain value and monitored throughout the testing process. Users can then go back and see what their ROI was using their chosen content as well as what their potential ROI would have been if they had gone with the best-performing content as defined by the tests. This creates a process around ROI that is tangible and measurable, building confidence in the tools and the results, and maturing the optimization program and the business as a whole.
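The comparison described above, realized ROI from the content you chose versus the potential ROI of the best-performing variant, can be sketched as follows. The conversion rates, visitor counts, and costs are hypothetical, and this is an illustrative calculation, not Adobe Target’s internal methodology:

```python
def campaign_roi(conversions: float, value_per_conversion: float, cost: float) -> float:
    """ROI of a campaign: net revenue over cost, as a ratio."""
    return (conversions * value_per_conversion - cost) / cost

# Hypothetical test: 10,000 visitors per variant, $40 in value per
# conversion, $5,000 total cost to run and support the test.
visitors, value, cost = 10_000, 40, 5_000
chosen_rate, best_rate = 0.030, 0.042  # chosen content vs. best performer

realized = campaign_roi(visitors * chosen_rate, value, cost)
potential = campaign_roi(visitors * best_rate, value, cost)
print(f"Realized ROI: {realized:.0%}, potential ROI: {potential:.0%}")
```

The gap between the two numbers is the measurable value of letting the test, rather than instinct, pick the winning content.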
Accurate and reliable ROI metrics are, in a sense, the holy grail of test optimization. The ability of products like Adobe Target to enable a transparent assessment of ROI results in a more efficient and profitable business. With new interfaces and improved reporting, the results of your ROI analysis can be shared more easily with your stakeholders, further promoting the benefits of your optimization efforts and increasing your own ROI for the time spent designing and implementing your tests.