Last week, I started a series of posts describing five tips for transforming your optimization program from a sideshow into part of the main event. In the first post of that series, I presented the first tip that my colleague in Adobe Digital Strategy consulting, Debra Adams, shared with me: making the business case. I also mentioned two big challenges you’ll face in getting stakeholders to buy into your business case and tests — trusting the data and trusting the design of your individual tests.
These next two tips give concrete steps for building that trust.
Tip 2: Overcome the Data Dilemma.
Today, everybody justifies why their feature or product is important with data. Unfortunately, not all data sources offer trustworthy or high-quality data. People know this, and are understandably dubious of data-backed claims. Take these actions to get buy-in for optimization by demonstrating that your business case and test data are credible.
- Identify a spokesperson who is experienced with data. This person probably won’t be a developer — they usually care about features and performance, not data. Good candidates are typically data analysts, data scientists, or statisticians.
- Ensure your data tool is trustworthy and set up properly. Your data-savvy spokesperson can explain why the tool and its data can be trusted. Product owners, executives, and other stakeholders will need to agree that the tool provides accurate, credible data.
- Track the right metrics. These metrics should clearly tie to the business bottom line, like leads for B2B, revenue for retail, applications completed for a bank, or page views for a media company. Avoid metrics that don’t tie directly to business value, like clicks through to the home page.
- Understand your website’s various business KPIs. Business people have specific metrics they care about — speak their language and relate to their metrics.
- Track only the essential metrics. Use those same metrics across tests to allow apples-to-apples comparisons to determine what moves the needle.
- Show your test results in Analytics. Showing your results with the context of Analytics helps demonstrate that you’re not overlooking any unexpected influence.
- Use unified profile data that ideally spans your offline and online channels. This lets you fully evaluate and explain why certain data is or isn’t relevant to your strategy, and how that influences the tests you propose.
- Make sure the test results are clear, pertinent, and valuable to the business. This avoids creating so much data that results seem contradictory.
Tip 3: Do Valid Testing.
When someone pokes a hole in your test design or results, the organization may lose faith in that test — perhaps even in your entire program. Running a controlled experiment that produces valid results requires following established rules of statistical-test design. Follow these important rules to design a valid test:
- Identify the independent and dependent variables in your test — in other words, the elements you change and the visitor responses you measure. If you change too many elements within a test variation, you can’t isolate which element elicited the visitor response.
- Run the test long enough for results to be statistically valid. Use a sample size calculator to determine how many visitors must participate in the test and the likely time to reach that traffic level. There’s no shortcut here — although you can use the Auto-Allocate feature of Adobe Target to apply an algorithm that automatically diverts more traffic to a winning experience during the test. This lets you reach statistical confidence faster and increase conversion lift. You can only declare a valid winner after the test has experienced enough traffic to reach statistical significance. Only then can you know that the results you’re seeing in the test are consistent and trustworthy.
- Don’t include more test experiences than your site traffic can support. Otherwise, individual experiences won’t get enough traffic to reach significance, or you’ll have to run your test for so long that other factors, like seasonality, may invalidate the results.
- Consider if your test variations have the potential to really move the needle. Changing a single word in a page headline probably isn’t a valuable test. Make your test experience differences bold enough to elicit a measurable user response.
- Track the metrics that indicate true business success. As mentioned in my previous post, tracking simple clicks through on the home page is typically meaningless. Identify and track the business metrics that make or save money for the business. Report on your test results using Adobe Analytics as the reporting source to provide even more context around the test’s impact.
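To make the sample-size rule concrete, here is a minimal sketch of the math behind a typical sample size calculator for an A/B test comparing two conversion rates (a standard two-proportion power calculation). The function name, the 3% baseline conversion rate, the 10% relative lift, and the 20,000-visitors-per-day figure are all hypothetical examples, not values from this post or from any Adobe tool:

```python
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, relative_lift, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect the given relative lift
    with a two-sided two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Hypothetical scenario: 3% baseline conversion, hoping to detect a 10% relative lift
n = sample_size_per_variant(0.03, 0.10)
# With ~20,000 eligible visitors/day split across control and one variant:
days = ceil(2 * n / 20000)
print(f"{n} visitors per variant, roughly {days} days of traffic")
```

Note how quickly the required sample grows: detecting a small lift on a low baseline rate takes tens of thousands of visitors per variant, which is why underpowered tests with too many experiences drag on for weeks. This is the arithmetic a sample size calculator does for you; the point is to run it before launching, not after.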
Next up: Get that Seat at the Right Time — and Maintain It.
Once you’ve earned stakeholder trust in your data and your test design, you’ve likely earned your seat at the table. Realize that the timing of getting your seat is important — you need that seat before the decisions have been made. But you also need to do a few things to keep that seat.
In my final post, I’ll cover the final two tips, Tip 4 (Get your seat before the big decisions are made), and Tip 5 (Reinforce the value of your testing program).