The supreme, nirvanic level of the Adobe Target Maturity Model, the litmus test our consultants and customers use to gauge the relative maturity of an optimization program and the next steps in its growth, is called embedded optimization culture. This is where testing and optimization become a well-oiled machine. Using real-time customer data, the program quickly determines the best-performing content variations for distinct segments (or confirms that an opportunity does not exist), and that ability creates a culture in which testing and optimization drive content, design, application, and targeting decisions prior to full-scale release, maximizing return on investment (ROI) and mitigating risk.

Think of it as quality assurance (QA) based on preference data derived from real customer response. We have many power users who have come close to creating a truly embedded optimization culture, but only after fine-tuning a rock-solid testing and optimization process and gaining internal buy-in by evangelizing successes and highlighting the rich, granular reporting data driving them. We call these people “optimization heroes,” not because they wear brightly colored spandex (at least not in the workplace), but because they drive success through logical decision making. They implement an optimization program aimed squarely at revenue gain, one that improves the business far out of proportion to the budget invested in optimization.

However, many companies launching optimization programs still find themselves mired in the “incidental testing quicksand,” the first step in the maturity model. I call it “quicksand” because it’s difficult to get your program’s footing when there are obstacles in the testing process. These obstacles can take the form of content that cannot easily be generated or adapted into test variations, timelines that are not met or maintained, team members or executives who have not bought into the importance of effective test design in gaining the desired results, or simply a lack of the experience or strategy needed to steer your optimization process in the right direction.

I’m thrilled to say that the data synchronization the master marketing profile enables across Adobe Marketing Cloud solutions, combined with the elimination of the variance between Analytics and Target, has made the iterative testing process easier, faster, and more definitive for many of our customers. Customers now get Analytics-level analysis and confidence in Adobe Target reporting, along with a deeper degree of segmentation and preference qualification. Our consultants, whose experience in this space dates back to our pioneering Offermatica days, are leveraging this for even greater returns and insights within even a single test. This is powerful stuff: when data is comparable, it can be aggregated into a unified, actionable profile across solutions.

But what about the instances when test hypotheses are not so clear? Or when you’re not quite sure which trends within all of that Analytics data point to opportunities to test and optimize personalized content? I’m sure many marketers dream about a machine where you can just push a button, the data and the numbers are churned and analyzed, and out comes a list of the most predictive variables and segment opportunities along with the relative performance of content variations.

This is a nice dream, and one that we at Adobe Target share. It was from that dream that our residual variance model, formerly known as Test&Target 1:1 (and, before that, TouchClarity), was born. It brought automated targeting to the individual at a time when today’s rules-based targeting methods were not as clear. It looked at all the variables captured from an individual visitor and used a self-optimizing modeling system to identify the most predictive variables and pick the best of the content variations fed to it. Distinct variations of content (or marketing directions) could thus be judged and defined by true real-time customer data, not by the gut instincts of a creative team or executive: a truly customer-centric approach. And the beauty of the modeling system continues under the hood, where the model constantly tests and re-optimizes itself to keep its own calculations as accurate as possible on an ongoing basis.
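To make the self-optimizing loop above a little more concrete, here is a heavily simplified sketch in the spirit of an epsilon-greedy bandit. This is not Adobe Target’s actual model; the variation names, conversion rates, and class name are invented for illustration:

```python
import random

class EpsilonGreedySelector:
    """Toy self-optimizing content selector (an epsilon-greedy bandit).

    Not Adobe Target's actual model: it simply serves the variation
    with the best observed conversion rate most of the time, while
    continuing to explore so the model keeps testing itself.
    """

    def __init__(self, variations, epsilon=0.1, seed=42):
        self.variations = list(variations)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.shows = {v: 0 for v in self.variations}
        self.wins = {v: 0 for v in self.variations}

    def conversion_rate(self, variation):
        shown = self.shows[variation]
        return self.wins[variation] / shown if shown else 0.0

    def choose(self):
        # Explore a random variation a small fraction of the time...
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.variations)
        # ...otherwise exploit the best observed conversion rate.
        return max(self.variations, key=self.conversion_rate)

    def record(self, variation, converted):
        self.shows[variation] += 1
        self.wins[variation] += int(converted)

# Invented ground truth: variation "B" truly converts better.
true_rates = {"A": 0.03, "B": 0.08}
selector = EpsilonGreedySelector(true_rates)
world = random.Random(7)
for _ in range(5000):
    variation = selector.choose()
    selector.record(variation, world.random() < true_rates[variation])
print({v: round(selector.conversion_rate(v), 3) for v in selector.variations})
```

Given enough traffic, the winning variation soaks up most of the impressions while the small exploration budget keeps re-checking the losers, which is the “constantly testing itself” behavior described above in miniature.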

With the release of Adobe Target Premium on June 25, we’ve taken this concept of automated personalization to the next level. Whereas residual variance is an effective approach to statistical determination, it is only a single factor in the greater equation. What if we include more? What if we include the ability to test algorithms against each other? Let’s say you have an aspiring data scientist on your staff who has tinkered with optimization and built a new algorithm. Let’s test it against the existing algorithms to determine what value it can bring to the table. Development and testing of new algorithms can provide new insights that help drive ROI.
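To make the idea of testing algorithms against each other concrete, here is a hedged sketch: two hypothetical targeting algorithms are replayed against the same simulated visitor stream, with identical random draws so the comparison is paired rather than noisy. All algorithm names, segments, and conversion rates are invented:

```python
import random

# Hypothetical setup: two competing targeting algorithms are replayed
# against the same simulated visitor stream and scored on conversions.

def always_hero_banner(visitor):
    return "hero_banner"

def segment_by_loyalty(visitor):
    return "loyalty_offer" if visitor["is_member"] else "hero_banner"

# Simulated ground truth: loyalty members respond much better to the
# loyalty offer; everyone responds the same to the hero banner.
RATES = {
    ("hero_banner", False): 0.04, ("hero_banner", True): 0.04,
    ("loyalty_offer", False): 0.02, ("loyalty_offer", True): 0.10,
}

def score(algorithm, visitors, rng):
    # Replay the visitor stream and count simulated conversions.
    conversions = 0
    for visitor in visitors:
        content = algorithm(visitor)
        conversions += rng.random() < RATES[(content, visitor["is_member"])]
    return conversions

traffic_rng = random.Random(1)
visitors = [{"is_member": traffic_rng.random() < 0.5} for _ in range(10000)]
# Seed the conversion draws identically for both algorithms so any
# difference in score reflects the algorithms, not random luck.
for algorithm in (always_hero_banner, segment_by_loyalty):
    print(algorithm.__name__, score(algorithm, visitors, random.Random(2)))
```

The same harness would let that aspiring data scientist drop in a third function and see immediately whether it beats the incumbents on the replayed traffic.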

For example, the initial release includes an additional algorithm based on what is commonly known as a random forest approach to determination. It’s speedy and highly effective, and it is a great example of how important it is to build out an arsenal of automation so that a) it becomes easier to get to the right opportunities and the most effective content decisions, and b) it’s much easier to adopt automation and to verify that it is performing the way it should in comparison to alternatives. Finally, the reporting has been redesigned from our former Insights reports to be more visual and easier to consume, surfacing the most valuable, predictive variables to leverage for maximum return. We have a team of data scientists working with Adobe Technology Labs to fine-tune automation, deliver best practices, and provide opportunities to test it against other methods.
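As a rough illustration of the random forest idea (and emphatically not Adobe Target’s implementation), here is a toy forest of bagged one-level decision trees that votes on which content variation a visitor is likely to prefer. The features, labels, and training data are all invented:

```python
import random
from collections import Counter

def train_stump(rows, rng):
    # Pick one feature at random, then find its best threshold on the sample.
    feature = rng.randrange(len(rows[0][0]))
    best = None
    for threshold in sorted({f[feature] for f, _ in rows}):
        left = [label for f, label in rows if f[feature] <= threshold]
        right = [label for f, label in rows if f[feature] > threshold]
        left_label = Counter(left).most_common(1)[0][0] if left else right[0]
        right_label = Counter(right).most_common(1)[0][0] if right else left[0]
        errors = (sum(l != left_label for l in left)
                  + sum(r != right_label for r in right))
        if best is None or errors < best[0]:
            best = (errors, feature, threshold, left_label, right_label)
    return best[1:]

def predict_stump(stump, features):
    feature, threshold, left_label, right_label = stump
    return left_label if features[feature] <= threshold else right_label

def train_forest(rows, n_trees=25, seed=0):
    rng = random.Random(seed)
    # Each tree sees a bootstrap resample and a randomly chosen feature.
    return [train_stump([rng.choice(rows) for _ in rows], rng)
            for _ in range(n_trees)]

def predict_forest(forest, features):
    votes = Counter(predict_stump(stump, features) for stump in forest)
    return votes.most_common(1)[0][0]

# Invented training data: frequent, high-cart-value visitors prefer
# "offer_B"; infrequent, low-value visitors prefer "offer_A".
training = ([((visits, cart), "offer_B")
             for visits in (5, 6, 7) for cart in (80, 90, 100)] +
            [((visits, cart), "offer_A")
             for visits in (1, 2) for cart in (10, 20, 30)])
forest = train_forest(training)
print(predict_forest(forest, (6, 95)))   # a frequent, high-value visitor
print(predict_forest(forest, (1, 15)))   # an infrequent, low-value visitor
```

The “speedy” part comes from the trees being independent: they can be trained and queried in parallel, and the majority vote smooths out the noise any single tree picks up from its bootstrap sample.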

With a robust automation offering to handle the time-consuming, and often unclear, instances of testing and optimization, marketers and optimization owners are freed to focus on clear testing and rules-based targeting opportunities. This lets them uncover more within the automation reporting, quickly earn the right to wear spandex during their off-hours, and perhaps even be whispered about around the water cooler as an “optimization hero.”