The supreme, nirvanic level of the Adobe Target Maturity Model, the litmus test our consultants and customers use to gauge the relative maturity and next steps of their optimization programs, is called embedded optimization culture. At this stage, testing and optimization becomes a well-oiled machine: the ability to quickly determine the best-performing content variations for distinct segments (or where opportunities do not exist), based on real-time customer data, creates a culture of testing and optimization that drives content, design, and application decisions and targeting prior to full-scale release, maximizing return on investment (ROI) and mitigating risk.

Think of it as quality assurance (QA) based on preference data derived from real customer responses. We have many power users who have come close to creating a truly embedded optimization culture, but only after fine-tuning a rock-solid testing and optimization process and gaining internal buy-in by evangelizing successes and highlighting the rich, granular reporting data that drives them. We call these users “optimization heroes,” not because they wear brightly colored spandex (at least not in the workplace), but because they drive success through logical decision making. They implement an optimization program oriented toward revenue gain, one that improves the business dramatically relative to the budget invested in optimization.

However, many companies launching optimization programs still find themselves mired in the “incidental testing quicksand,” the first step in the maturity model. I call it “quicksand” because it’s difficult to gain program footing when there are obstacles in the testing process. These obstacles can take the form of content that is not easily generated or adjusted into test variations, timelines that are not met or maintained, team members or executives who have not bought into the importance of effective test design, or simply insufficient experience or strategy to steer the optimization process in the right direction.

I’m thrilled to say that the data synchronization enabled across Adobe Marketing Cloud solutions by the master marketing profile, combined with the elimination of variance between Analytics and Target for Analytics-level analysis and confidence in Adobe Target reporting, has made the iterative testing process easier, faster, and more definitive for many of our customers, while providing a deeper degree of segmentation and preference qualification. Our consultants, whose experience in this space dates back to our pioneering Offermatica days, are leveraging this for even greater returns and insights within even a single test. This is powerful stuff: when data is comparable, it can be aggregated into a unified, actionable profile across solutions.

But what about the instances when test hypotheses are not so clear? Or when you’re not quite sure which trends within all of that Analytics data point to opportunities to test and optimize personalized content? I’m sure many marketers dream of a machine where you just push a button, the data is churned and analyzed, and out comes a list of the most predictive variables and segment opportunities along with the relative performance of content variations.

This is a nice dream, and one that we at Adobe Target share. It was from that dream that our residual variance model, formerly known as Test&Target 1:1 and TouchClarity, was born. It brought automated targeting to the individual at a time when today’s rules-based targeting methods were not as clear. It looked at all variables captured from an individual visitor and used a self-optimizing modeling system to identify the most predictive variables and select the best possible option from the content variations fed to it. This allowed distinct variations of content (or marketing directions) to be judged and defined by true real-time customer data, not by the gut instincts of a creative team or executive. A truly customer-centric approach. And the beauty of the modeling system continues under the hood, where the model constantly self-optimizes and tests itself to ensure that its own calculations remain as accurate as possible.
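To make the idea of a self-optimizing content selector concrete, here is a minimal toy sketch in Python. It is not Adobe Target’s actual algorithm; the class name, variation names, and the simple epsilon-greedy strategy are all illustrative assumptions, chosen only to show how a system can serve variations, learn from real visitor responses, and keep testing itself on an ongoing basis.

```python
import random

class SelfOptimizingSelector:
    """Toy epsilon-greedy selector: serves content variations, learns
    from observed conversions, and reserves a small share of traffic
    for exploration so the model keeps testing its own estimates."""

    def __init__(self, variations, epsilon=0.1):
        self.epsilon = epsilon  # fraction of traffic used to keep exploring
        self.stats = {v: {"served": 0, "converted": 0} for v in variations}

    def choose(self):
        # Explore occasionally; otherwise exploit the best-known variation.
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))
        return max(self.stats, key=self._rate)

    def record(self, variation, converted):
        # Feed a real visitor outcome back into the model.
        s = self.stats[variation]
        s["served"] += 1
        s["converted"] += int(converted)

    def _rate(self, v):
        s = self.stats[v]
        return s["converted"] / s["served"] if s["served"] else 0.0

# Illustrative simulation: "hero_b" truly converts better, and the
# selector discovers that from responses alone, not from gut instinct.
random.seed(0)
selector = SelfOptimizingSelector(["hero_a", "hero_b"])
true_rates = {"hero_a": 0.05, "hero_b": 0.15}  # hypothetical ground truth
for _ in range(5000):
    v = selector.choose()
    selector.record(v, random.random() < true_rates[v])
```

The point of the sketch is the feedback loop: every decision is driven by accumulated customer data, and the exploration slice is the model “testing itself” so its estimates stay honest.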

With the release of Adobe Target Premium on June 25, we’ve taken this concept of automated personalization to the next level. Residual variance is an effective approach to statistical determination, but it is only a single factor in the greater equation. What if we include more? What if we include the ability to test algorithms against each other? Say you have an aspiring data scientist on staff who has tinkered with optimization and built a new algorithm. Let’s test it against the existing algorithms to determine what value it brings to the table. Developing and testing new algorithms can provide new insights that help drive ROI.
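Testing one algorithm against another is, at its core, a traffic split with a shared yardstick. The sketch below is a hypothetical harness, not a real product API: the function name, the 50/50 split, and the conversion rates are all assumptions made for illustration, and a production comparison would add significance testing on live traffic.

```python
import random

def compare_policies(policy_a, policy_b, true_rates, visitors=10000, seed=42):
    """Toy harness: split simulated traffic 50/50 between two targeting
    algorithms and report the conversion rate each one achieves."""
    rng = random.Random(seed)
    results = {"a": [0, 0], "b": [0, 0]}  # policy -> [served, converted]
    for i in range(visitors):
        key, policy = ("a", policy_a) if i % 2 == 0 else ("b", policy_b)
        variation = policy(rng)             # the algorithm picks the content
        results[key][0] += 1
        results[key][1] += int(rng.random() < true_rates[variation])
    return {k: converted / served for k, (served, converted) in results.items()}

# Two deliberately simple "algorithms": the incumbent always serves the
# control; the data scientist's challenger always serves its pick.
rates = {"control": 0.05, "challenger": 0.12}  # hypothetical ground truth
outcome = compare_policies(
    lambda rng: "control",
    lambda rng: "challenger",
    rates,
)
```

Because both policies are measured against the same visitor stream and the same success metric, the comparison tells you directly what value the new algorithm brings to the table.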

For example, we’re including an additional algorithm in the initial release based on what is commonly known as a random forest approach. It’s speedy and highly effective, and it’s a great example of how important it is to build out an arsenal of automation, so that a) it becomes easier to find the right opportunities and make the most effective content decisions, and b) it’s much easier to adopt automation and test or qualify that it is performing the way it should in comparison to alternatives. Finally, the reporting has been redesigned from our former Insights reports to be more visual and easier to consume, surfacing the most valuable, predictive variables to leverage for maximum return. We have a team of data scientists working with Adobe Technology Labs to fine-tune automation, deliver best practices, and create opportunities to test it against other methods.
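What “surfacing the most predictive variables” means can be shown with a deliberately simplified ranking. The sketch below is not a random forest (a real one builds many randomized decision trees and averages their importance scores); it just scores each visitor variable by how widely conversion rates spread across its values, which is the same intuition in miniature. The variable names and data are hypothetical.

```python
from collections import defaultdict

def variable_importance(visitors, outcome_key="converted"):
    """Toy ranking of visitor variables by predictive power: for each
    variable, measure how much the conversion rate spreads across its
    values. A larger spread means the variable separates converters
    from non-converters more sharply."""
    scores = {}
    variables = {k for v in visitors for k in v if k != outcome_key}
    for var in variables:
        groups = defaultdict(lambda: [0, 0])  # value -> [visitors, conversions]
        for v in visitors:
            g = groups[v.get(var)]
            g[0] += 1
            g[1] += v[outcome_key]
        rates = [conv / n for n, conv in groups.values() if n]
        scores[var] = max(rates) - min(rates)  # spread = predictive signal
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical data where device type fully predicts conversion
# and geography carries no signal at all.
visitors = (
    [{"device": "mobile", "geo": "US", "converted": 1}] * 30
    + [{"device": "mobile", "geo": "EU", "converted": 1}] * 30
    + [{"device": "desktop", "geo": "US", "converted": 0}] * 30
    + [{"device": "desktop", "geo": "EU", "converted": 0}] * 30
)
ranked = variable_importance(visitors)
```

Here `ranked` puts `device` first with the maximum spread and `geo` last with none, which is exactly the kind of ordering the redesigned reporting is meant to make visual and easy to consume.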

With a robust automation offering to handle the time-consuming, and often unclear, instances of testing and optimization, marketers and optimization owners are freed to focus on clear testing and rules-based targeting opportunities. This lets them uncover more within the automation reporting, quickly earn the right to wear spandex during their off-hours, and perhaps even be whispered about around the water cooler as an “optimization hero.”