Your testing and targeting solution is hungry. Its appetite for meaningful data is limitless and, like a teenager, the more relevant data it digests, the faster and more clearly it grows. Without rich data to crunch, it gets tired and cranky, spitting out assumptions based on whatever "average" analysis it can manage with limited out-of-the-box segments or rigid "black box" decisioning systems. These assumptions come from algorithms running on basic, variance-ridden first-party analytics data.

This low-data fuel, like low blood sugar, is dangerous for the optimization practitioner, who can quickly latch on to a basic revenue lift at the whole-population or basic-segment level (e.g., new vs. return visitor). It's easy to push out this broad winning content and call it a success, but what are you missing within this high-level population or basic segment view?

The danger of acting on basic data assumptions is that you get only an approximate view of a diverse testing population, a "false average customer" who in actuality doesn't exist. What you're missing is a more granular view of the customer. For instance, is this person prequalified for this offer based on geography, demographics, or even third-party data sources like credit score? Does a combination of profile parameters, like geography with referrer variables, identify the most predictive segment to target for the greatest return? Perhaps basic segmentation is masking a more predictive variable or compound audience segment beneath the surface that won't be analyzed or uncovered unless this data is fed to the solution.

Adobe Target provides custom segmentation for filtering your results by granular audience segments so you can accurately target them for greater return on investment (ROI). This is why our out-of-the-box segments are customizable with parameters relative to any data source you feed into the solution via a strong set of application programming interfaces (APIs). There's even a batch profile upload API for customers who wish to bring in updated visitor profile data augmented with their offline branch, call center, and third-party data sources. These can be stitched together within a data warehouse, customer relationship management (CRM) system, or wherever profile data is stored, and kept current across several levels of data points, or with a data management platform like Audience Manager that updates the data regularly. Of course, the master marketing profile, which unifies all profile data collected within the Adobe Marketing Cloud, combines all of your digital first-party data into one profile for ease of aggregation within your data store.
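As a rough sketch of what feeding offline profile data to the batch API might look like: the snippet below builds a CSV of visitor records (one identifier column plus `profile.*` attribute columns) and posts it to a batch-update endpoint. The client code, endpoint path, and field names here are placeholders for illustration, not a definitive integration; consult the Target API documentation for the exact contract your account uses.

```python
# Hypothetical sketch of a bulk profile upload to Adobe Target.
# CLIENT_CODE, the endpoint path, and the profile.* field names are
# assumptions for illustration only.
import csv
import io
import urllib.request

CLIENT_CODE = "yourclientcode"  # placeholder: your Target client code
BATCH_URL = f"https://{CLIENT_CODE}.tt.omtrdc.net/m2/{CLIENT_CODE}/profile/batchUpdate"

def build_batch_csv(records):
    """Serialize visitor records into a CSV body: one identifier
    column plus any number of profile.* attribute columns."""
    fieldnames = ["thirdPartyId", "profile.creditTier", "profile.region"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    for rec in records:
        writer.writerow(rec)
    return buf.getvalue()

def upload_batch(csv_text):
    """POST the CSV body to the batch endpoint (network call;
    shown for shape, not executed here)."""
    req = urllib.request.Request(
        BATCH_URL,
        data=csv_text.encode("utf-8"),
        headers={"Content-Type": "text/csv"},
        method="POST",
    )
    return urllib.request.urlopen(req)

# Example: CRM rows stitched together offline, ready for upload.
records = [
    {"thirdPartyId": "crm-1001", "profile.creditTier": "prime", "profile.region": "west"},
    {"thirdPartyId": "crm-1002", "profile.creditTier": "subprime", "profile.region": "east"},
]
print(build_batch_csv(records))
```

In practice you would export these rows from your data warehouse or CRM on a schedule, so the visitor profile Target sees stays in sync with your offline systems.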

A common question I hear from Target customers across industries is: What about Big Data? What if I have years of historical customer data that I would like to use to personalize my content? Previously I mentioned feeding the "right data" to your testing and targeting solution. Per my analogy, you shouldn't feed a growing, hungry teenager a bunch of junk food. We have extremely advanced customers in retail, travel, financial services, and other industries with large, detailed profiles on their customers. This matters, especially given how frequently customers transact in travel and hospitality or make new investments in financial services. Feeding two years of data to a solution is valuable, but it can also bog the solution down. Only through testing and filtering by segments can you define the "healthy food," or most relevant data, to feed the system.

This topic came up in a recent panel in which my colleague Gina Casagrande discussed methods for refining your data and targeting the most effective variables for personalization. Manual testing and rules-based targeting, filtering reports by levels of variables, and comparing performance illuminate where targeting opportunities make sense and where they do not, defining the most valuable, or "healthy," profile data to feed your solution. Analytics can assist with this detailed view. For customers of Adobe Analytics and Adobe Target, advanced data synchronization between the two solutions brings broader, detailed context to your analyses. This allows you to apply any Analytics audience segment and success metrics to your test results for unlimited drill-down and identification of opportunities, even for segments or metrics built after the test has run. This speeds up the process of identifying the best opportunities for targeting relevant content based on your test hypothesis, and of flagging the "junk food" data that is irrelevant and unhealthy to feed the solution.
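The "filter reports by variables and compare performance" step can be sketched with plain statistics: compute each segment's conversion rate and run a two-proportion z-test on the difference. This is an illustrative sketch, not a Target feature; the segment names and numbers below are invented.

```python
# Illustrative sketch: decide whether a segment split is worth
# targeting by testing the gap between two conversion rates.
# Segments and counts are hypothetical.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-score for the difference between two conversion rates,
    using the pooled-proportion standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical report rows: segment -> (conversions, visitors)
segments = {
    "new_visitor": (180, 6000),
    "return_visitor": (540, 9000),
}
(c_a, n_a) = segments["new_visitor"]
(c_b, n_b) = segments["return_visitor"]
z = two_proportion_z(c_a, n_a, c_b, n_b)
print(f"new: {c_a/n_a:.3f}, return: {c_b/n_b:.3f}, z = {z:.2f}")
# |z| > 1.96 suggests the gap is significant at the 95% level,
# i.e., this variable is a candidate for targeted content.
```

Variables whose segments show no significant gap are the "junk food" you can safely drop from the profile data you feed the solution.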

Oftentimes, a testing and targeting solution has a single practitioner or small team that may need assistance identifying the right data. They need help with segment discovery, or uncovering the most predictive variables and combinations of profile variables to target. Although automation can be mistakenly seen as an advanced capability, it is invaluable in initially identifying where to focus your testing and personalization efforts. Don't be fooled by optimization point solutions that offer "segment discovery" by looking across disparate tests and surfacing segments that react significantly. Those tests focus on different hypotheses and areas, which means aggregate or high-level analysis again results in broad, unqualified assumptions about significant segments. How are those segments reacting, and why? Can we draw parallels between all of the hypotheses within all of your tests?

What is needed is automated personalization, which is unique to Target. Target leverages a set of self-optimizing machine-learning algorithms to automate the delivery of personalized experiences and content to the individual. It also provides a detailed insights report that evaluates visitor profile data and surfaces the most predictive variables (segment discovery) relative to the location and content fed into it. It also helps discard the non-predictive profile variables based upon the success metrics or conversion goals that define success in the locations where it runs. Like an automated nutritionist, the algorithms identify the healthiest data to feed the system for targeting at different locations of your digital channels, as well as which variables you should exploit for segmentation and targeting to keep program delivery and growth productive and healthy. The immediate success of this approach was discussed in a recent blog post by my colleague Kevin Lindsay on the payoff of effective personalization for an organization.
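To give a feel for the idea of surfacing predictive variables, here is a deliberately tiny sketch: it scores each profile variable by how much conversion rate varies across its values, then ranks them. This imitates the concept only; the data and the scoring rule are invented and bear no relation to Target's actual algorithms.

```python
# Toy "segment discovery" sketch: rank profile variables by the
# spread of conversion rates across their values. Data and scoring
# rule are invented for illustration, not Target's algorithm.
from collections import defaultdict

def rank_variables(visitors, variables, converted_key="converted"):
    """Score each variable by (max - min) conversion rate across
    its values; a larger spread means it is more predictive."""
    scores = {}
    for var in variables:
        stats = defaultdict(lambda: [0, 0])  # value -> [conversions, total]
        for v in visitors:
            stats[v[var]][0] += v[converted_key]
            stats[v[var]][1] += 1
        rates = [c / n for c, n in stats.values()]
        scores[var] = max(rates) - min(rates)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical visitor profiles: here "region" separates converters
# perfectly while "referrer" carries no signal.
visitors = [
    {"region": "west", "referrer": "search", "converted": 1},
    {"region": "west", "referrer": "email", "converted": 1},
    {"region": "east", "referrer": "search", "converted": 0},
    {"region": "east", "referrer": "email", "converted": 0},
]
ranking = rank_variables(visitors, ["region", "referrer"])
print(ranking)
```

A real insights report works over far more variables, interactions, and success metrics, but the output has the same shape: a ranked list telling you which profile data is worth feeding the system and which to discard.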

So, allow Adobe Target to help you grow your program with its open APIs, automated personalization, and self-regulation, identifying the healthiest data to feed your growing program, and see how you can produce the most profitable, accurate opportunities for targeting and personalization across your many digital channels and touch points.