In part one of this blog, we discussed the shortcomings of rules-based targeting systems and why digital marketing today requires more automated, behavior-based targeting solutions built on algorithms and machine learning. Today we’ll take a closer look at the algorithmic approaches that Adobe Target employs to optimize content for website visitors.

In cases where a marketer has not explicitly specified rules, machine learning takes over to discover what content is likely to produce the desired outcomes for each user. As highlighted in an earlier blog, this approach saves time while improving matching accuracy, a win-win for marketers. However, simply using algorithms to make decisions isn’t enough; arriving at the best predictive decision requires the right algorithm for the job.

Just as online testing involves comparing the effectiveness of different content and presentation layouts, automated behavioral targeting involves comparing the effectiveness of algorithms. There’s no “one size fits all” algorithm that will be a good fit for every marketing use case. Therefore, Adobe Target uses a combination of industry-standard machine-learning algorithms to drive results. A few of the models that we use are listed below.

The team often blends one or more of these models in unique ways, based on empirical analysis, to maximize lift. In addition, the Adobe Target platform team continuously experiments with emerging models suited to specific industry verticals, using an offline model evaluation framework that includes broad-based simulation of user behavior. Our goal is to eventually expose the innards of this framework, as well as control of the delivery system, to allow customer-defined models and algorithms to be injected into the system.
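To give a flavor of what offline evaluation against simulated user behavior can look like (this is an illustrative sketch only, not Adobe Target’s framework; the experience names and conversion rates are invented), each candidate policy is replayed against a simple simulator and scored on its conversion rate:

```python
import random

# Hypothetical offline evaluation: each candidate policy picks an
# experience for a simulated visitor, and the simulator "converts"
# with a probability drawn from hidden true rates.
TRUE_RATES = {"experience_a": 0.10, "experience_b": 0.25}  # invented rates

def policy_always_a(visitor_id):
    return "experience_a"

def policy_always_b(visitor_id):
    return "experience_b"

def simulate(policy, n_visitors=10_000, seed=42):
    """Replay a policy over simulated visitors; return conversion rate."""
    rng = random.Random(seed)
    conversions = sum(
        rng.random() < TRUE_RATES[policy(v)] for v in range(n_visitors)
    )
    return conversions / n_visitors

rate_a = simulate(policy_always_a)
rate_b = simulate(policy_always_b)
```

A real framework would replace the Bernoulli simulator with richer behavioral models, but the principle is the same: candidate models can be compared cheaply, before any live traffic is at stake.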

Our out-of-the-box modeling systems today use the following algorithms:

Statistical decision trees: Use a decision tree as a predictive model to draw conclusions from behavioral observations.

Random forest: Fits multiple decision trees on various sub-samples of the dataset and uses averaging to improve predictive accuracy.

Support vector machines: Provide a classification framework for modeling based on incoming attribute values in high-dimensional spaces.
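As an illustration of these three model families (not Adobe Target’s actual implementation), here is a minimal sketch using scikit-learn, fitting each model to the same synthetic visitor-attribute data; the data and feature counts are invented for the example:

```python
# Illustrative only: fit the three model families above on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic "visitor" data: 200 visitors, 5 attributes each,
# with a binary outcome (e.g., converted vs. not).
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

models = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=50, random_state=0),
    "svm": SVC(kernel="rbf", random_state=0),
}

# Training-set accuracy for each model (a rough sanity check, not a
# substitute for proper held-out evaluation).
scores = {name: m.fit(X, y).score(X, y) for name, m in models.items()}
```

In practice each family trades off differently: single trees are interpretable, forests reduce variance through averaging, and SVMs handle high-dimensional attribute spaces well.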

Adobe Target is the only solution on the market that operates at high velocity, leverages the massive volume of data that we have, and applies our algorithmic systems to the hundreds of distinct use cases across the digital marketing landscape. Making the above algorithms work at scale with the volume that we encounter is a significant part of our engineering methodology and intellectual property. Another equally unique aspect of our platform is the use of a champion-challenger approach in our runtime systems to test algorithms against each other, arriving at optimal lift for customers while maintaining a focus on relevance to end users across all surfaces.
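A champion-challenger setup can be sketched as follows (a simplified illustration, not the production runtime; the traffic split and conversion rates are invented): the incumbent champion model serves most visitors while a challenger receives a small slice, and their observed conversion rates are compared to decide which keeps the slot.

```python
import random

# Hypothetical champion-challenger split: the challenger gets a small
# fraction of traffic; whichever arm converts better wins the slot.
CHALLENGER_SHARE = 0.10  # invented split

def assign(rng):
    return "challenger" if rng.random() < CHALLENGER_SHARE else "champion"

rng = random.Random(7)
served = {"champion": 0, "challenger": 0}
converted = {"champion": 0, "challenger": 0}
true_rate = {"champion": 0.12, "challenger": 0.15}  # invented rates

for _ in range(20_000):
    arm = assign(rng)
    served[arm] += 1
    if rng.random() < true_rate[arm]:  # simulated visitor response
        converted[arm] += 1

lift = (converted["challenger"] / served["challenger"]
        - converted["champion"] / served["champion"])
winner = "challenger" if lift > 0 else "champion"
```

The small challenger share limits the cost of testing a weaker model, while still accumulating enough traffic to detect a meaningful difference in lift.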

And we’re not resting on our laurels. We’re continually challenging these algorithms over time in head-to-head competitions with other developers and research organizations to further increase the repertoire of what we have in our system, as well as tweak them in a manner that delivers vertical, use-case-specific results.

Manish Prabhune
