In part one of this blog, we discussed the shortcomings of rules-based targeting systems and why digital marketing today requires more automated, behavior-based targeting solutions built on algorithms and machine learning. Today we’ll take a closer look at the algorithmic approaches that Adobe Target employs to optimize content for website visitors.

In cases where a marketer has not explicitly specified rules, machine learning takes over to discover what content is likely to produce the desired outcomes for each user. As highlighted in an earlier blog, this approach saves time while improving matching accuracy, a win-win for marketers. However, simply handing decisions to algorithms isn't enough; arriving at the best predictive decision requires the right algorithm for the job.

Just as online testing involves comparing the effectiveness of different content and presentation layouts, automated behavioral targeting involves comparing the effectiveness of algorithms. There’s no “one size fits all” algorithm that will be a good fit for every marketing use case. Therefore, Adobe Target uses a combination of industry-standard, machine-learning algorithms to drive results. A few of the models that we use are listed below.

The team often blends two or more of these models, guided by empirical analysis, in order to maximize lift. In addition, the Adobe Target platform team continuously experiments with emerging models suited to specific industry verticals, using an offline model evaluation framework that includes broad-based simulation of user behavior. Our goal is to eventually expose the internals of this framework, as well as control of the delivery system, so customers can inject their own models and algorithms.

Our out-of-the-box modeling systems today use the following algorithms:

Statistical decision trees: Use a tree-structured predictive model that splits on visitor attributes to draw conclusions from behavioral observations.
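To make the idea concrete, here is a minimal hand-built sketch of a behavioral decision tree. The features (`page_views`, `returning`, `time_on_site`) and the leaf values are hypothetical, not Adobe Target's actual model; in practice the splits and leaf scores would be learned from data.

```python
def predict_conversion(visitor):
    """Walk a tiny hand-built tree of behavioral splits.

    Each internal node tests one visitor attribute; each leaf
    returns an illustrative conversion-likelihood score.
    """
    if visitor["page_views"] > 5:
        if visitor["returning"]:
            return 0.8  # engaged repeat visitor
        return 0.5      # engaged first-time visitor
    if visitor["time_on_site"] > 120:
        return 0.4      # few pages, but lingering
    return 0.1          # low engagement overall

# Example: an engaged returning visitor scores highest.
score = predict_conversion(
    {"page_views": 7, "returning": True, "time_on_site": 30}
)
```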

Random forest: Builds multiple decision trees on various sub-samples of the dataset and averages their predictions to improve predictive accuracy.
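The following sketch illustrates the two ingredients named above, bootstrap sub-sampling and averaging, using one-split "trees" (stumps) on a synthetic dataset. It is an illustration of the technique, not Adobe Target's implementation.

```python
import random

def train_stump(sample):
    """Fit a one-split 'tree': find the page_views threshold that
    best separates converters from non-converters in this sample."""
    best_thresh, best_acc = 1, 0.0
    for t in range(1, 10):
        acc = sum(
            (v["page_views"] > t) == v["converted"] for v in sample
        ) / len(sample)
        if acc > best_acc:
            best_thresh, best_acc = t, acc
    return best_thresh

def forest_predict(forest, visitor):
    """Average the votes of all stumps into a conversion score."""
    votes = [visitor["page_views"] > t for t in forest]
    return sum(votes) / len(votes)

random.seed(0)
# Synthetic data: visitors with >4 page views converted.
data = [{"page_views": p, "converted": p > 4} for p in range(10)] * 5
# Each stump sees a different bootstrap sub-sample of the data.
forest = [train_stump(random.choices(data, k=len(data))) for _ in range(25)]

score = forest_predict(forest, {"page_views": 8})
```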

Support Vector Machines: Provide a classification framework for modeling incoming attribute values in high-dimensional spaces.
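At prediction time, a linear SVM classifies a point by the sign of its decision function, w·x + b, where the weights w and bias b were learned to maximize the margin between classes. The sketch below uses hypothetical weights standing in for a trained model; the feature names are assumptions for illustration.

```python
def svm_decision(x, w, b):
    """Linear SVM decision rule: sign of the dot product plus bias.
    Returns +1 (likely converter) or -1 (unlikely converter)."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

# Hypothetical learned parameters over three behavioral features:
# page_views, bounce_rate, recency (in days).
w = [0.6, -0.2, 0.1]
b = -1.5

# An engaged visitor lands on the positive side of the hyperplane.
label = svm_decision([5.0, 0.3, 2.0], w, b)  # score = 1.64 -> +1
```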

Adobe Target is the only solution on the market that operates at high velocity, leveraging the massive volume of data we collect and applying our algorithmic systems to hundreds of distinct use cases across the digital marketing landscape. Making these algorithms work at the scale we encounter is a significant part of our engineering methodology and intellectual property. Another equally distinctive aspect of our platform is the use of a champion-challenger approach in our runtime systems, testing algorithms against each other to arrive at optimal lift for customers while maintaining relevance to end users across all surfaces.
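The champion-challenger pattern can be sketched as follows: route a small, deterministic slice of traffic to the challenger model, then compare observed conversion rates before deciding whether to promote it. This is an illustrative sketch of the general pattern, not Adobe Target's runtime; all names and the 10% traffic split are assumptions.

```python
import zlib

def serve(visitor_id, challenger_share=0.1):
    """Deterministically bucket a visitor: the same visitor always
    sees the same model, with ~challenger_share of traffic going
    to the challenger."""
    bucket = zlib.crc32(visitor_id.encode()) % 100
    return "challenger" if bucket < challenger_share * 100 else "champion"

# Running tallies: [conversions, impressions] per model.
stats = {"champion": [0, 0], "challenger": [0, 0]}

def record(model, converted):
    stats[model][0] += int(converted)
    stats[model][1] += 1

def conversion_rate(model):
    conversions, impressions = stats[model]
    return conversions / impressions if impressions else 0.0

# Usage: log outcomes per model, then compare rates to decide
# whether the challenger should become the new champion.
record("champion", True)
record("champion", False)
record("challenger", True)
```

In a production system the comparison would also need a significance test before promotion, since small traffic slices produce noisy rate estimates.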

And we’re not resting on our laurels. We’re continually challenging these algorithms in head-to-head competitions with other developers and research organizations to expand the repertoire of what we have in our system, as well as tuning them to deliver vertical, use-case-specific results.

Manish Prabhune