Mid last year, we here on the Adobe Media Optimizer (AMO) team announced the availability of an integration between Adobe Analytics and Media Optimizer. With this integration, customers could begin gaining more value from the rich analytics data they’ve accumulated in SiteCatalyst by feeding it into the powerful algorithms of our forecasting and automated bidding systems. No longer were separate tags required to run multiple products within the Adobe Marketing Cloud. Now your world is streamlined and simplified. One tag. One data collection system. One set of unified data.
With dozens of users now set up with the integration, we’ve seen some excellent results. Not only have marketing programs leveraged the data to drive incremental ROI from our automated bidding systems, they have also realized efficiencies in workflow and reporting, with SiteCatalyst conversion metrics all available in Media Optimizer, and all search engine metrics (Clicks, Cost, Impressions, etc.) passed automatically back to Adobe Analytics for reporting and analysis in SiteCatalyst, Discover, and ReportBuilder.
However, back in the summer of 2012 we were just scratching the surface. From our perspective: If some data is good, more data is even better. So the brilliant minds of our data scientists continued plugging away to uncover even more value from the gold mine of analytics data now available to Media Optimizer. And – eureka! – they’ve done it again! Just weeks ago, the Media Optimizer product team announced the launch of beta for a site engagement metrics integration.
But first, a little context…
One of the biggest challenges for any bidding system – be it algorithmic, rules-based, or a summer intern playing manual “bid jockey” – is data scarcity. Conversions are great, and they are exactly what we hope our marketing programs lead to. Unfortunately, there are never “enough” conversions; you always want more. And we know that across all Adobe customers, the average conversion rate is only around 3%. That means out of every 100 visitors you drive to your website, 97 of them don’t actually complete the ultimate action you’d like them to perform.
Traditional bid optimization focuses heavily on the 3% of converting visitors, increasing spend on the ads and campaigns that drive that traffic. However, as most marketers know, only a small fraction of digital marketing initiatives actually account for this converting traffic frequently enough to generate the data volume needed for effective bid optimization.
So what do you do about those keywords and ads that only convert once a month? Once a quarter? Annually? These biddable objects don’t have nearly enough conversion data to make solid bid decisions day in and day out. And there are a lot of them! (That’s why they’re appropriately named the “long tail”.)
Under the traditional approach of Hierarchical Model Estimation, keywords with sparse conversion data have their performance extrapolated by pooling data from higher in the account hierarchy. That means that if a given keyword only converts, say, once in the last 90 days, the system looks at the entire ad group the keyword is in to see if there is enough data to build an appropriate model. And if the ad group still doesn’t have enough data, the system looks at the campaign level. The logic is intuitive enough: Keywords with similar attributes should be grouped together, therefore performance for the group in the short term should be able to approximate the performance of the individual in the long term.
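For the technically curious, the fallback logic described above can be sketched roughly as follows. This is a simplified illustration, not Media Optimizer’s actual implementation: the `Node` structure, the minimum-conversion threshold, and all numbers are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """One level of the account hierarchy (keyword, ad group, campaign...)."""
    clicks: int
    conversions: int
    parent: Optional["Node"] = None

def estimate_conversion_rate(keyword: Node, min_conversions: int = 30) -> float:
    """Walk keyword -> ad group -> campaign until some level has
    enough conversion data to model from (threshold is illustrative)."""
    level = keyword
    while level is not None:
        if level.conversions >= min_conversions:
            return level.conversions / level.clicks
        last = level
        level = level.parent
    # No level met the threshold: fall back to the broadest pool we saw.
    return last.conversions / max(last.clicks, 1)

# A sparse keyword (1 conversion in 90 days) inherits its campaign's rate.
campaign = Node(clicks=10_000, conversions=300)
ad_group = Node(clicks=400, conversions=8, parent=campaign)
keyword = Node(clicks=50, conversions=1, parent=ad_group)
print(estimate_conversion_rate(keyword))  # 300 / 10_000 = 0.03
```

The keyword and ad group are both too sparse, so the estimate comes from the campaign pool, exactly the “look one level up” behavior described above.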
And the results we’ve seen over the years have indicated that this hierarchical modeling works fairly well – with Media Optimizer customers experiencing notable performance improvements after adopting portfolio-style bidding, and the system generating models that approached 95% accuracy.
“But why,” we asked ourselves, “should we continue to estimate performance of individual keywords based on the collective performance of other keywords when we now have access to exponentially more data about the traffic that each and every keyword is driving itself?” We knew there was a better way, and we were driven to push that 95% model accuracy even higher.
Enter the next generation of data integration between Adobe Analytics and Adobe Media Optimizer. Under the newly designed system, Media Optimizer algorithms can now evaluate site engagement metrics such as PageViews/Visit, Time Spent on Site, and Bounce Rate when modeling out bid recommendations. This means that even when certain keywords and ads don’t have a high volume of conversion data (which is the majority of your account the majority of the time!), the system evaluates the level of engagement each individual keyword and ad drives to your digital properties, and models how well each of these engagement metrics serves as a leading indicator of eventual conversion behavior.
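To make the idea concrete, here is one minimal way to sketch “engagement as a leading indicator”: blend a keyword’s sparse observed conversion rate with an engagement-based prior, leaning on the prior when clicks are few. The coefficients, the `prior_weight` shrinkage scheme, and the function name are all hypothetical illustrations, not the actual Media Optimizer model.

```python
def engagement_adjusted_rate(clicks: int, conversions: int,
                             pages_per_visit: float,
                             time_on_site_sec: float,
                             bounce_rate: float,
                             prior_weight: float = 100.0) -> float:
    """Blend observed conversion rate with an engagement-based prior.

    The prior is a toy linear model: more pages/visit and time on site
    suggest higher eventual conversion; higher bounce rate suggests lower.
    """
    prior = max(0.0, 0.005 * pages_per_visit
                     + 0.0001 * time_on_site_sec
                     - 0.02 * bounce_rate)
    # Shrinkage: with few clicks the prior dominates; with many clicks
    # the observed rate dominates.
    return (conversions + prior_weight * prior) / (clicks + prior_weight)

# A long-tail keyword: 1 conversion in 50 clicks, but strong engagement.
rate = engagement_adjusted_rate(clicks=50, conversions=1,
                                pages_per_visit=4.0,
                                time_on_site_sec=120.0,
                                bounce_rate=0.30)
print(round(rate, 4))
```

The point of the sketch is the structure, not the numbers: sparse conversion data alone would peg this keyword at 2% (1/50), while its healthy engagement signals nudge the estimate toward what similar, well-engaged traffic tends to do.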
The beta program for rolling out this new cross-solution integration was announced just a few short weeks ago, and already we are seeing impressive results for participating accounts. Across a sample group of twelve portfolios running on the new models incorporating engagement metrics, we have seen long-tail model accuracy improve from 82.26% to 92.03%. That’s an increase of nearly 10 percentage points in forecast accuracy for the most difficult segment to model!
Obviously we are still just scratching the surface of what is possible when you merge the rich data of an industry-leading analytics platform with the power of world-class algorithmic modeling for marketing automation. Whether it’s rapidly adapting to an ever-changing digital marketing landscape (think Google enhanced campaigns) or diving deeper into analytics-based segmentation (time-parting and device-/geo-targeting anyone?), the Adobe Media Optimizer team will continue to push forward to help our customers realize increasing levels of automation efficiency and ROI.