In my last blog, I highlighted the benefits of applying firmographic data provided by Demandbase to your visitor profiles to assess the level of engagement or lead score of a potential B2B customer and determine the next best collateral or offer. This B2B use case is an example of a broader theme that applies to all verticals, including retail, financial services, travel, and media and entertainment. The ability to aggregate data for testing and targeting in an optimization program is critical to understanding the best-performing personalization strategies. A program’s success and scalability depend on the data available to act on. The more data you can bring into the testing process, the more granular or specific you can be in your segmentation; and the more distinct the segments, the better you can filter the data to achieve accurate results. For example, a test that only looks at the entire population, or at broad segments such as new versus returning visitors, may miss the distinct preferences of certain key subsegments: people who were included in a test at a previous touchpoint, for instance, or those who expressed an interest that was not collected natively within the first-party data captured in the optimization tool.

Data variance between data sets is the fundamental obstacle to bringing analytics data into optimization and acting on it. This variance exists because legacy products counted visitors in slightly different ways, so the data on a particular visitor does not match up when combined with that same visitor’s data from a different product. Most optimization tools on the market today offer some limited ability to bring in external data, including analytics, for segmentation. Aside from having to account for the variance, this process has always involved additional tagging on the webpage, which requires a lengthy and expensive initial audit and extensive development work. Because the site and its pages change constantly, the tagging also tends to break down frequently and requires recoding or maintenance to repair issues or account for changes in the marketplace. The variance also requires segments to be built within each tool separately, and it introduces a variance in the results that can reduce accuracy.

Fortunately, the drawbacks of data variance are a thing of the past. Over the past year, a dedicated development team at Adobe has worked to resolve the data variance between Adobe Analytics and Adobe Target, and this feature has been successfully integrated into our April release. Here’s how it allows optimization teams to act on, and analyze, their test results based on analytics data without limits.

With the master marketing profile, visitors are counted once and in the same fashion across all Adobe Marketing Cloud solutions. This synchronizes how Analytics counts a visitor and how Target counts that same visitor, which means that any segment discovered or built within Analytics can automatically be made available within Target’s audience library as a test audience and used for filtering results. Eliminating data variance gives you complete confidence in launching a test in real time against an audience that Analytics has detected, through anomaly detection or other means, to be bouncing from the site. Based on the master marketing profile, testing and targeting content can then reengage those visitors and improve their experience and the site’s performance in terms of revenue and conversion. For instance, if Analytics detects that visitors arriving from a YouTube video are bouncing from the homepage at an appreciable rate (which anomaly detection alerts can surface), you can immediately include these visitors in a test to see if alternate content improves their stickiness and ultimate conversion. The free unified cloud makes communication between the analytics and optimization teams instantaneous, so little time is lost in analog discussions via phone or email. Reports can also be shared easily in this collaborative environment to give context and expose the issue to the appropriate team members.
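
To make that trigger concrete, here is a minimal, purely illustrative sketch in Python of the kind of check involved. The numbers and the simple z-score rule are assumptions made for the example; they are not how Analytics’ anomaly detection or the shared audience library are actually implemented.

```python
# Hypothetical sketch: flag a traffic source whose bounce rate is unusually
# high compared with its recent history, so those visitors can be routed into
# a test. Illustrative only -- Adobe Analytics' anomaly detection and the
# shared audience library are configured in the product, not via this code.

from statistics import mean, stdev

def bounce_rate_anomaly(history, today, threshold=3.0):
    """Return True if today's bounce rate sits more than `threshold`
    standard deviations above the historical mean (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu
    return (today - mu) / sigma > threshold

# Daily bounce rates for visitors referred by a particular YouTube video
# (made-up numbers for illustration).
history = [0.31, 0.29, 0.33, 0.30, 0.32, 0.28, 0.31]
today = 0.55

if bounce_rate_anomaly(history, today):
    # This is the point where the analytics team would share the
    # "YouTube referrals bouncing from the homepage" segment with the
    # optimization team and launch a test against it.
    print("Anomaly detected: include YouTube referrals in a homepage test.")
```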

Remember, profiles and segmentation are only as good as the data they are based on. Although the master marketing profile aggregates all known data on a visitor across all components of your solution, it also supports bringing in second- or third-party data via APIs to bolster the profile. Many enterprise customers across verticals regularly aggregate their warehoused data within the master marketing profile, using the batch profile upload API for advanced personalization. For example, a bank that has identified a customer’s interest in refinancing a mortgage, while the customer is in a branch or speaking to the call center, can target and personalize a relevant offer when the customer arrives at the site, based on the customer’s prequalifications (using offline variables and third-party credit score data).
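
To give a feel for what a batch profile upload can look like, here is a small sketch that builds a CSV of offline attributes and posts it over HTTP. The endpoint URL, CSV columns, and absence of authentication are placeholders for illustration, not the documented contract; refer to the batch profile update API documentation for the real format.

```python
# Minimal sketch of pushing warehoused (offline) attributes into visitor
# profiles through a batch upload. The endpoint URL, CSV schema, and missing
# authentication are placeholders rather than the documented Adobe Target
# contract -- check the batch profile update API docs for the real format.

import csv
import io

import requests

# Offline data joined in the warehouse: a visitor ID plus attributes captured
# in the branch or call center (hypothetical values).
offline_records = [
    {"visitor_id": "12345", "refi_interest": "yes", "prequalified": "true"},
    {"visitor_id": "67890", "refi_interest": "no", "prequalified": "false"},
]

# Build the CSV payload in memory.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["visitor_id", "refi_interest", "prequalified"])
writer.writeheader()
writer.writerows(offline_records)

# POST the batch to a placeholder upload endpoint.
response = requests.post(
    "https://example.invalid/profile/batchUpdate",  # placeholder URL
    data=buffer.getvalue(),
    headers={"Content-Type": "text/csv"},
)
print(response.status_code)
```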

Synchronizing Analytics and Target data also helps supercharge the testing process, even within a simple A/B test. Now, if Analytics reporting is selected when building a test, there’s no need to define success metrics or segments for filtering. This reduces the setup process from four steps to two and a half, making tests easy to build and deploy without any limitations once reporting is available for analysis. All you need to do is define the audience(s) within the test, the experience variations, and one success metric. The test can then be pushed live, and any success metric or segment defined in Analytics, even one defined after the test runs, can be applied to the results to filter and uncover distinct performance, preferences, and profitable targeting opportunities. Say an executive suddenly decides he or she wants to know how Mac users from Ohio perform in the test, but you failed to define that segment for filtering during test setup. You don’t need to go back to Analytics and try to decipher this outcome; instead, you can apply the segment retroactively to the test and give an immediate answer.
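
As a rough illustration of what applying a segment retroactively means, the snippet below filters invented test results by a segment defined after the fact; in practice this happens in the reporting interface against the shared Analytics segment rather than in code.

```python
# Illustration of applying a segment to A/B test results after the fact.
# The data is invented; in the product this filtering happens in the
# reporting UI against the shared Analytics segment, not in code.

import pandas as pd

# One row per visitor who entered the test.
results = pd.DataFrame({
    "experience": ["A", "B", "A", "B", "A", "B"],
    "os":         ["Mac", "Mac", "Windows", "Mac", "Mac", "Windows"],
    "state":      ["OH", "OH", "OH", "CA", "OH", "OH"],
    "converted":  [1, 0, 0, 1, 1, 1],
})

# Segment defined only after the test ran: Mac users from Ohio.
segment = results[(results["os"] == "Mac") & (results["state"] == "OH")]

# Conversion rate per experience within the retroactive segment.
print(segment.groupby("experience")["converted"].mean())
```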

This ability to filter retroactively is a critical step in making test results more intelligent and actionable. The key to strategy and to prioritizing testing and personalization opportunities is a cost-benefit analysis: What’s the best opportunity for improving revenue and customer experience with the most efficient use of company resources? Having an aggregated master marketing profile, which is easily enhanced with historical data warehousing and third-party data, also makes it faster and easier to determine and qualify the best personalized experience for each visitor.

2 comments
aliskink

Great to hear the master profile is here at last.


Is there not a potential risk when reporting campaigns in SiteCatalyst, in that you can still attribute success events to an experience long after the experience has stopped being displayed on-site? At least that was the case with the reports in SiteCatalyst that came out of the T&T integration.


In most A/B tests we don't expect the experience to influence the visitor's behaviour on return visits after the content is no longer being shown, but the SiteCatalyst report continues to 'remember' that last experience and will continue attributing events to it in later visits. Of course we can set a timeframe around the analysis, but it's not intuitive and can lead to mis-reporting.