In my last blog, I highlighted the benefits of applying firmographic data from Demandbase to your visitor profiles to assess the engagement level or lead score of a potential B2B customer and determine the next best collateral or offer. This B2B use case is an example of a broader theme that applies to all verticals, including retail, financial services, travel, and media and entertainment. The ability to aggregate data for testing and targeting in an optimization program is critical to understanding the best-performing strategies for personalization. A program’s success and scalability depend on the data available to act on. The more data you can bring into the testing process, the more granular and specific your segmentation can be; and the more distinct the segments, the better you can filter the data to achieve accurate results. For example, a test that only looks at the entire population, or at broad segmentation such as new versus returning visitors, may miss the distinct preferences of key subsegments. These could be people who were included in a test at a previous touchpoint, or those who expressed an interest that was not captured natively in the first-party data collected by the optimization tool.

Data variance between data sets is the fundamental obstacle to bringing analytics into optimization and acting on it. This variance exists because legacy products counted visitors in slightly different ways, so the data on a particular visitor does not match up when combined with that same visitor’s data from a different product. Most optimization tools on the market today offer some limited ability to bring in external data, including analytics, for segmentation. Aside from having to account for the variance, this process has always involved additional tagging on the webpage, which requires a lengthy and expensive initial audit and extensive development work. Because sites and pages change constantly, it also tends to break down frequently and requires recoding or maintenance to repair issues or to account for changes in the marketplace. The variance also requires segments to be built separately within each tool and introduces discrepancies in the results that can reduce accuracy.

Fortunately, the drawbacks of data variance are a thing of the past. Over the past year, a dedicated development team at Adobe has worked to resolve the data variance between Adobe Analytics and Adobe Target, and this capability has been successfully integrated into our April release. Here’s how it allows optimization teams to act on, and analyze, their test results based on analytics data without limits.

With the master marketing profile, visitors are counted once and in the same fashion across all Adobe Marketing Cloud solutions. This synchronizes how Analytics counts a visitor and how Target counts that same visitor, which means that any segment discovered or built within Analytics can automatically be made available within Target’s audience library as a test audience and used for filtering results. Eliminating data variance gives you complete confidence in launching a test in real time against an audience that Analytics has detected, through anomaly detection or other means, to be bouncing from the site. Based on the master marketing profile, you can now test and target content to reengage those visitors and improve their experience and the site’s performance in terms of revenue and conversion. For instance, if Analytics detects that visitors arriving from a YouTube video are bouncing from the homepage at an appreciable rate (which anomaly detection alerts can surface), you can immediately include these visitors in a test to see whether alternate content improves their stickiness and ultimate conversion. The free, unified cloud makes communication between the analytics and optimization teams instantaneous, so little time is lost in back-and-forth discussions via phone or email. Reports can also be shared easily in this collaborative environment to give context and expose the issue to the appropriate team members.
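
To make that workflow concrete, here is a minimal sketch of what wiring an Analytics-surfaced audience into a Target test might look like programmatically. It is illustrative only: the endpoint path, payload fields, and segment identifier are hypothetical placeholders rather than the documented Target API, since in practice the shared segment simply appears in Target’s audience library for point-and-click use.

```python
# Illustrative sketch only: the base URL, endpoint path, payload fields, and
# segment ID below are hypothetical placeholders, not the documented Adobe APIs.
import requests

ANALYTICS_SEGMENT_ID = "youtube_homepage_bouncers"  # hypothetical shared segment


def launch_bounce_recovery_test(api_base: str, token: str, segment_id: str) -> dict:
    """Create an A/B activity targeted at a segment surfaced by Analytics."""
    headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
    activity = {
        "name": "Homepage bounce recovery - YouTube referrals",
        "audienceIds": [segment_id],                  # audience discovered in Analytics
        "experiences": ["control", "alternate_hero"],
        "successMetric": "conversion",
    }
    response = requests.post(f"{api_base}/activities", json=activity, headers=headers)
    response.raise_for_status()
    return response.json()
```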

Remember, profiles and segmentation are only as good as the data they are based on. Although the master marketing profile aggregates all known data on a visitor across all components of your solution, it also supports bringing in second- or third-party data via APIs to bolster the profile. Many enterprise customers across verticals regularly aggregate their warehoused data within the master marketing profile, using the batch profile upload API for advanced personalization. For example, a bank that has identified a customer’s interest in refinancing a mortgage while the customer is in a branch or speaking to the call center can target and personalize a relevant offer when that customer arrives at the site, based on the customer’s prequalifications (using offline variables and third-party credit score data).
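
As a rough sketch of how a nightly job might feed those offline signals into visitor profiles, the snippet below flattens warehoused records into a simple CSV and posts it to an upload endpoint. The column names, endpoint, and authentication shown are assumptions for illustration, not the exact batch profile upload specification.

```python
# Hypothetical sketch: the column names, endpoint URL, and auth scheme are
# placeholders for illustration, not the exact batch profile upload format.
import csv
import io

import requests


def build_profile_batch(records: list[dict]) -> str:
    """Flatten warehoused CRM/call-center records into a CSV batch file."""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["visitor_id", "profile.refi_interest", "profile.prequalified_tier"])
    for record in records:
        writer.writerow([record["visitor_id"], record["refi_interest"], record["prequal_tier"]])
    return buffer.getvalue()


def upload_batch(upload_url: str, token: str, batch_csv: str) -> None:
    """Push the batch file so the offline attributes join the visitor profiles."""
    headers = {"Authorization": f"Bearer {token}", "Content-Type": "text/csv"}
    response = requests.post(upload_url, data=batch_csv.encode("utf-8"), headers=headers)
    response.raise_for_status()


# Example nightly run: branch and call-center signals for the mortgage scenario
batch = build_profile_batch([
    {"visitor_id": "visitor-001", "refi_interest": "yes", "prequal_tier": "A"},
])
```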

Synchronizing Analytics and Target data also helps supercharge the testing process, even within a simple A/B test. Now, if Analytics reporting is selected when building a test, there’s no need to define success metrics or segments for filtering. This reduces the setup process from four steps to 2½, making tests easy to build and deploy without limiting the reporting available for analysis. All you need to do is define the audience(s) within the test, the experience variations, and one success metric. The test can then be pushed live, and any success metric or segment defined in Analytics (even one defined after the test runs) can be applied to the results to filter and uncover distinct performance, preferences, and profitable targeting opportunities. Say an executive suddenly decides he or she wants to know how Mac users from Ohio performed in the test, but you didn’t define that segment for filtering during test setup. You don’t need to go back to Analytics and try to decipher this outcome; instead, you can apply the segment retroactively to the test and give an immediate answer.
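
Conceptually, retroactive filtering is just applying a segment definition to results that were already collected. The toy Python below (the visitor attributes and segment rule are invented sample data) shows the idea: define the “Mac users from Ohio” segment after the test has run and recompute conversion per experience on the spot.

```python
# Toy illustration of retroactive segmentation: rows and attributes are
# invented sample data, not real reporting output.
from collections import defaultdict


def conversion_by_experience(rows, segment):
    """Filter collected test rows by a segment rule, then compare experiences."""
    totals = defaultdict(lambda: {"visitors": 0, "conversions": 0})
    for row in rows:
        if not segment(row):
            continue
        bucket = totals[row["experience"]]
        bucket["visitors"] += 1
        bucket["conversions"] += int(row["converted"])
    return {
        experience: counts["conversions"] / counts["visitors"]
        for experience, counts in totals.items()
        if counts["visitors"]
    }


# Segment defined only after the test ran: Mac users from Ohio
mac_users_ohio = lambda row: row["os"] == "Mac OS" and row["state"] == "OH"

rows = [
    {"experience": "A", "converted": True,  "os": "Mac OS",  "state": "OH"},
    {"experience": "B", "converted": False, "os": "Mac OS",  "state": "OH"},
    {"experience": "A", "converted": True,  "os": "Windows", "state": "CA"},
]
print(conversion_by_experience(rows, mac_users_ohio))  # only the two Ohio Mac visits count
```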

This is a critical step in making test results more intelligent and actionable. The key to strategy, and to prioritizing testing and personalization opportunities, is a cost-benefit analysis: what is the best opportunity for improving revenue and customer experience with the most efficient use of company resources? Having an aggregated master marketing profile, easily enhanced with warehoused historical data and third-party data, makes it faster and easier to determine and qualify the best personalized experience for each visitor.
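
One simple way to frame that cost-benefit question is a rough prioritization score that weighs expected revenue impact against the effort required to run the test. The heuristic and the numbers below are purely illustrative assumptions, not a formula from the product.

```python
# Illustrative prioritization heuristic: all inputs are made-up assumptions.
def priority_score(monthly_traffic, expected_lift, revenue_per_conversion, effort_days):
    """Expected monthly value of a test idea divided by the effort to build it."""
    expected_value = monthly_traffic * expected_lift * revenue_per_conversion
    return expected_value / max(effort_days, 1)


ideas = {
    "Homepage hero test for bouncing YouTube visitors": priority_score(50_000, 0.002, 80, 5),
    "Personalized refinance offer for prequalified visitors": priority_score(8_000, 0.010, 300, 12),
}

for name, score in sorted(ideas.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: {score:,.0f}")
```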

2 comments
aliskink

Great to hear the master profile is here at last.

Is there not a potential risk when reporting Campaigns in SiteCatalyst, in that you can still attribute success events to an experience long after the experience has stopped being displayed on-site? At least that was the case with the reports in SiteCatalyst that came out of the T&T integration.

In most A/B tests we don't expect the experience to influence the visitor's behaviour on return visits after the content is no longer being shown, but the SiteCatalyst report continues to 'remember' that last experience and will continue attributing events to it in later visits. Of course we can set a timeframe around the analysis, but it's not intuitive and can lead to mis-reporting.