One of my lifetime aspirations is to be a better multitasker. Although I can be very efficient at swiftly checking off tasks in my daily workload, I’m not always as good as I would like to be at balancing one activity while focusing on another. In general, we all confront certain drawbacks when trying to juggle multiple tasks, and these play out in the world of marketing just as they do in our daily lives.

I recently saw a joint Columbia/Stanford University study that found that having too many choices can be demotivating. For example, a large segment of shoppers would rather find a single option in the sock section of a department store than be overwhelmed by the multitude of variations and array of products available in most stores today. This is a large part of the motivation behind optimization: determining the most effective rules for whittling down the content, product, and experience options on your digital channels into a more personalized display. The aim is to reduce the friction or disconnection a digital visitor can feel when presented with an overwhelming and seemingly generic number of options.

In contrast to consumer preferences, as test optimization heroes we would love to be multitaskers. The more well-designed tests we can get out the door, the more reporting data we’ll have to interpret to determine the most (and least) effective targeting strategies and opportunities for increasing return on investment (ROI). However, most current testing solutions limit the number of tests or testing activities that can run concurrently or in the same location.

This is the result of inflexible implementation and a lack of customization in the test design process. These “solutions” provide no adjustment or customization within their single-line-of-code implementation, which prohibits creating multiple tests in one location without test collisions. Their segmentation is also not granular or customizable enough to place key subsegments of the population into discrete test or targeted experiences. And there are no built-in controls to help safeguard the purity of a test’s results or to protect one test from impacting the performance of another.

Many solutions today also provide only a handful of basic, out-of-the-box success metrics or key performance indicators (KPIs) that can be measured in reports. These cannot accurately gauge the relative impact of a test variation on subsequent experiences, which inhibits understanding of the factors leading to a main conversion event. Because of an inflexible implementation and lack of options, users of these solutions are confined to a full-service-only model, in which the vendor must code and execute the test design in the back end of the customer’s site, with only limited shared development resources on the vendor side that cannot scale to support rapid, concurrent testing and targeting velocity. Worse still, in many cases the tool itself simply imposes an arbitrary cap on the number of active tests.

A tool’s capacity to enable efficient, high-volume concurrent testing should be a primary factor and litmus test when choosing the right tool for growing your program, scaling your operation, and sharing your optimization success across your organization. The solution must work for you, not the other way around.

The Adobe Target engineering and product teams have spent substantial development time on safeguarding accuracy in concurrent testing. I’m proud to say that our customers scale to hundreds of concurrent tests in more advanced programs, and in some rare cases even into the thousands (with a focus still placed on effective, accurate test design within our guided linear workflow infused with best practices). Companies using our solution have confidence in their testing because of the levels of prioritization, segmentation, customization, and alerts we provide to ensure that our reporting is as accurate as possible.

To begin, Target offers a priority setting that can be configured when creating a test, so if an audience falls into multiple tests, one takes precedence. There are also activity collision alerts that inform the test designer of the potential to impact other active tests before a test goes live. Targeting within the test setup process exists at three levels—campaign, experience, and location—to more effectively orchestrate traffic through distinct experiences or series of experiences, and the ability to custom-set the percentage of traffic delivered in a split test or multiple-variation scenario means traffic can be funneled more discretely through the right selection of experiences to avoid collisions.
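To make the two mechanics above concrete, here is a minimal sketch of how priority-based activity selection and custom traffic-split percentages can work in principle. All names and data shapes here are illustrative assumptions for this post, not Adobe Target’s actual API or data model.

```python
import random

def select_activity(activities, visitor_audiences):
    """Pick the single highest-priority activity the visitor qualifies for,
    so overlapping tests don't collide on the same visitor."""
    eligible = [a for a in activities if a["audience"] in visitor_audiences]
    if not eligible:
        return None
    return max(eligible, key=lambda a: a["priority"])

def assign_experience(activity, rng=random):
    """Assign a visitor to one experience using custom split percentages."""
    roll = rng.uniform(0, 100)
    cumulative = 0.0
    for name, pct in activity["splits"]:
        cumulative += pct
        if roll < cumulative:
            return name
    return activity["splits"][-1][0]  # guard against rounding drift

# Two hypothetical activities that both target returning visitors.
activities = [
    {"name": "hero-banner-test", "audience": "returning", "priority": 10,
     "splits": [("control", 50.0), ("variant-b", 50.0)]},
    {"name": "nav-test", "audience": "returning", "priority": 5,
     "splits": [("control", 80.0), ("variant-b", 20.0)]},
]

winner = select_activity(activities, visitor_audiences={"returning"})
# The higher-priority activity takes precedence when a visitor
# qualifies for both, and the split then routes the visitor.
experience = assign_experience(winner)
```

The point of the sketch is the ordering: resolve the collision first (priority), then split traffic within the winning activity, so no visitor lands in two overlapping tests at once.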

In addition, granular customization of target audiences based on all available first-, second-, and third-party data, or even persistent aggregate profiles, means the rules of engagement for concurrent tests can be specific enough to deflect collisions. We even provide an option to include or exclude audience members who took part in previous tests, allowing for multipage and cross-channel testing options and scalability. There is also basic and advanced filtering of results, including the ability to apply synchronized (no-variance) analytics audience segments and success metrics to drill down more effectively into the reporting. This gives you a better understanding of your customer segments and lets you exclude extreme results that might skew winning experiences. Finally, the flexible implementation lets you efficiently design advanced test and targeting scenarios without the development resources required by other, inflexible products with rigid single-line-of-code implementations (which require unscalable vendor maintenance and administration behind the scenes).
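The include/exclude option described above can be pictured as a simple rule applied against each visitor’s test history. The profile structure and rule format below are assumptions for illustration only, not Adobe Target’s real data model.

```python
def qualifies(profile, rule):
    """Apply an include/exclude rule against a visitor's test history."""
    participated = rule["test_id"] in profile.get("past_tests", set())
    if rule["mode"] == "include":
        return participated
    return not participated

# Hypothetical visitor profiles with prior test participation.
profiles = [
    {"visitor": "a", "past_tests": {"checkout-test-1"}},
    {"visitor": "b", "past_tests": set()},
]

# Exclude anyone who already saw checkout-test-1, keeping the new
# test's results free of carry-over effects from the earlier test.
rule = {"mode": "exclude", "test_id": "checkout-test-1"}
eligible = [p["visitor"] for p in profiles if qualifies(p, rule)]
```

Flipping the rule’s mode to "include" would instead target only prior participants, which is the multipage and cross-channel case: following the same audience through a sequence of tests.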

When considering a test optimization solution, be sure to evaluate its ability to handle rapid concurrent test scenarios, and ask the right questions about the solution’s flexibility. Ask to speak with other customers, visit other customers’ websites, or review published case studies where scalable concurrent testing is in full practice, to make sure you’re making the best decision for your organization. Don’t find yourself in a rut when you’re ready to ramp up your program but lack the right tools, strategy, or support to execute at the depth and speed required to expand your program and your success rapidly.