One of the funnier tricks of the human mind is the desire to pigeonhole, or to describe things in as much detail as possible. While there are stereotypes and other harmful versions of this, the inverse is usually far more likely to cause havoc with your optimization program, and as such it is the next bias that you need to be aware of: the conjunction fallacy, or “the tendency to assume that specific conditions are more probable than general ones.”

The classic example of this fallacy is to ask someone, “Which is more likely to be true about a person on your site? Did they come from search, or did they come from your paid search campaign code, land on your #3 landing page, and then look at 3 pages before entering your funnel?” Statistically, there is no way for the second statement to be more likely than the first, since the first incorporates the second along with a much larger audience, meaning that the scale is orders of magnitude greater. Yet we often find ourselves trying to think or do analysis in the most detailed terms possible, hoping that some persona or other minute sub-segment is somehow more likely to be valuable than the much larger population.
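That statistical point can be made concrete with a quick simulation. This is a minimal sketch with made-up visitor data; the channel names, page numbers, and probabilities are all hypothetical, not real site figures. The specific conjunction can never be more probable than the general condition that contains it:

```python
import random

random.seed(42)

# Hypothetical visitor log: each visitor gets a random channel,
# landing page, and page count. All values are illustrative.
visitors = [
    {
        "channel": random.choice(["search", "paid_search", "direct", "email"]),
        "landing_page": random.randint(1, 5),
        "pages_viewed": random.randint(1, 6),
    }
    for _ in range(100_000)
]

# General condition: the visitor came from any kind of search.
from_search = [v for v in visitors if v["channel"] in ("search", "paid_search")]

# Specific conjunction: paid search AND landing page #3 AND 3+ pages viewed.
specific = [
    v for v in from_search
    if v["channel"] == "paid_search"
    and v["landing_page"] == 3
    and v["pages_viewed"] >= 3
]

p_general = len(from_search) / len(visitors)
p_specific = len(specific) / len(visitors)

# P(A and B and C) can never exceed P(A): the conjunction is a subset.
assert p_specific <= p_general
print(f"P(search) = {p_general:.3f}, P(conjunction) = {p_specific:.3f}")
```

However you tune the numbers, the assertion always holds, which is exactly why betting on the narrow description over the broad one is a losing move.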

This mental worm tends to make its appearance most often when groups set out to do segment analysis or to evaluate user groups. We dive into groups and try to figure out the rate of actions that we want to exploit. Whether it is an engagement score, category affinity, or even simple campaign analysis, we get so deep into the weeds that we miss a very simple truth: if the group is not large enough, then no matter what work we do, it is never going to be worth the time and effort to exploit it for revenue. The flip side of this is the inability or unwillingness to group these same users into larger segments that may be far more valuable to interact with. Whether it is people who have looked at 5 category pages and signed up for newsletters, or other inefficient levels of detail, you need to always keep an eye on your ability to do something with the data.

This also plays out in your biases about what type of test you run. Even if Internet Explorer and Firefox users may be worth more, or be more exploitable, than campaign code 784567, which is only 2% of your users, this bias makes you want to target that specific group so much more, both as a sign of your great abilities and because we want to be more specific in our interactions with people. Even if the small group is much more exploitable, the smaller scale of impact still makes it far less valuable to your site.

Here are some very simple rules for segmentation that will help you combat this fallacy:

1) Test all content against all feasible segments; never presuppose that you are targeting group X.

2) Measure all segmentation and targeting against the whole, so that you have the same scale for comparing relative impact.

3) All segments need to be actionable and comparable, meaning the smallest segments should generally be greater than 7-10% of your population, depending on your traffic volume.

4) Segments need to incorporate more than site behaviors and how people arrived at the site; try to include segments of all descriptions in your analysis. Just because you want to target a specific behavior does not mean that behaviors have more value than non-controllable interactions such as time of day.

5) Be very, very excited when you prove your assumptions wrong about which segment matters most or is the best descriptor of exploitable user behavior.
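Rules 2 and 3 can be sketched in a few lines of code. This is a toy illustration, assuming hypothetical segments with made-up visitor counts and revenue-per-visitor figures; the segment names and the 7% floor are stand-ins, not real data:

```python
# Hypothetical segment summaries: visitor counts and revenue per visitor (RPV).
# All names and numbers are invented for illustration.
segments = {
    "campaign_784567":    {"visitors": 2_000,  "rpv": 3.50},
    "firefox_and_ie":     {"visitors": 30_000, "rpv": 1.40},
    "newsletter_signups": {"visitors": 12_000, "rpv": 2.10},
}
total_visitors = 100_000
MIN_SHARE = 0.07  # rule 3: smallest actionable segment, ~7% of the population

results = {}
for name, seg in segments.items():
    share = seg["visitors"] / total_visitors
    # Rule 2: scale each segment's value to the whole population, so every
    # segment is compared on the same footing rather than on its own rate.
    total_impact = seg["visitors"] * seg["rpv"]
    results[name] = {
        "share": share,
        "total_impact": total_impact,
        "actionable": share >= MIN_SHARE,
    }
    print(f"{name}: share={share:.1%}, "
          f"total impact=${total_impact:,.0f}, "
          f"actionable={share >= MIN_SHARE}")
```

Notice how the highest-RPV segment, campaign_784567, fails the actionability floor and delivers the smallest total impact, while the dull, broad browser segment wins on scale. That is the conjunction fallacy in miniature.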

If you follow those rules, you are going to get more value from your segment interactions, and you will stop yourself from falling into this trap. We often have to force a system on ourselves to ensure that we are being better than we really are, but when it is over, we can look back and see how far we have come and how much we grew because of that discipline. Revel in those moments, as they will be the things that give the greatest value to you and your program.

2 comments
John Hunter

A similar idea (I think) is believing continuing to optimize based on your current users is best. It might be. But you might have wandered into an area where you make 5% of your potential market very happy but those other 95% won't use it. That is potentially a very bad state to be in.

Andrew Anderson

That is not quite what this is about, but it is an example of a form of bounded rationality, or of the N-armed bandit problem. You can never stop trying to grow your market, but there becomes a question of how much you allocate to growing the market versus how much you optimize what you have. This really comes down to efficiency: do you make more growing, or do you make more optimizing? And how much do you allocate to exploring that option on an ongoing basis? Either way, you need to think about the best way to describe the same users. You can define anyone any way you want, with as much or as little detail as possible, but when it is all said and done, you have to be able to leverage that definition in the way that generates the most good, as opposed to the way that creates the most uniqueness.