There comes a moment in every program when it finds something that does not match what others want to hear. Sometimes it is that the new redesign didn’t work as planned; other times it is simply that targeting a specific group does not improve, or even lowers, overall site performance. The more successful the program, the more often this happens, and it happens most often when you are showing a result that proves the assumptions of the “experts” wrong. In those moments, the hardest thing to overcome is often the Semmelweis Reflex: the tendency to immediately dismiss new evidence or new knowledge that contradicts your current world view. Tied very closely to cognitive dissonance, the reflex is involuntary, a defense of one’s world view against evidence to the contrary. While we like to think that we will react to new information with deep consideration and analysis, the truth is that the brain is wired to immediately dismiss that information and to attack its presenter. At some point, every “expert” and thought leader in a community or organization runs into this effect, and often the most difficult moments of a program come from trying to overcome the shutting down of the logic centers of otherwise rational people.

The Semmelweis Reflex takes many forms, but it is essentially the immediate denial of, or search to discredit, information that we don’t want to hear. People do not want to hear that what they have believed or acted on is wrong, so they find any excuse to attack the information or its presenter in order to escape the message being delivered. It is not a conscious action but a subconscious one, and studies have shown it affects more intelligent people far more than less intelligent ones. It is the reason people think that not liking the way something sounds is the same as having an actual argument against it. We live with the dissonance of needing to understand and move through the world while simultaneously knowing that we cannot know everything or be good at everything. Because of that dissonance, we build walls that either feed our world view or, in the case of the Semmelweis Reflex, ignore or discredit anything that does not agree with us. We love to think that we are rational, and that when presented with new evidence we will evaluate and consider it, but history and psychology both show the exact opposite: we become emotional and ignore any and all evidence. People are wired to want to prove themselves “right” by any means necessary, reality be damned. The more evidence that mounts against us, the more emotional our reaction becomes and the more we reject factual evidence in favor of confirmation of our belief.

One of the most common ways a program can be run off the road by the Semmelweis Reflex is when we stop trying to find what is best, and to prove people wrong, and instead stick to what is safe and understandable. We cannot directly combat this reflex, which means that any time we lose focus on what matters and allow an individual to get caught up in being “right”, we are left needing either to confirm or to push back directly on their claims. We will always be limited if we only test what we want to test, or only look to improve things within the small window of what we already know or understand. A great deal of recent work, led by people like Nassim Taleb, argues that the biggest mistakes we make as a society come from only trying to rationalize what we know, and from not accepting that there are things that don’t fit the patterns we convince ourselves define the world. Testing programs are no different, and many leaders of these programs become gun-shy about trying things that might trigger this reflex in the people they are trying to impress. It can sap a lot of energy from a program when people dismiss things out of hand without real understanding. It is even more damaging when we lose resources to hunting for additional data to help someone disprove something they don’t understand. Many programs end up led by those least willing to challenge ideas and most willing to tell others what they want to hear, often without realizing they are themselves guilty of the same disease.

So what can you do to make sure that you are not losing ground and value to this fallacy? The simplest way is to establish clear rules of action before any test. Make testing about finding the best answer, not about proving any individual “right”. Set rules to ensure that you include enough recipes, that you include the null assumption as an actual recipe, and that you test things that break “best practices”. Getting people’s agreement on what you are trying to accomplish, on how you are going to test (not just on what people want to see win), and most importantly on the barriers and actions that follow purely from the data is the fundamental differentiator of a successful program. Make it about everyone focusing on the discipline of testing, not on individual ideas. The worst thing that can happen to a program is a never-ending spiral of finding more data or changing the target just to help someone understand something. The biggest mistake we make is not in the test but in our reactions to others’ reactions. Do not start answering questions just because you can, and never lose focus on what really matters just to appease someone.
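To make those rules concrete, here is a minimal sketch of what pre-registering them might look like in code. The TestPlan structure, its field names, and the thresholds are hypothetical illustrations of the idea, not any real testing platform’s API:

```python
from dataclasses import dataclass, field

# A minimal sketch of pre-registering the rules of action before launch.
# The structure, field names, and thresholds are illustrative assumptions.
@dataclass
class TestPlan:
    success_metric: str                          # the single, site-wide success metric
    recipes: list = field(default_factory=list)  # every variant, agreed up front
    min_recipes: int = 4                         # enough variants to actually challenge assumptions

    def validate(self) -> None:
        """Refuse to launch until the agreed rules are satisfied."""
        if "null" not in self.recipes:
            raise ValueError("Include the null assumption as an actual recipe.")
        if len(self.recipes) < self.min_recipes:
            raise ValueError(f"Need at least {self.min_recipes} recipes to challenge assumptions.")

plan = TestPlan(
    success_metric="revenue_per_visitor",
    recipes=["null", "control", "remove_banner", "simplify_nav"],
)
plan.validate()  # raises before launch if the pre-agreed rules are not met
```

The point of writing the rules down this way is that they exist before anyone sees results, so no one can quietly renegotiate them once the data starts contradicting their favorite idea.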

Before you do anything with a test, make sure that you are looking at only a single success metric, one that is consistent for the entire site. Make sure that you are testing enough recipes to challenge people and to prove them wrong. Make sure that you have agreement on when to act on the data, and on what acting on the data means. Never start a test without these things, as doing so leaves you open to the Semmelweis Reflex and many other ego-driven fallacies.
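The “when to act” agreement can likewise be expressed as a decision rule written down before the test starts. This is a self-contained sketch under assumed names and numbers, not a prescribed implementation:

```python
# Illustrative decision rule: "acting on data" is defined before the test
# launches, not after the results come in. Names and numbers are assumptions.
def decide(results: dict, confidence_to_act: float = 0.95) -> str:
    """results maps recipe name -> (lift vs. control, confidence)."""
    best = max(results, key=lambda name: results[name][0])
    lift, confidence = results[best]
    if lift > 0 and confidence >= confidence_to_act:
        return f"ship {best}"   # the agreed rule fired: act, do not relitigate
    return "no action yet"      # no winner by the agreed rule: do not move the goalposts

print(decide({"remove_banner": (0.031, 0.97), "simplify_nav": (0.012, 0.80)}))
# -> ship remove_banner
```

Because the rule is mechanical, the conversation after the test is about what to do next, not about whether the result should count.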

We are never going to know everything, nor are we ever going to have a perfect system for accomplishing a task. Nor are we ever going to truly stop people from rationalizing and convincing themselves that they know more than they really do. The most we can do is try to limit the known factors that reduce our effectiveness, namely the fallacies of the people involved in the program, and make sure we have a system in place that makes us better than we really want to be. We think we are far more rational than we really are, and we think we have evidence for most of our beliefs, evidence that in most cases rests on selective memory and selective data acquisition. We regularly lash out against rational statistics that do not support the narrative of our own fiction. We love stories, none more than the one we use to define ourselves. We need a system that frees us to be more creative than approval processes allow, while also letting us know the efficiency of any action. Just because you like or dislike the sound of something in no way validates the information presented. We have to put limits in place that are focused on measuring outcomes against each other instead of only validating our own beliefs. The first step to accomplishing all of these goals is to understand the enemy. Thanks to the many fallacies we know we will face, with the Semmelweis Reflex being one of the worst, we know the enemy, and the enemy is ourselves.
