There comes a moment in every program when it finds something that does not match what others want to hear. Sometimes it is that the new redesign didn't work as planned; other times it is simply that targeting a specific group does not improve, or even lowers, overall site performance. The more successful the program, the more often this happens, and the most common of these moments come when you are showing a result that proves all of those assumptions and "experts" wrong. When you reach those moments, often the hardest thing to overcome is the Semmelweis Reflex: the tendency to immediately dismiss new evidence or new knowledge that contradicts your current world view. Tied very closely to cognitive dissonance, the reaction is involuntary, a defense of a person's world view against the presentation of evidence to the contrary. While we like to think that we will react to new information with deep consideration and analysis, the truth is that the brain is wired to immediately dismiss that information and to attack its presenter. At some point, all "experts" and thought leaders in a community or organization run into this effect, and often the most difficult moments of a program come from trying to overcome the shutting down of the logic centers of otherwise rational people.

The Semmelweis Reflex takes many forms, but it is essentially the immediate denial of, or search to discredit, information that we do not want to hear. People do not want to hear that what they have believed or acted on is wrong, so they find any excuse to attack the information or its presenter in order to escape the message being delivered. It is not a conscious action but a subconscious one, and many studies have shown it impacts more intelligent people far more than less intelligent ones. It is the reason people think that not liking the way something sounds is the same as having an actual argument against it. We live with the cognitive dissonance of needing to understand and move through the world while simultaneously knowing that we cannot know everything or be good at everything. Because of that dissonance, we build walls either to feed our world view or, in the case of the Semmelweis Reflex, to ignore or discredit anything that does not agree with us. We love to think that we are rational, and that when presented with new evidence we will evaluate and consider it, but history and psychology both show the exact opposite: we become emotional and ignore any and all evidence. People are wired to want to prove themselves "right" through any means necessary, reality be damned. The more evidence that mounts against us, the more emotional our reaction becomes and the more we reject factual evidence in favor of confirmation of our belief.

One of the most common ways a program can be run off the road by the Semmelweis Reflex is when we stop trying to find what is best, and to prove people wrong, and instead stick to what is safe and understandable. We cannot directly combat this reflex, which means that any time we lose focus on what matters and allow any individual to get caught up in being "right", we are left needing to either confirm or push back directly on their claims. We are always going to be limited if we only look at what we want to test, or only look to improve things in the small window of what we know or understand. There is a great deal of recent study, led by people like Nassim Taleb, suggesting that the biggest mistakes we make as a society come from only trying to rationalize what we know, or from not accepting that there are things that do not fit the patterns we convince ourselves define the world. Testing programs are no different, and many leaders of these programs become gun-shy about trying things that might trigger this reflex in the people they are trying to impress. It can sap a lot of energy out of a program when people dismiss things out of hand without real understanding. It can be even more damaging if we lose resources trying to find additional data to help someone disprove something they do not understand. Many programs end up led by those least willing to challenge ideas and most willing to tell others what they want to hear, often without realizing that they are themselves guilty of this same disease.

So what can you do to make sure that you are not losing ground and value to this fallacy? The simplest way is to establish clear rules of action before any test. Make testing about finding the best answer, not about any individual or about proving anyone "right". Set rules to ensure that you include enough recipes, that you include the null assumption as an actual recipe, and that you test things that break "best practices". Getting people's agreement on what you are trying to accomplish, on how you are going to test (not just on what people want to see win), and most importantly on what the barriers and actions are based purely on the data, is the fundamental differentiator of a successful program. Make it about everyone focusing on the discipline of testing, not about individual ideas. The worst thing that can happen to a program is to get into a never-ending spiral of trying to find more data, or of changing the target, just to help someone understand something. The biggest mistake we make is not in the test but in our reactions to others' reactions. Do not start answering questions just because you can, and never lose focus on what really matters just to appease someone.
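One way to picture "rules of action agreed before the test" is a minimal sketch like the one below. All names, numbers, and thresholds are hypothetical illustrations, not recommendations from this article: every recipe, including the null (no-change) recipe, is scored on the one pre-agreed success metric, and nothing is declared a winner outside the pre-agreed statistical bar.

```python
import math

def z_test_proportions(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test; returns the two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0  # identical recipes: no evidence of a difference
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Rules fixed BEFORE the test starts (illustrative values only):
ALPHA = 0.05                    # the bar everyone agreed to act on
METRIC = "conversions"          # the single success metric for the site

# Hypothetical results; the null recipe is just another recipe.
recipes = {
    "null (no change)": (410, 10000),   # (conversions, visitors)
    "expert redesign":  (395, 10000),
    "challenger B":     (470, 10000),
}

control = recipes["null (no change)"]
for name, (conv, n) in recipes.items():
    p = z_test_proportions(control[0], control[1], conv, n)
    verdict = "actionable" if p < ALPHA else "no action"
    print(f"{name}: {METRIC} rate={conv/n:.2%}, p vs null={p:.3f} ({verdict})")
```

The point of the sketch is that the comparison and the threshold are decided up front, so the decision cannot be re-litigated recipe by recipe after someone sees a result they do not like.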

Before you do anything with a test, make sure that you are looking only at a single success metric that is consistent for the entire site. Make sure that you are testing enough recipes to challenge people and to prove them wrong. Make sure that you have agreement on when to act on data, and on what acting on data means. Never start a test without these things, as doing so leaves you open to the Semmelweis Reflex and many other ego-driven fallacies.

We are never going to know everything, nor are we ever going to have a perfect system for accomplishing a task. Nor are we ever going to truly stop people from rationalizing and convincing themselves that they know more than they really do. The most we can do is try to limit the known factors that reduce our effectiveness, namely the fallacies of the people involved in the program, and make sure we have a system in place that makes us better than we really want to be. We think we are far more rational than we really are, and we think we have evidence for most of our beliefs that is, in most cases, based on selective memory and selective data acquisition. We regularly lash out against rational statistics that do not support the narrative of our own fiction. We love stories, none more than the one we use to define ourselves. We need a system that frees us up to be more creative than approval processes allow, while also facilitating our ability to know the efficiency of any action. Just because you like or dislike the sound of something in no way validates or invalidates the information presented. We have to put limits in place that are focused on measuring outcomes against each other instead of only validating our own beliefs. The first step to accomplishing all of these goals is to make sure that we understand the enemy. Thanks to the many fallacies we know we will face, with the Semmelweis Reflex being one of the worst, we know the enemy, and the enemy is ourselves.