When choosing the next fallacy to cover, I faced a tough choice, as there are so many different fallacies that describe the same human behavior: the belief that we know or can answer things we can't by assigning pattern or reason to things without actual cause. We are wired to want to explain why things happen, but to accomplish that task, we ignore data or use only the data we want, and we supplant our own points of view as the core reason things happen. We believe that the world is far more established and easy to understand than it really is. My favorite fallacy that covers this behavior is the Texas Sharpshooter Fallacy, which is when someone assigns pattern or reason to random chance.

The name Texas Sharpshooter comes from this "story":

A cowboy takes aim at a barn and starts shooting randomly. When he is done, he walks up and notices that there are a large number of holes in one area and fewer holes in another. He then paints a bull's-eye over the area with the large number of holes. To anyone walking up, it looks like he was a good shot and mostly hit where he was aiming.

Now while I am sure that we can all think of cases where others have done this with data, the first thing you need to understand is that we all do this… all the time. We see patterns and rationalize our own actions, whether it is why we do things in a certain order or even why we believe certain "truths" about the world. We rationalize decisions after we make them, and while they are not all random, our understanding of why we do things is often flawed at best and completely delusional at other times. The human brain actually engages the rationalization part after the action part, meaning that we always act, then think of why we acted, not the other way around. We draw circles around the patterns of our own behavior and then accept those circles as the logic that led to the decision. This makes our understanding of why people do things often extremely flawed, since so much of how we view others' behavior is through the context of our own "understanding" of what drives our own actions. We so want to come up with a why, and we dive so deep, that we miss the point that we will never truly know. Nor does it matter, since we are describing a pattern, one that we can engage and interact with and build rules around, without needing to know all the causes of that pattern.

One of my favorite examples of this in the real world is a psychology professor in Baltimore who does the same demonstration each year. He starts his lecture by bringing a chicken up on stage in a cage. The cage has a feeder that is set to dispense food pellets at random time intervals. He then covers the cage and talks for an hour and a half. At the end of the presentation, he takes the cover off and, without fail, the chicken is found doing some behavior over and over again; it has convinced itself that this behavior is why the food comes out. The food comes out no matter what it does, and it has no control, but it has convinced itself that it is in control of the situation. We are all like that: we need to explain things so badly that we will believe anything, or will paint bull's-eyes where they aren't, to make ourselves feel like we have more control than we really do.

We like to believe we are smarter than that chicken, but we aren't. In our world, data is our food, so we assign patterns to explain changes in what we observe. Data becomes a crutch to accomplish this task. We so want to have a story to tell others and ourselves that we find one in the data. We believe that because conversions went up, the message must have "resonated," or because one group has a different winner than another group, it must be because of their socioeconomic status or because they are more familiar with technology. We have no way of knowing this, but we convince ourselves and others that this is the reason why. The reality of the situation is that we need "why" to help us feel like we understand, but acting on data in no way requires a why so much as it requires a willingness to act.

Looked at from a data perspective, this means that when we see a noticeable, meaningful change, often from testing, we are left to think of why it happened. People are fascinated with the "why?", often at the cost of what comes next. The reality is that we are always going to be looking only at a noticeable change and then applying rationalization after. We get so caught up in the why that we miss the truth that we will never really know, nor does it matter. Having a clear plan of action for our data means that we never need to know the why to be successful, and in fact ensures that the more we dive in and try to answer it, the more we are wasting resources. Acting on data requires willingness and alignment; it is decided before something happens. Rationalization is what happens afterwards. Why does not change your need to act on the data, nor does it give you some sudden insight into human behavior. At best you have a single data point; at worst you are painting bull's-eyes around holes and calling them insight.

Marketers have been trying to figure out the "why" for a long time, and while there are a lot of people who claim to know, the reality is that at best we have a pattern, and at worst we have stories we present to make ourselves look good. You cannot derive a pattern from a single data point, yet we are obsessed with trying to do that very thing. If we are honest about how we go about collecting data, and we are open to consistent and meaningful action from testing, then why will never matter. If we are following the data and disciplined, then we know how we are going to act based on the results, not why the results happened. If you are disciplined in how you think about users, then you know that a story or a single data point will never tell you anything. If we really want to make things personal, then we won't force "personas" on people, but instead let the data tell us the causal value of changing the user experience and for whom it works best.

At its worst, the Texas Sharpshooter Fallacy represents our need to show that we are more in control, or know more, than we really do. We use the need to explain why to make stories and to communicate our value to others. My background is in historical analysis, and one of the first things you learn is how little value comes from the first-person narrative. It reveals far more about the fallacies of the person speaking than it does about what is really happening. Data at its heart is meant to improve situations, not to let you come up with a story that satisfies your worldview.

"Why" is not a question that you can ever truly answer, yet most people in marketing are obsessed with a Sisyphean quest to answer it. The reality is that it is a question that has nothing to do with how you act on data or the disciplines needed to be successful. We do not need to know why for everything, even if it seems to hold all the answers. We just need to know what to do with what is in front of us and to appreciate how little we really know about the world in which we live.