One of the great truths about any organization is that no matter what you are doing, each program eventually plateaus and finds a normalization point where it no longer grows at the same rate or with the same push it did before. Whether it is mental fatigue, new objectives, changes in leadership, or, more commonly, reaching the end point of the current path of thinking, each program can only go so far forward without a reinvigoration of new ways of thinking and without challenging itself to get better. It is only by bringing in new ways of thinking and challenging core beliefs that you are free to grow past those self-imposed limits.

In our introduction, we talked about disciplines that enable you to move faster and align on a common goal. In the second part of the series, we went over disciplines to help you think about tests and testing differently in order to get more value from your actions. This third and final evolution takes us to new ways of viewing the world and our organization, and challenges us to go in new directions: to understand new paradigms that should fundamentally challenge some of the most common and deeply held beliefs about data and optimization.

One of the great quotes that I keep close at heart comes from John Maxwell: “If we are growing, we are always going to be out of our comfort zone.” With that in mind, I want to introduce these paradigms for your program and challenge you to evaluate them outside of the fishbowl, as ideas on their own that can help your program get past its current plateau and help you grow in your thinking about optimization.

Analytics and Optimization as Very Different Disciplines –

There are many ways of looking at and acting on data in testing that are the exact opposite of analytics. Where so many programs fail is in forcing one way of thinking onto their data, resorting back to what they are most familiar with. In the world of analytics, you have to look for patterns and anomalies, scan across large data sets, and try to find something that doesn’t belong, something that doesn’t fit with the rest of the data. You are constantly looking for outliers that show a difference and then extrapolating value from those measurable differences. In the world of optimization, you have to stop yourself from looking at anything but what you are trying to achieve, and act only on data that answers fundamental questions. It becomes extremely easy to fall back into more comfortable ways of thinking, because the data sounds and looks similar, but ultimately success is dictated by your ability to look at the data only through that different lens. You have to stop yourself from trying to dive down every possible data set and instead focus on the action from the causal relationship around the single end goal.

It is what you don’t do that defines you as much as what you do. It is about “did removing this module improve revenue performance?”, not “did this change increase CTR to the main image, and where did those paths lead?” It is also about not allowing linear thinking to interrupt what you are doing. You have to focus on the value of actions, not the rate of them. You are looking at the value added per user (RPV), not the amount of short-term actions (CTR). Never look at how many people moved from point A to point B; instead, look only at the measurable impact on your site goals. Just because you increased clicks, or got more people into a funnel, or even got more transactions, it does not mean that you increased revenue. Assuming there is a linear relationship between action and value can be extremely dangerous and myopic; many programs have been run into the ground because they did not understand the difference between the count of an action and the value of the action. Analytics forces you to think in terms of rates of action, but optimization forces you to think about the value of actions and the cost to change a person’s propensity of action.
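To make the count-versus-value distinction concrete, here is a minimal sketch in Python with invented numbers (the variant names and figures are hypothetical, purely for illustration). A variant can win on CTR while losing on RPV, which is exactly the trap of linear thinking:

```python
# Hypothetical results for two test variants (all numbers invented for illustration).
variants = {
    "control":    {"visitors": 10000, "clicks": 1200, "revenue": 52000.0},
    "challenger": {"visitors": 10000, "clicks": 1500, "revenue": 48500.0},
}

for name, v in variants.items():
    ctr = v["clicks"] / v["visitors"]   # rate of action
    rpv = v["revenue"] / v["visitors"]  # value of action (revenue per visitor)
    print(f"{name}: CTR = {ctr:.1%}, RPV = ${rpv:.2f}")

# The challenger wins on CTR (15.0% vs 12.0%) but loses on RPV ($4.85 vs $5.20):
# more clicks did not mean more revenue.
```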

Think of your site as a giant system. You have an input of people, with each input type interacting differently with the system. The things you sell, the layout, the experience, all of it makes up a giant equation. When people enter your site, they go through it, and they come out the other end at some rate or some value. The numbers or rates associated with that one path are analytics. The inherent behavior based on the current user experience is their propensity of action. In testing, you have to focus solely on your ability to increase or decrease that propensity of action, not on the absolute value of that action. We care that we increased that behavior by some delta, some lift percentage: not that it was 45% and moved to 49%, but that we increased it by 8.9%.
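As a quick check on that arithmetic, lift is measured relative to the baseline rate, not as the difference in absolute percentage points; a minimal sketch:

```python
# Relative lift: the change in propensity measured against the baseline rate.
baseline, variant = 0.45, 0.49
lift = (variant - baseline) / baseline
print(f"{lift:.1%}")  # 8.9% -- a 4-point absolute move is an 8.9% relative lift
```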

In testing, the answers you receive will be of the nature of “we got more of the high-value and less of the low-value populations” or “the system improved as a whole by 4%.” Ultimately, “answers” matter far less than the changes observed and your ability to act quickly and decisively on them. What you won’t receive is why, or what each individual part of the system did to get you there. Those answers are stuck in the realm of correlation and as such have to be ignored, because at best you have only a single data point. We are trying to move forward as quickly as possible in the realm of optimization, so getting lost in loops of trying to answer questions that are not answerable only hinders your efforts. No matter how much you analyze an individual test result, you will never have more than a correlation. This means you have to think differently in order to use that data. It doesn’t matter why, or which piece, or even which individual population (though serving dynamic experiences based on outcomes is important), so you have to force yourself not to go down those roads.

It is also about the ability to hold yourself accountable for change. So many analysts fail because they view their job responsibility as ending the moment they make a recommendation. There is a revolution taking place in our industry, led by people like Brent Dykes, that is shifting the entire view of optimization away from the recommendation and the data and toward the final outcome. In optimization, you are only successful if the results you find are acted on and made live. It requires you to view the cycle as one of action and not one of inaction. It is not that both don’t have their place, but to be really successful you have to be able to step away from your analytics self, think differently, and force yourself to act differently in order to get the results you need.

Testing Applied to Multiple Teams

Testing is something that has many core disciplines, but it takes on a very different look and value for different groups. Your IT team may get a completely different “value” from testing than your merchandising team, as your design teams might from your analytics team. Many groups believe that because they have expanded their testing from their landing pages to their product pages, they have expanded the value of testing throughout the organization. Instead, they need to rethink how optimization disciplines can interact with the different groups’ efforts on a fundamental basis. Testing is not just changing a marketing message; it is the evaluation of possible feasible alternatives, something that all groups need to do to improve their effectiveness. Testing is just as applicable to your SEM team, your merchandising, your product management, your IT team, your personalization team, and many others. Each group has different needs and different disciplines, and as such you have to apply the disciplines of testing to them in different ways. An IT team can use testing to decide which project to apply long-term resources to. Your UX team can tie testing to their qualitative research to understand the interconnection of positive feedback and overall site performance. Your SEM team can use testing to measure the downstream impact of their various branding campaigns.

The reality is that applying all the unique benefits of testing to different groups, and not just increasing the surface area where you do the same things, can fundamentally improve your entire organization. While this might sound simple, the reality is that most groups do the same type of testing or try to apply the same techniques across multiple parts of the site, not for different teams. Each group may be aligning on the same goal, but they do things in very different ways. Applying optimization to those groups looks and acts in very different ways, and as such it is difficult for most groups to really apply these disciplines in a way that truly impacts the fundamental practices of more than one group.

Instilling this use of testing as a fundamental building block also allows you to get ahead of a large number of major problems. It forces organizations to test out concepts well before they commit to them as long-term initiatives. One of the most common examples of this is in the realm of personalization, where so many groups are sold on the concept but not willing to go through all the hard work of figuring out exploitable segments or the value and efficiency of various ways of interacting with the same user. Getting ahead of the curve and testing the efficiency of the effort will dramatically improve its performance. If you test a complex idea in one spot against other feasible, simpler ideas, and find the simpler idea performs better, as it almost always does, you save massive IT resources while getting better results. It is far more likely that simple dynamic layout changes for Firefox users will be orders of magnitude more valuable than a complex data feed system from your CRM solutions, and testing is the bridge that lets you know that before you fall down the rabbit hole.

Each group tends to end up at the Nth degree of the same thing they bought the tool for. So often, the fear of the unknown or of challenging someone’s domain stops new groups from allowing testing in, but when you can overcome those barriers, you can have an exponential impact on the organization. When you start applying optimization to multiple types of internal practices, and you are able to bring the results together in real synergy, that is when you are able to really see optimization spread and to see the barriers drop throughout an entire organization. It is also the point where the lessons you learn become three-dimensional and universal across the entire organization.

Testing has No Start and No End -

Optimization is not a project. It is not something that is just one person’s job, and it is most definitely not something you can just choose to end some random Tuesday. So why then do people view it as a series of projects, with a start and a stop? Why do they view it as only part of one person’s role or responsibility, or something that is done when they have the chance? There are functional reasons to have set people assigned to testing, and, as programs grow, to have a separate specialized team, but that is not the end of the battle. Why do we try to force artificial time constraints on it, with starts and stops, and talk about it as something we did or will do? It is either an action that you live, or it is not. If everything your organization is doing, be it some small tweak, a redesign, or the release of a new feature, is not viewed as part of an ongoing process, with lessons to learn and to be evaluated democratically through the system of optimization, then optimization has been allowed to have an artificial start or stop just to appease various members of your organization.

Optimization has to be something you live. You have to be thinking in terms of it every day, you have to view each task as something that can get better, and you have to view each idea as just one of many, one that is not up to the HiPPO or anyone else to decide on. It is a responsibility to not let projects, or holidays, or new CMOs, or anything else stop you from this constant quest to improve the site, the processes, and the people. Do not confuse the act of running a test with the entirety of optimization. It is vital that you view the act of creating something new as just the very first step, and not the end point. There should be no point where anything is thought of as “perfect” or “done,” or where you can just throw something live and walk away. Optimization is part of every process, it is part of every job, and it is something that everyone works together on to make sure it is part of every action the organization takes.

When you have finally incorporated testing into your organization, all projects will view it as another natural part of their evolution. Project plans will not only incorporate optimization on an ongoing basis, so that it is part of the expected timeline, but teams will also stop trying to get everything “perfect.” If you view your projects as never finished, then there is no need for everything to get perfect sign-off, nor do you need perfect agreement on each and every piece. What is important is that you spend as much time and as many resources testing out all the ideas you have discussed, instead of just sitting around a room and compromising on a final version. You will no longer be so caught up in your pet project, as the entire concept is that it will and must change.

So much of what happens in organizations is about the politics of owning and taking credit for different initiatives. People’s reputations and egos are on the line when they propose and lead dramatic changes, especially redesigns, for the site. If you can truly incorporate testing and optimization as a vital part of all processes, one that is not just a “project” but part of the very existence of the site and the group, then you free people up to no longer be so tied to their “baby.” Treat all ideas as malleable and transient, to the point that everyone is really working together to constantly move the idea forward. It can be a dramatic shift for organizations once they reach this point, but ultimately it is when groups really start to see dramatic improvements on a continuous basis.

So often we talk about following through with each of the concepts I have brought forth; the reality is that each action is tantalizingly easy to start, but the real discipline, the ability to keep pushing six months from now, is what really differentiates programs and people. Being willing to move past the barriers, put the pieces in place that make a difference, and change how you and others think are the real keys to a successful program. If you are always trying to do what is easy, or just listening to the pushers of magic beans and myths, then you can never really grow your program to the levels that are possible. Do the hard work, get out of your comfort zone, and you can continue to get better and to see more and more value from your testing program.

To navigate the entire testing series:
Testing 101 / Testing 202 / Testing 303 — Part 1 / Testing 303 — Part 2
