Optimising FSI Customer Experiences, Part 5: Testing, Learning, and Improving

Digital transformation in the financial services industry (FSI) is not an “either/or” proposition. It’s an ongoing journey of continual improvement and enhancement. You might get the data, content, machine learning, and delivery components exactly right, but if you don’t keep learning and improving, you’re going to stagnate, and your competitors will soon overtake you.

This drive toward continual improvement can only flourish in a culture of ongoing learning and growth. And many top performers are starting to get that message. In a recent survey of FSI executives, 74 percent of industry-leading marketers reported that they focus on continually improving their multichannel personalisation, whereas only 31 percent of the mainstream do. In other words, leaders are personalising a lot more than the laggards are.

Even compared to last year, there’s been a significant increase in the amount of testing and personalisation that FSI companies are doing. From 2016 to 2017, we saw a major drop in the share of FSI firms that don’t do any personalisation, from 12 percent down to just 1 percent. It’s become an inescapable fact that continual refinement of personalisation tactics is the only way to keep winning new customers and to retain the ones you already have.

How do we go about creating a workflow for continual improvement in personalisation? It all starts when you look at your customer data and find out where the biggest problems lie.

Structuring and prioritising

Contrary to what you might expect, you don’t need to run any specialised tests to find where you need to improve. Start by looking at your existing customer journey stats and finding the points where the biggest drop-offs occur. At the same time, walk through the entire journey yourself and perform your own qualitative assessment, keeping an eye out for any unnecessary friction. Back those conclusions up with hard data, and you’ve got a solid basis for improvement.
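As a simple illustration of that first pass, the step-to-step drop-off in a journey can be computed directly from funnel counts. The funnel steps and numbers below are hypothetical, purely to show the calculation:

```python
# Hypothetical funnel counts for a credit-card application journey.
# Step names and visitor numbers are illustrative, not real data.
funnel = [
    ("Landing page", 10000),
    ("Product details", 6200),
    ("Application form", 2100),
    ("Identity check", 1700),
    ("Submitted", 1500),
]

def biggest_drop_off(steps):
    """Return the (from_step, to_step, drop_rate) transition with the
    largest fractional drop-off between consecutive funnel steps."""
    worst = None
    for (name_a, count_a), (name_b, count_b) in zip(steps, steps[1:]):
        drop = 1 - count_b / count_a
        if worst is None or drop > worst[2]:
            worst = (name_a, name_b, drop)
    return worst

step_from, step_to, drop = biggest_drop_off(funnel)
print(f"Biggest drop-off: {step_from} -> {step_to} ({drop:.0%})")
```

In this made-up funnel the worst transition is the jump from product details to the application form, which is exactly the kind of friction point worth investigating first.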

Once you’ve got a feel for this sort of analysis, it’s time to set up a more formal framework for testing and improving. Develop hypotheses about changes that might streamline the journey and raise conversions, then design and execute those tests on segments of your audience. Although it’s important to prioritise your tests around the biggest friction points, don’t get too focused on a rigid testing schedule. Instead, work on creating an iterative testing methodology that enables you to redesign parts of the journey quickly.
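One common way to evaluate such a hypothesis, assuming a simple conversion-rate A/B test (your testing platform will normally do this for you), is a two-proportion z-test comparing control and variant conversion rates. The traffic and conversion numbers below are illustrative:

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates,
    using the pooled proportion for the standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative numbers: control converts at 4.0%, variant at 5.0%,
# with 5,000 visitors in each arm.
z = two_proportion_z(200, 5000, 250, 5000)

# |z| > 1.96 corresponds to p < 0.05 for a two-sided test.
print(f"z = {z:.2f}, significant at 5%: {abs(z) > 1.96}")
```

A z around 2.4 in this sketch would clear the conventional 5 percent significance bar, so this hypothetical variant would be worth rolling out more widely.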

As your learning grows, new ideas for tests will emerge naturally. And the more tests you run, the more important it becomes to rapidly consolidate their results into actionable insights.

Moving and adapting

Rapid reporting and adaptation are crucial when you’re running many tests in parallel, because the fact is, most of your tests are going to fail. That’s why you run each test only on a small segment of your audience, using your marketing platform’s “multi-armed bandit” approach to automatically allocate more traffic to statistically successful experiences, in real time.
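A minimal sketch of the multi-armed bandit idea uses Thompson sampling: each variant keeps a Beta distribution over its conversion rate, and traffic drifts toward the variants whose sampled rates keep winning. The variant names and conversion rates below are invented for the simulation; in practice your marketing platform handles this allocation for you:

```python
import random

class ThompsonBandit:
    """Beta-Bernoulli Thompson sampling over experience variants."""

    def __init__(self, variants):
        # One [successes, failures] pair per variant, from a flat Beta(1, 1) prior.
        self.stats = {v: [1, 1] for v in variants}

    def choose(self):
        # Sample a plausible conversion rate per variant; serve the best draw.
        draws = {v: random.betavariate(a, b) for v, (a, b) in self.stats.items()}
        return max(draws, key=draws.get)

    def record(self, variant, converted):
        self.stats[variant][0 if converted else 1] += 1

bandit = ThompsonBandit(["control", "variant_a", "variant_b"])

# Simulated traffic with invented "true" rates; variant_b is the real winner.
true_rates = {"control": 0.04, "variant_a": 0.05, "variant_b": 0.08}
random.seed(42)
for _ in range(5000):
    v = bandit.choose()
    bandit.record(v, random.random() < true_rates[v])

served = {v: a + b - 2 for v, (a, b) in bandit.stats.items()}
print(served)  # most traffic should have drifted to the best-performing variant
```

The appeal of this approach over a fixed 50/50 split is that losing variants are starved of traffic automatically, so the cost of a failed test shrinks as evidence accumulates.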

At Adobe.com, for example, we run hundreds of tests per month, each on a small segment of our audience. About 80 percent of those tests “fail”, meaning they produce different results than our hypothesis predicted. But that kind of failure doesn’t mean we’ve wasted our time and resources. On the contrary, it means we’ve gained a valuable data point. And because we fail fast, we can quickly reallocate traffic to the winning experience and continue to optimise.

We’re also using data to understand that different audiences want different experiences. The classic example is the case of the red button versus the blue one. Overall, more people may prefer the blue button, but some still prefer the red one. This doesn’t mean we show everyone the blue button. It means we use data to build “blue button” and “red button” audience segments, and serve the preferred button colour to each segment.
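In code, that segment-level serving rule can be as simple as mapping each segment to its winning variant. The segment names and conversion rates below are hypothetical, just to make the idea concrete:

```python
# Hypothetical segment-level test results: each segment's observed
# conversion rate per button colour (illustrative numbers only).
results = {
    "new_visitors":      {"blue": 0.051, "red": 0.043},
    "returning_members": {"blue": 0.038, "red": 0.047},
}

# Derive a per-segment serving rule from each segment's winning variant.
serving_rule = {
    segment: max(rates, key=rates.get)
    for segment, rates in results.items()
}

def button_for(segment):
    # Fall back to the overall winner for segments we haven't tested yet.
    return serving_rule.get(segment, "blue")

print(button_for("new_visitors"))       # blue
print(button_for("returning_members"))  # red
```

The point is that the overall winner is only a default: once the data supports distinct segments, each one gets the experience it actually responds to.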

Upshots of optimisation

Like all elements of digital transformation, the learning and improvement process looks different for every organisation. But companies that get it right are seeing stunning growth in conversion rates and revenue, and they continue to test and optimise across all channels.

For example, National Australia Bank recently achieved a 45 percent lift in conversion by analysing performance and optimising page designs on its credit card pages, resulting in 3,000 more applications per year. Along similar lines, SunTrust increased its click-through rate by 47 percent simply by A/B testing and optimising its messaging. UniCredit, meanwhile, increased online acquisition by 60 percent and reduced its cost per lead by 43 percent by using A/B testing in a continual optimisation framework.

When Sir Clive Woodward, England’s former rugby manager, was asked how he built the team that won the 2003 World Cup, he replied, “It was not about doing one thing 100 percent better, but about doing 100 things one percent better.” In other words, when you’re running these test-and-learn programs, it’s not always about the big wins. It’s about the small incremental changes you make to continuously drive better performance.

All the components we’ve spoken about in this blog series (data, content, artificial intelligence, and cross-channel delivery) are fundamentals. Optimisation is about bringing all these pieces together. And the final piece, testing and improving, is about making sure you stay ahead. This is an ongoing journey we all have to take, making sure our content is as on-brand and connected as possible, by constantly testing and optimising the experience. Thanks for joining me.
