For Amy Lew, a senior user experience (UX) designer at Adobe, it’s all about marketers—what they want, what they need, and, more importantly, how they think about everything from making data-driven decisions to gaining organizational buy-in to growing relevance at scale. These considerations drive Amy’s thinking as she helps marketers take more risks and trust a hands-off approach to personalization that delivers quick wins with much less effort. I recently got a chance to catch up with Amy at the Thirsty Bear Brewing Co. in San Francisco. In addition to some great beer tasting, we had a fun chat about personalization and the role of marketer trust.

The term “personalization” has been thrown around a lot recently—but the concept is far from new, and the technology itself goes back at least 15 years. I’m old enough to remember all the talk about “black box” personalization, but I’ve also seen awesome examples of it really working—higher conversion rates, great cross-sell results. What will it take to get marketers more comfortable with personalization technology? What needs to change?

Marketers have always run to their leadership saying, “This works!” when it comes to personalization—because, yes, personalization gets results. And that was good enough for a long time. But fast forward to today, and everyone—including that same leadership—is more knowledgeable. Sure, the results are great, but now stakeholders ask, “But why?” Why are we getting those results and, more importantly, how can we do more of it? They don’t want statistical probabilities and algorithms, of course, but they do want that next level of granularity in reporting. Adobe Target is meant to answer that, and that’s what I’m focusing my energy on—not just answering but elevating that conversation.

In that vein, I always hear, “Why use automated personalization if I can use A/B testing and my own predictions and knowledge to do the same thing?” It’s important to take a step back, look at the current landscape, and ask yourself if that level of knowledge and actionable data is enough. Right now, I don’t think it is.

That’s where my role within Adobe Target picks up. When I think about Target, I’m thinking about how I can help you experiment more, make better decisions, or unearth metrics you can leverage for more internal support and greater resources. Because with automated personalization there’s much, much more complexity than with simple A/B testing or the notion that “some personalization” is better than none. Target can take it further: instead of creating new offers from scratch or from the user interface, we’re essentially pairing audiences with experiences or offers based on algorithms and our own proprietary logic. It’s an incredible service to the marketer and opens up limitless possibilities. But of course we’re encouraging marketers to trust our personalization engine and understand that data-driven, self-learning, and algorithmic approaches to targeting content can offer quick wins with less effort. And this means making sure we give them what they need in the product to feel comfortable enough to relinquish some of their control.
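To make the idea of “pairing” a little more concrete, here is a minimal Python sketch of what choosing an experience per visitor looks like in the abstract. It is purely illustrative: the `Offer` class, the `pick_offer` helper, and the toy scorer are hypothetical stand-ins, not Adobe Target’s proprietary, self-learning logic.

```python
# Illustrative sketch only: the names below are hypothetical and stand in for
# whatever model ranks visitor/offer pairs inside an automated-personalization engine.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Offer:
    offer_id: str
    content: str

def pick_offer(visitor: Dict[str, str],
               offers: List[Offer],
               score: Callable[[Dict[str, str], Offer], float]) -> Offer:
    """Return the offer the scoring model ranks highest for this visitor."""
    return max(offers, key=lambda offer: score(visitor, offer))

# Hypothetical usage with a trivial hand-written scorer in place of a trained model.
offers = [Offer("ski-hero", "Ski gear sale"), Offer("board-hero", "Snowboard sale")]

def toy_score(visitor: Dict[str, str], offer: Offer) -> float:
    interest = visitor.get("interest", "")
    return 1.0 if interest and interest in offer.offer_id else 0.0

print(pick_offer({"interest": "ski"}, offers, toy_score).offer_id)  # ski-hero
```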

So how, tactically, is this playing out both within Adobe Target and in the greater personalization conversation?

We’re out there talking to marketers every day—and we’re marketers ourselves. We know you need to prove true value and ROI to keep integrating a solution like Target. And it’s in our best interest, of course, for you and the decision makers in your company to say, “Target makes us incremental revenue or drives greater engagement; let’s keep using it.” So we provide everything we can—every metric, every data point, every next step—to make sure you can say that, definitively. That starts with providing strong data to support that the content, offers, products, and information we “pair” with a specific audience compel them to act, through higher conversions, greater purchasing, or whatever other KPIs we’re up against.

When we look at those numbers, we always funnel them through three tiers. First, is automated personalization performing better than if you used nothing at all? We compare the personalized set to a random set—a person who’s coming to the site and getting some combination of anything that could be valid, versus what our algorithm believes is more likely to get him to convert—so that we can, essentially, say that our algorithms are performing. We’re making you money. We can say, without a doubt, that automated personalization works in this scenario. If we can’t say that, we’ve failed.
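As a rough illustration of that first tier, here is a small sketch of comparing a personalized group against a random control group. The function names and numbers are made up for the example; Target’s own reporting handles sample sizes and statistical significance far more rigorously.

```python
# Minimal sketch (not Adobe Target's reporting code) of the tier-one check:
# is the automated-personalization group converting better than a random
# control group that receives any valid experience?

def conversion_rate(conversions: int, visitors: int) -> float:
    return conversions / visitors if visitors else 0.0

def lift(personalized_rate: float, control_rate: float) -> float:
    """Relative lift of the personalized group over the random control."""
    return (personalized_rate - control_rate) / control_rate

# Hypothetical numbers for illustration only.
personalized = conversion_rate(conversions=540, visitors=10_000)  # 5.4%
control      = conversion_rate(conversions=450, visitors=10_000)  # 4.5%
print(f"Lift over random control: {lift(personalized, control):.1%}")  # 20.0%
```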

The next level looks at personalization beyond just “did it work?” Here we dig into what worked well and what didn’t. The tough part here is that different offers or content pieces are going to different audiences. So we have to look and say, “Did one set of offers do much better than others? Why?” Maybe there’s someone really fantastic and incredibly creative putting out spot-on offers that convert well, or maybe the blue just outperforms the pink for some reason. Maybe skiing just always outperforms snowboarding. By making those observations we can start to determine smart, actionable next steps. If skiing is drawing significantly more engagement and conversion than snowboarding, maybe skiing deserves that hero spot—or maybe snowboarding shouldn’t be on the site at all, if it’s performing that badly.
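The second tier boils down to comparing offers against each other so the marketer can act on the difference. A toy sketch of that comparison, with entirely hypothetical records and no significance testing, might look like this:

```python
# Sketch of the second tier: compare how individual offers performed so the
# marketer can act (e.g. promote skiing, rethink snowboarding).
from collections import defaultdict

impressions_by_offer = defaultdict(int)
conversions_by_offer = defaultdict(int)

# Each hypothetical record: (offer shown, did the visitor convert?)
records = [("skiing", True), ("skiing", False), ("skiing", True),
           ("snowboarding", False), ("snowboarding", False), ("snowboarding", True)]

for offer, converted in records:
    impressions_by_offer[offer] += 1
    conversions_by_offer[offer] += int(converted)

for offer in impressions_by_offer:
    rate = conversions_by_offer[offer] / impressions_by_offer[offer]
    print(f"{offer}: {rate:.0%} conversion")
# A real report would add confidence intervals before declaring a winner.
```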

The last stage is really the holy grail of Target: the offer detail report. We’ve now homed in on a specific experience or offer, and can drill down on who the specific audience is for that content. At this stage we can see how much predictive power an audience or variable has. Through capabilities like iterative personalization, we get a first glimpse into what kind of offer appeals to a specific audience segment, and vice versa—what kind of offer will this particular segment respond to? There’s still work to be done, but eventually we’ll be able to have completely automated personalization and know definitively, and predictively, which offer or experience goes with which audience. For now, though, we can say, “These are predictive—and that works.” We have an offer, and we have variables associated with it. When we can look at those pieces and say, “This is worth this much,” that lends incredible value to an organization.
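To give a feel for what “predictive power” of an audience variable means, here is a deliberately crude sketch: for a single offer, it measures how far each value of a hypothetical visitor attribute sits from the overall conversion rate. This is only a proxy for illustration; it is not how the offer detail report or Target’s models actually compute it.

```python
# Crude proxy for the predictive power of one audience variable for one offer:
# the further an attribute value's conversion rate sits from the overall rate,
# the more that variable appears to separate converters from non-converters.
from collections import defaultdict

# Hypothetical (attribute value, converted?) pairs for a single offer.
visits = [("returning", True), ("returning", True), ("returning", False),
          ("new", False), ("new", False), ("new", True)]

totals, wins = defaultdict(int), defaultdict(int)
for value, converted in visits:
    totals[value] += 1
    wins[value] += int(converted)

overall = sum(wins.values()) / len(visits)
for value in totals:
    rate = wins[value] / totals[value]
    print(f"{value}: rate={rate:.0%}, delta from overall={rate - overall:+.0%}")
# Bigger deltas suggest the variable is more predictive for this offer;
# a production system would do this with proper statistical modeling.
```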

One thing we’ve been talking a lot about is trust and the notion of digital distress. Marketers don’t trust the data, the sources, or even themselves, and as a result they experience varying levels of distress that can lead to stagnancy. How are you thinking about this when it comes to designing a marketer experience?

I think about this a lot. At the end of the day, Adobe Target and its relative effectiveness come down to that trust. Marketers need to trust the reporting, data, and segmentations enough, and need to be open to reinventing and reimagining their roles. Target is designed to be completely in sync with those marketers. It’s about making their jobs easier in every sense. More to the point, marketers need to get comfortable with a more hands-off approach such as automated personalization. In my role, I’ve designed Adobe Target to alleviate some of the most common causes of anxiety for the marketer who wants to experiment more aggressively. It’s simply more foolproof—and eliminating that self-doubt is a big step on the road to breaking down some of marketers’ major trust issues.

So what’s next? What are your priorities, and how are we evolving Target to address them?

Going forward, it’s all about doing even more with the data. Marketers need to think of data as part of a “virtuous cycle,” but must avoid getting overwhelmed by it. This means just-in-time data, small data rather than big data—and, yes, continually learning to trust more in the power of algorithms to consume data and make decisions and predictions.

We see Target taking advantage of on-the-fly audiences, and this is where real-time reporting and visualizations are going to be critical for the marketer. To be able to infer an audience and say, “This surfaced from my personalization efforts, and it has a full life cycle.” Pretty cool. You can apply other characteristics to it from your master marketing profile, do some additional A/B testing on it, and use an entire suite of digital marketing products to get the right stuff to the right people at the right time.

From there I want to see this go a step further. In an ideal world I would be able to say, “This offer matches with this very specific audience, and I’m always going to be able to feed this to these people and win.” Marketers want to be guided through the optimization process, but not give up control entirely. Let the machine do the personalization heavy lifting, but give the marketer the insights and tools to craft a strategy that gives them win after win. My job is to be there alongside them. Well, at least that’s what I’ll be imagining as I work on evolving the Target UX.

Amy would love to hear from you. Join the conversation on Twitter with @amy_lew.
