We’re pleased to recognize Kyle Power of CHG Healthcare Services as this month’s Adobe Conversion All-Star. Kyle is the director of online marketing at CHG and has been instrumental in helping the company move to a data-driven culture where they test everything and never take anything for granted. I connected with Kyle to hear more about the testing successes they’ve been able to achieve across their online properties and how they’re planning to take their optimization program to the next level by segmenting their audiences and targeting relevant content to them.

Q: Your optimization program is relatively young, yet relatively strong. About a year ago you had just gotten some testing initiatives off the ground, and now you’re running one to two tests a month and getting tremendous buy-in from across the organization to continue. How did you identify the need to begin an optimization program?

A: Our website’s previous design was more of a cosmetic rebranding effort and occurred without much analytics research into what was working well before and what wasn’t. When it launched, the marketing team saw a fairly significant decline in online conversion rate, which wasn’t sustainable because they were spending more marketing dollars but getting less of a return. It was around that time that I joined CHG and worked to evaluate where we were spending our online marketing budget. I isolated pay-per-click (PPC) campaigns as low-hanging fruit where we could build momentum: we built some landing pages and brought in Adobe Test&Target to show the business the value of online testing. I wanted to show that we as humans are sometimes fallible and that at some point our ideas were going to fail. We needed a tool like Test&Target to help us evaluate these ideas with real statistical confidence behind them. However, before we even kicked off our testing efforts, we decided to get a benchmark by running five different homepage experiences to understand how they performed day-to-day. In fact, this was an idea that our Adobe consulting team suggested, and it turned out to be a good one.
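
To make the idea of “real statistical confidence” concrete, here is a minimal sketch of a two-proportion z-test comparing the conversion rates of two page variants. It is purely illustrative: Test&Target reports its own confidence figures, and the visitor and conversion counts below are invented.

```python
# Illustrative sketch only: not Adobe Test&Target's internal method.
# Compares two page variants' conversion rates with a two-proportion z-test.
from math import sqrt, erf

def two_proportion_z_test(conv_a, visitors_a, conv_b, visitors_b):
    """Return the z-score and two-sided p-value for variant B vs. variant A."""
    rate_a = conv_a / visitors_a
    rate_b = conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err
    # Normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up numbers: 5,000 visitors per variant.
z, p = two_proportion_z_test(conv_a=120, visitors_a=5000, conv_b=155, visitors_b=5000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p-value suggests the lift isn't just chance
```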

Q: So what did some of your initial testing wins look like?

A: Our legacy PPC campaign sent visitors interested in finding a healthcare job to one landing page experience, but we found that visitors were experiencing paralysis by analysis because this page was a bit overwhelming. It had a multitude of job types and listings, and our hypothesis was that visitors didn’t have time to research and browse through all the different job listings, so they would end up abandoning the page. We decided to instead send visitors interested in jobs to a page where they would fill out a short form and one of our recruiters would follow up and tailor the conversation to their interests and qualifications. We saw a pretty significant lift with this test and quickly took that logic and applied it to two other divisions within our company.

Another quick win we had was on our homepage. I held some internal focus groups to find out what employees liked and disliked about our homepage. Some of our employees didn’t think it was clear from our homepage what types of jobs we help fill, so we added a layer of navigation to the five main categories on our job board. We saw a lift in conversion, which convinced folks internally that we were onto something.

However, we’ve realized that we’re not always right. Internally there was a feeling that our site search was broken, so we tested drop-down navigation against our search bar. It did OK, but overall visitors didn’t like it and the test recipe lost, so we called it off. It was a great wake-up call for me and our team and solidified why it’s so important for us to test. We can have a zillion internal opinions about what might work and what won’t, but the numbers don’t lie.

Q: Have your initial tests sparked additional tests?

A: Yes, one test definitely begets another, and these initial wins were instrumental in helping us build out an internal culture of optimization. After our initial homepage test, we went on to test three different homepages: an ecommerce-style slider homepage that rotated a big hero image, eye-catching language, and a search box; a smaller slider page with tabs to quickly show what types of staffing we support; and the default. The slider page with the tabs won, and we quickly followed up on that with another test. We thought: what if we targeted the content shown on the slider? We could evaluate the different types of visitor data we have, such as profession, specialty, URL structure, etc., to create different visitor segments and serve more targeted content to people. This test is currently still running, but so far the targeted content is beating the default, so it appears to be quite effective. Now we’re thinking about the next steps to take on our job board page and what we can test there. It’s forced us to have some thoughtful discussion about how we can improve the user experience for our visitors.
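
As a rough illustration of the kind of targeting rules described above, here is a small sketch. The segment names, URL patterns, and content choices are hypothetical, not CHG’s actual segments or the way Test&Target evaluates them.

```python
# Illustrative sketch only: hypothetical targeting rules, not CHG's actual
# segments or Adobe Test&Target's targeting engine.
from typing import Optional

def choose_slider_content(profession: Optional[str], specialty: Optional[str], url: str) -> str:
    """Pick a slider experience based on what we know about the visitor."""
    if profession == "physician":
        # Use specialty-specific messaging when we know the specialty.
        return f"physician-{specialty}-slide" if specialty else "physician-generic-slide"
    if "/nursing/" in url:
        # URL structure hints at intent even when profile data is missing.
        return "nursing-jobs-slide"
    # Everyone else sees the default (control) experience.
    return "default-slide"

print(choose_slider_content("physician", "cardiology", "/jobs/"))  # physician-cardiology-slide
print(choose_slider_content(None, None, "/nursing/travel-jobs/"))  # nursing-jobs-slide
```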

Q: What about getting further buy-in up the food chain? Are there regular internal presentations that you need to make to demonstrate value and continued progress?

A: Our interactive team owns the Test&Target relationship, and we regularly present our results to the Brand team, along with our VP of Marketing. We’re also making a better effort to keep our IT team up to speed, because we’ve learned that if they’re not engaged early, it’s harder to get their resources to help us implement and modify tests when necessary. It’s important for IT to see the value of testing so they know they are helping drive the process forward. I see them as a partner that can help us think through our testing strategy, rather than merely an execution partner, because they have a good, holistic perspective. We also translate our results for our various marketing divisions and teams so that they understand what a set of results means for them.

Q: By testing and understanding what your visitors like and dislike, are you getting a better sense for what types of tests are going to work?

A: Again, it’s really too early for us to tell. We’ve done some significant layout changes, so it’s hard to isolate what will work and what won’t in the future, but we’re trying to chip away at it. We want to understand what type of copy works best for job seekers: are they primarily looking for more information about our company, what types of jobs are available, or what’s in it for them? It’s all a learning process, but luckily, Test&Target helps boil it down to a near-science.

Q: How quickly can you turn around initial test ideas into actual results?

A: It varies and depends on where our mboxes are and what we want to test. If we’re building new templates or page layouts, it will require IT resources, versus an easy copy test where our interactive team can swap things out on their own. In general, it takes us about 2–3 weeks to turn around a simple test and sometimes 3–4 months for more complex tests, from initial concept to completion.

Q: What would be nirvana for your optimization program? What goals are you working toward?

A: While we’ve seen a lot of great results, we’re really still in the infancy stage. We realize that it can be fairly easy to derive some great lift from initial tests, but as we continue to progress on the optimization path, we know we have to work harder at it to achieve the same type of results. We’ve definitely evolved to an internal data-driven culture where we don’t assume anything and test everything. But we have to be pragmatic about it and come up with different hypotheses to test against; we don’t run tests just for the sake of running them. It’s a four-step process built around this framework: we identify our objective, form the corresponding hypothesis, gather the test results, and then undertake a full evaluation. We’re constantly working to understand the results of our tests, why they did or didn’t succeed, and how we can iterate from that point on and continue to optimize. We know our work is never done, and we’re moving forward as fast as we can.
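
One way to keep that four-step framework honest is to record each test against it. The sketch below is only an illustration of that idea; the field names and the example entry are hypothetical, not CHG’s actual documentation.

```python
# Illustrative sketch only: a simple record of a test against the four-step
# framework (objective, hypothesis, results, evaluation). Field names and the
# example entry are hypothetical.
from dataclasses import dataclass

@dataclass
class TestRecord:
    objective: str   # what we are trying to improve
    hypothesis: str  # what we believe will move the metric, and why
    results: str     # what actually happened, with the numbers
    evaluation: str  # what we learned and what we will iterate on next

example = TestRecord(
    objective="Improve PPC landing-page conversion",
    hypothesis="A short lead form will outperform a long job-listing page",
    results="The short-form page lifted conversion over the control",
    evaluation="Apply the short form to other divisions and keep iterating",
)
print(example.hypothesis)
```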
