We’re pleased to recognize Kyle Power of CHG Healthcare Services as this month’s Adobe Conversion All-Star. Kyle is the director of online marketing at CHG and has been instrumental in helping the company move to a data-driven culture where they test everything and never take anything for granted. I connected with Kyle to hear more about the testing successes they’ve been able to achieve across their online properties and how they’re planning to take their optimization program to the next level by segmenting their audiences and targeting relevant content to them.

Q: Your optimization program is relatively young, yet relatively strong. About a year ago you had just gotten some testing initiatives off the ground and a year later you are running one to two tests a month and getting tremendous buy-in from across the organization to continue. How did you identify the need to begin an optimization program?

A: Our website’s previous design was more of a cosmetic rebranding effort and occurred without much analytics research to understand what was working well before and what wasn’t. When it launched, the marketing team saw a fairly significant decline in online conversion rate, which wasn’t sustainable because they were spending more marketing dollars but getting less of a return. It was around that time that I joined CHG and worked to evaluate where we were spending our online marketing budget. I isolated pay-per-click (PPC) campaigns as low-hanging fruit where we could build momentum – we built some landing pages and brought in Adobe Test&Target to show the business the value of online testing. I wanted to show that we as humans are fallible and that at some point our ideas were going to fail. We needed a tool like Test&Target to help us evaluate these ideas with real statistical confidence behind them. However, before we even kicked off our testing efforts, we decided to get a benchmark by running five different homepage experiences to understand how they performed day-to-day. In fact, this was an idea that our Adobe consulting team suggested and it turned out to be a good one.
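To give a sense of what “real statistical confidence” can look like in practice, here is a minimal, generic sketch, not CHG’s methodology and not anything about Test&Target’s internals, of comparing two page variants with a two-proportion z-test. All numbers are made up for illustration.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, visits_a, conv_b, visits_b):
    """Compare the conversion rates of two page variants.

    Returns the z statistic and two-sided p-value; a small p-value
    suggests the observed difference is unlikely to be random noise.
    """
    p_a = conv_a / visits_a
    p_b = conv_b / visits_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (visits_a + visits_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Illustrative numbers only, not real CHG data.
z, p = two_proportion_z_test(conv_a=120, visits_a=5000, conv_b=160, visits_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a p-value below 0.05 is commonly read as a significant lift
```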

Q: So what were some of your initial testing wins like?

A: Our legacy PPC campaign sent visitors interested in finding a healthcare job to one landing page experience, but we found that visitors were experiencing paralysis by analysis because this page was a bit overwhelming. It had a multitude of job types and listings, and our hypothesis was that visitors didn’t have time to research and browse through all the different job listings, so they would end up abandoning the page. We decided instead to send visitors interested in jobs to a page where they would fill out a short form, and one of our recruiters would follow up and tailor the conversation to their interests and qualifications. We saw a pretty significant lift with this test and quickly took that logic and applied it to two other divisions within our company.

Another quick win we had was on our homepage. I held some internal focus groups to find out what employees liked and disliked about our homepage. Some of our employees didn’t think it was clear from our homepage what types of jobs we help fill, so we added a layer of navigation for the five main categories on our job board. We saw a lift in conversion, which convinced folks internally that we were onto something.

However, we’ve realized that we’re not always right. Internally there was a feeling that our site search was broken, so we tested drop-down navigation vs. our search bar. It did OK, but overall visitors didn’t like it and the test recipe lost, so we called it off. It was a great wake-up call for me and our team, and it solidified why it’s so important for us to test. We can have a zillion internal opinions about what might work and what won’t, but the numbers don’t lie.

Q: Have your initial tests sparked additional tests?

A: Yes, one test definitely begets another, and these initial wins were instrumental in helping us build an internal culture of optimization. After our initial homepage test, we went on to test three different homepages: an ecommerce-style slider homepage that rotated a big hero image, eye-catching language, and a search box; a smaller slider page with tabs that quickly showed what types of staffing we support; and the default. The slider page with the tabs won, and we quickly followed up on that with another test. We thought – what if we targeted the content shown on the slider? We could evaluate the different types of visitor data we have, such as profession, specialty, URL structure, etc., to create different visitor segments and serve more targeted content to people. This test is currently still running, but so far the targeted content is beating the default, so it appears to be quite effective. Now we’re thinking about the next steps to take on our job board page and what we can test there. It’s forced us to have some thoughtful discussion about how we can improve the user experience for our visitors.
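As an aside, the kind of rule-based targeting Kyle describes, routing visitors to different slider content based on attributes like profession, specialty, or URL structure, might look something like the sketch below. The segment names, profile fields, and content keys are hypothetical illustrations, not CHG’s or Test&Target’s actual configuration.

```python
# Hypothetical sketch of rule-based content targeting, loosely following the
# idea described above. Field names and segment labels are illustrative only.

DEFAULT_SLIDE = "general_staffing_overview"

SEGMENT_RULES = [
    # (segment name, predicate on the visitor profile, slider content key)
    ("physicians", lambda v: v.get("profession") == "physician",        "locum_tenens_jobs"),
    ("nurses",     lambda v: v.get("profession") == "nurse",            "travel_nursing_jobs"),
    ("allied",     lambda v: "/allied/" in v.get("landing_url", ""),    "allied_health_jobs"),
]

def pick_slider_content(visitor: dict) -> str:
    """Return the content key for the first segment the visitor matches,
    falling back to the default experience otherwise."""
    for name, matches, content_key in SEGMENT_RULES:
        if matches(visitor):
            return content_key
    return DEFAULT_SLIDE

# Example: a visitor who arrived on an allied-health landing page.
print(pick_slider_content({"profession": "therapist", "landing_url": "/allied/jobs"}))
```

In a targeted-content test like the one described, each segment’s experience would still be compared against the default so the lift can be measured rather than assumed.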

Q: What about getting further buy-in up the food chain? Are there regular internal presentations that you need to make to demonstrate value and continued progress?

A: Our interactive team owns the Test&Target relationship, and we regularly present our results to the Brand team, along with our VP of Marketing. We’re also making a better effort to keep our IT team up to speed because we’ve learned that if they’re not engaged early, it’s harder to get their resources to help us implement and modify tests when necessary. It’s important for IT to see the value of testing so they know how they help drive the process forward. I see them as a partner that can help us think through our testing strategy, rather than merely an execution partner, because they have a good, holistic perspective. We also translate our results for our various marketing divisions and teams so that they understand what a set of results means for them.

Q: By testing and understanding what your visitors like and dislike, are you getting a better sense of what types of tests are going to work?

A: Again, it’s really too early for us to tell. We’ve done some significant layout changes, so it’s hard to isolate what will work and what won’t in the future, but we’re trying to chip away at it. We want to understand what type of copy works best for job seekers: are they primarily looking for more information about our company, for what types of jobs are available, or for what’s in it for them? It’s all a learning process, but luckily, Test&Target helps boil it down to a near-science.

Q: How quickly can you turn initial test ideas into actual results?

A: It varies and depends on where our mboxes are and what we want to test. Building new templates or page layouts requires IT resources, versus a simple copy test where our interactive team can swap things out on their own. In general, it takes us about 2–3 weeks to turn around a simple test and sometimes 3–4 months for more complex tests, from initial concept to completion.

Q: What would be nirvana for your optimization program? What goals are you working toward?

A: While we’ve seen a lot of great results, we’re really still in the infancy stage. We realize that it can be fairly easy to derive some great lift from initial tests, but as we continue to progress on the optimization path, we know we have to work harder to achieve the same type of results. We’ve definitely evolved into an internal data-driven culture where we don’t assume anything and test everything. But we have to be pragmatic about it and come up with different hypotheses to test against; we don’t run tests just for the sake of running them. It’s a four-step process built around this framework: we identify our objective and the corresponding hypothesis, gather the test results, and then undertake a full evaluation. We’re constantly working to understand the results of our tests, why they did or didn’t succeed, and how we can iterate from that point and continue to optimize. We know our work is never done and we’re moving forward as fast as we can.
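Purely as an illustration of that four-step framework (the field names and numbers below are ours, not CHG’s), the discipline of stating an objective and hypothesis up front, then recording results and an evaluation afterward, could be captured in a record as simple as this:

```python
from dataclasses import dataclass, field

@dataclass
class TestPlan:
    """Lightweight record of the four-step framework described above:
    objective -> hypothesis -> results -> evaluation."""
    objective: str                                 # the business metric we want to move
    hypothesis: str                                # why we believe the change will move it
    results: dict = field(default_factory=dict)    # filled in once the test concludes
    evaluation: str = ""                           # what we learned and what to iterate on next

plan = TestPlan(
    objective="Increase job-inquiry form submissions from PPC traffic",
    hypothesis="A short form plus recruiter follow-up beats a long job-listings page",
)
# After the test: record outcomes and the lessons that drive the next iteration.
plan.results = {"control_cr": 0.024, "variant_cr": 0.031, "p_value": 0.03}  # illustrative numbers
plan.evaluation = "Variant won; apply the short-form approach to other divisions."
```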
