Visitor Engagement: Time for a reality check?
Publisher: Adobe | Date: July 14, 2008

One of my consultants caught me in the hall last week and asked what I thought about visitor engagement. A customer of his developed a complex formula — a mashup of metrics — to measure “Visitor Engagement” on his site. The idea is that this will feature prominently in executive dashboards.  If the number goes up, great.  If it goes down, they’ll take action to rectify the situation.

This sounds like the answer to our engagement prayers: one metric to measure all.  It’s the Esperanto of engagement, a common language by which we can understand the customer.

Unfortunately, I think it’s a terrible idea. In many ways, it’s the antithesis of all that measurement stands for in my mind.  Why?  Let me explain.

The Basic Premise of Measurement
The basic premise of measurement is that you want to measure something so you can improve it, if necessary. If my body fat ratio is out of whack, I’ll work out and eat better to bring it back in line. If my conversion rate is lower than my historical average, I’ll try to improve it.  If campaign response is weak, I’ll look at some fresh creative.

It’s pretty simple – collect data, analyze, improve.  I love this because of its simplicity and objectivity. In my early days of analytics, I spent countless hours watching as executives argued from emotion instead of fact.  And, unfortunately, back in the late 90s, analytics were hardly robust enough to confidently argue in favor of either side.  Generally the person with the bigger title won the argument and their recommendations were put into place.

But now, analytics are far more robust (when implemented and managed properly), and we live in a wonderful world of objectivity (for the most part).

So what happens when you start combining metrics into uber-formulas like Visitor Engagement?  That model breaks, because you introduce a level of abstraction on the data. You “dumb it down,” introducing bias and subjectivity.

Breaking the model: why uber metrics don’t work
Let’s say ‘engagement’ is classically defined as leads/visits on the site.  That’s an objective measure of how a visitor’s experience is leading to a positive outcome for both parties.  In other words, it’s a measure of how engaged the visitor has become in his relationship with a company, and it demonstrates a strengthening relationship – all good things in the world of customer management.

Now let’s say you create an engagement mashup.  The mashup includes visitors that have returned “often” to the site as one metric, whether they view “important” content as another metric, and, just for good measure, we’ll include visitors that spent a “long” time on the site as the final metric.

That’s just three metrics; it can’t be that biased, right?  You bet it can.

First, what kind of return frequency is “often” – two visits?  Four?  Six?  That’s subjective.  What is “important” content?  The home page?  An article?  A support document? Subjective again.  And what is a “long” time on site — 5 minutes, 10 minutes?  Perhaps “long” means any visit that exceeds the average for the site that week?
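To make the subjectivity concrete, here’s a minimal sketch of such a mashup score. Every name and threshold here is hypothetical – which is exactly the problem: the same visitor’s “engagement” changes just by moving the arbitrary cutoffs.

```python
# A sketch of the kind of mashup score described above. All thresholds
# ("often", "important", "long") are hypothetical and arbitrary.

def engagement_score(visits, viewed_important, seconds_on_site,
                     often=4, long_visit=300):
    """Score a visitor 0-3 by counting which arbitrary thresholds they clear."""
    score = 0
    if visits >= often:                # is 4 visits "often"? 2? 6? Subjective.
        score += 1
    if viewed_important:               # which pages are "important"? Subjective.
        score += 1
    if seconds_on_site >= long_visit:  # is 5 minutes "long"? Subjective.
        score += 1
    return score

# The same visitor gets a different score depending on the threshold choices:
visitor = dict(visits=3, viewed_important=True, seconds_on_site=240)
print(engagement_score(**visitor))                           # prints 1
print(engagement_score(**visitor, often=2, long_visit=180))  # prints 3
```

Nothing about the visitor changed between those two calls – only the definitions did.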

You can see how quickly this becomes totally subjective.  Because of its subjectivity, it has become totally worthless.  You have introduced massive bias without coming up with a metric that allows you to make decisions.

Let’s say this formula yields a Visitor Engagement “Score” of 40 for last month.  This month, the same formula produces a score of 30.  That’s a pretty dire situation — but what do you do about it?  How can your executive team act on that number?  They can’t!  Your best hope is to begin dissecting the Visitor Engagement score into its fundamental metrics and figure out which one is responsible for the decrease.

For example, suppose return frequency was flat, visits to important content skyrocketed, but time on site fell through the floor.  You’ll probably want to focus on time spent on site, and see if you can improve that.  But if your primary KPI of leads/visits has increased (i.e. your conversion), maybe you’ve actually done a really good thing and you should leave it alone. You’ve created a lower-friction experience, and the declining Visitor Engagement score supports this.
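In rough numbers (all hypothetical, assuming the score is a simple sum of component sub-scores), that dissection might look like this – the composite hides the story, while the per-metric deltas expose it:

```python
# Hypothetical component sub-scores behind a composite Visitor Engagement
# score. To act on a drop, you have to unbundle the score anyway.

last_month = {"return_frequency": 15, "important_content": 10, "time_on_site": 15}
this_month = {"return_frequency": 15, "important_content": 12, "time_on_site": 3}

print(sum(last_month.values()))  # 40 -- the composite tells you nothing actionable
print(sum(this_month.values()))  # 30

# The actionable view is the per-metric change, not the composite:
deltas = {k: this_month[k] - last_month[k] for k in last_month}
print(deltas)  # {'return_frequency': 0, 'important_content': 2, 'time_on_site': -12}
```

Once you’re looking at the deltas, the composite number has done no work for you at all.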

At this point, you’ll face the undesirable task of convincing your execs that the Visitor Engagement metric, which you fought so hard to socialize and adopt, should actually decline.

But WAIT! Not all uber metrics are bad
So I think you get the point.  Visitor engagement formulas are largely another fad, just like parachute pants and the Hollywood diet.  It’s a measure some consultants and vendors can pitch like snake oil.

But that is not to say that uber metrics are completely worthless.  In select cases, you can actually leverage uber formulas to make very useful decisions.

Uber metrics that are purely objective can hold value for an organization.  Perhaps one of the greatest is RFM – Recency, Frequency, Monetary.  In that case, you’re dealing with an (almost) entirely objective uber metric.  For those not as familiar with RFM, it’s a classic customer segmentation technique that essentially calls for you to score your customers based on their ‘relative’ rank to one another along three primary metrics.

You then roll up these scores to arrive at an uber score, and identify your best (highest scoring) and worst (lowest scoring) customers.  The actions you can take from learnings gleaned from this analysis are too numerous to name.  It’s actually a lot of fun to do these kinds of models. But even in this case, subjectivity can often enter the picture.
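Classic RFM implementations usually score each dimension by quintile (1–5); purely as an illustration, here is a simplified rank-based sketch with made-up customers (no tie handling):

```python
# A minimal RFM sketch: rank customers relative to one another on recency,
# frequency, and monetary value, then roll the three ranks into one score.
# The customer data is invented for illustration.

def rank_scores(values, reverse=False):
    """Map each value to a 1..n rank (higher = better).
    Pass reverse=True when smaller raw values are better (e.g. recency)."""
    order = sorted(values, reverse=reverse)
    return [order.index(v) + 1 for v in values]

customers = {
    "alice":   dict(days_since=10, orders=5, spend=400.0),
    "bob":     dict(days_since=50, orders=2, spend=120.0),
    "charlie": dict(days_since=3,  orders=9, spend=900.0),
}

names = list(customers)
r = rank_scores([customers[n]["days_since"] for n in names], reverse=True)  # recent = better
f = rank_scores([customers[n]["orders"] for n in names])
m = rank_scores([customers[n]["spend"] for n in names])

# Roll the three ranks into one uber score per customer:
rfm = {n: r[i] + f[i] + m[i] for i, n in enumerate(names)}
print(sorted(rfm.items(), key=lambda kv: -kv[1]))
# charlie (3+3+3=9) is the best customer, bob (1+1+1=3) the worst
```

The ranking itself is objective – it falls straight out of the data – which is why RFM holds up better than an engagement mashup.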

For example, the timeline over which you analyze customer data is one of the principal points of subjectivity.  In the RFM model above, do you analyze behavior over 1 month, 6 months, 1 year or 6 years?   Maybe you just take as much data as you can find and mix it all together and hope for the best.  In turn, once you complete your RFM segmentation, what time period do you compare it to?  Weeks?  Months? Years?  Again, subjective.

RFM has a long history of being valuable – so again, I’m not throwing uber metrics under the bus entirely.  Still, I wouldn’t waste your time with most of them.  There are so many opportunities for optimization based on primary key performance indicators like conversion that you can keep your entire team busy for years.

Don’t try to build a better mousetrap when you’re not taking advantage of the one you’ve got today.

So, those are my thoughts.  As always, I welcome your ideas and feedback.