Archive for September, 2011

Reasons why visitors might not leave page ratings

In November 2010, a ratings badge was added to most of our online help and learning content. Visitors are asked on each page, “Was this helpful?”

We use these page ratings quite extensively: we continually monitor changes in the scores and try to improve our content.
We have found that the vast majority of visitors don’t leave a rating. This is especially troublesome for our less trafficked documents, since it leaves us little guidance on what to improve.

Matt Horn, a Senior Content and Community Lead, decided to investigate the reasons why people might not leave a rating. He wrote a blog post over on the Flex Doc Team blog asking people to comment on when they do and do not rate our content. Although it is difficult to draw general conclusions from such a small sample, the results are still pretty interesting. Here is a summary, provided by Matt:


Why they don’t rate

The biggest reason folks don’t rate is that they’re too busy. Six people said they either ignore the widget or are too busy to click on it. They just want the help content and ignore everything else: “Don’t even see it. It’s like when you go to a website and have to click away the “research” popups.” They may notice it but just want to move on: “When I solve the problem, I am relieved and just want to get back to work.”

A couple of people didn’t even realize there was a ratings widget. Two people mentioned that it loads more slowly than the rest of the page, so they are usually already scrolling on by the time it finishes loading. My tests bear this out: it loads after the content.

Two people felt the question was imprecise but didn’t suggest a better one. “I perceive it as a question about overall experience and every time I find the information I feel frustrated a bit, so that I don’t feel like pressing “Yes” because I didn’t like the way I reached the info, and pressing “No” is also not an option, because info was indeed helpful.” Similarly, another person said he often searches for and finds the wrong information, so he doesn’t rate the page he lands on. Perhaps rewording the question to something like “Was this page helpful?” would be a small step.

One person said maybe people only rate when the pages are really bad or really good. “Remember a non-answer can still be an answer. I think only the extreme ends want to be vocal.”

Some misperceptions persist about the ratings widget. One person said he doesn’t want to log in to rate anything, and two people said they don’t rate because they feel the comments are ignored. In both cases, they were confusing the ratings widget with the commenting system at the bottom of the page, either as it works now or as it used to work.

Specific suggestions

Some users suggested specific ways to improve the number of ratings.

  • One person suggested having the widget stick with the user on the side of the page as they navigate. He specifically mentioned the Oracle feedback widget: “It’s not annoying but it is also hard to ignore.”
  • One person just doesn’t like radio buttons, but didn’t appear to dislike the idea of rating pages: “Do something better and I’ll rate more.”

Don’t bother

Three people said they wouldn’t rate the pages regardless of how or where we put the widget. Instead, they suggested we collect analytics in other ways.

One person suggested adding a “Copy” button to code blocks so that we could track how often users copy sample code.
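As a rough illustration of that suggestion, the sketch below adds a “Copy” button above each code block and records a click event when it is used. The `pre.code-block` selector and the `trackEvent` function are hypothetical placeholders, not part of our current pages or analytics setup; a real version would hook into whatever analytics service the site already uses.

```typescript
// Hypothetical sketch: add a "Copy" button to each code block and count clicks.
// trackEvent is a placeholder for whatever analytics call the site actually uses.
declare function trackEvent(category: string, action: string, label: string): void;

function addCopyButtons(): void {
  // "pre.code-block" is an assumed selector for code samples on a help page.
  document.querySelectorAll<HTMLElement>("pre.code-block").forEach((block, index) => {
    const button = document.createElement("button");
    button.textContent = "Copy";
    button.addEventListener("click", () => {
      // Copy the sample to the clipboard...
      void navigator.clipboard.writeText(block.innerText);
      // ...and record which block on which page was copied.
      trackEvent("help-content", "copy-code", `${location.pathname}#block-${index}`);
    });
    block.insertAdjacentElement("beforebegin", button);
  });
}

document.addEventListener("DOMContentLoaded", addCopyButtons);
```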

Another person said he would be fine if Adobe contacted him and asked him specific questions about help page usage.

One person mentioned that it would be interesting to generate reports of help usage: “Maybe you could find a way to track user’s usage and then present it back to them as a report which they could comment on.” This seems a little Big Brother-ish to me.

Off topic

As usual, users took the opportunity to make a few points that were not exactly related to the issue of collecting ratings:

  • Two people wanted Eclipse help.
  • Two people wanted more sample code.
  • Two people wanted links to the Adobe forums from the help pages.
  • One person wanted the ability to rate comments, as in the Stack Overflow ask/answer system.

Actions

There are some steps we could take that might increase the number of ratings, although most of them probably wouldn’t move the needle much:

  • Reword the question from “Was this helpful?” to “Was this page helpful?”
  • Add “No login required” to reassure users that they don’t need to log in to rate.
  • Load the widget earlier in the page load.
  • Change to a 5-star rating system rather than a YES/NO question.
  • Have the ratings widget move with the user as they scroll the page.
  • How about adding the current rating to the widget (see the sketch after this list)? Something like:
    • “Was this page helpful? YES/NO (45% of users found this page helpful)”
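For that last idea, here is a minimal sketch of how the widget text could be composed from the yes/no counts the ratings system already collects. The PageRatings shape and its field names are assumptions for illustration, not our actual schema.

```typescript
// Hypothetical sketch: compose the widget prompt from existing yes/no counts.
interface PageRatings {
  yesCount: number; // "Yes, this was helpful" votes
  noCount: number;  // "No, this was not helpful" votes
}

function ratingPrompt(ratings: PageRatings): string {
  const total = ratings.yesCount + ratings.noCount;
  if (total === 0) {
    // No ratings yet: fall back to the plain question.
    return "Was this page helpful? YES/NO";
  }
  const helpfulPct = Math.round((ratings.yesCount / total) * 100);
  return `Was this page helpful? YES/NO (${helpfulPct}% of users found this page helpful)`;
}

// Example: 45 yes votes out of 100 ratings -> "(45% of users found this page helpful)"
console.log(ratingPrompt({ yesCount: 45, noCount: 55 }));
```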


What about you? Do you typically leave page ratings? Why or why not?

Come see our upcoming Community Help session at MAX!

Will you be attending MAX this year? If so, come check out this Community Help session!


Social Studies: Connecting Content and Community in the Cloud

Come see how a few simple UX design patterns can facilitate a shared, social learning experience that blurs the boundaries between inspiration and instruction, as well as between content and community. Three trends are currently sweeping digital media: tablets are moving from content consumption to content creation, social features are increasingly pervasive, and everything is shifting to the cloud. Join us to explore how this trifecta creates exciting opportunities for designers and developers, and to examine our own promising effort to take advantage of these trends. For more info, click here.

For more information on Adobe MAX, please visit the official website.


We’re hiring! Updated: Req is closed

The Community Help & Learning (CHL) group works with the broader Adobe community to identify and provide Adobe learning and troubleshooting content. As we expand further into mobile, social and online communication channels, our content and community leads increasingly need ways to illuminate the customer experience.

We are the Learning Research group within Community Help & Learning (CHL), and here’s what we get to do all day in support of the CHL vision:

  • Identify and measure our success – How well do we provide the information users need, in the way they need to get it?
  • Collaborate with our content and community leads to move toward success
  • Conduct design research – Collaborate with research scientists & engineers on iterative investigation toward technical solutions
  • Provide decision support – Should we spend money on a solution? How could we test to find out?
  • Understand the big picture – Stay abreast of search optimization, social and online communication, mobile content, online ethnographic methods, and whatever else comes our way.

We’re looking for two colleagues: one a recent (or soon-to-be) graduate for a full-time internal Learning Researcher position, and one a more senior person coming in as a contractor. Both should have a background in multiple research methods (such as quantitative, ethnographic, and user research) and understand measurement in the context of social/online communication, mobile content, or maybe both!

Join our team! To view the job posting for the contractor position, please visit here or here. The full-time recent-graduate position has not yet been posted. We will update this post with the link when it is available, but if you’re too eager to wait for the official posting, contact Jill Merlin at jmerlin@adobe.com to get started.

Using live web-based user interaction studies

Over the past few months we have been experimenting with a new (to us) methodology: live web-based user interaction studies. This methodology allows us to observe and interview, in real-time, a user on Adobe’s online Help and Support pages who is trying to solve a problem.


How it works:
Using a service called Ethnio, we are able to set up a screener that pops up for visitors on a certain page. If the user is interested in participating, he or she is asked to fill out a short survey and provide a phone number. We are then able to phone the user immediately, conduct the interview, and directly investigate his or her experience of trying to solve a problem using the Adobe site.
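For readers curious about the mechanics, the sketch below illustrates the general intercept pattern in the abstract: invite a small random fraction of visitors on the target page, and collect a callback number from those who opt in. It is only a conceptual illustration; it is not the Ethnio API, not our actual implementation, and the 2% intercept rate is an assumed value.

```typescript
// Generic illustration of a live-intercept screener (not the Ethnio API).
// A small fraction of visitors on the target page are invited to participate;
// those who opt in answer a short screener and leave a phone number so a
// researcher can call them right away.

interface ScreenerResponse {
  page: string;             // the help page the visitor was on
  whatWereYouDoing: string; // short description of the visitor's task
  phoneNumber: string;      // used to call the participant immediately
}

const INTERCEPT_RATE = 0.02; // invite roughly 2% of visitors (assumed value)

function maybeShowScreener(submit: (r: ScreenerResponse) => void): void {
  if (Math.random() >= INTERCEPT_RATE) return; // most visitors never see it

  const interested = window.confirm(
    "Would you be willing to talk to a researcher about this page?"
  );
  if (!interested) return;

  const task = window.prompt("Briefly, what were you trying to do?") ?? "";
  const phone = window.prompt("What phone number can we call you at right now?") ?? "";
  if (phone) {
    submit({ page: location.pathname, whatWereYouDoing: task, phoneNumber: phone });
  }
}
```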

Example Study – Printing Tips

The most frequently visited document in the Adobe.com knowledge base is Printing Tips, an article that helps users with common printing tasks in Acrobat/Reader. This document was receiving 5 million page views per month, and the vast majority of visitors arrived at it through a button labeled ‘Printing Tips’ in the print dialog box; clicking that button takes you to the Printing Tips web page.

Aside from being so highly trafficked, this document also had a poor user rating (~50%), making it a sensible target for improvement. The Printing Improvement team had analyzed a lot of data from different sources, including web analytics and text analysis of user comments. This analysis led the team to make several well-thought-out content improvements to the document. Despite these efforts, the user rating didn’t budge from 50%.

It was clear to the team that they needed a deeper understanding of the customer experience in order to improve the document, so we conducted a study to pilot the user interaction methodology. We had four main questions for users:

  1. What problem were you trying to solve?
  2. How did you get to this page?
  3. What were you expecting when you clicked the Printing Tips button?
  4. Did the page help solve your problem?

After conducting seven interviews, we discovered that users were arriving at the page by accident, in the course of testing each element in the print dialog box, including the Printing Tips button. The true source of the pain was not poor content but a confusing dialog box.

The Printing Improvement team was able to use these findings to improve the print dialog box itself.

When to use interaction studies:

In our group, we advocate using this methodology when you have some sort of mystery to solve about user behavior. If users are behaving in a way you don’t understand, the available quantitative data isn’t providing the necessary insights, and you have a focused research question, a user interaction study may be beneficial. However, we also have some guidelines about when NOT to use an interaction study:

  • When you already have a pretty good idea of what is going on – user interaction studies are time-intensive; and
  • When you are trying to identify a pattern – interaction studies can provide hints and insights, but the number of participants is generally too small to generalize the results to the broader population.