Reasons why visitors might not leave page ratings

In November 2010, a ratings badge was added to most of our online help and learning content. Visitors are asked on each page, “Was this helpful?”

We actually use these page ratings quite extensively – we continually monitor changes in the scores and use them to improve our content. We have found, however, that the vast majority of visitors don’t leave a rating. This is especially troublesome on our less trafficked documents, since it leaves us little guidance about what we can improve.

Matt Horn, a Senior Content and Community Lead, decided to investigate why people might not leave a rating. He wrote a post on the Flex Doc Team blog asking readers to comment on when they do and do not rate our content. Although it is difficult to draw general conclusions from such a small sample, the results he obtained are still pretty interesting. Here is a summary of his findings, provided by Matt:

 

Why they don’t rate

The biggest reason folks don’t rate is that they’re too busy. 6 people said they either ignore the widget or are too busy to click on it. They just want the help content and ignore everything else: “Don’t even see it. It’s like when you go to a website and have to click away the ‘research’ popups.” They may notice it but just want to move on: “When I solve the problem, I am relieved and just want to get back to work.”

A couple of people didn’t even realize there was a ratings widget. 2 people mentioned that it loads slower than the rest of the page, so they are usually already off and scrolling by the time it finishes loading. My tests bear this out: the widget loads after the content.

2 people felt the question was imprecise, though neither said what a better question would be. “I perceive it as a question about overall experience and every time I find the information I feel frustrated a bit, so that I don’t feel like pressing ‘Yes’ because I didn’t like the way I reached the info, and pressing ‘No’ is also not an option, because info was indeed helpful.” Similarly, another person said he might land on a page but not rate it because he often searches for and finds the wrong information. Perhaps rewording the question to something like “Was this page helpful?” would be a small step.

1 person said maybe people only rate when the pages are really bad or really good. “Remember a non-answer can still be an answer. I think only the extreme ends want to be vocal.”

Some misperceptions about the ratings widget persist. 1 person said he doesn’t want to log in to rate anything, and 2 people said they don’t rate because they feel the comments are ignored. In both cases, they were confusing the ratings widget with the commenting system at the bottom of the page, either as it works now or as it used to work.

Specific suggestions

Some users suggested specific ways to improve the number of ratings.

  • 1 person suggested having the widget stick to the side of the page and follow the user as they navigate. He specifically mentioned the Oracle feedback widget: “It’s not annoying but it is also hard to ignore.”
  • 1 person just doesn’t like radio buttons, but didn’t appear to dislike the idea of rating pages: “Do something better and I’ll rate more.”

Don’t bother

3 people said they wouldn’t rate the pages regardless of how or where we put the widget. Instead, they suggested we collect analytics in other ways.

1 person suggested adding a “Copy” button to our code blocks so that we could track how often users copy sample code.
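
For illustration, here is a minimal sketch of how that could work, assuming a hypothetical pre.code-sample selector for our code blocks and a hypothetical /analytics/copy-event logging endpoint (neither is part of our actual pages):

```typescript
// Minimal sketch only: "pre.code-sample" and "/analytics/copy-event" are
// hypothetical placeholders, not selectors or endpoints from our real pages.
function addCopyButtons(): void {
  document.querySelectorAll<HTMLPreElement>("pre.code-sample").forEach((block, index) => {
    const button = document.createElement("button");
    button.textContent = "Copy";
    button.addEventListener("click", async () => {
      // Copy the code block's text to the clipboard.
      await navigator.clipboard.writeText(block.innerText);
      // Record the copy event so it can be counted alongside page ratings.
      navigator.sendBeacon(
        "/analytics/copy-event",
        JSON.stringify({ page: location.pathname, blockIndex: index })
      );
    });
    block.insertAdjacentElement("beforebegin", button);
  });
}

addCopyButtons();
```

Each click would then show up as a concrete signal that a page’s sample code is being used, even when nobody leaves a rating.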

Another person said he would be fine if Adobe contacted him and asked him specific questions about help page usage.

1 person mentioned that it would be interesting to generate reports of help usage: “Maybe you could find a way to track user’s usage and then present it back to them as a report which they could comment on”. This seems a little big brotherish to me.

Off topic

As usual, users took the opportunity to make a few points that were not exactly related to the issue of collecting ratings:

  • 2 people wanted Eclipse help.
  • 2 people wanted more sample code.
  • 2 people wanted links to the Adobe forums from the help pages.
  • 1 person wanted the ability to rate comments, as in the Stack Overflow ask/answer system.

Actions

There are some steps we could take that might increase the number of ratings we receive, although most of them probably wouldn’t move the needle much:

  • Reword the question from “Was this helpful?” to “Was this page helpful?”
  • Add “No login required” to reassure users that they don’t need to log in to rate.
  • Load the widget earlier, so it appears along with the rest of the page content.
  • Change to a 5-star rating system rather than a YES/NO question.
  • Have the ratings widget move with the user as they scroll the page.
  • Add the current rating to the widget itself (a rough sketch follows this list), something like:
    • “Was this page helpful? YES/NO (45% of users found this page helpful)”
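
To make that last idea concrete, here is a rough sketch of a widget that fetches the current score and displays it next to the YES/NO question. The /api/ratings endpoint and its JSON shape are assumptions for illustration only, not our actual ratings service:

```typescript
// Rough sketch only: the /api/ratings endpoint and the PageRating shape are
// hypothetical, not the service behind our real ratings badge.
interface PageRating {
  yes: number; // count of YES votes
  no: number;  // count of NO votes
}

async function renderRatingWidget(container: HTMLElement): Promise<void> {
  const page = encodeURIComponent(location.pathname);
  const current: PageRating = await (await fetch(`/api/ratings/${page}`)).json();

  const total = current.yes + current.no;
  const percentHelpful = total > 0 ? Math.round((100 * current.yes) / total) : 0;

  container.innerHTML = `
    <p>Was this page helpful?
      <button id="rate-yes">YES</button>
      <button id="rate-no">NO</button>
      <span>(${percentHelpful}% of users found this page helpful)</span>
    </p>`;

  // Record a vote; note that no login is required.
  for (const answer of ["yes", "no"] as const) {
    container.querySelector(`#rate-${answer}`)!.addEventListener("click", () => {
      void fetch(`/api/ratings/${page}`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ answer }),
      });
    });
  }
}
```

Whether showing the running percentage would actually nudge more visitors to vote is exactly the kind of thing we would want to test.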

 

What about you? Do you typically leave page ratings? Why or why not?

Come see our upcoming Community Help session at MAX!

Will you be attending MAX this year? If so, come check out this Community Help session!

 

Social Studies: Connecting Content and Community in the Cloud

Come see how a few simple UX design patterns can facilitate a shared, social learning experience that blurs the boundaries between inspiration and instruction, as well as between content and community. Three trends are currently sweeping digital media: tablets are moving from content consumption to creation, social features are increasingly pervasive, and everything is shifting to the cloud. Join us to explore how this trifecta creates exciting opportunities for designers and developers, and to examine our own promising effort to take advantage of these trends.

For more information on Adobe MAX, please visit the official website.

 

We’re hiring! Updated: Req is closed

The Community Help & Learning (CHL) group works with the broader Adobe community to identify and provide Adobe learning and troubleshooting content. As we expand further into mobile, social and online communication channels, our content and community leads increasingly need ways to illuminate the customer experience.

We are the Learning Research group within Community Help & Learning (CHL), and here’s what we get to do all day in support of the CHL vision:

  • Identify and measure our success – How well do we provide the information users need, in the way they need to get it?
  • Collaborate with our content and community leads to move toward success
  • Conduct design research – Collaborate with research scientists & engineers on iterative investigation toward technical solutions
  • Decision support – Should we spend money on a solution? How could we test to find out?
  • Understand the big picture – stay abreast of search optimization, social and online communication, mobile content, online ethnographic methods, and whatever else comes our way.

We’re looking for two colleagues: one a recent (or soon-to-be) graduate for a full-time internal Learning Researcher position, and one a more senior person coming in as a contractor. Both should have a background in multiple research methods (such as quantitative, ethnographic, and user research) and understand measurement in the context of social/online communication, mobile content, or both!

Join our team! To view the job posting for the contractor position, please visit here or here. The full-time recent graduate position has not yet been posted; we will update this post with the link when it is available. If you’re too eager to wait for the official posting, contact Jill Merlin at jmerlin@adobe.com to get started.

Using live web-based user interaction studies

Over the past few months we have been experimenting with a new (to us) methodology: live web-based user interaction studies. This methodology allows us to observe and interview, in real time, a user on Adobe’s online Help and Support pages who is trying to solve a problem.

 

How it works:
Using a service called Ethnio, we set up a screener that pops up for visitors on a particular page. If a visitor is interested in participating, he or she is asked to fill out a short survey and to provide a phone number. We then phone the user immediately and conduct the interview, directly investigating his or her experience of trying to solve a problem using the Adobe site.
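
For readers curious about the mechanics, here is a simplified sketch of what such an intercept flow looks like. It is an illustration only, not Ethnio’s actual API; the target page, sampling rate, and /research/screener endpoint are all hypothetical:

```typescript
// Simplified illustration of an intercept screener; this is NOT Ethnio's API.
// The target page, sampling rate, and endpoint below are hypothetical.
interface ScreenerResponse {
  page: string;
  answers: Record<string, string>;
  phoneNumber: string;
}

const TARGET_PAGE = "/support/printing-tips.html"; // hypothetical document path
const SAMPLING_RATE = 0.05;                        // show the screener to ~5% of visitors

function maybeShowScreener(): void {
  if (location.pathname !== TARGET_PAGE) return;
  if (Math.random() > SAMPLING_RATE) return;
  if (!confirm("Would you be willing to answer a few questions about this page?")) return;

  const response: ScreenerResponse = {
    page: location.pathname,
    answers: { problem: prompt("What problem are you trying to solve?") ?? "" },
    phoneNumber: prompt("What phone number can we call you at right now?") ?? "",
  };

  // Hypothetical endpoint that alerts a researcher so they can phone the visitor right away.
  navigator.sendBeacon("/research/screener", JSON.stringify(response));
}

maybeShowScreener();
```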

Example Study – Printing Tips

The most frequently visited document in the Adobe.com knowledge base is Printing Tips, an article that helps users with common printing tasks in Acrobat and Reader. This document was receiving 5 million page views per month. The vast majority of visitors arrive at the page through a button labeled ‘Printing Tips’ in the print dialog box; clicking that button takes you straight to the Printing Tips web page.

Aside from being so highly trafficked, this document also had a poor user rating (~50%), making it a sensible target for improvement. The Printing Improvement team had analyzed a lot of data from different sources, including web analytics and text analysis of user comments, and that analysis led the team to make several well-thought-out content improvements to the document. Despite these efforts, the user rating didn’t budge from 50%.

It was clear to the team that they needed a deeper understanding of the customer experience in order to improve the document, and so we conducted a study piloting the user interaction methodology. We had 4 main questions for users:

  1. What problem were you trying to solve?
  2. How did you get to this page?
  3. What were you expecting when you clicked the Printing Tips button?
  4. Did the page help solve your problem?

After conducting 7 interviews, we discovered that users were coming to the page by accident, in the course of testing each element in the print dialog box, including the Printing Tips button. The true source of the pain was not poor content but a confusing dialog box.

The Printing Improvement team was able to use these findings to improve the print dialog box itself.

When to use interaction studies:

In our group, we advocate using this methodology when you have some sort of mystery to solve about user behavior. If users are behaving in a way you don’t understand, the available quantitative data isn’t providing the necessary insights, and you have a focused research question, a user interaction study may be beneficial. However, we also have some guidelines about when NOT to use an interaction study:

  • When you already have a pretty good idea of what is going on – user interaction studies are time-intensive; and
  • When you are trying to identify a pattern – interaction studies can provide hints and insights, but the number of participants is generally too small to generalize the results to the broader population.

Determining the ROI of Social Media for Online Learning Content

By now most companies have realized just how valuable social media can be, particularly when it comes to marketing. But how do you quantify the value of using social media to promote online learning content?

Dr. Natalie Petouhoff has written extensively about calculating the ROI of social media in terms of reduced support costs for businesses. Check out her great video series on this topic. In fact, she has a free ROI of Social Media webinar coming up on Aug. 25th that is being offered through CRM Xchange – can’t wait to learn more about her approach!

But what about other effects of social and online communication? Some things we are currently looking at include:

  • Sentiment (i.e., positive/negative perceptions of a product or brand);
  • Online customer satisfaction; and
  • Traffic to the online resource.

When we share links to helpful resources for our customers, are there other valuable outcomes? How should these outcomes be measured?

 

 


New content coming soon!

We are excited to revive this blog and we will be posting new content very soon – stay tuned!

Major lessons from observing user workflows

We recently asked Create with Context (CWC), an independent research and design company, to conduct lab studies of four important user workflows involving Adobe products. We wanted to understand the effectiveness of the learning experience around these workflows, and figure out how to improve them.

We learned three important lessons that we think will apply across every workflow and learning experience using Adobe products and learning resources:

  • Users would prefer _not_ to learn something new in the middle of their work. Rich learning experiences like this one for Flex are good for advancing your skills – you need something different when you just want to get something done. We need to find ways to deliver appropriate content quickly, while still offering rich resources when people have time for them.
  • Users are in a big hurry and read as little as possible. This matches what we know from prior research. The tricky part is that some of the time, users need to read in order to get what they’re looking for. How far can we boil down our content? How can we help users understand when it’s actually worth reading?
  • Users may not know the technical language for the techniques they want to learn. This is a big obstacle to effective searching! We need to figure out how to connect the words people use to describe what they are looking for with the words used in learning materials.

Coming soon: The methodology behind these lessons

So, what does a learning research team do exactly?

Our team is responsible for much of the research used in the Learning Resources group. Our colleagues in Learning Resources not only develop learning content for all Adobe products, but also administer the communities and maintain the navigation and search mechanisms involved in the learning experience on Adobe.com. They seek out and share community-created content, support the community moderators who help manage community participation, and produce the product Help. In order to do this effectively, they conduct extensive investigations into the needs and preferences of the Adobe community.

The research team’s work boils down to three kinds:

  • Summative – we are the scorekeepers for the Learning Resources group. We’re especially interested in user success, user contributions (comments that add value to our learning resources), search success metrics (abandonment, clickthrough, and search modification – a rough sketch of how these might be computed follows this list), and calls to the Support call center.
  • Formative – we compile data that our colleagues, Adobe’s corps of content leads, can use to improve the above scores. For example, we report open-ended survey responses, contributions by product, search success by product and query, and calls by call category. We also work with them on studies of user behavior, to find out exactly what gets in users’ way when they are trying to learn Adobe software.
  • Decision support – we help our colleagues articulate their design decisions, the questions whose answers will inform those decisions, and strategies for answering those questions.
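
As a concrete illustration of those search success metrics, here is a hedged sketch of how abandonment, clickthrough, and modification rates might be computed from a log of search events. The SearchEvent fields are assumptions, not our actual logging schema:

```typescript
// Hedged sketch only: the SearchEvent fields are assumed, not our real log schema.
interface SearchEvent {
  sessionId: string;
  query: string;
  clickedResult: boolean; // did the user click any result for this query?
  nextQuery?: string;     // the query issued immediately afterward, if any
}

function searchSuccessMetrics(events: SearchEvent[]) {
  const total = events.length;
  if (total === 0) {
    return { clickthroughRate: 0, modificationRate: 0, abandonmentRate: 0 };
  }

  const clicked = events.filter((e) => e.clickedResult).length;
  const modified = events.filter((e) => e.nextQuery && e.nextQuery !== e.query).length;
  const abandoned = events.filter((e) => !e.clickedResult && !e.nextQuery).length;

  return {
    clickthroughRate: clicked / total,   // searches that led to a result click
    modificationRate: modified / total,  // searches immediately reworded
    abandonmentRate: abandoned / total,  // searches with no click and no follow-up query
  };
}
```
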
Going forward, we’ll write more about each aspect of the work, and report on some of our findings.

Welcome to the Adobe Learning Research Blog!

In this blog, members of the Learning Research team at Adobe will post about a variety of topics. The main purpose of the blog is to inform customers about how we use data to improve our learning resource offerings. We’ll post about our methods, about current projects, and about decisions made by the Learning Resources group based on our findings. We really do use what we learn from you, and we hope you’ll continue to tell us what we need to know!

Our team is:

  • Meg Gordon, Learning Researcher
  • Diana Joseph, Learning Research Manager
  • Rob Liebscher, Senior Research Analyst
  • Lindsay Oishi, Learning Research Intern
  • Ya Ting Teng, Learning Research Intern