
How to Support Design Decisions Through Iterative Testing

A while ago I was tasked with designing new features for a client’s data-rich web app. The requirements came from business analysts who, together with the product owner, would talk to the client’s product manager and discuss the features that needed to be built.

This workflow is common in service-based companies: with no direct access to the end user, the business analysts and product managers become the de facto experts on customer needs. The immediate consequence is that their personal views become the assumed views of the customer, as there is no data to support a different perspective. Decisions are therefore made based on what business analysts think the customer wants, rather than on what customers actually need.

This is one outcome of the shift towards an Agile culture that sets the focus on continuously building new features, rather than testing and researching which ones are actually worth the development effort. The opportunity cost of not investing time in research to determine what features are actually needed is much higher than a couple of days’ work to prepare low-fi concepts.

In an Agile environment, the UX designer has to constantly deliver solutions for the current iteration while thinking of the remaining items in the backlog, and preparing the concepts for the next sprint.

The Pitfalls of a Continuous Design Process

The continuous design process is a double-edged sword. On one hand it provides constant challenges to designers and keeps them focused on solving design problems, but on the other it doesn’t allow time for feature validation through proper research and testing.

The need to be on schedule and deliver at the beginning of the sprint forces designers to cut corners and rely on gut feeling and personal experience instead of valid research data. And after a feature is released, it is assumed that the client will use it, allowing the team to jump to the next item in the backlog.

In this scenario, we could go on about the lack of support for user research, but the problem is created by Agile development practices that promote feature factories. Believing that validation will come after a release is a flawed expectation as it assumes that if you build a feature, customers will use it.

In many cases, even if a client requests a feature, users will simply ignore it if its functionality slows down their productivity.

Unless designers coordinate across all teams to keep users at the center of their design and development efforts, they will most likely release features that will end up re-worked or dropped.

Unsuccessful features impose an even greater cost than the cost of building them in the first place.

The code written to offer that functionality adds to the complexity of an app, and with each feature the cost of carrying this extra “weight” is higher.

With each new feature released, it becomes harder to modify and test an application.

Support Design Decisions Through User Research

To avoid unnecessary complexity, we must support design decisions with both qualitative and quantitative research. All the more so if your team follows an Agile design and development practice: in that case, you need to adapt your methods to work with far less time and fewer resources than classic user research demands.

There is a wide range of user research methods available, and the Nielsen Norman Group has done a great job of classifying them along three dimensions:

  1. Attitudinal vs. Behavioral
  2. Qualitative vs. Quantitative
  3. By Context of Use

Qualitative research generates data about user behaviors and attitudes, based on observing them directly. This usually happens in usability and field studies where the researcher directly observes how individuals use a specific product.

Quantitative research collects data indirectly, through methods like surveys and analytics. It captures large amounts of data that can be analyzed to show the scale at which certain issues appear.

While quantitative research focuses more on the behavioral dimension, the qualitative approach tries to understand and measure people’s attitudes and mental models when they use a product.
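To make the classification concrete, here is a small, purely illustrative Python sketch of where a few common methods sit on the first two dimensions (the placements follow the NN/g framework; the code itself is just a convenient way to show the mapping):

```python
# Illustrative only: rough placement of common research methods on the
# attitudinal/behavioral and qualitative/quantitative dimensions,
# following the Nielsen Norman Group framework.

RESEARCH_METHODS = {
    # method:           (what it measures, type of data)
    "usability study":  ("behavioral",     "qualitative"),
    "field study":      ("behavioral",     "qualitative"),
    "interview":        ("attitudinal",    "qualitative"),
    "survey":           ("attitudinal",    "quantitative"),
    "analytics":        ("behavioral",     "quantitative"),
    "A/B test":         ("behavioral",     "quantitative"),
}

def methods_matching(value):
    """Return the methods tagged with a given dimension value, e.g. 'qualitative'."""
    return [name for name, dims in RESEARCH_METHODS.items() if value in dims]

print(methods_matching("qualitative"))  # ['usability study', 'field study', 'interview']
```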

Choosing which method to use, and when, is difficult due to the adaptive and evolutionary nature of Agile methods. Depending on the phase of the development process, whether it’s execution or assessment, a lean approach that uses concepts to test assumptions is recommended.

By relying on a build-measure-learn feedback loop, the team can quickly prototype a new feature, measure how clients respond, and learn whether they need to make improvements or completely re-work the concept.

The RITE Method

One of my preferred methods to test concepts is the RITE method (Rapid Iterative Testing and Evaluation), an iterative testing method in which changes to the UI are made as soon as the usability test reveals a problem for which a solution can be proposed.

Usually this happens after observing the behavior of the first participant, when the team decides whether improvements to the concept need to be made before the next round of testing.

After that second round, the team continues to run tests and modify the prototype iteratively, until the desired result is achieved and the feature is ready for development.
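If it helps to picture the shape of this process, here is a minimal pseudocode-style sketch in Python. Every name in it is a placeholder of my own, not part of any real tool or library:

```python
# A minimal, purely illustrative sketch of the pure RITE loop.
# recruit_participant, run_session, propose_fix and apply_fix are
# made-up placeholders for activities the team performs, not APIs.

def rite_loop(prototype, recruit_participant, run_session, propose_fix, apply_fix):
    """Observe one participant at a time; fix the UI as soon as a problem
    with a proposable solution appears, then test the updated prototype."""
    while True:
        participant = recruit_participant()
        problems = run_session(prototype, participant)
        if not problems:
            # Desired result achieved: the concept is ready for development.
            return prototype
        fix = propose_fix(problems)
        if fix is not None:
            # Change the UI right away, before the next session, rather
            # than waiting for a full round of testing to finish.
            prototype = apply_fix(prototype, fix)
        # There is no fixed number of iterations here, which is why the
        # end date of testing in pure RITE is hard to predict.
```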

The disadvantages of using this method in its pure form in an Agile context are:

  • the time needed to adjust the prototype between testing sessions can affect the development schedule.
  • there is no clear deadline for when testing will be finished, as that depends on the type of issues found in each round of testing.

Solving User Testing Time and Scope Issues

In an iterative process, low-fidelity design tasks replace the traditional design phase. Because testing sessions use low-fidelity wireframes, much of the functionality in the concept will be incomplete.

The risk is that during the first test, the client might point out those incomplete parts as issues and disregard the areas that the researcher needs validation on.

The best way to test with low-fidelity wireframes is to keep the focus on specific sections and test them in an isolated context.

By focusing on smaller components, or even just parts of a bigger feature that will be developed in several sprints, the total number of potential issues is minimized.

Conclusion

In just a couple of days of user research, designers can identify the biggest usability problems in their concept. This activity protects the business from investing time and resources in features that distract and damage its long-term goals.

This adapted RITE method allows for testing and reiterating to be completed in just a couple of days, empowering designers to conduct tests in every sprint. By running tests on small components every two or three weeks, UX designers can align themselves to the development schedule and push design improvements faster.
