User Testing

Testing and being proven wrong is not a mistake. It is a step forward towards something better.

It is inconceivable that one claims to design user experiences, focus on user-centred design, or apply heuristic principles to products without conducting regular and thorough user testing. Period.

One can't be so presumptuous as to think one's assumptions and decisions work for others the way one thinks they would. More often than not, they won't.

We live in our own context. In our own bubble. This bubble is determined by our environment, the people we interact with, our preferences, and interests.

It goes without saying that unless you design something for yourself or for people within the same context as yours, you are not in a position to presume your assumptions are valid. If you do so, the odds are significantly against you.

This is why user testing is so seriously fundamental.

Push boundaries. Test it.

You know, we are keen to ask why. It is rooted in us. Part of this "ask why" mentality relates to pushing the limits. To create something better. Something different. Every time. Not for the artistic urge to be different, but because there are invariably more, better ways to solve a problem.

It is analogous to science. Science is constantly trying to prove itself wrong. Not right. Wrong! New hypotheses and experiments are continually put forward to challenge the status quo and, as a result, push knowledge forward. You don't see a scientist going: "I believe the cure for AIDS is bleach pills! Let's get that going, shall we?". How mental would that be?

We must face Interface Design likewise. Observe through user and competitor research. Question how we can improve from there. Hypothesise new ways of solving that problem. Design it. Test it to prove yourself wrong until you are right.

Even then, you will be wrong. But right-er.

Testing and being proven wrong is not a mistake. It is a step forward towards something better.

User testing, why?

As with any job, some of your hypotheses will be wrong. Some of them will be spot on. Most will be off-target or hit the woodwork.

User testing minimises shots off-target. It doesn't mean that every ball will nail the top corners, but it boosts the odds of actually scoring a goal.

Then, there are two options, really:

  1. You conduct user testing during the design and development phases, just in time to fix underlying problems before it is too late. You are safer.

  2. You don't conduct user testing at all, and a few months from now, all the problems will hit you back, Murphy's law style. And when they hit, they swing hard at you.

In other words, user testing indisputably results in better products. The fact alone that you can check whether expectation meets reality by watching someone else use the product would be argument enough. But this is not all.

It saves time and resources. Anticipating usability problems by identifying them early in the process and fixing them instantly will prevent resources from being invested in unworthy features.

Realise feature value. Sometimes, you are invested in a feature, an interaction, or a particular decision you got attached to. Your users may not feel the same. User testing allows you to impartially discover the real value of a given feature.

Get unbiased eyes. You are biased because you created it. You know it to the core. Your brain is addicted to it. You can use it with your eyes closed. Usability tests deliver a reality check. They force you to gather objective information about the product's understandability and usability.

Uncover frustrations. Get hold of friction points before it is too late. By watching the test first-hand, you can immediately spot broken, cumbersome, frustrating flows.

Discover hidden issues. Detect anomalies that would otherwise be unnoticeable. Things that you'd think are pretty obvious often get ignored by the users in the overwhelming wonder of a new product.

How it's done.

Conducting user testing is not a seven-headed beast. It may look like one, but, in the end, it is a pretty tameable kitten.

First: set the goals. The first thing you need is objectives. Identify what needs to be tested. Make it clear. You may want to test a particular interaction or verify the ease of express checkout.

Now, this is very important: the more granular the test scope is, the better the results will be. It is far more effective to conduct multiple small tests than one massive one. It all comes down to granular goals.

Next: prototypes. Based on the goals, create the appropriate prototypes. It is that simple.

Bear in mind one thing, though: be careful about hotspots to avoid skewed results. If you are using design prototypes, avoid single hotspot screens. If there is only one on-screen clickable spot, testers will skim through brainlessly, following the hotspots. Keep prototypes as authentic as possible.

Then: recruit participants. Finding people isn't the tricky bit. Finding the right people is. Make sure whoever you recruit belongs to the product's target group. You don't wanna be testing a pharmaceutical app for the elderly with a 30-year-old, do you?

Keep the participant numbers small. Between 5 and 7 people per goal is enough to prove you right or wrong.
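The "5 to 7 people" rule isn't arbitrary. A common way to reason about it is the problem-discovery model from usability research (Nielsen & Landauer): the share of problems found by n participants is 1 - (1 - p)^n, where p is the probability that a single participant surfaces a given problem. The sketch below assumes the often-cited average of p = 0.31; the real value varies by product and task.

```python
# Problem-discovery model: fraction of usability problems uncovered
# by n participants, assuming each participant independently surfaces
# a given problem with probability p. p = 0.31 is an assumed average
# from the usability literature, not a universal constant.

def problems_found(n: int, p: float = 0.31) -> float:
    """Expected fraction of usability problems uncovered by n testers."""
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 7, 15):
    print(f"{n:2d} testers -> {problems_found(n):.0%} of problems")
```

Under that assumption, 5 testers already uncover roughly 84% of problems and 7 about 93%, while doubling the panel past that buys very little. That diminishing return is why small, repeated tests with granular goals beat one massive session.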

After: find the location. Set things up. Although most tests can be run remotely, some will need on-site or in-person tests. The latter requires a bit more planning.

Remote tests are more straightforward because there are tools that save you from the hassle of picking up the phone and calling people. The good news is that the projects we typically work on rarely require in-person tests. With that said, choose a tool and define the audience. Done.

Always watch the test. There are two types of tests: moderated and unmoderated. The first refers to face-to-face tests, where the moderator is responsible for guiding participants in real time. The other relates to sessions without a moderator, intended to mimic the situation where the user would typically use the product: by themselves.

Regardless of the type, always record the screen and the given instructions. For moderated tests, you will need extra gear if you intend to capture the users' reactions, audio, and screen. Take notes nonetheless. You wanna get those juicy, real-time responses from participants.

For the second type, you'll likely use one of the readily available platforms. These take care of audio, video, and screen for you if you choose to. One-click away. Less fuss.

Regardless of the type, you should always watch the test more than once. There will be reactions or micro-expressions that go unseen the first time you watch it. So, review it once to take more pragmatic notes. Then watch it again, focusing on the subtler cues.

Analyse the results. A big chunk of time will be spent analysing results and cross-checking data. User testing is more ambiguous than user research, given its distinct nature and goals. Most of it will come down to your interpretations of what happened rather than mathematical data. Still, your conclusions are to be drawn.

In the end: consolidate results. Prepare a document and compose your analysis. It goes as follows:

  1. The goal. Present the goals for the session.

  2. The participants. Introduce the participant count and background of each.

  3. The results. Display the findings resulting from the analysis of the tests.

  4. The action points. Portray the picture of what's to be done as a consequence of testing.
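The four-part structure above can be sketched as a simple report skeleton. A minimal sketch: the section names come from the list above, while the function name, parameters, and sample values are illustrative, not a prescribed format.

```python
# Minimal sketch of the four-part consolidation document described above.
# Section names come from the text; field names and contents are illustrative.

def consolidate(goal: str, participants: list[str], results: list[str],
                actions: list[str]) -> str:
    """Assemble a plain-text user-testing report with the four sections."""
    lines = [
        "1. The goal", f"   {goal}", "",
        "2. The participants",
        *[f"   - {p}" for p in participants], "",
        "3. The results",
        *[f"   - {r}" for r in results], "",
        "4. The action points",
        *[f"   - {a}" for a in actions],
    ]
    return "\n".join(lines)

print(consolidate(
    goal="Verify the ease of express checkout",
    participants=["P1: 34, frequent online shopper", "P2: 29, first-time user"],
    results=["Both testers overlooked the promo-code field"],
    actions=["Move the promo-code field above the payment section"],
))
```

Whatever the format, the point is the order: goals first, so every finding and action point that follows can be read against what the session set out to prove.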

When it's done.

User testing can be done at any moment throughout our process. During wireframes, after the wireframes. During the look and feel, or after. During final design, or subsequently. During the development stages.

There is never a reason not to do user testing. There is never a reason to postpone it.

Make no mistake, though: we don't do nearly as much user testing as we'd fancy. That's a dream we crave, but uncontrollable variables lurk, like whether or not clients are willing to invest in it.