The big advantage of using online testing tools is that once you launch a study, it runs by itself; people can participate any time (and any place) they want, and you don’t have to be there to officiate.

The flip side of this, of course, is that you can’t ask remote participants why they made certain choices or what specifically confused them in a certain task.

In many (if not most) of the tree tests we’ve run, it was not hard to figure out why certain tasks scored poorly and why certain parts of the tree did not work well. There are cases, though, where we’ve looked at some very low scores, inspected the tasks in question, gone back and studied the tree, and still weren’t sure why participants were getting it wrong.

Running an in-person session

In these cases, we’ve found it very helpful to follow up the remote study with an identical study run as moderated sessions. These can be done in person, over a screen-sharing/audio app (e.g. Skype), or even via a simple phone call while the participant sits in front of their computer. If the tree test normally takes 5 minutes, we schedule a 15-minute call to allow time for both the test and the follow-up discussion.

Each session is run much like a standard usability test:

Recording results


For more tips on in-person testing, see Nick Bowmast’s short article.

Next: Closing the test