
The big advantage of using online testing tools is that once you launch a study, it runs by itself; people can participate any time (and any place) they want, and you don’t have to be there to moderate the session.

The flip side of this, of course, is that you can’t ask remote participants why they made certain choices or what specifically confused them in a certain task.

In many (if not most) tree tests that we have run, it was not hard to figure out why certain tasks scored poorly and certain parts of the tree did not work well. There are cases, though, where we’ve looked at some very low scores, inspected the tasks in question, gone back and studied the tree, and we still weren’t sure why participants were getting those tasks wrong.

Running an in-person session

In these cases, we've found it very helpful to follow up the remote study with an identical study run as moderated sessions. These can be done in person, via a screen-sharing/audio app (e.g. Skype or Google Hangouts), or even over a simple phone call while the participant sits in front of their computer.

If the tree test normally takes 5 minutes, we schedule a 15-minute call to allow for testing and discussion.

Each in-person session is run much like a standard usability test:

  • We ask the participant to do the tree test (using the testing tool) and encourage them to “think aloud” as they go.

  • If they struggle to find a correct answer, we ask neutral questions to find out why they’re having trouble (“Tell me what you’re thinking now…tell me more about that…” and so forth).

  • While not every participant will have trouble with the tasks we’re concentrating on, we hope to get a few who do, so we can find the cause of the problem.

For more handy tips on testing in person, see Nick Bowmast’s short article.
 

Recording results

Another way to see why participants are making puzzling choices is to employ a remote usability-testing tool that records a screencast of the user's session. (UserTesting.com and TryMyUI.com are popular examples.)

Instead of using this type of tool to test a website, you give the service the web address of your tree test, and you get back screen recordings of users doing your study. Because you can instruct participants to "think aloud", you'll be able to hear their reasoning (and any confusion or misunderstanding) as they click through your tasks.

See this UserTesting.com blog post for an example of how recording remote sessions helped the author understand what needed fixing in their tree test.



Next: Closing the test

 
