"But I still haven't found what I'm looking for" - U2


In conventional usability testing, we test a UI by having participants perform tasks – things we want them to find or do, just as they would when using that UI for real.

Tree testing is no different. We don’t want the participant to just wander through the tree, giving us their opinions on how easy it would be to find things. We want to simulate what it’s really like to look for something – something specific – using the top-down hierarchy of the site.

So, we ask our participants to start at the top of the tree, and we give them a definite item to find. In fact, we give them a series of tasks – enough to test several parts of the tree in several different contexts, but not so many that they get tired or grumpy, and not so many that they learn the tree more than a real site visitor would.

We need to make each task concise and unambiguous. Our participants should be able to understand the task quickly and interpret it the way we intended.

In this chapter, we’ll cover how to decide which tasks to include in our tree test, and how to make sure each one is clear to our participants.



Which tasks to include?

Common and critical tasks, problem areas, and borrowing

How many tasks?

As many as needed to cover major areas, but <10 per participant

Mapping tasks to the tree

Collecting task ideas, refining them, and checking coverage

Different tasks for different user groups

Reasonable pretending, mixing and splitting groups

Collaborating on tasks

Divide and conquer with multi-user editing

Writing a good task

8 tips on creating effective and unambiguous tasks

Identifying correct answers

Multiple answers, intermediate nodes, and judging correctness

Entering tasks and their answers

Copying from a spreadsheet to an online tool

Randomizing the order of tasks

Do this to reduce the learning effect on your results

Letting participants skip tasks

Almost always a good idea

Asking questions after a task


Key points