A free comprehensive guide for evaluating site structures


 "But I still haven't found what I'm looking for" - U2


In conventional usability testing, we test a UI by having participants perform tasks: things that we want them to find or to do, just as they would when using that UI for real.

Tree testing is no different. We don’t want the participant to just wander through the tree, giving us their opinions on how easy it would be to find things. We want to simulate what it’s really like to look for something – something specific – using the top-down hierarchy of the site.

So, we ask our participants to start at the top of the tree, and we give them a definite item to find. In fact, we give them a series of tasks – enough to test several parts of the tree in several different contexts, but not so many that they get tired or grumpy, and not so many that they learn the tree more than a real site visitor would.

And our job, for each task, is to make sure that it’s concise and unambiguous. We need each participant to understand the task quickly, and to interpret it the same way we meant it when we wrote it.

In this chapter, we’ll cover how to decide which tasks to include in our tree test, and how to make sure each one is clear to our participants.

Which tasks to include?

Common and critical tasks, problem areas, and borrowing

How many tasks?

As many as needed to cover the major areas, but fewer than 10 per participant

Mapping tasks to the tree

Collecting task ideas, refining them, and checking coverage

Different tasks for different user groups

Reasonable pretending, mixing and splitting groups

Collaborating on tasks

Divide and conquer with multi-user editing

Writing a good task

8 tips on creating effective and unambiguous tasks

Identifying correct answers

Multiple answers, intermediate nodes, and judging correctness

Entering tasks and their answers

Copying from a spreadsheet to an online tool

Randomizing the order of tasks

Do this to reduce the learning effect on your results
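As a rough illustration of what randomization means in practice (a minimal sketch, not the implementation of any particular tree-testing tool; the task list and participant IDs here are hypothetical), each participant gets the same tasks in an independently shuffled order:

```python
import random

# Hypothetical task list for a library website tree test
TASKS = [
    "Find today's opening hours",
    "Renew your library card",
    "Book a meeting room",
]

def task_order_for(participant_id: int) -> list[str]:
    """Return this participant's tasks in a random order.

    Seeding with the participant ID keeps each participant's order
    stable across sessions while still varying between participants.
    """
    rng = random.Random(participant_id)
    order = TASKS[:]       # copy so the master list is untouched
    rng.shuffle(order)     # Fisher-Yates shuffle
    return order
```

Because every participant sees a different ordering, any learning effect from earlier tasks is spread evenly across the task set instead of always favoring the tasks that come last.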

Letting participants skip tasks

Almost always a good idea

Asking questions after a task


Key points
