"But I still haven't found what I'm looking for" - U2
In conventional usability testing, we test a UI by having participants perform tasks - things that we want them to find or to do, just as they would when using that UI for real.
Tree testing is no different. We don’t want the participant to just wander through the tree, giving us their opinions on how easy it would be to find things. We want to simulate what it’s really like to look for something – something specific – using the top-down hierarchy of the site.
So, we ask our participants to start at the top of the tree, and we give them a definite item to find. In fact, we give them a series of tasks – enough to test several parts of the tree in several different contexts, but not so many that they get tired or grumpy, and not so many that they learn the tree more than a real site visitor would.
And our job, for each task, is to make it concise and unambiguous. Our participants should be able to understand the task quickly, and interpret it the way we intended.
In this chapter, we’ll cover how to decide which tasks to include in our tree test, and how to make sure each one is clear to our participants.
Which tasks to include?
Common and critical tasks, problem areas, and borrowing
How many tasks?
As many as needed to cover major areas, but <10 per participant
Mapping tasks to the tree
Collecting task ideas, refining them, and checking coverage
Different tasks for different user groups
Reasonable pretending, mixing and splitting groups
Collaborating on tasks
Divide and conquer with multi-user editing
Writing a good task
8 tips on creating effective and unambiguous tasks
Identifying correct answers
Multiple answers, intermediate nodes, and judging correctness
Entering tasks and their answers
Copying from a spreadsheet to an online tool
Randomizing the order of tasks
Do this to reduce the learning effect on your results
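To make the randomization idea concrete, here is a minimal sketch of giving each participant the same set of tasks in a different order (the task strings, function name, and seeding scheme are hypothetical; in practice, most online tree-testing tools do this for you automatically):

```python
import random

# Hypothetical task list -- in a real study these come from your task spreadsheet.
tasks = [
    "Find the store's return policy",
    "Look up opening hours for the downtown branch",
    "Find out how to track an order",
]

def task_order_for(participant_id: int, tasks: list) -> list:
    """Return the tasks in a per-participant random order, so the
    learning effect is spread evenly across the task list."""
    rng = random.Random(participant_id)  # seeded so each participant's order is reproducible
    shuffled = tasks[:]                  # shuffle a copy, not the master list
    rng.shuffle(shuffled)
    return shuffled

# Every participant sees the same tasks, just in a different sequence.
print(task_order_for(1, tasks))
print(task_order_for(2, tasks))
```

Seeding by participant ID is just one way to do it; the point is only that no single task always benefits from (or suffers from) being first or last.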
Letting participants skip tasks
Paper - creating task cards
Almost always a good idea
Asking questions after a task