A free comprehensive guide for evaluating site structures

Tree testing is very much like usability testing – we ask participants to try typical tasks, except that we present them with a simple text tree rather than a real website.
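To make "a simple text tree" concrete, here is a minimal sketch in Python. The site structure and all headings are invented for illustration – a real test would use your own tree. It renders nested headings as the indented text a participant might explore:

```python
# A hypothetical site structure, represented as nested dicts.
# All headings here are invented examples, not from any real site.
site_tree = {
    "Home": {
        "Products": {"Laptops": {}, "Phones": {}},
        "Support": {"Contact us": {}, "Returns": {}},
        "About": {},
    }
}

def render_tree(node, depth=0):
    """Return the indented text lines a participant would see."""
    lines = []
    for heading, children in node.items():
        lines.append("  " * depth + heading)
        lines.extend(render_tree(children, depth + 1))
    return lines

print("\n".join(render_tree(site_tree)))
```

In a real tree test, participants click through these headings one level at a time; the point of the sketch is just that the tree is plain text, with no page design to influence them.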

To run a tree test, we typically do the following:

1. Plan the test

Before we do anything else, we need to answer a few basic questions such as “What are we trying to find out?”, “Which tree ideas should we test?”, “Who will we ask to participate?”, and so on.

For more on this, see Chapter 4 - Planning a tree test.

2. Prepare the tree(s)

Whether we’re testing an existing structure (as a baseline) or a new one (as described in Chapter 5 - Creating trees), we’ll need to decide if we’re testing the whole tree or not, which headings to include/exclude, how to spot missing content, and so on.

For more on this, see Chapter 6 - Preparing a tree for testing.
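Deciding which headings to include or exclude can be scripted, so the tested tree stays in sync with the full structure. A minimal sketch, assuming a nested-dict representation of the tree and an invented exclusion list:

```python
# Hypothetical full structure; all headings are invented for illustration.
full_tree = {
    "Products": {"Laptops": {}, "Phones": {}, "Internal price list": {}},
    "Support": {"Contact us": {}},
}

# Headings we have decided to leave out of the test
# (e.g. content participants would never be asked to find).
excluded = {"Internal price list"}

def prune(node, excluded):
    """Return a copy of the tree with excluded headings
    (and their entire subtrees) removed."""
    return {
        heading: prune(children, excluded)
        for heading, children in node.items()
        if heading not in excluded
    }

test_tree = prune(full_tree, excluded)
```

Keeping the exclusions in one list makes it easy to revisit them later – for example, when retesting a revised tree in step 8.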

3. Write tasks for users to do

We typically pick the most common and critical activities for our various user groups, while making sure we cover the parts of the tree that are most in doubt. And we need to be careful about task wording, to avoid confusing users or giving away the answer.

For more on this, see Chapter 7 - Writing tasks. 

4. Set up the test using our chosen tool

Online tree-testing tools offer several features beyond the bare test itself, including survey questions, integration with online user panels, and so on.

For more on this, see Chapter 8 - Setting up a test. 

5. Recruit participants

We need to decide how many participants we need from each user group, how to invite them (e.g. web ads, email blasts, etc.), whether to offer incentives, and so on.

For more on this, see Chapter 9 - Recruiting participants. 

6. Pilot and run the test

We preview the test to shake out the bugs, then launch it and keep an eye on its progress, monitoring response rates, initial scores, and drop-out rates.

For more on this, see Chapter 10 - Piloting the test and Chapter 11 - Running the test.

7. Analyze the results

The online tools handle most of the analytical gruntwork, but we still need to know what to look for – success rates, backtracking, slow response times, and patterns across tasks. And we need to be able to communicate our results to our project team and management.

For more on this, see Chapter 12 - Analyzing results and Chapter 13 - Communicating results.
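The per-task numbers that the online tools report can be reproduced from raw result records. A minimal sketch, using invented data, where each record says whether the participant reached a correct destination and whether they backtracked along the way:

```python
# Hypothetical raw results for one task; values invented for illustration.
# Each record: (reached_correct_answer, backtracked_at_least_once)
results = [
    (True, False), (True, True), (False, True),
    (True, False), (False, False), (True, True),
]

n = len(results)

# Success rate: fraction of participants who ended at a correct answer.
success_rate = sum(1 for ok, _ in results if ok) / n

# Directness: successes with no backtracking, a common tree-test metric.
directness = sum(1 for ok, back in results if ok and not back) / n

print(f"success: {success_rate:.0%}, direct success: {directness:.0%}")
```

Comparing these two numbers is often revealing: a task can have a decent success rate but low directness, which suggests participants found the answer only after wandering.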

8. Revise our tree and retest

Our analysis will have suggested several parts of the tree that need fixing. But after we make those changes, we should retest to make sure we got things right.

 


Next: Chapter 2 - Key points
