Another result we can measure is speed (time taken). There are two metrics in play here: task times and click times.

In both cases, high speed suggests confidence, or at least clarity – the participant found it easy to choose between competing headings. Conversely, if they took a long time to decide, this suggests that it was harder to understand the headings, and/or harder to choose between them.

Task times

For a given task, some tools show us the average (or median) time it took participants to complete it:


By itself, this is just a number. But if the tool also provides an average time across all tasks (or if we calculate it ourselves), we can then spot which tasks took substantially longer to complete. We can then drill into these tasks to look for possible causes.
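As a rough sketch of that comparison (in Python, with made-up task names and times, and an illustrative "1.5× the overall average" threshold that isn't from any particular tool):

```python
# Given per-participant completion times for each task, flag tasks that
# took substantially longer than the average across all tasks.
from statistics import mean

task_times = {                       # seconds per participant, per task
    "Find store hours":    [14, 18, 12, 16],
    "Return a product":    [35, 41, 52, 38],
    "Update your address": [15, 13, 17, 14],
}

task_avgs = {task: mean(times) for task, times in task_times.items()}
overall_avg = mean(task_avgs.values())

# Flag tasks whose average time is well above the overall average.
slow_tasks = [t for t, avg in task_avgs.items() if avg > 1.5 * overall_avg]
print(slow_tasks)  # → ['Return a product']
```

The flagged tasks are the ones worth drilling into for possible causes.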

While task time is the most obvious measure of speed, it’s problematic because some answers sit deeper in the tree than others. Some tasks take only a few clicks to complete (if the answer is on level 3 of the tree, for example), while others may take more clicks (if it’s on level 5), so the total time is affected. For example:

For this reason, we recommend treating task times with a grain of salt, and trying to factor in how many clicks were involved.

Click times

Ideally, we want to flag moments when clicks slowed down – where a participant took longer than they usually do between clicks. If a participant falls below their usual “click pace” during a task, that's an indication that the participant took longer to understand their choices and make a decision.

The task’s speed score can then be calculated as the percentage of participants who didn’t slow down significantly during that task. For example, we could decide that "slowing down" means at least one click time more than one standard deviation above that person's average click time across the entire test.
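A minimal sketch of that calculation, assuming the tool exports raw click times per participant (the data shapes and numbers below are invented for illustration):

```python
from statistics import mean, stdev

def speed_score(task_clicks, all_clicks):
    """Percentage of participants with no significant slowdown in a task.

    task_clicks / all_clicks map participant -> list of click times (seconds);
    task_clicks covers just this task, all_clicks the entire test.
    """
    steady = 0
    for person, clicks in task_clicks.items():
        times = all_clicks[person]
        threshold = mean(times) + stdev(times)   # one SD above their average
        if all(c <= threshold for c in clicks):  # no click crossed the line
            steady += 1
    return 100 * steady / len(task_clicks)

# Made-up data: p2 takes an unusually long 10 s on one click in this task.
all_clicks  = {"p1": [2, 3, 2, 3, 2, 3], "p2": [4, 5, 4, 5, 4, 10]}
task_clicks = {"p1": [2, 3], "p2": [4, 10]}
print(speed_score(task_clicks, all_clicks))  # → 50.0
```

Because the threshold is computed per participant, a naturally slow clicker isn't penalised – only deviations from their own pace count.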

This is a better measure of speed than the “task time” described above, because:

This works because the speed is measured relative to a single participant. To spot a slowdown, we look for tasks where that person took a long time between clicks – and by “long”, we mean longer than that same person took for other tasks. For example:

For a given participant, that slow spot might have been caused by any number of things – a tough choice in the tree, the doorbell ringing, anything really. But if we look at all the participants who did that task, we might see that most of them were slow at that same spot. That suggests the headings there were hard to choose between for that task – a valuable thing to know.

Why they slowed down

When we spot a task with a poor speed score, we need to find out if there are specific locations in the tree that are to blame. If the tool provides a way to inspect click times for tree headings (either with a graph or raw data that we can process), we can determine which parts of the tree are bottlenecks.
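If the tool exports raw data, one way to find those bottlenecks is to group click times by the heading being considered and rank headings by how long participants lingered there. A sketch, assuming rows of (participant, heading, seconds) – the headings and numbers are invented:

```python
# Group click times by the heading participants were deciding at,
# then rank headings from slowest to fastest by median decision time.
from collections import defaultdict
from statistics import median

clicks = [
    ("p1", "Products", 2.1), ("p1", "Returns & refunds", 9.4),
    ("p2", "Products", 1.8), ("p2", "Returns & refunds", 11.2),
    ("p3", "Products", 2.5), ("p3", "Returns & refunds", 8.7),
]

by_heading = defaultdict(list)
for _, heading, secs in clicks:
    by_heading[heading].append(secs)

# The headings at the top of this list are the likely bottlenecks.
for heading, times in sorted(by_heading.items(), key=lambda kv: -median(kv[1])):
    print(f"{heading}: {median(times):.1f}s")
```

Using the median rather than the mean keeps one distracted participant (that doorbell again) from skewing a heading's score.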

Once we locate the heading where participants are slowing down, there may be several reasons why it’s a bottleneck:

