A free comprehensive guide for evaluating site structures



“How many people do we need to get?”


This is the #1 question we hear when we help clients run online studies. And because we’re consultants, our stock answer is “It depends.”

The simple answer is:

Aim for about 50 participants per user group.


The more sophisticated answer is:

  • 30 participants will start showing patterns in the results, but it will be hard to know what to do with “small effects” because we don’t have enough participants to know if these are real effects or just outliers.

  • 50-100 participants will make the patterns much clearer, and we’ll be able to identify which results are significant and which can be discarded as noise.

  • Hundreds of participants give diminishing returns, and we’re potentially “using up” participants who might be better employed as fresh participants in a subsequent round of testing.

For a more rigorous look at how many participants we should aim for, see this MeasuringUsability article on tree testing.

Counting by user group

Most products/websites have more than one major type of user. In Which part of the tree? in Chapter 6, for example, we saw that the Shimano website has 3 user groups – cyclists, anglers, and rowers.

If our study covers several user groups, we’ll ideally want about 50 participants for each group. That way, we can filter the results by user group and still have enough data to see clear patterns in the results.

(We’ll also need a way of identifying which participants belong to which user group. We often do this with survey questions – see Adding survey questions in Chapter 8.)

Some user groups are more important than others, and our pool of participants is often limited, so we try to get more participants from our major groups. If we end up with too few participants of a less important group, that’s something the project team can probably live with.
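The arithmetic here is simple but worth making explicit. A minimal sketch, using the Shimano example from Chapter 6 and the ~50-per-group guideline above (the group names and the flat 50-per-group target are illustrative assumptions, not a prescription):

```python
# Illustrative recruiting targets for a multi-group tree test,
# assuming ~50 participants per user group (the guideline above).
TARGET_PER_GROUP = 50
user_groups = ["cyclists", "anglers", "rowers"]  # Shimano example groups

targets = {group: TARGET_PER_GROUP for group in user_groups}
total_recruits = sum(targets.values())

print(targets)         # per-group targets
print(total_recruits)  # 150 across all three groups
```

In practice we would weight these targets toward the most important groups when the participant pool is limited, as noted above, rather than recruiting evenly.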


More participants for fewer questions

The other factor that affects how many participants we need is how many tasks (out of the total) each participant does.

If each participant is only asked a subset of tasks, we’ll need proportionally more participants.


To understand this, let’s consider two cases:

  • Each participant does all tasks.
    Suppose we have 10 tasks in our tree test – that is, 10 things that we want our participants to find, and this provides adequate coverage of the important parts of the site tree.
    Suppose we decide that each participant should do all 10 tasks. This is a reasonable number because, as we saw in How many tasks? in Chapter 7, 10 tasks makes for a quick test and minimizes the learning effect.
    Because each participant is doing all the tasks, we would simply aim for 50 participants of each user group.

  • Each participant does half the tasks.
    Suppose now that we actually wrote 20 tasks, perhaps because the tree is large and 10 tasks just wasn’t enough to test everything we wanted to cover.
    If we asked each participant to do all 20 tasks, the number-of-participants answer is the same – about 50 per user group.
    However, we saw in the Tasks chapter that it’s not a good idea to ask participants to do that many tasks: it takes too long, they get bored or tired, and they are more likely to “learn” the tree (which skews the results).
    If we did the prudent thing and asked each participant to do 10 tasks, that’s half the tasks in the test, so we would need twice the number of participants (about 100) so that each task is still “hit” by the 50 responses we’re aiming for.

In the end, the formula for this is simple: divide our total number of tasks by the number of tasks per participant, and that gives us a multiplier for how many participants we need. For example, if we have 20 tasks total and we ask 10 per participant, the multiplier is (20 ÷ 10) = 2, so we’ll need 2 times the normal number of participants. If we want 50 responses per task (the recommended target), that means we’ll need to recruit (50 × 2) = 100 participants in total.
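The multiplier rule above can be captured in a few lines. This is just a sketch of the arithmetic described in this section (the function name and its default of 50 responses per task are ours, not part of any tree-testing tool):

```python
def participants_needed(total_tasks, tasks_per_participant,
                        responses_per_task=50):
    """Estimate participants needed per user group for a tree test.

    Each participant sees only tasks_per_participant of the total_tasks,
    so we multiply the per-task target by (total / per-participant).
    """
    multiplier = total_tasks / tasks_per_participant
    return round(responses_per_task * multiplier)

# 20 tasks total, 10 per participant, 50 responses per task:
print(participants_needed(20, 10))  # 100

# Each participant does all 10 tasks: no multiplier needed.
print(participants_needed(10, 10))  # 50
```

If the study covers several user groups, this figure applies to each group separately, as discussed above.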


Next: Different user groups

