For subsequent rounds of testing, a common question is “Do we need to get fresh participants, or should we use people who have already done an earlier test?”

The answer here is the same as for most other forms of user testing: ideally, we prefer fresh participants because they haven’t seen the tree (in an earlier revision) or the tasks before, so they should be untainted (so to speak).

If we have a large audience to draw from, we can certainly do some screening to make sure we’re not accepting people who participated in an earlier round of tree testing. (For more on screening, see Screening for specific participants in Chapter 9.)

In the real world, however, we often don't have a huge pool to draw from, so we often forgo this screening just to get the minimum numbers we need to produce clear results. We think this is acceptable because the "experienced" participants don't really have a big advantage over the fresh participants: the tree may have changed substantially since the last test, the tasks may have been revised, and enough time has probably passed (say, a week or two) that their recall is only partial.

If we're concerned about reusing participants, we can also add a survey question to the second test, asking if they participated in the first test. Later, during analysis, we can filter the results to see if the "veterans" did significantly better than the first-timers. We suspect the difference (if any) would be small enough to ignore.
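As a rough illustration of that analysis step, here's a minimal sketch (in Python) of splitting results by the survey question and comparing success rates. The field names (`participated_before`, `task_correct`) and the sample data are assumptions, not from any particular tree-testing tool's export format:

```python
# Hypothetical sketch: compare task success between returning ("veteran")
# and first-time participants. Field names and data are made up for
# illustration; a real export's columns will differ.

results = [
    {"participated_before": True,  "task_correct": True},
    {"participated_before": True,  "task_correct": False},
    {"participated_before": False, "task_correct": True},
    {"participated_before": False, "task_correct": True},
    {"participated_before": False, "task_correct": False},
]

def success_rate(rows):
    """Fraction of rows where the task was answered correctly."""
    return sum(r["task_correct"] for r in rows) / len(rows)

veterans = [r for r in results if r["participated_before"]]
first_timers = [r for r in results if not r["participated_before"]]

print(f"Veterans:     {success_rate(veterans):.0%}")
print(f"First-timers: {success_rate(first_timers):.0%}")
```

If the two rates come out close (as we'd expect), the veterans' head start can safely be ignored.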

Next: When are we done?