Running a pilot test

Previewing the test ourselves is a good first step, but we all know from trying to proofread our own content that we really need someone else to spot our mistakes.

That’s why, in our studies, we always run a pilot test with a small group of people before launching the real test. Because they bring fresh eyes to the study, this initial group will find things that we missed – confusing task wording, typos, and so on.

We want to make the pilot as realistic as possible, so we use the same invitation we normally would, tweaked to say that this is a dry run and that the incentive (if any) doesn’t apply to it.

Who should participate

There are 2 types of people who should pilot a tree test:

  • Project stakeholders
    It’s good for team members and sponsors to see what made it into the study (and what didn’t), and this gives them a final chance to approve (or raise issues with) the tree and tasks.

  • Representative users
    If possible, we include a few users in our pilot as well. These could be actual users, or surrogates such as customer-service staff or friends and family who resemble the target audience.
    Getting some real users means that we can double-check that our tree and tasks are written in language that they understand (not just the jargon of the organisation and industry).

Running an in-person session

Even though most tree tests are run as online (remote) studies, the absolute best way of piloting a test is in person.

This is because the feedback is quicker and easier to gather:

  • The pilot participant can simply think aloud as they do the tree test – they don’t have to take the time to write down their feedback.

  • We (as the testers) can jot down the feedback we think is useful, and can ask the participant to clarify their remarks when necessary. We can also spot problems that the participant doesn’t notice themselves (such as misunderstanding the meaning of a task).

However, in-person sessions do take more time, so it’s usually not practical to do all pilot sessions this way. Typically we run 1 or 2 in-person pilot sessions, then get the rest of the feedback by emailing the pilot invitation to a wider pilot audience.

Getting feedback from participants

When we “launch” the pilot test and invite these people to try it out, we make 2 things clear to them:

  • They should report anything wrong, missing, or confusing.
    It’s better to get too much feedback than too little, so we encourage pilot users to tell us about anything that could be improved. We may not make all the changes they suggest, but we don’t want to miss something just because they think it’s too minor to report.
    In most cases, it’s easiest to have them email us their feedback. If we’ve included a comment field as a post-test survey question, we also check that, in case they entered their feedback there.

  • They’re not eligible for the study’s reward.
    The invitation and test may mention a payment or prize draw, but we make sure that they know this reward is only for the real participants in the real test.

Next: Checking for technical problems

Copyright © 2016 Dave O'Brien

This guide is covered by a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.