
Previewing the test yourself is a good first step, but we all know from trying to proofread our own content that you really need someone else to spot your mistakes.

That’s why, in our studies, we always run a pilot test with a small group of people before launching the real test. Because they bring fresh eyes to the study, this initial group will find things that you missed – confusing task wording, typos, and so on.

Note that we want to make this pilot as realistic as possible, so we use the same invitation we normally would, tweaked to say that this is a dry run and that any incentive doesn’t apply to it.

Who should participate

There are 2 types of people who should pilot your tree test:

  • Project stakeholders
    It’s good for team members and sponsors to see what made it (and didn’t make it) into the study, and this gives them a final chance to approve (or raise issues with) your tree and tasks.

  • Representative users
    If possible, try to include a few users in your pilot as well. These could be actual users, or surrogates such as customer-service staff or friends/family that resemble your target audience.
    Getting some real users means that you can double-check that your tree and tasks are written in language that they understand (not just the jargon of your organisation and industry).

Running an in-person session

Even though most tree tests are run as online (remote) studies, the absolute best way of piloting your test is in person.

This is because the feedback is quicker and easier:

  • The pilot participant can just talk aloud as they do the tree test – they don’t have to take the time to write down their feedback.

  • You (as the tester) can jot down the feedback that you think is useful, and can ask the participant to clarify their remarks when necessary. You can also notice problems that the participant doesn’t spot themselves (such as misunderstanding the meaning of a task).

However, in-person sessions do take more of your time, so it’s usually not practical to do all pilot sessions this way. Typically we run 1 or 2 in-person pilot sessions, then get the rest of the feedback by emailing the pilot invitation to a wider pilot audience.

Getting feedback from participants

When you “launch” the pilot test and invite these people to try it out, be sure that you make 2 things clear to them:

  • They should report anything wrong, missing, or confusing.
    It’s better to get too much feedback than too little, so encourage your pilot users to tell you about anything that could be improved. You may not make all the changes they suggest, but you don’t want to miss something just because they think it’s too minor to report.
    In most cases, it’s easiest to get feedback by asking them to email you. If you included a comment field as a post-test survey question, check that too, in case they entered their feedback there instead.

  • They’re not eligible for the study’s reward.
    The invitation and test may mention a payment or prize draw, but you need to make sure that they know this reward is only for the real participants in the real test.

 


Next: Checking for technical problems