
  • "And all the right junk in all the right places" - Meghan Trainor

 


 

OK, so you’ve prepared your tree, created a good list of tasks, set it all up in your testing tool, and figured out which users you’re going to target.

You’re ready to go, right?

Ah, not quite. There’s something wrong with your study.

It’s likely there’s at least one glitch you haven’t discovered yet; the problem is that you don’t know what it is. At this point, you can either:

  • Launch your study anyway and let your participants find the problem for you (and maybe muck up your data as a result), or

  • Take an afternoon to pilot your study, find the glitches, then launch a slightly revised study that gives you higher-quality results.

Luckily, doing a test run of your study is simple, and the problems you find are usually easy to fix.

 


  • Trying out your task wording: a pointer to Chapter 7

  • Previewing a test yourself: trying it out in almost-real conditions

  • Running a pilot test: who should participate, online vs. in person, getting feedback

  • Checking for technical problems: dealing with spam blockers, mobile devices, old browsers, and firewalls

  • Revising the test: editing vs. duplicating, deleting pilot results

Key points
