"All the right junk in all the right places" - Meghan Trainor
OK, so we’ve prepared our tree, created a good list of tasks, set it all up in a testing tool, and figured out which users we’re going to target.
We’re ready to go, right?
Ah, not quite. There’s something wrong with our study.
It’s likely there’s at least one glitch in the study that we haven’t discovered yet. The problem is that we don’t know what’s wrong. At this point, we can either:
- Launch the study anyway and let our participants find the problem for us (and maybe muck up the data as a result), or
- Take an afternoon to pilot the study, find the glitches, then launch a slightly revised study that gives us higher-quality results.
Luckily, doing a test run of a study is simple, and the problems we’ll find are usually easy to fix.
Trying out the task wording
- summary text here
A pointer to Chapter 7
Previewing a test yourself
- summary text here
Trying it out in almost-real conditions
Running a pilot test
- summary text here
Who should participate
- summary text here
Running an in-person session
- summary text here
Getting feedback from participants
- summary text here
Online vs. in person, getting feedback
Checking for technical problems
- summary text here
Dealing with spam blockers, mobile devices, old browsers, and firewalls
Revising the test
- summary text here
Editing vs. duplicating, deleting pilot results