"All the right junk in all the right places" - Meghan Trainor

OK, so we’ve prepared our tree, created a good list of tasks, set it all up in a testing tool, and figured out which users we’re going to target.

We’re ready to go, right?

Ah, not quite. There’s something wrong with our study.

It’s likely there’s at least one glitch in the study that we haven’t discovered yet; the trouble is, we don’t know what it is. At this point, we can either:

  • Launch the study anyway and let our participants find the problem for us (and maybe muck up the data as a result), or

  • Take an afternoon to pilot the study, find the glitches, then launch a slightly revised study that gives us higher-quality results.

Luckily, doing a test run of a study is simple, and the problems we’ll find are usually easy to fix.

In this chapter, we’ll cover:

  • Trying out the task wording: a pointer to Chapter 7

  • Previewing a test: trying it out in almost-real conditions

  • Running a pilot test: who should participate, online vs. in person, and getting feedback

  • Checking for technical problems: dealing with spam blockers, mobile devices, old browsers, and firewalls

  • Revising the test: editing vs. duplicating, and deleting pilot results

  • Key points