If we’re running more than one test and need to split participants between them, we can provide mutually exclusive links or use code to assign each participant to one test at random.
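For the code-based split, a minimal sketch of random assignment (the study URLs and function name here are illustrative, not from any particular tree-testing tool). Seeding with the participant’s ID keeps the assignment stable if they revisit the link:

```python
import random

# Hypothetical study links -- one per tree-test variant.
STUDY_LINKS = [
    "https://example.com/tree-test-a",
    "https://example.com/tree-test-b",
]

def assign_participant(participant_id: str) -> str:
    """Assign a participant to one variant.

    Seeding the generator with the participant's ID makes the
    assignment deterministic, so repeat visits get the same link.
    """
    rng = random.Random(participant_id)
    return rng.choice(STUDY_LINKS)
```

Over many participants this yields a roughly even split; if an exact split matters, a round-robin counter on the server is the safer choice.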

When timing our launch, we should consider our audiences' availability.

Monitor the study every few days to see if there are any obvious problems, such as low or unbalanced response rates, lower- or higher-than-expected success rates, or high drop-out rates.
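The checks above are simple enough to script. A sketch of what that monitoring might look like, with made-up thresholds (the 20% balance tolerance and 70% completion floor are assumptions, not recommendations from any tool):

```python
def study_warnings(variant_counts: dict[str, int],
                   completions: int,
                   min_rate: float = 0.7,
                   balance_tol: float = 0.2) -> list[str]:
    """Flag obvious problems: unbalanced response rates between
    test variants, and high drop-out (low completion) rates."""
    warnings = []
    started = sum(variant_counts.values())
    if started == 0:
        return ["no responses yet"]

    # Unbalanced split: a variant far from its expected share.
    expected = started / len(variant_counts)
    for variant, n in variant_counts.items():
        if abs(n - expected) / expected > balance_tol:
            warnings.append(f"{variant}: unbalanced ({n} vs ~{expected:.0f} expected)")

    # High drop-out: too few participants finishing the test.
    rate = completions / started
    if rate < min_rate:
        warnings.append(f"high drop-out: only {rate:.0%} completed")
    return warnings
```

For example, `study_warnings({"A": 50, "B": 50}, 95)` reports nothing, while `study_warnings({"A": 80, "B": 20}, 50)` flags both the lopsided split and the drop-out rate.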

Keep stakeholders updated on the progress of the study, particularly if there are problems that they can help fix.

If we encounter low scores that we can’t explain, we should consider running a few in-person tree tests so we can probe the participants’ behaviour and thinking.

Once we close the test, don’t forget the housekeeping (taking down web ads, doing the prize draw, and so on).

Next: Chapter 12 - Analyzing results