...

Low participant numbers are the most common worry for any online study.

If we’re halfway through the test period and we only have a quarter of the participants we hoped for, we may need to work harder to find people.

  • Email: If we only sent an initial batch of email invitations, we send another batch.

  • Web ads: We re-check that our ad is highly visible and presents a concise, attractive proposition. We also consider putting it on more web pages and (if possible) on more websites to get more views.

  • Social media: We often repeat our invitation on our social networks halfway through the testing period.

  • Incentive: If we suspect that our incentive is not big enough (and we’ve done everything else we can to boost our responses), we consider increasing it. If management reduced our planned incentive because they thought it was excessive, we may want to revisit that decision with them (with the data in hand).


Missing user groups

When we target several user groups in a single tree test, sometimes we get lots of people from groups A and B, but hardly any from group C. If we included a survey question that identified the participant’s user group, we can check that now to see if any groups are lagging and need more recruiting effort.

Obviously, the best way to boost a certain group’s numbers is to invite more of that group.

  • If we have group-specific email batches that we haven’t sent yet, that’s the easiest thing to do.

  • If we can place an ad on websites that this group frequents, that should also help boost our numbers.

  • If this group is likely to have some kind of organization that they belong to (a trade association, meet-up group, special-interest forum, etc.), we may want to approach the organization’s administrator and ask for help.
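To make the interim check concrete, here’s a minimal sketch of comparing per-group counts against recruiting targets. The group names, targets, and responses are hypothetical; in practice the responses would come from the survey question mentioned above.

```python
# A minimal sketch, using hypothetical targets and responses, of spotting
# user groups that are lagging behind their recruiting targets.
from collections import Counter

targets = {"A": 50, "B": 50, "C": 50}             # planned participants per group
responses = ["A"] * 40 + ["B"] * 35 + ["C"] * 10  # survey answers collected so far

counts = Counter(responses)
for group, target in targets.items():
    if counts[group] < target / 2:  # lagging at the halfway mark
        print(f"Group {group}: {counts[group]}/{target} - needs more recruiting")
```

Running this flags only group C, which is the cue to send that group’s remaining email batches or place targeted ads.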


...

Earlier we talked about splitting users randomly among tests. Usually this is an even split (e.g. two tests would each have a 50% chance of being selected), but sometimes we find that, halfway through the test period, test A has two thirds of the responses for some reason.

Whether we used code or a set of arbitrarily split links, we can change this partway through the test to even up the numbers.

  • If we used code, we can change it from a 50/50 split to 80/20 in favor of the under-supplied group.

  • If we used a set of links, we can change the split from something like "first name A-M, first name N-Z" to "first name A-E, first name F-Z" so that the first test now gets 20% of the clicks, while the second test (the one that’s lagging) gets 80%.
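The code-based approach can be sketched like this (the URLs are placeholders; a real splitter page would redirect to whatever links our testing tool provides). Changing the weights from 50/50 to 20/80 sends most new participants to the lagging test without touching the published invitation link.

```python
# A minimal sketch of a weighted split between two running tests.
import random

tests = [
    ("https://example.com/tree-test-a", 20),  # over-supplied test
    ("https://example.com/tree-test-b", 80),  # lagging test
]

def pick_test(tests, rand=None):
    """Return a test URL chosen in proportion to its weight."""
    total = sum(weight for _, weight in tests)
    threshold = (random.random() if rand is None else rand) * total
    for url, weight in tests:
        threshold -= weight
        if threshold < 0:
            return url
    return tests[-1][0]  # guard against floating-point edge cases

# The splitter page would redirect each new visitor to pick_test(tests).
```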

 

Low success rates at first

Besides the number of participants, the other big thing we’re sure to check is the scoring – how well our tree is performing overall, and how individual tasks are doing.

Very often, we’ll be surprised (and appalled) by how low the interim scores are. Some part of the low scores will be justified – especially in a first-round test of a new tree, parts of that tree will simply not work well for our participants. Testing simply lets us identify the parts that need rethinking.

...

  • Some tasks may be confusing or misleading.
    This is especially likely if we didn’t properly pilot our test. Some tasks are hard to phrase clearly without giving away the answer, but remember that a confusing task is a problem in the study, not necessarily a problem in the tree itself. We shouldn’t change the wording during the test, but we should revise it in our next round of testing.

  • Some correct answers aren’t marked as “correct”.
    After doing hundreds of tree tests, we still run into this wrinkle all the time. When we set up each task, we try to mark all the correct answers for it. However, in a large tree, each task may have several correct answers, and it’s likely we’ll miss a few.
    Because of this, a good testing tool should let us (as the test administrators) change which answers are correct for each task, either while the test is running or afterward when we’re doing the analysis. We often find that test scores go up substantially when we do this post-test correction. For more, see Cleaning the data in Chapter 12.
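The post-test correction above amounts to re-scoring tasks after widening the set of correct answers. Here’s a minimal sketch with hypothetical task data and tree locations:

```python
# A minimal sketch, using hypothetical data, of re-scoring a task after
# marking an answer we initially missed as correct.
correct_answers = {
    "task1": {"Products > Widgets"},  # as originally set up
}
participant_answers = {
    "task1": ["Products > Widgets", "Shop > Widgets", "Shop > Widgets"],
}

def success_rate(task):
    """Fraction of participants whose answer is in the correct set."""
    answers = participant_answers[task]
    hits = sum(a in correct_answers[task] for a in answers)
    return hits / len(answers)

print(f"before: {success_rate('task1'):.0%}")  # only 1 of 3 counted correct
correct_answers["task1"].add("Shop > Widgets")  # also a valid location
print(f"after:  {success_rate('task1'):.0%}")  # all 3 now counted correct
```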


...

Ideally, a high task score means that we did our job well when we created the tree.

Unfortunately, it can also mean that we included a “giveaway” word (and didn’t spot it during piloting). If we did, then this isn’t a fair measure of the real-world effectiveness of the tree.

Again, we shouldn’t edit the task’s wording while the test is running (unless we spot it very early); fix it in the next round.

 

High drop-out rates

We may find that lots of participants start our study, but many drop out before they finish it.

We’ll always have some drop-off (it’s the nature of online studies), but if it exceeds about 25%, we should investigate.

  • At the explanation page
    If our web ads or email invitations link to an explanation page, we can use web analytics to compare how many people visit that page to how many actually start the tree test itself. A large drop-off here indicates that the explanation page is either confusing, hard to scan, or the “start” link is not obvious. (Is it prominent and above the fold?)

  • During the test itself
    If a person makes it to the tree test itself, try to find out where they drop out. (Unfortunately, most testing tools do a poor job of helping us in this regard.)

    If it’s during the welcome/instructions stage, they may be finding these pages confusing, too long, or simply not what they expected. We can check this by trying the test with a few people in person to see where the problem lies.

    If they drop out during the tasks, it could be caused by:

    • Having too many tasks (seeing “1 of 26” is daunting)

    • Presenting tasks that are confusing (“Forget this, it's just too hard”)

    • Or simply because this is the first time they’ve done a tree test and they’re not sure what to do. (Better instructions may help, but some people will leave no matter how well we explain it.)
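The drop-off arithmetic at each stage can be sketched as follows. The counts are hypothetical; real numbers would come from web analytics and the testing tool, and the 25% threshold is the rule of thumb mentioned above.

```python
# A minimal sketch, with hypothetical counts, of checking drop-off at each
# stage of the funnel against the ~25% rule of thumb.
def dropoff(started, finished):
    """Return the drop-off rate as a fraction of those who started."""
    return (started - finished) / started

stages = {
    "explanation page -> test start": (400, 260),
    "test start -> tasks finished": (260, 210),
}

for stage, (started, finished) in stages.items():
    rate = dropoff(started, finished)
    flag = "investigate" if rate > 0.25 else "ok"
    print(f"{stage}: {rate:.0%} drop-off ({flag})")
```

With these numbers, the explanation page loses 35% of visitors (worth investigating), while the test itself loses about 19% (within the normal range).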

 

...

Next: Keeping stakeholders informed

...