Offering incentives

For most studies, we should offer an incentive.

One of the strengths of tree testing is that we can evaluate our proposed site structure quickly, fix it, and test it again until we get it right.

To get quick results, an incentive is almost always the way to go. We offer incentives in 90% of the studies we run because we’re not willing to wait for results to dribble in.

We’ve encountered some client organizations (usually government agencies) that balked at offering a reward for a 5-minute online study, but after running a no-reward study and seeing the glacial pace of returns for themselves, most were very willing to pay for an incentive for the next study.

We have only found a few situations where an incentive is not necessary:

  • The participants are the organization’s staff.
    This is common for intranet studies, where management considers participation to be part of the employee’s job.
    This can work if it’s made clear to employees that they’re expected to do the study. It helps if each manager makes sure their staff complete it by a certain date.
    If this is unlikely to happen, then offering a modest incentive is still a good way to get results faster.

  • The organization’s audience is very large.
    If an organization has a huge user base, and we only need a few hundred responses, we can probably get that many in a reasonably short time, even without an incentive. But we may still need to offer incentives if we want to target certain specific segments of the organization’s users.

  • The organization’s audience is very engaged.
    Many organizations think they have very loyal and dedicated users, but in our experience, only a few actually do. We can easily judge this by the response rates to previous studies they’ve done. If that rate is unusually high, we may try the study without an incentive and monitor how it goes. Otherwise, we encourage the organization to add an incentive; compared to the overall cost of the research, it’s a small expense.


Prize draws

For incentives, we usually set up a prize draw, because:

  • Draws are easy to set up.
    We just need to decide how much we want to spend and what our participants would desire at that price point.

  • They let us offer participants an enticing prize.
    For a 5-minute tree test, offering each participant a small reward (say, $5) is not usually feasible. And most people would rather have a chance at winning a big prize than the certainty of getting a very small reward.

For a standard study, we may offer a prize worth anywhere from $200 to $500, where the amount depends on:

  • How much the organization offers for other online studies (such as customer surveys). Sometimes this is the organization’s own products or services (which presumably cost them less than their value to the prize winner).

  • How much is likely to motivate the desired audience to participate. For example, we would need to offer more to orthopedic surgeons than to university students.

To save money and reduce administrative effort, we may want to combine prize draws across tests. For example, if we’re testing 3 alternative site structures (which means 3 tree tests), we can probably put all the entrants into the same prize pool. Offering a single $300 prize is a more enticing incentive than offering $100 to the winner of each test. We just need to make sure that the single prize is something that all of those participants would want.

Given a certain price point, we also need to pick a prize that our audience desires. A power company might offer $300 off the winner’s next bill, while a software company might offer a full version of their flagship program. If we’re offering a gadget such as an iPad, pitching the latest version will get more interest from our audience.

Also, we should choose a prize that appeals to all (or at least most) of our participants. Being offered a free version of a program that you already own is not much of an incentive.

Having trouble thinking up a “custom” prize for the draw? Consider one of these generic prizes:

  • Gift cards
    These could be vouchers from a specific retailer (such as Amazon or Best Buy), but that may not suit some participants. We prefer to offer generic gift cards that can be used at a variety of retailers; they essentially act as prepaid debit cards that the participant can use at any store or online.

  • Tablets
    These are popular prizes because they’re fun and useful without being as need-specific as mobile phones. We usually offer the winner a choice of an Apple iPad (or iPad Mini, if the budget is smaller) or equivalent-value Android tablet.

If we opt for a prize draw, we escape the need to reward each participant, but we do need to do a few extra administrative things:

  • State the terms and conditions of the draw
    For more on T&Cs, see Writing supporting text in Chapter 8.

  • Collect contact information for draw entrants
    We need to be able to contact the prize winner, so we’ll need some kind of contact information from each participant who wants to enter the draw.
    If our study requires an email address or participant identifier for other reasons, then we may not need to do anything extra.
    In most of our studies, however, we make anonymity the default. For participants who want to enter the draw, we ask for their email address (and promise that it won’t be used for anything else).

  • Pick the prize winner and notify them
    Once the tree test is closed, we download the list of participants and pick a winner at random. (We use an online random-number generator to do this, though a simple script works just as well; see the sketch after this list.) We then notify the lucky winner and arrange to get the prize to them.

  • Publicize the prize winner
    Assuming this was stated as a condition of the draw, we might include a picture and a short blurb about the winner in the organization’s public blog. This highlights how we’re working to make things better for our customers, and provides a way to generate interest in our next study.
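
If we’d rather script the draw than trust a random web page, a few lines of Python will do the job. This is just a minimal sketch: it assumes the entrants were exported as a CSV file with an “email” column, and the file name (“entrants.csv”) and column name are placeholders of ours, not something any particular tree-testing tool guarantees.

    import csv
    import random

    # Load the draw entrants exported from the tree-testing tool.
    # Assumes a CSV file with an "email" column; adjust both names to match the real export.
    with open("entrants.csv", newline="") as f:
        entrants = [row["email"].strip() for row in csv.DictReader(f) if row["email"].strip()]

    # De-duplicate so each participant gets exactly one entry,
    # even if they completed more than one of our tree tests.
    unique_entrants = sorted(set(entrants))

    # Pick the winner at random and report it.
    winner = random.choice(unique_entrants)
    print(f"{len(unique_entrants)} unique entrants; the winner is {winner}")

If we’ve combined several tests into a single prize pool, we’d simply concatenate the exported lists before de-duplicating.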


Rewarding each participant

Because a tree test only takes 5-10 minutes to complete, paying each person is not usually feasible unless they are participating through an existing reward system, such as a commercial research panel or services like Amazon’s Mechanical Turk.

Next: Recruiting for in-person sessions

Copyright © 2016 Dave O'Brien

This guide is covered by a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.