
Whenever you recruit participants, you are looking to get a representative sample of your actual (or desired) users.

You may even go to a fair bit of effort to get (or exclude) specific types of users, by querying customer databases, posting invitations to specific user forums, and so on.

In the end, though, all recruiting is imperfect; you’ll miss some users you were hoping to get, and you’ll get some that you were hoping to miss.

It is important, nonetheless, to try to identify any selection bias in your recruiting, so you can take that into account when you analyze your results, or when you do your next study.

 

What is selection bias?

From Wikipedia’s article:

Selection bias is the selection of individuals, groups or data for analysis in such a way that proper randomization is not achieved, thereby ensuring that the sample obtained is not representative of the population intended to be analyzed.

 

In other words, certain recruitment methods may yield a skewed selection of participants, rather than the representative sample that you normally want.

For example, suppose that you only use a web ad on your site to recruit for your study. Only people who visit your site in the next few days (the duration of your recruitment) will see the ad. This means that:

  • You are ignoring customers who don’t use your website. (For many businesses, such as banks, this may be a big chunk of customers.)

  • You are more likely to get people who visit your site frequently (say, several times a week), and less likely to get people who use your site only once a month (e.g. to check their bill).
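
The skew described above can be illustrated with a toy simulation. The numbers here are made up for illustration: a hypothetical customer base that is 30% frequent visitors and 70% monthly visitors, with assumed chances of each group visiting the site (and seeing the ad) during a one-week recruitment window.

```python
import random

random.seed(1)

# Hypothetical population (proportions are illustrative only):
# 30% visit the site several times a week, 70% roughly once a month.
population = (["frequent"] * 300) + (["monthly"] * 700)

def saw_the_ad(user_type):
    """Assumed chance that a user visits the site during the ad's run."""
    p = 0.95 if user_type == "frequent" else 0.25
    return random.random() < p

# Everyone who saw the ad is a potential recruit.
recruits = [u for u in population if saw_the_ad(u)]

share_frequent_pop = population.count("frequent") / len(population)
share_frequent_rec = recruits.count("frequent") / len(recruits)

print(f"Frequent users in population: {share_frequent_pop:.0%}")
print(f"Frequent users among recruits: {share_frequent_rec:.0%}")
```

Even though frequent visitors are a minority of the customer base, they end up heavily over-represented among the recruits, simply because they had more chances to see the ad.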


Which recruitment methods cause it?

Here are some common causes of selection bias:

  • Web ads only get web users
    If you only use web ads to recruit users, then (by definition) you’re only getting those users who visit your website. While this is OK for many studies, it does ignore those customers who use other channels instead of the website. If you need offline users too, you’ll need to find another way to recruit them.

  • Customer email lists only get existing customers
    If you only use a customer list to email invitations, you are missing prospective customers (a potentially valuable audience) and ex-customers (who are often good sources of honest feedback).

  • Volunteer-based recruiting gets unusually motivated users
    Anyone who answers a recruitment invitation is, by definition, willing to spend time on your study. These self-selected volunteers tend to be more engaged (or more opinionated) than your typical user, so their feedback may be more extreme than the quiet majority's.


How can we reduce bias?

You may decide that a given selection bias is acceptable; in our example above, you may only want customers who visit your website, so this implicit selection actually serves as a useful screening mechanism.

However, it’s important that you consider what kind of selection bias each recruiting method adds to your study. Then, you can either:

  • Try to reduce that bias, and/or

  • Acknowledge the bias and take it into account when analyzing your results and presenting your findings.

The most common way to reduce selection bias is to use several different types of recruitment. For example, instead of just running a web ad (which only yields site visitors), you could use customer lists to reach those customers who don’t use the website.

Another (generally less effective) way to reduce bias is to broaden the single recruitment method you are using. If you only use a web ad, for example, you could post that ad on several different websites. You will still only get web users, but some of them will be people who have never visited your own site.

While you will probably never eliminate bias from your studies completely, these steps should help minimize it so you can be reasonably confident in your results.

 


Next: Coordinating audiences and channels
