Along with web ads, using email lists is a very common way to get participants for online studies.

  • Many of our clients maintain customer databases that they use for sales and marketing purposes. These often include useful data on demographics and product use.

  • Even small organizations usually have lists of customer contacts (often stored in spreadsheets). These are typically modest in size and detail, but may still give you a good pool of people to invite.

One big advantage of using customer lists is that you’re contacting people who already have some kind of relationship with the organization, usually as current users of their products or services. This relationship usually boosts the response rate, because these people are likely to have a vested interest in improving those products and services.

Another advantage of customer lists is that you often get to pick who to invite, usually based on information in the lists such as region, age, usage, and so on.

The downside is that, unlike passive web ads, email invitations are an active (albeit minor) intrusion into people’s lives. Organizations should be very careful about how (and how often) they “bother” their customers with unsolicited messages, no matter how good the cause. For more on this, see “Letting people opt out” below.

 

How many should you invite?

Earlier we recommended getting about 50 participants from each user group you want to test.

However, we all know that most people (including you and me) ignore most email invitations to research studies like this. So, to get 50 participants, you have to invite many more than that.

How many more?

  • Ask what the organization's traditional response rate has been.
    Most organizations have sent email invitations at one time or another (usually for customer surveys), so they should have some idea of their response rate. Organizations with very loyal/vocal customers can get a 30-40% response rate, but most rates are much lower (often under 10%).

  • If unknown, assume a response rate of 5-10%.
    Note that this number can vary greatly depending on factors like the desirability of the incentive (see below) and even the time of year (e.g. farmers are unlikely to participate during harvest).

  • Do the math to determine the number of emails to send out.
    For example, if you expect a 10% response rate and you need 50 participants, you’ll probably need to send about 500 invitations to hit your number.
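
If you'd rather script the arithmetic (handy when you're juggling several user groups), here's a minimal sketch in Python. The 50 and 10% are the numbers from the example above; substitute your own target and expected rate.

    # A minimal sketch of the invitation math from the example above.
    import math

    participants_needed = 50       # your target per user group
    expected_response_rate = 0.10  # use your organization's historical rate if known

    invitations_to_send = math.ceil(participants_needed / expected_response_rate)
    print(invitations_to_send)     # 500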

For many organizations, this is more people than they have on their lists, so the question is not “how many should we invite?” but rather “how else can we get participants?”. Luckily, you don’t need to be tied to any one method of recruiting. Most of the studies we do include web ads AND email lists, and sometimes even then we have to start beating the bushes for more people – see the other methods described in this chapter.

 

Inviting in batches

If you have access to a large list of customers (perhaps thousands), you may be tempted to email them all and get lots of results fast.

Careful – emailing your whole pool is a rookie mistake.

First of all, you should almost never email everyone in a big list. Lists of that size usually contain enough detail to let you filter them down to the people you really want (not just bank customers, for example, but those with home loans who use Internet banking). For more on this, see Filtering lists below.

Second, even if you still have a big list after filtering (lucky you), remember that people have a limited appetite for invitations from a given organization. If you or anyone else in your organization wants to run another study in a month or two (remember that we recommend at least 2 rounds of tree testing to get it right), you should probably avoid emailing the same people you just pinged this week. Many large organizations have formal rules about this, typically along the lines of “Do not email a given customer more than once every 3 months”. Even if your organization has no such policy, it’s still a healthy rule of thumb.

Third, you may not need that many responses to get the results you want. 50 responses show you the patterns, and 100 responses make them clearer, but beyond that you’ll just get diminishing returns.

So, if you have a large number of potential invitees, we recommend that you invite them in smaller batches according to your expected response rate.

For example, suppose you need 50 participants and you have 1000 people on your list. How many should you invite?

  • If you know your response rate is about 10%, you would send 500 invitations.

  • If you expect the rate to be higher, you might send out only 200-300 invitations in the first batch.

  • After a few days, if you haven’t reached your target of 50 participants, you can send out another few hundred, and so on until you reach your target (see the sketch below).
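
Here's a rough sketch of that batching loop in Python. The helpers send_invitations() and completed_responses() are hypothetical stand-ins – wire them up to your email tool and your tree-testing tool's live results count.

    # A rough sketch of inviting in batches. send_invitations() and
    # completed_responses() are hypothetical stand-ins for your email
    # tool and your tree-testing tool's results count.
    import time

    TARGET = 50              # participants you need
    BATCH_SIZE = 250         # sized from your expected response rate
    WAIT = 3 * 24 * 60 * 60  # a few days between batches, in seconds

    def send_invitations(batch):
        # Hypothetical: hand this batch of email addresses to your email tool.
        print(f"Inviting {len(batch)} people")

    def completed_responses():
        # Hypothetical: ask your tree-testing tool how many sessions are complete.
        return 0

    def invite_in_batches(pool):
        while completed_responses() < TARGET and pool:
            batch, pool = pool[:BATCH_SIZE], pool[BATCH_SIZE:]
            send_invitations(batch)
            time.sleep(WAIT)  # give this batch a few days to respond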

The big win here is that you “save” a bunch of people to use on your next study – you’re rationing them so that you always have a pool of users to fuel your ongoing research.

The other factor at work here is urgency. Using batches slows down the study (as you wait a few days between batches).

  • If you need results fast, and you have users to burn (so to speak), invite larger batches of users.

  • If you don’t have many users on your list, and so need to conserve them, invite smaller batches so you can get just enough participants to show clear results.


Filtering lists to get the right people

You don’t just want any 50 people to do your study; you want the right 50 people – people who match your idea of a representative user.

If your study is for all users, then a simple web ad or a blanket email blast to a customer list is probably OK. This should ensure that most of your participants are current (or past or future) users.

Often, however, you may want to get more specific about who does your study. If you are reorganizing the Large Business section of a bank’s website, for example, you want large-business users to test the new structure, but you don’t want personal-banking users because they do different tasks, use different terminology, and would generally be irrelevant to your study.

When you’re going after a specific user group, there are two common approaches:

  • Using customer lists that are specific to that user group.
    For example, the bank may have a separate customer list for their business customers. If you invite people from that list, you’re automatically picking the right users.

  • Filtering a broad list down to the users you want.
    The bank may have a customer database with fields that let you narrow down to just the business users.

Having a database that you can filter is very useful if you have specific recruiting criteria. While this is mostly used for targeted studies like in-person usability testing (you’re only testing 10 participants, so you want to be sure you get just the right users), it can also help improve your tree-test results. For example, you may want to recruit not just personal-banking users, but specifically those who use Internet banking frequently.

If you do want specific users, you can do your filtering early or late:

  • Early filtering – You only send invitations to people who fit your criteria (by filtering a customer database first). See the sketch after this list.

  • Late filtering – You invite anyone to do your study (via a blanket email blast or a web ad), then screen out the people you don’t want (by using screening questions just before the tree test starts). For more on this, see Screening for specific participants later in this chapter.
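
As a concrete example of early filtering, here's a minimal Python sketch. It assumes you've exported the customer database to a CSV file; the column names (email, segment, internet_banking) are hypothetical, so adjust them to match your own data.

    # A minimal sketch of early filtering, assuming a CSV export of the
    # customer database with hypothetical columns: email, segment,
    # internet_banking.
    import csv

    with open("customers.csv", newline="") as f:
        customers = list(csv.DictReader(f))

    # Keep only large-business customers who use Internet banking.
    invitees = [
        row["email"]
        for row in customers
        if row["segment"] == "large-business" and row["internet_banking"] == "yes"
    ]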


Letting people opt out

We mentioned earlier that inviting people by email is intrusive – most people have a limited appetite for unsolicited invitations, and some people may not want to be contacted at all. We need to respect their wishes and keep their goodwill.

There are two common ways to handle this:

  • Don’t contact people who have already opted out.
    Many customer databases have a field indicating whether the person has opted out of non-essential communications (often termed “marketing and promotional” messages). Obviously, we don’t invite people who have opted out.
    Related to this is an embargo period, where we don’t contact people too soon after we last contacted them. The database shows when the last contact was, so we only invite those who have not been contacted recently. (3 months is a typical waiting period.) A sketch of both checks appears after this list.

  • Make sure your invitation includes a way to opt out.
    Most people who don’t want to participate in your study will just skim the email and delete it. But there will be some who don’t want to receive more of these invitations, so it’s a simple courtesy to give them an easy way to opt out of future invitations. A clear link at the bottom of the message handles this. How you implement it (as a web link to an “unsubscribe” page, an email to an automated system, or an email to a staffer who removes them from the list) is up to your organization.
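
Here's a small Python sketch of those two checks – the opt-out flag and the 3-month embargo. The column names (opted_out, last_contacted) are hypothetical; adjust them to match your customer database.

    # A small sketch of the opt-out and embargo checks, assuming
    # hypothetical columns "opted_out" ("yes"/"no") and
    # "last_contacted" (an ISO date such as "2016-03-01").
    from datetime import date, timedelta

    EMBARGO = timedelta(days=90)  # roughly the 3-month waiting period

    def contactable(row, today=None):
        today = today or date.today()
        if row["opted_out"] == "yes":
            return False  # they've opted out; never invite them
        last = date.fromisoformat(row["last_contacted"])
        return today - last >= EMBARGO  # skip anyone contacted recently

    # e.g. invitees = [r["email"] for r in customers if contactable(r)]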


Hiding participants from each other

When you send a batch of email invitations, it’s important that the recipients don’t see each other in the received message. Beyond the clutter of several hundred names in the “To” field, it’s also a privacy violation – people shouldn’t be able to see who else is on an email list.

To prevent this, you can either:

  • Use the Blind Carbon Copy (BCC) field – If you’re sending from a normal email account, you set the “To” field to yourself and add the recipients to the BCC field. The BCC recipients are “CC’d” on the email, but the “blind” part means that they don’t see anyone else on the BCC list. (A sketch follows this list.)

  • Use a bulk-email service – If you use a third-party email service (such as MailChimp or MailerLite), it will give you the option of hiding recipients from each other.
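
For example, here's a minimal BCC sketch using Python's standard smtplib. The SMTP server and the email addresses are placeholders; substitute your organization's own.

    # A minimal sketch of the BCC approach, using Python's standard library.
    # The SMTP server and addresses below are placeholders.
    import smtplib
    from email.message import EmailMessage

    invitees = ["alice@example.com", "bob@example.com"]  # your filtered list

    msg = EmailMessage()
    msg["Subject"] = "Help us improve our website (10-minute study)"
    msg["From"] = "research@company.com"
    msg["To"] = "research@company.com"    # the visible copy goes to yourself
    msg["Bcc"] = ", ".join(invitees)      # recipients stay hidden from each other
    msg.set_content("Invitation text goes here, with a link to the study.")

    with smtplib.SMTP("smtp.example.com") as server:
        # send_message delivers to the Bcc addresses but never
        # transmits the Bcc header itself.
        server.send_message(msg)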


Who should send the email?

Because spam email is a fact of Internet life, you need to make sure that your invitation looks legitimate, both to email programs (and their spam filters) and to the recipients themselves.

The easiest way to do this is to make sure that the email is sent from an account officially belonging to the organization. If you’re an employee of the organization, you can use your own email address, or you may prefer to set up a dedicated address for research purposes (e.g. research@company.com).

If you’re a consultant running the study on behalf of an organization, you should still send the invitation from an organization address rather than your own. People who use Acme Supply’s products and services are more likely to believe (and respond to) an email from Acme than they are from Dave’s Research Inc.

You can increase your response rate by having the invitation sent by someone the user knows (or knows of). When we had trouble recruiting enough people for a study with businesses, we asked the company’s account managers to forward our email to their respective customers. Because the invitation was sent by someone they knew (and had a business relationship with), we got a much higher hit rate.

 


Next: Using social media

 
