Using email lists


Along with web ads, using email lists is a very common way to get participants for online studies.

  • Many of our clients maintain customer databases that they use for sales and marketing purposes. These often include useful data on demographics and product use.

  • Even small organizations usually have lists of customer contacts (often stored in spreadsheets). These are typically modest in size and detail, but may still give us a good pool of people to invite.

One big advantage of using customer lists is that we’re contacting people who already have some kind of relationship with the organization, usually as current users of their products or services. This relationship usually boosts the response rate, because these people are likely to have a vested interest in improving those products and services.

Another advantage of customer lists is that we often get to pick who to invite, usually based on information in the lists such as region, age, usage, and so on.

The downside is that, unlike passive web ads, email invitations are an active (albeit minor) intrusion into people’s lives. Organizations should be very careful about how (and how often) they “bother” their customers with unsolicited messages, no matter how good the cause. For more on this, see Letting people opt out below.

How many should we invite?

Earlier we recommended getting about 50 participants from each user group we want to test.

However, most people ignore email invitations to research studies like this, so to get 50 participants, we have to invite many more than that.

How many more?

  • Ask what the organization's traditional response rate has been.
    Most organizations have sent email invitations at one time or another (usually for customer surveys), so they should have some idea of their response rate. Certain organizations with very loyal/vocal customers can get a 30-40% response rate, but most are much lower (often under 10%).

  • If unknown, assume a response rate of 5-10%.
    Note that this number can vary greatly depending on factors like the desirability of the incentive (see below) and even the time of year (e.g. farmers are unlikely to participate during harvest).

  • Do the math to determine the number of emails to send out.
    For example, if we expect a 10% response rate and we need 50 participants, we’ll probably need to send about 500 invitations to hit our number.
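
If it helps to make that arithmetic explicit, here is a minimal sketch in Python. The 10% response rate and the target of 50 are just the figures from the example above; swap in the organization’s own numbers.

    # Rough sizing of an invitation send (illustrative numbers only)
    target_participants = 50        # completed tests we want per user group
    expected_response_rate = 0.10   # assume 5-10% if the organization has no history

    invitations_needed = round(target_participants / expected_response_rate)
    print(invitations_needed)       # 500 invitations at a 10% response rate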

For many organizations, this is more people than they have on their lists, so the question is not “how many should we invite?” but rather “how else can we get participants?”. Luckily, we don’t need to be tied to any one method of recruiting. Most of the studies we do include web ads AND email lists, and sometimes even then we have to start beating the bushes for more people – see the other methods described in this chapter.

Inviting in batches

If we have access to a large list of customers (perhaps thousands), we may be tempted to email them all and get lots of results fast.

Careful – emailing everyone in a large pool is a rookie mistake.

  • First of all, we should almost never email everyone in a big list. Lists of that size usually contain enough detail to let us filter them down to the people we really want (not just bank customers, for example, but those with home loans who use Internet banking). For more on this, see Filtering lists below.

  • Second, even if we still have a big list after filtering (lucky us), remember that people have a limited appetite for invitations from a given organization. If we or anyone else in our organization wants to run another study in a month or two (remember that we recommend at least 2 rounds of tree testing to get it right), we should probably avoid emailing the same people we just pinged this week. Many large organizations have formal rules about this, typically along the lines of “Do not email a given customer more than once every 3 months”. Even if the organization has no such policy, it’s still a healthy rule of thumb.

  • Third, we may not need that many responses to get the results we want. 50 responses show us the patterns, and 100 responses make them clearer, but beyond that we’ll just get diminishing returns.

So, if we have a large number of potential invitees, we recommend inviting them in smaller batches according to the expected response rate.

For example, suppose we need 50 participants and we have 1000 people on our list. How many should we invite?

  • If we know our response rate is about 10%, we send 500 invitations.

  • If we expect the rate to be higher, we might send out only 200-300 invitations in the first batch.

  • After a few days, if we haven’t reached our target of 50 participants, we can send out another few hundred, and so on until we reach it.
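
To make the pacing concrete, here is a small sketch (in Python, with hypothetical numbers) of sizing each batch from the participants we still need and the response rate we expect. The function and figures are illustrative, not a prescription.

    # Sketch: sizing each invitation batch (hypothetical figures)
    def next_batch_size(still_needed, expected_rate, remaining_on_list):
        """How many invitations to send in the next batch."""
        return min(round(still_needed / expected_rate), remaining_on_list)

    # First batch: we need all 50, and hope for ~20% from our keenest customers.
    print(next_batch_size(50, 0.20, 1000))   # 250 invitations
    # A few days later: 30 have responded, so we top up for the remaining 20
    # at a more cautious 10% expected rate.
    print(next_batch_size(20, 0.10, 750))    # 200 invitations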

The big win here is that we “save” a bunch of people to use on our next study; we’re rationing them so that we always have a pool of users to fuel our ongoing research.

The other factor at work here is urgency. Using batches slows down the study (because we wait a few days between batches).

  • If we need results fast, and we have users to burn (so to speak), we can invite larger batches of users.

  • If we don’t have many users on our list and need to conserve them, we can invite smaller batches, getting just enough participants to show clear results.

Filtering lists to get the right people

We don’t just want any 50 people to do our study; we want the right 50 people – people who match our idea of a representative user.

If our study is for all users, then a simple web ad or a blanket email blast to a customer list is probably OK. This should ensure that most of our participants are current (or past or future) users.

Often, however, we may want to get more specific about who does our study. If we are reorganizing the Large Business section of a bank’s website, for example, we want large-business users to test the new structure, but we don’t want personal-banking users because they do different tasks, use different terminology, and would generally be irrelevant to this study.

When we’re going after a specific user group, there are two common approaches:

  • Using customer lists that are specific to that user group.
    For example, the bank may have a separate customer list for their business customers. If we invite people from that list, we’re automatically picking the right users.

  • Filtering a broad list down to the users we want.
    The bank may have a customer database with fields that let us narrow down to just the business users.

Having a database that we can filter is very useful if we have specific recruiting criteria. While this is mostly used for targeted studies like in-person usability testing (we’re only testing 10 participants, so we want to be sure we get just the right users), it can also help improve our tree-test results. For example, we may want to recruit not just personal-banking users, but specifically those who use Internet banking frequently.
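
As a concrete sketch of what this filtering might look like: if the customer database can be exported as a CSV file, a few lines of Python narrow it down to the people we want. The file and column names here (customers.csv, segment, logins_per_month) are invented for illustration; the real export will have its own.

    # Sketch: filtering an exported customer list down to the users we want
    # (hypothetical file and column names)
    import csv

    with open("customers.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # Personal-banking customers who use Internet banking frequently
    invitees = [
        row for row in rows
        if row["segment"] == "personal" and int(row["logins_per_month"]) >= 4
    ]

    with open("invitees.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(invitees)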

If we do want specific users, we can do our filtering early or late:

  • Early filtering – We only send invitations to people who fit our criteria (by filtering a customer database first).

  • Late filtering – We invite anyone to do the study (via a blanket email blast or a web ad), then screen out the people we don’t want (by using screening questions just before the tree test starts). For more on this, see Screening for specific participants later in this chapter.

Letting people opt out

We mentioned earlier that inviting people by email is intrusive – most people have a limited appetite for unsolicited invitations, and some people may not want to be contacted at all. We need to respect their wishes and keep their goodwill.

There are two common ways to handle this:

  • Don’t contact people who have already opted out.
    Many customer databases have a field indicating whether the person has opted out of non-essential communications (often termed “marketing and promotional” messages). Obviously, we don’t invite people who have opted out.
    Related to this is an embargo period, where we don’t contact people too soon after we last contacted them. The database shows when the last contact was, so we only invite those who have not been contacted recently. (3 months is a typical waiting period.)

  • Make sure the invitation includes a way to opt out.
    Most people who don’t want to participate in our study will just skim the email and delete it. But there will be some who don’t want to receive more of these invitations, so it’s a simple courtesy to give them a way to easily opt out of future invitations. A clear link at the bottom of the message handles this. How we implement it (as a web link to an “unsubscribe” page, an email to an automated system, or an email to a staffer who removes them from the list) is up to the organization.
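
To show how these two rules might be applied to a list, here is a small sketch in Python. The field names (opted_out, last_contacted) and the sample records are hypothetical; the real values would come from the customer database.

    # Sketch: respecting opt-outs and a 3-month embargo (hypothetical field names)
    from datetime import date, timedelta

    EMBARGO = timedelta(days=90)   # "no more than once every 3 months"

    def can_invite(person, today):
        if person["opted_out"]:
            return False
        last = person["last_contacted"]            # a date, or None if never contacted
        return last is None or (today - last) >= EMBARGO

    people = [
        {"email": "a@example.com", "opted_out": False, "last_contacted": date(2024, 1, 5)},
        {"email": "b@example.com", "opted_out": True,  "last_contacted": None},
    ]
    invitees = [p for p in people if can_invite(p, today=date(2024, 6, 1))]
    print([p["email"] for p in invitees])          # ['a@example.com']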

Hiding participants from each other

When we send a batch of email invitations, it’s important that the recipients don’t see each other in the received message. Beyond the clutter of several hundred names in the “To” field, it’s also a privacy violation – people shouldn’t be able to see who else is on an email list.

To prevent this, we can either:

  • Use the Blind Carbon Copy (BCC) field – If we’re sending from a normal email account, we set the “To” field to ourselves and add the recipients to the BCC field. The BCC recipients are “CC’d” on the email, but the “blind” part means that they don’t see anyone else on the BCC list.

  • Use a bulk-email service – If we use a third-party email service (such as MailChimp or Mailerlite), it will give us the option of hiding recipients from each other.
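
For the first option, here is a minimal sketch of what BCC-style sending looks like with Python’s standard smtplib. The addresses, server, and credentials are placeholders; recipients appear only in the SMTP envelope, never in the headers, so they can’t see each other. (For hundreds of recipients, a bulk-email service is usually the more practical choice.)

    # Sketch: one invitation to many recipients, without exposing them to each other
    import smtplib
    from email.message import EmailMessage

    recipients = ["alice@example.com", "bob@example.com"]    # placeholder addresses

    msg = EmailMessage()
    msg["Subject"] = "Help us improve our website (10-minute study)"
    msg["From"] = "research@company.com"      # placeholder organization address
    msg["To"] = "research@company.com"        # only our own address is visible
    msg.set_content("Invitation text goes here...")

    with smtplib.SMTP("smtp.company.com", 587) as server:    # placeholder server
        server.starttls()
        server.login("research@company.com", "app-password") # placeholder credentials
        server.send_message(msg, to_addrs=recipients)        # envelope recipients only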

Who should send the email?

Because spam and phishing emails are a fact of Internet life, we need to make sure that our email looks legitimate to both the email system and the recipient themselves.

  • The easiest way to do this is to make sure that the email is sent from an account officially belonging to the organization. If we’re an employee of the organization, we can use our own email address, or we may prefer to set up a dedicated address for research purposes (e.g. research@company.com).

  • If we’re a consultant running the study on behalf of an organization, we should still send the invitation from an organization address rather than our own. People who use Acme Supply’s products and services are more likely to believe (and respond to) an email from Acme than they are from Bob’s Research Inc.

  • Some recipients may contact our organization to see if the invitation is legitimate, so we should alert our support channels that we're doing a customer study - see Alerting the organization about our study in Chapter 8.

We can increase the response rate by having the invitation sent by someone the user knows (or knows of). When we had trouble recruiting enough people for a study with businesses, we asked the company’s account managers to forward our email to their respective customers. Because the invitation was sent by someone they knew (and had a business relationship with), we got a much higher response rate.


Next: Using social media


Copyright © 2024 Dave O'Brien

This guide is covered by a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.