...

  • Many of our clients maintain customer databases that they use for sales and marketing purposes. These often include useful data on demographics and product use.

  • Even small organizations usually have lists of customer contacts (often stored in spreadsheets). These are typically modest in size and detail, but may still give us a good pool of people to invite.

One big advantage of using customer lists is that we’re contacting people who already have some kind of relationship with the organization, usually as current users of their products or services. This relationship usually boosts the response rate, because these people are likely to have a vested interest in improving those products and services.

Another advantage of customer lists is that we often get to pick who to invite, usually based on information in the lists such as region, age, usage, and so on.

The downside is that, unlike passive web ads, email invitations are an active (albeit minor) intrusion into people’s lives. Organizations should be very careful about how (and how often) they “bother” their customers with unsolicited messages, no matter how good the cause. For more on this, see Letting people opt out below.

 

How many should we invite?

Earlier we recommended getting about 50 participants from each user group we want to test.

However, we all know that most people (including you and me) will ignore most email invitations to research studies like this. So, to get 50, we have to invite many more than that.

...

  • Ask what the organization's typical response rate has been.
    Most organizations have done email invitations at one time or another (usually for customer surveys), so they should have some idea of their response rate. Certain organizations with very loyal/vocal customers can get a 30-40% response rate, but most are much lower (often under 10%).

  • If unknown, assume a response rate of 5-10%.
    Note that this number can vary greatly depending on factors like the desirability of the incentive (see below) and even the time of year (e.g. farmers are unlikely to participate during harvest).

  • Do the math to determine the number of emails to send out.
    For example, if we expect a 10% response rate and we need 50 participants, we’ll probably need to send about 500 invitations to hit our number.
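If we want to script that arithmetic (handy when juggling several user groups), a minimal sketch in Python might look like this. The function name and numbers are our own illustration, not part of any tool:

    import math

    def invitations_needed(target_participants, response_rate):
        # Round up - we can't send a fraction of an invitation.
        return math.ceil(target_participants / response_rate)

    print(invitations_needed(50, 0.10))   # 500 invitations at a 10% rate
    print(invitations_needed(50, 0.05))   # 1000 if the rate is only 5%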

For many organizations, this is more people than they have on their lists, so the question is not “how many should we invite?” but rather “how else can we get participants?”. Luckily, we don’t need to be tied to any one method of recruiting. Most of the studies we do include web ads AND email lists, and sometimes even then we have to start beating the bushes for more people – see the other methods described in this chapter.

 

Inviting in batches

If we have access to a large list of customers (perhaps thousands), we may be tempted to email them all and get lots of results fast.

Warning
Careful – emailing everyone in a large pool is a rookie mistake.
  • First of all, we should almost never email everyone in a big list. Lists of that size usually have more detail in them that we can use to filter the list down to the people we really want (not just bank customers, for example, but those with home loans who use Internet banking). For more on this, see Filtering lists below.
  • Second, even if we still have a big list after filtering (lucky us), remember that people have a limited appetite for invitations from a given organization. If we or anyone else in our organization wants to run another study in a month or two (remember that we recommend at least 2 rounds of tree testing to get it right), we should probably avoid emailing the same people we just pinged this week. Many large organizations have formal rules about this, typically along the lines of “Do not email a given customer more than once every 3 months”. Even if the organization has no such policy, it’s still a healthy rule of thumb.
  • Third, we may not need that many responses to get the results we want. 50 responses show us patterns, and 100 responses make them clearer, but beyond that we’ll just get diminishing returns.

So, if we have a large number of potential invitees, we recommend inviting them in smaller batches according to the expected response rate.

For example, suppose we need 50 participants and we have 1000 people on our list. How many should we invite?

  • If our response rate was known to be 10%, we would send 500 invitations.

  • If we expect the rate to be higher, we might send out only 200-300 invitations in the first batch.

  • After a few days, if we hadn’t reached our target of 50 participants, we could send out another few hundred. And so on until we reach our target.
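As a rough sketch (Python again, with made-up numbers), each batch size is just the response-rate arithmetic applied to what we still need:

    def next_batch_size(target, responses_so_far, expected_rate, remaining_pool):
        # How many more invitations to send, given what we still need.
        still_needed = target - responses_so_far
        if still_needed <= 0:
            return 0
        return min(round(still_needed / expected_rate), remaining_pool)

    print(next_batch_size(50, 0, 0.20, 1000))   # first batch: 250
    print(next_batch_size(50, 30, 0.20, 700))   # a few days later: 100 more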

The big win here is that we “save” a bunch of people to use on our next study; we’re rationing them so that we always have a pool of users to fuel our ongoing research.

The other factor at work here is urgency. Using batches slows down the study (because we wait a few days between batches).

  • If we need results fast, and we have users to burn (so to speak), we can invite larger batches of users.

  • If we don’t have many users on our list, and so need to conserve them, we can invite smaller batches so we get just enough participants to show clear results.


Filtering lists to get the right people

We don’t just want any 50 people to do our study; we want the right 50 people – people who match our idea of a representative user.

If our study is for all users, then a simple web ad or a blanket email blast to a customer list is probably OK. This should ensure that most of our participants are current (or past or future) users.

Often, however, we may want to get more specific about who does our study. If we are reorganizing the Large Business section of a bank’s website, for example, we want large-business users to test the new structure, but we don’t want personal-banking users because they do different tasks, use different terminology, and would generally be irrelevant to this study.

When we’re going after a specific user group, there are two common approaches:

  • Using customer lists that are specific to that user group.
    For example, the bank may have a separate customer list for their business customers. If we invite people from that list, we’re automatically picking the right users.

  • Filtering a broad list down to the users we want.
    The bank may have a customer database with fields that let us narrow down to just the business users.

Having a database that we can filter is very useful if we have specific recruiting criteria. While this is mostly used for targeted studies like in-person usability testing (we’re only testing 10 participants, so we want to be sure we get just the right users), it can also help improve our tree-test results. For example, we may want to recruit not just personal-banking users, but specifically those who use Internet banking frequently.

If we do want specific users, we can do our filtering early or late:

  • Early filtering – We only send invitations to people who fit our criteria (by filtering a customer database first).

  • Late filtering – We invite anyone to do the study (via a blanket email blast or a web ad), then screen out the people we don’t want (by using screening questions just before the tree test starts). For more on this, see Screening for specific participants later in this chapter.
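To make early filtering concrete, here’s a minimal sketch in Python. The field names (segment, channel, opted_out) are hypothetical – every customer database is different:

    customers = [
        {"email": "a@example.com", "segment": "business", "channel": "internet", "opted_out": False},
        {"email": "b@example.com", "segment": "personal", "channel": "branch", "opted_out": False},
        {"email": "c@example.com", "segment": "business", "channel": "internet", "opted_out": True},
    ]

    # Keep only business customers who use Internet banking and haven't opted out.
    invitees = [c["email"] for c in customers
                if c["segment"] == "business"
                and c["channel"] == "internet"
                and not c["opted_out"]]

    print(invitees)   # ['a@example.com']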


Letting people opt out

We mentioned earlier that inviting people by email is intrusive – most people have a limited appetite for unsolicited invitations, and some people may not want to be contacted at all. We need to respect their wishes and keep their goodwill.

...

  • Don’t contact people who have already opted out.
    Many customer databases have a field indicating whether the person has opted out of non-essential communications (often termed “marketing and promotional” messages). Obviously, we don’t invite people who have opted out.
    Related to this is an embargo period, where we don’t contact people too soon after we last contacted them. The database shows when the last contact was, so we only invite those who have not been contacted recently. (3 months is a typical waiting period.) A small sketch of both checks appears after this list.

  • Make sure the invitation includes a way to opt out.
    Most people who don’t want to participate in our study will just skim the email and delete it. But there will be some who don’t want to receive more of these invitations, so it’s a simple courtesy to give them a way to easily opt out of future invitations. A clear link at the bottom of the message handles this. How we implement it (as a web link to an “unsubscribe” page, an email to an automated system, or an email to a staffer who removes them from the list) is up to the organization.
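Here’s the promised sketch of the opt-out and embargo checks (Python; the field names are hypothetical, and the 90-day window is just the 3-month rule of thumb mentioned above):

    from datetime import date, timedelta

    EMBARGO = timedelta(days=90)   # "no more than once every 3 months"

    def can_invite(person, today):
        # Skip anyone who has opted out of non-essential contact.
        if person["opted_out"]:
            return False
        # Skip anyone we contacted within the embargo period.
        last = person.get("last_contacted")
        return last is None or (today - last) >= EMBARGO

    print(can_invite({"opted_out": False, "last_contacted": date(2024, 1, 5)},
                     today=date(2024, 6, 1)))   # True - well past the embargo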


Hiding participants from each other

When we send a batch of email invitations, it’s important that the recipients don’t see each other in the received message. Beyond the clutter of several hundred names in the “To” field, it’s also a privacy violation – people shouldn’t be able to see who else is on an email list.

To prevent this, we can either:

  • Use the Blind Carbon Copy (BCC) field – If we’re sending from a normal email account, we set the “To” field to ourselves and add the recipients to the BCC field. The BCC recipients are “CC’d” on the email, but the “blind” part means that they don’t see anyone else on the BCC list.

  • Use a bulk-email service – If we use a third-party email service (such as MailChimp or Mailerlite), it will give us the option of hiding recipients from each other.
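For the BCC approach, a bare-bones sketch using Python’s standard smtplib and email modules might look like this (the addresses and mail server are placeholders, not real settings):

    import smtplib
    from email.message import EmailMessage

    sender = "research@company.com"   # a dedicated study address (placeholder)
    recipients = ["a@example.com", "b@example.com"]

    msg = EmailMessage()
    msg["Subject"] = "Help us improve our website"
    msg["From"] = sender
    msg["To"] = sender                  # the "To" field points back at ourselves
    msg["Bcc"] = ", ".join(recipients)  # everyone else goes in BCC
    msg.set_content("Invitation text goes here, including an opt-out link.")

    # smtplib strips the Bcc header before sending, so recipients
    # never see who else was invited.
    with smtplib.SMTP("mail.company.com") as server:   # placeholder host
        server.send_message(msg)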


...

Because spam and phishing emails are a fact of Internet life, we need to make sure that our email looks legitimate to both the email system and the recipients themselves.

  • The easiest way to do this is to make sure that the email is sent from an account officially belonging to the organization. If we work for the organization, we can use our own email address, or we may prefer to set up a dedicated address for research purposes (e.g. research@company.com).

  • If we’re consultants running the study on behalf of an organization, we should still send the invitation from an organization address rather than our own. People who use Acme Supply’s products and services are more likely to believe (and respond to) an email from Acme than one from Bob’s Research Inc.

  • Some recipients may contact our organization to see if the invitation is legitimate, so we should alert our support channels that we’re doing a customer study – see Alerting the organization about our study in Chapter 8.


Tip

We can increase the response rate by having the invitation sent by someone the user knows (or knows of). When we had trouble recruiting enough people for a study with businesses, we asked the company’s account managers to forward our email to their respective customers. Because the invitation was sent by someone they knew (and had a business relationship with), we got a much higher response rate.

 

...

Next: Using social media