Writing a good task


So far, we’ve talked about “task ideas” – rough jot notes about things we think would make good “find it” questions for our tree test.

Eventually, though, these need to be refined into properly worded tasks for our participants. But what do we mean by “properly worded”?

And how hard is it, really, to write a task? We just need to tell our participants to go look for women’s jewelry, right? What could be hard about that?

OK, here’s the truth – after running scores of tree tests for a wide range of clients, and after reviewing dozens more run by organizations on their own, we can truthfully say:

Without guidance and experience, each of us will write some horrible tasks.

We're not saying that all the tasks we've seen are horrible, but a surprising number are. And it’s not just first-timers – some people repeatedly put tasks out there that are just not going to give them the clear results they want. Their intentions are good, but they keep making the same mistakes over and over.

Luckily, these mistakes are usually easy to fix. We can eliminate most or all of them by following the guidelines below.

Avoiding matching words

The most common cause of bad tasks, by a long shot, is using “give-away” words that point the user to the right answer. We ask them to find something, and that exact word jumps out at them from the topics they’re looking at.

For example, suppose the task is:

Find out who to contact about your warranty.

...and these are the top-level topics:

  • Products

  • Support

  • About Us

  • News

  • Contact Us

99% of our participants are going to click Contact Us, not just because it’s correct, but because the word “contact” matches the task. They don’t even have to think about it.

We want participants to choose topics because they seem the best choice among alternatives, not because of simple word-matching.

While this seems easy enough to do, it turns out that all of us are guilty of using keywords in some of our tasks (or, at least, in our first drafts of the tasks). That’s because we’re immersed in the site’s content, its structure, and its jargon. Once we’re in deep, it’s how we naturally think and talk about the site, so it inevitably spills over into our task phrasing.

The cure is obvious – we need to avoid those keywords, paraphrasing and substituting synonyms to end up with the same meaning. Usually this is straightforward, especially if we remember to “speak” conversationally, as our participants would to each other.

Here’s an example taken from an intranet tree test:

“You need to get reimbursed for expenses. Find the expense form.”

And here's the tree:

  • Library

  • Expenses/Reimbursements

    • Rules for claiming expenses

    • Expense forms

    • Who to contact

    • Travel

Only a card-carrying cretin would get this one wrong, but what about the 99% who got it right? Did they succeed because they thought and decided that Expenses/Reimbursements was the best choice (as they’re supposed to do), or because they simply matched the words “reimburse” and “expense”? Even someone who didn’t understand English could find the right answer here based on word matching. So while we get a high success rate for this task, we still don’t know if this was a result of the clarity of our tree or the wording of the task itself. We haven’t properly “tested” the tree.

It’s easy to solve this one, because it follows a common pattern: the tree uses jargon, and the task does too. If we just change the task to sound more like what a real employee might say in conversation, it might come out something like this:

“You paid for some work supplies using your personal credit card. You need to fill out the paperwork to get paid back.”

Notice that we’ve avoided using the term “reimburse” (easy) and the term “expense” (a bit harder) by replacing them with synonyms. The revised task is a bit longer, but it’s still clear and doesn’t give anything away. Now, participants will have to make their own connections by thinking, not by just playing "spot the matching word".

When phrasing a task, avoid keyword matches in the tree.
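
Before we pilot our tasks, a quick mechanical check can catch leftover matches. Here’s a minimal sketch in Python – the stem() helper and stopword list are deliberately crude stand-ins of our own, purely for illustration, not a real stemming library – that flags word stems shared between a task and the tree’s labels:

    # Flag give-away words shared between a task and the tree's labels.
    STOPWORDS = {"a", "an", "the", "to", "for", "your", "you", "how", "would",
                 "find", "out", "who", "about", "and", "need", "get", "some"}

    def stem(word: str) -> str:
        """Crude stemmer: lowercase, strip punctuation and common suffixes."""
        word = word.lower().strip(".,?!\"'")
        for suffix in ("ments", "ment", "ings", "ing", "es", "ed", "s"):
            if word.endswith(suffix) and len(word) - len(suffix) >= 3:
                word = word[: -len(suffix)]
                break
        return word.rstrip("e")

    def content_words(text: str) -> set[str]:
        stop = {stem(w) for w in STOPWORDS}
        return {stem(w) for w in text.split()} - stop

    def giveaway_words(task: str, labels: list[str]) -> set[str]:
        """Word stems that appear in both the task and any tree label."""
        label_words: set[str] = set()
        for label in labels:
            label_words |= content_words(label.replace("/", " "))
        return content_words(task) & label_words

    labels = ["Library", "Expenses/Reimbursements", "Rules for claiming expenses",
              "Expense forms", "Who to contact", "Travel"]
    task = "You need to get reimbursed for expenses. Find the expense form."
    print(giveaway_words(task, labels))   # e.g. {'reimburs', 'expens', 'form'}

Anything it prints deserves a second look – here it flags all three give-away stems from the expense example above. It won’t catch synonyms or paraphrases, so it supplements (rather than replaces) a human read-through.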

Avoiding matching sequences

We also want to avoid matching sequences between our tasks and the tree. This happens when we phrase a task so that it uses the same order of words that the tree’s headings use.

For example, an IT department used this task in their testing:

“For a fixed asset such as a laptop, how would you request an upgrade?”

Their tree:

  • Fixed assets

    • Computer equipment

      • New purchases

      • Upgrades

      • Warranties and repairs

    • Furniture

    • Heating/cooling/ventilation equipment

  • Non-fixed assets

This task achieved a high success rate but, as with the word-matching discussed above, we can’t tell if this success was because of the clarity of the tree or the wording of the task.

In this case, the word-matching was made worse by the sequence-matching of fixed assets, computer equipment (laptop), and upgrades. Because the task used the same sequence of terms that the tree used, we gave away the answer to our participants, so we can’t trust the resulting high score.

Luckily, sequence matching is easy to fix: we changed the phrasing of the task to use a different order, and adopted a more everyday tone:

“You want to request more memory for your company laptop.”
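
As with word matching, a small script can flag this mechanically if we know the label path to the correct answer. The sketch below (again in plain Python, with a deliberately crude norm() helper of our own) lists which path labels a task echoes, in tree order – two or more in sequence suggests the task needs rephrasing:

    # Report which labels on the answer's path the task echoes, in order.
    def norm(word: str) -> str:
        """Crude normalizer: lowercase, strip punctuation and a plural 's'."""
        word = word.lower().strip(".,?!\"'")
        return word[:-1] if word.endswith("s") and len(word) > 3 else word

    def echoed_labels(task: str, path: list[str]) -> list[str]:
        """Labels on the path whose words the task echoes, in tree order."""
        words = [norm(w) for w in task.split()]
        pos, echoed = 0, []
        for label in path:
            label_words = {norm(w) for w in label.split()}
            hits = [i for i in range(pos, len(words)) if words[i] in label_words]
            if hits:                   # label echoed after the previous match
                echoed.append(label)
                pos = hits[0] + 1
        return echoed

    path = ["Fixed assets", "Computer equipment", "Upgrades"]
    old_task = "For a fixed asset such as a laptop, how would you request an upgrade?"
    new_task = "You want to request more memory for your company laptop."
    print(echoed_labels(old_task, path))   # ['Fixed assets', 'Upgrades']
    print(echoed_labels(new_task, path))   # []

The original task echoes two of the three labels in tree order; the revised task echoes none.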

Avoiding ambiguity

Another common task mistake is ambiguity – wording a task so it could be construed in more than one way. We may think the task is clear (after all, we wrote it), but will all of our participants understand what we meant?

Consider this example from an online-banking study:

“How would you get notified when your account balance is low?”

We thought this task wording was fine – until we piloted it with some members of the project team.

  • We intended the task to mean “How would you set up a notification?”, and half the team thought the same thing. The answer was in the Setup section.

  • However, the others thought it meant receiving the notification itself, so they went to the Secure Mail section.

Once we discovered the problem, of course, it was easy to fix. We revised the task to:

“How would you arrange to get notified when your account balance is low?”

This meant the same thing to everyone, and we managed to avoid using the give-away phrase “set up”.

Avoiding ambiguity is hard, because an ambiguous task rarely looks ambiguous to the person who wrote it. The best way to catch it is the simple paraphrasing exercise we describe in Test-driving tasks below.

Specific task = specific answer

We should beware of tasks that are too broad – that is, tasks that have so many answers that we don’t learn much from the results.

Note that it’s OK for a task to have several correct answers. In fact, it’s a mark of an effective tree when it supports more than one reasonable path to success, and substantial numbers of users actually follow those different paths.

But that’s only OK when the task is specific, and there are one or more specific answers that we’re looking for.

If the task is too broad, however, we may find that most of the headings in a certain section could be considered correct. Consider this example from a power company’s website:

“You’re looking for a new power company. What advantages does Acme Power offer you?”

The tree offers the following correct answers (marked here with ✓):

  • Why join us?

    • Renewables AND gas ✓

    • Discounted pricing ✓

    • Smart metering ✓

    • Online tools ✓

  • Your account

    • Sign up for online billing ✓

    • How to read your bill

    • Track your usage ✓

    • (etc.)

  • About us

    • Staff

    • In the community ✓

    • (etc.)

One problem here is that the entire Why join us section is correct – its topics are all reasons to join Acme Power. Yes, we learn that the section is labeled well, but we don’t learn anything more specific.

Worse, there are several other topics (also marked above) that could be considered correct. The ability to get online billing and usage tracking is a selling point, as is the community work that Acme Power does. These weren’t intended to be the targets, but they need to be counted as correct for this task. If the intent of this task was to find out whether people know what the Why join us section is for, these extra answers just muddy the waters.

If we made this task more specific, we could evaluate the tree’s effectiveness better. For example, we could revise the task to:

“You’re looking for a new power company. Does Acme offer a way to measure your consumption on the web?”

Now it’s easy to determine which answers are correct (again marked with ✓), and we find out if those topics are clear and distinguishable from their siblings and parents:

  • Why join us?

    • Renewables AND gas

    • Discounted pricing

    • Smart metering

    • Online tools ✓

  • Your account

    • Sign up for online billing

    • How to read your bill

    • Track your usage ✓

    • (etc.)

  • About us

    • Staff

    • In the community

    • (etc.)

Using real-life situations and language

In the same vein as being specific, we also want to use real-life situations and real-life “things” in our tasks, even if the tree we’re testing doesn’t.

For example, we tested an intranet site for an IT department. Their tree included the following section:

  • Requests

    • Fixed-asset requests

    • Fixed-asset upgrades

    • Software requests

When they wrote the first draft of tasks, one of them was:

“How would you request a fixed-asset upgrade?”

Ouch. Some major word-matching problems here. But no worries, this was just a first draft; the idea is there, it’s just the wording that needs work.

However, there’s another problem here beyond the word matching. Even if the task didn’t use give-away words from the tree, it’s still badly worded. Why? Because the user is being asked to do something abstract, something that sounds institutional and artificial. When was the last time we decided to “request a fixed-asset upgrade”?

To turn this into a good task, we need to move it into real life. In our example, it turned out that “fixed-asset upgrades” were IT-speak for “computer upgrades”, so we revised the task to:

“Your laptop needs more memory. How would you ask IT for this?”

We’ve done two things here:

  • We’ve replaced an abstraction (upgrade a fixed asset) with a specific real-life situation (asking for more memory for a laptop).

  • We’ve revised stilted institutional jargon into everyday conversational language (how real people talk to each other).

The result is a task that is both easier for participants to understand and a better test of the clarity of the tree.

Keeping it short and direct

Where possible, we should keep our tasks concise – no longer than necessary to get the idea across clearly.

Yes, we want to give them a real-life situation (as we saw above), but we don’t want to write a novel. Consider this example:

Your brother has decided to try running a marathon, and that will take a lot of training. His old running shoes are in sad shape, so you volunteer to find him a new pair at this sporting-goods website. Where would you look?

Long-winded tasks like this pose several problems for participants:

  • They’re intimidating to see at first glance.

  • They’re tiring to read. Imagine doing 10 tasks like the one above – phew!

  • They often end up providing too much detail, confusing the participant and forcing them to read it again.

When we’re writing a task (and particularly when we’re revising a task that was unclear in its first version), we must watch how long our text becomes. We should put it away and come back later, or ask a colleague to edit it down for us – it’s easy to get stuck trying to trim our own writing.

Here's the example above, after some judicious revising:

Help your brother find a new pair of running shoes for marathon training.

Varying phrasing to maintain interest

Some people write their tasks using the same formula each time, like this:

Where would you find men's running shoes?

Where would you find gold necklaces?

Where would you find electric lawnmowers?

Generally, it’s OK to repeat the sentence format. Tree tests are typically short, so it’s not too likely our participants will abandon the study just because the task wording is boring.

In the longer term, though, if we’d like those participants to continue doing our studies, making the tasks a little more varied is one way we can keep their interest.

Here are some common phrasings that we’ve used in our studies:

Your son needs some new running shoes.

Your son's running shoes are worn out. Look for a new pair.

Find out if this site sells running shoes for men.

Running shoes would make a great birthday gift for your brother. Find them.

More ideas

For more ideas on the types of tasks you can write, see Dan Brown’s article on Tasks for Treejack Tests.

Test-driving tasks

Following these guidelines will improve the wording of our tasks. But wording is harder than most people think, and it’s likely that our first draft of tasks will still contain at least one clunker.

No worries, though, because we can quickly reveal any remaining problems by doing this simple paraphrasing exercise:

For each of our tasks:

  1. We get a colleague to read the task.

  2. We ask them to paraphrase it back to us. That is, we have them explain (specifically) what they think we want them to look for.

  3. We make sure that what they understand is what we intended. If there is any doubt, we discuss it until we’re sure one way or the other.

If we do go back and revise some wording, we shouldn’t assume we’ve fixed it on the first try. We should repeat the paraphrasing exercise, but with a different person who didn’t see the first version.


Next: Identifying correct answers


Copyright © 2024 Dave O'Brien

This guide is covered by a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.