Usability guru Jared Spool has written extensively about the scent of information – how users hunt through a site, click by click, to find the content they’re looking for. Tree testing helps us deliver a strong scent by improving:
- organization (how we group headings and subheadings), and
- labeling (what we call each of them).
Anyone who’s watched a spy film knows that there are always false scents and red herrings to lead the hero astray. Anyone who’s run a few tree tests has probably seen the same thing – headings that lure participants down the wrong path, not just for one task, but for several different tasks. We call these “evil attractors”.
The false scent
One of our favorite examples of an “evil attractor” comes from a tree test we ran for consumer.org.nz, a NZ consumer-review website much like Consumer Reports in the USA. Their site lists a wide range of consumer products in a tree several levels deep, and they wanted to try out a few ideas to make things easier to find as the site grew bigger.
We ran the tests and got some useful answers, but we also noticed that there was one particular subheading (Home > Appliances > Personal) that got clicks from participants looking for very different things – mobile phones, vacuum cleaners, home-theater systems, and so on:
The website intended this “personal appliance” category to be for products like electric shavers and curling irons, but apparently “Personal” meant many things to our participants, because they also went there for “personal” items like mobiles and cordless drills that actually lived somewhere else.
This is the false scent – the heading that “attracts” clicks when it shouldn’t, leading participants astray.
What makes an attractor “evil”?
Attracting clicks isn’t a bad thing in itself. After all, that’s what a good heading does – it attracts clicks for the content it contains (and discourages clicks for everything else).
“Evil” attractors, on the other hand, attract clicks for things they shouldn’t. They “lure” users down the wrong path; the user then either finds themselves in the wrong place, backs up, and tries elsewhere (if they’re patient), or gives up (if they’re not). Because these attractor topics are magnets for the user’s attention, they make it less likely that our user will reach the destination we intended.
The other “evil” part of these attractors is the way they hide in the shadows. Most of the time, they don’t get the lion’s share of traffic for a given task; instead, they’ll siphon off 5-10% of the responses, luring away a fraction of users who might otherwise have found the right answer.
Spotting an evil attractor
The easiest attractors to spot are those at the “answer” end of our tree, where participants ended up for each task. If we can look across tasks for similar wrong answers, then we can see which of these might be evil attractors.
We use the same results view that we saw in the “Analyzing by user group” section above – a matrix that shows the tree down the left side and the tasks across the top. Here’s part of the view from the consumer.org.nz study:
Normally, when we look at this view, we’re looking down a column for big hits and misses for a specific task. To look for evil attractors, however, we look for patterns across rows. In other words, we look horizontally, not vertically.
If we do that here, we immediately notice the row for “Personal” (which we’ve highlighted yellow for this example). See all those hits along the row? Those indicate an attractor – steady traffic across many tasks that seem to have little in common.
But remember, traffic alone is not enough – we’re looking for unwanted traffic across unrelated tasks. Do we see that here?
Well, it looks like the tasks (about cameras, drills, laptops, vacuums, etc.) are not closely related – we wouldn’t expect users to go to the same topic for each of these. And the answer they chose – “Personal” – certainly isn’t the destination we intended. While we can probably rationalise why they chose this answer, it is definitely unwanted from an IA perspective.
So yes, in this case, we seem to have caught an evil attractor red-handed. “Personal” is clearly a heading that’s getting steady traffic when it shouldn’t.
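For larger studies, this horizontal row-scan can be automated. Below is a minimal Python sketch – not from the study itself, and with a made-up data format – that flags candidate evil attractors: tree nodes that draw a meaningful share of traffic across several tasks for which they are not the intended answer. The 5% share and three-task thresholds are illustrative assumptions (echoing the 5–10% “siphon” described above), not fixed rules.

```python
def find_evil_attractors(clicks, correct, min_share=0.05, min_tasks=3):
    """Flag nodes that attract unwanted traffic across unrelated tasks.

    clicks:  {node: {task: fraction of participants who ended there}}
    correct: {task: intended destination node}
    Thresholds are assumptions, not rules - tune them to your study.
    """
    attractors = {}
    for node, per_task in clicks.items():
        # Tasks where this node was NOT the intended answer,
        # yet still drew a meaningful share of participants.
        lured = [task for task, share in per_task.items()
                 if correct.get(task) != node and share >= min_share]
        if len(lured) >= min_tasks:
            attractors[node] = sorted(lured)
    return attractors


# Hypothetical data loosely modeled on the consumer.org.nz example:
clicks = {
    "Appliances > Personal": {"camera": 0.08, "drill": 0.06,
                              "laptop": 0.07, "shaver": 0.70},
    "Appliances > Kitchen":  {"camera": 0.02},
}
correct = {"camera": "Electronics > Cameras", "drill": "Tools > Drills",
           "laptop": "Electronics > Laptops",
           "shaver": "Appliances > Personal"}

attractors = find_evil_attractors(clicks, correct)
# "Appliances > Personal" is flagged for camera, drill, and laptop
# (shaver doesn't count - it's the intended answer for that task).
```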
Why do they happen?
It’s usually not hard to figure out why an item in our tree is an evil attractor. In almost all cases, it’s because the item is vague or ambiguous – something that could mean a lot of different things to a lot of different people.
Look at our example above. In the context of a product-review site, “Personal” is too general to be a good heading. It could mean products we wear, or carry, or use in the bathroom, or any number of things. So, when those participants come along clutching a task, and they see “Personal”, a few of them think “That looks like it might be what I’m looking for”, and they go that way.
Individually, those choices may be defensible, but as information architects, are we really going to group mobile phones with vacuum cleaners? The “personal” link between them is tenuous at best.
How do we get rid of them?
Just as it’s easy to see why most attractors attract, it’s usually easy to fix them.
Evil attractors trade in vagueness and ambiguity, so the obvious remedy is to try to make those headings more concrete and specific.
In the consumer-site example, we looked at the actual content under the “Personal” heading. It turned out to be items like shavers, curling irons, and hair dryers. A quick discussion yielded “Personal care” as a promising replacement – one that should keep away people looking for mobile phones and jewellery and the like.
In the second round of tree testing, among the other changes we made to the tree, we replaced “Personal” with “Personal care”. A few days later, the results confirmed our thinking – our former evil attractor was no longer luring participants away from the correct answers:
Evil attractors as waypoints
Above, we learned how to spot evil attractors that were final endpoints in our tree. However, topics higher in the tree can also be evil attractors. They may be top-level headings, or they may be in lower levels. But in all cases, they are waypoints that are attracting traffic when they shouldn’t, across several unrelated tasks.
- Top-level headings are usually easy to check for evil attractors, if the tool we’re using has a way of highlighting “first clicks”. If we see a level-1 topic getting lots of incorrect clicks across several tasks, chances are that it’s an evil attractor.
A very common culprit is an “other stuff” heading like Resources. It’s so general that it will get all kinds of traffic for all kinds of reasons.
- Mid-level headings typically need a bit more detective work to see if they’re evil attractors. Usually this means eyeballing all of our task results to see if certain mid-level topics are luring clicks when they shouldn’t. For more, see Where they went earlier in this chapter.
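The first-click check for top-level headings can also be sketched in a few lines. The snippet below assumes a simple data format (each participant’s path is a list of node labels, starting at a top-level heading); any heading that draws wrong first clicks across two or more tasks is worth a closer look. The format and the two-task threshold are assumptions for illustration, not features of any particular tool.

```python
from collections import defaultdict

def first_click_misses(paths_by_task, correct_top, min_tasks=2):
    """Tally top-level headings that draw wrong first clicks.

    paths_by_task: {task: [path, ...]} where each path is a list of
                   node labels, starting with a top-level heading
    correct_top:   {task: intended top-level heading}
    """
    misses = defaultdict(set)
    for task, paths in paths_by_task.items():
        for path in paths:
            if path[0] != correct_top[task]:
                misses[path[0]].add(task)
    # Keep only headings lured across several unrelated tasks.
    return {heading: sorted(tasks) for heading, tasks in misses.items()
            if len(tasks) >= min_tasks}


# Hypothetical data: "Resources" soaking up wrong first clicks.
paths_by_task = {
    "find pricing":    [["Resources"], ["Products"]],
    "contact support": [["Resources"], ["Support"]],
    "read blog":       [["Resources"]],  # correct here, so not counted
}
correct_top = {"find pricing": "Products",
               "contact support": "Support",
               "read blog": "Resources"}

suspects = first_click_misses(paths_by_task, correct_top)
# "Resources" is flagged for "contact support" and "find pricing".
```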