Results from a single tree

If we only tested a single tree in our first round, there are two likely outcomes:

Results from competing trees

If we tested several alternative trees (as we recommended in "The design phase: creating new trees" in Chapter 3), we're obviously keen to see which performed best (so we can pursue them) and which performed worst (so we can discard them).

If the overall success rates point to a clear winner, we should go with that tree and then revise it to be even better.
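To illustrate, here's a minimal Python sketch (using made-up trees and numbers, not real study data) of how we might tally overall success rates across competing trees to spot a winner:

```python
# Hypothetical results: for each tree, a list of (successes, participants)
# per task. The tree names and figures are illustrative only.
results = {
    "Tree A": [(18, 25), (20, 25), (12, 25), (22, 25)],
    "Tree B": [(15, 25), (21, 25), (19, 25), (20, 25)],
    "Tree C": [(10, 25), (14, 25), (11, 25), (13, 25)],
}

def success_rate(tasks):
    """Overall success rate: total successes divided by total attempts."""
    successes = sum(s for s, _ in tasks)
    attempts = sum(n for _, n in tasks)
    return successes / attempts

# Rank the trees from best to worst overall success rate.
rates = {tree: success_rate(tasks) for tree, tasks in results.items()}
for tree, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{tree}: {rate:.0%}")
```

In practice we would also look at per-task scores (a tree can win overall while failing badly on one task), but an overall ranking like this is a quick first cut.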

If there is more than one “winner”, we can still discard the poor trees and narrow our field:

Keeping a record

As we select and discard trees (and elements within them), and make changes along the way, it's a good idea to keep a record of:

Updating correct answers accordingly

If we make any substantial revisions to our tree(s), we should re-test those revisions to make sure our changes actually work.

As we make our revisions, we must also remember that these changes may affect the correct answers we’ve marked for our tasks.

Pandering to the task

When we analyze individual tasks, especially low-scoring ones, it’s natural to want to fix the problems we discover. This usually means shuffling or rewording topics.

That’s all well and good, but we need to make sure that we’re making a change that will help the tree perform better in real-world use, not just for this single task.

If we’re considering a change to our tree to fix a low-scoring task, we should make sure to:

