Tuning survey questions
In addition to tasks that need revising, we may find that some of our survey questions didn’t quite hit the mark the first time around.
For example, we may have given participants a multiple-choice question that didn’t include the answers they needed, so we got a lot of “other” responses:
When you first got interested in electric vehicles, where did you go for information?
48 - Google or other web searches
8 - Facebook groups
5 - Car manufacturers/dealers
46 - Other (please describe)
In this example, nearly half of our participants chose "Other", and many of them wrote in the same answers ("my friend who owns an EV", "government transportation website", etc.). We realized that we had missed several popular answers in our initial question.
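If the write-in answers are exported as data, even a small script can surface the popular categories hiding inside “Other”. Here is a minimal Python sketch, assuming a list of free-text responses and some hand-picked keyword buckets (both invented here for illustration, not from our actual study):

```python
from collections import Counter

# Hypothetical write-ins from the "Other (please describe)" field.
other_responses = [
    "my friend who owns an EV",
    "government transportation website",
    "my neighbor's recommendation",
    "gov.uk EV grant page",
    "a colleague at work",
]

# Hypothetical keyword buckets for grouping similar write-ins.
categories = {
    "Word of mouth": ["friend", "neighbor", "colleague", "family"],
    "Government websites": ["government", "gov.", "dmv"],
}

def categorize(response: str) -> str:
    """Map a free-text write-in to the first matching bucket."""
    lowered = response.lower()
    for label, keywords in categories.items():
        if any(keyword in lowered for keyword in keywords):
            return label
    return "Uncategorized"

# Tally the buckets; the big ones become answer choices next round.
tally = Counter(categorize(r) for r in other_responses)
for label, count in tally.most_common():
    print(f"{count:3d}  {label}")
```

Whichever buckets dominate the tally are the candidates to promote into explicit answer choices in the next round of the survey.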
This was a useful result, because not only did we learn something for our research, but we were also able to improve this question in our next round of testing, turning that big blob of “other” into more useful specifics:
When you first got interested in electric vehicles, where did you go for information?
- Google or other web searches
- Facebook groups
- Car manufacturers/dealers
- Government websites
- Word of mouth (neighbors, friends, colleagues, etc.)
- Other (please describe)
Again, by revising our questions between rounds, we lose some of our ability to compare the results against each other, but given the choice between analyzing two sets of cloudy data and one set of clear data, we’ll always take the latter.