People say there’s no such thing as a bad question, and while I agree with that in principle, the notion falters in specific contexts. When it comes to original research and surveys, bad questions absolutely exist — and they can sabotage your results. Phrase them wisely, though, and you’ll end up with unique and compelling insights to fuel your thought leadership content engine.
Let’s take a quick look at the most important step in any original research process: developing your questions. We’ll talk dos, don’ts and tips to help you come up with the right queries that lead to rock-solid responses.
4 Common Question Mistakes That Sabotage Results
Before exploring the right way to approach question creation, it’s worth considering a few common mistakes, so you don’t have to backtrack later.
1. Double-Barreled Questions
Asking two (or more) things at once isn't just confusing; it forces respondents into an answer that may not be fully accurate. If they agree with or have strong opinions about one part of a double-barreled question but feel differently about the other, they can't give a clear picture of either.
Example: “How satisfied are you with our product quality and customer support?”
Split these into two distinct questions so participants can give a clean, focused answer to each, keeping your results unambiguous and easier to analyze later.
Instead, try:
“How satisfied are you with the quality of our product?” and “How satisfied are you with your most recent interaction with our customer support team?”
2. Leading or Biased Language
Wording that nudges respondents toward a specific answer artificially inflates agreement or positivity, leading to results that reflect the question’s framing rather than true sentiment. This can undermine your credibility and create brand trust issues when it’s time to present your findings.
Example: “How helpful was our award-winning customer service?”
Questions need to be fair and neutral to produce unbiased and accurate results — like this:
“How helpful was your most recent interaction with our customer support team?”
3. Ambiguity
Questions that lack context or specificity force respondents to fill gaps with their own assumptions, leading to answers that are inconsistent and unreliable.
Example: “Do you use our platform often?” (Often compared to what?)
“Often” means different things to different people, and respondents may be thinking about different timeframes or use cases.
A better question here might be: “In the past 30 days, how frequently have you logged into our platform?”
4. Making Assumptions
Asking questions that presume respondents understand a concept, feature or process is bound to create inaccuracy. People may guess, skip or answer in a way you didn’t intend, which can skew results.
Example: “How would you rate our API documentation?” (when many users may not use the API at all).
Instead, start with a qualifying question like: “Have you used our API documentation?” If they answer “Yes,” follow up by asking them to provide a rating; if they answer “No,” route them to the next question.
Best Practices for Structure, Sequencing and Answer Matching
What you ask is one thing, but how and where you ask it — and the answer options you provide — are all important parts of the question creation process too. Once you know what you’re going to ask, consider the following:
- Group related questions together to reduce cognitive load and help respondents stay focused on one topic at a time.
- Start broad, then narrow. Open with general questions before moving into specifics to avoid anchoring or biasing responses.
- Use consistent scales and formats (e.g., the same 1–5 agreement scale) so answers are easier for respondents to process and for you to analyze later.
- Match the answer choices to the question by ensuring options are mutually exclusive, collectively exhaustive and aligned with how people actually think. In other words, answer options shouldn’t overlap, shouldn’t leave anyone out and should feel intuitive to the person taking the survey.
- Include “Not applicable” or “I don’t know” when appropriate to avoid forcing inaccurate responses.
- Keep surveys as short as possible by cutting nice-to-have questions. Shorter surveys see higher completion rates.
Final Thoughts
Good questions are the catalyst for solid survey results. When what you’re asking is clear, structured and focused, respondents can give their best, most accurate answers. That not only makes the survey less of a headache to complete, but also yields quality data that’s truly representative of your audience.
We used contentmarketing.ai to help draft this blog. It’s been carefully proofed and polished by Chad Hetherington and other members of the Brafton team.

