Pitfalls to Avoid in Product Research
If you’re a product person just starting to take research into your own hands, the key message of this post is one of reassurance. Getting feedback from your intended customers is so valuable that it’s best to simply get started and learn as you go.
That said, there are some pitfalls to look out for to make sure you get valid results that will help you make your product a hit. Based on thousands of research tests yielding millions of data points, we’ve identified some of the pitfalls that beginners are most likely to encounter and documented them here. We also provide recommendations for remedial action, in case you’ve accidentally veered off course.
This is not to say you’ll have earned a black belt in market research after reading this blog. After all, research technology platforms exist so that you can get good data without becoming a research expert. But these learnings, preferably combined with the intelligence embedded in a CX platform, should enable any product manager to bring high-quality consumer feedback to critical decisions.
This is the fifth blog in our Beginner’s Series. You can read A Beginner’s Guide to Product Research to start the series from the beginning.
Lowering the cost of course correction
The cost of making a mistake during a research project varies widely, depending on its scope and the methods used to gather the data. For example, a research project executed by an agency with custom audience sourcing and survey design may cost tens of thousands of dollars and take months to execute. In this type of “big bang” project, a mistake that invalidates some or all of the data, or even valid results that raise additional questions, can require you to restart or expand the project, adding both cost and time.
In contrast, the short surveys that are conducted using a platform like DISQO's typically have a much lower cost and are fully executed in a couple of days. Course corrections or follow-ups are significantly less costly and recovery is faster and easier.
Moreover, platforms offer subscription services designed to facilitate iterative research. You can think of these services as offering built-in course correction capabilities. If survey data raises more questions than it answers, you can simply issue a new survey. That said, we want to optimize our chances for success. So, let’s examine some common places where things can go wrong and how to avoid them.
Survey length and participant fatigue
Anyone who has taken a survey (and we’ve all done this) knows how a lengthy survey can cause participants to become disengaged and provide unreliable data. Participants have expectations about how long it should take to complete a survey, especially in exchange for a fixed incentive. When they feel a particular survey exceeds the norm, they may check out.
The key to avoiding fatigue is setting reasonable expectations and incentives with participants. We recommend you stay within those expectations across every factor that can cause fatigue. Here are some common fatigue factors to avoid, no matter how your survey is fielded:
Excessive open-ended questions - Participants can get fatigued when responses require lots of typing and when these questions appear back to back. With DISQO Experience Suite, we recommend no more than two open-ended questions per survey.
Excessive survey length - Fatigue sets in when the number of questions, regardless of question type, exceeds expectations. We recommend that surveys not exceed ten questions.
Multiple questions with highly granular responses - The more participants must think about ratings, or sort through a large number of response selections, the more likely they are to become fatigued. Limit these to questions where the level of detail is actually needed to meet your learning objectives.
You can often tell when fatigue has occurred because participants will “straight line” their responses, selecting the response in the same position on every question. For example, if the responses offered were A, B, C, D, the participant would select C on all questions. If you notice straight-lining in your data, consider addressing the fatigue factors by issuing a restructured survey and/or splitting the questions into two separate surveys.
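If you export your response data, straight-lining is simple to flag yourself. Here is a minimal Python sketch, assuming each closed-ended answer is recorded as the position of the selected option; the participant IDs, field shapes, and five-question minimum are illustrative assumptions, not part of any platform’s API.

```python
# Minimal straight-line check: a participant who picks the option in the
# same position on every question is a candidate for removal.

def is_straight_liner(response_positions, min_questions=5):
    """Flag a participant who selected the same option position throughout."""
    if len(response_positions) < min_questions:
        return False  # too few questions to judge reliably
    return len(set(response_positions)) == 1

participants = {
    "p001": [2, 2, 2, 2, 2, 2],  # chose option C everywhere -> suspect
    "p002": [0, 3, 1, 2, 2, 0],  # varied answers -> looks engaged
}

flagged = [pid for pid, positions in participants.items()
           if is_straight_liner(positions)]
print(flagged)  # ['p001']
```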
Another way to detect fatigue is to evaluate the speed of a participant’s responses. When a participant answers too quickly, they may be disengaged. Providers can determine whether a participant has spent a reasonable amount of time on each question. Look for a platform, like the DISQO CX platform, that automatically detects and removes these responses from your data.
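If your platform exports per-question response times, a simple speed check is also easy to script. The sketch below assumes times in seconds; the two-second floor and 50% cutoff are illustrative thresholds, not industry standards.

```python
# Minimal "speeder" check: flag participants who answered most questions
# faster than a plausible reading-and-answering time.

def is_speeder(times_sec, min_seconds_per_question=2.0, max_fast_ratio=0.5):
    """Return True if more than half the answers came in implausibly fast."""
    fast = sum(1 for t in times_sec if t < min_seconds_per_question)
    return fast / len(times_sec) > max_fast_ratio

print(is_speeder([0.8, 1.1, 0.9, 1.4, 3.0]))  # True -> likely disengaged
print(is_speeder([6.2, 4.8, 9.1, 5.5, 7.0]))  # False -> plausible pacing
```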
Biased questions
Biased questions can tip your participants to deliver a certain response, hiding their true sentiments. Avoiding biased questions is relatively straightforward once you know what to look for. We’ve written a separate blog, Removing Bias from Questions, with eight guidelines that are easy for beginners to follow.
Too many learning objectives
Another common pitfall occurs when you try to satisfy too many learning objectives with a single survey. This leads to a disjointed and confusing survey structure, plus a lengthy list of questions that cause participant fatigue. The data you collect may be invalid because participants have trouble following the survey logic.
We recommend you set only one learning objective for each survey. This helps keep your research survey short and focused, which ensures that participants provide quality responses from beginning to end. Ask yourself: what do you want to learn by surveying these consumers on this specific occasion?
If you have multiple objectives, you can break them down into separate surveys. After all, part of the point of using a research technology platform is being able to get feedback more often.
No learning objectives
A similar problem occurs when you go on a fishing expedition with no clear objective. This can also result in a survey that is disjointed and confusing for participants. As you formulate your objective, we recommend you consider these factors:
- Who is going to use the research results?
- What is their perspective on the business problem?
- What specific decision do you want to make based on research results?
Poor participant screening
Sometimes surveys return bad data because the wrong audience was invited to participate. As we described in Tips on Developing the Right Audience, you want an audience that is representative of your target market, and you often have to screen participants to find them. But if the screening process goes south, you’ll get bad data.
You can detect a screening problem when response data doesn’t align with your screening criteria. For example, participants may choose I Don’t Know, I Don’t Own [criteria], or N.A. in response to screening questions. The cause may be a poorly constructed screener or leading questions.
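One quick way to audit a screener is to measure how often participants fall back on non-qualifying answers; a high rate usually means the screener, not the audience, is the problem. This sketch assumes answers are exported as text labels; the labels and the 15% threshold are illustrative assumptions, not platform defaults.

```python
# Minimal screener health check: what share of participants gave a
# non-qualifying answer to a screening question?
from collections import Counter

NON_ANSWERS = {"I Don't Know", "N.A."}

def screener_health(answers, max_non_answer_rate=0.15):
    """Return the share of non-qualifying answers and whether it looks suspect."""
    counts = Counter(answers)
    rate = sum(counts[a] for a in NON_ANSWERS) / len(answers)
    return rate, rate > max_non_answer_rate

answers = ["Own a sedan", "I Don't Know", "Own an SUV", "N.A.",
           "I Don't Know", "Own a sedan", "Own a truck", "I Don't Know"]
rate, suspect = screener_health(answers)
print(f"{rate:.0%} non-answers; review screener: {suspect}")  # 50% -> True
```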
Another common mistake is when you invite only your existing customers to participate, instead of a representative sample of your target population. Unless your business is a monopoly, existing customers represent only a portion of the available market.
Data misinterpretation
Sometimes a survey can return ambiguous data. When this happens, it’s not uncommon for product teams to interpret the data according to their own biases and desired outcomes.
It’s worth asking yourself whether the data could be used to support a different decision than the one you prefer. If the results are ambiguous, it is probably wiser to run a follow-up survey with questions that clarify the results than to plow ahead with a poorly supported product decision.
One step closer to quality research data
While this list isn’t exhaustive, it represents the most common pitfalls that can derail you from conducting high-quality product research. These pitfalls and the guidelines for avoiding them are based on the cumulative experience of hundreds of product teams that have preceded you on the beginner’s journey.
And always remember that the ultimate product research pitfall is not doing any!
To learn more about what motivates people to share their customer experience, check out our recent report that covers the five drivers that increase study participation.