“If you survey 10,000 of your customers by email and 200 reply, what will you learn from the responses?” asks Seth Godin on his blog. “You will probably not get a statistically accurate presentation of how your customers feel. What you will get is an accurate understanding of how customers who answer email surveys feel.”
So should we not do research at all? That’s the wrong conclusion. The key is to do the most rigorous sampling that you can.
The problem he points to is a real one: samples certainly can be biased. For example, those who volunteer to take political surveys are not perfectly representative of the entire voting population. But that doesn’t mean it’s impossible to do accurate election polling. Making sure your sample isn’t biased or misleading requires expertise in detecting and correcting for exactly these distortions.
While some ivory tower statisticians continue to bemoan the death of the true probability sample, those of us living in the real world have had to mourn and move on. Whether for political polling or market research, practitioners have largely replaced random samples with quota samples to compensate for the wild variance in response rates by different types of people.
A good quota sample is one that accurately represents the population you’re studying on the controlled variables that matter. In many cases, you may need to control for more than just demographics to make sure you’re representing different groups in the right proportion. For example, if you want to understand the gaming market, you can find out from other sources how many households own an Xbox and how many own a PS4, and recruit within your smaller study accordingly.
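The arithmetic behind quota recruitment is simple enough to sketch. The snippet below (with entirely hypothetical market shares) turns known population proportions for a controlled variable into per-group recruitment targets for a study of a given size:

```python
# A minimal sketch of quota-based recruitment targets.
# The console-ownership shares below are hypothetical; in practice they
# would come from external market data, as described above.

population_shares = {
    "xbox_only": 0.30,
    "ps4_only": 0.45,
    "both": 0.15,
    "neither": 0.10,
}

def quota_targets(shares, sample_size):
    """Allocate recruitment quotas proportional to known population shares."""
    targets = {group: round(share * sample_size) for group, share in shares.items()}
    # Rounding can leave the total a respondent short or over;
    # absorb the difference in the largest group.
    diff = sample_size - sum(targets.values())
    largest = max(targets, key=targets.get)
    targets[largest] += diff
    return targets

print(quota_targets(population_shares, 500))
# → {'xbox_only': 150, 'ps4_only': 225, 'both': 75, 'neither': 50}
```

Recruiting to these targets, rather than accepting whoever happens to respond, is what compensates for the uneven response rates the paragraph above describes.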
Some companies (but not enough) have also begun to optimize their survey instruments so that smartphone-reliant consumers can readily participate. How the questions display on a device and, more importantly, how long and complex the survey is can trigger another form of non-response bias, systematically excluding a subset of the population. Capturing the right demographics but not the right types of technology users within those demographics will not yield a representative sample.
Careful wording of the survey invitation can mitigate further self-selection bias. Stating the client or topic can sometimes boost response rates (if it’s a fun or interesting topic) or lower them (if it’s an embarrassing or boring one), but either way it will introduce bias into the sample, so we recommend more general topic references when asking for participation. We also recommend generic incentives (for example, don’t use a gift certificate to your client’s store) to make sure you’re not getting a sample that’s biased toward those who already love your client’s brand.
Seth warns that Yelp raters “are not the same as people who buy from you”. It’s true that such a biased sample can’t provide a fair understanding of your performance, and certainly can’t replace a rigorously sampled customer satisfaction survey. But Yelp reviews ARE an influential form of word of mouth that reaches your broad customer base, and you ought to be aware of them. Moreover, when combined with properly sampled customer experience survey data, digital ratings can in fact increase the predictive power of your research.
Long story short, it’s perfectly possible to get a sufficiently representative sample in the digital age. All it requires is smart thinking about the population you’re trying to understand and experienced researchers who know how to reach it.
Often Seth is right on the money…just not this time.