The illusion of scientific validity
Let’s face it: public opinion polls are pretentious. They boast that their results are obtained through “scientific” surveys. The implication, of course, is that a scientific survey delivers accurate results. In fact, they claim the results are so accurate that they even offer a numerical range which, they assure us, gives a 95% chance that the true result lies within this “scientifically” determined range.
This range is commonly known as the 95% confidence interval. It measures the error resulting from the random selection of the poll sample, also known as sampling error, and that is the only error it measures. It is calculated through statistical formulas that pretty well nobody questions or, quite frankly, is even qualified to understand. But with claims of “scientific accuracy”, who dares to question their authenticity?
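For the record, the calculation itself is not mysterious. Here is a minimal sketch of the textbook formula behind that range, assuming a simple random sample and a reported proportion; the 52% and 1,000-respondent figures are purely illustrative:

```python
import math

def sampling_ci(p_hat, n, z=1.96):
    """95% confidence interval for a poll proportion, assuming a
    simple random sample. It captures sampling error and nothing else."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
    moe = z * se                             # margin of error at ~95% confidence
    return p_hat - moe, p_hat + moe

# Illustrative numbers: 52% support in a sample of 1,000 respondents
low, high = sampling_ci(0.52, 1000)
print(f"52% is 'accurate' to within ({low:.1%}, {high:.1%})")  # roughly +/- 3 points
```

Notice that nothing in this arithmetic knows, or can know, whether those 1,000 people were a fair cross-section of the population in the first place. That is the crux of what follows.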
Invalid formulas
The fact is, polls are not only pretentious, they’re misleading. The polls are not that “scientific” and the formulas used to calculate these 95% confidence intervals don’t apply in the real world of public opinion polling.
The formulas are not valid because real-world polls do not meet the statistical assumptions used to derive them. For the statistical calculations behind the 95% confidence interval to be valid, the sample would
1. have to be selected from a complete list of all the individuals in the population.
Real-world example: Telephone surveys that use land lines miss the roughly 25% of the population that has only cell phones. This group is younger, more likely to vote Democratic, less affluent, and more likely to belong to minority groups. So much for an accurate representation in the sample if this group is missed.
2. have to be selected in a random fashion such that each person has the same chance of being selected.
Real-world example: Selecting the person who answers the phone means that others in the household have zero chance of being selected. How representative is mom when answering for her teenage daughter?
3. have a 100% response rate, such that each person who was selected would be willing and able to answer the questions on the questionnaire.
Real-world example: Telephone response rates are dreadful. If fewer than 20% of selected respondents participate in a survey (not uncommon), how reliable can inferences from the sample be? And what if certain groups in society, e.g., some minorities or young people, respond at lower rates than the general population? How does this imbalance affect the representativeness of the sample? A small simulation after this list shows how badly things can go when these assumptions fail.
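To make the point concrete, here is a toy simulation of assumption 1 failing. All the numbers are invented: a landline-only frame that misses the 25% of cell-only people mentioned above, who happen to hold a different opinion from landline households. The margin of error is computed exactly as in the earlier sketch, and it is confidently wrong:

```python
import math, random

random.seed(0)

# Invented population: landline households support a measure at 50%,
# the 25% who are cell-only support it at 70%, so true support is 55%.
TRUE_SUPPORT = 0.75 * 0.50 + 0.25 * 0.70   # = 0.55

def landline_respondent():
    """Draw one person; cell-only people can never enter the sample."""
    if random.random() < 0.25:
        return None                        # cell-only: unreachable
    return 1 if random.random() < 0.50 else 0

sample = []
while len(sample) < 1000:
    r = landline_respondent()
    if r is not None:
        sample.append(r)

p_hat = sum(sample) / len(sample)
moe = 1.96 * math.sqrt(p_hat * (1 - p_hat) / len(sample))
print(f"poll says {p_hat:.1%} +/- {moe:.1%}; the truth is {TRUE_SUPPORT:.0%}")
# The interval sits near 50% and usually excludes the true 55%: coverage
# bias, about which the published margin of error says nothing.
```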
Biases
The above real-world examples demonstrate what are called non-sampling errors, or sample biases. They are what happens when the real world meets the theoretical world. Confidence intervals around a statistic become meaningless if you can’t be certain of the statistic itself. Biases are often the result of poor methodological practices, and there are no magical calculations to estimate their effect on the accuracy of sample results. Controlling them can be time-consuming and expensive (multiple callbacks, weighting, hybrid polling, incentives). Pollsters loathe and fear them. In the world of pollsters, it is what separates the men from the boys.
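Of the remedies just listed, weighting is the easiest to illustrate. Below is a toy post-stratification sketch, with invented age groups, population shares, and responses, showing how an estimate skewed by who happened to answer can be partially corrected, but only under the assumption that respondents in each group resemble the non-respondents in that group:

```python
# Toy post-stratification: reweight an unbalanced sample so its age mix
# matches known population shares.  Groups, shares, and responses are invented.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Suppose young people were badly under-represented among those who answered.
sample = {
    "18-34": {"n": 100, "support": 70},   # 70% support in this group
    "35-54": {"n": 400, "support": 200},  # 50%
    "55+":   {"n": 500, "support": 200},  # 40%
}

n_total = sum(g["n"] for g in sample.values())

unweighted = sum(g["support"] for g in sample.values()) / n_total
weighted = sum(
    population_share[age] * (g["support"] / g["n"])
    for age, g in sample.items()
)

print(f"unweighted estimate: {unweighted:.1%}")  # pulled toward older groups
print(f"weighted estimate:   {weighted:.1%}")    # closer to the population mix
# Weighting helps only if respondents within each group resemble the
# non-respondents in that group -- an assumption, not a guarantee.
```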
If you can’t rely on confidence intervals and are uncertain about potential biases, how can the average intelligent layman assess the merits of a poll?
Follow the money
Before a poll is done, the pollster is contracted by a client who pays for it. In theory, the client explains what information is needed from the public, and the pollster designs the sample methodology to gather this information accurately with a poll.
In real life, things work quite differently.
To survive in the business, a pollster learns very quickly that the most important factor in polling is the relationship between the pollster and the client. The pollster realizes that the client has to be satisfied with the results of the poll. Otherwise, no more business. Human nature being what it is, a pollster understands the client isn’t looking for negativity from the poll. So the most critical part of this process is the pollster trying to figure out, in advance of the poll, what would make the client happy. Once a pollster has learned that important truth, he or she can proceed to do the job successfully: provide polling data that will make the client happy.
As the reader can appreciate, this “understanding” has flipped everything upside down. Instead of polls bringing us “truths” from the public domain, they have become marketing tools that provide “scientific” validation for the speculative ideas of the client. While I have slightly exaggerated this dynamic between pollster and client for the sake of clarity, have no doubt that this is the operative equation in organizing and conducting surveys in just about every domain of interest. This is the dirty little secret that all pollsters share.
And that is why the most important question in assessing the merits of a poll is: Who is funding the poll?
This observation is not mere conjecture. There is a mountain of research evidence that poll results tend to be biased in favor of the interests of the clients funding them. The latest data point comes from the 2010 US midterm elections. Analysis of the hundreds of polls conducted during those campaigns provides clear evidence that Republican-funded polls have a significant bias toward Republican candidates and Democratic-funded polls have a significant bias toward Democratic candidates. Curiously, pollsters have decided the problem is not them. It’s those ignorant journalists who write the stories without applying sufficiently “rigorous examination” to the polling methods. Please!
Poll profits
The second key question in assessing the merit of a poll has to do with business decisions relating to the design of the poll. Pollsters want to maximize their profits and they do this by minimizing survey design costs.
Cost cutting decisions include:
- The use of interview robots (automated “robo-poll” systems). These further reduce the “pleasure” of the poll interview and leave respondents with no one to answer any questions they may have.
- Zero callbacks if the selected respondent isn’t available. The end result is the easiest-to-reach sample, not a representative sample.
- Random Digit Dialing (RDD) telephone surveys based on land lines. These often don’t augment the samples with “cell-only” respondents because the law requires that cell numbers be hand-dialed, which significantly increases costs over RDD.
Shortcuts like these substantially increase the likelihood of biased, unrepresentative results.
Most readers of poll stories don’t have a clue about the inner machinations of the shortcuts pollsters use to cut costs and, in so doing, the bias they introduce into their results. Some reporters actually do a good job of explaining to readers the pros and cons of the methodological decisions that have gone into the design of a poll. I would argue that one of the most important pieces of technical information, one that should be included in every poll story, is the response rate. I believe it is more critical to assessing a poll’s validity than sampling errors or confidence intervals. Usually, however, these technical points are not addressed in polling stories. The only thing readers can do is some quick and dirty googling to determine the professional reputation and integrity of the pollster conducting the survey.
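For readers who do manage to dig up the numbers, the basic arithmetic of a response rate is simple. Here is a crude version with invented call dispositions; the formal AAPOR response-rate definitions distinguish more cases, but the idea is the same:

```python
# Crude response-rate arithmetic with invented call dispositions.
completed = 180      # interviews actually completed
refusals = 520       # people reached who declined
never_reached = 300  # numbers presumed eligible but never reached

eligible = completed + refusals + never_reached
print(f"response rate: {completed / eligible:.0%}")  # 18%, below the 20% mark cited above
```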
Once the funding source and survey quality are determined to be acceptable, then one really has the confidence to begin to look at confidence intervals. But only then.
I must admit this all sounds very depressing. Why bother doing polling or reading about it if it’s so fraught with uncertainty? Because in a modern democracy its importance cannot be overstated. Public opinion polling is a critical information channel that informs the government and political parties about the public’s thinking. It is too precious a resource to squander on polling that lacks uniform (or even minimal) standards and transparency. The industry needs to get its act together, the sooner the better. In the meantime, the only advice I have for the poor schlep trying to separate polling wheat from polling chaff is caveat emptor (buyer beware). Good luck.