Why are American polls more successful in predicting election results than Canadian polls?
The question isn’t merely an attention-getter. Soon there will be general elections held in Quebec, Ontario, and nationally. Polls, as usual, will play a big role in how the media cover the elections, and, ultimately, how people vote. If the polls prove to be as inaccurate as they have been recently, the Canadian electoral process is in big trouble.
The latest Canadian blowout was this year’s B.C. provincial election where the polls predicted an easy NDP victory only to see the incumbent Liberals returned to power. Equally memorable were the missed calls in Alberta last year and the federal election in 2011.
In an effort to explain, pollsters trotted out the usual suspects: response rates, negative ads, voter turnout, sample representativeness, weighting, and so on. But it's clear the exercise was a fishing expedition. There is no consensus on the cause of these failures.
American pollsters, on the other hand, have mostly managed to avoid such humiliations over the same period.
In the search for the culprit, no Canadian pollster seriously investigated the possibility that voters mislead polls about their voting intentions, quite possibly without even meaning to do so. And yet there is strongly suggestive evidence for this hypothesis that demands examination.
There’s a good reason why pollsters are not enamoured of this hypothesis. If true, it undermines the legitimacy of some, if not all, of their prediction polls. How can polls predict reliably if the voting behaviour of a significant number of voters contradicts their expressed voting intentions?
For news media, the implications are equally unpleasant. If polls cannot reliably predict election outcomes, why do news media continue to run stories about who’s winning the horse race when the prediction tool is broken?
The unpredictability of voting intentions
The evidence of a fundamental disconnect between what the public admits to pollsters and what it does in the voting booth is based on a revealing voter intention study undertaken by Todd Rogers and Masa Aida at Harvard’s Kennedy School of Government. The study included 29,000 people and compared their responses to six pre-election surveys, mostly from the 2008 U.S. presidential election, with their actual voting behaviour.
Rogers and Aida looked at a question all prediction polls ask: How likely is a respondent to vote in the upcoming election? The question is important because only a little more than half of eligible voters get out and vote. (In the 2008 presidential election, the turnout was 62 per cent; it dropped to 58 per cent in the 2012 election.) Pollsters use this question among others to help identify those likely to vote and base their election day projection on this group. Voters who indicate they are not likely to vote are excluded from the projection.
The study found that among those who said they were likely to vote (“almost certain to vote” and “probably”), 14 per cent did not vote. Among those who indicated in the polls they were not likely to vote (“chances are 50-50” and “don’t think I will vote”), 63 per cent actually voted on election day. This held regardless of whether the polls were conducted just before election day or earlier.
Also, polls tend to significantly under-represent the number who for one reason or another indicate they are not likely to vote. That’s understandable. Prediction polls aren’t interested in non-voters. But given that many in this group will vote, they should be.
For the polls used in the voter intentions study, the estimate of non-voters was about seven per cent. However, on election day of the 2008 presidential election, non-voters represented 38 per cent of all eligible voters. Based on the data from this study of how respondents actually voted (from both the “likely” and “not likely” groups), non-voters would total less than half of 38 per cent. This is a significant polling bias in favor of those who say they are likely to vote.
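The arithmetic behind that claim can be sketched directly from the article’s figures. A minimal calculation, assuming the polls split the sample roughly 93/7 between “likely” and “not likely” voters (the 7 per cent figure is from the article; the 93 per cent is simply its complement):

```python
# Sketch of the turnout arithmetic implied by the Rogers and Aida figures.
# Shares are taken from the article: about 7% of poll respondents were
# classified "not likely" to vote; the remaining 93% is assumed "likely".
likely_share = 0.93
not_likely_share = 0.07

# Crossover rates from the study: 14% of "likely" respondents stayed home,
# and 63% of "not likely" respondents actually voted (so 37% stayed home).
likely_no_show = 0.14
not_likely_no_show = 1 - 0.63

# Non-voter share implied by how the sample actually behaved
implied_non_voters = (likely_share * likely_no_show
                      + not_likely_share * not_likely_no_show)
print(f"{implied_non_voters:.1%}")  # 15.6%
```

The implied 15.6 per cent is indeed less than half of the real-world 38 per cent non-voter rate, which is the gap the article points to.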
For pollsters, ignoring the voting behaviour of “not likely” voters while underestimating the size of the group creates fertile ground for inaccurate voting projections.
The conflicted unhappiness of those not likely to vote
The reason? Many who classify themselves as “not likely” voters do so because they are deeply disappointed with the conduct of U.S. politics. Evidence of this comes from a poll of “not likely” voters conducted by USA Today/Suffolk University during the 2012 U.S. presidential campaign.
The poll found that a majority (59 per cent) complained that they weren’t interested in politics because nothing ever gets done—“it’s a bunch of empty promises.” Many (54 per cent) justified their lack of interest on the belief that politics was corrupt. Their cynicism included not only politicians, but also political institutions like the Congress, the Supreme Court and the presidency.
For pollsters, this disappointment manifests itself as a wholesale rejection of the political candidates queried in their polls. Many respondents expressed no interest in even participating in election polls.
But why do so many among them end up voting on election day?
The USA Today/Suffolk University study found respondents were deeply conflicted over whether to vote. Most (79 per cent) felt the federal government played an important part in their lives, and many were bothered that by not voting they would let others select the president. A large majority also indicated they would go out and vote for their candidate if they felt their vote would count.
Hence, on election day, whether stimulated by civic-duty guilt or campaign propaganda, many in the “not likely” group cast a ballot. A large bloc of voters who the polls reported were not aligned with any candidate turns out, on election day, to be distributed among the candidates.
The consequences of this behaviour can be enormous.
Consequences of voters voting when they said they won’t
If, by luck, the choices on election day of the “not likely to vote” group are similar to those who say they are likely to vote, then the ability of polls to predict election results is not undermined.
But if their preferences are different, the poll prediction model is broken.
For example, many of the dissatisfied, “not likely” voters may be displeased with the performance of the incumbent party but not enough to switch to another party. They show their dissatisfaction by not aligning with any of the candidates polled. On election day, perhaps influenced by the election campaign or choosing the lesser of two evils, they vote in favor of the incumbent party. Seemingly out of nowhere, the incumbent party ends up with a bloc of voters that is not accounted for in any of the polling.
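The mechanism just described can be illustrated with a toy calculation. All numbers below are invented for illustration (they loosely echo the margins seen in recent Canadian elections, but come from no real poll):

```python
# Hypothetical illustration of how an unaligned bloc can flip a projection.
# Every figure here is invented for illustration, not taken from any poll.

# Poll of decided, "likely" voters: the challenger leads by 8 points,
# with an 18% bloc that refuses to align with any candidate.
poll = {"challenger": 0.45, "incumbent": 0.37, "unaligned": 0.18}

# Suppose on election day the unaligned bloc breaks 85/15 for the incumbent.
incumbent_vote = poll["incumbent"] + 0.85 * poll["unaligned"]
challenger_vote = poll["challenger"] + 0.15 * poll["unaligned"]

projected = poll["challenger"] - poll["incumbent"]
actual = incumbent_vote - challenger_vote
print(f"projected margin: challenger +{projected:.0%}")  # challenger +8%
print(f"actual margin: incumbent +{actual:.0%}")         # incumbent +5%
```

A projected eight-point defeat becomes a five-point incumbent win, without a single decided voter changing sides: the entire swing comes from the bloc the polls could not assign to anyone.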
Polls projecting a regime change can be a manifestation of this behaviour—a projected polling defeat turns into an election day incumbent victory.
Alternatively, this dissatisfied group could side with the rival party due to the party’s more persuasive campaign. For the rival party, this could mean the difference between a projected polling defeat or, at best, an uncertain outcome, versus a substantial election day victory.
As evidenced by the low election turnouts in Canada, it’s fairly safe to assume we Canadians share much of the same disappointment in politics as do American voters. Hence the polling biases measured by Rogers and Aida are likely relevant in a Canadian context. In fact, they may be more significant for Canada. For example, polls predicting a modest victory may augur a minority government whereas a more substantial victory prediction would suggest a majority government. These differences would have significant consequences on the campaign narrative in Canada.
But for pollsters, all this falls completely under the radar. What is on the radar are vote projections that are too often completely at odds with the popular vote on election day. The misfiring election prognostications in Canada are striking demonstrations of this phenomenon. In all three elections, the polling results differed from the election day figures by far more than any sampling error estimate could account for.
Canadian polling blowouts
In the case of the B.C. election, the polls were showing the NDP with an eight per cent to nine per cent lead just before election day. However, on election day, the winners were the incumbent Liberals with a five per cent popular vote advantage. The weak Liberal numbers during the campaign were an expression of voter dissatisfaction with political missteps of the incumbent party such as the HST flip-flop and other political scandals. On election day, many of the Liberal voters decided to return to the incumbent fold, making a mockery of the polling predictions.
The polls themselves offer evidence in support of this “dissatisfied voter” explanation. Two days after the B.C. election, Ekos Research replicated its pre-election poll and found both the Liberals and the NDP within sampling error of election day results. Since both polls were methodologically identical, the only difference was the absence of motivation for voters to mislead the pollster on how they plan to vote.
In Alberta, polls had the Wildrose Party ahead by seven to 10 per cent—until voters readjusted that figure to plus-10 per cent in favour of the PCs on election day. While the dynamics of the two provincial elections differed in some significant ways, and the discrepancy could be explained by methodological shortcomings, it is equally plausible that the results were caused by dissatisfied PC voters who temporarily pumped up Wildrose numbers during the campaign simply by withholding their support, only to revert to the incumbent PCs on election day.
In the 2011 federal election, all the polls predicted a minority government, whereas election day delivered a majority Conservative government. It was an enormous embarrassment for the polling industry. Central to this failure was overstating Liberal strength in Ontario. Many traditionally Liberal voters chose to express their dissatisfaction with the party’s campaign by withholding their support, and by the end of the campaign switching their vote instead to Conservative candidates. These dynamics were in large part invisible to polls for the reasons noted above. They were, however, the difference between a minority and a majority polling prediction.
The accuracy of U.S. prediction polls
While this biasing mechanism can explain the flagrant missed calls in recent Canadian elections, it doesn’t seem to have played any significant role in American elections. Poll predictions came within sampling error in both the 2008 and 2012 presidential elections. They were also fairly accurate in predicting results in the critical swing states that essentially determine the election outcome. This accuracy was instrumental for poll aggregators like Nate Silver, helping him achieve his remarkable 2012 predictions for the presidential electoral college vote as well as the popular vote.
Predicting the winner in 2012 was not easy. The final polls of likely voters put Obama ahead of Romney by just 1.6 per cent. Perhaps more salient, the USA Today/Suffolk University poll of those “not likely” to vote showed that many first-term Obama supporters (44 per cent of the sample, versus 20 per cent who had supported McCain) preferred to sit on the sidelines rather than support Obama for a second term. Obama’s 2012 victory was won with 3.5 million fewer votes than in 2008.
For all the times during the campaign that Romney shot himself in the foot, he still managed 47 per cent of the popular vote to Obama’s 51 per cent. The strength of GOP support in defeat underlines the depth of partisan divisions among American voters. Had the GOP mustered a somewhat more palatable, less gaffe-prone candidate, the presidency was theirs for the taking. Even now, that realization must make the GOP leadership sick to their stomachs.
The importance of party loyalty in voting
The discrepancy between U.S. and Canadian prediction polls raises the question of whether Canadian pollsters are doing something wrong, or whether the Canadian electorate is in some way fundamentally different from American voters. The results from the Rogers and Aida study strongly suggest it’s the latter.
In a key finding, the study found that the accuracy of voter self-predictions was significantly correlated with the consistency of previous voting behaviour—“people are more accurate when predicting they will behave consistently with their past behavior than when predicting they will behave inconsistently.” In other words, those who had not voted in the past and predicted they would not vote in the upcoming election were more likely not to do so than those who had voted. Similarly, those who had voted in the past and predicted they would vote in the upcoming election were more likely to do so than those who had not. Self-predictions became unreliable when they were inconsistent with past behaviour.
Consistency of voter behaviour is greatly influenced by voter loyalty to political parties. Although the roots of party loyalty are complex, factors that strengthen loyalty include partisan political propaganda disseminated through mass media. In the U.S., this propaganda machinery is extremely effective, particularly as a large part of the electorate is poorly informed on important national issues and the issues themselves can be extraordinarily complex. Recent examples that come to mind include the ACA (“Obamacare”) and the Dodd-Frank Act (banking reform). In such circumstances many voters defer to party positions that are communicated by the propaganda machine. This strengthens their reliance on the political parties and increases the likelihood of blindly casting their vote for those parties on election day.
In Canada, while it would be wonderful to suggest that Canadian voters are more savvy politically, the truth is that the strength of mass media political propaganda is significantly less potent due primarily to more modest election funding resources than in the U.S. Hence, party loyalty in Canada is not nearly as strong. The happy result (not so much for pollsters) is an electorate more amenable to change than in the U.S.
A good demonstration of such a change was the 1993 federal election where the ruling Progressive Conservative party was left with only two of the 156 seats they held previously. One cannot even imagine a similar situation where the Republicans or Democrats are left with only two seats in the Senate or House of Representatives as a result of an election. The near disappearance of the Bloc Québécois in the 2011 federal election (dropping from a provincial majority of 47 to only four seats) was also quite striking.
Clearly, the Canadian electorate has a significant capacity for vote switching between elections. The findings of Rogers and Aida suggest this makes self-prediction questions in polls far less accurate predictors of election day behaviour than in the U.S. experience.
Hence, when polls are predicting a big switch from the party in power to another party, as was the case in B.C., self-prediction in polls is in its least accurate guise. In these situations, pollsters need to be extremely careful in their prediction calls.
Impact of wrong polls on political journalism
For pollsters, the inconsistency of unhappy voters withholding their voting decisions until election day represents a nightmare scenario. That’s because for consumers of polling data—the public, politicians, pundits, and the media—predicting who will win the horserace is the central enticement of prediction polls. If it turns out that the polls cannot reliably predict who will win, they will have lost their raison d’être.
Among the news media that commission these polls and are rewarded for the investment with increased subscriber usage, this outcome is awkward, to say the least. How do they justify the expenditure when election day results contradict the polls? Should they apologize to subscribers for stories based on inaccurate statistics?
And what about the erosion of public confidence that all of this creates? Public confidence in newspapers has been deteriorating steadily over the past few decades; it now stands at 23 per cent, compared to 51 per cent in 1979. Publishing political analysis based on faulty polls doesn’t help.
More directly, how do pundits and political journalists employed by these media feel about writing stories explaining why Party X is ahead of Party Y, as reported by the polls, when election day results reveal the opposite may have been true? What confidence can they have in the polling enterprise? And why would any respectable journalist continue using a story source that proves time and again to be unreliable? In frustration, journalists like Tim Harper of the Toronto Star and Andrew Coyne of the National Post have simply decided to throw in the towel and no longer rely on poll predictions.
Unfortunately, that would be throwing the baby out with the bathwater. Even polls that fail to predict offer useful information.
For all their deficiencies, publicly-available election polls are central to our democratic process. We need to know what fellow citizens are thinking on public issues. Polls help reveal this public consensus while at the same time helping to create it. Not being able to predict election outcomes may damage this process, but it doesn’t destroy it.
The reality is that even in the absence of publicly-available opinion polls, there would still be polls. However, they would be secretive, private polls funded by organizations and individuals who would be in a position to use the results to manipulate public opinion for their own interests. This most assuredly would not be in the interests of the public and democracy.
In Canada, everyone who is part of this process has to come to terms with the uncomfortable reality that sometimes polls will lie.
Pollsters need to better understand how and when this happens. Researching this problem and coming up with a solution is perhaps one of the biggest challenges Canadian pollsters have ever faced.
At the same time, political journalists and the media they work for have to factor in the possibility that when polls predict an election outcome, they may be completely off the mark. Since the media’s influence in constructing an election campaign narrative is substantial, failing to mention this possibility in their stories would be irresponsible and damaging to a fair and balanced election process.
Until such time that pollsters solve the problem of misleading responses, getting it right when analyzing election poll data will be a most challenging and messy undertaking for Canadian journalism.