NSA spying, Edward Snowden, and the polls: Misrepresenting public opinion

Using inappropriately vague and misleading questions, polls have found an American public evenly divided in their support of NSA domestic espionage — and on whether Edward Snowden’s role in revealing the breadth and depth of it makes him a patriot or a traitor. Closer scrutiny indicates these divisions are more likely the result of systemic methodological biases in the polls than an expression of genuine opinion. This points to a far more troubling problem: Bad polls subvert a fair and balanced public debate on mass government spying, resulting in potentially anti-democratic remedies.

And Canadians shouldn’t tell themselves this is just an American problem. Testifying before a Senate committee, Canadian intelligence agencies seem to feel that mass spying falls nicely in their bailiwick, legal constraints be damned. Regrettably, there is no Canadian Edward Snowden to blow the whistle on these operations.

A poll conducted by Pew Research in July of last year is a good example of this built-in bias. It asked respondents if they “approve or disapprove of the government’s collection of telephone and Internet data as part of anti-terrorism efforts”. The results showed 50 per cent approved while 44 per cent disapproved.

If a respondent didn’t know much about the issue, spying on some phone calls that could lead to exposing terrorist plots would seem like a reasonable trade-off between protecting privacy rights and fighting terrorism.

In fact, many knew very little about the issue. When Pew asked if respondents had heard about this government spying program, about 50 per cent indicated they heard little or nothing at all.

But what if, before answering the question, respondents were given the background to make an informed choice? What if they were told that it wasn’t just “some phone calls” but all phone calls? And what if they were told this monitored phone traffic wasn’t just among legitimate terror suspects but included family members, friends and associates? And not just phone calls but all emails, texts and other Internet communications, including website visits, over a period of years?

And what if they were told also that for all the years this spying system has been in place, it has been almost completely ineffective in thwarting terrorism? By the NSA’s own admission, of the 54 terrorist plots that were disrupted, only one or at most two relied on mass telephone spying (most interceptions involved other programs, such as PRISM). Would the average American see any economic sense in the government continuing to spy on the telephone calls of all of its citizens?

That mass telephone spying is ineffective in deterring terrorism was, in fact, the conclusion of President Obama’s own review panel on NSA spying. It recommended that the telephone spying program be significantly curtailed and given much more oversight than before. The program was also sharply criticized by the Privacy and Civil Liberties Oversight Board, an independent federal agency, which concluded that NSA telephone spying had “minimal” counterterrorism benefits, is illegal and should be shut down. In another rebuke, a federal judge questioned the program’s constitutional legitimacy.

Prior to all these revelations, a Pew poll reported that 53 per cent of Americans said they believed that “the government’s collection of telephone and Internet data has helped prevent terrorist attacks.” That was the administration line. But the evidence does not support the claim.

Had the public known of the spying program’s shortcomings, and had poll respondents been more fully apprised of its invasiveness and ineffectiveness in the wording of the questions, support for the program likely would have dropped significantly.

To be fair, it wasn’t just Pew using vague and misleading wording. Other pollsters were equally to blame. A CBS News poll used phrases like “… to reduce the threat of terrorism … (by) collecting phone records of ordinary Americans”. ABC News/Washington Post used the phrase “…extensive records of phone calls … to try to identify possible terrorist threats”. A Time poll mentioned the need “… to prevent terrorist attacks by collecting data on telephone dialing records of U.S. citizens”.

Ironically, the sensitivity of responses to question wording was revealed by a parallel methodological study conducted by Pew Research. It found that support for NSA spying was very much dependent on the wording of poll questions. For example, when the program was described as collecting “data such as the date, time, phone numbers and emails … with court approval as part of anti-terrorism efforts”, the study found 41 per cent in favor and 49 per cent opposed. However, when the program was described as collecting “recordings of phone calls or the text of emails” of nearly all communications in the U.S., with no mention of either court approval or the goal of fighting terrorism, support dropped to 16 per cent in favor and 76 per cent opposed, an increase in opposition of 27 percentage points.
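To put that swing in perspective, here is a minimal sketch of a two-proportion z-test on the opposition figures. The sample size of 1,000 per wording condition is an assumption for illustration; the study’s actual sizes are not given here.

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """z-statistic for the difference between two independent proportions."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)  # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Opposition under the two wordings; n = 1000 per condition is assumed.
p_mild, p_blunt, n = 0.49, 0.76, 1000
z = two_prop_z(p_mild, n, p_blunt, n)
print(f"shift: {100 * (p_blunt - p_mild):.0f} points, z = {z:.1f}")  # ~12
```

A z-value near 12 means the 27-point shift is far beyond anything sampling error could produce; the wording alone moved opinion.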

Armed with this information, why wasn’t Pew Research more careful in its choice of wording for the questions?

Perhaps the central reason was that the program’s lack of an impact on terrorist activity only emerged clearly after the Pew survey was conducted. Prior to Snowden’s revelations about mass telephone spying, Director of National Intelligence James Clapper lied to Congress under oath when he stated that U.S. intelligence agencies did not collect telephone data on millions of Americans. After the revelations, the chief of the NSA, General Keith Alexander, assured Congress that this mass spying program helped thwart 54 terrorist operations. Later, he was forced to admit that the actual number of terrorist operations disrupted by the program was closer to … one.

The other important revelation that has emerged since the survey was taken is that most of the program operated in a de facto warrantless search environment. NSA analysts could spy on anyone, whenever they wanted.

This is crucial in understanding Snowden’s concern with the spying program. Spying was never the issue; the question was and is whether it is done in accordance with the law. Commentary from independent judges and constitutional lawyers casts the program’s constitutionality in doubt, and questions whether the FISA court, the secretive oversight body, has been anything more than a rubber stamp for all forms of NSA snooping.

That said, it’s hardly surprising that a large segment of the American public regards Snowden as a traitor (49 per cent, according to an Angus Reid poll). The U.S. has been subjected to a massive vilification campaign against Snowden for his leaking of NSA spying documents — a campaign driven by the establishment in both the Republican and Democratic parties, as well as the White House. It’s one of those rare national issues that unites both traditional Democrat and Republican voters but finds less support in an odd mix that includes Independents, Tea Party types, conservatives and voters under 30.

This campaign portrays Snowden as a traitorous criminal bent on endangering national security by revealing confidential state secrets in a way that aids terrorists. The reality is that, in the absence of warrants and in an environment of secrecy and deceit, the truly dangerous crimes were committed by NSA. This is an organization that both parties had a hand in creating. It has become a rogue operation that now needs their protection.

In light of this, the ongoing debate as to whether Snowden is a patriot or a traitor seems silly. It simply demonstrates how easy it is to manipulate public opinion when the electorate is ill-informed, the subject matter is complex and terrorism is on the table. That word seems to elicit the most irrational fears — a sort of national hysteria — among Americans. Polls show that year after year (CBS News poll, June 9-10 2013 in PollingReport.com), between 40 per cent and 70 per cent of Americans believe the country will “likely” suffer a terrorist attack in the next few months.

Snowden’s decision was clearly motivated by a desire to see the NSA function within constitutional constraints.  Its failure to do so, and its willingness to have its officials lie under oath before Congress, make the organization liable to criminal prosecution.

The wise men who drafted the American Constitution understood that the purpose of the Fourth Amendment, which guarantees privacy rights, was not to protect individuals who may be doing something illegal. Its intention was to limit the powers of government and the potential for abuse. When a government decides to override Fourth Amendment rights, it is the government that is behaving in a criminal manner, not the citizen. That’s where Snowden and the U.S. government parted company.

An ill-informed public is easily swayed by propaganda. Pollsters need to help information-challenged respondents. One traditional approach is to provide a short but accurate preamble that provides crucial and relevant information a respondent would need to know in order to meaningfully answer a question. Pollsters don’t like to do this. It makes questions wordy, slows down the survey, and reduces response rates. Nevertheless, it’s the price they need to pay to get reliable data.

Given what we know today about NSA spying, these polls simply cannot be relied upon. Polling firms need to go back into the field with properly tested questions — and do it right this time. Otherwise, it’s just a question of whose propaganda is more compelling.

This article was originally posted on August 24, 2014 in iPolitics under the title “Spying blind: How polls provide cover for domestic espionage”.


Putin’s Invasion Plan for Ukraine

While there has been a huge amount of conjecture on what motivated Putin to invade Crimea and where his expansionist ambitions will take Russia, precious little evidence is offered to support it.  Was this a strategic action planned long in advance, or an opportunistic, tactical decision capitalizing on the turmoil of the Maidan revolution?  Is it an invasion that for historical reasons is limited to Crimea, as suggested by Putin, or a precursor of an invasion of mainland Ukraine (as also ominously suggested by Putin)?  It’s safe to say the West is completely at a loss in trying to figure out Putin’s game plan.

Historical precedents to Putin’s game plan

However, what we do know about Putin is that the man is a student of history.  While Western leaders may have very different interpretations of some historical events (e.g., the collapse of the Soviet Union), other events provide a common ground of understanding.  It is here that we can discover the calculus for his actions.  As much as Putin wishes to re-create the power and the glory of Russia’s past, he has every intention of avoiding the disasters that brought the country to its knees.

Some would argue this would make Putin fearful of economic sanctions. That’s missing the point.  Sanction losses have already been baked into the territorial decisions.  However severe they might become (so far they’re inconsequential), they will be temporary.  The world needs Russia’s energy.  The economy will bounce back.

Nor is Putin concerned about whether the Russian army can defeat any conventional military opposition in Ukraine.  The fraudulent Yanukovych regime managed to do this for him by reducing the Ukrainian army’s fighting capability to 6,000 troops.  With between 40,000 and 80,000 well-trained, well-equipped troops positioned on Ukraine’s border, the country could be overrun within a week.

But Ukraine isn’t a Georgia or Chechnya. It’s a country of 46 million with a land mass  that is the second largest in Europe. Putin’s main worry is not whether the Russian army is successful in invading Ukraine — it’s whether it can hold it.

Memories of Afghanistan

The bitter experience in Afghanistan, where it proved easy to invade the country and exceedingly difficult to hold it, is still freshly etched in the memory of the Russian leadership.  The hostile locals exacted a heavy price on the invaders. Over the course of the 10-year war, Russia, then the Soviet Union, suffered about 15,000 military deaths and over 50,000 soldiers maimed or injured. The total cost of the war was over 80 billion dollars. The economic consequences were instrumental in the demise of the Soviet Union in 1991. In spite of these sacrifices, it became apparent to Russia then, as it is for America in Afghanistan today, that victory was impossible.

The central question to which Putin needs an answer is: Will the local population be welcoming or hostile to a Russian invasion?

The case in Crimea

In the case of Crimea the answer was simple.  With ethnic Russians making up approximately 60% of Crimea’s population, most of whom regard themselves as Russian citizens, the invading force was guaranteed a welcoming public.  To make sure of it, two measures were taken.  First, Russian thugs and criminal elements in Crimea were organized into local militias to physically intimidate any dissident voices from the Tatar or ethnic Ukrainian minorities.  Second, communication with the outside world was restricted to Russian propaganda telling locals that the Kiev revolutionary government was controlled by fascists intent on subjugating ethnic Russians.

The case in Ukraine

The situation in Ukraine, and specifically in eastern Ukraine, is far more complicated.  A Russian military invasion there has a much greater chance of following the Afghanistan script than the Crimean one.

That assessment is based on Ukrainian public opinion polls which  strongly suggest that regardless of whether Ukrainians are from predominantly Russian-speaking or Ukrainian-speaking regions of the country, a large majority in both the east and west of the country support an independent Ukraine, not annexation to Russia.  Polls conducted by the Kiev International Institute of Sociology (KMIS) show that support for Ukrainian independence steadily increased from below 60% in 1991 when the Soviet Union collapsed, to over 80% today.

A more recent poll, conducted by the Razumkov Centre in December 2013 (after the Maidan protests began), showed that 95% of all respondents (including 88% in the South) perceived Ukraine as their motherland, and 85% considered themselves patriots of Ukraine.  When asked specifically whether they supported the separation of their region (Western, Central, South-Eastern) from Ukraine and union with Russia, the poll found about 80% opposed to the idea. That hardly seems like data showing a country so divided it is on the verge of civil war, as Russian propagandists would have you believe.

The myth of linguistic differences

That’s not to say there aren’t important ethnic/linguistic differences in Ukraine.  In Western and Central Ukraine a majority speak Ukrainian.  In Eastern and Southern Ukraine a majority speak Russian.  But then, things quickly get complicated.  Many ethnic Ukrainians speak Russian.  Some speak both Russian and Ukrainian.  Some ethnic Russians speak Ukrainian.  Some speak Russian at home and Ukrainian at work, or vice versa.

Over the years there has been a great deal of intermarriage and socio-cultural integration between ethnic Ukrainians and ethnic Russians.  While Ukrainian has constitutional sanction as the official language of the country, the Russian language is protected by that same Constitution.  On a practical level, both languages have equal standing in the country.  In business, Russian is often the preferred language.  Somehow, over time, the two languages have found a peaceful coexistence.

So in spite of the fact that only about 17% of the population consists of ethnic Russians, the share of Ukrainian citizens who speak Russian at home or at work is much higher (ranging from 29% to 46%).  These linguistic/ethnic differences were often exploited by political parties during elections.

It is therefore not at all surprising that when looking at color-coded election maps of Ukraine one sees a country that is politically divided — just as color-coded maps of Canadian and American elections also show the countries are politically divided.  Regrettably, these popular vote distributions have too often been used in Western media accounts to demonstrate a country so divided, secession or a highly decentralized central government (a splintering “federalization” in the words of Russian Foreign Minister Sergei Lavrov) are the only ways to prevent civil strife.

While this “color-coded” division very much plays into the Russian propaganda motif, it is, as the above polls attest, simply the wrong conclusion. The reality is that, to their credit, Ukrainian-speaking and Russian-speaking citizens have overcome their potential divisions and achieved a sense of Ukrainian nationhood that rises above linguistic differences.

The myth of fascist extremists

Nevertheless, Putin and his Russian propaganda machine have decided to take these ethnic/linguistic differences one Orwellian step further.  Without any evidence that could be verified by independent journalists, Putin has claimed that the Maidan revolution was taken over by fascists and nationalist Ukrainian extremists whose intention was to rob ethnic Russians of their constitutional rights, which is exactly what he did to the Tatar and ethnic Ukrainian minorities in Crimea. The only surprise in Crimea’s bogus referendum was that it reported 97% in favor of annexation to Russia rather than 100%.

These extremist fabrications were spread by Russian government agitators in eastern and southern cities with high ethnic Russian concentrations, like Donetsk and Kharkiv.  The resulting fear and anger among ethnic Russians has produced demonstrations that have led to violence and death: exactly the pretext Russia was looking for to justify military intervention in these regions to protect ethnic Russians.

A real problem – economic stagnation

While language issues can be easily manipulated by politicians and propaganda to inflame nationalistic or secessionary fervor, Ukrainians in both the east and the west are deeply troubled by something that can’t be easily manipulated — the country’s lack of economic growth since it became independent in 1991.  Over this span of time, Ukraine’s per capita GDP has increased by only 40%, while Poland, a neighbouring country starting at a similar GDP level, has increased its per capita GDP by almost 400%.
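To make the gap concrete, here is a quick back-of-the-envelope conversion of those cumulative figures into annual growth rates. Treating the span as the roughly 22 years from 1991 to 2013 is an assumption for illustration.

```python
# Convert cumulative per capita GDP growth into annualized rates.
years = 22                                      # assumed span, ~1991-2013
cumulative = {"Ukraine": 0.40, "Poland": 4.00}  # +40% vs +400% cumulative

for country, total in cumulative.items():
    annual = (1 + total) ** (1 / years) - 1
    print(f"{country}: {annual:.1%} per year")
# Ukraine: ~1.5% per year; Poland: ~7.6% per year.
```

Compounded over two decades, that roughly five-fold gap in annual growth produces the stark divergence described above.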

The country’s economic quagmire has led some to suggest that a proxy for the deep divisions within Ukraine is the split between eastern Ukraine, which favors joining the Russian Customs Union, and western Ukraine, which favors the EU.  A KMIS poll from February of 2014 shows that given the choice in a national referendum, a large majority in eastern Ukraine would favor the customs union while a large majority in western Ukraine would favor the EU.

Forcing Ukrainians to choose between the EU and Russia, however, is a misleading polling strategy. For many Ukrainians, trading with Russia and with the EU is not an either/or choice. To flourish economically, the country needs both.

This sentiment is confirmed in a recent poll by the market research company GfK, which found that 56% in eastern Ukraine wanted the country to align itself equally with Russia and the West.  In western Ukraine the figure was 44%, and across the country, 52%.

Opinions such as these are anathema to Putin.  Russia has managed to maintain Ukraine as a colonial state for several hundred years.  If the Maidan revolution gives Ukraine the freedom to develop economic ties with Europe and the rest of the world, it’s game over for this relationship.  That, perhaps more than anything, is what threatens Putin’s imperial plans.

Which brings us back to the question that gnaws at Putin: what price a military invasion and occupation of the Ukrainian mainland?

The rise of patriotism in Ukraine

The invasion of Crimea has stirred a patriotic nerve among many Ukrainians who previously would not have even imagined volunteering to fight to keep their country free.  Little did the Russian-trained secret service snipers at Maidan appreciate the events they would set in motion when they started killing the young protesters.  With Ukrainians willing to sacrifice their lives for freedom, there is a very good chance of an insurgency rising to battle the Russian invaders.  Such an insurgency would drain the Russian treasury, potentially for many years, just as Western economic sanctions constrain its largesse.

Confusing Russian polls

Russian public opinion on invading Ukraine is conflicted. A recent rally in Moscow protesting the military action against Ukraine drew 50,000 participants. Polls show there is no appetite among the Russian people to start a war with Ukraine — 73% of Russians in a VCIOM poll believe Russia should not interfere in the internal affairs of Ukraine.

Yet for many Russians, seizing Crimea is somehow not perceived as an internal matter but merely as the correction of a historical mistake committed by Khrushchev in 1954. Putin’s popularity for correcting this mistake is riding at an all-time high. According to the Russian polling company Levada it stands at 80%, although I’m not sure what such enthusiasm really means when polling on a dictator in a totalitarian state. But how long would this popularity last once Russians started feeling economic pain from the sanctions and a drawn-out insurgency?

In a demonstration of the utter unreliability of the polling exercise in Russia, the same firm in its latest survey finds that 75% of Russians would support their government’s war against Ukraine and 77% think it’s all Ukraine’s fault.  So what happened to the 73% from the VCIOM poll two months ago who said Russia shouldn’t interfere in Ukrainian affairs?

It’s clear that much of the Russian public has been completely blinded by a massive government propaganda campaign in which most independent Russian news media outlets have been shuttered.  Whatever democratic progress Russia may have made since the fall of the Soviet dictatorship has simply vanished within weeks.

Understanding Putin

While the Obama administration has been publicly coy about providing military aid to Ukraine so as not to escalate tensions, there is little doubt that if an invasion happened, an insurgency would be fed by weaponry from the West for its defensive military operations.  No one need remind Putin what happened in Afghanistan when the mujahedin acquired Stinger missiles from the US.  It was the beginning of the end of the Soviet occupation of Afghanistan.

For all of Putin’s pompous theatrics as the new czar of Russia, portraying his actions as those of a man out of touch with reality is born more of frustration than fact.  His sense of reality may not be one that Chancellor Merkel (Putin is in “another world”) or President Obama agrees with, but it needs to be addressed.  They need to figure out what part of that reality is rhetorical BS, what could be negotiated, and what crosses that proverbial line in the sand.

Sage advice from Jimmy Carter

On the latter point, former President Jimmy Carter in a recent interview offered President Obama some unsolicited but sage advice on how to deal with Putin’s aggressive behavior.  Carter was confronted with a very similar situation when Soviet leader Leonid Brezhnev invaded Afghanistan. To prevent any further military incursions, Carter “sent Brezhnev a direct message that if you go any further, we will take military action and would not exclude any weapons that we have.”  That’s a line in the sand you could trip over.

Putin is acutely aware of the nationalistic sentiments in eastern and southern Ukraine noted earlier.  He knows it won’t be a Crimean cakewalk.  He knows what happened in Afghanistan.  A military invasion of the Ukrainian mainland, while an excellent threat for extracting diplomatic concessions, is simply not an acceptable long-term strategy for Russia.

Unless events in Ukraine go terribly wrong, it is a mistake Putin will not be tempted to make.

This story was originally published on April 7, 2014 at the National NewsWatch site.


Why U.S. polls are better at predicting election results

Why are American polls more successful in predicting election results than Canadian polls?

The question isn’t merely an attention-getter. Soon there will be general elections held in Quebec, Ontario, and nationally. Polls, as usual, will play a big role in how the media cover the elections, and, ultimately, how people vote. If the polls prove to be as inaccurate as they have been recently, the Canadian electoral process is in big trouble.

The latest Canadian blowout was this year’s B.C. provincial election where the polls predicted an easy NDP victory only to see the incumbent Liberals returned to power. Equally memorable were the missed calls in Alberta last year and the federal election in 2011.

In an effort to explain (here and here), pollsters trotted out the usual suspects: response rates, negative ads, voter turnout, sample representativeness, weighting, etc. But it’s clear the exercise was a fishing expedition. There is no consensus on the cause of these failures.

American pollsters, on the other hand, have for the most part managed over a similar time frame to avoid any such humiliations.

In the search for the culprit, no Canadian pollster seriously investigated the possibility that voters mislead polls about their voting intentions, quite possibly without even meaning to do so. And yet there is strongly suggestive evidence for this hypothesis that demands examination.

There’s a good reason why pollsters are not enamoured of this hypothesis. If true, it undermines the legitimacy of many if not all of their prediction polls. How can polls reliably predict when the voting behavior of a significant number of voters contradicts their expressed voting intentions?

For news media, the implications are equally unpleasant. If polls cannot reliably predict election outcomes why do news media continue to run stories about who’s winning the horse race when the prediction tool is broken?

The unpredictability of voting intentions

The evidence of a fundamental disconnect between what the public tells pollsters and what it does in the voting booth comes from a revealing voter intention study by Todd Rogers and Masa Aida at Harvard’s Kennedy School of Government. The study included 29,000 people and compared their responses to six pre-election surveys, mostly from the 2008 U.S. presidential election, with their actual voting behaviour.

Rogers and Aida looked at a question all prediction polls ask: How likely is a respondent to vote in the upcoming election? The question is important because only a little more than half of eligible voters get out and vote.  (In the 2008 presidential election, the turnout was 62 per cent; it dropped to 58 per cent in the 2012 election.) Pollsters use this question among others to help identify those likely to vote and base their election day projection on this group. Voters who indicate they are not likely to vote are excluded from the projection.

The study found among those likely to vote (“almost certain to vote” and “probably”), 14 per cent did not vote. Among those who indicated in the polls they were not likely to vote (“chances are 50-50” and “don’t think I will vote”), 63 per cent were found to have actually voted on election day. This was true regardless of whether the polls were conducted just before election day or earlier.

Also, polls tend to significantly under-represent the number who for one reason or another indicate they are not likely to vote. That’s understandable. Prediction polls aren’t interested in non-voters. But given that many in this group will vote, they should be.

For the polls used in the voter intentions study, the estimate of non-voters was about seven per cent. However, on election day of the 2008 presidential election, non-voters represented 38 per cent of all eligible voters. Based on the data from this study of how respondents actually voted (from both the “likely” and “not likely” groups), non-voters would total less than half of 38 per cent. This is a significant polling bias in favor of those who say they are likely to vote.
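Combining the study’s numbers makes the size of this bias plain. Here is a back-of-the-envelope sketch, assuming the “not likely” group is the roughly seven per cent the polls reported.

```python
# Back-of-the-envelope check of the Rogers and Aida numbers.
likely_share, not_likely_share = 0.93, 0.07  # as estimated by the polls
p_novote_likely = 0.14       # 14% of "likely" voters did not vote
p_novote_not_likely = 0.37   # 63% of "not likely" voters did vote

implied_nonvoters = (likely_share * p_novote_likely
                     + not_likely_share * p_novote_not_likely)
print(f"implied non-voter share: {implied_nonvoters:.0%}")  # ~16%
# Actual 2008 non-voter share: 38%. The implied figure is less than
# half of that, a large bias toward those who say they will vote.
```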

For pollsters, ignoring the voting behaviour of the “not likely” voters and underestimating the size of that group creates fertile ground for inaccurate voting projections.

The conflicted unhappiness of those not likely to vote

The reason? Many who classify themselves as “not likely” voters do so because they are deeply disappointed with the conduct of U.S. politics. Evidence of this comes from a poll of “not likely” voters conducted by USA Today/ Suffolk University during the 2012 U.S. presidential campaign.

The poll found that a majority (59 per cent) complained they weren’t interested in politics because nothing ever gets done: “it’s a bunch of empty promises.” Many (54 per cent) grounded their lack of interest in the belief that politics is corrupt. Their cynicism included not only politicians, but also political institutions like Congress, the Supreme Court and the presidency.

For pollsters, the consequence of this disappointment is manifested by a wholesale rejection of the political candidates queried in their polls. Many expressed no interest in even participating in election polls.

But why do so many among them end up voting on election day?

The USA Today/Suffolk University study found respondents were deeply conflicted on the question of whether to vote or not. Most (79 per cent) felt the federal government played an important part in their lives, and many were bothered by the fact that by not voting, others would select the president for them. A large majority also indicated they would go out and vote for their candidate if they felt their vote would count.

Hence, on election day, whether stimulated by civic-duty guilt or campaign propaganda, many in the “not likely” group cast a ballot. A large bloc of voters the polls reported as unaligned with any candidate turns out, on election day, to be distributed among the candidates.

The consequences of this behaviour can be enormous.

Consequences of voters voting when they said they wouldn’t

If, by luck, the choices on election day of the “not likely to vote” group are similar to those who say they are likely to vote, then the ability of polls to predict election results is not undermined.

But if their preferences are different, the poll prediction model is broken.

For example, many of the dissatisfied, “not likely” voters may be displeased with the performance of the incumbent party but not enough to switch to another party. They show their dissatisfaction by not aligning with any of the candidates polled. On election day, perhaps influenced by the election campaign or choosing the lesser of two evils, they vote in favor of the incumbent party. Seemingly out of nowhere, the incumbent party ends up with a bloc of voters that is not accounted for in any of the polling.

Polls projecting a regime change can be a manifestation of this behaviour—a projected polling defeat turns into an election day incumbent victory.

Alternatively, this dissatisfied group could side with the rival party due to the party’s more persuasive campaign. For the rival party, this could mean the difference between a projected polling defeat or, at best, an uncertain outcome, versus a substantial election day victory.
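A toy calculation illustrates how much leverage this unmeasured bloc can have. All the shares below are invented for illustration; they are not from any poll.

```python
# Toy example: an unmeasured "not likely" bloc flips a projected result.
likely, hidden = 0.84, 0.16               # poll-visible vs. unmeasured voters
poll_incumbent, poll_rival = 0.46, 0.54   # split among likely voters
hid_incumbent, hid_rival = 0.70, 0.30     # hidden bloc breaks to incumbent

incumbent = likely * poll_incumbent + hidden * hid_incumbent
rival = likely * poll_rival + hidden * hid_rival
print(f"incumbent {incumbent:.1%} vs rival {rival:.1%}")
# The poll projected an 8-point loss; the ballots produce a dead heat.
```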

As evidenced by the low election turnouts in Canada, it’s fairly safe to assume Canadians share much of the same disappointment in politics as American voters. Hence the polling biases measured by Rogers and Aida are likely relevant in a Canadian context. In fact, they may be more significant for Canada. For example, polls predicting a modest victory may augur a minority government, whereas a more substantial predicted victory would suggest a majority government. These differences would have significant consequences for the campaign narrative in Canada.

But for pollsters, all this falls completely under the radar. What is on the radar are vote projections that too often are completely at odds with the popular vote on election day. The misfiring election prognostications in Canada are striking demonstrations of this phenomenon. In all three elections, the polling results differed from election day figures by far more than any sampling error estimate could account for.

Canadian polling blowouts

In the case of the B.C. election, the polls were showing the NDP with an eight- to nine-point lead just before election day. However, on election day, the winners were the incumbent Liberals, with a five-point popular vote advantage. The weak Liberal numbers during the campaign were an expression of voter dissatisfaction with the political missteps of the incumbent party, such as the HST flip-flop and other political scandals. On election day, many Liberal voters decided to return to the incumbent fold, making a mockery of the polling predictions.
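For a sense of how far outside sampling error that reversal lies, here is a minimal sketch of the 95 per cent margin of error on a poll lead. The vote shares and the sample size of 800 are illustrative assumptions, not the actual B.C. figures.

```python
import math

def lead_moe(p1, p2, n, z=1.96):
    """95% margin of error on the lead (p1 - p2) in a simple random sample."""
    variance = (p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n
    return z * math.sqrt(variance)

# Illustrative final-poll shares: NDP 45%, Liberals 37%, n = 800.
print(f"±{100 * lead_moe(0.45, 0.37, 800):.1f} points")  # about ±6 points
# The lead flipped by roughly 13-14 points between the final polls and
# the ballots, far beyond what sampling error alone could explain.
```

The 2·p1·p2 covariance term is included because both shares come from the same sample; dropping it would understate the margin on the lead.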

The polls themselves offer evidence in support of this “dissatisfied voter” explanation. Two days after the B.C. election, Ekos Research replicated its pre-election poll and found both the Liberals and the NDP within sampling error of the election day results. Since both polls were methodologically identical, the only difference was the absence of any motivation for voters to mislead the pollster about how they planned to vote.

In Alberta, polls had the Wildrose Party ahead by seven to 10 points, until voters readjusted this figure to a 10-point advantage for the PCs on election day. While the dynamics of the two provincial elections had some significant differences, and the discrepancy could be explained by methodological shortcomings, it is equally plausible that the results were caused by dissatisfied PC voters who temporarily pumped up Wildrose numbers during the campaign by simply withholding their support, only to revert to the incumbent PC party on election day.

In the 2011 federal election, all the polls predicted a minority government, whereas election day delivered a majority Conservative government. It was an enormous embarrassment for the polling industry. Central to this failure was overstating Liberal strength in Ontario. Many traditionally Liberal voters chose to express their dissatisfaction with the party’s campaign by withholding their support, then switching their vote to Conservative candidates by the end of the campaign. These dynamics were in large part invisible to polls for the reasons noted above. They were, however, the difference between a minority and a majority polling prediction.

The accuracy of U.S. prediction polls

While this biasing mechanism can explain the flagrant missed calls in recent Canadian elections, it doesn’t seem to have played any significant role in American elections. Poll predictions came within sampling error in both the 2008 and 2012 presidential elections. They were also fairly accurate in predicting results in the critical swing states that essentially determine the election outcome. This accuracy was instrumental for poll aggregators like Nate Silver, helping him achieve his remarkable predictions of the 2012 presidential electoral college vote as well as the popular vote.

Predicting the winner in 2012 was not easy. The final polls of those likely to vote put Obama ahead of Romney by just 1.6%.  Perhaps more salient, the USA Today/Suffolk University poll of those “not likely” to vote showed that many first-term Obama supporters (44% of that sample, versus 20% who had supported McCain) preferred to sit on the sidelines rather than support Obama for a second term. Obama’s 2012 victory drew 3.5 million fewer supporters than in 2008.

For all the times during the campaign that Romney shot himself in the foot, he still managed to get 47 per cent of the popular vote compared to Obama’s 51 per cent. The strength of GOP support in defeat underlines the depth of partisan divisions among American voters. If only the GOP could have mustered a somewhat more palatable, less gaffe-prone candidate, the presidency was theirs for the taking.  Even now, this realization must make the GOP leadership sick to its stomach.

The importance of party loyalty in voting

The discrepancy between U.S. and Canadian prediction polls raises the question of whether Canadian pollsters are doing something wrong, or whether the Canadian electorate is in some way fundamentally different from American voters.  The results from the Rogers and Aida study strongly suggest it’s the latter.

In a key finding, the study found that the accuracy of voter self-predictions was significantly correlated with the consistency of previous voting behaviour: “people are more accurate when predicting they will behave consistently with their past behavior than when predicting they will behave inconsistently.” In other words, those who had not voted in the past and predicted they would not vote in the upcoming election were more likely not to vote than those who had voted. Similarly, those who had voted in the past and predicted they would vote were more likely to do so than those who had not. Self-predictions became unreliable when they were inconsistent with past behaviour.

Consistency of voter behaviour is greatly influenced by voter loyalty to political parties. Although the roots of party loyalty are complex, factors that strengthen loyalty include partisan political propaganda disseminated through mass media. In the U.S., this propaganda machinery is extremely effective, particularly as a large part of the electorate is poorly informed on important national issues and the issues themselves can be extraordinarily complex. Recent examples that come to mind include the ACA (“Obamacare”) and the Dodd-Frank Act (banking reform). In such circumstances many voters defer to party positions that are communicated by the propaganda machine. This strengthens their reliance on the political parties and increases the likelihood of blindly casting their vote for those parties on election day.

In Canada, while it would be wonderful to suggest that Canadian voters are more politically savvy, the truth is that mass-media political propaganda is significantly less potent, due primarily to more modest election funding than in the U.S. Hence, party loyalty in Canada is not nearly as strong. The happy result (not so much for pollsters) is an electorate more amenable to change than in the U.S.

A good demonstration of such change was the 1993 federal election, in which the ruling Progressive Conservative party was left with only two of the 156 seats it held previously. One cannot even imagine the Republicans or Democrats being left with only two seats in the Senate or House of Representatives as a result of an election. The near disappearance of the Bloc Québécois in the 2011 federal election (dropping from 47 seats, a majority of Québec’s ridings, to only four) was also quite striking.

Clearly, the Canadian electorate has a significant capacity for vote switching between elections. The findings of Rogers and Aida suggest this makes self-prediction questions in polls far less accurate in predicting election day behavior than in the U.S.

Hence, when polls predict a big switch from the party in power to another party, as was the case in B.C., self-prediction is at its least accurate. In these situations, pollsters need to be extremely careful with their prediction calls.

Impact of wrong polls on political journalism

For pollsters, the inconsistency of unhappy voters withholding their voting decisions until election day represents a nightmare scenario. That’s because for consumers of polling data—the public, politicians, pundits, and the media—predicting who will win the horserace is the central enticement of prediction polls. If it turns out that the polls cannot reliably predict who will win, they will have lost their raison d’être.

Among the news media that commission these polls and are financially rewarded with increased subscriber usage, this outcome is awkward, to say the least. How do they justify the expenditures when election day results contradict the polls? Should they apologize to subscribers for stories based on inaccurate statistics?

What about the erosion of public confidence that all of this creates? Public confidence in newspapers has been deteriorating steadily over the past few decades; it now stands at 23%, compared to 51% in 1979. Publishing political analysis based on faulty polls doesn’t help.

More directly, how do pundits and political journalists employed by these media feel about writing stories explaining why Party X is ahead of Party Y, as reported by the polls, when election day results reveal the opposite may have been true?  What confidence can they have in the polling enterprise? And why would any respectable journalist continue using a story source that proves time and again to be unreliable? In frustration, journalists like Tim Harper of the Toronto Star and Andrew Coyne of the National Post have simply decided to throw in the towel and no longer rely on poll predictions.

Unfortunately, that would be throwing the baby out with the bathwater. Even polls that fail to predict offer useful information.

For all their deficiencies, publicly-available election polls are central to our democratic process. We need to know what fellow citizens are thinking on public issues. Polls help reveal this public consensus while at the same time helping to create it. Not being able to predict election outcomes may damage this process, but it doesn’t destroy it.

The reality is that even in the absence of publicly-available opinion polls, there would still be polls. However, they would be secretive, private polls funded by organizations and individuals who would be in a position to use the results to manipulate public opinion for their own interests. This most assuredly would not be in the interests of the public and democracy.

In Canada, everyone who is part of this process has to come to terms with the uncomfortable reality that sometimes polls will lie.

Pollsters need to better understand how and when this happens. Researching this problem and coming up with a solution is perhaps one of the biggest challenges Canadian pollsters have ever faced.

At the same time, political journalists and the media they work for have to factor in the possibility that polls predicting an election outcome may be completely off the mark. Since the media’s influence in constructing an election campaign narrative is substantial, failing to mention this possibility in their stories would be irresponsible and damaging to a fair and balanced election process.

Until such time that pollsters solve the problem of misleading responses, getting it right when analyzing election poll data will be a most challenging and messy undertaking for Canadian journalism.

This article was originally published on December 2, 2013 by the Hill Times.


The Vilification of Québec’s Charter of Values by English Media and Opinion Polls

In its condemnation of Québec’s Charter of Values as an attack on Canada’s religious freedoms, English media have vilified not only Premier Marois and the Parti Québécois, but also millions of Canadians, both in Québec and across English Canada, who happen to agree with its intent. This is the unavoidable conclusion one is led to when examining public opinion polls.

Are millions of Canadians guilty of religious intolerance?

Polls show public support for this legislation embraces about half of all Quebecers (based on aggregating polls from Léger, Forum and SOM), and almost four of every 10 Canadians in English Canada. That translates to about 3.5 million individuals 15 years of age and older in Québec and over 8 million in the rest of Canada. Realistically, there is little doubt that if these millions of Canadians were asked whether they were opposed to religious freedoms, most would vehemently disagree. Clearly, the conclusion that support for the Charter constitutes a de facto expression of religious intolerance is nonsensical.
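As a rough sanity check on those population figures, here is a quick sketch. The 15-and-over population estimates are census-era assumptions for illustration, not numbers from the polls.

```python
# Translate poll percentages into rough population counts (age 15+).
quebec_15plus = 6.7e6          # assumed Québec population, age 15+
rest_canada_15plus = 22.0e6    # assumed rest-of-Canada population, age 15+

support_qc = 0.50              # about half of Quebecers support the Charter
support_roc = 0.38             # almost 4 in 10 in English Canada

print(f"Québec supporters: ~{quebec_15plus * support_qc / 1e6:.1f} million")
print(f"Rest of Canada:    ~{rest_canada_15plus * support_roc / 1e6:.1f} million")
# ~3.4 million and ~8.4 million, consistent with the figures above.
```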

The polls themselves affirm that conclusion. When asked if public employees should be fired for wearing prominent religious apparel, a CTV-Ipsos Reid poll found a large majority of Canadians (72% nationally and 69% in Québec) disagreed. Since many of those respondents also support the Charter, this hardly seems like a response indicative of religious intolerance.

Public input in Québec on the Charter confirms what the polls have told us: the plan has popular support. The PQ government revealed that 47% of 26,000 comments were entirely favorable, 21% were favorable with changes, and only 18% were opposed. Clearly, there is something more at play here.

Is it religious intolerance or separation of church and state?

A more plausible conclusion is that, for most, support for the Charter has little to do with religious intolerance.  In fact, its primary inspiration is something far more important and distinctive to the Canadian way of life: the historical principle of the separation of church and state.

For many Canadians the separation of church and state is not as abstract a concept as one might think.  While in Canada the concept does not have the force of law, many Quebecers still remember the dark age of the Duplessis regime and the extraordinary power of the Catholic Church in the affairs of the state. Also, the problems in the Middle East are fresh reminders of what happens when religion enters the domain of politics.  Whether newcomers or native born, most Canadians would prefer to avoid creating a similar situation in Canada.

This somewhat obvious rationale seems to attract little interest among English media critics. Instead, their focus is the allegedly nefarious political machinations of Premier Marois and the PQ, and the fear that the ultimate intent is to win a majority and separate Québec from the rest of Canada.

That’s a big if.

A defense of religious freedoms or an attack on the PQ?

Portraying the Charter as an attack on religious freedoms is simply a convenient way by which to bash the PQ party while feigning concern about religious freedoms.

The recent editorial in the Toronto Star is typical of this duplicity. It describes the proposed Charter as something that “offends basic Canadian decency”, erases “human dignity”, sends “an ugly message that some are less welcome than others”, and “offends the Constitution, Québec’s long tradition of tolerance, and this nation’s deeply held values.”

Really?

Are the millions of Canadians, both French and English, who support the Charter that gullible, so lacking in decency, so insensitive to the human indignity they are causing, and so intolerant of other religions that they could embrace this diabolical creed?  Unless one is totally cynical, this line of reasoning fails the credibility test.

Public opinion polls have not been helpful in clarifying this complex and sensitive issue.  Some of the questions used by pollsters seem intent on portraying Quebecers as religiously intolerant and xenophobic.

Bad poll questions, sloppy media interpretations

An Angus Reid Global poll found that while 64% of Quebecers believe they are doing too much to accommodate religious and cultural differences, only 17% of the rest of Canada agree with them.  But what do Canadians in English Canada know about religious accommodations in Québec?  Do they know that the Québec government provides funding to a diversity of religious schools, not just Catholic ones as in Ontario? For most, probably not. The meager 17% response is dictated by ignorance, and likely has more to do with the historical antipathy of some in English Canada toward Québec’s equal status as a founding nation. Nevertheless, the bottom-line media message is that Québec doesn’t do enough religious accommodation and is therefore intolerant of religious groups.

A Forum Research poll found 43% of Quebecers were “uncomfortable being served or attended to by someone wearing a turban, a hijab or a yarmulke in a public sector office or setting such as a school or a hospital”. The wording of the question makes a proper response to the variety of combinations impossible. For starters, what does “uncomfortable” mean? Is it a euphemism for religious intolerance, or an expression of concern about the imbalance between church and state in the public arena?  What if a respondent is comfortable with a yarmulke or turban, but uncomfortable with a niqab because it masks the face (except for the slit across the eyes)?  What if a respondent is comfortable with all forms of religious apparel in a hospital setting, but not in schools or government offices?  Clearly, it is a poorly designed question, woefully ambiguous in meaning and interpretation.  That said, the most damaging interpretation is one where “uncomfortable” is code for religious intolerance.  That would cast 43% of Quebecers into that undeserved role.

And exploiting religious intolerance was exactly the path a Léger poll took. Under the headline “Poll challenges Quebec’s image of tolerance: Findings dispute PQ stance that province is welcoming to immigrants”, the Montréal Gazette article reported that 46% of Quebecers agree with the statement that Québec society is threatened by the influx of immigrants, and that 40% don’t believe their city is enriched by the diversity of religious groups. Among Francophones these figures are even higher. This, the article concludes, is evidence that the province is not welcoming and tolerant of new arrivals.

That logic is a stretch. There is a much simpler explanation for these data points.

Balancing secular state values with religious values

Many Quebecers feel there is an imbalance between the accommodations of their host society and the religious demands of some immigrant groups. To them, it’s not a matter of religious intolerance; it’s a separation of church and state issue. If they believe that certain immigrant customs based on religious beliefs are intruding upon host customs based on the secular notion of separation of church and state, then of course this group feels threatened by the influx of immigrants and feels that they should modify their customs.  It’s therefore not surprising they would also feel that their community is not enriched by the diversity of religious groups.

To designate this group as intolerant seems excessive and unfair.

Need for fair play in accommodation

But it is also unfair to those for whom wearing religious apparel is important to summarily pass a law making this custom illegal in certain settings.  If, for example, immigrants had been told prior to immigrating to Québec that there would be restrictions on their religious apparel, they might have decided not to come.  Telling immigrants who have since set up their families, homes and careers in Québec that they must now obey this new law or be excluded from a significant swath of Québec society is inconsistent with our Canadian sense of fair play.

In the spirit of fairness, there has to be accommodation on both sides.  But in trying to find common ground for accommodation, it is misleading and morally repugnant for pollsters (in their choice of questions) and English media (in how they interpret these questions) to vilify not only millions of Quebecers, but also Canadians in the other provinces, as religiously intolerant.  Starting with this assumption is not the road to a mutually acceptable accommodation.

This story was originally posted on November 12, 2013 at the Hill Times site under the title “Oleh Iwanyshyn: English media, polls vilifying Quebec’s Charter of Values”.


How Polls Have Demonized Federal Deficits – Part II

In Part I of this article we examined how a highly respected newspaper like the New York Times employs biased polling questions to arrive at the questionable conclusion that the public is in favor of austerity measures to cut deficits.  In Part II, we shed light on why this happens and its consequences for the economy.

The massive scale of the austerity bias in media polls

To be fair to the New York Times and CBS News, they’re not the only organizations running polls with an austerity bias.  Similarly biased poll results have recently been published by Fox News, Pew Research, and Bloomberg. The latest is an ABC News/Washington Post poll showing that most Americans support a 5% cut in overall federal spending, except for the military.

Where lies the inspiration for an austerity bias that seems to have captured the whole polling industry? One need look no further than Washington.

The bipartisan roots of the austerity bias

For years the Republicans have been warning Americans that the federal debt, now at $16 trillion, will lead to hyperinflation and destroy the economy.  Economists have looked everywhere but, just like President Bush’s WMD scare in Iraq, found nothing. Nevertheless, like a vampire, the message refuses to die.

The Republican prescription for the problem is to cut taxes and government programs.  They believe tax cuts will leave more money in the pockets of consumers which will increase demand and cause the economy to grow.

That logic didn’t work in the crisis of 1929 and there is no reason to think it will work in today’s crisis.  What economists are pretty certain will happen is that deficits will get bigger due to decreased tax revenues.

Having lost their tax cutting argument in the Fiscal Cliff deal, the Republican focus has shifted to promoting austerity through program cuts.

The Democrats, on the other hand, while labeled by Republicans as spendthrifts (i.e., pro-stimulus), don’t behave that way at all. Their victory in the Fiscal Cliff standoff was in fact a victory for austerity.

Perhaps more to the point, the limited success of the earlier stimulus program has caused them to doubt its potential for success. There is also a problem with the numbers. The Republican-controlled House makes the likelihood of a new stimulus bill passing through Congress close to zero. Also, it should be noted that during the Grand Bargain negotiations with Republicans, President Obama made explicit his intention to cut spending in the areas of Social Security and Medicare.

Simply put, both parties have come to favor austerity over stimulus.

Consequences of partisan austerity bias on media and public opinion

With neither Party championing stimulus, media stories are pretty well unanimous that the deficits and the debt need to be addressed through some combination of austerity measures.  The precise nature of these measures differs depending on whether they originate from a Republican or a Democratic plan.

If the dominant media message promoted by both parties is one of austerity, it is not surprising that when polled, a majority of the public, whether Republican or Democrat, supports austerity solutions. That’s confirmed in the earlier finding showing majority support for raising the debt ceiling with offsetting spending cuts: Republican support stands at 64%, while Democratic support is at 55%.

The clueless American public

However, the public’s lack of understanding of the deficit and fixation on cutting it seem somewhat irrational given that when the sequester kicked in on March 1, it was the mother of all deficit cuts: $1.2 trillion.

Yet most Americans are barely aware of it.

A Pew Research poll found that 72% of Americans had little awareness of the already legislated sequester cuts (43% said they had heard a little about the automatic cuts, while 29% had heard nothing at all), even though the cuts were scheduled to begin less than two weeks after the poll.  Yet that very same poll reports that 70% of Americans feel it is essential that the President and Congress act on legislation to reduce the deficit this year.

If most of the public aren’t aware of these legislated cuts, yet insist that the legislation to reduce the deficit is “essential”, isn’t the problem one of communication and not the deficit?

The public’s confusion on these matters is even more apparent in its contradictory stance of wanting the deficit reduced while preserving funding for popular government programs.

Wanting cuts yet wanting it all

A follow-up  Pew Research poll found that 87% of Americans were in favor of increasing spending on Social Security or at least maintaining it at its present level. For Medicare, the corresponding figure was 82%, while for military spending it was 73%.

The inconvenient truth, however, is that if program cuts are to have any impact on the deficit, these are the programs that need to be cut.  The polls clearly show there is no public appetite for this.

If the public is insisting on program cuts to reduce the deficit but doesn’t want cuts to its favorite programs, what is it saying?

The failure of public opinion polling

The polls offer no resolution to this quandary.

What is clear, however, is that in addition to being biased in favor of austerity, public opinion on deficits is neither informed, nor even coherent.

Yet we know that informing the public of the facts can create massive changes in poll results.

This represents a serious failure in public opinion polling.

In a democracy, polling results have consequences.  They structure political debate.  They influence government policy.  Elections are won and lost on the basis of poll results.

In the case of deficits, polls seem to support the conclusion that the public favors austerity measures in deficit reduction.  If they’re wrong, they may result in government decisions that damage the economy, increase unemployment, and create a lot of unnecessary misery.

This is not an idle speculation.

Consequences of austerity in Europe

Austerity policies have fared poorly in Europe, exacerbating the economic problems in many of the countries (unemployment in Spain 25%; in Greece 20%) rather than improving them.  The IMF, having helped push the EU towards austerity, recently came to admit it is not working.  In the UK, austerity caused the economy to fall into another recession.

Given such disappointing results from Europe, America needs to seriously question whether austerity is the direction in which it wants to take its fragile economy.

This was clearly on Ben Bernanke’s mind when he recently testified before Congress on the state of the economy.  Translating from Fed-speak, he warned that the “deficit is not a clear and present danger, spending cuts in a depressed economy are a terrible idea and premature austerity doesn’t make sense even in budgetary terms”.

If success in doing good polling comes with baby steps, a good place to start is with polling questions that are free of bias and yield meaningful and credible answers.

This article was originally posted on March 9, 2013 in iPolitics under the title “How polls demonized federal deficits (Part Two)”.


How Polls Have Demonized Federal Deficits – Part I

Recent polls suggest that a majority of Americans have bought the austerity argument that deficits are bad for the economy and need to be eliminated.  However, a closer look reveals this conclusion is unwarranted. It appears the poll questions are biased: they offer respondents no legitimate option of eliminating deficits through a stimulus program that creates jobs.

Austerity options lead to austerity choices

The New York Times/CBS News poll from January of this year is representative of this bias.

It asks:  “Overall, what do you think is the best way to reduce the federal budget deficit — by cutting federal spending, by increasing taxes, or by a combination of both?”.  A majority, 61%, support the combination of cutting spending and increasing taxes, while 33% cite federal spending cuts.

Only 3% support tax increases, which is understandable given the poll was conducted after the Fiscal Cliff agreement that increased taxes on everyone.

While the question provides deficit reduction options that focus on austerity (spending cuts, tax increases), it offers a respondent no options for deficit reduction by way of a stimulus approach (e.g., job creation to rebuild a crumbling national infrastructure).  It is therefore not surprising that a question offering only austerity options will yield austerity conclusions.

The stimulus option

But it is a stimulus approach, persuasively argued by many economists including Nobel Prize winners Paul Krugman and Joseph Stiglitz, that offers the best chance of reviving an economy that has been struggling since 2008.

While the argument that deficits can be reduced by stimulus spending, and hence by temporarily increasing them, seems counterintuitive to the general public, it rests on fairly solid empirical foundations and the insightful economic ideas of John Maynard Keynes.

These ideas were spawned by the Great Depression that began in 1929.

Keynes argued that during a recession or depression, investment opportunities dim and private capital recedes from the marketplace. This results in reduced employment, which of course reduces consumer spending.  For governments this means reduced tax revenues at the same time as increased social safety net expenditures like unemployment insurance benefits, food stamps, and welfare assistance.

In a crisis like this, Keynes argued, the absence of private capital needs to be offset by public capital, i.e., government stimulus spending.

Government stimulus is counterintuitive

This is counterintuitive since most ordinary folk responsible for the household budget would conclude that with less revenue, household spending should go down. A household needs to live within its budget.

That was in fact the “intuitive” reaction of government policy after the crash of 1929.

We know only too well how gravely this policy impoverished a huge swath of America for many years afterwards. Everything went into austerity mode until President Franklin Delano Roosevelt began to resuscitate the nation with his stimulative New Deal.  This included massive government infrastructure projects (e.g., the Hoover Dam) that created jobs, increased demand and brought private capital back into the economy.

The economic destruction caused by inappropriate austerity policies was so great, it wasn’t until the onset of World War II that the American economy finally established a solid footing.

Critics of stimulus have argued that President Obama has tried it and failed.

The inadequacy of the Obama stimulus

While it is true that an $800 billion stimulus package was passed by Congress in 2009, Paul Krugman has argued from the very beginning that the amount was insufficient given the massiveness of the 2008 meltdown.

To be fair, the program did save many government jobs that would otherwise have disappeared.  However, the stimulus was insufficient to create enough new jobs.

The Great Recession of 2008 resulted in the loss of nearly 9 million jobs.  Since then the weakened economy has only managed to recover about 5 million jobs.  Nearly a quarter of the workforce reported being laid off as a result of the recession.  The unemployment rate still hovers around 8% and 23 million Americans are either unemployed or underemployed.  The economy can’t even manage to create a sufficient number of jobs to absorb the growth in population.

The austerity bias tars the debt ceiling question

The austerity bias in the New York Times/CBS News poll was not limited to the framing of the deficit reduction question. It was replicated in a follow-up question related to the debt ceiling.

Responses showed that 60% favored raising the debt ceiling “but only with the condition that the government also cuts spending to offset it”.  Raising the debt ceiling without conditions was favored by 17%, and not raising it under any conditions by 18%.

However, since it’s feasible to attack deficits and the debt successfully through a stimulus approach, conditioning the increase on spending cuts alone presents respondents with a false choice.

An easy fix

The bias could readily be corrected simply by changing the austerity wording to something more neutral e.g., “the debt ceiling should be raised but only with the condition that the government do something to reduce the deficits”.  That “something” would and rightly should be left to the Congress and the President to figure out, but it leaves the door open for stimulus to be that something.

Why the New York Times poll bias with Krugman on board?

What is truly perplexing about this austerity bias in the New York Times poll is why the newspaper that commissioned it would not seek advice on framing these critical questions from Krugman, who writes for the same paper. This is one man who understands the consequential differences between austerity and stimulus. His blog contains numerous posts on their respective impacts on deficits.

So what’s the explanation for the New York Times asking such biased questions on deficits?  As with many conundrums that confound America, the answer lies with its political leaders.

Part II of this article explains how this happens and the consequences of eliciting fake public opinion on the deficits.

This article was originally posted on March 8, 2013 in iPolitics under the title “How polls demonized deficits in the U.S.”.


These eleven articles, originally published on iPolitics, will be reposted on this blog with section headings consistent with the earlier posts. In the meantime you can access the complete articles by clicking on the titles. Thanks for your patience.

How polls demonized federal deficits (Part Two)

By Oleh Iwanyshyn | Mar 9, 2013 10:13 am

In part one of this article we examined how a highly respected newspaper like the New York Times employed biased polling questions to arrive at the questionable conclusion that the American public is in favor of austerity measures to cut deficits. In part two, we shed light on why this happens and its consequences for the U.S. economy….

How polls demonized deficits in the U.S. (Part One)

By Oleh Iwanyshyn | Mar 8, 2013 8:53 pm

Recent polls suggest that most Americans have bought the austerity argument that deficits are bad for the economy and need to be eliminated.  However, a closer look reveals this conclusion is unwarranted. It appears the poll questions are biased because they don’t provide respondents with a legitimate option on how deficits could be eliminated through a stimulus program to create jobs….

Apocalypse not: The fiscal cliff, Washington and the polls

By Oleh Iwanyshyn | Dec 26, 2012 9:00 pm

Public opinion on the so-called ‘fiscal cliff’ offers yet another sorry demonstration of how political and media interests have conspired to create a crisis atmosphere for something potentially far less traumatic….

The CBC will run out of money before excuses

By Oleh Iwanyshyn | Dec 8, 2012 5:00 am

Some weeks ago I attended a memorial for Jim Murray.  For many years Jim was the executive producer of the CBC’s The Nature of Things with David Suzuki.  The show won numerous awards….

Polling the third presidential debate: Republican bias and attacking lies

By Oleh Iwanyshyn | Oct 26, 2012 2:06 pm

The CNN/ORC poll of those who watched the third presidential debate again understated the degree to which Obama dominated his Republican adversary. As was the case in the second debate, the reason for this was an overrepresentation of Republicans in the sample that was selected….

The true measure of Obama’s victory in the second debate

By Oleh Iwanyshyn | Oct 21, 2012 5:16 am

Public opinion polls giving President Obama a modest and indecisive advantage over Governor Romney in the second debate may have substantially underestimated the strength of his debate performance. The cause of this distortion can be tied to the degree to which there was significant overrepresentation of Republican voters in the poll samples…

What Bill Clinton and the polls say about politics in America

By Oleh Iwanyshyn | Sep 19, 2012 4:56 am

Without resorting to over-the-top partisan vilification, Clinton’s DNC speech made it apparent that Republican policies could not achieve their goals as they pertain to deficit reduction, reducing taxes, and sustaining …

How America was robbed of its voice

By Oleh Iwanyshyn | Aug 9, 2012 4:31 am

Associating partisan phraseology with Republican or Democratic positions, cues respondents to readily answer poll questions on complex issues. For pollsters, the partisan propaganda process yields a high level of response …

How the polls helped the PCs win in Alberta

By Oleh Iwanyshyn | May 2, 2012 5:01 am

While the polls were dead wrong in predicting a Wildrose majority, they were critical in helping to craft the Progressive Conservative victory.

Déja vu: the manipulation of U.S. public opinion in favour of war

By Oleh Iwanyshyn | Apr 4, 2012 4:47 am

Why are American media giants like the New York Times and CBS News disseminating polling results that are fanning the flames of war against Iran?

Propagating the myth of a “Divided America”

By Oleh Iwanyshyn | Mar 6, 2012 5:01 am

It can be argued that the biggest problem America faces is not the deficit in its budget. It’s the deficit in truth and trust of its government institutions. Yet this …


Pollsters use leading questions to manipulate the uninformed

While critics have admonished pollsters and the press for disseminating inaccurate portrayals of public opinion on important national issues, they have generally ignored perhaps the most serious and intractable problem in this endeavour — a poorly informed American public.

Polling ill-informed Americans

Poll after poll has shown that on complex issues like health care reform and raising the debt ceiling, many Americans freely admit that they have insufficient understanding of the issues. Estimates of the number of ill-informed range from about 40 per cent to over 70 per cent of the population.

Their opinions, because they are unformed, are easily manipulated. Whatever views they do hold are ephemeral and subject to change with new information. The evidence suggests the magnitude of this problem readily dwarfs statistical uncertainties of polls as characterized by confidence intervals (e.g., this percentage is accurate to ± 2.5 per cent 19 times out of 20).
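
To put that convention in concrete terms, “accurate to ± 2.5 per cent 19 times out of 20” is just the 95 per cent confidence interval for a sample proportion. A minimal sketch, assuming simple random sampling and a hypothetical poll of 1,500 respondents (the formula, not the figures, is the point):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion; z=1.96 is the
    '19 times out of 20' (95% confidence) multiplier."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical national poll of 1,500 respondents at a 50/50 split:
print(f"+/- {margin_of_error(1500):.1%}")  # +/- 2.5%
```

A sampling error of two or three points is trivial next to the 40 to over 70 per cent of respondents who admit they don’t understand the issue.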

And yet pollsters persist in asking their opinions regardless.

Why are pollsters so persistent? Why don’t they simply classify these respondents as having ‘no opinion’?

Pollsters need answers

Because when pollsters are commissioned to poll public opinion (often by press organizations), they need to deliver the goods. Reporting that half the public has no idea about the particulars of health care reform or the debt ceiling is not the headline the press has paid for.

This leaves pollsters with a dilemma. How do they get respondents to proffer an opinion on some topic when they really haven’t thought much about it?

Use of leading questions

They do what good litigators do: they ask leading questions, where the answer is embedded in the wording of the question itself.

How do they manage that?

The leading or suggestive wording comes from intensive and broad-based media coverage of the political debate in Washington on the topic in question. Media reports, especially on TV, still carry a good deal of credibility among Americans, and the political messages they relay from the partisan debate in Congress tend to find their mark among the public.

This media bombardment conditions ill-informed Americans to absorb the partisan associations that flow out of the debate.

So, for example, a Republican-leaning respondent associates “raising the debt ceiling” with “oppose” and “increased debt”, and with “spending cuts” as the price of passing the legislation. A Democratic-leaning respondent associates “raising the debt ceiling” with “support”, and with “government default” and “financial chaos” if it does not happen. For respondents who know next to nothing about this complex topic, that’s all they need to know to answer the poll questions.

Leading questions mislead

But clearly these simplistic associations trivialize the complexity of the issue. Spending cuts would push up an already high unemployment rate. Most Americans, be they Democrats or Republicans, regard high unemployment as far more serious and immediate than debt reduction. How would that influence their support for or opposition to raising the debt ceiling?

Yet another complication stems from the historical perspective. There have been many previous debt ceiling increases, including seven during the previous Bush presidency. These were never linked to legislated spending cuts. So why now?

For respondents who know little about the subject matter, finding out about the economic impact or historical pattern of debt ceiling legislation could have significantly influenced them in favor of raising the ceiling.

Explanatory preambles

When polling a complex national issue, it is really important that pollsters include questions with a preamble explaining some aspect of that complexity before asking respondents whether they favor or oppose the issue being polled. These preambles can significantly alter the level of public support for a complex issue that for many is opaque.

During the health care debate some polls found that using this type of preamble question increased public support for Mr. Obama’s healthcare reforms. According to a NBC/Wall Street Journal poll, without a preamble explaining the reforms, only 36 per cent of Americans felt Mr. Obama’s plan was a good idea. With a preamble, 56 per cent said they were in favor of it.

In spite of these merits, such questions are not popular with pollsters because the extra verbiage makes them more demanding for respondents and can depress survey response rates. Pollsters also have to word the preamble carefully so as not to bias responses for or against the issue being polled.

Open-ended questions

Another strategy to consider when polling complex issues is to abandon the traditional structure of leading questions and pre-formatted responses altogether, and rely on open-ended questions that do not use leading phraseology to prompt answers. In effect, let respondents answer in their own words. This approach was taken by Gallup in one of its surveys. It yielded results that differed markedly from polls using pre-formatted responses.

For example, on the provision in the health care reform legislation requiring all Americans who did not have health insurance to get it, a CNN poll using pre-formatted responses showed that 53 per cent of the public opposed it. Without prompting, the Gallup survey revealed that only 5 per cent were opposed to this provision.

Pollsters and the press do not favor these open questions for a number of reasons. One has to do with the logistics of converting polling data into news stories. Each response has to be manually categorized and coded. For today’s instantaneous news cycle this process is simply too slow and cumbersome.

Secondly, the coding scheme is subject to some degree of uncertainty due to coding decisions made by humans. However, these errors can be kept to a minimum through independent coding verification.

Thirdly and perhaps most importantly, it robs the press of its headlines. Without recourse to the prompting effect of pre-formatted responses, the open-ended responses tend to be scattered across a number of distinct categories, making it difficult to summarize them in a simple, catchy headline.

Focusing on informed respondents

Yet another approach might be to simply compare poll findings for those who admit they are ill-informed with those who claim they are sufficiently informed. The problem is that regardless of whether there are differences or similarities between the two groups, the results may be inconclusive due to the influence of other variables and factors; a more complex statistical analysis would be needed. There is also the problem of respondents’ self-assessment: one person’s standard for being well-informed could be another’s for being ill-informed.

A more relevant and accurate comparison might be one based on behaviour — whether a respondent votes in elections. Comparisons between voters and nonvoters can test the hypothesis that those who vote have a greater familiarity with important national issues. Comparisons can also reveal if support for an issue is substantially different between voters and nonvoters. The rationale behind this approach is similar to that of polls predicting election results by trying to identify those most likely to vote on election day.

To put it another way, if a segment of the population has such a low sense of civic responsibility that it doesn’t vote, does it really matter what its opinion is on some complex national issue? Ultimately the political establishment will be judged on its performance by those who vote, not those who don’t. It therefore makes sense for the press to focus on public opinion that is politically meaningful, rather than confound it with the possibly very different opinions of those who don’t vote and politically are not players.

Sacrificing accuracy for excitement and bottom line

All these strategies place a greater responsibility on the press and pollsters to do a much better job of identifying legitimate public opinion, particularly when a substantial segment of the population professes ignorance of a complex national issue. Rather than cherry-pick questions that create an exciting narrative and coincidentally help sell papers, the press needs to make sure it has its public opinion facts right. Getting the facts right should also be the focus for pollsters. Instead, their focus is on technological innovations like robocalling and flash Internet surveys from respondent pools, the primary purpose of which seems to be to improve their bottom line.

The current state of affairs in polling public opinion in America is a disservice to the country. Ill-informed Americans are being influenced by self-serving media stories and simplistic, suggestive polling questions to proffer answers that do not reflect their true opinion on the issues. The proliferation of inaccurate public opinion is destructive to national dialogue. Ultimately, it isolates Americans from each other, advancing social division over common purpose.

This article was originally published by iPolitics on October 26, 2011.


Searching for Accuracy in Election Predictions: More Regulation of Polls or More Competition?

Lately, contradictory election polls seem to be as common as contradictory political messages during election campaigns.

Contradictory polls

The recent provincial election in Ontario is a good example.  During the campaign, poll results were all over the map.  In one instance, an Abacus Data poll published by Sun Media on September 15 showed the Progressive Conservatives 9% ahead of the Liberals, while a few days earlier, on Friday, September 9, a Harris/Decima poll published by the Globe and Mail showed the Liberals 11% ahead of the Progressive Conservatives.  These massive differences cannot be explained by sampling error.
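
That last claim is easy to check. The sampling error of the gap between two independent polls can be computed directly; here is a sketch with hypothetical vote shares and assumed sample sizes of 1,000 each (the real polls’ samples may differ):

```python
import math

def moe_of_difference(p1: float, n1: int, p2: float, n2: int,
                      z: float = 1.96) -> float:
    """95% margin of error for the difference between the same party's
    share as measured in two independent polls."""
    return z * math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# Hypothetical: one poll puts a party at 38%, the other at 30%,
# each with an assumed sample of 1,000 respondents.
print(f"+/- {moe_of_difference(0.38, 1000, 0.30, 1000):.1%}")  # +/- 4.1%
```

Even an eight-point swing in one party’s share falls outside that bound; a twenty-point reversal of the lead is far beyond anything sampling error can produce.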

While the public expects contradictory political messages, contradictory polls are unsettling.  Aren’t polls supposed to be a scientifically accurate measurement of the popularity of candidates?  With their ubiquitous reference to the 95% confidence interval, accuracy is a polling virtue the industry assiduously fosters.

So if polls conflict, the public naturally wonders which ones are right and which ones are wrong.

Or are any of the polls correct?

In polling the 2011 Canadian Federal election, no poll predicted a majority Conservative government.

These conflicts tend to undermine the trust that the public has in prediction polls, and, by association, any media accounts referencing such polls.

Concerned about the damage this may cause to the election process and the credibility of the political debate, some have called for the regulation of the polling industry to ensure greater accuracy and consistency among polls.

Regulating polls to improve accuracy

Regulating polls would require some sort of agreement on standards for the different methodologies used by pollsters.  Unfortunately, all of the commonly used methodologies have substantial deficiencies that impact their accuracy.

  • Household telephone polls miss many respondents who have cell phones but no household telephone.  The latter group is estimated at between 20 and 25% of the public and is growing.  Household telephone polls need to be, and occasionally are, supplemented by a representative subsample of cell-phone-only users.  For the longest time this methodology has been the gold standard for election polls.
  • Robo polls have a high rate of nonresponse because, well… people don’t like talking to robots.  This raises the question of whom the respondents actually represent.  Playing loosely with the laws of probability produces polls based on what should more correctly be called quota samples rather than probability samples.  In the absence of a probability sample, concepts like sampling error used to ascertain accuracy become relatively meaningless.  The undisputed virtue of this methodology is its low cost.
  • Internet polls are based on massive respondent pools that are supposed to represent the entire voter population.  The problem is that many in a pool are not randomly selected but volunteer simply to make a buck (self-selection).  This undermines the representativeness of the pool in relation to the population.  Representativeness of Internet polls is also limited by a lack of Internet access for approximately one third of the population.  The undisputed virtue of this methodology is its quick turnaround.  Results can be produced overnight.

Weighting results for better accuracy

However, the irony here is that even if a methodology, due to its flaws, badly misrepresents the target population of voters, pollsters can still achieve very accurate predictions by weighting the raw data.

The weighting algorithm is generally regarded by pollsters as a trade secret.  Built on years of empirical evidence about how best to allocate the undecideds among the competing candidates, and how to create an index estimating the probability that a respondent will actually vote on election day, the weighting algorithm can turn polling dross into polling gold.

The actual algorithm can be quite complex.  Since nearly half of all eligible voters don’t vote on election day, pollsters in essence have to figure out which half of the sample is the half that does.
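
Since the real algorithms are trade secrets, any illustration is necessarily a guess. Here is a deliberately crude sketch of the two ingredients described above, a likely-voter score used as a weight and a rule for allocating the undecideds; every name and number in it is hypothetical:

```python
respondents = [
    # (stated preference, voted in last election, says certain to vote)
    ("PC", True, True),
    ("Liberal", True, False),
    ("Undecided", True, True),
    ("PC", False, False),
    ("Liberal", True, True),
]

def turnout_weight(voted_before: bool, certain: bool) -> float:
    """Crude likely-voter index: past behaviour plus stated intention."""
    return (0.6 if voted_before else 0.2) + (0.4 if certain else 0.1)

totals = {}
undecided_weight = 0.0
for pref, voted, certain in respondents:
    w = turnout_weight(voted, certain)
    if pref == "Undecided":
        undecided_weight += w
    else:
        totals[pref] = totals.get(pref, 0.0) + w

# Allocate undecideds proportionally to decided support (one common heuristic).
decided = sum(totals.values())
total = decided + undecided_weight
shares = {party: (w + undecided_weight * w / decided) / total
          for party, w in totals.items()}
print(shares)  # {'PC': 0.433..., 'Liberal': 0.566...} with these toy numbers
```

A real scheme would also rake the sample to census demographics; the point here is only that the same raw numbers can yield very different “predictions” depending on these subjective choices.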

In the recent provincial election in Ontario, the pretenders to the throne, i.e., polls with relatively untested methodologies, managed to get numbers closer to the election day results than polls using the household telephone methodology.

This so aggravated some pollsters employing the more traditional and expensive household telephone methodology that they publicly challenged the rest of the industry over its use of second-rate methodologies.  While there is merit in their criticisms, the rest of the industry has basically shrugged them off as sour grapes.

It may well be that the prediction poll is a very specialized instrument, like a racing thoroughbred, that is good for one thing and one thing only.  The new methodologies and statistical tricks used to make election predictions may make the measurement of other opinions less accurate.  Presently there is insufficient comparative research between these methodologies from which to draw any conclusions.

However, so long as the public’s primary interest in election polls is discovering who’s winning the race, methodological concerns will be given short shrift.  In the prediction business, the ends justify the means.

Scientific versus subjective nature of polling

The truth is, all this hue and cry is simply a consequence of the fact that an election poll is an exercise affected as much by the art of polling as by the science.  The weighting algorithm includes subjective choices on the part of pollsters as to which variables will be instrumental in influencing voting decisions.  The art and the science combine to produce the algorithm.  The algorithm can be thought of as a black box that converts raw polling numbers to published predictions.  It is these black boxes that ultimately separate pollsters into winners and losers.

It follows that if accuracy is as much a function of subjective decisions as the application of scientific statistical principles, then deciding on a set of methodological standards to regulate the industry is next to impossible.

In fact, I would argue regulation is entirely the wrong way to go.

Benefits of competition among pollsters

Rather than focus on regulating polls, democracy would be better served if the number of election polls were increased during campaigns. Even going to the expense of offering polling companies tax breaks as inducements for mounting election polls would be worth considering given the stakes.

Increasing competition between pollsters would be beneficial for a number of reasons.  Most importantly:

  • The increased availability of polls would reduce the influence of outliers e.g., inaccurate, biased, or rogue polls.
  • Media would have a much wider choice of data points by which to formulate their stories on the public’s voting intentions. No doubt the task of writing an accurate account of what the polls say would be more difficult, but the challenge would inspire good journalism.
  • Greater competition would open up fresh thinking among participants, leading to more accurate polling data and a better understanding of what moves Canadians to vote as they do.

While it seems counterintuitive, increasing competition would not lead to chaos.

The reason?

Election polls are a very different animal from other forms of public opinion polling. There is a judgment day with predictions. Election Day results determine which predictions were accurate and which were not. It holds pollsters accountable for these predictions.

In that sense, the competitive process is self-regulating: it ensures the survival of the fittest and limits the proliferation of those who cannot make the grade.

Accuracy matters

It’s easy to dismiss the debate about the need for better accuracy from polls as some kind of petty food fight within the polling community.  After all, there is a mountain of social research revealing little consensus on any direct linkage between election polls and election outcomes.

But that completely misses the point.

Impact on the campaign narrative

Few would argue that when polls are combined with broad and intensive media coverage, they have a strong influence on the development of the public narrative during campaigns. That narrative determines the nature of public debate during election campaigns and influences how the public thinks and votes. The process can be quite complex as there are many factors that interact to produce voting decisions.

That narrative is crafted through the ongoing back-and-forth between polls, media, and the campaigns. Pollsters need to accept their responsibility in this engagement.

Yet in Canada, the polling community is still debating if polls influence electoral outcomes. In doing so, it hides behind the façade of a scientific model that pretends polls are simply a tool that measures public opinion but does not influence its essential nature.

Physics went through this a century ago, when physicists realized that the classical measurement model simply didn’t work. It was replaced by quantum physics, which recognized that the act of measurement can change the essential nature of the thing being measured, an insight popularly associated with the Heisenberg uncertainty principle.

The time is long overdue for polling to make the leap into modernity. Pollsters need to address the complex process of how polling impacts voting intentions and behaviour through media dissemination.

Making sense of puzzling election results

The 2011 Canadian Federal election is a good example of just how complex this process can be.

In that election, the polls missed all the big stories. Pollsters had a hard time explaining why the NDP caught fire in Québec and decimated the Bloc Québécois. They also completely missed the magnitude of the Liberal collapse in Ontario that gave the Conservatives their majority government.

When respondents lie

Explanations for these politically seismic events can be found if one allows for the possibility that polls, through the agency of media, can influence voter intentions, and that voters don’t always reveal these intentions to pollsters.

These social dynamics suggest that the NDP wave would never have happened if the polls were not reporting its surge through various media during the campaign.

They also explain how the polls missed the depth of voter disaffection with the Liberals in Ontario. Many Liberal voters in traditional Liberal strongholds secretly rebelled but kept their decision to vote Conservative to themselves to avoid disapproval from those around them.  It’s not unlike the situation in Italy where polls consistently underreported Berlusconi’s popularity because many respondents were “too embarrassed” to publicly admit their approval of him.

In failing to predict the outcome of the 2011 Federal election, the polls revealed that they were not only measuring the effectiveness of the political campaign, they had become part of it.  And as part of it, they failed to fully factor in their role, with the help of media, in influencing voter opinion.

Lastly, having made the argument for greater competition, I hardly need to make one for greater transparency.

The press and the public are entitled to a comprehensive disclosure of methodological practices with published surveys. A fair bit of this is in place already but it could stand improvement, particularly in the areas of sample coverage as it relates to different methodologies, the differences in response rates between these methodologies, weighted versus unweighted marginal counts, the exact wording of poll questions, and a complete list of poll sponsors.

However, while such data may be useful to the trained eye in revealing methodological concerns, they offer no guarantee for revealing the accuracy of predictions.

Role of the press in contributing to poll accuracy

Make no mistake.  As the public narrative develops over the course of a political campaign, the accuracy of election polls cannot be separated from how well the press interprets those polls.

Polls, regardless of how accurate the methodology, cannot survive an inaccurate interpretation in the press, just as the press, regardless of how well intentioned, cannot survive inaccurate polls.  The two are joined at the hip.

In this context, regulating polls to ensure accuracy implies regulating the press for the same end.  It is not only technically impossible to do, it goes against one of the most venerated rights of our democracy — a free press.

Instead of asking if polls should be regulated, the more relevant question is: How can the public debate during political campaigns be improved to more accurately reflect public sentiment?

To that end, regulation would be a poor choice of solutions. The answer lies with more competition, more transparency, and a more discerning media.

This article was originally posted on December 6, 2011 in the iPolitics special feature “78 Ways to fix the way we do Politics” under the title “Should election polls be regulated?”


Can media be trusted to accurately report polls?

Reassurances from pollsters on the accuracy of results are suspect due to an obvious conflict of interest: they’re marketing their product. The press also has a conflict of interest, since media organizations often commission these polls.  Can you remember the last time a media organization questioned the results of a poll it paid for? Even if a news organization has no financial stake in a poll, it usually doesn’t have the technical expertise to independently assess the poll’s accuracy.

Contradictory polls published during the current Ontario election campaign are just the latest example of the problem.  With pollsters criticizing each other’s methodology, the press seems helpless in deciding who’s right and who’s wrong.

Need for verification

Contradictory election polls create an obvious trust issue for the public.  While some polls may be right, others are definitely wrong. But trust issues also arise when published polls are not contradictory, yet the published opinion is at odds with the real opinion held by the public. The public senses the dissonance; the press, confident in the science of polling, has no such doubts. This second situation is much harder to spot and remedy.

Obviously there is a need to somehow verify poll opinions. That’s easier said than done.

Traditionally, the press makes a big deal about independently verifying the accuracy of information in its investigative stories. The same standard of verification is curiously absent in its poll stories.

Why is that?  A poll, simply put, is an aggregated conversation on a common topic involving hundreds of individuals. Like any conversation, it’s easy to misinterpret its meaning.

So how does a media outlet verify (as much as it can) that the conversation was accurately reported? Usually it’s done by checking against results posted by other media outlets. (Of course if there is no consensus, as in the results from Ontario, this method is useless.)

If results are similar, all is well.  In this game, nobody wants to be the outlier. The problem with this approach is its reliance on herd mentality. The poll questions are not too different, the analyses are not too different, and press conclusions are not too different.

This can hardly be held as a reliable standard of independent confirmation of polling accuracy.

Checking for internal consistency

Perhaps more success can be had by stealing a page from what good detectives have been doing from time immemorial to crack their cases – checking for internal consistency.

Polls usually ask a bunch of questions on a topic of interest. So, for example, in America those opposed to raising the debt ceiling may see the national debt as a bigger problem than an economic downturn due to default. But an economic downturn implies a loss of jobs. If the consistency check shows that a lack of jobs is perceived as the bigger problem by most respondents, then finding that many oppose raising the debt ceiling suggests the question was improperly worded or improperly understood. Any definitive conclusion about public opinion on raising the debt ceiling needs to be hedged against this uncertainty.
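
A minimal sketch of what such a consistency check might look like in practice, using hypothetical question names and toy data:

```python
# Hypothetical responses to two questions from the same poll.
responses = [
    {"biggest_problem": "jobs", "raise_ceiling": "oppose"},
    {"biggest_problem": "debt", "raise_ceiling": "oppose"},
    {"biggest_problem": "jobs", "raise_ceiling": "support"},
    {"biggest_problem": "jobs", "raise_ceiling": "oppose"},
]

# A respondent who ranks jobs above the debt, yet opposes raising the
# ceiling (risking default and job losses), is a candidate inconsistency.
inconsistent = [r for r in responses
                if r["biggest_problem"] == "jobs"
                and r["raise_ceiling"] == "oppose"]

rate = len(inconsistent) / len(responses)
print(f"{rate:.0%} of responses are internally in tension")  # 50%
```

A high rate is evidence that the headline question was misworded or misunderstood, and that any conclusion drawn from it should be hedged accordingly.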

When one applies this criterion of internal consistency to the multitude of stories carried by the press about the debt ceiling and the Obama health care reform debates, one comes to the unavoidable conclusion that the press disseminated an inaccurate description of public opinion as measured by the polls.

Example #1 – Raising the debt ceiling

According to the press, polls showed that America was deeply divided on raising the debt ceiling.  Half were opposed and were willing to entertain financial default, while half were in favor in order to prevent a financial calamity. Considering what was reported by other polls, this interpretation simply didn’t make any sense. Whether Republican or Democrat, polls have consistently shown that the top priority for most Americans (much higher than the debt issue) was to increase the availability of jobs.  Debt default would have tanked the economy, contributing to substantial job losses.

A closer examination of the questions asked by the polls showed that the key question was badly framed, forcing respondents into seeming conflict with one another.  More importantly, responses to other questions in the polls revealed that this division was illusory. They showed that a majority of the public wanted the politicians to compromise.

If the press had focused on this message from the start instead of embracing it only at the end of the debate, Republicans would not have been emboldened to hold America hostage to extremist views.  An agreement to raise the ceiling might have been reached in an orderly fashion well ahead of the August 2 deadline.  As it was, the political debacle that ensued influenced S&P’s decision to downgrade the US credit rating.

Its confidence shaken, the US stock market lost over $1 trillion in value in the first week of August.  The damage was not restricted to America.  Worldwide, market losses were estimated at between $2.5 and $4 trillion.

Example #2 – Obama healthcare reforms

In the case of the healthcare reform debate, the press repeatedly cited polls showing that America was deeply divided in its support of the reforms. This conclusion was also highly misleading. A closer examination of other questions asked by the polls showed that a large majority of Americans favoured most of President Obama’s reforms. However, the public did express legitimate doubts about the economic viability of the reforms, and the Congress and the President were certainly not helpful in assuaging these concerns.

The press, in typically simplistic fashion, focused on a question that showed half the country opposed to President Obama’s handling of health care reform and half in favour of it.  In fact, public opposition was driven not by the proposed reforms but by President Obama himself. It was a terrible question. Anything Mr. Obama proposed would have been opposed by alarmed Republican voters, even a health care system based on a Republican blueprint from current Republican presidential candidate Mitt Romney.

Had the press focused on the common concerns of the majority of Americans rather than politically inspired divisions, there’s a good chance politicians would have been pressured by public opinion to behave more constructively. But that didn’t happen. In fact, ideological differences were so inflamed by the bitter debate that Republicans wanted to repeal the legislation after their midterm victory.

Why the press focuses on the wrong questions

So why did the press choose to focus on the wrong question in their stories?

For the most part it was self-interest. Stories showing a public in conflict, divided into warring camps, produce an exciting narrative that attracts readers. Showing that the public holds opinions of broad unanimity that transcend partisan differences is generally not perceived as exciting. When everyone agrees on a subject, where is the conflict? Without conflict, the story is one of unanimity, and for a press looking for a narrative that attracts readers, it’s no contest: conflict trumps unanimity every time.

Consequences of wrong choices

The problem with making the wrong choice, however, is that there are consequences. The public is deceived by the press as to what opinion it actually holds on important national issues.  Hence, instead of being united by opinion showing common purpose, the public is left angry and frustrated by reports of its illusory divisions. Instead of being united against political ineptitude, the public turns on itself.

For a fractured, dysfunctional Congress that revels in division, there could not be a more convenient outcome.  Its members saw their partisan divisions as a mirror of what was happening across the country. It justified their irresponsible behavior.

Responsibility of the press

For all the damage it has caused, the press seems to have no sense of responsibility for its role in this fraudulent exercise. It continues to see itself as simply a conduit of public opinion as discovered by the polls; if there’s any problem, the failure must lie with the polls. The editorial choices the press makes in how it reports on polls seem immune to criticism, even though in its reporting of public opinion the press is actively influencing its essence in a nontrivial manner.  Why does this notion not arouse any journalistic curiosity or suspicion?

Accurate reporting of public opinion by the press is a critical responsibility in a democracy.

The public relies on the press to cut through the partisan propaganda and deliver the facts as they are.  Between elections, the pressure of public opinion is an important lever in getting elected politicians to behave responsibly. That’s why the press should strive for an accurate reflection of this opinion in its stories.  Focusing on ill-conceived poll questions whose primary justification is serving a narrative that sells papers is not the way to go.  Had the press shouldered its responsibility for accurately reporting public opinion, the politicians might have been shamed by the good example of public sentiment into behaving like adults rather than spoiled brats.

As for these wildly different Ontario election polls, we’ll know soon enough where the finger of shame points.

This story was originally posted on the iPolitics site on September 30, 2011.
