What now for polling?

After last year’s election shocks, can we trust polls ever again? And if not, whose fault will that be?

When the success of Trump and Brexit took the experts by surprise, did it a) destroy the credibility of polls; b) prove journalists’ unreliability in reporting the findings of polls; or c) both of the above?

Did pollsters really call the wrong result in the British general election of 2015 and in the UK EU referendum in 2016? And how did they fare in last year’s US presidential election? Many pollsters predicted a dead heat (‘too close to call’) in the British general election; either a Remain victory or too close to call in the EU referendum; and either a Clinton win or too close to call in the US. As we all know now, the actual results were a Conservative majority, a Leave victory, and a Clinton win in the popular vote but a Trump win in the electoral college.

So the problem for the polls is not just their accuracy, but the public perception of their accuracy as reported in the media – which is seldom the same thing. Typically, polling companies aim to estimate each main party’s share of the vote to within a margin of error of three per cent either way, 95 per cent of the time. That means one poll in 20 will legitimately report the ‘wrong’ result, because the sample drawn does not, by chance, correspond to the population. Nor can pollsters call a result when the leader’s and the challenger’s shares of the vote are in close contention, within the margin of error of each other.
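As a rough illustration of where the three per cent figure comes from – a minimal sketch using the textbook normal approximation for a simple random sample, not any particular pollster’s method – the margin of error on a single vote share can be computed like this:

```python
import math

def margin_of_error(share: float, n: int, z: float = 1.96) -> float:
    """Half-width of a 95 per cent confidence interval for a vote share,
    using the normal approximation to a simple random sample."""
    return z * math.sqrt(share * (1 - share) / n)

# Worst case (a 50 per cent share) with a sample of about 1,000 respondents:
print(f"{margin_of_error(0.5, 1000):.1%}")  # ~3.1% -- the familiar three points either way
```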

To achieve that margin of error, a poll needs a sample of around 1,066 randomly selected respondents. Very few pollsters use random methods, because they are too expensive for their media clients, too time-consuming to undertake at the response rate necessary, and because full lists of the population with accurate contact details do not exist. Previous research indicates that quota-based polls (where the sample is matched to the population on a strict set of criteria such as working status, gender and age) can be accurate.
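The 1,066 figure drops out of inverting that margin-of-error formula. A sketch, assuming the worst-case 50 per cent share and a three per cent target at 95 per cent confidence (the exact answer depends on rounding conventions, which is why different textbooks quote 1,066, 1,067 or 1,068):

```python
import math

def required_sample(moe: float, share: float = 0.5, z: float = 1.96) -> int:
    """Smallest simple random sample achieving the target margin of error."""
    return math.ceil((z / moe) ** 2 * share * (1 - share))

print(required_sample(0.03))  # 1068 -- in line with the ~1,066 quoted above
```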

A public stage

National elections are a very public stage for the polling industry, affecting their reputation and standing in the community when it comes to being selected for commercial market research. They are the occasion when researchers have the opportunity to demonstrate the accuracy of their methodologies against an actual outcome. There’s no other situation where market research is held to account in this way – a firm might capture an extensive sample of attitudes to a new product, but it’s not then checked against the views of an entire population on an electoral register.

But just how inaccurate were the 2015 polls? Table 1 shows that, on average, the eight eve-of-election polls underestimated the Conservative lead over Labour by 6.5 percentage points, with an average error of 3.3 points in estimating the two main parties’ shares of the vote – outside the margin of error.

Poll performances in the UK EU referendum were also wide of the result, though the comparison is less fair because many of the polls were taken several days before the referendum, and public opinion may have shifted in the final days as people made up their minds. Table 2 shows that the average error on Remain across the eight polls taken in the week before the referendum was around 3.8 percentage points, again outside the margin of error. Only Opinium and TNS-BMRB had Leave ahead, and then in such close contention that the result was too close to call.

In the US presidential election, the 21 general election polls conducted on the eve of the vote (listed on RealClearPolitics) gave Clinton an average lead of 3.3 percentage points, when her lead in the actual popular vote was 2.1 points (48.2 per cent to 46.1 per cent). This result was within the margin of error, so the pollsters predicted the popular vote correctly. The actual presidential outcome, based on electoral college votes, was a Trump win – but the electoral college is more difficult to predict, because it requires polls to be conducted in every single state.
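One subtlety behind ‘too close to call’: the margin of error on a lead is roughly double the margin on a single share, because sampling errors on the two candidates move in opposite directions. A sketch, again assuming a simple random sample rather than any pollster’s actual weighting scheme:

```python
import math

def lead_margin_of_error(p1: float, p2: float, n: int, z: float = 1.96) -> float:
    """95 per cent margin of error on the lead (p1 - p2) when both shares
    come from the same sample; the two estimates are negatively correlated."""
    var_lead = (p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n
    return z * math.sqrt(var_lead)

# Clinton 48.2% vs Trump 46.1% in a single poll of 1,000 respondents:
print(f"{lead_margin_of_error(0.482, 0.461, 1000):.1%}")  # ~6.0%: a 2-point lead sits well inside it
```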

Typically, it is aggregators and poll watchers, rather than pollsters themselves, who divine the electoral college result. Here too, the electoral college vote was too close to call. If the political landscape continues to produce contests that are too close to call, where parties’ shares of the vote are within the margin of error of each other, then the pressure on pollsters will only increase.

“Elections are a very public stage for the polling industry, affecting their reputation and standing in the community.”

But such expectations of precision and accuracy are unfair. All surveys are imperfect; polls really tell only the popular-vote story, and most elections are subject to the vagaries of electoral systems, which distort the final result. There is no way of excluding the possibility of interviewer or interviewee error. Sampling populations instead of conducting a census will always bring limitations, such as the way telephone polls rule out those without fixed-line phones. Ultimately, polls depend on people responding, and response rates keep falling, increasing selection bias.

In other forms of research, experts can allow for these problems through a level of interpretation – the numbers don’t get to speak for themselves. Witness the confidence intervals that grace the results of clinical trials. This means making use of the experience and judgement of researchers to produce insights into the meaning, implications and limitations of the figures. But polls of public opinion are intended for use by the media – and political parties – and are served up raw for public consumption.

Zero shades of grey

So there is also an issue of how poll results are reported. It could be argued that journalists don’t always understand the complexities of statistics, or how best to report them to avoid misconceptions. But the bigger issue is that journalists either don’t want to – or aren’t able to, given newsroom pressures – report statistical subtlety. For a story to make the cut, it needs to be black and white: no fudging the issue with ifs and buts, otherwise the readers won’t understand.

The British Polling Council and the Market Research Society have made numerous recommendations for change as far as UK pollsters are concerned. These include, among other suggestions, reporting confidence intervals around point estimates for shares of the vote, and paying greater attention to turnout and to the opinions of non-voters. Another might be to use larger sample sizes where the outcomes of contests are so closely fought.

Another effective way to increase public trust in poll results would be to encourage greater understanding of the process of polling among journalists and the general public. Given all the random factors and uncertainties involved, it’s a miracle how accurate polls are most of the time.

A wider question for communication directors is: what does this mean for you? Are the inaccuracies or imprecisions of polling also present in your social media monitoring, market potential, marketing mix modelling, segmentation, stakeholder audit or consumer insight studies? The short answer is that they may well be – especially where a decision rests on a survey sample in a relatively contentious area where opinion is evenly split and closely contested.

That said, there are plenty of other methods out there that don’t use surveys. As ever, a degree of judgement is required in interpreting the validity of the results of any study, regardless of whether it is a poll or not.

Paul Baines

Paul Baines is professor of political marketing at Cranfield School of Management, UK. He has worked on various communication research projects for UK government departments, and operates his own strategic marketing and market research consultancy, Baines Associates Limited.