When the success of Trump and Brexit took the experts by surprise, did it a) destroy the credibility of polls; b) prove journalists’ unreliability in reporting the findings of polls; or c) all of the above?
Did pollsters really call the wrong result in the British General Election of 2015 and in the UK EU referendum of 2016? And how did they fare in last year's US Presidential election? Many pollsters predicted a dead heat ('too close to call') in the British general election; either a Remain victory or 'too close to call' in the EU referendum; and a Clinton win or 'too close to call' in the US. As we all know now, the actual results were a Conservative majority, a Leave victory, and a Clinton win of the popular vote but a Trump win in the Electoral College.
So the problem for the polls is not just their accuracy, but the public perception of their accuracy as reported in the media – which is seldom the same thing. Polling companies typically aim to predict each main party's vote share to within a margin of error of three per cent either way, 95 per cent of the time. That means one in 20 polls will legitimately report the wrong result (because the sample, by chance, does not correspond to the population). Pollsters, however, cannot call a result if the leader's and the challenger's vote shares are in close contention, within the margin of error of each other.

For a typical population, a poll needs a sample of around 1,066 randomly selected respondents. Very few pollsters use random methods: they are too expensive for their media clients, too time-consuming to achieve the necessary response rate, and full lists of the population with accurate contact details do not exist. Previous research indicates that quota-based polls (where the sample is matched to the population on a strict set of criteria such as working status, gender and age) can be accurate.
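The figures above follow from the standard formula for a simple random sample: assuming a worst-case 50/50 split and a 95 per cent confidence level (z ≈ 1.96), the required sample size is z²·p(1−p)/e² for a margin of error e. A minimal sketch (the function names here are illustrative, not from any polling standard):

```python
import math

def sample_size(margin=0.03, z=1.96, p=0.5):
    """Respondents needed for a given margin of error at 95% confidence.

    p = 0.5 is the worst case, since p * (1 - p) is largest there.
    """
    return math.ceil(z**2 * p * (1 - p) / margin**2)

def margin_of_error(n, z=1.96, p=0.5):
    """Margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A three-point margin at 95% confidence needs roughly 1,067 respondents,
# matching the figure of around 1,066 quoted above.
print(sample_size(0.03))
print(round(margin_of_error(1067), 4))
```

Note that for large populations the required sample size does not depend on the population size, which is why national polls of very different countries can use samples of similar size.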