Only a few days after Brexit, headlines across the media are blaring that the result was a “shock”, that financial traders were “caught off guard”, that “no one expected this”. People seemed to assume that because the government, the City, the Church and nearly every major authority supported Remain, Remain would prevail. This assumption was reinforced by the latest polls and the overwhelming betting odds in favor of Remain. Even David Cameron, likely the best-informed man on the planet regarding Brexit, thought he had a victory at 10pm on the day of the vote.

At Qriously, we used our mobile polling methodology to run a number of polls on the EU referendum, and we were the only ones to show a consistent lead for Leave before the day of the vote and to make an accurate outcome prediction when polls closed at 10pm last Thursday. The best way to describe our method is ‘mobile in-app river sampling’: we intercept smartphone users while they use their usual apps, inviting them to take a short survey without leaving the app.


Here is our first post-mortem analysis of what went wrong and why mobile polling may be more accurate than telephone or even online polls.

How wrong were the polls?

Most voting intention polls published before the day of the vote showed at best an 8-10 point lead for Remain and at worst a very narrow lead for Leave (TNS, Opinium). But had these Leave-favoring pollsters made an outcome prediction, they would likely have predicted a Remain victory or a draw, as the 10-12% of undecided voters were expected to break mostly for Remain (around 55-60% were predicted to pick the status quo option). It’s misleading to say that TNS or Opinium got it right – they just didn’t. Even YouGov’s on-the-day prediction was wrong (off by 4 points overall). After the industry-wide fiasco at the 2015 general election, this could be the final straw for the polling industry, or at least for traditional polling.
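To see why even a nominal Leave lead in the headline figures still translated into a Remain call, here is a small illustration. The numbers below are hypothetical, not any pollster’s actual figures; only the 55-60% allocation of undecideds to the status quo comes from the reasoning above.

```python
# Hypothetical headline figures for a poll showing a narrow Leave lead.
remain, leave, undecided = 44.0, 45.0, 11.0   # illustrative shares, in %

# Assumption from the text: roughly 55-60% of undecideds break for the status quo.
undecided_to_remain = 0.58

remain_final = remain + undecided * undecided_to_remain
leave_final = leave + undecided * (1 - undecided_to_remain)

print(f"Remain {remain_final:.1f}% vs Leave {leave_final:.1f}%")
# -> Remain 50.4% vs Leave 49.6%: the nominal Leave lead becomes a Remain prediction.
```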


As it stands, we still claim to be the only pollster to have accurately and publicly predicted the EU Referendum outcome. And we challenge anyone to prove us wrong!

Why were the polls so wrong?

It’s still a bit early to give a definitive answer, but considering the very sophisticated post-weighting procedures pollsters use, we have to conclude that the raw material they work with (the answers provided by respondents) is simply not good enough. We suspect it comes from unrepresentative people who have the time and inclination to spend 10-20 minutes answering telephone polls, and from paid professional survey takers who make up a significant portion of online panels. In addition, there was very likely a ‘shy voter’ effect among Leave supporters who were not straightforward about their voting and turnout intentions. Most pollsters will tell you that their turnout models were inaccurate or that voters changed their minds at the last minute, but they are unlikely to address the quality of the data they collect in the first place.
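For readers unfamiliar with post-weighting: it typically means re-weighting respondents so the sample matches known population totals on characteristics such as age, region or past vote. The minimal sketch below (post-stratification on a single, assumed age breakdown with toy respondents) is our illustration of the idea, not any pollster’s actual procedure.

```python
# Minimal post-stratification sketch on one variable (age band).
# Targets and respondents are illustrative assumptions, not real data.
population_share = {"18-34": 0.29, "35-54": 0.34, "55+": 0.37}  # assumed targets

respondents = [                       # toy sample, deliberately skewed young
    {"age": "18-34", "vote": "Remain"},
    {"age": "18-34", "vote": "Remain"},
    {"age": "35-54", "vote": "Leave"},
    {"age": "55+",   "vote": "Leave"},
]

sample_share = {g: sum(r["age"] == g for r in respondents) / len(respondents)
                for g in population_share}
weights = {g: population_share[g] / sample_share[g] for g in population_share}

leave_weight = sum(weights[r["age"]] for r in respondents if r["vote"] == "Leave")
total_weight = sum(weights[r["age"]] for r in respondents)
print(f"Unweighted Leave: 50%  |  Weighted Leave: {100 * leave_weight / total_weight:.0f}%")
```

Real pollsters weight on many variables at once (often via raking), but however elaborate the scheme, weights can only correct who you reached, not what respondents were willing to tell you.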

Problem #1: Coverage of the Population

Conventional statistical wisdom indicates that sample size is not an issue (above a certain threshold, usually around 1,000 respondents), provided that the sample is representative. The typical metaphor is a pot of soup. A chef can correctly judge the taste of the entire pot from a single spoonful, provided that the soup is well stirred. The same principle applies to polling – 1,000 people are enough to estimate the actions of millions, provided you get a well-stirred (i.e. representative) group of people.
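The arithmetic behind the 1,000-respondent rule of thumb is the margin of error, which shrinks with the square root of the sample size rather than with the size of the population. A quick sketch, assuming simple random sampling (which real polls only approximate):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 500, 1000, 2000, 10000):
    print(f"n = {n:>6}: +/- {100 * margin_of_error(n):.1f} points")
# n = 1,000 already gives roughly +/- 3 points, and quadrupling the sample only
# halves the error -- which is why representativeness matters far more than raw size.
```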

The first problem for the EU referendum pollsters was coverage – or lack thereof.

Telephone polls rely on calling landlines (and occasionally mobile phones), but there are two issues here. Firstly, not many people have a landline in 2016, and those who do have specific characteristics – they tend to be older and more rural. Secondly, and perhaps more relevant for this referendum, busy people do not have time to answer long political polls. Think about it – if a researcher called you and asked for 20 minutes of your time, could you honestly say ‘yes’? And if you could, what does that say about you? More polite than the average person? Less busy, less rushed, more relaxed? More open to discussions with strangers? Perhaps – more likely to vote Remain? The point is that in an era of very low response rates, telephone polling is no longer representative. The soup isn’t stirred properly anymore.

So, what about online polls? They were generally closer to the result and performed better in this referendum. But there are problems here too. They rely on panels – people who have deliberately signed up to answer surveys on a variety of topics with the expectation of earning extra income. Again, we run into the soup-stirring problem. What sort of person signs up to make money from answering surveys? Someone who needs a little bit of extra cash? Someone who is well-educated enough to know about online surveys as a money-making venture? And again, the personality and attitudes of someone like this are unlikely to be representative of the general population.

Here at Qriously, we don’t face these problems. Thanks to our mobile sampling methodology, we can reach 50% of smartphone users, and since most adults have a smartphone, this equates to around two-thirds of the 18+ population. In addition, because we intercept people who are using one of their apps, we get casual survey takers. Taking our survey does not require any commitment (there is no sign-up), does not take much time (usually under one to two minutes) and is not incentivized. That’s how we get a more representative, diverse sample of the whole population.

Problem #2: The ‘Shy Voter’ Effect (Social Desirability Bias)

Even if you have a well-stirred soup (a large-enough, representative sample of the population), polling relies on the assumption that people are telling you the truth. You need your respondents to be honest, both with you and with themselves, so that you can accurately extrapolate their answers to the entire population you’re interested in. If your respondents don’t give you an accurate answer about what they will do on polling day, either because they honestly believe something but change their mind on the day, or because they’re a little embarrassed and aren’t completely honest with you, your poll is going to be off.

The ‘shy voter’ effect is a well-known issue in political polling, and has popped up a few times recently – most notably in the 2015 general election, where pollsters were wrong largely because they underestimated the Conservative vote. There was a ‘Shy Tory’ effect – people told pollsters (and maybe friends, partners, and other members of their social circle) that they were going to vote Labour or Liberal Democrat, but they actually voted Conservative. This resulted in a Conservative victory that almost no pollster accurately foresaw.

This ‘shy voter’ effect is especially powerful over the telephone, and helps explain why telephone polls underestimated Brexit so badly. People were unwilling to tell pollsters their true intention (to vote Leave), probably because Leave is associated in some regions of Britain with racism and other ugly ideologies such as fascism (e.g. the English Defence League). One of the public faces of the Leave campaign, Nigel Farage, was openly associated with xenophobic messaging, such as the infamous ‘Breaking Point’ poster that showed a stream of Syrian refugees entering Europe. Many people, when asked by a friendly voice on the telephone, indicated that they wanted to vote Remain (as the socially acceptable thing to do) but ticked Leave in the polling booth.

Online polls didn’t show this effect to the same extent, as it’s easier for people to give socially unacceptable answers via an online interface, without the pressure of a human judging their answers. Still, panelists are aware that their responses are recorded and stored, and a small shy-Leave effect could still be found in online polls.

At Qriously, we circumvent this problem as well. Our respondents are completely anonymous – we collect no personally identifiable information about them – and they have no idea who we are. We don’t know their real names, where they live, or anything else about them beyond the fact that they’re using a particular app. This gives respondents the freedom to be as honest as they want with us – which is why we consistently saw a stronger Leave vote than anyone else, and why we successfully predicted a Leave win on the day. We avoid the social desirability bias that plagues telephone polls and online panels.

The bottom line is that current opinion polling methods are broken (especially for predicting election outcomes), and the only way to fix that is to explore new ones. We need to ask questions in a way that most people – real people – want to engage with and feel comfortable enough to share their actual thoughts and opinions. We feel our method is one of the most promising ways of doing that.