Tag Archives: ge2015

Not Just #GE2015: Other 2015 Polling Failures

The failure of pollsters in #GE2015 was covered widely, including in a previous post here, but it was not the only opinion polling failure that year. Polls also failed to predict results by wide margins in the national elections in Israel and Poland.

The history of polling as we know it now is pretty short, dating back only to George Gallup in the US in the 1930s. Gallup successfully predicted Franklin D. Roosevelt’s win in the 1936 election, using a representative sample. The popular magazine The Literary Digest had predicted that Alfred Landon would win by a landslide. Unlike Gallup’s, The Literary Digest’s poll was based on a nonrepresentative sample – the magazine provided postcards for any reader to mail in with their preference.

The methods used by pollsters today vary, but most rely on some kind of automated selection of landline telephones. It is possible for polling firms to call cellphones as well, but it is much more expensive, so few firms do. Of course, as people drop landlines in favor of cellphones, the pool of people responding to polls becomes less and less representative of the general population.

Israel and the Surprise Reelection of Binyamin Netanyahu

“It wasn’t a good night for Israel’s pollsters. The average of pre-election polls showed Binyamin Netanyahu’s Likud party on 21 seats, trailing the centre-left Zionist Union led by Isaac Herzog by four seats,” Alberto Nardelli wrote in The Guardian of Israel’s March 2015 election. Instead, Netanyahu was comfortably reelected. What happened?

Nardelli notes that it could be due to last-minute changes of heart in the electorate…or systematic error in the polls. In Israel polls cannot be published in the four days leading up to the election, and it is possible that voters decided for Netanyahu and his Likud party in those final hours. Avi Degani, a professor at Tel Aviv University and a pollster himself, blamed the poll errors on Internet-based methodologies in an interview with CNN, noting that not all voters in Israel are equally represented online.

Poland and the Unexpected Loss of Bronisław Komorowski

Just as the British Polling Council announced that it intended to carry out an investigation into the failures of opinion polls leading up to #GE2015, the Polish Association of Market and Opinion Research Organizations announced that it too planned to investigate why opinion polls failed to predict the results of the May presidential election. Contrary to predictions that the incumbent Komorowski would be comfortably reelected, the relatively unknown Andrzej Duda won in both the first and second rounds of voting. In an interview with the Associated Press, Miroslawa Grabowska, director of the CBOS polling agency in Warsaw, said that undecided voters feel forced to state a preference when polled and so point to the household name of the sitting president. Jan Kujawski, director of research at Millward Brown in Poland, blamed the fall in the number of households with landlines.
Time will tell if this pattern holds true for upcoming national elections, or if more polling firms improve their results by contacting prospective voters through cellphones or other methods.

Image credit to Flickr user Mortimer62. 


GE2015: Polling Problems vs. Information Seeking Biases

By my bedtime on the night of the last UK General Election, May 7 2015, one thing, at least, was clear. No matter who won, the pollsters lost. This is what makes the possibility of the Social Election Prediction Project so exciting – the flaws of traditional polling. Leading up to the election, the polls showed the Conservative Party and the Labour Party neck and neck; yet the Tories went on to win handily. What in the world happened with the polls? In contrast, the Social Election Prediction Project focuses on predicting elections from online information-seeking behavior such as search trends, a source with its own set of biases. How do the biases of this method compare to the issues with the polls leading up to the election?

Problems with Traditional Polling
First, it is important to note that the polls leading up to GE2015 were not just wrong on the whole; they were wrong in consensus. All predicted a tie, or close to it, between the two major parties when the outcome was anything but. Why were the polls all biased in similar ways? A few ideas…

…Because the polls report percentages, not seats. ComRes, for example, in their final predictions set the Conservatives at 35 percent, Labour at 34 percent and UKIP at 12 percent nationally. Of course, due to the first-past-the-post system, voting percentages do not translate neatly to seats in the House of Commons.

…Because of how the polls ask about specific constituency preference. FiveThirtyEight noted that their model would have been far more accurate if they used a more “generic” question about party preference, as opposed to the question they used regarding preference for candidate in the respondent’s specific constituency.

…Because voters changed their minds. Peter Kellner, the President of the polling firm YouGov insinuated as much in an interview with The Telegraph.

…Because of the effect of earlier polls showing the strength of the SNP. Early polls showed the SNP gaining in strength, and as the New York Times noted, later in the election the “Conservatives had adroitly exploited fears among voters that the Labour Party would be able to govern only in coalition with the Scottish National Party.”

…Because of the perennial issue of shy Tories? Long a known factor in UK polling, more people will vote Conservative than will declare as much to a pollster.

…Because of the way the poll participants were recruited? Poll respondents are representative by age, sex and social class, but as The Guardian notes, there still might be other divides between people who will respond to an online or telephone poll and those who will not.

…Because of the results of the other polls. Market research firm Survation noted that their final poll predicted a Conservative victory much in line with the final result, but the poll seemed so out of sync with all the others that they declined to publish it. This is not the first time in recent memory that UK opinion polling has been notably inaccurate. In 1992, opinion polls leading up to the election predicted a Labour victory and yet the Conservatives won handily. A group of pollsters convened an inquiry following the embarrassment of the 1992 election and pinned the problem on voters switching late in the election, unrepresentativeness in the people polled, and shy Tories.

Biases in Online Information Seeking
Using data from online information seeking, such as search trends, can remedy some of the above problems. One does not have to adjust for shy Tories, for example, or for the wording of the question, as search trend data constitutes demonstrated, not reported, behavior. Furthermore, search trend data would change as a voter considered new options and so could accommodate strategic voting. However, using online information-seeking data presents its own issues. While the majority of the UK population uses the Internet, according to the 2013 Oxford Internet Survey 22 percent of the population still does not. These people will not be represented at all in search trends.

Furthermore, even UK voters who do use the Internet may not turn to our data sources, such as Wikipedia, when they are looking for information. Yet the Social Election Prediction Project encompasses far more countries than just the UK; the next blog post will discuss how polling practices – and polling reliability – vary around the world.

Image credit to Flickr user ThePictureDrome

The (Local) General Election on Twitter

The UK’s national election is decided on a constituency basis: 650-odd separate small elections, each returning one MP. Despite the obvious importance of national parties and their leaders in shaping the election campaign as a whole, it is commonly accepted that the strength of local campaigns is also a significant factor. For example, the Liberal Democrats are well known for having highly organised local operations in their home constituencies, something which might help them hold onto seats despite their poor performance in national polls. Given this, for those interested in the influence of social media on the election, it’s worth looking not just at nationally relevant hashtags and Twitter accounts, but at how local candidates have been using social media.

In a previous post Taha Yasseri and Stefano de Sabbata looked at the distribution of candidate accounts on Twitter, based on data from YourNextMP. In this post, using the same YNMP data as well as tweets collected by Scott Hale from the Twitter Streaming API over the last month, we look at the actual tweeting activity of candidates. The map below shows the level of activity of each candidate in each British constituency for six UK parties in the month leading up to the election. The scale shows light, medium and heavy users of Twitter.


Level of candidate activity on Twitter

Almost 450,000 tweets were sent by candidates of these six parties in the month leading up to the general election (the Labour party sent over 120,000, the Conservatives and the Green Party sent around 80,000 each, the Liberal Democrats just over 70,000, UKIP just over 60,000 and the SNP just over 15,000).
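As a rough illustration of how such tallies can be computed, here is a minimal Python sketch; the handles and parties are invented, and the light/medium/heavy cut-offs (10 and 100 tweets a month) are assumptions for illustration, not the exact bands used for the map.

```python
from collections import Counter

# Invented sample of collected tweets, each matched to a candidate handle
# and party via the YourNextMP candidate list.
tweets = [
    ("@cand_a", "Labour"), ("@cand_a", "Labour"),
    ("@cand_b", "Conservative"), ("@cand_c", "SNP"),
]

# Total tweets per party and per individual candidate.
party_totals = Counter(party for _, party in tweets)
candidate_totals = Counter(handle for handle, _ in tweets)

def usage_band(n_tweets):
    """Bin a candidate's monthly tweet count into light/medium/heavy;
    the 10 and 100 cut-offs are assumptions for illustration."""
    if n_tweets < 10:
        return "light"
    if n_tweets < 100:
        return "medium"
    return "heavy"

print(party_totals["Labour"])  # 2
print(usage_band(150))         # heavy
```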

Compared to the map which Taha and Stefano produced on account distribution, regional patterns are clearly more apparent in this new one: whereas the major parties have candidates with Twitter accounts almost uniformly across the UK, their level of usage varies a lot. Only the SNP are uniformly heavy users of Twitter: only two of their candidates sent fewer than 10 tweets in this period, and the majority sent more than 100. This also chimes with the fact that they are the party which has, relative to its overall number of candidates, created the most Twitter accounts – clearly they have a very active and organised social media presence.


Candidate activity on Twitter by party

The histograms above show more detail about the level of Twitter activity and how it breaks down between the different parties. The Conservatives, Greens and Labour have broadly similar patterns, with the average candidate having sent around 100 tweets in the last month, whilst a few have sent several thousand. UKIP and the Liberal Democrats show a flatter distribution.

Of course, it’s one thing to tweet, but is anyone else actually listening? More on that soon…

Which parties are having the most impact on Twitter?

The previous two posts have shown that the amount of effort parties are putting in on Twitter at the local level is pretty variable. But what about the response they are getting? In this post we’ll look at the number of mentions candidates receive on Twitter. A mention could be a retweet or a message @ someone – any time the candidate’s name is in there. Data was harvested from Datasift, using the same YourNextMP data for the list of candidate Twitter handles.

In the week before the election candidates were mentioned over one million times. Lots of that activity, it goes without saying, goes to the party leaders: Ed Miliband alone accounts for almost 120,000 of those mentions, with Cameron, Farage, Clegg and Bennett in places 2 – 5. Yet there was also a lot of activity for less nationally famous figures: of the 2,312 candidates in the YourNextMP dataset, only 12 weren’t mentioned even once during that week (and none of those 12 tweeted either).
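A mention count of this kind can be sketched as a simple substring scan over collected tweet text; the handles and tweets below are invented for illustration.

```python
from collections import Counter

# Invented candidate handles (the real list comes from YourNextMP).
handles = ["@ed_miliband", "@david_cameron", "@local_candidate"]

tweets = [
    "Great speech from @Ed_Miliband tonight!",
    "RT @local_candidate: hustings at 7pm",
    "@Ed_Miliband and @David_Cameron clash on the economy",
]

# Count a mention whenever a candidate's handle appears anywhere in a
# tweet's text - covering retweets and @-messages alike - ignoring case.
mentions = Counter()
for tweet in tweets:
    text = tweet.lower()
    for handle in handles:
        if handle in text:
            mentions[handle] += 1

print(mentions["@ed_miliband"])  # 2
```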

Why do some candidates get more attention than others? The most obvious explanation is that some candidates tweet more than others: being active on social media ought to be a way of getting noticed. The image below plots each candidate in the dataset as a point, comparing the number of times they tweeted with the number of times they were mentioned, on a logarithmic scale. The positive relationship is clear.


Twitter mentions of local party candidates

However, within all the points there also seem to be some differences between the parties. The figure below makes the difference clearer by grouping all the candidates into a per-party average. What it shows is that, while for every party writing more tweets tends to get more mentions, some parties have a much better “tweet to mention” ratio than others. In other words, their tweets have on average more impact, and their presence is on average greater. Like the previous one, this graph is on a log scale, meaning that the differences between parties are in orders of magnitude. So, for example, 100 tweets from a Lib Dem candidate would give around 100 mentions; but the same number from an SNP candidate would give over 1,000 mentions.
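The per-party “tweet to mention” ratio can be sketched as follows; the figures are invented, chosen only to mirror the Lib Dem vs. SNP contrast described above.

```python
# Invented per-candidate records: (party, tweets_sent, mentions_received).
candidates = [
    ("SNP", 100, 1200), ("SNP", 50, 700),
    ("Lib Dem", 100, 90), ("Lib Dem", 200, 220),
]

# Average "tweet to mention" ratio per party: mentions received per
# tweet sent, averaged over each party's candidates.
ratios = {}
for party in {p for p, _, _ in candidates}:
    rows = [(t, m) for p, t, m in candidates if p == party]
    ratios[party] = sum(m / t for t, m in rows) / len(rows)

print(ratios["SNP"])               # 13.0
print(round(ratios["Lib Dem"], 2))  # 1.0
```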


Twitter mentions of local party candidates – averaged by party

Broadly speaking, we can see the parties form three groups on social media in terms of outreach: the SNP are clearly in front, Labour and the Conservatives are in the middle, and the Greens and Liberal Democrats are at the back. UKIP are somewhere between the middle and back groups. Interestingly, these relationships hold more or less regardless of the number of tweets sent by the candidate (and the most famous candidates were by no means the most avid tweeters – Miliband, for example, authored only 20 tweets in this period, whilst others authored several hundred).

Summary? Some parties have a lot more impact on social media than others.

NB: Post was updated slightly @ 19.45 to correct a data collection issue – overall conclusions weren’t changed.

Where do people mention candidates on Twitter?

In previous posts we’ve looked at people mentioning local party candidates on Twitter. In those posts we basically assumed that people mentioning local candidates were based in the same constituency as the candidates themselves. But is that the case? It could be that the majority of tweets come from large cities, especially London, where the party machines are typically based.

Candidate Mention Locations

Candidate mention locations on Twitter in the month leading up to the UK General Election 2015

To provide a rough check of this, we looked at all mentions of candidates on Twitter during the last month which had geolocation enabled (usually because they were tweeted through a smartphone). Geolocated tweets are a small fraction of the overall tweets produced (less than 5%); nevertheless, they provide a rough and ready way of checking that our candidate mentions are not all coming from one place.
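The geolocation filter itself is simple; a minimal sketch, assuming tweet records shaped loosely like Twitter API payloads (the records and coordinates here are invented):

```python
# Invented tweet records: only tweets with a coordinates field set
# can be placed on the map.
tweets = [
    {"text": "vote @cand_a!", "coordinates": (51.75, -1.26)},
    {"text": "@cand_b at the hustings", "coordinates": None},
    {"text": "@cand_c canvassing today", "coordinates": None},
]

# Keep only the geolocated subset and note what fraction it represents.
geolocated = [t for t in tweets if t["coordinates"] is not None]
fraction = len(geolocated) / len(tweets)

print(len(geolocated))     # 1
print(round(fraction, 2))  # 0.33
```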

In short, candidate mentions are pretty evenly spread through the country (albeit based on a relatively small amount of data): there is no sense they are concentrated in one part of the country.

Could social media forecast political movements?

GE2015 turned out to be a bad night for some. Beyond the obvious political parties, the reputation of polling firms took a big hit: while the exit poll was more or less in the ballpark, none of the pre-election polls were anywhere near. This, combined with the advance of the SNP, UKIP and Greens, lent the whole election a real “earthquake” feel, with people like David Dimbleby questioning whether politicians would ever take polling seriously again.

Considering the weaknesses of conventional polling, could social media have filled a gap in terms of forecasting the earthquake that was to come? Were people on Twitter in advance of the opinion polls?

The data we produced last night paints a mixed picture. We were able to show that the Liberal Democrats were much weaker than the Tories and Labour on Twitter, whilst the SNP were much stronger; we also showed more Wikipedia interest in the Tories than Labour, both of which chime with the overall results. But a simple summing of mention counts per constituency produces a highly inaccurate picture, to say the least (reproduced below): it generally understates large parties and overstates small ones. And it’s certainly striking that the clearly greater level of effort Labour were putting into Twitter did not translate into electoral success: a warning for campaigns which focus solely on the “online” element.


In terms of prediction, the problem here, of course, is that there are many potential statistics which could be produced from social media, and many potential metrics to predict (from vote shares, to swings, to turnout, etc.). Some of them are bound to be “right” after the fact. In response to this, Taha Yasseri and I have recently written a draft paper trying to produce social election predictions more systematically using Wikipedia data. The main premise is that we need a theory-informed model to drive social media predictions, one based on an understanding of how the data is generated, which hence enables us to correct for certain biases.

How could we apply this reasoning to our Twitter data? Well, one of the suggestions we made last night was that, even though we were sure the Green Party wasn’t going to win the 46 constituencies shown on our Twitter map, perhaps these areas were nevertheless places where the Green vote was going to spike upwards disproportionately (they might, for instance, indicate a highly organised local party machine capable of delivering extra votes). In order to check this, I took results data for the Green Party and UKIP from 50 constituencies in England and Wales (good data tables for the election results still haven’t been released – so I’m limited to what I could quickly collect by hand). The graph below plots the number of percentage points by which each party’s vote increased against the number of Twitter mentions its candidates received in the run-up to the election in each constituency.

Percentage point vote increase vs Twitter Mentions

Overall, the graph shows little apparent correlation for UKIP candidates; Green Party candidates, by contrast, show a rough though by no means perfect positive correlation. In other words, for the Green Party the Twitter mentions have a little predictive power, whereas for UKIP they have none at all. What is more striking is that the points on the graph group clearly into two sections: UKIP increasing more than their mentions would suggest, whilst the reverse is true for the Greens. This highlights one of the major difficulties in making predictions from social media: voters of different parties make different uses of social media, and a predictive model would need to take these differences into account.

Once the results are announced in full, over the next few weeks we will be looking into this in more detail, for all parties, and across a wider range of metrics.

Social Media + Elections: A Recap


From Jonathan Bright and Scott Hale’s blog post on Twitter Use.

In the run-up to the general election we conducted a number of investigations into relative candidate and party use of social media and other online platforms. The site elections.oii.ox.ac.uk has served as our hub for elections-related data analysis. There is much to look over, but this blog post can guide you through.


“What if mentions were votes?” by Jonathan Bright and Scott Hale

“Which parties are having the most impact on Twitter?” by Jonathan Bright and Scott Hale

“The (Local) General Election on Twitter” by Jonathan Bright

“Where do people mention candidates on Twitter?” by Jonathan Bright

Twitter + Wikipedia

“Online presence of the General Election Candidates: Labour Wins Twitter while Tories take Wikipedia” by Taha Yasseri

“Which parties were most read on Wikipedia?” by Jonathan Bright

“Does anyone read Wikipedia around election time?” by Taha Yasseri

Google Trends

“What does it mean to win a debate anyway?: Media Coverage of the Leaders’ Debates vs. Google Search Trends” by Eve Ahearn

Social Media Overall

“Could social media be used to forecast political movements?” by Jonathan Bright

“Social Media are not just for elections” by Helen Margetts