
GE2015: Polling Problems vs. Information Seeking Biases

By my bedtime on the night of the last UK General Election, 7 May 2015, one thing, at least, was clear: no matter who won, the pollsters lost. This is what makes the Social Election Prediction Project so exciting – the flaws of traditional polling. Leading up to the election, the polls showed the Conservative Party and the Labour Party neck and neck; yet the Tories went on to win handily. What in the world happened with the polls? In contrast, the Social Election Prediction Project focuses on predicting elections from online information-seeking behaviour such as search trends, a source with its own set of biases. How do the biases of this method compare to the issues with the polls leading up to the election?

Problems with Traditional Polling
First, it is important to note that the polls leading up to GE2015 were not just wrong on the whole; they were wrong in consensus. All predicted a tie, or close to it, between the two major parties, when the outcome was anything but. Why were the polls all biased in similar ways? A few ideas…

…Because the polls report percentages, not seats. ComRes, for example, in its final prediction put the Conservatives at 35 percent, Labour at 34 percent and UKIP at 12 percent nationally. Of course, due to the first-past-the-post system, vote percentages do not translate neatly into seats in the House of Commons.

…Because of how the polls ask about specific constituency preference. FiveThirtyEight noted that its model would have been far more accurate had it used a more “generic” question about party preference, as opposed to the question it did use about preference for a candidate in the respondent’s specific constituency.

…Because voters changed their minds. Peter Kellner, the president of the polling firm YouGov, suggested as much in an interview with The Telegraph.

…Because of the effect of earlier polls showing the strength of the SNP. Early polls showed the SNP gaining strength and, as The New York Times noted, later in the campaign the “Conservatives had adroitly exploited fears among voters that the Labour Party would be able to govern only in coalition with the Scottish National Party.”

…Because of the perennial issue of shy Tories? Long a known factor in UK polling: more people will vote Conservative than will admit as much to a pollster.

…Because of the way the poll participants were recruited? Poll respondents are representative by age, sex and social class, but, as The Guardian notes, there may still be other divides between the people who will respond to an online or telephone poll and those who will not.

…Because of the results of the other polls. The market research firm Survation noted that its final poll predicted a Conservative victory much in line with the actual result, but the poll seemed so out of sync with all the others that the firm declined to publish it. This is not the first time in recent memory that UK opinion polling has been notably inaccurate. In 1992, opinion polls leading up to the election predicted a Labour victory, and yet the Conservatives won handily. A group of pollsters convened an inquiry following the embarrassment of the 1992 election and pinned the problem on voters switching late in the campaign, unrepresentative samples and shy Tories.

Biases in Online Information Seeking
Online information-seeking data, such as search trends, can remedy some of the above problems. One does not have to adjust for shy Tories, for example, or for the wording of the question, as search trend data constitutes demonstrated, not reported, behaviour. Furthermore, search trend data would change as a voter considered new options and so could accommodate strategic voting. However, using online information-seeking data presents its own issues. While the majority of the UK population uses the Internet, according to the 2013 Oxford Internet Survey 22 percent of the population still does not. These people will not be represented at all in search trends.

Furthermore, the UK voters who do use the Internet may not use our data sources, such as Wikipedia, when they look for information. Yet the Social Election Prediction Project encompasses far more countries than just the UK; the next blog post will discuss how polling practices – and polling reliability – vary around the world.

Image credit to Flickr user ThePictureDrome

Ethics of Wikipedia Research

Ethics of Editing

The election results on this Wikipedia page are wrong, I can tell. As we collect data for the Social Election Prediction Project, I am reviewing many a Wikipedia political party page, and every so often I see mistakes. For this project I am checking that each page exists and that it existed before the date of the election, so that a voter could have used it to find political information beforehand. I am not, it should be noted, checking the accuracy of the information. Yet sometimes there are errors that glare. As an occasional Wikipedia editor and a stickler for correcting errors, I feel a strong urge to correct the mistakes I come across. Yet, as an academic looking at these pages in a research context, I am hesitant to alter that which I am studying. What are the ethical boundaries for academics conducting research on Wikipedia?
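As an aside, the existence check itself is easy to automate. Here is a minimal sketch of how it could be done against the public MediaWiki API – an illustration, not the project’s actual collection code – fetching a page’s oldest revision and comparing its timestamp to the election date:

```python
# Minimal sketch: did a Wikipedia page exist before a given election date?
import json
import urllib.parse
import urllib.request
from datetime import datetime, timezone

API = "https://en.wikipedia.org/w/api.php"

def existed_before(title: str, election_date: datetime) -> bool:
    """True if the page had at least one revision before election_date."""
    params = urllib.parse.urlencode({
        "action": "query",
        "prop": "revisions",
        "rvlimit": 1,
        "rvdir": "newer",        # oldest revision first
        "rvprop": "timestamp",
        "titles": title,
        "format": "json",
    })
    req = urllib.request.Request(f"{API}?{params}",
                                 headers={"User-Agent": "sep-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        page = next(iter(json.load(resp)["query"]["pages"].values()))
    if "missing" in page:        # the page does not exist at all
        return False
    created = datetime.fromisoformat(
        page["revisions"][0]["timestamp"].replace("Z", "+00:00"))
    return created < election_date

print(existed_before("Labour Party (UK)",
                     datetime(2015, 5, 7, tzinfo=timezone.utc)))
```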

In 2012, Okoli et al. wrote an overview of scholarship on Wikipedia, a huge and varied field totaling almost 700 articles in peer-reviewed journals in disciplines ranging from computer science to economics to philosophy (Okoli et al., 2012). The Okoli article, titled “The people’s encyclopedia under the gaze of the sages: A systematic review of scholarly research on Wikipedia,” is comprehensive on the subject of all Wikipedia research up to that date, but does not deal extensively with ethics. The ethical issues that are addressed are those linked to the privacy concerns of studying the Wikipedia community. In their article on using wikis for research, Gerald Kane and Robert Fichman note that while all Wikipedia content is freely licensed and so can be used without copyright concerns, researchers should still be cognizant of the privacy of Wikipedia editors (Kane & Fichman, 2009). For example, many of the editors Kane and Fichman interacted with were hesitant to connect their real-world identities with their identities on Wikipedia, and so did not want to conduct conversations through email or any other external platform.

Of course, acting as a part of a community is not always a research taboo. Participatory action research, a method that arose from psychologist Kurt Lewin’s action research, emphasizes collaboration between researchers and the communities at hand. However, while participatory action research could apply to someone editing a Wikipedia article, studying the behaviour of other editors and working with other editors to define the study, Wikipedia editors are not the subjects of the Social Election Prediction Project. The Social Election Prediction Project is a study of Wikipedia as an informational object. The subjects are voters seeking information before an election, and Wikipedia is simply a tool to help us measure their information-seeking behaviour.

The ethical ambiguities of researching Wikipedia are just a symptom of Web 2.0, where everyone is a potential contributor. The same question could be asked of researchers studying Twitter, for example: should they tweet? It depends on the objective of the study. For the Social Election Prediction Project, I have not edited any Wikipedia page that I am looking at for research purposes. While I could not alter the outcome of this specific project, as we are looking at past elections and thus historic page views, improving political Wikipedia pages could, in some small way, lead more people to turn to Wikipedia for political news. However, I will continue to make minor edits to the Wikipedia pages I read in my own time. While not acting as researcher, I can be collaborator and reader both.

Kane, G., & Fichman, R. (2009). The Shoemaker’s Children: Using Wikis for Information Systems Teaching, Research, and Publication. Management Information Systems Quarterly, 33(1).
Okoli, C., Mehdi, M., Mesgari, M., Nielsen, F. Å., & Lanamäki, A. (2012). The People’s Encyclopedia Under the Gaze of the Sages: A Systematic Review of Scholarly Research on Wikipedia. Retrieved from http://papers.ssrn.com/sol3/Papers.cfm?abstract_id=2021326
This post has been cross-posted to the Oxford Internet Institute’s Elections and the Internet blog.

Subjectivity and Data Collection in a “Big Data” Project


“There remains a mistaken belief that qualitative researchers are in the business of interpreting stories and quantitative researchers are in the business of producing facts.” (boyd & Crawford, 2012) The Social Election Prediction Project is once again in the data collection phase, and we’re here to discuss some of the data collection decision points we have encountered thus far – in other words, the subjective side of big data research. This is not to denigrate this type of quantitative research. The benefits of big data for social science research are too numerous to list here, and any reader of this blog is likely more than familiar with them. In the era of big data, human behaviour that was previously only theorized is now observable at scale and quantifiable. This is particularly true for the topic of this project: information-seeking behaviour around elections. While social scientists have long studied voting behaviour, historically they have had to rely on self-reported surveys for signals as to how individuals sought information related to an election.

Now, certain tools such as Wikipedia and Google Trends provide an outside indication of how and when people search for information on political parties and politicians. However, although Wikipedia page views are not self-reported, this does not mean that they are objective. Wikipedia data collection requires the injection of personal interpretation – the typical marker of subjectivity. These decisions tend to fall into two general categories: the problem of individuation and the problem of delimitation.
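Before turning to those categories, it is worth seeing how mechanical the raw signal itself is. Below is a minimal sketch of pulling daily page views for a single article from the public Wikimedia Pageviews REST API; note that this API only covers traffic from mid-2015 onward, so it stands in here as an illustration rather than the project’s actual historic source:

```python
# Minimal sketch: daily page views for one English Wikipedia article.
import json
import urllib.request

def daily_views(article: str, start: str, end: str) -> dict:
    """Map YYYYMMDD dates to view counts, using the Wikimedia Pageviews API."""
    url = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
           f"en.wikipedia/all-access/user/{article}/daily/{start}/{end}")
    req = urllib.request.Request(url, headers={"User-Agent": "sep-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        items = json.load(resp)["items"]
    # Each item carries a timestamp like "2015070100"; keep the date part.
    return {item["timestamp"][:8]: item["views"] for item in items}

print(daily_views("2015_United_Kingdom_general_election", "20150701", "20150707"))
```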

When is something considered a separate entity, and when should it be grouped? The first problem occurs frequently in big data collection. For this project, the question has recurred with party alliances and two-round elections. If we are collecting Wikipedia pages to study information-seeking behaviour related to elections, should we count only views of the page of a party alliance, or views of the individual member parties’ pages as well? This is a problem of individuation: deciding when to consider discrete entities as disparate and when to count them as a single unit. The import of party alliances varies by country, but big data collection necessitates uniformity for the analysis stage. So, a decision must be made. The same issue arises with two-round elections. Should they be considered as one election instance or two? Again, a uniform decision is necessary for the next step of data analysis.
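To make the alliance question concrete, here is a toy sketch with made-up parties and made-up view counts. The same nominal entity produces a very different signal depending on which individuation rule the collector picks:

```python
# Hypothetical daily page-view counts for an alliance and its member parties.
daily_views = {
    "Unity Alliance": 1200,
    "Party A": 800,
    "Party B": 450,
}
members = {"Unity Alliance": ["Party A", "Party B"]}

# Rule 1: the alliance page alone is the unit of analysis.
alliance_only = daily_views["Unity Alliance"]

# Rule 2: the alliance plus its member parties form one aggregated unit.
aggregated = alliance_only + sum(daily_views[m] for m in members["Unity Alliance"])

print(alliance_only)  # 1200
print(aggregated)     # 2450 -- the same "entity", a very different signal
```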

For decisions of delimitation, one must set a logical boundary on something continuous. Think: time. For the Social Election Prediction Project, we are collecting the dates of all of the elections under consideration so that we can compare the Wikipedia page views for the various political parties involved prior to the election. For most electoral systems, the date of an election is simple, but for countries like Italy and the Czech Republic, with two-day elections, the question of when to end the information-seeking period arises. The day before the election begins? After the first day? There is no uniform solution to this question, only yet another subjective decision by the data collector.
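As a sketch of how consequential that cutoff is, consider a two-day election – the dates below are those of Italy’s February 2013 general election, and the 14-day window length is an arbitrary choice for illustration. Each of the three windows is defensible; the collector must pick one and apply it uniformly:

```python
from datetime import date, timedelta

# Italy's 2013 general election was held over two days.
first_day = date(2013, 2, 24)
last_day = date(2013, 2, 25)

WINDOW = timedelta(days=14)  # illustrative length of the pre-election window

# Three defensible cutoffs for where "information seeking" ends:
before_voting = (first_day - WINDOW, first_day - timedelta(days=1))
through_first_day = (first_day - WINDOW, first_day)
through_both_days = (first_day - WINDOW, last_day)

for label, (start, end) in [("before voting", before_voting),
                            ("through first day", through_first_day),
                            ("through both days", through_both_days)]:
    print(f"{label}: {start} to {end}")
```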

In the article quoted above, boyd and Crawford question the objectivity of data analysis, but the subjective strains in big data research begin even earlier, at the collection stage. Data is defined in the collection stage, and these definitions, as with the analysis, can be context-specific. Social media research faces the same definitional problems, but there many of the collection decisions have already been made by the social media platform. Of course, the same criticisms could be raised about traditional statistical analysis as well. While there may be unique benefits to big data research, it faces many of the same problems as previous research methods. Big data is often seen as some sort of “black box”, but the process of building that box can be just as subjective as qualitative research.

 

This post has been cross-posted to the Oxford Internet Institute’s Elections and the Internet blog.