The majority of pollsters got it wrong about Trump, about Brexit, and about the Modi victory in India in 2014. What gives? How can experienced, smart pollsters get it wrong in three different democratic settings? Besides the obvious need for introspection within liberal politics, there is also a need to understand why polls get it so wrong. I had written earlier about the difficulty of conducting surveys in India, and this is a follow-up to that – here, I reflect a bit more on surveying itself.
I think two aspects are important in thinking about the usefulness of polls in predicting elections. First, social cleavages are far more complex than anyone can imagine. Prediction polls tend to paint people in broad strokes, but people are complex beings with multiple identities. And belonging to one group (e.g. white women) does not mean subscribing to all the views that members of that group are expected to hold (e.g. support for Hillary Clinton).
A related dimension is that polls look for outcomes and rarely engage with the why that underlies decision-making. At different points in time, individuals may exhibit similar behaviour, but for entirely different reasons. When surveys do not capture this information (many do not), we are left with data that records apparent choices, but not causes. This is significant – during fieldwork, I found it puzzling that political relations in two parts of the field site were vastly different, even though the party ID of voters was the same. It was only when I moved away from inquiring about party preference towards understanding why political relations took a particular form that I was able to understand shifts and changes better. I realised that particular material conditions brought about peculiar forms of political relations which allowed voters to switch more easily. Capturing the presence of these conditions helped me predict outcomes better.*
Second, the existence of social desirability bias and our ability to capture it also need re-thinking. The existence of the bias itself is unquestionable – we all want to seem better than we are. For example, during fieldwork here in India, voters of the BJP in a Congress-dominated area who chose to share their choice with me did so in secret.* Similarly, in the US, in the face of demeaning characterisations of Trump supporters (and not just of his politics), it is quite likely that voters shied away from committing support in order to seem like somebody else. Surveyors identify shy voters by looking at other behavioural traits – for instance, an inclination to oppose migrants and outsourcing may be read as tacit support for Trump. But social desirability bias extends to these traits as well.
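The arithmetic of this shy-voter effect is easy to sketch. A minimal illustration, with all numbers invented for the example (they are not estimates from any actual poll):

```python
# Hypothetical sketch: how a modest "shy voter" effect can make a
# winning candidate poll as though they were losing.
# All figures below are invented for illustration.

def polled_support(true_support, shy_rate):
    """Estimated vote share when a fraction `shy_rate` of true
    supporters hide their preference and report the other side."""
    misreported = true_support * shy_rate  # supporters who report the other side
    return true_support - misreported

true_support = 0.51  # assumed true vote share (above 50%)
shy_rate = 0.06      # assumed share of supporters who misreport

estimate = polled_support(true_support, shy_rate)
print(round(estimate, 4))  # the poll now shows the candidate under 50%
```

Even a small misreporting rate is enough to flip the apparent outcome in a close race, which is why detecting shy voters matters so much – and why the problem compounds when the proxy questions used to detect them are themselves subject to the same bias.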
A related issue arises from confirmation bias – the tendency we all have to confirm our initial hypotheses, beliefs and attitudes. In the US electoral context, Trump started out as a non-serious candidate and, later on, received little support from the GOP. On the other hand, the Democratic campaign was serious, Hillary Clinton had contested in the primaries before, and it was widely believed she would win. This possibly fuelled pollsters' assumptions and, combined with people's desire to appear better, compromised predictive ability.
In sum, surveys are an instrument whose efficacy is complicated by both the surveyor and the surveyed. Lack of knowledge about the why and social desirability bias come together to refract the results.
*This desire for secrecy may be linked to the flows (or perceived flows) of benefits locally – these often track party ID.