How we got the Delhi polls all wrong
Not only did the seat forecasts go wrong; even the vote share estimates were off in the Delhi Assembly election results.
The AAP’s historic victory was underestimated by all exit polls, and even those that caught the lead missed the extent of the wave. This was not merely poor fieldwork or the usual sampling error of an exit poll; something went wrong beyond the usual. With a 21 percentage point lead, even simple swing models would give the AAP about 65 seats. So what went wrong? Not only did the seat forecasts go astray; even the vote share estimates were off.
Past experience suggests that polls tend to underestimate the quantum of victory in wave elections. The typical characteristics of a wave election are a complete rout of small parties and independents, with all major parties other than the winner reduced to their core vote bank. As in this election, the rout of small parties and independents is usually underestimated in polls, since their vote shares are adjusted according to past trends. In 2013, “others” got about 12 per cent of the votes, while this time round the wave reduced them to a mere 3.5 per cent. Adjusting the share of these “others” up to five-eight per cent ate into the vote shares of all parties, reducing the extent of the lead. Most polls also overestimated the Congress. This could have happened for two reasons: first, adjusting for recall of the Congress vote in the last Assembly election; and second, slight over-reporting by Congress supporters who actually voted for the AAP in the polling booth. This kind of “late swing” is usual in wave elections, as a few voters of trailing parties vote for the leading party but report the party they had come out to vote for. This momentum of the wave was captured in most credible opinion polls.
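The arithmetic of that adjustment is easy to sketch. The figures and the `adjust_others` helper below are hypothetical, not actual poll data; the point is only that rescaling major-party shares to accommodate a larger “others” block mechanically shrinks the measured lead:

```python
# Hypothetical illustration: bumping up the "others" share on the basis
# of past trends, and rescaling the rest, narrows the top-two gap.

def adjust_others(shares, others_target):
    """Rescale the non-'others' shares so that 'others' hits a target
    share; `shares` maps party -> raw vote share in per cent."""
    majors = {p: s for p, s in shares.items() if p != "others"}
    scale = (100 - others_target) / sum(majors.values())
    adjusted = {p: s * scale for p, s in majors.items()}
    adjusted["others"] = others_target
    return adjusted

# Invented raw numbers roughly in the election's ballpark.
raw = {"AAP": 54.0, "BJP": 33.0, "Congress": 9.5, "others": 3.5}
adj = adjust_others(raw, 7.0)

print(raw["AAP"] - raw["BJP"])   # raw lead: 21.0 points
print(adj["AAP"] - adj["BJP"])   # adjusted lead: about 20.2 points
```

With “others” bumped from 3.5 to seven per cent, a 21-point raw lead drops to roughly 20.2 points; stack a similar correction for the Congress on top and the reported lead shrinks further.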
Till early January, the AAP was trailing the BJP; around the third week of January, voters started swinging in its favour. The three-six percentage point deficit reported in the first week of January turned into a two-three percentage point lead in most snap polls conducted after the entry of Kiran Bedi, and this consolidated into a six-ten percentage point lead around February 2. Ideally, exit polls should have captured this momentum, but what turned out to be a 21 percentage point lead was underestimated by about 10-15 percentage points in most polls.
However, even a six-eight percentage point lead should have translated into more than 50 seats, which on election night would have seemed bullish on the AAP. On top of the error in vote estimates, seat forecasts were also skewed to the lower side, adding to the overall inaccuracy of exit polls this time round. Many seats were very close contests, with a few thousand votes being the decisive factor. As swings are uniform in most models, they tend to underestimate a scenario where most close seats are bagged by the leading party. The low “multiplier” of the BJP (with over 32 per cent of the votes, the BJP got less than 5 per cent of the seats) reinforces this point. Another factor that skewed the seat forecasts was the adjustments made for the Congress. As the Congress had some very strong candidates like Shoib Iqbal and Mateen Ahmad, most pollsters seem to have either manually adjusted for such seats or tweaked their swing models in order to predict them accurately. This too backfired, as all these seats went into the AAP’s kitty.
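A toy uniform-swing calculation illustrates the problem. The margins and the `seats_won` helper below are invented for illustration, not drawn from actual constituency data: when many seats sit within a few points of the tipping point, a modest underestimate of the swing produces a large underestimate of the winner’s seat tally.

```python
# Sketch of a uniform-swing seat model with hypothetical numbers:
# the same swing is added to every seat-level margin, and a seat
# flips to the leading party once its margin crosses zero.

def seats_won(margins, swing):
    """Count seats the leading party wins after adding a uniform
    swing (in percentage points) to each seat-level margin."""
    return sum(1 for m in margins if m + swing > 0)

# Hypothetical margins (leading party's margin in each seat, in
# percentage points), clustered near zero as in close contests.
margins = [-12, -9, -7, -5, -4, -3, -2, -1, 0.5, 1, 2, 3, 4, 6, 8]

print(seats_won(margins, 3))   # modest swing: 9 of 15 seats
print(seats_won(margins, 10))  # wave-sized swing: 14 of 15 seats
```

Because the close seats all tip the same way once the swing is large enough, understating the swing by a few points costs the model a disproportionate number of seats, which is exactly the pattern the exit polls showed.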
Finally, as a pollster, I imagine most of us would be very hesitant to forecast a number like 67 seats in a 70-member Assembly. Given the kind of pressure pollsters operate under, it is unlikely that anyone would have gone with such a high number. Under constant scrutiny, and seeking to guard against allegations of bias, one might well have ended up being conservative: forecasting that a single party would bag as many as 95 per cent of the total seats would have been a tough call.