How reliable are opinion polls?


It is an accepted principle that the right to freedom of expression does not include the right to shout “fire! fire!” in a cinema hall if there is no conflagration. A stampede may not result from such an action, but by causing an unnecessary evacuation, you are hurting the right to free expression of those watching the film. In other words, the right to free speech can and should be curtailed if your free speech comes in the way of my free speech.

That principle applies to pre-election opinion polls too. Election canvassing is two-way communication between candidates and voters. The media’s job is to report on, contextualize and even give its opinions on this communication. But when the media starts telling the voter who’s going to win how many seats, it starts interfering in the communication between candidates and voters. Given that these polls come with a claim of scientific methodology and empirical truth, they are not the same as saying, “I think this party will win.” An opinion poll implies that the media has already asked the people who they will vote for, and reported it back to the people even before the people actually vote.

Opinion polls are so clearly a way of influencing voter behavior that it is surprising they have been allowed for so long. They are particularly insidious in a multi-party democracy with a first-past-the-post electoral system. In such a system, there may be as many as 10-20 candidates in the fray in a constituency. With so many options, voters don’t like to waste their votes. Typically, they want to know who is seriously in the fray and whose candidature can be ignored. Such election discourse is known as “hawa”, literally “wind”: the prevailing buzz about who is winning. Through rallies, speeches, posters, and the media, political parties try to create the perception that they are a serious option, and that you will not be wasting your vote if you choose them.

This is why we have “paid news” at election time, a phenomenon that has been declared malpractice by the Election Commission. In recent years, parties and candidates have been willing to pay for advertisements that are published in the news columns, suggesting that they are in the fray and are likely to win. That is how important it is to create the illusion of victory.

Enter opinion polls. Hard data. Not your teashop chatter, not the speculation of the armchair political analyst, not the tall claims of partisans. Despite the skepticism they attract for being repeatedly wrong, these surveys suggest broad trends to voters and thus influence the hawa.

Since there are no studies showing the impact of opinion polls on voter behavior, one can only speculate about their effects. One could argue that if the opinion polls for the recent Delhi assembly election hadn’t written off the Aam Aadmi Party, it could have won even more votes and seats than it did. But since the opinion polls largely suggested the Bharatiya Janata Party was coming to power with a clear majority, perhaps many voters thought they would be wasting their vote on the AAP.

You could ask: if opinion polls frequently get it wrong, how can they be accused of influencing voter behavior? It turns out that some pollsters are only too happy to deliberately manipulate their data to do just that. Seven reporters from a news channel, News Express, approached 11 opinion poll companies, posing as officials of political parties and asking whether the pollsters would put out manipulated data showing their party doing better. Apart from CVoter, the other ten are not big names in election forecasting. Two large firms, AC Nielsen and CSDS-Lokniti, refused to entertain the undercover reporters, saying they were booked for the election season.

In tapes that the channel released on February 25, CVoter managing director Yashwant Deshmukh is seen and heard saying that minor tweaking of results would be possible, though he could not completely invent false results. “Rafu” is possible, he says, not “paiband”: we can darn the results, but not put a patch on them. The reference is to manipulating results a little by widening the margin of error from 3% to 5%. In many constituencies, however, the margin of victory can be less than 1%.
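To get a sense of how much room even that small widening gives, here is a rough back-of-the-envelope sketch in Python. It is my own illustration, not from the article or from CVoter’s methodology; the sample sizes and the hypothetical 50.4% vs 49.6% race are assumed purely for the example.

import math

def margin_of_error(p, n, z=1.96):
    # Approximate 95% margin of error for a vote share p estimated from n respondents
    return z * math.sqrt(p * (1 - p) / n)

true_gap = 0.504 - 0.496  # a hypothetical close seat: winner ahead by 0.8 points

for n in (1067, 384):  # sample sizes that give roughly +/-3% and +/-5%
    moe = margin_of_error(0.5, n)
    print(f"n={n}: margin of error is about +/-{moe:.1%}, "
          f"far wider than the {true_gap:.1%} gap that decides the seat")

The point of the arithmetic: a pollster willing to “darn” results anywhere inside a ±5% band has several percentage points of play, which is more than enough to flip the projected winner in a seat decided by less than one point.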

Deshmukh defended himself on Twitter, but did not deny or explain the “rafu” vs “paiband” statement. Though Deshmukh claims he refused the offer, the channel is not showing that part of his interaction with them.

Let us just look at CVoter’s performance. CVoter predicted a BJP win in the Delhi assembly elections in 2008: 39 seats for the BJP and 30 for Congress. The result was 23 for the BJP and 43 for Congress. In the 2009 general elections, it predicted 189 seats for the BJP-led National Democratic Alliance (NDA) and 195 seats for the Congress-led United Progressive Alliance (UPA). The result was 159 and 262. In the previous general election, in 2004, it had predicted 276 seats for the NDA, but the actual result was 181. The UPA won 218 seats, as against CVoter’s forecast of 173. That was an exit poll forecast, based on what voters said after having cast their votes. It gets worse: CVoter has predicted two different results for two different channels.

The India Today Group put out a statement that it was suspending CVoter’s services, but the question is: why does a company that gets it wrong all the time get commissioned again and again? In any other industry, repeated failure would mean you go out of business. In election forecasting, your business only seems to grow. The only conclusion you can draw is that news channels don’t really care about getting it right. Either they only care about the television ratings that broadcasting opinion polls brings, or, worse, they’re also in on the game of creating hawa for cash.
