97% of Australians prefer a steaming pile of poo to Eddie McGuire

This week I posted a survey on twtpoll, asking people the following question:

Which would you rather have in your home: Eddie McGuire or a steaming pile of poo?

The results were emphatic:

  • Eddie McGuire: 2 votes (3%)
  • Steaming pile of poo: 61 votes (97%)

While the result was terribly entertaining, I ran the survey to point out some of the problems with the many, many survey stories we see in the media.

If a journalist received a media release with these survey results in it, would he or she carefully scrutinise the details? Or write an amusing story to fill a gap in the news section? After all, nobody likes Eddie McGuire.

But a careful examination would reveal a large number of flaws in my technique.

Sample size

How many people took part in the survey? Is this a representative sample of the population the study claims to represent?

In this case, there were 63 respondents, which is nowhere near enough to gauge the opinions of all 22 million Australians. For a population that size, you would need a properly drawn sample of hundreds or thousands of votes for the result to be statistically meaningful. Even then, there would be a margin of error (ie, the true result could be as much as 5% higher or lower than the one in the survey).
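As a rough illustration, the standard margin of error for a poll is z * sqrt(p(1-p)/n), where p is the observed proportion and n is the sample size. Here’s a quick Python sketch (the sample sizes are just examples I’ve picked) showing how the error shrinks as the sample grows:

    import math

    def margin_of_error(p, n, z=1.96):
        # 95% margin of error for a sample proportion:
        # z * sqrt(p * (1 - p) / n)
        return z * math.sqrt(p * (1 - p) / n)

    # Worst case (a 50/50 split) for my 63 respondents:
    print(f"n=63:    +/- {margin_of_error(0.5, 63):.1%}")    # about +/- 12%
    # A typical professionally run national poll:
    print(f"n=1,000: +/- {margin_of_error(0.5, 1000):.1%}")  # about +/- 3%

And even that flatters my survey: the formula only holds for a genuinely random sample, which, as the sections below show, mine most certainly was not.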

Avoiding duplication

How rigorous was the survey? Did it make any effort to ensure people weren’t counted twice, such as deduplicating the data? In the case of an online survey, did it use cookies or block repeat votes from the same IP address?

My survey was not at all rigorous. I allowed people to vote multiple times if they wanted. I voted three times. If I were writing up a media release about my survey, I wouldn’t mention this, or I’d put it in tiny print at the bottom. No one would check.
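For comparison, here’s roughly what basic deduplication might look like. The vote records and field names are hypothetical, invented for this sketch; a real poll service would have its own schema:

    # Hypothetical vote records: (ip_address, cookie_token, choice)
    votes = [
        ("203.0.113.5", "abc123", "poo"),
        ("203.0.113.5", "abc123", "poo"),    # me, voting again
        ("203.0.113.5", "abc123", "poo"),    # and again
        ("203.0.113.9", "def456", "eddie"),
    ]

    def deduplicate(votes):
        # Count only the first vote seen per (IP, cookie) pair
        seen = set()
        counted = []
        for ip, cookie, choice in votes:
            if (ip, cookie) not in seen:
                seen.add((ip, cookie))
                counted.append(choice)
        return counted

    print(deduplicate(votes))  # ['poo', 'eddie']: my extra votes disappear

Even this is imperfect: a whole office sharing one connection would be counted as a single voter, which is one reason serious pollsters work from verified panels of respondents instead.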

Selection bias

Were the people who responded to the survey from a reasonable and broad cross-section of society or from a particular group? Would this affect their likelihood of voting one way or another?

My survey had 63 respondents, but who were they and how were they selected? In this case, I posted the survey on Twitter and Facebook. A couple of people retweeted it. Which means the respondents were:

  • Regular internet users
  • (Mostly) people I know

Would these factors make them more keen on poo? Or less likely to be fans of Eddie McGuire? Especially given the timing of the survey, just after Eddie made homophobic comments about an American figure skater.

A good example of selection bias is the recent, widely reported survey on ISP-whinge community site Whirlpool, which found that 92% of its members were against mandatory ISP filtering. But look at who responded: 32.5% said they worked in IT, 70.9% were aged under 40, and 72.1% described themselves as technical ‘gurus’ or ‘power users’.

So, the fact that 92% of the people most likely to oppose internet censorship were against internet censorship was hardly surprising. But it would be a giant leap to say this figure could translate to the broader Australian population.
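To see how badly a self-selected audience can distort a headline figure, here’s a toy simulation. All the numbers in it are invented for illustration: suppose 10% of the population are technical ‘power users’, power users oppose filtering 90% of the time versus 45% for everyone else, and power users are eight times more likely to answer the poll:

    import random

    random.seed(1)

    # Invented, purely illustrative numbers
    P_TECH = 0.10            # share of the population who are 'power users'
    P_OPPOSE_TECH = 0.90     # power users who oppose filtering
    P_OPPOSE_OTHER = 0.45    # everyone else who opposes filtering
    RESPONSE_BOOST = 8       # power users are 8x more likely to respond

    def simulate_poll(n):
        opposed = 0
        for _ in range(n):
            # Self-selection skews who actually answers the poll
            w = P_TECH * RESPONSE_BOOST
            is_tech = random.random() < w / (w + (1 - P_TECH))
            rate = P_OPPOSE_TECH if is_tech else P_OPPOSE_OTHER
            opposed += random.random() < rate
        return opposed / n

    print(f"The poll reports: {simulate_poll(10_000):.0%} opposed")
    # True population rate: 0.10*0.90 + 0.90*0.45 = 49.5% opposed

The poll comes back at roughly 66% opposed even though this made-up population is split almost down the middle. That’s the same mechanism at work in the Whirlpool survey, only with a far more lopsided audience.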

Leading language

Did the language of the question favour one result over another? Why did the survey ask that particular question?

There is a definite art to choosing and framing survey questions. Asking the right question – or often not asking the wrong one – is a great way to get the answer you want.

I was quite deliberate in my choice of words, asking people which they would rather have in their homes. Not, for instance, which they would rather watch on TV. My aim was to make people contemplate the idea of having a steaming (smelly) pile of poo in their houses, which I thought would swing a few votes Eddie’s way.

Clearly I underestimated how much people disliked Eddie.

Manipulation

When people took the survey, how were they directed to the question? Did the surveyors couch it in a preamble or introduction that might sway respondents one way or the other?

In directing people to the survey, I started out just tweeting the question but gradually became more manipulative.

  • Which would you rather have in your home: Eddie McGuire or a steaming pile of poo?
  • Disturbingly, a steaming pile of poo is outpolling Eddie McGuire 7 votes to 0. Can this be true? Have your say.
  • I really thought at least one person would prefer Eddie McGuire to a steaming pile of poo. Do you?

Is it a coincidence that people only started voting for Eddie after I sent the third tweet?

Some thoughts for journos

Survey stories are great fun. They make for great headlines: a number, then a provocative or contentious topic. What’s more, they give these matters of opinion or controversy a pseudo-scientific rigour.

But it’s fair to say of nearly all the survey press releases you receive that:

  • The organisation paying for the survey has a very strong interest in the result being a particular way
  • The organisation conducting the survey is aware of this interest (or is the same organisation that commissioned the survey)
  • Surprise surprise, the survey results turn out the way the commissioning organisation wanted them to.

Journalists who reprint media releases verbatim, or rewrite them with no additional research, are often subjected to scathing criticism from their peers.

But journos who publish the results of obviously biased and flawed surveys without any critical analysis usually seem to get away with it. It’s time we got a lot smarter.
