Wednesday, May 20, 2009

Survey technique for awkward questions and evasive answers

I enjoyed this piece by Sidin Vadukut in last Saturday's Mint. It relates to something we often have to deal with at work: how to estimate the prevalence of issues that people may be uncomfortable talking about. Not surprisingly, the Randomized Response Technique described below involves a derived answer, one that thoroughly masks the source of individual responses. Nothing wrong with that, of course. In fact, ensuring the confidentiality of respondents should be part of a researcher's equivalent of the Hippocratic Oath. I've pasted the article in its entirety below, in italics.

In a landmark 1965 paper called “Randomized Response: A Survey Technique for Eliminating Evasive Answer Bias”, Stanley Warner outlined an interesting way of carrying out surveys. Let me explain the idea without going too much into the mathematics. Instead, I’ll focus on how it’s done, why it’s useful and what happened when we ran a little Lok Sabha exit poll here in the office using Warner’s Randomized Response method.

The Randomized Response Technique is used when you want to research the prevalence of issues that people feel uncomfortable talking about. A college would never be able to accurately survey its students for the prevalence of drug use and cheating. A student would never risk being identified saying: “Yes, I use drugs. Yes, I cheat. And sometimes both at the same time.”

Warner’s survey works in two stages: first you ask the respondent to roll a die, pick a card or the like. Depending on the random result, they answer one of two possible questions, without you knowing which question they got. (Hold on.) The respondent merely says whether they agree or disagree with it. Then you survey the next respondent. And so on.

Finally the math comes in. Using formulae that combine the chance of each question being picked with the frequency of each answer, you can estimate how the whole pool of respondents felt. Obviously, the bigger the sample, the better the accuracy.
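
To make that concrete, here is a minimal Python sketch of the estimator from Warner's paper. The function name and the plug-in standard error are my own additions; the formula itself simply inverts P(agree) = p·π + (1 − p)·(1 − π), where p is the chance of drawing the “I belong to group A” statement and π is the true proportion of group A.

    def warner_estimate(n_agree, n_total, p):
        """Warner (1965) estimator of the proportion pi of group A.

        p       -- probability a respondent draws the statement
                   "I belong to group A" (must not equal 0.5)
        n_agree -- number of "agree" answers recorded
        n_total -- total number of respondents
        """
        lam = n_agree / n_total                  # observed share of "agree"
        pi_hat = (lam - (1 - p)) / (2 * p - 1)   # invert P(agree)
        # plug-in standard error, from Var(pi_hat) = lam*(1-lam) / (n*(2p-1)^2)
        se = (lam * (1 - lam) / n_total) ** 0.5 / abs(2 * p - 1)
        return pi_hat, se

Because neither the surveyor nor the formula needs to know which statement any individual drew, no single response ever gives anyone away.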

(There is clearer math in the second paper linked below.) But let me tell you what we tried in the office. We took 16 identical pieces of paper and on 12 of them we wrote the statement “I did not vote for the UPA (United Progressive Alliance): Congress or allies”. On the remaining four, we wrote “I did vote for the UPA: Congress or allies”. Testy questions indeed.

Then we shuffled the cards statement-side down and asked employees to pick one each. (The card was returned and the deck shuffled after each employee. Only people who actually voted were allowed to pick.) Each employee picked a card at random, looked at the statement and then merely said whether they agreed with it or not. As the surveyor, all I noted down was the number of agrees and disagrees. Nothing more. I had no idea which question they got, and so no idea what their response implied.

Then I ran the math. Using a sample size of 34 voters (very small, but good enough to blog about) and the 16 cards, we estimated that 32.53% of the office voted for the UPA and the rest did not vote for the UPA. (So we really can’t say who they did vote for. That wasn’t the question, you see.) Of course, it could be wildly inaccurate given the sample size. But it’s a fun, mildly magical way to do exit polls, no? Why not try one in the office right now, process the results and then look like a genius? And unlike some of those TV channels, you have the math to prove it.
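
If you'd like to check the arithmetic, here is a short simulation of the card poll, in the same vein as the sketch above. The article doesn't report the raw count of agrees, so the true UPA share fed in below is a made-up input; for the record, 20 agrees out of 34 works back to roughly 32.4%, close to the figure quoted.

    import random

    def office_poll(true_upa_share, n_voters=34, n_trials=10_000):
        """Simulate the 16-card exit poll and average the resulting estimates.

        true_upa_share -- assumed real proportion of UPA voters (hypothetical)
        n_voters       -- respondents per poll (34 in the article)
        """
        p_not = 12 / 16  # chance of drawing "I did not vote for the UPA"
        estimates = []
        for _ in range(n_trials):
            agrees = 0
            for _ in range(n_voters):
                voted_upa = random.random() < true_upa_share
                drew_not_card = random.random() < p_not  # card returned, deck reshuffled
                # agree iff the card's statement matches the voter's actual vote
                agrees += drew_not_card != voted_upa
            lam = agrees / n_voters
            # invert P(agree) = p_not*(1 - pi) + (1 - p_not)*pi
            estimates.append((p_not - lam) / (2 * p_not - 1))
        return sum(estimates) / n_trials

    print(office_poll(0.33))  # hovers near 0.33, noisily, at n = 34

Averaged over many runs the estimate centres on the true share, which is the whole point: no individual answer reveals anything, yet the aggregate does.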

Warner’s paper can be found at: http://ihome.cuhk.edu.hk/~s0802340/sta300308/ref4_1.pdf (Alternatively, use the following link: http://tinyurl.com/randompoll) An easier explanation can be found at www.eric.ed.gov/ERICWebPortal/contentdelivery/servlet/ERICServlet?accno=ED187753 (Use the following short address: http://tinyurl.com/randompoll2)
