And, just for the record, I finally found the article that describes this research method, and you can take a look at it here. This section, however, does a good job of explaining the idea:
Researchers have recently found another way to go about this, one that is even more sensitive to respondents who might want to hide bias and does not rely on proxy concerns or coded issues. In the late 1990s, a pair of Harvard political scientists probing opinions about affirmative action worried that few people would honestly answer a pollster’s questions on such a delicate subject. Instead, the researchers turned their surveys into an experiment, randomly dividing their sample into two groups. Each group of subjects was provided with a list of statements and asked merely to identify how many they agreed with, rather than having to weigh in on specific statements directly. One group’s list would include an extra, “target” item—“I don’t approve of affirmative action,” say. Then researchers would compare the responses of each group, and attribute the difference in the number of statements chosen to the presence of the target item.
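The comparison the excerpt describes boils down to a difference in means. Here's a minimal sketch with made-up numbers (the function name and all the counts are my own invention, not data from the actual study):

```python
# Sketch of the list-experiment (item count) estimate. All numbers here
# are hypothetical; the real study's lists and responses are not in the post.

def list_experiment_estimate(control_counts, treatment_counts):
    """Difference in mean item counts between the group whose list included
    the extra 'target' statement and the group whose list did not. That
    difference estimates the share of respondents agreeing with the target
    item, without any individual having revealed their own answer."""
    control_mean = sum(control_counts) / len(control_counts)
    treatment_mean = sum(treatment_counts) / len(treatment_counts)
    return treatment_mean - control_mean

# Hypothetical responses: the control list has 4 statements, the treatment
# list adds the target item as a 5th.
control = [2, 3, 1, 2, 3, 2, 1, 3]    # each entry: how many of 4 items agreed with
treatment = [3, 3, 2, 3, 4, 2, 2, 3]  # each entry: how many of 5 items agreed with

estimate = list_experiment_estimate(control, treatment)
print(f"Estimated share agreeing with target item: {estimate:.2f}")
```

The point of the design is visible in the code: no single respondent's count reveals their view of the target item, but the group-level difference does.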
The author of the article also mentions this about the validity of the research methodology:
Academics have lodged minor methodological quarrels with the list-experiment technique, noting that the process can confuse respondents and requires the survey-taker to perfectly calibrate a list of items where most people will agree to some, but not all of them. If it is all or none, respondents could feel that their views on the target item are no longer masked.
To me, this method is a good way of getting around the problem of the Hawthorne Effect. People who know that their opinions or responses are being measured are likely to act or respond differently than they would under normal circumstances. Thus, the idea is either to gain the trust of these people or to ask them questions in a way that does not make them wary or unsure about what answer to give.
Nevertheless, this doesn't solve the problem that many of the people I run into in London simply won't want to take the survey. Most surveys these days are conducted by phone. It's incredibly difficult to get a truly random sample of a population, and randomly dialing people and asking them what they believe is the easiest and most random method surveyors have come up with so far. I, on the other hand, don't have the ability to do even that. My best bet is to find three or four districts in London that, together, are fairly representative of the entire country. Then I have to randomly talk to people from those districts and ask them to take the survey. In addition, I have to randomly assign people to a control group and a test group (one survey with, say, five trends including the trend of declining religiosity, and the other with only four, without declining religiosity on the list).
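The random-assignment step, at least, is easy to get right even on a shoestring. A minimal sketch, with placeholder respondent names and an arbitrary fixed seed:

```python
import random

# Hypothetical sketch: randomly split recruited respondents between the
# control survey (four trends) and the test survey (which adds declining
# religiosity as a fifth trend). Names and seed are placeholders.
random.seed(42)  # fixed seed only so the split is reproducible

respondents = [f"respondent_{i}" for i in range(1, 21)]
random.shuffle(respondents)

half = len(respondents) // 2
control_group = respondents[:half]  # gets the four-trend survey
test_group = respondents[half:]     # gets the survey including declining religiosity

print(len(control_group), len(test_group))
```

Shuffling the whole roster and splitting it in half guarantees equal group sizes, which a per-person coin flip would not.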
I fear that I won't be able to find a random sample. I would guess that asking people on a bus (however awkward that sounds) to take a survey would be somewhat representative, since a majority of people in the country do take public transportation. However, if I knock on doors I will almost undoubtedly talk to the people who tend to be home during the day, which probably means mostly women. I can't exactly get into a taxi and ask a rich businessman to take my survey. And even if I could, chances are he would say no anyway. Should I get a phonebook and randomly call people who live in these districts and ask them the survey questions for hours each day? It might be extremely difficult to do that. Or at least extremely frustrating. I do have ninety days, and if I found ten people willing to take the survey, five days a week, that would mean that after forty days of asking (or more than a month and a half, since those forty days don't include weekends and other days I don't ask) I'll have four hundred people in the study. That might be pretty good. It might be pretty tedious too though :P
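For what it's worth, four hundred isn't a bad number. A back-of-the-envelope check of the plan above (this assumes simple random sampling, which street or bus recruitment won't strictly satisfy, so treat the figure as optimistic):

```python
import math

# The recruiting plan: ten respondents a day, five days a week, forty
# asking days in total.
per_day = 10
days = 40
n = per_day * days  # total respondents

# Worst-case 95% margin of error for a single proportion (p = 0.5),
# under the optimistic assumption of a simple random sample.
moe = 1.96 * math.sqrt(0.25 / n)  # roughly 0.049, i.e. about +/- 5 points
print(n, round(moe, 3))
```

Note that the list-experiment estimate is a *difference* between two groups of two hundred each, so its uncertainty will be noticeably larger than this single-proportion figure.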