Friday, March 2, 2012

Journal Entry #22 - 3/2/12

So I have been going over in my head how I'm going to approach people about my surveys while I'm in London. The good thing about the survey is that it is only one question. It will not be clear from the survey itself exactly what we are measuring, since each survey will just include a list of current trends in the United Kingdom and will ask the person taking the survey to report how many of the given trends they find troubling. Survey takers won't have to say which of the trends they believe are negative, which means they will probably be more honest about what they report.

And, just for the record, I finally found the article that describes this research method, and you can take a look at it here. This passage in particular does a good job of explaining the idea:

Researchers have recently found another way to go about this, one that is even more sensitive to respondents who might want to hide bias and does not rely on proxy concerns or coded issues. In the late 1990s, a pair of Harvard political scientists probing opinions about affirmative action worried that few people would honestly answer a pollster’s questions on such a delicate subject. Instead, the researchers turned their surveys into an experiment, randomly dividing their sample into two groups. Each group of subjects was provided with a list of statements and asked merely to identify how many they agree with, rather than having to weigh in on specific statements directly. One group’s list would include an extra, “target” item—“I don’t approve of affirmative action,” say. Then researchers would compare the responses of each group, and attribute the difference in the number of statements chosen to the presence of the target item.
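
To make sure I actually understand how the estimate falls out of the two groups, I wrote myself a tiny simulation in Python. All the numbers in it are made up, and it's obviously not the original researchers' code, just my own sanity check:

```python
import random

random.seed(42)

# Made-up parameters for a sanity check (not real data).
N = 400                 # total respondents
TRUE_PREVALENCE = 0.30  # fraction who privately hold the target opinion
BASELINE_ITEMS = 4      # non-target trends shown to everyone

control_counts = []    # these respondents see only the 4 baseline trends
treatment_counts = []  # these respondents also see the target trend

for _ in range(N):
    # Each respondent finds some number of the baseline trends troubling.
    count = sum(random.random() < 0.5 for _ in range(BASELINE_ITEMS))
    if random.random() < 0.5:
        control_counts.append(count)
    else:
        # Treatment group's count also reflects the target item.
        holds_target_view = random.random() < TRUE_PREVALENCE
        treatment_counts.append(count + holds_target_view)

mean_control = sum(control_counts) / len(control_counts)
mean_treatment = sum(treatment_counts) / len(treatment_counts)

# The gap between the two groups' average counts estimates the share of
# people who hold the target opinion, without anyone having revealed it.
print(f"Estimated prevalence: {mean_treatment - mean_control:.2f}")
```

Run enough respondents through it and the printed estimate hovers somewhere around 0.30, give or take the noise from such a small sample, which is reassuring.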


The author of the article also mentions this about the validity of the research methodology:

Academics have lodged minor methodological quarrels with the list-experiment technique, noting that the process can confuse respondents and requires the survey-taker to perfectly calibrate a list of items where most people will agree to some, but not all of them. If it is all or none, respondents could feel that their views on the target item are no longer masked.


To me, this method is a good way of getting around the problem of the Hawthorne effect. People who know that their opinions or responses are being measured are likely to act or respond differently than they would under normal circumstances. Thus, the idea is either to gain the trust of these people or to ask them questions in a way that does not make them wary or unsure about what answer to give.

Nevertheless, this doesn't solve the problem that many of the people I run into in London simply won't want to take the survey. Most surveys these days are conducted over the phone. It's incredibly difficult to get a truly random sample of a population, and randomly dialing phone numbers and asking individuals what they believe is the closest approximation of random sampling surveyors have come up with so far. I, on the other hand, don't have the ability to do even that. My best bet is to find three or four districts in London that, together, are fairly representative of the entire country. Then I have to randomly talk to people from those districts and ask them to take the survey. In addition, I have to randomly assign people to a control group and a test group (one survey with, say, five trends including the trend of declining religiosity, and the other with only four, without declining religiosity on the list).
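
For the assignment itself, something this simple should do the trick. The trend lists below are placeholders I invented for the sketch, not my actual survey items:

```python
import random

# Placeholder trends invented for this sketch, not my actual survey items.
CONTROL_TRENDS = [
    "Rising cost of housing",
    "Growing use of social media",
    "Increase in university tuition fees",
    "Rise of online shopping",
]
TARGET_TREND = "Decline in religious belief and practice"

def make_survey():
    """Randomly assign a respondent to the control or test version."""
    if random.random() < 0.5:
        return "control", list(CONTROL_TRENDS)
    # Test version: the same four trends plus the target item, shuffled
    # so the target doesn't always sit in the same spot on the page.
    trends = CONTROL_TRENDS + [TARGET_TREND]
    random.shuffle(trends)
    return "test", trends

group, trends = make_survey()
print(f"Group: {group}")
for trend in trends:
    print(" -", trend)
print("How many of these trends do you find troubling?")
```

Shuffling the test list is my own addition, on the theory that always tacking the target item onto the end might make it stand out.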

I fear that I won't be able to find a random sample. I would guess that asking people on a bus (however awkward that sounds) to take a survey would be somewhat representative, since a majority of people in the country do take public transportation. However, if I knock on doors I will almost undoubtedly talk to the people who tend to be home during the day, which probably means mostly women. I can't exactly get into a taxi and ask a rich businessman to take my survey. And even if I could, chances are he would say no anyway. Should I get a phonebook and randomly call people who live in these districts and ask them the survey question for hours and hours each day? That might be extremely difficult. Or at least extremely frustrating. I do have ninety days, and if I found ten people willing to take the survey each day, five days a week, then after forty days of asking (or more than a month and a half, since those forty days don't include weekends and other days I don't ask) I'll have four hundred people in the study. That might be pretty good. It might be pretty tedious too though :P
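
And just to double-check my own arithmetic, plus get a rough sense of how precise four hundred responses would actually be (a crude worst-case calculation, not a proper power analysis):

```python
# Back-of-the-envelope math for the fieldwork plan.
per_day = 10
days_per_week = 5
weeks = 8

total = per_day * days_per_week * weeks
print(total)  # 400 respondents, i.e. roughly 200 per group

# Rough 95% margin of error for the difference between the two groups'
# average counts, treating each group like a simple proportion with the
# worst-case p = 0.5. The list design adds extra noise on top of this,
# so the true uncertainty is somewhat larger.
n_per_group = total // 2
moe_diff = 1.96 * (0.25 / n_per_group + 0.25 / n_per_group) ** 0.5
print(f"{moe_diff:.1%}")  # about +/- 9.8 percentage points
```

So four hundred people gets me into the right ballpark, but only if the effect I'm looking for is fairly large.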
