What The Polls Can't Tell You
Somewhere deep in the chaos of the comments thread on my previous entry, Richard Henry Morgan and I have been having a discussion about the nature of evidence regarding the current state of the war. He finds reason for optimism and regards all media coverage of the war as suspect. I, obviously, don’t take the same position on either evidence or conclusions reached from it.
One of the side notes in this discussion has involved polls. Historians don’t typically generate polling data themselves, though they may make use of polling data collected by others for the relatively recent historical periods in which it was generated. We may also make use of other bodies of evidence that either deliberately represent or implicitly speak to what typical or representative sentiments were in a particular place and time.
Because it’s not at the center of our methodological practice, I suppose I can afford to cavalierly suggest that I find virtually all polling data suspect, even polls which are friendly to my own arguments or interpretations.
Polls, even those designed with exquisite methodological care and conducted in the best possible circumstances, with a population that can be sampled evenly and randomly, run into two basic problems.
First, they often ask people to resolve the contradictions or ambiguities in their thinking about an issue into a single discrete option or opinion. One might suggest that the way people jump in response to such a prompt can usefully predict how they will act, or do act, in relation to the issues being examined, but I think this is often not the case, particularly not when the ambiguities in the survey population are sharply pronounced. If we ask Iraqi citizens what they think of life since the occupation, I doubt very much that any of them have clear or straightforward opinions that can be resolved down into discrete sentiments. The same would go for any circumstance full of conflict and anxiety: people do not think or believe any single thing about such circumstances, no matter when or where we might look.
Second, pollsters, rather like a very particular lineage of experimental social psychologists, tend to overlook the degree to which responses to polls are themselves communicative acts. This is something that any historian who has collected oral history knows very well, something that the ethnographic method is centrally concerned with.
In responding to a poll, many people are seeking to say something for effect to whomever they perceive to be the entity or institution responsible for the poll, just as an informant may answer the queries of an oral historian or anthropologist in part based on their assessment of the social identity of the questioner and his or her motives for asking the question. Like many Africanists, I’ve often found that I’m perceived by interview subjects as a representative of the local government or possibly of a development agency, and at least some of what they have to tell me is conditioned by messages they wish to convey about their lives and aspirations to those institutions. In societies where polling is a common practice, my default assumption is that at least some people answering a poll do so in a mischievous, monkey-wrenching manner designed to confuse the results, and others give answers that they think the pollster wants to hear.
There is a very narrow vein of polling that strikes me as actually predictive of external behaviors or outcomes, and even then only occasionally: say, tracking polls related to elections. A political historian could make very productive use of such data. I accept, for example, what the tracking polls are showing about the American electorate at the moment, which is that it is sharply divided and that the number of undecided voters in relation to the Presidential race is unprecedentedly small. But that is something, despite Richard Henry Morgan’s skepticism about reportage, that any intelligent commentator could tell you just by reading the cultural tea leaves and listening to messages on the wind.
Poll data over the longue durée, if kept well and consistently, might help to gauge deep changes in the foundational thinking of a society. Any single poll about gay marriage, for example, probably says very little. But a poll using the same set of questions, collected with the same methodology, over a forty-year time series might tell us something worth knowing.
Historians know that the study of mentalité, of what people in general or in particular were thinking about in past societies, is a difficult methodological art, and that any conclusions one reaches about this subject are always of necessity provisional. We know that we want, indeed need, to know how people think, and how their thinking conditions their practice. But such knowledge, if it is to be found anywhere, will almost certainly never be usefully found in a poll.
Richard Henry Morgan - 5/19/2004
This is tangential, I know, but you'll get an idea of some of the sources of my gripes if you visit overpressure.com, and compare what Kimmit said versus what the LA Times reported. There are some interesting observations there.
Richard Henry Morgan - 5/18/2004
There are many challenges to a methodologically sound poll. These are only compounded by the barriers of culture, language, and distrust, as one might find in Iraq. Fair enough. But the question is not whether reportage "could" (your word) tell you the facts (given the highly general nature of the question you put to your reporter), but whether it is a reliable method qua method more generally -- even a stopped clock tells the time correctly, albeit only twice daily.
Reportage suffers many of the same obstacles, but adds some of its own, like the herd mentality of reporters, the deadline, and personal and ideological biases. I offer two words: Walter Duranty. As Malcolm Muggeridge put it: "The greatest liar of any journalist I have met in fifty years of journalism." Given the reputation of journalists, that's quite a slam. How is it that a man, whom another respected writer feels compelled to call a great liar, received a Pulitzer Prize for his lies? And how is it that a NY Times reporter (same root as 'reportage') could, only last year, compound the lies of Duranty by writing in the Times that Duranty was merely too credulous -- too credulous, when he wrote in the Times that there was no starvation in the Ukraine, while telling the British Ambassador that actually 10 million had died? Reporters lie for the same reason that dogs lick their balls -- because they can.