In the polling world, no survey firm releases its microdata in a timely manner. When pollsters release it at all — usually months after publication, to an archive that requires a paid subscription for access — they seldom provide the detailed methodological explanations necessary to replicate the survey results.
In recent years, critics have accused a handful of pollsters of full-scale fabrication, like that alleged in the LaCour study, and such a wholly fraudulent poll might well evade detection.
In particular, there is reason to think that pollsters engage in a behavior known as herding, in which they announce results that are similar to those of other recent polls. Why would they do this? In part, because it’s safer; they fear being wrong. Pollsters are judged by their results, and in an era of close elections, expectations of accuracy are high.
There is strong evidence that some firms have engaged in herding. Ahead of the 2012 presidential election, surveys by Public Policy Polling (PPP), a Democratic firm, showed an unusual pattern: as voters’ preferences shifted, the estimated racial composition of the electorate shifted as well. When President Obama lost support among white voters, for instance, the next poll would include more nonwhite voters than the previous survey had. As a result, the topline results remained fairly stable.
Tom Jensen, PPP’s director, said that these shifts resulted from randomly deleting respondents until the sample matched the firm’s expectations for how respondents said they had voted in 2008. But the practice was not described in the firm’s public methodology statement, and the survey’s public releases did not include that statement anyway.
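The procedure Jensen describes, discarding randomly chosen respondents until the sample’s recalled 2008 vote matches a target, can be sketched roughly as follows. This is a hypothetical illustration, not PPP’s actual code: the field name, target shares, and tolerance are all assumptions.

```python
import random

def trim_to_target(respondents, key, target_shares, tol=0.005, seed=0):
    """Randomly delete respondents until no category of `key` exceeds
    its target share by more than `tol` (a sketch of 'delete until the
    sample matches expectations')."""
    rng = random.Random(seed)
    sample = list(respondents)

    def shares(s):
        counts = {}
        for r in s:
            counts[r[key]] = counts.get(r[key], 0) + 1
        return {k: c / len(s) for k, c in counts.items()}

    while True:
        cur = shares(sample)
        # How far each group is above its target share.
        over = {k: cur[k] - target_shares.get(k, 0) for k in cur}
        worst = max(over, key=over.get)
        if over[worst] <= tol:
            break
        # Delete one random respondent from the most over-represented group.
        idx = rng.choice([i for i, r in enumerate(sample) if r[key] == worst])
        sample.pop(idx)
    return sample

# Hypothetical raw sample: 60% recall voting Obama in 2008; target is 53%.
raw = ([{"recalled_2008": "Obama"}] * 600
       + [{"recalled_2008": "McCain"}] * 400)
trimmed = trim_to_target(raw, "recalled_2008",
                         {"Obama": 0.53, "McCain": 0.47})
```

Because each pass deletes respondents only from whichever group is over-represented at the moment, the surviving sample’s composition is pulled toward the target regardless of what the raw interviews showed, which is how such trimming can keep topline results stable even as the underlying responses move.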