This is one of the reasons I am a big proponent of replication and transparency in what we do.
How is it that economists, working in good faith, wind up with dubious results? To start, they can overanalyze the data. Modern computers spit out statistical regressions so fast that researchers can fit some conclusion around whatever figures they happen to have. “When you run lots of regressions instead of just doing one, the assumptions of classical statistics don’t hold anymore,” Mr. Roberts says. “If there’s a 1 in 20 chance you’ll find something by pure randomness, and you run 20 regressions, you can find one—and you’ll convince yourself that that’s the one that’s true.”
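Mr. Roberts's 1-in-20 arithmetic can be checked directly. The sketch below is illustrative, assuming the 20 regressions are independent and each has a 5 percent false-positive rate on pure noise (the setup his quote implies); it compares the analytic probability of at least one spurious "finding" with a Monte Carlo simulation.

```python
import random

ALPHA = 0.05        # conventional significance threshold (the "1 in 20")
N_TESTS = 20        # regressions run on the same noise data
N_TRIALS = 100_000  # Monte Carlo repetitions (an illustrative choice)

# Analytic probability that at least one of 20 independent tests
# comes up "significant" on pure noise: 1 - (1 - alpha)^20
analytic = 1 - (1 - ALPHA) ** N_TESTS

# Monte Carlo check: model each test as a false positive with
# probability alpha, and count trials where at least one test "hits"
random.seed(0)
hits = sum(
    any(random.random() < ALPHA for _ in range(N_TESTS))
    for _ in range(N_TRIALS)
)
simulated = hits / N_TRIALS

print(f"analytic : {analytic:.3f}")
print(f"simulated: {simulated:.3f}")
```

Both numbers come out near 0.64: run 20 regressions on randomness and, roughly two times out of three, at least one of them will look like a real effect.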
As if to prove the point, an economist two decades ago wrote an article charmingly titled “I Just Ran Two Million Regressions,” which found economic growth to be strongly correlated with Confucianism. Yet many studies aren’t so methodologically transparent. “You don’t know how many times I did statistical analysis desperately trying to find an effect,” Mr. Roberts says. “Because if I didn’t find an effect I tossed the paper in the garbage.”
Economists also look for natural experiments—instances when some variable is changed by an external event. A famous example is David Card's 1990 study concluding that the influx of Cubans from the Mariel boatlift didn't hurt prospects for Miami's native workers. Yet researchers still must make subjective choices, such as which cities to use as a control group.
Harvard's George Borjas re-examined the Mariel data last year and insisted that the original findings were wrong. Then Giovanni Peri and Vasil Yasenov of the University of California, Davis, retorted that Mr. Borjas's rebuttal was flawed. The war of attrition continues. To Mr. Roberts, this indicates something deeper than detached analysis at work. "There's no way George Borjas or Peri are going to do a study and find the opposite of what they found over the last 10 years," he says. "It's just not going to happen. Doesn't happen. That's not a knock on them."