In June I was at a small conference of social scientists, meeting in the Utah mountains. The setting was beautiful, the meals memorable. But all we talked about was l’affaire LaCour.
The actual events have been described elsewhere, including here. In short: An influential article published in Science by Michael J. LaCour, an up-and-coming young scholar, and his co-author, Donald P. Green, a senior scholar at Columbia University, was revealed to have been based on fabricated data.
Political scientists are already defensive about our status as a “science,” of course. National Science Foundation funding has several times been held up for questioning (at least) by Congress, and our techniques and the status of our data are often debated in departments and graduate courses. What does this recent controversy mean for my discipline? I want to offer a two-part answer: personal pessimism and general optimism.
Personal pessimism: Je suis Don Green. A student at the Utah conference asked why the co-author whom LaCour used/collaborated with (depending on whom you believe) to secure the Science publication was Green, the Columbia professor, and “not you, Munger?” The participants laughed, but my answer shocked them: The reason, the only reason, is that Don was asked, and I wasn’t.
As the senior author, Green had obligations to be sure that the data had been collected in the way LaCour had claimed. He failed to carry out that duty, and so Green is at fault. On the other hand, it appears that Green did what I would have done (possibly more than I would have done) to satisfy those obligations. LaCour had detailed preliminary descriptive statistics, lists of graphs and tables, and what appeared to be careful documentation.
As Green said in an interview with the public-radio program This American Life after the story broke: “This is the thing I want to convey somehow. There was an incredible mountain of fabrications with the most baroque and ornate ornamentation. There were stories, there were anecdotes, my Dropbox is filled with graphs and charts. You’d think no one would do this except to explore a very real data set.”
Of course, it’s obvious why LaCour contacted Professor Green and not Professor Munger: In any con (if that’s what this really was), it’s important to go big, and use the person’s own character against him. Green is careful; having his name on the paper would sell the con. When Jon Krosnick of Stanford University was contacted by This American Life when the paper was originally released, he was suspicious. But when Krosnick heard that Green was a co-author, he said, “I trust [Green] completely, so I’m no longer doubtful.”
So Green was recruited because his reputation would “sell” the result to others. That seems too clever, even evil, to be true, which is why it worked. When asked by Jesse Singal — the Science of Us reporter who led much of the media inquiry — whether he had “failed” in his obligations to check on the sources and methods, Green said that’s “entirely fair. I am deeply embarrassed that I did not suspect and discover the fabrication of the survey data and grateful to the team of researchers who brought it to my attention.”
To repeat: Green screwed up, badly. But most other senior political scientists would likewise have been taken in. Co-authorship is common in our discipline, and senior authors routinely lend their names to projects fairly late in the game.
Is this wrong? Should we rethink the norm? I don’t think so. But since I’m convinced that I would have been taken in as well, there does seem to be a kind of system failure at work, rather than a flaw in Green’s character or judgment. We need to think about this some more.
General optimism: Truthiness. One question that I keep hearing is, “How did LaCour think he would ‘get away’ with it?” He didn’t get away with it. The “irregularities” were discovered and announced, the conclusions called into question, and the article itself retracted, in a remarkably short time.
Political science polices itself through esteem and shame. We credit scholars who correct mistakes or malpractice in research. And we heap scorn on violators. I heard it in graduate school from my advisers, and I warn my own students. Whether we need to be more vigilant on this front is something we’ll be talking about in the profession for some time to come.
The subfields of political science are self-correcting interpretive communities. Truth standards differ in different segments of this world, and many arguments that are (at best) speculative do get trotted out. But then they are examined. Most are found wanting and discarded. Some are rejected with prejudice, attracting esteem for those who find errors and shame for those who erred. We hope that guilt will make scholars self-limit, but even sociopathic liars incapable of guilt (and I am not saying that describes LaCour) can be caught and found out in this system.
Why is this important? Because of “truthiness” — the term Stephen Colbert coined for claims that state conclusions or facts one feels or believes to be true, rather than concepts or facts that are known to be true.
Many people really (really) want to believe that: (a) if we just sit down and talk seriously, we can agree, and (b) people who oppose same-sex marriage are wrong, perhaps even homophobic. The LaCour study appealed to our truthiness, because it validated both the method and the result that many people “feel” is good.
And I must admit that lots of people in my tribe celebrated the result and tried to dissuade David Broockman, one of the whistle-blowing graduate students who took a lead role in uncovering the scam, from going public. But avenues for public exposure are now cheap and nearly instantaneous. Instead of having to wait for peer review or a response from a journal, young political scientists smashed the whole thing into pieces with nothing more than a web posting. Being able to jump over the obstacles that just a few years ago would have protected the fraud made all the difference.
LaCour did offer a defense, of course. In it, he made what seemed to some a strange charge: The critique had not been “peer reviewed,” and therefore should not be taken seriously. This response, though perhaps odd to an outsider, goes to the heart of the nature of self-correcting interpretive communities. LaCour was wrong: Publications require peer review; criticism is free and open to everyone.
Back in 2008, I wrote about blogs as interpretive communities and “truthiness.” Many people see the blogosphere as a wild, aggressively information-free zone. And sometimes it is. But that kind of connected social media is lightning-fast at evaluating truth claims. We are better than we have ever been at ferreting out mistakes and fraud, and rewarding (through electronic esteem) the truth warriors who topple falsehoods.
That grand project is carried out in the most mundane, even noisome, settings. For political science, it’s often a site called Political Science Rumors. It’s juvenile, full of gossip and scabrous trolling, and comments such as “No one ever replicated Plato!” But at the height of the LaCour revelations I was on that site, refreshing my screen two or three times an hour. There was more, and better, information, minute-by-minute, from PSR than anywhere else. And a diverse crowd of smart people was having a real-time public debate (though admittedly a debate full of expletives and genital references) about an important topic.
Truthiness is a feeling about what is right, or should be right, and acting as if that feeling were actually truth. For all its failings and repulsiveness, the system in place for vetting and “taking down” truthiness worked — eventually. Three young scholars worked to reveal a fraud, even though they approved of the conclusions the fraud would have supported. And thousands of people had a conversation about political science. The system worked.
Author Bio: Michael C. Munger is a professor of political science and head of the philosophy, politics, and economics program at Duke University. He was chair of political science at Duke from 2000 to 2010, and served two terms on the National Science Foundation’s Panel for Political Science.