What Social Science Can Learn from the LaCour Scandal

A veritable firestorm hit political science this past week, with revelations that a promising young star, Michael J. LaCour, a doctoral student at the University of California at Los Angeles, apparently had falsified data. The data served as the basis for a recently published paper in Science. Contrary to other studies, LaCour’s paper, written with Donald P. Green of Columbia University and using a novel field experiment, suggested that conversations with gay canvassers can change voters’ opinions on gay marriage.

Field experiments are an exciting turn in social science. They combine the value of rigorous experimentation with realistic settings that make the results more believable. The experiment by LaCour and Green overturned conventional wisdom and provided evidence for a result that was pleasing to many activists and people on the political left. But, as we now know, it was based on a shaky foundation.

A couple of graduate students and an assistant professor working in the same area attempted to replicate LaCour and Green’s findings as the starting point for a study of their own. They couldn’t. After trying to reconcile the differences with LaCour, they wrote up an analysis of their new findings. Green immediately asked Science to retract the original paper, and it subsequently did. Is this an example of how broken science is?

Replication, which means repeating a study to see whether it yields similar results, can come in multiple forms. Replications often involve a slight change to the original study, such as new subjects, a different setting, or a different time period. The technique is the bedrock of science.

In our “Intro to Research Methods” courses, and in a replication workshop that we teach, we ask students to identify a study of interest to them and simply replicate its findings. In short: get the data, repeat the statistical analysis, and compare the results with what was published. After years of doing this, it no longer surprises us how often students are unable to obtain the data, how often they cannot recover the estimates from a published paper, and how rarely they receive responses from the scholars they contact. Many students provide a considerable service to the academic community by replicating such work, and a few of them go on to publish the results.
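To make the exercise concrete, here is a minimal sketch, in Python, of what such a classroom replication check might look like. The file name, model specification, and “published” estimates are hypothetical placeholders, not values from any real study.

```python
# A minimal classroom-style replication check: re-estimate a published
# model and compare the coefficients with the values reported in the paper.
# All file names, variable names, and numbers here are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Step 1: get the data (e.g., a replication file shared by the authors).
data = pd.read_csv("replication_data.csv")

# Step 2: repeat the statistical analysis described in the paper.
model = smf.ols("support ~ treatment + age + ideology", data=data).fit()

# Step 3: compare the findings with what was published.
published = {"treatment": 0.80, "age": -0.01, "ideology": -0.35}
for term, reported in published.items():
    estimate = model.params[term]
    status = "matches" if abs(estimate - reported) < 0.01 else "does NOT match"
    print(f"{term}: re-estimated {estimate:.2f}, published {reported:.2f} ({status})")
```

Even a script this simple surfaces the failure points our students run into: data files that are unavailable, analyses that are underspecified in the paper, and estimates that cannot be recovered.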

So why don’t more researchers replicate? Because replication isn’t sexy. Our professional incentives are to come up with novel ideas and data, not to confirm other people’s prior work. Replication is the yeoman’s work of social science. It is time-consuming, it is frustrating, and it earns no accolades for your CV. Worse, critics of student replication projects argue that the students are amateurs, or that they may jeopardize their reputations by starting their scientific careers as “error hunters.” The LaCour scandal shows that those critics could not be more wrong. Scientific knowledge is built on the edifice of prior work. Before we get to a stage where we need more new ideas, we need a better sense of what the existing data actually support.

Others have argued that the LaCour incident exposes the weakness of the social sciences. Some have treated the episode as a steamy academic soap opera, dubbing it LaCourGate, with daily revelations about fake awards and fake funding. While Americans love to shame, this episode is not about LaCour or Green, or about what was or was not the cause of the errors in the study. It is about openness, transparency, and replication.

The important lesson, however, is that replication works. It is a verification tool that improves science and our knowledge base. The takeaway is that we need to provide more incentives for such work. We need a new, highly respected journal devoted entirely to replication. We need more funding sources for replication studies. And every journal in the social sciences should adopt policies requiring that data, tools, and procedures be made fully open upon publication.

The data given to Science provided the evidence needed to identify the errors in LaCour and Green’s paper. What prevents this from happening more often is the lack of incentive for others to replicate. Students can be a crucial force, and colleges should start embedding replication in their courses more rigorously and systematically. Instructors should also encourage students to publish their work; at present, most replications done in class are an untapped resource.

In fact, the scandal and the uproar surrounding it did supporters of replication and data transparency a big favor. The field of political science was already moving toward greater reproducibility. Top journals, though not all journals in the field, have started to adopt strict replication policies requiring authors to provide their materials upon publication. And the American Political Science Association has released new guidelines on data access and research transparency.

Those trends toward higher-quality research were not prompted by a crisis in political science itself. Before this episode, there had been hardly any retractions, accusations of fraud, plagiarism, or large-scale irreproducibility scandals in the discipline. But scandals in psychology, economics, and cancer research sparked a discussion in our field. In effect, political science has been feeding off crises in other disciplines without bleeding itself. We’ve often wondered: If there were more scandals in political science, would the shift toward higher research quality be more rapid, and more profound? Enter LaCour.

Author Bios: Joseph K. Young is an associate professor in the School of Public Affairs and the School of International Service at American University, and Nicole Janz is a political scientist and research-methods associate at the University of Cambridge.
