The broken escalator; or, can you ever really retract a paper?



It’s a clear, curious, irresistible finding. In a study published in March of last year in the Journal of Experimental Social Psychology, researchers tracked donations to the Salvation Army from mall shoppers who had just taken the up escalator versus those who had just stepped off the down. They found that more than twice as many of the recently elevated gave money (16 percent compared with 7 percent).

Articles about the study appeared in Scientific American, New Scientist, and multiple other outlets, each with the obligatory escalator stock photo like the one above. Even though the finding is pretty recent, it has shown up in several books, including Get Lucky: How to Put Planned Serendipity to Work for You and Your Business and Brainfluence: 100 Ways to Persuade and Convince Consumers With Neuromarketing.

The lead author of the study is Lawrence Sanna, who resigned in May from the University of Michigan at Ann Arbor after another social psychologist, Uri Simonsohn, raised questions about the escalator study and other papers by Sanna. According to Nature, Sanna has requested that the escalator paper and two others be retracted.

But retracting the paper doesn’t mean it will go away. It will probably continue to pop up in Google Scholar searches, just like the papers of the disgraced psychologist Diederik Stapel. Those news articles will still get stumbled on, forwarded, tweeted. Unsuspecting readers will pick up those books and read aloud the passage about the incredible escalator trick to their spouses. There may be Salvation Army bell ringers standing hopefully at the tops of escalators next Christmas season, counting on the magic of science.

Maybe that isn’t a big deal in this case. It was a harmlessly interesting study. But it goes to show how once a study leaves journal-land for the wider world, there’s really no erasing it. A nifty finding takes on a life of its own, even if it’s flawed or fraudulent.

Simonsohn, who is at the University of Pennsylvania, exposed Sanna and another researcher, Dirk Smeesters, of Erasmus University Rotterdam. Smeesters also resigned after statistical irregularities in his papers were identified. I asked Simonsohn a few questions by e-mail about his investigations:

You said, regarding Sanna’s elevation study, that every result was “super-significant” and that the standard deviations seemed too similar to be believable. You seemed to be saying that there were obvious red flags. Is this something that reviewers should have picked up on?

    I wouldn’t say they were obvious red flags. They just happened to catch my eye. I think there was a great deal of chance in me noticing those things. I was reading a set of papers with similarly far-fetched predictions but much weaker evidence, so these stood out. I am not surprised reviewers didn’t pick up on Sanna’s excessively similar standard deviations; that is a very subtle pattern.

    I am more surprised about Dirk Smeesters’s papers, which contain several graphs that defy what real data look like in a way that is evident to the naked eye; but I know that hindsight is 20/20.

One of the commonalities in these cases is that one researcher controlled the data. You contacted three of Sanna’s grad students and they said they weren’t involved in collecting data. Stapel also kept the data to himself. Is one of the lessons here that co-authors need to demand access to the data?

    Dirk Smeesters did the same.

    To answer this question, however, we need to know how many honest researchers keep all the data for themselves. It is inefficient for every co-author to redo every analysis, so some level of compartmentalization is to be expected.

    What does strike me as odd is tenured professors doing the everyday routine data-collection work associated with graduate students and junior researchers more generally. Smeesters, Stapel, and Sanna did this, and that I do think is uncommon and suspicious. Part of a junior person’s training is doing this everyday work; why would a senior person do an aversive task that deprives a junior colleague of useful experience? It would be like a senior newspaper editor telling a junior writer that she, the editor, will do his spell-checking for him.

    Dirk Smeesters was coding multiple-choice answers by hand despite having a junior co-author and access to research assistants who could do such tasks. Why?

Simonsohn said that the response he has received to his investigative work, even from colleagues he thought might be critical, has been entirely supportive.

The English professor Margaret Soltan, on her blog University Diaries, has repurposed a term in Simonsohn’s honor to refer to a researcher who gets caught cooking data: Simonized. Let’s hope that catches on.