Lessons learned from the Facebook study


By now, anyone who is remotely interested knows that the Facebook data-science team, in collaboration with some researchers at Cornell University, recently published a paper reporting “experimental evidence of massive-scale emotional contagion through social networks.” If you’ve heard about this study, you probably also know that many people are upset about it. Even the journal that published it, the Proceedings of the National Academy of Sciences, has issued an “editorial expression of concern” about potential violations of ethical standards.

Much of the concern has focused on the issue of informed consent, and whether or not the Facebook terms-of-service agreement constitutes such a thing. That focus is understandable, but it has distracted attention from the real problem: the failure of ethical-review procedures to keep up with technology.

Consider what the National Science Foundation has to say about informed consent in human-subject research:

The fundamental principle of human subjects protection is that people should not (in most cases) be involved in research without their informed consent, and that subjects should not incur increased risk of harm from their research involvement, beyond the normal risks inherent in everyday life.

So yes, informed consent is always preferable, and in many cases mandatory, but not in all cases. What are those cases? Once again, according to the NSF:

An IRB may … waive the requirements to obtain informed consent provided the IRB finds and documents that: (1) The research involves no more than minimal risk to the subjects; (2) The waiver or alteration will not adversely affect the rights and welfare of the subjects; (3) The research could not practicably be carried out without the waiver or alteration; and (4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation.

Reasonable people can differ on what constitutes “minimal” risk and whether or not the Facebook study would have passed such a test. But it’s worth reading about what the researchers actually did, rather than relying on sensationalized media summaries. In fact, they did not “manipulate emotions” in any direct sense at all. Rather, they scored users’ posts for emotional content on the basis of their use of words like “great” (positive) or “awful” (negative), and then randomly prevented some of those posts from showing up on friends’ news feeds. No actual content was altered, and users could always see all posts by visiting their friends’ pages. Given that Facebook already makes innumerable decisions every day about what content is posted to news feeds (only a fraction of what your friends post shows up), the manipulation applied by the researchers was relatively tiny—well within “the normal risks inherent in everyday life.” (The effects, also measured by word counts, were even smaller—roughly 1 percent—but that’s another matter.)
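To make the mechanics concrete, here is a minimal sketch in Python of that kind of manipulation: label posts as positive or negative by counting words from fixed word lists, then withhold a random fraction of posts with one label from a simulated feed. The word lists, function names, and omission rate are illustrative placeholders of my own, not the study's actual pipeline, which by the paper's account relied on the LIWC word lists and Facebook's existing feed-ranking system.

```python
# Illustrative sketch only (not the study's actual code): score posts by
# counting words from small positive/negative word lists, then randomly
# withhold some posts with a given emotional label from a simulated feed.
import random

POSITIVE = {"great", "happy", "love"}   # placeholder word list
NEGATIVE = {"awful", "sad", "hate"}     # placeholder word list

def emotional_label(post):
    """Label a post positive, negative, or neutral by simple word counts."""
    words = post.lower().split()
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def filtered_feed(posts, withhold_label="positive", omission_rate=0.1, rng=random):
    """Return the feed with a random fraction of one emotional label withheld."""
    feed = []
    for post in posts:
        if emotional_label(post) == withhold_label and rng.random() < omission_rate:
            continue  # withheld from the feed; still visible on the friend's own page
        feed.append(post)
    return feed

if __name__ == "__main__":
    sample = ["What a great day!", "This traffic is awful.", "Meeting at noon."]
    print(filtered_feed(sample, withhold_label="positive", omission_rate=1.0))
```

Note that in this toy version nothing is rewritten or fabricated; a withheld post is simply not delivered to that feed, which is the sense in which the manipulation amounted to an adjustment of an already-filtered stream.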

There’s also a strong argument to be made that the research could not practicably be carried out without the waiver. Thoughtful commenters have suggested that Facebook could have recruited a separate “opt in” research pool of users who were required to read and sign a special terms-of-use agreement. Although useful for some study designs (e.g., the kind of “virtual lab” experiments that my colleagues and I run), this approach has at least two problems for the kind of study Facebook conducted. First, any such pool would be highly self-selected, potentially biasing the results; and second, unless a separate pool was recruited with its own customized agreement for every study, users might still object that they didn’t understand the implications of what they were agreeing to. Debriefing 700,000 subjects, meanwhile, might well have caused more confusion and consternation than it would have averted.

For all these reasons, the study—had it been subject to institutional review—very likely would have been approved, without the requirement of informed consent. But was it subject to any such review at all? And if not, should it have been?

This is where things get murky. Technically, the Cornell researchers examined only “secondary data,” meaning anonymized data from a study that was conducted by another party (Facebook), and hence were not required to seek IRB approval. Also technically, Facebook is not required to have its research approved by an IRB (which is required only for federally funded institutions). So technically, no individual did anything wrong. Nevertheless, the absence of clear procedures for ethical review allowed everyone involved to assume either that no action was required or that, if it was, someone else had taken care of it.

And that is the real ethical issue at stake here. It is not that the study itself was unethical, but rather that no one involved was required to address the ethical implications before embarking on it.

These implications are not always clear-cut. If, say, the researchers had proposed seeding Facebook users’ news feeds with fictitious stories rather than simply adjusting the existing filter, that design might not pass an ethical test—it would really depend on the details. Nor can researchers be relied upon to evaluate the ethical implications of their own work. Indeed, the history of human-subject regulation is littered with socially valuable psychology experiments—Milgram’s shock treatments, the Stanford Prison Experiment—that the researchers themselves felt were unproblematic but that today we regard as unethical.

Partly in response to those early missteps, today we have pretty good procedures for reviewing and approving psychology experiments done in university labs. We also have a pretty clear idea of how to handle survey research or ethnographic studies. But research done on Facebook doesn’t fit neatly into any of those categories. It’s kind of like a lab, in the control that it affords researchers, but it’s also kind of like a survey, in the remote, hands-off relationship between researcher and subject. It’s even a bit like an ethnographic study, in that it allows researchers to observe interactions among subjects in their own environment.

The benefits are that it’s far more naturalistic than a traditional lab, experiments can be run on much larger scales, and much richer data can be collected. Potentially, Facebook and other web platforms—including Twitter and Amazon, but also email, search, and media services—can shed new light on many important questions of social science, such as the nature of human cooperation and conflict, the dynamics of public-opinion formation, and the relationship between organizational structure and performance.

But, as in other areas of life, technology is opening up exciting capabilities faster than our institutions for regulating those activities can adapt. I submitted my first IRB proposal for web-based social science 14 years ago, and since then I have had experience with review procedures both at Columbia University and in corporate research labs (first at Yahoo! Research, where we implemented an IRB-like process, and now at Microsoft). Although progress has been made over that time, many university IRBs still have little experience with the mechanics of web-based research. Meanwhile, researchers in private companies, who do understand the mechanics, typically don’t receive formal training in human-subject research. Finally, it doesn’t help that most web platforms blur the boundary between a research site and a commercial product, domains that are currently regulated by different federal agencies.

What we need is an ethics-review process for human-subject research designed explicitly for web-based research, in a way that works across the regulatory and institutional boundaries separating universities and companies. For the past two years, my colleagues at Microsoft Research have been designing precisely such a system, which is to be rolled out shortly.

It is still a work in progress, and many details are liable to change as we learn what works and what doesn’t, but the core principle is one of peer review. Although we have an ethics board composed of experienced researchers (including me), the idea is not to have every proposal submitted to the board for review, a recipe for bottlenecks and frustration. Rather, it is to require researchers to engage in structured, critical discussions with educated peers, where everyone involved will be accountable for the outcome and hence will have strong incentives to take the review seriously. Unproblematic designs will be approved via an expedited process, while red flags will provoke a full review: a two-tier system modeled on existing IRBs.
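For illustration only, the triage step can be reduced to something like the toy routine below: a proposal with no red flags takes the expedited path, while any red flag sends it to the full board. The flag names and function are hypothetical placeholders of mine; in practice the decision comes out of structured peer discussion, not an automated checklist.

```python
# Toy illustration of the two-tier triage described above. The flag names are
# hypothetical; the real decision emerges from structured peer discussion.
RED_FLAGS = {
    "deception",              # e.g., seeding feeds with fictitious content
    "vulnerable_population",  # e.g., minors
    "more_than_minimal_risk",
    "no_debrief_plan",
}

def triage(proposal_flags):
    """Route a proposal: expedited if no red flags, full board review otherwise."""
    return "full_review" if proposal_flags & RED_FLAGS else "expedited"

print(triage({"more_than_minimal_risk"}))  # -> full_review
print(triage(set()))                       # -> expedited
```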

Aside from its inherent scalability, the peer-review approach also has the benefit of involving the entire research community in discussions about ethics. Rather than placing the burden of review on a small committee of experts, everyone will have to undergo some basic training and consider the ethical implications of their research. The goal is to create an educated community that, in subjecting all cases to diverse viewpoints, lets fewer errors slip through. And because the process is designed to run continuously, insights arising from novel cases will diffuse quickly.

Lest this picture sound utopian, let me add that not everyone likes the idea of ethical peer review, or even the idea of institutional review of any kind. Even among those in favor, different people have different ideas of what is acceptable and what isn’t, and they all have strong opinions. I expect that I’ll be having many arguments with my colleagues as we roll this out, and none of us will entirely get our way. In fact, that’s sort of the point.

I’m hopeful that our peer-based approach to ethical review will become a model for industry and academic research. No doubt other approaches will be proposed—indeed, some already have. Regardless of which model wins out, if we have learned one lesson from this latest controversy, it should be that all human-subject research, whether conducted in companies or at universities, whether online or offline, whether “massive scale” or not, should be subject to ethical review. The public trust in social science is at stake.

Author Bio: Duncan J. Watts, a principal researcher at Microsoft Research, has been doing web-based social science research for 14 years, at Columbia University, Yahoo! Research, and Microsoft.
