Facebook Launched a Secret Experiment with Cornell to Manipulate the Emotions of 689,003 Users

It’s become farcical. Whoever we ask, nobody seems to know anything. Did the study have ethical approval? First the answer was yes. Then it was no. Then it was maybe. Then it was no again. Was it funded by the US army? First the university said yes. Then it said no, without explanation. Why did the scientific journal not state whether the study was ethically approved, as required by its own policy? Sorry, editor Susan Fiske told me, I’m too busy to answer that question.

I’m referring of course to the study published last week by the Proceedings of the National Academy of Sciences, in which researchers from Facebook and Cornell University teamed up to study “emotional contagion”. Over a one-week period in 2012, they changed the content of news feeds for a random sample of Facebook users: for one group they removed content that contained positive words, and for another they removed content that contained negative words. They then measured whether subtly biasing the emotional content in this way changed the emotional content of the users’ own status updates. Sure enough it did: making feeds more negative led to more negative behavior, and vice versa.
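
To make the logic of that analysis concrete, here is a minimal sketch, purely illustrative and with every number invented rather than taken from the paper, of how one would compare emotion-word rates across feed conditions:

    # Illustrative only: mimics the shape of the contagion analysis by
    # comparing rates of "positive" words across two feed conditions.
    # Every number below is an assumption for demonstration, not from the paper.
    import random
    import statistics

    random.seed(42)

    def positive_word_rate(base, shift):
        # One user's weekly fraction of positive words, with individual noise.
        return max(0.0, base + shift + random.gauss(0, 0.01))

    # Hypothetical conditions: unaltered feed vs. positivity-reduced feed.
    control = [positive_word_rate(0.05, 0.0) for _ in range(100_000)]
    treated = [positive_word_rate(0.05, -0.0005) for _ in range(100_000)]

    diff = statistics.mean(control) - statistics.mean(treated)
    d = diff / statistics.pstdev(control + treated)
    print(f"difference in positive-word rate: {diff:.5f}")
    print(f"Cohen's d: {d:.3f}")  # tiny effect; the huge n is what makes it detectable

The point the sketch makes is the one that matters below: with samples in the hundreds of thousands, a shift far too small to notice in any individual still registers as statistically significant.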

Scientifically, the study is remarkable in some ways and unremarkable in others. The sample size of 689,003 is truly huge – possibly the largest in the history of psychology. And the results are interesting insofar as they show that very small changes in the emotional state of our environment can have knock-on effects for how we act (and presumably how we feel) in social networks. On the other hand, the effects in the study are minuscule, among the smallest statistically significant results ever published. As psychologist Tal Yarkoni has pointed out, were the effects expressed in terms of average human height, they would amount to a shift of just one twentieth of an inch across the entire male population of the United States.
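
To see how that translation works (a back-of-envelope reconstruction; the effect size of d ≈ 0.02 and the height standard deviation of roughly 2.5 inches are illustrative assumptions, not figures from the paper): a standardized effect size rescales the standard deviation of whatever outcome it is applied to, so

\[
\Delta_{\text{height}} = d \times \sigma_{\text{height}} \approx 0.02 \times 2.5\,\text{in} = 0.05\,\text{in} = \tfrac{1}{20}\,\text{in}.
\]

Applied to height, in other words, an effect of this size would be invisible to the naked eye.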

Yet what has driven this study into the spotlight isn’t the scientific implications – it is the fact that the researchers failed to obtain consent from the hundreds of thousands of Facebook users who were subjected to the intervention. Informed consent is a core principle of human research ethics, established in the aftermath of the second world war. In important cases where the question is deemed vital and consent isn’t possible (or would prevent a fair test), it can be legally bypassed. But this is rare, and it is doubtful whether the Facebook study would qualify for such an exemption.

In this case, the researchers took advantage of the fine print in Facebook’s data use policy to conduct a scientific experiment without informed consent. Even though the academic researchers collaborated with Facebook in designing the study, it appears that they only obtained ethical approval after the data collection was finished. Then, since the data was already collected, the ethics committee in question seems to have awarded it an “approval lite”. So it appears that the researchers rather cleverly exploited an ethical loophole.

I use terms like “appears” and “seems” because, bizarrely, these simple facts are not entirely clear and straight answers are hard to come by. Earlier this week I contacted Princeton University professor Susan Fiske, who edited the study at the Proceedings of the National Academy of Sciences. According to the journal’s own policy, ethical approval (known in the US as Institutional Review Board, or IRB, approval) is a condition of publication, and the research must state which IRB gave the go-ahead. Fiske was unable to tell me why the article failed to mention ethical approval, citing demands on her time: “I can’t because of sheer volume [of questions] and another deadline I have.” In a 115-word email that would have taken longer to write than the answer I asked for, Fiske added, “I judged that PNAS should not second-guess the relevant IRB.”

Even putting that judgment aside, to what IRB is Fiske referring? None is mentioned in the published article, and until last night no IRB had stepped forward to claim responsibility for the study. I also contacted the Cornell University IRB and the authors of the paper. On Monday evening, the university responded with a brief statement: because the academic researchers devolved responsibility for data collection and analysis to Facebook, they were “not directly engaged in human research” and “no review by the Cornell Human Research Protection Program was required”. This is in spite of the fact that the Cornell academics helped design the project. Meanwhile, the academics themselves have said nothing, relying on Facebook researcher Adam Kramer to issue vague comments that provide few answers. It seems they have gone into lockdown.

This situation is, quite frankly, ridiculous. In what version of 2014 is it acceptable for journals, universities and scientists to offer weasel words and obfuscation in response to simple questions about research ethics? How is it acceptable for an ethics committee to decide that the same authors who assisted Facebook in designing an interventional study to change the emotional state of more than 600,000 people did, somehow, “not directly engage in human research”?

Whether the study was ethically questionable is itself debatable, and there are no black-and-white answers. Those defending the study have pointed out, quite rightly, that Facebook and many other online companies routinely perform such studies for their own benefit or as part of social experiments. They don’t need our consent to do such research and nobody seemed to care before, so why such an uproar now when the findings are published in a scientific journal? Facebook may well have done the exact same experiment anyway, and by collaborating with scientists, aren’t they doing it in a way that is publicly transparent and beneficial? Critics warn that too strong a backlash might dissuade such companies from joining forces with science in the future.

These are important points, but they overlook the fact that, for better or worse, publicly funded science is held to a higher ethical standard than comparable research in the private sector. Once academic scientists get involved, the bar is raised, never lowered. In fact, if this case has highlighted anything, it is just how unregulated marketing research is. The Facebook study hints at a dystopian future in which academic researchers escape ethical restrictions by teaming up with private companies to test increasingly dangerous or harmful interventions.

Once the brouhaha dies down, the researchers in this case may well be left with one nagging question. How did a major online company, a prestigious scientific journal, and an Ivy League university all fail to see this coming? Were it not so amateurish, one might be tempted to think this is all a ruse – that the real experiment is watching how the world reacts to revelations that Facebook conducts covert experiments on its users.

Source: The Guardian