On the Facebook Emotion study

July 7, 2014

This is a response to a response. The article I’m responding to is here.

In short, Facebook researchers recently published a study in which they manipulated the moods of users in an experimental setting. It is not clear to what extent the users were able to consent to the study. Much ink has been spilled over it, and the author of the piece I am responding to worries that the extreme reaction to the Facebook study will result in less openness, not more, from the industrial research community, with teams preferring to keep results internal rather than publishing them. The poster also points out other experiments, e.g. on Wikipedia, that did not provoke nearly as much outrage.

I sympathize with the poster’s point of view, but I must respectfully disagree. I will respond in reverse order. First, as to the other examples: I think user consent is important in ALL situations. A questionable experiment on Wikipedia should get as much criticism as a questionable experiment on Facebook, so the existence of one most certainly does not excuse the existence of the other.

Second, as to the extreme reaction: I believe it was well warranted. Most users of Facebook, Twitter, and similarly large social media sites do not understand just how much power is concentrated in these sites. As one of my colleagues puts it, imagine if the US Federal Government asked a sizable fraction of American adults to report their age, gender, relationship status, location, likes, and so on to a centralized repository, and to keep that information up to date. There would be widespread outrage; yet that is precisely the collection Facebook has access to. The poster mentions Milgram’s experiments, which, due to logistical limitations, had sample sizes in the low hundreds. A social media site has access to hundreds of millions of people and, as Facebook’s study shows, can manipulate their emotions. The implications are truly serious.

In light of these implications, it is natural for companies like Facebook to retreat into the safety of internal studies and never publish any results. It is the natural reaction of a guilty party to deny that anything bad happened, downplay its seriousness, and move on. However, this sort of denial does nobody any good: users continue to be experimented on without their knowledge; the social media service loses credibility and suffers attrition; the academic community’s reputation is tainted, and social science loses research funding it desperately needs. The way to break out of this vicious cycle is openness and, yes, making mistakes, apologizing for them, and suffering the consequences of temporary suspicion. Facebook would be much better off continuing to publish its research, conforming to the standards of behavioral experimentation, giving users the ability to consent, or not, to studies that manipulate their social media experience, and suffering through the growing pains of becoming a trusted social media research institution. The alternative is, as the poster suggests, increased secrecy, lack of oversight, and, inevitably, a scandal on the scale of AOL’s leaked data.

Industrial researchers can do better. We should be willing to expose our world-class science to rigorous public examination and respond to criticism in a mature and responsible way. That is our obligation to the billions of human beings who use our services.