“Dear &HouseholderName, Congratulations!
You have already qualified for entry in our &CashAmount prize draw.
Just return the enclosed entry slips, and in just a few days you could be
driving down &StreetName in a new &CarBrand!”
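The personalisation, of course, is entirely mechanical: a mail merge fills in the blanks from a marketing database, one record per household. By way of a minimal sketch (the merge function and the sample record below are my own illustrative assumptions, not any real mailing system):

```python
import re

# The form letter above, with its &FieldName placeholders left in place.
TEMPLATE = (
    "Dear &HouseholderName, Congratulations!\n"
    "You have already qualified for entry in our &CashAmount prize draw.\n"
    "Just return the enclosed entry slips, and in just a few days you could be\n"
    "driving down &StreetName in a new &CarBrand!"
)

def mail_merge(template: str, record: dict) -> str:
    """Substitute each &FieldName with the matching value from one database record."""
    # If a field is missing from the record, leave the placeholder untouched.
    return re.sub(r"&(\w+)", lambda m: record.get(m.group(1), m.group(0)), template)

# A purely hypothetical mailing-list record.
record = {
    "HouseholderName": "A. N. Other",
    "CashAmount": "£250,000",
    "StreetName": "Acacia Avenue",
    "CarBrand": "Sunbeam",
}

print(mail_merge(TEMPLATE, record))
```

Nobody at the other end knows or cares who &HouseholderName actually is; the letter only has to look as though they do.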
Exciting stuff, isn’t it? At least, the first time. After a while, though, letters like this don’t even raise the pulse rate. They either go straight in the bin or, better still, the Mail Preference Service[1] prevents them from arriving in the first place. In other words, there’s a regulatory system in place that gives me an effective choice to exercise: if I withhold my consent to this kind of unsolicited, rather clumsy manipulation, I can tell whether or not that choice is being respected. There is a framework of regulation, choice, consent and transparency.
What happens if some or all of these elements are missing? Well, two-thirds of a million people now have a case study to reflect on, in the form of Facebook’s “emotional contagion” study. Or rather, two-thirds of a million people were Facebook’s “emotional contagion” study, and we all have an opportunity to reflect on that and its implications.
In their defence, Facebook say that the study “was consistent with [the] data use policy” to which all Facebook users agree by creating an account. They also note that the study was presented to, and approved by, an Institutional Review Board.
Prof. Daniel Solove (George Washington University)[2] and Prof. James Grimmelmann (University of Maryland)[3] both offer expert analysis of Facebook’s position, and I don’t propose to try to improve on that here. Nor would I aim to better Laurie Penny’s insights into the broader social implications[4]. Instead, I would like to relate this episode to some work the Internet Society has recently done on ethical data-handling. Solove and Grimmelmann both note something that is most pithily captured by researcher and academic Kate Crawford:
“Let’s call the Facebook experiment what it is: a symptom of a much wider failure to think about ethics, power and consent […]”[5]
In our paper on “Ethical Issues in Online Trust”, presented at the CREDS workshop[6] earlier this year, we identified a number of ethical data-handling principles that bear directly on cases like the “emotional contagion” experiment. Three of those principles came vividly to mind as I read Facebook’s research report.
• No surprises
• Legitimacy, as opposed to legality
• Catering for multiple stakeholder perspectives
For all its simplicity, the principle of “no surprises” is an unexpectedly reliable guide when it comes to deciding what to do with privacy-related data. Here are some of the practices we identified which can result in unwelcome surprises for data subjects:
“Perhaps they signed up to a usage statement that was general enough to cover a multitude of sins; perhaps they didn’t sign up, but were simply opted in by default; or perhaps there was an implicit consent step of which they were not made sufficiently aware.”
I think you will detect echoes of all of those in the current case.
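To make those three patterns concrete, here is a minimal sketch of a hypothetical consent model (the names and the “no surprises” test below are my own assumptions, drawn neither from our paper’s text nor from any real platform’s code):

```python
from enum import Enum

class Consent(Enum):
    """Hypothetical consent states matching the three practices quoted above."""
    BLANKET_TERMS = 1    # a usage statement general enough to cover a multitude of sins
    DEFAULT_OPT_IN = 2   # never signed up at all; simply opted in by default
    IMPLICIT = 3         # a consent step the user was not made sufficiently aware of
    INFORMED_OPT_IN = 4  # explicit, specific, informed agreement to this particular use

def permits_use(consent: Consent) -> bool:
    """Apply the 'no surprises' test: only informed opt-in avoids surprising the subject."""
    return consent is Consent.INFORMED_OPT_IN

for state in Consent:
    print(f"{state.name:>16}: use permitted? {permits_use(state)}")
```

The first three states are cheap for a data controller to obtain; only the fourth would have spared the data subjects a surprise.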
Second, there is the closely related idea of legitimacy, as opposed to legality. Here’s what we said about that:
“legitimacy implies fairness; even if something is legal, if it is manifestly unfair, then it is probably also unethical. Intuitively, if a data controller acts unfairly, they have the option of another course of action which would be more fair (but might, of course, benefit them less than the unfair alternative).”
“Legitimacy also implies lack of deceit; if the data subject is kept in the dark about what is done with personal data, it is hard to see how notions of consent and respect can be upheld. A criticism often levelled at social networks and their bedfellows, the targeted advertisers, is that users find their behaviour ‘creepy’. I would argue that this is because users are deliberately under-informed about the collection and re-use of data about them, such that when the re-use becomes apparent, users are unpleasantly surprised by it. It is, of course, in the interests of social networks to lull the user into a feeling of confidential interaction with their peers. Users reveal more about themselves if they don’t feel there is a third party listening in on the conversation.”
And third, we considered the importance of remembering that any online interaction involves multiple stakeholders, each of whom has legitimate interests to be taken into account. In this instance, I find it shocking (but, regrettably, not surprising) that the Facebook study appears to have taken no account of possible cross-border or cross-cultural concerns about what it did. Just as I, somehow, arbitrarily “qualified” for entry into the prize draw I mentioned at the beginning of this post, so Facebook users “qualified” for inclusion in the study simply because they “viewed Facebook in English”[7].
689,003 × 0 is still zero
Once, in a workshop, while discussing mechanisms for privacy preference expression, I said I would be happier for data subjects to have some means of expressing a preference than none. An older, wiser participant made the following wry remark: “That only brings a benefit if someone is prepared to give weight to their preference. If not… well, ten million times zero is still zero”. And that’s the weight Facebook appears to have given to the legitimate interests of its data subjects.
Footnotes
[1] For example, in the U.K. (MPS Online). May not apply in your jurisdiction…
[2] Daniel Solove: Consent, Privacy and Manipulation
[3] James Grimmelmann: As Flies To Wanton Boys
[4] Laurie Penny: When do we start to worry?
[5] Kate Crawford (@katecrawford)
[6] CAIDA conference, CREDS workshop
[7] Facebook study: “Experimental evidence of massive-scale emotional contagion through social networks” (PDF)