ISBN 978-0-300-17313-0
“The fantasy of a truly ‘open’ platform is powerful, resonating with deep, utopian notions of community and democracy—but it is just that, a fantasy. There is no platform that does not impose rules, to some degree. Not to do so would be simply untenable.”
No matter what web platforms you use, the contents presented to you inside that software shell are shaped by a series of policies and decisions that are largely invisible to you as the end user. Focusing on the major English-language platforms, Custodians of the Internet analyzes the myth of the neutral platform, introduces the US regulatory scheme that gave rise to the current state of affairs, and examines the strengths and weaknesses of the different moderation methods currently in use, as well as making some modest proposals for how to adjust the situation going forward. Tarleton Gillespie is both an academic and a tech industry insider, employed by Microsoft Research New England as well as Cornell University. The book is published by Yale University Press.
Custodians of the Internet aims to focus our attention on the hidden work that the social media platforms would rather have remain invisible. Content moderation functions silently behind the scenes, and the end user never knows what they do not see. Moreover, thanks to personalization algorithms, they do not know what they see that others do not, and vice versa. The content is not only moderated, it is also curated, often to maximize engagement and time on screen. Platforms have worked very hard to preserve this illusion of smooth operation, requiring their third-party moderators to sign non-disclosure agreements, and remaining tight-lipped about how they decide what to allow on their sites and how their algorithms function. Most people spend little or no time thinking about what isn’t on the platforms they use, or why they see what they do see, but these invisible boundaries are what shape and distinguish these spaces, and constitute them into usable, monetizable products.
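To make the curation half of that concrete, here is a minimal sketch, in Python, of the kind of engagement-ranked feed the book is gesturing at. It is not drawn from the book, and every field name and weight is invented for illustration; real systems use learned models over far more signals.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    passed_moderation: bool   # posts removed by moderation simply never appear
    recency: float            # newer content tends to score higher
    affinity: float           # how much this user engages with the author
    predicted_clicks: float   # output of a hypothetical engagement model

def rank_feed(candidates: list[Candidate]) -> list[Candidate]:
    """Order one user's feed by predicted engagement (weights invented)."""
    visible = [c for c in candidates if c.passed_moderation]
    return sorted(
        visible,
        key=lambda c: 0.5 * c.predicted_clicks + 0.3 * c.affinity + 0.2 * c.recency,
        reverse=True,
    )
```

Note that nothing in such a ranking tells a user what was removed or down-ranked, which is exactly the invisibility Gillespie is pointing at.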
Gillespie also attempts to encompass the inherent and irreconcilable complexity of the moderation endeavour, and the broad range of unseen work it entails, from policy teams, to crowd workers, to individual users who are deputized to rate or report content. He analyzes three main moderation strategies: editorial review, user flagging, and automatic detection. Each strategy has constraints and weaknesses. For example, editorial review is hugely labour intensive, flagging mechanisms can be abused for social or political purposes, and even potential violations automatically detected by a computer often need to be verified by human eyes. While it is easy for users or the media to criticize a particular moderation decision or policy, Gillespie is determined to highlight the broader context and framework inside which each individual decision is ultimately made and disputed.
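As a rough illustration of how those three strategies interlock (again my sketch, not Gillespie’s; every threshold and name is hypothetical, since platforms keep their real rules opaque), a platform might consult the automatic detector first, fall back on user flags, and reserve human editorial review for the uncertain middle:

```python
from dataclasses import dataclass

# All thresholds and names below are invented for illustration.
AUTO_REMOVE_THRESHOLD = 0.98   # act without a human only when near-certain
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain detections go to human moderators
FLAGS_FOR_REVIEW = 3           # enough user reports also trigger review

@dataclass
class Post:
    text: str
    user_flags: int = 0            # reports from deputized users
    classifier_score: float = 0.0  # automatic detector's violation estimate

def route(post: Post) -> str:
    """Route a post through the three strategies described above."""
    if post.classifier_score >= AUTO_REMOVE_THRESHOLD:
        return "removed automatically"        # automatic detection acting alone
    if (post.classifier_score >= HUMAN_REVIEW_THRESHOLD
            or post.user_flags >= FLAGS_FOR_REVIEW):
        return "queued for editorial review"  # human eyes verify machine or crowd
    return "published"                        # the default; no one sees this decision
```

Even in this toy version the weaknesses are visible: the review queue grows with volume, the flag counter can be gamed, and the classifier is trusted to act alone only at the extremes.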
Gillespie identifies two categories that platforms tend to fall into when it comes to moderation; they position themselves either as “speech machines” or “community keepers,” and build their policies around those stances. However, he does not oversimplify, noting the tension and interplay between the two camps, and how platforms ricochet between these justifications when trying to position themselves in the best possible light, often after an individual decision comes under scrutiny. As Gillespie puts it, “If social media platforms were ever intended to embody the freedom of the web, then constraints of any kind run counter to these ideals, and moderation must be constantly disavowed. Yet if platforms are supposed to offer anything better than the chaos of the open web, then oversight is central to that offer—moderation is the key commodity, and must be advertised in the most appealing possible terms.” It is a contradiction that can never be fully reconciled, and one that is inevitably shaped by the economic imperatives of making a platform profitable as well as functional.
For those unfamiliar with American law, Gillespie includes an introduction to Section 230, the provision of the Communications Decency Act better known as “safe harbor,” which holds intermediaries and conduits free of responsibility for the speech or content of their users, and further stipulates that moderating in good faith does not forfeit that protection. The distinction this regime rests on dates to the telephone era, and Gillespie convincingly argues that social media platforms, which the law could not have foreseen, “violate the century-old distinction deeply embedded in how we think about media and communication,” and further that they constitute “a hybrid that has not been anticipated by information law or public debate.” The book is not primarily focused on solutions, but Gillespie does propose that safe harbour need not be unconditional. Rather, platforms could be asked to meet certain requirements in order to maintain that status, whether that means greater transparency or improved appeal structures. However, it seems likely that the platforms would vociferously oppose any change to this generous provision, which grants them the best of both worlds: the right to remove any content they please, and responsibility for none of it.
Gillespie is largely interested in looking at the big picture, and at the breadth of content which platforms host and police. Policies must be designed to cover a wide range of content, and Gillespie seems less interested in specific case studies, except insofar as they show how a broad dictate such as “no nudity” can come into conflict with a more specific situation, such as breastfeeding, to which he dedicates a chapter. He is also interested in problems of scale, and the issues that arise when a platform is home to multiple communities of people with conflicting values and differing ideas about where lines should be drawn. Small, homogeneous online communities that believe they do not require moderation often get a rude awakening when they receive a large influx of new users who do not share their presumed values.
In this broad discussion, Custodians of the Internet is laying the groundwork for our emerging conversation about the role the platforms have played during the growth of the web as our dominant form of media, and the role we want these platforms to play in public discourse going forward. This is part of a larger discussion about not only moderation, harassment and free speech, but also data privacy, the gig economy, microtargeting, algorithmic bias, and more. The distribution of power and responsibility will shape our future in ways we have only begun to comprehend.