Fighting online ugliness

Situationer

M. Ziauddin

For some years now, online ugliness—from misogyny, racism, anti-religious and other forms of abuse to disinformation, propaganda, and terrorist content—has been on the increase. Public resentment is justifiably high. Most internet users believe programmers can train software to spot bogus stories and outwit the people producing this garbage. But the problem is not so simple. Facebook, Twitter, YouTube, and other technology companies, along with their lawyers, have yet to find an answer to it.
Europeans, meanwhile, are moving to impose restrictions on the expression that Internet companies can permit on their platforms. Although these moves reflect legitimate concerns about the abuse of online space, many of them risk interfering with fundamental rights to freedom of expression.
According to David Kaye (“How Europe’s New Internet Laws Threaten Freedom of Expression,” published in Foreign Affairs on Dec. 18, 2017), European regulation of online speech has roots in a continental willingness to protect vulnerable groups against “speech harms.” But more recent actions show European courts and legislators pushing companies to act as speech regulators themselves.
Terrorism and crimes against minorities and refugees have led the European Commission to take a number of steps to force companies to regulate digital space. In 2016, the commission pressured Facebook, Microsoft, Twitter, and YouTube to agree to a code of conduct that pushes them to review “illegal hate speech” within 24 hours of notice and promptly remove it. It goes even further, with the companies agreeing to continue their work as mild propaganda machines “identifying and promoting independent counter-narratives.” The code parallels developments in the European Court of Human Rights, which has been toying with imposing monitoring requirements and liability on platforms for failure to remove certain kinds of hateful content.
In September of this year, the commission doubled down on these principles, adopting a formal communication that urges “online platforms to step up the fight against illegal content.” The communication puts the companies themselves in the position of identifying, especially through the use of algorithmic automation, illegal content posted to their platforms. But, as Daphne Keller of Stanford’s Center for Internet and Society has argued, the idea that automation can solve illegal content problems without sweeping in vast amounts of legal content is a fantasy. Machines typically fail to account for satire, critique, and other kinds of context that turn superficial claims of illegality into fully legitimate content. Automation thus involves disproportionate takedowns of legal content, all to target a smaller amount of illegal material online. As a matter of law, as attorney and legal analyst Graham Smith noted, the commission process reverses the normal presumptions of legality in favour of illegality, with safeguards so weak that companies will likely err on the side of taking down content.
The communication expressly avoids the problem of disinformation and propaganda. But regulation of such content may also be on the horizon, as the commission has announced creation of a High-Level Group to address it. Even the staunchest promoters of freedom of expression in European politics recognize that disinformation is a major problem. Marietje Schaake, a Dutch member of the European Parliament and a leading proponent of respect for human rights in Europe, captured a widespread view on the continent when she said in parliamentary debate that she is “not reassured when Silicon Valley or Mark Zuckerberg are the de facto designers of our realities or of our truths.”
Content restrictions extend beyond Brussels to the national level. Germany enacted a law this year that places strict obligations on major Internet companies to remove “manifestly illegal content” within 24 hours of notice, with heavy fines that incentivize quickly taking down posts rather than performing careful evaluations. The United Kingdom adopted a Digital Economy Act this year, with the goal of protecting minors from “harmful content” but likely encouraging removal of lawful adult content in order to avoid sanctions. Spain took drastic measures to crack down on Catalan separatists online. French legislators sought to criminalize the browsing of content “glorifying terrorism,” only for the measure to be struck down by the Constitutional Council. Poland has strengthened national security controls over activity on the Internet. In each of these cases, governments are putting pressure on companies to remove illegal content, a predictable response to online harms. However, the pressure works in only one direction—leaving up illegal content will lead to penalties, whereas taking down legal content will not. Unless governments also constrain takedown of legitimate content, companies will almost certainly over-regulate.
Beyond hate speech, abuse, and disinformation, one draft article in a European Commission-proposed copyright directive poses a significant potential threat to creative expression. In most online copyright law, including in the United States under the Digital Millennium Copyright Act, companies have until now typically processed claims of infringement on the basis of “notice and takedown” obligations. That is, the platforms are not expected to take down such content unless they are notified of its existence. This principle is restated in the communication and the code of conduct, even with the exceptional time frames chipping away at its availability. Article 13 of the proposed directive, however, would reverse the accepted practice with a requirement that companies “prevent the availability” of copyright-protected content, encouraging the use of “effective content recognition technologies.” Here again is the mania for automation. Although this specific provision would only apply to copyright claims, its adoption could set a precedent for significant regulation of other kinds of content. It could impose the kind of monitoring of uploads, with the accompanying threat of over-regulation, that notice-and-takedown procedures have been designed to avoid, and it would apply across a range of creative endeavours.
These rules should concern anyone who cares about freedom of expression, as they involve limitations on European uses of online platforms. European policymakers have good-faith reasons to advocate them, such as countering rampant abuse at a time of human dislocation, political instability, and the rise of far-right parties. Yet the tools used often risk overregulation, incentivizing private censorship that could undermine public debate and creative pursuits. Companies may be forced into the position of facilitating practices that undermine their customers’ access to information. Europeans should be concerned, as many are.
Why should anyone else care? In the analog era, after all, a fair response in the United States to speech regulation across the pond (or anywhere else) might have been: that’s the way they do it in Europe. They have different experiences, giving some support (if very limited) to rules that U.S. courts would never permit—such as those against Holocaust denial or the glorification of terrorism.
But online space is different. All of the major companies operate at scale, and there is significant risk that troubling content regulations in Europe will seep into global corporate practices with an impact on the uses of social media and search worldwide. The possibility of global delinking of search results may be the most obvious form of content threat, but all of the rules and proposals noted above may slowly move to undermine freedom of expression. For instance, once a company invests the considerable funding required to develop sophisticated content filters for European markets, the barriers to applying them in American contexts are likely to come down.
To be clear, global attacks on online freedom of expression are severe. Illiberal governments around the world are imposing liability on individuals for posts and tweets and blogs that merely criticize public authorities or allegedly spread false information. That kind of regulation, a popular tool of repressive states, creates direct forms of censorship and individual harm. By contrast, European states have traditionally presented a protective environment for freedom of expression, with some—Scandinavian countries setting a model example—providing the strongest protection worldwide.
These are not easy policy questions. Companies themselves should be developing approaches—and many now are—to counter abuse that often successfully aims to push people, especially women and minorities, off the platforms. Those approaches should be rooted in the common standards of human rights law. Companies should provide easy access to flagging tools so that harassment can be reported quickly, be transparent about their takedown processes, and be responsive when they get it wrong.
Governments should encourage these kinds of responsible steps by companies, just as many in civil society are doing, while avoiding the stiff penalties and the outsourcing of speech regulation that have been recent hallmarks of European responses to Internet harms. When governments demand takedowns, their courts should be available for affected parties to appeal.
The proposals above, however, risk leading to a shrinking of space in the most important forums for expression available in history. They will be hard to contain in practice, in principle, or in geographic scope. To the extent that they involve outsourcing adjudication to private actors, they limit the possibility of democratic accountability. They should be reconsidered, limited, and enforced through the traditional tools of the rule of law.