Our Bipolar Free-Speech Disorder And How To Fix It (Part 3)

Part 1 and Part 2 of this series have emphasized that treating today’s free-speech ecosystem in “dyadic” ways—that is, treating each issue as fundamentally a tension between two parties or two sets of stakeholders—doesn’t lead to stable or predictable outcomes that adequately protect free speech and related interests.

As policymakers consider laws that affect platforms and other online intermediaries, it is critical that they consider Balkin’s framework and the implications of the “new-school speech regulation” that the framework identifies. Failure to apply it could lead—indeed, has led in the recent past—to laws or regulations that indirectly undermine basic free-expression interests.

A critical perspective on free speech in the twenty-first century requires that we recognize the extent to which free speech is facilitated by the internet and its infrastructure. We also must recognize that free speech is made vulnerable, in some new ways, by that same infrastructure. Free speech is enhanced by the lowered barriers to entry that the internet creates for speakers. At the same time, free speech is made vulnerable insofar as the infrastructure that enables it is subject to legal and regulatory action that may not be transparent to users. For example, a government may seek to block a dissident website’s domain name, or may seek to block dissident speakers’ use of certain payment systems.

There are, of course, non-governmental forces that may undermine or inhibit free speech. For example, the same lowered barriers to entry make it easier for harassers or stalkers to discourage individuals from participating. This is in some sense an old problem in free-speech doctrine; the so-called “heckler’s veto” is a subset of it. The problem of harassment may give rise to users’ complaints directly to the platform provider, or to demands that government regulate the platforms (and other speakers) more heavily.

Balkin explores the ways in which government can exercise both hard and soft power to censor or regulate speech at the infrastructure level. This can include direct changes in the law aimed at compelling internet platforms to censor or otherwise limit speech. It can include pressure that doesn’t rise to the level of law or regulation, as when a lawmaker warns a platform that it must figure out how to regulate certain kinds of troubling expression because “[i]f you don’t control your platform, we’re going to have to do something about it.” And it can include changes in law or regulation aimed at increasing incentives for platforms to self-police with a heavier hand. Balkin characterizes these ways in which government can regulate the speech of citizens and the press indirectly, through pressure on or regulation of platforms and other intermediaries like payment systems, as “new-school speech regulation.”

The important thing to remember is that government itself, although often asked to arbitrate issues that arise between internet platforms and users, is not always a disinterested party. For example, a government may have its own reasons for incentivizing platforms to collect more data (and to disclose the data they have collected), such as through National Security Letters. Because the government may regulate speech indirectly and non-transparently, it cannot position itself on every issue as a neutral referee of competing interests between platforms and users. In a strong sense, the government may have interests of its own that are in opposition to user interests, to platform interests, or to both.

Toward a New Framework

It is important to recognize that entities at each corner of Balkin’s “triangular” model may each have valid interests. For example, governmental entities may have valid interests in capturing data about users, or in suppressing or censoring certain (narrow) classes of speech, although only within a larger human-rights context in which speech is presumptively protected. End users and traditional media companies share a presumptive right to free speech, as well as other rights consistent with Article 19 of the Universal Declaration of Human Rights:

“Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”

The companies, including but not limited to the internet infrastructure companies at the top right corner of the triangle, may not have the same kind of legal status that end users or traditional media have. By the same token, they may not have the same kind of presumptively necessary role in democratic societies that governments have. But we may pragmatically recognize that they have a presumptive right to exist, pursue profit, and innovate, on the theory that their doing so ultimately redounds to the benefit of end users and even traditional media, largely by expanding the scope of voice and access.

Properly, we should recognize all these players in the “triangular” paradigm as “stakeholders.” With the exception of the manifestly illegal or malicious entities in the paradigm (e.g., “hackers” and “trolls”), entities at all three corners have their own interests, which may be in some tension with those of actors at the other corners of the triangle. Further, bilateral processes between any two sets of entities may obscure or ignore the involvement of the third set in shaping goals and outcomes.

What this strongly suggests is the need for all (lawful, non-malicious) entities to work non-antagonistically towards shared goals in a way that heightens transparency and that improves holistic understanding of the complexity of internet free speech as an ecosystem.

Balkin suggests that his free-speech-triangle model highlights three problems: (1) “new school” speech regulation that uses the companies as indirect controllers and even censors of content, (2) “private governance” by companies that lacks transparency and accountability, and (3) the incentivized collection of big data that makes surveillance and manipulation of end users (and, implicitly, the traditional media) easier. He offers three suggested reforms: (a) “structural” regulation that promotes competition and prevents discrimination among “payment systems and basic internet services,” (b) guarantees of “curatorial due process,” and (c) recognition of “a new class of information fiduciaries.”

Of the reforms, the first may be taken as a straightforward call for “network neutrality” regulation, a particular type of regulation of internet services that Balkin has expressly and publicly favored (e.g., his co-authored brief in the net neutrality litigation). But it actually articulates a broader pro-competition principle that has implications for our current internet free-speech ecosystem.

Specifically, the imposition of content-moderation obligations by law and regulation actually inhibits competition and discriminates in favor of incumbent platform companies. Because content moderation requires a high degree both of capital investment (developing software and hardware infrastructure to respond to and anticipate problems) and of human intervention (because AI filters make stupid decisions, including false positives, that have free-speech impacts), highly capitalized internet incumbent “success stories” are positioned to respond to law and regulation in ways that startups and new market entrants generally are not.

The second and third suggestions—that the platforms provide guarantees of “due process” in their systems of private governance, and that the companies that collect and hold Big Data meet fiduciary obligations—need less explanation. But I would add to the “information fiduciary” proposal that we would properly want such a fiduciary to be able to invoke some kind of privilege against routine disclosure of user information, just as traditional fiduciaries like doctors and lawyers can.

Balkin’s “triangle” paradigm, which gives us three sets of discrete stakeholders, three problems relating to the stakeholders’ relationships with one another, and three reforms, is a good first step toward framing internet free-speech issues non-dyadically. But while the taxonomy is useful, it shouldn’t be limiting or necessarily reducible to three. There are arguably some additional reforms that ought to be considered at a “meta” level (or, if you will, above and outside the corners of the free-speech triangle). With this in mind, let us add the following “meta” recommendations to Balkin’s three specific programmatic ones.

Multistakeholderism. The multipolar model that Balkin suggests (indeed, any non-dyadic model) already has institutionalized precursors in the world of internet law and policy, where it goes by the name of multistakeholderism. Those precursors, ranging from hands-on regulators and norm setters like ICANN to broader and more inclusive policy discussion forums like the Internet Governance Forum, are by no means perfect, and so they must be subjected to ongoing critical review and refinement. But they’re better at providing a comprehensive, holistic perspective than lawmaking and court cases are. Governments should be able to participate, but they should be recognized as stakeholders and not just as referees.

Commitment to democratic values, including free speech, on the internet. Everyone agrees that some kinds of expression on the internet are disturbing and disruptive—yet, naturally enough, not everybody agrees about what should be banned or controlled. We need to work actively to uncouple the commitment to free speech on the internet—which we should embrace as a function of both the First Amendment and international human-rights instruments—from debates about particular free-speech problems. The road to doing this lies in bipartisan (or multipartisan, or transpartisan) commitment to free-speech values. The road away from that commitment lies in the presumption that “free speech” is a value that is more “right” than “left” (or vice versa). To save free speech for any of us, we must commit, in the establishment of our internet policies, to what Justice Holmes called “freedom for the thought that we hate.”

Commitment to “open society” models of internet norms and internet governance institutions. We should recognize, following Karl Popper’s The Open Society and Its Enemies (Chapter 7), that our framework for internet law and regulation can’t be organized around the question of who has the right to govern, because all stakeholders have some claims of right here. Nor can it be organized around who is best suited to govern, because that model leads to disputed notions of who is best. Instead, as Popper frames it,

“For even those who share this assumption of Plato’s admit that political rulers are not always sufficiently ‘good’ or ‘wise’ (we need not worry about the precise meaning of these terms), and that it is not at all easy to get a government on whose goodness and wisdom one can implicitly rely. If that is granted, then we must ask whether political thought should not face from the beginning the possibility of bad government; whether we should not prepare for the worst leaders, and hope for the best. But this leads to a new approach to the problem of politics, for it forces us to replace the question: Who should rule? by the new question: How can we so organize political institutions that bad or incompetent rulers can be prevented from doing too much damage?”

Popper’s focus on institutions that prevent “too much damage” when “the worst leaders” are in charge is the right one. Protecting freedom of speech in today’s internet ecosystem requires protecting against the excesses or imbalances that necessarily result from merely dyadic conceptions of where the problems are or where the responsibilities for correcting them lie. If, for example, government or the public wants more content moderation by platforms, there need to be institutions that facilitate education and improved awareness about the tradeoffs. If, as a technical and human matter, it’s difficult (maybe impossible) to come up with a solution that (a) scales and (b) doesn’t lead to a parade of objectionable instances of censorship/non-censorship/inequity/bias, then we need to create institutions in which that insight is fully shared among stakeholders. (Facebook has promised more than once to throw money at AI-based solutions, or partial solutions, to content problems, but the company is in the unhappy position of having a full wallet with nothing worth buying, at least for that purpose. See “Can Mark Zuckerberg Fix Facebook Before It Breaks Democracy?”) The alternative will be increasing insistence that platforms engage in “private governance” that’s both inconsistent and less accountable. In the absence of an “ecosystem” perspective, different stakeholders will insist on short-term solutions that ignore the potential for “vicious cycle” effects.

Older models for regulating free speech in the mass media were built around entities like newspapers and publishers, which exercised high degrees of editorial control, and common carriers like the telephone and telegraph companies, which mostly did not make content-filtering determinations. There is likely no version of these older models that would work for Twitter or Facebook (or similar platforms) while maintaining the great increase in freedom of expression that those platforms have enabled. Dyadic conceptions of responsibility may lead to “vicious cycles,” as when Facebook is pressured to censor some content in response to demands for content moderation, and the company’s response creates further unhappiness with the platform (because the human beings who are the ultimate arbiters of individual content-moderation decisions are fallible, inconsistent, and so on). At that point, criticism of the platform may frame itself as a demand for less “censorship,” or for more “moderation,” or for the end of all unfair censorship/moderation. There may also be the inference that platforms have deliberately been socially irresponsible. Although that inference may be correct in some specific cases, the general truth is that the platforms have more typically been wrestling with a range of different, competing responsibilities.

It is safe to assume that today’s mass-media platforms, including but not limited to social media, as well as tomorrow’s platforms, will generate new models aimed at ensuring that freedom of speech is protected. But the only way to increase the chances that the new models will be the best possible ones is to create a framework of shared free-speech and open-society values, and to ensure that each set of stakeholders has its seat at the table when the model-building starts.

Mike Godwin (@sfmnemonic) is a distinguished senior fellow at the R Street Institute.
