
Content Moderation And Human Nature

It should go without saying that communication technologies don’t conjure up unfathomable evils all by themselves. They are a convenience-enhancer, a conduit, and a magnifying lens amplifying something that’s already there: our deeply flawed humanity. Try as we might to tame it (and boy have we tried), human nature will always rear its ugly head. Debates about governing these technologies should start by making the inherent tradeoffs more explicit.

Institutions

First, a little philosophizing. From the social contract onwards, a significant amount of resources has been allocated to attempting to subdue human nature’s predilection for self-preservation at all costs. Modern society is geared towards improving the human condition by striving to unlearn — or at least overpower — our more primitive responses.

One such attempt is the creation of institutions, with norms, rules, cultures and, on paper, inherently stronger principles than those rooted deep inside people.

It’s difficult to find ideologies that don’t allow for some need for institutions. Even the most ardent free market capitalists concede the (in their view, limited) benefits of certain institutions. Beyond order and a sense of impartiality, institutions help minimize humans’ unchecked power over consequential choices that can impact wider society.

One ideal posits that institutions (corporations, parties, governments) given unfettered control over society could rid us of the aspects of our humanity that we’ve so intently tried to escape, bringing forth prosperity, equality, innovation, and progress. The fundamental flaw in that reasoning is that institutions are still intrinsically connected to humanity: they are created, implemented, and staffed by fallible human beings.

However strict the boundaries in which humans are expected to operate, the potential for partial or even total capture is very high. The boundaries are rarely entirely solid, and even if they were, humans always have the option to not comply. Bucking the system is not just an anomaly; in much of the non-totalitarian world it is revered as a sign of independence and strong individuality, the hallmark of those lauded as mavericks.

Institutional norms tasked with guarding against the worst of what humans can offer have proven useless when challenged by people for whom self-preservation is paramount. A current and obvious example is the rise to power of Donald Trump and his relentless destruction of society-defining unwritten rules.

Even without challenging the institution outright, a turn towards self-indulgence is easily achievable, reshaping the institution in the image of those who run it. The most obvious example is that of communism, wherein the lofty goal of equality is operationalized through a party-state apparatus that ostensibly distributes the spoils of society’s labor equally. As history has shown, this is contingent on the sadly unlikely situation wherein all those populating the institutions are genuinely altruistic. Invariably, the best-case scenario dissipates, if it ever materialized, and inequality deepens — the opposite of the desired goal.

This is not a tacit endorsement of a rule-less, institution-less dystopia simply because rules and institutions are not adept at a practically impossible task. Instead, it should be read as a cautionary tale about overextending critical aspects of society and treating them as a panacea rather than the suitable and mostly successful palliative that they are.

Artificial Intelligence

Given the continual failure of institutions to overcome human nature, you’d think we would stop trying to remove our imperfect selves from the equation.

But what we’ve seen for more than a decade now has been technology that directly and distinctly promises to remove our worst impulses, if not humans entirely, from thinking, acting, or doing practically anything of consequence. AI, the ultimate and literal deus ex machina, is advertised as a solution for a large number of much smaller concerns. Fundamentally, its solution to these problems is ostensibly removing the human element.

Years of research, experiments, blunders, mistakes and downright evil deeds have led us to safely conclude that artificial intelligence is as successful at eliminating the imperfect human as the “you wouldn’t steal a car” anti-piracy campaign was at stopping copyright infringement. This is not to denigrate the important and beneficial work scientists and engineers have put into building intelligent automation tasked with solving complex problems.

Technology, and artificial intelligence in particular, is created, run and maintained by human beings with perspectives, goals, and inherent biases. Just like institutions, once a glimpse of positive change or success is evident, we extrapolate it far beyond its limits and task it with the unachievable and unenviable goal of fixing humanity — by removing it from the equation.

Platforms

Communication technology is not directly tasked with fixing society; it is simply meant as a tool to connect us all. Much like AI, it has seemingly elegant solutions for messy problems. It’s easy to see that thanks to tech platforms, be they bulletin boards or TikTok, distance becomes a trivial obstacle to maintaining connection. Community can be built and fostered online, otherwise marginalized voices can be heard, and businesses can be set up and grown digitally. Even loneliness can be alleviated.

With such a slew of real and potential benefits, it’s no wonder that we started to ascribe to these technologies increasingly consequential roles in society; roles they were never built for and that are far beyond their technical and ethical capabilities.

The Arab Spring in the early 2010s wasn’t just a liberation movement by oppressed and energized populations. It was also an opportunity for free PR for the now-tech-giants Twitter and Facebook, as various outlets and pundits branded revolutions with their names. It didn’t help that CEOs and tech executives seized on this narrative and, in typical Silicon Valley fashion, took to making promises like a politician trying to get elected.

When you set the bar that high, expectations understandably follow. The aura of tech solutionism treats such earth-shattering advances as ordinary.

Nearly everyone can picture the potential good these technologies can do for society. And while we may all believe in that potential, the reality is that, so far, communication technologies have mostly provided convenience. Sometimes this convenience is in fact life-saving, but mostly it’s just an added benefit.

Convenience doesn’t alter our core. It doesn’t magically make us better humans or create entirely different societies. It simply lifts a few barriers from our path. This article may be seen as an attempt to minimize the perceived role of technology in society, in order to subsequently deny it and its makers any blame for how society uses it. But that is not what I am arguing.

An honest debate about responsibility has to start with a clear understanding of the actual task something accomplishes, the perceived task others attribute to it, and its societal and historical context. A technology that provides convenience should not be fundamental to the functioning of a society. Yet convenience can easily become so commonplace that it ceases to be an added benefit and becomes an integral part of life, where the prospect of it being taken away is met with screams of bloody murder.

Responsibility has to be assigned to the makers, maintainers and users of communication technology, by examining which barriers are being lifted and why. There is plenty of responsibility to go around, and I am involved in a couple of projects that try to untangle this complex mess. However, these platforms are not the cause of the negative parts of life; they are merely the conduit.

Yes, a sentient conduit can tighten or loosen its grip, divert, amplify, or temporarily block messages, but it isn’t the originator of those messages or of the intent behind them. It can surely be extremely inviting to messages of hate and division, maybe because of business models, maybe because of engineering decisions, or maybe simply because growth and scale were never handled properly. But that hate and division is endemic to human nature, and to assume that platforms can do what institutions have persistently failed to do, namely eradicate it entirely, is nonsensical.

Regulation

It is clear that platforms, having reached the size and ubiquity that they have, require updated and smart regulations in order to properly balance their benefits and risks. But the push (and counter-push) to regulate has to start from a perspective that understands both fundamental leaps: platforms are to human nature what section 230 (or any other national-level intermediary liability law) is to the First Amendment (or any national-level text that enshrines the social consensus on free speech).

If your issue is with hate and hate speech, the main things you have to contend with are human nature and the First Amendment, not just the platforms and section 230. Without a doubt, both the platforms and section 230 are choices and explicit actions built on top of the other two, and are not fundamentally the only or best form of what they could be.

But a lot of the issues that bubble up within the content moderation and intermediary liability space come from concerns over where those boundaries lie. Those concerns are about the broader contexts rather than the platforms or the specific legislation.

Regulating platforms has to start from the understanding that tradeoffs, most of which are cultural in nature, are inevitable. To be clear: there is no way to completely stop evil from happening on these platforms without making them useless.

If we were to simply ignore hate speech, we’d eliminate convenience and in some instances invalidate the very existence of these platforms. That should not be an issue if these platforms were still seen as simple conveyors of convenience, but they are currently being seen as much more than that.

Tech executives and CEOs have moved into the fascinating space wherein, all at the same time, they have to protect their market power to assuage shareholders, tout their products as mind-meltingly amazing to gain and keep users, and imply that their role in society is transient and insignificant in order to mollify policy-makers.

The convenience afforded by these technologies is allowing nefarious actors to cause substantial harm to a substantial number of people. Some users get death threats, or even have their lives end tragically because of interactions on these platforms. Others will have their most private information or documents exposed, or experience sexual abuse or trauma in a variety of ways.

Unfortunately, these things happen in the offline world as well, and they are fundamentally predicated on the regulatory and institutional context and the tools that allow them to manifest. The tools are not off the hook. Their failure to minimize harm, online and off, is due for important conversations. But they are not the cause. They are the conduit.

Thus, the ultimate goal of “platforms existing without hate or violence” is very sadly not realistic. Neither are tradeoffs such as accepting the stripping of fundamental rights in exchange for a safer environment, or accepting that some people will suffer immense trauma and pain simply because one believes in the concept of open speech.

Maybe the solution is to not have these platforms at all, or to ask them to change substantially. Or maybe it’s to calibrate our expectations, or, better yet, to address the underlying issues in our society. Once we see what the boundaries truly are, any debate becomes infinitely more productive.

This article is not advancing any new or groundbreaking ideas. What it does is identify crucial and seemingly misunderstood pieces of the subtext and spell them out. Sadly, the fact that these more or less evident issues needed to be said in plain text should be the biggest takeaway.

As a qualitative researcher, I learned that there is no way to “de-bias” my work. Trying to remove myself from the equation results in a bland “view from nowhere” that is ignorant of the underlying power dynamics and inherent mechanisms of whatever I am studying. That doesn’t mean we take off our glasses for fear of them influencing what we see, because that would actually make us blind. We remedy the bias by acknowledging our glasses as well.

A communication platform (company, tech, product) that doesn’t have inherent biases is impossible. But that shouldn’t mean we can’t ask it to be better, whether through regulation, collaboration or hostile action. We just have to be cognizant of where we’re standing when we ask, of the context and potential consequences, and, as this piece hopefully shows, of what the platform can’t actually do.

The conversation surrounding platform governance would benefit immensely from these tradeoffs being made explicit. It would certainly dial down the rhetoric and (genuine) visceral attitudes towards debate as it would force those directly involved or invested in one outcome to carefully assess the context and general tradeoffs.

David Morar, PhD is an academic with the mind of a practitioner and currently a Fellow at the Digital Interests Lab and a Visiting Scholar at GWU’s Elliott School of International Affairs.

Techdirt.

Content Moderation Knowledge Sharing Shouldn’t Be A Backdoor To Cross-Platform Censorship

Ten thousand moderators at YouTube. Fifteen thousand moderators at Facebook. Billions of users, millions of decisions a day. These are the kinds of numbers that dominate most discussions of content moderation today. But we should also be talking about 10, 5, or even 1: the numbers of moderators at sites like Automattic (WordPress), Pinterest, Medium, and JustPasteIt—sites that host millions of user-generated posts but have far fewer resources than the social media giants.

There are a plethora of smaller services on the web that host videos, images, blogs, discussion fora, product reviews, comments sections, and private file storage. And they face many of the same difficult decisions about the user-generated content (UGC) they host, be it removing child sexual abuse material (CSAM), fighting terrorist abuse of their services, addressing hate speech and harassment, or responding to allegations of copyright infringement. While they may not see the same scale of abuse that Facebook or YouTube does, they also have vastly smaller teams. Even Twitter, often spoken of in the same breath as a “social media giant,” has an order of magnitude fewer moderators at around 1,500.

One response to this resource disparity has been to focus on knowledge and technology sharing across different sites. Smaller sites, the theory goes, can benefit from the lessons learned (and the R&D dollars spent) by the biggest companies as they’ve tried to tackle the practical challenges of content moderation. These challenges include both responding to illegal material and enforcing content policies that govern lawful-but-awful (and mere lawful-but-off-topic) posts.

Some of the earliest efforts at cross-platform information-sharing tackled spam and malware, such as the Mail Abuse Prevention System (MAPS), which maintains blacklists of IP addresses associated with sending spam. Employees at different companies have also informally shared information about emerging trends and threats, and the recently launched Trust & Safety Professional Association is intended to provide people working in content moderation with access to “best practices” and “knowledge sharing” across the field.
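MAPS pioneered what became the DNS-based blocklist (DNSBL) model: a mail server checks each connecting IP address against a list published over DNS, and a positive answer means the address is listed. The following is a minimal sketch of that lookup pattern, assuming a hypothetical blocklist zone; it illustrates the mechanism, not MAPS’s actual service.

```python
import socket

def is_listed(ip: str, zone: str = "dnsbl.example.org") -> bool:
    """Check an IPv4 address against a DNS-based blocklist (DNSBL).

    The client reverses the octets of the IP and queries that name
    under the blocklist's zone; any A-record answer means the address
    is listed. The zone name here is hypothetical.
    """
    reversed_ip = ".".join(reversed(ip.split(".")))
    query = f"{reversed_ip}.{zone}"
    try:
        socket.gethostbyname(query)   # resolves only if the IP is listed
        return True
    except socket.gaierror:           # NXDOMAIN -> not listed
        return False

if __name__ == "__main__":
    # A mail server might defer or reject connections from listed hosts.
    print(is_listed("192.0.2.1"))
```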

There have also been organized efforts to share specific technical approaches to blocking content across different services, namely, hash-matching tools that enable an operator to compare uploaded files to a pre-existing list of content. Microsoft, for example, made its PhotoDNA tool freely available to other sites to use in detecting previously reported images of CSAM. Facebook adopted the tool in May 2011, and by 2016 it was being used by over 50 companies.
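PhotoDNA itself computes a proprietary perceptual hash, so the sketch below is only a simplified illustration of the general hash-matching workflow described above: compute a fingerprint of each uploaded file and compare it against a shared list of fingerprints of known-bad material. The blocklist entry and function names are hypothetical, and an ordinary cryptographic hash stands in for a perceptual one.

```python
import hashlib
from pathlib import Path

# Hypothetical shared list of hashes of known-bad files. Real systems such as
# PhotoDNA use perceptual hashes that tolerate re-encoding and resizing; a
# plain SHA-256 only catches byte-for-byte identical copies.
BLOCKED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_hash(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_block(path: Path) -> bool:
    """Compare an uploaded file's hash against the shared blocklist."""
    return file_hash(path) in BLOCKED_HASHES
```

A real deployment would rely on perceptual hashes, so that re-encoded or resized copies still match, and on a vetted, centrally maintained hash list rather than a hard-coded set.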

Hash-sharing also sits at the center of the Global Internet Forum to Counter Terrorism (GIFCT), an industry-led initiative that includes knowledge-sharing and capacity-building across the industry as one of its four main goals. GIFCT works with Tech Against Terrorism, a public-private partnership launched by the UN Counter-Terrorism Executive Directorate, to “shar[e] best practices and tools between the GIFCT companies and small tech companies and startups.” Thirteen companies (including GIFCT founding companies Facebook, Google, Microsoft, and Twitter) now participate in the hash-sharing consortium.

There are many potential upsides to sharing tools, techniques, and information about threats across different sites. Content moderation is still a relatively new field, and it requires content hosts to consider an enormous range of issues, from the unimaginably atrocious to the benignly absurd. Smaller sites face resource constraints in the number of staff they can devote to moderation, and thus in the range of language fluency, subject matter expertise, and cultural backgrounds that they can apply to the task. They may not have access to — or the resources to develop — technology that can facilitate moderation.

When people who work in moderation share their best practices, and especially their failures, it can help small moderation teams avoid pitfalls and prevent abuse on their sites. And cross-site information-sharing is likely essential to combating cross-site abuse. As scholar evelyn douek discusses (with a strong note of caution) in her Content Cartels paper, there’s currently a focus among major services in sharing information about “coordinated inauthentic behavior” and election interference.

There are also potential downsides to sites coordinating their approaches to content moderation. If sites are sharing their practices for defining prohibited content, it risks creating a de facto standard of acceptable speech across the Internet. This undermines site operators’ ability to set the specific content standards that best enable their communities to thrive — one of the key ways that the Internet can support people’s freedom of expression. And company-to-company technology transfer can give smaller players a leg up, but if that technology comes with a specific definition of “acceptable speech” baked in, it can end up homogenizing the speech available online.

Cross-site knowledge-sharing could also suppress the diversity of approaches to content moderation, especially if knowledge-sharing is viewed as a one-way street, from giant companies to small ones. Smaller services can and do experiment with different ways of grappling with UGC that don’t necessarily rely on a centralized content moderation team, such as Reddit’s moderation powers for subreddits, Wikipedia’s extensive community-run moderation system, or Periscope’s use of “juries” of users to help moderate comments on live video streams. And differences in the business model and core functionality of a site can significantly affect the kind of moderation that actually works for them.

There’s also the risk that policymakers will take nascent “industry best practices” and convert them into new legal mandates. That risk is especially high in the current legislative environment, as policymakers on both sides of the Atlantic are actively debating all sorts of revisions and additions to intermediary liability frameworks.

Early versions of the EU’s Terrorist Content Regulation, for example, would have required intermediaries to adopt “proactive measures” to detect and remove terrorist propaganda, and pointed to the GIFCT’s hash database as an example of what that could look like (CDT recently joined a coalition of 16 human rights organizations in highlighting a number of concerns about the structure of GIFCT and the opacity of the hash database). And the EARN IT Act in the US is aimed at effectively requiring intermediaries to use tools like PhotoDNA — and not to implement end-to-end encryption.

Potential policymaker overreach is not a reason for content moderators to stop talking to and learning from each other. But it does mean that knowledge-sharing initiatives, especially formalized ones like the GIFCT, need to be attuned to the risks of cross-site censorship and of eliminating diversity among online fora. These initiatives should proceed with a clear articulation of what they are able to accomplish (useful exchange of problem-solving strategies, issue-spotting, and instructive failures) and also what they aren’t (creating one standard for prohibited — much less illegal — speech that can be operationalized across the entire Internet).

Crucially, this information exchange needs to be a two-way street. The resource constraints faced by smaller platforms can also lead to innovative ways to tackle abuse and specific techniques that work well for specific communities and use-cases. Different approaches should be explored and examined for their merit, not viewed with suspicion as a deviation from the “standard” way of moderating. Any recommendations and best practices should be flexible enough to be incorporated into different services’ unique approaches to content moderation, rather than act as a forcing function to standardize towards one top-down, centralized model. As much as there is to be gained from sharing knowledge, insights, and technology across different services, there’s no one-size-fits-all approach to content moderation.

Emma Llansó is the Director of CDT’s Free Expression Project, which works to promote law and policy that support Internet users’ free expression rights in the United States and around the world. Emma also serves on the Board of the Global Network Initiative, a multistakeholder organization that works to advance individuals’ privacy and free expression rights in the ICT sector around the world. She is also a member of the multistakeholder Freedom Online Coalition Advisory Network, which provides advice to FOC member governments aimed at advancing human rights online.

Techdirt.

Twitch And Reddit Ramp Up Their Enforcement Against ‘Hateful’ Content

On Monday, both Twitch and Reddit ramped up their efforts to deal with various forms of hateful content on their platforms — and both of them ended up shutting down some forums related to President Trump — which inevitably (but incorrectly) resulted in people again screaming about “anti-conservative bias.” Reddit kicked things off by announcing new content policies (which you can read here). The key change was an expanded rule against communities that “promote hate based on identity or vulnerability.”

Based on that, Reddit has permanently shuttered around 2,000 subreddits, including, most notably, the r/The_Donald subreddit for Trump fans. However, as if it were expecting the bogus claims of anti-conservative bias to show up in response, Reddit also shut down r/ChapoTrapHouse, which might be considered the flip side to The_Donald, but from the left end of the traditional political spectrum. Both communities were known for their anger-spewing wackos. Reddit painted its decision to suspend both as a way to show that it is applying the rules equally across all its subreddits:

All communities on Reddit must abide by our content policy in good faith. We banned r/The_Donald because it has not done so, despite every opportunity. The community has consistently hosted and upvoted more rule-breaking content than average (Rule 1), antagonized us and other communities (Rules 2 and 8), and its mods have refused to meet our most basic expectations. Until now, we’ve worked in good faith to help them preserve the community as a space for its users—through warnings, mod changes, quarantining, and more.

Though smaller, r/ChapoTrapHouse was banned for similar reasons: They consistently host rule-breaking content and their mods have demonstrated no intention of reining in their community.

To be clear, views across the political spectrum are allowed on Reddit—but all communities must work within our policies and do so in good faith, without exception.

Of course, because content moderation at scale is impossible to do well, I’ve already seen plenty of complaints about other Reddit forums that the site failed to take down. And I fully expect that at some point a forum will be shut down by overzealous moderators. Because that’s the nature of content moderation.

Meanwhile, over on the Twitch side, the site has been coming under increasing attacks for enabling a lot of harassment. Since much of Twitch is live-streaming, it’s that much harder to monitor. Last week, the company promised to take harassment claims more seriously and began suspending some users. On Monday, that included a temporary ban of the president’s campaign account on the site. Apparently, the move was in response to comments made at recent Trump rallies that Twitch claims violated its policies.

Twitch pointed to comments made at two rallies that led to its decision. At a campaign rally in 2016, which was recently rebroadcast on the platform, Trump said Mexico was sending over its bad actors, such as rapists or drug dealers. Twitch also pointed to Trump’s recent Tulsa rally, where he told a fictional story of a ‘tough hombre’ invading someone’s home.

“Hateful conduct is not allowed on Twitch. In line with our policies, President Trump’s channel has been issued a temporary suspension from Twitch for comments made on stream, and the offending content has been removed,” a Twitch spokesperson told CNBC.

Again, these platforms are in an impossible position — which we detailed in our post about the content moderation impossibility theorem. If they do nothing, tons of people will call out these platforms for inaction. But in pulling down these accounts, a bunch of other people will now be furious as well. And sooner or later these platforms will pull down other accounts that lots of other people (no matter what their political leanings) will get upset about as well. This is the nature of content moderation.

Techdirt.

Content Delivery Networks and clouds join MANRS internet security effort

But, as the recent Google services outage caused by a single router failure showed, the internet remains fragile. That’s why the Internet Society and the Mutually Agreed Norms for Routing Security (MANRS) …