Tag Archive for: because

Gizmodo: Why Can’t YouTube Do ‘Good’ Content Moderation? Answer: Because It’s Fucking Impossible

We’ve had something of a long-running series of posts on the topic of content moderation, with our stance generally being that any attempt to do this at scale is laughably difficult. Like, to the point of being functionally impossible. This becomes all the more difficult when the content in question is not universally considered objectionable.

Tech firms tend to find themselves in the most trouble when they try to bow to this demand for content moderation, rather than simply declaring it to be impossible and moving on. The largest platforms, namely Facebook and YouTube, have found themselves in exactly this mess. YouTube, for instance, has released new moderation policies over the past two months or so that seek to give it broad powers to eliminate content that it deems to be hate speech, or speech centered on demographic supremacy. Wanting to eliminate that sort of thing is understandable, even if you still think it’s problematic. Actually eliminating it at scale, in a way that doesn’t sweep up collateral damage and that garners wide support, is impossible.

Which makes it frustrating to read headlines like the one on Gizmodo’s recent piece on how YouTube is doing with all of this.

YouTube Said It Was Getting Serious About Hate Speech. Why Is It Still Full of Extremists?

Because it’s fucking impossible, that’s why. There is simply no world in which YouTube successfully eliminates all, or even the majority, of speech that some large group or another considers hate speech or “extreme.” That’s never going to happen, and YouTube never should have suggested it would. The screw-up here is YouTube not properly setting the public’s expectations as to what its policy would achieve. Yeah, there is still a good deal of extremist content on YouTube. Whipping up anger at content that’s available at this moment is trivially easy.

Making it more frustrating is Gizmodo’s sinister-sounding assertion that all of this is “part of YouTube’s plan.”

Strangely, this isn’t a simple oversight by YouTube’s parent company, Google. In fact, it’s the policy working as planned. YouTube hosts more than 23 million channels, making it impossible to identify each and every one that is involved with the hate movement—especially since one person’s unacceptable hate speech is another person’s reasonable argument. With that in mind, we used lists of organizations promoting hate from the Southern Poverty Law Center, Hope Not Hate, the Canadian Anti-Hate Network, and the Counter Extremism Project, in addition to channels recommended on the white supremacist forum Stormfront, to create a compendium of 226 extremist YouTube channels earlier this year.

While less than scientific (and suffering from a definite selection bias), this list of channels provided a hazy window to watch what YouTube’s promises to counteract hate looked like in practice. And since June 5th, just 31 channels from our list of more than 200 have been terminated for hate speech. (Eight others were either banned before this date or went offline for unspecified reasons.)

Before publishing this story, we shared our list with Google, which told us almost 60 percent of the channels on it have had at least one video removed, with more than 3,000 individual videos removed from them in total. The company also emphasized it was still ramping up enforcement. These numbers, however, suggest YouTube is aware of many of the hate speech issues concerning the remaining 187 channels—and has allowed them to stay active.

I would suggest that these numbers more likely represent YouTube blocking too much content, rather than not enough. In a politically divided country like ours, getting some significant number of people to declare even a relatively innocuous video “extreme” would be pretty easy. Add to that the fact that the selection bias mentioned above is badly understated in this article, and the problem deepens. Layer on top of that the simple fact that some of the sources for this list of “extremist” content — namely the SPLC — have been caught quite recently being rather cavalier about the labels they throw around, and this whole experiment begins to look like bunk.

Making Gizmodo’s analysis all the worse is that it seems to complain that YouTube is only policing the content that appears on its platform, rather than banning all content from uploaders who take nefarious actions off of YouTube’s platform.

To understand why these channels continue to operate, it’s important to know how YouTube polices its platform. YouTube’s enforcement actions are largely confined to what happens directly on its website. There are some exceptions—like when a channel’s content is connected to outside criminality—but YouTube generally doesn’t consider the external behavior of a group or individual behind an account. It just determines whether a specific video violated a specific policy.

Heidi Beirich, who runs the Southern Poverty Law Center’s Intelligence Project, charges that YouTube’s approach puts it far behind peers like Facebook, which takes a more holistic view of who is allowed to post on its site, prohibiting hate groups and their leaders from using the social network.

“Because YouTube only deals with the content posted, it allows serious white supremacists like Richard Spencer and David Duke to keep content up,” Beirich said. “In general, our feeling is that YouTube has got to get serious about removing radicalizing materials given the impact these videos can have on young, white men.”

It’s an insane request. Because a person or group says some things that are obviously objectionable, we want their voices silenced on YouTube, even when the content there isn’t objectionable? That’s fairly antithetical to how our country operates. YouTube is of course not governed by the First Amendment and can take down whatever content it chooses, but the concept of free speech and the free exchange of ideas in America is much broader as an ideal than the specific prescriptions outlined in the Constitution. Silencing all potential speech from a party simply because some of that speech is objectionable is quite plainly un-American.

Gizmodo then complains about the inconsistencies in enforcing this impossible policy.

The apparent inconsistencies go on: The channel of South African neo-Nazi group AWB was terminated. Two others dedicated to violent Greek neo-Nazi party Golden Dawn remain active. The channel of white nationalist group American Identity Movement, famous for distributing fliers on college campuses, is still up. As is a channel for the white nationalist group VDARE. And, notably, none of the 33 channels on our list run by organizations designated by the Southern Poverty Law Center as anti-LGBTQ hate groups have been removed from the platform.

In addition to giving many hateful channels a pass, this agnosticism to uploaders’ motives means that some channels with no interest in promoting white supremacy have been punished as YouTube enforces its policies.

Contrary to what Gizmodo — and even YouTube — says, this is a bug, not a feature. We cannot say this enough: there is no good way to do this. Frankly, save for criminal content, YouTube probably shouldn’t even be trying. Alternatively, if it does want to try, it would probably be more satisfying if YouTube’s public stance were something like: “We’ll block whatever we want, because we’re allowed to. If those blocks don’t seem to make sense to you, deal with it.” At least that would set the proper expectations with the public.

And then maybe there would be less consternation as to why YouTube hasn’t yet achieved the impossible. Impossible, in this case, meaning doing content moderation at scale while simultaneously making everybody happy.


LAPD Infiltrated An Anti-Fascist Protest Group Because The First Amendment Is Apparently Just A Suggestion

Maybe the LAPD doesn’t have the experience its counter-coastal counterpart has in inflicting damage on rights and liberties, but it’s trying, dammit! The NYPD’s brushes with the Constitution are numerous and perpetual. The LAPD may have spent more time working over the Fourth and Fifth Amendments during its Rampart peak, but now it’s rolling up on the First Amendment like a repurposed MRAP on a small-town lawn.

The Los Angeles Police Department ordered a confidential informant to monitor and record meetings held by a political group that staged protests against President Trump in 2017, a move that has drawn concern and consternation from civil rights advocates.

On four separate occasions in October 2017, the informant entered Echo Park United Methodist Church with a hidden recorder and captured audio of meetings held by the Los Angeles chapter of Refuse Fascism, a group that has organized a number of large-scale demonstrations against the Trump administration in major U.S. cities, according to court records reviewed by The Times.

Perhaps no entities show more concern about opposition to fascism than law enforcement agencies, for some weird and completely inexplicable reason. Somehow, this investigation involved the Major Crimes Division, which felt the need to get involved because of all the major criminal activity that is the hallmark of protest groups.

What sort of major crimes are we talking about? Well, let’s just check the record…

Police reports and transcripts documenting the informant’s activities became public as part of an ongoing case against several members of Refuse Fascism who were charged with criminal trespassing…

I see the term “major” has been redefined by the Major Crimes Division to encompass anything it might feel the urge to investigate. Supposedly, this incursion on the First Amendment was the result of an “abundance of caution” following reports of violent clashes between anti-fascists and alt-right demonstrators at other protests/rallies.

Again, the LAPD seems to not understand the meaning of the words it uses, because an “abundance of caution” should have resulted in steering clear of First Amendment-protected activities, rather than infiltrating them.

Also, an abundance of caution might have resulted in the LAPD checking out the other set of theoretical combatants, but the Los Angeles Times reports a police official said no attempt was made to infiltrate any far-right protest groups.

“Major.” “Caution.” “Consistency.” These words are beyond the department’s comprehension. And here’s the kicker: the Major Crimes Division did not send its informant in until after the demonstration was already over, the freeway had already been blocked, and criminal trespassing charges had already been brought. This wasn’t an investigation. It was a fishing expedition targeting people who don’t like fascists that used the First Amendment as a doormat. Calls to the LAPD’s Irony Division were not returned.

I guess we’re all supposed to feel better about this now that the LAPD has promised to investigate itself over its First Amendment-infringing infiltration. But it seems a department that routinely struggles to use words properly and cannot steer clear of the Constitutional shoreline shouldn’t be trusted to run a fax machine, much less an internal investigation.


Magistrate Judge Says Grande Shouldn’t Be Able To Use The DMCA Safe Harbors Because It Didn’t Really Terminate Infringers

We’ve written a few times about a key DMCA case in Texas, involving the ISP Grande Communications and Universal Music Group (and, by proxy, the copyright trolling operation Rightscorp). The case has had a lot of ups and downs, with the judge tossing UMG’s “vicarious infringement” claims while letting the “contributory infringement” claims move forward. In October, the court rejected UMG’s attempt to bring back the vicarious infringement claims that had already been dismissed, with some fairly harsh words directed at UMG for attempting that.

The latest, as first noted by Torrentfreak, is that the magistrate judge has recommended rejecting Grande’s use of the DMCA safe harbor defense. I still have general issues with how the “repeat infringer” part of the DMCA is being described in these cases (specifically: the courts are now applying it to mere accusations of infringement, rather than to actual infringers, which would require a court adjudication). However, the magistrate basically points out that Grande can’t make use of the safe harbors because… it had no repeat infringer policy at all. Or, rather, it did, but it stopped using it in 2010 and then had no policy at all through 2016.

So, without a policy, Grande couldn’t have reasonably implemented one… and thus, no safe harbors. Given the facts of the case, that’s perhaps not that surprising. The DMCA requires a reasonably implemented repeat infringer policy (Cox lost its similar lawsuit not because it lacked a policy, but because it didn’t follow its own).

Of course, that doesn’t necessarily mean that UMG is going to win the case. Not having the safe harbor makes things harder for Grande, but it’s not fatal. UMG will still need to prove contributory infringement, which is going to be fairly difficult to show. Earlier in the case, the court had noted “that this is not yet a well-defined area of law, and that there are good arguments on both sides of this issue.” Effectively, UMG will need to show that Grande “induced” infringement by its actions, and Grande will claim it did no such thing. But it can’t just use the DMCA safe harbors to get the case dismissed; rather, it will need to focus specifically on the question of whether it induced people to infringe.


[Update: Maybe not] Tasker to lose SMS and phone call functionality because of Google security changes

Source: Android Police
