Carlos Maza had had enough. A video producer for a center-left news site, Maza had been a regular target of mockery by the conservative comedian and broadcaster Steven Crowder over his sexual orientation and ethnicity, and he decided to do something about it. So Maza assembled a devastating compilation of the attacks on his identity—a risk anyone should weigh before making a foil of a video producer—and uploaded it to Twitter, where it took off.

The montage of obscene conduct was soon seen not just as an indictment of Crowder but of the platform on which he hosted his videos. Google, which owns YouTube, is the focus of a Washington Post report that is devoted more to the injustice of YouTube’s failure to impose any consequences on the right-wing broadcaster than to the broadcaster’s poor taste.

“I understand that speech always involves gray areas, but that it’s hard to enforce hate-speech policies should not distract from the fact that it’s sometimes extremely clear-cut,” Maza told the Post. He has a point. The company’s anti-hate-speech directives would appear to have been violated in this case. Anyone can speculate about why those policies are being selectively applied, but their selective application seems undeniable. And that’s the rub. Any mandate banning “hate speech” on platforms that host self-published content is almost always selectively applied because violations of codes of civil conduct are subject to interpretation.

According to the Post, Crowder insists he is the “victim of forces that seek to silence him,” even though YouTube declined to act on Maza’s complaint. The company noted that Crowder had not instructed his viewers to harass Maza or to release his personal information publicly (ordeals that the video producer endured nevertheless), and that his conduct therefore did not warrant punishment by the powers that be at YouTube.

No doubt, the conservative columnist Dennis Prager also perceived himself to be a victim of ideological censorship when he learned that “PragerU” videos had been “blocked” or relegated to “restricted content” over political concerns. One video featuring a pro-Israel Muslim from Britain discussing politics in the Middle East was labeled “hate speech” and blocked. Another video criticizing Planned Parenthood and debunking its claims received similar treatment. But when a U.S. district judge was asked to weigh in on the matter, she determined that Google and YouTube “are private entities” that are at liberty to “make decisions about whether and how to regulate content that has been uploaded on that website.”

The contours of the internal debate in Silicon Valley over what constitutes “hate speech” and how to police it have grown less productive as the debate has become more granular. Vanity Fair’s thorough examination of how Facebook’s speech police enforce their rules uses the word “feels” enough to demonstrate that their doctrine isn’t compatible with consistent enforcement. “We live in a world where we now acknowledge there are many genders, not just men and women,” said Facebook’s Monika Bickert. She noted that her “instinct” is that those innumerable other genders and women are subject to more harassment than men, but applying that logic in policy could disenfranchise men. That conclusion frustrates social-justice advocates who believe that harassment and prejudice cannot be decoupled from group power imbalances. “People recognize power dynamics and feel like we are tone-deaf not to address them,” said David Caragliano, Facebook’s content policy lead. But “power dynamics” are unquantifiable, subjective, and ever-shifting, and they vary across nations and even regions.

This muddle of identity politics and social justice nostrums eventually led those dedicated to establishing hate-speech standards to an inevitable conclusion: They must treat people unequally in the name of justice. Vanity Fair’s summary of the deliberations included the idea that social media should “punish attacks against gender less harshly than, say, attacks against race,” or maybe “treat the genders themselves differently.” The report sums up the social-media censor’s conundrum in one particularly succinct anecdote: “It was easy, [Obama State Department alum Mary] deBree said, to find reasons to defend a statement like ‘men are disgusting.’ But it felt wrong to let users say, ‘gay men are disgusting’ or ‘Chinese men are disgusting.’” These biases would have been institutionalized but for some self-awareness on the part of social media’s brand managers, many of whom are Obama administration alumni, who understood that this would be bad for their employers’ bottom lines.

It’s easy to see why so many defectors from Silicon Valley have accused the technology industry of maintaining an institutionalized liberal bias—an accusation that people like Facebook CEO Mark Zuckerberg and Twitter’s Jack Dorsey do not dispute. Even those who share their political persuasion must concede that the distinctions hate-speech policing would seek to impose on society would amount to tiered systems of justice and social stratification if they were governmental directives. Just this week, for example, YouTube announced that it would be removing content that denies “well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place.” The new policy proscribes videos claiming that “Jews secretly control the world,” videos saying that “women are intellectually inferior to men,” and videos that “suggest that the white race is superior to another race,” the New York Times reported. Despite the undeniable harm unscrupulous conspiracists can do to society, the limiting principle here is a vague one.

Some climate activists have urged social and even criminal penalties for those who deny the latest claims attributed to “science,” on the grounds that such denial is tantamount to murder. By that standard, who else qualifies? Despite the lack of any substantiation for the notion that genetically modified foods have adverse health effects, environmental activists have managed to convince foreign governments to forgo these lifesaving advances. Do anti-GMO activists qualify for sanction? In the end, the answer to these questions will be found not in the guidelines but in the beliefs of the individuals interpreting them.

Policing discomfiting speech that fails to rise to the level of criminal conduct has wisely been left to the public square because the public square is accountable to its denizens. Social media’s scope precludes this kind of accountability. No one should be comfortable vesting in these platforms the authority to arbitrarily censor based on the assumption that their views will always be accepted and their culture will always be dominant. Maza does not deserve his ordeal, and the treatment he has received has been shameful. But the answer does not seem to be to relegate impermissible content to the fringes of the Internet, consigning it to the thriving black market of ideas where there are no adept video producers willing to challenge it.

The Problem with Policing ‘Hate Speech’ Online via @commentarymagazine