Don’t Look to the State to Keep Social Media Companies From Imposing Ideological Conformity

JD Tuccille:

This wouldn’t be the first time, of course. YouTube has implemented a policy against “hate speech”—a grab-bag category that includes genuinely vile stuff, such as explicit racism, but is so amorphous that it can easily encompass anything that raises a moderator’s hackles. “There is a fine line between what is and what is not considered to be hate speech,” YouTube acknowledges, and the company ended up apologizing after newly hired staff pulled the plug on right-wing videos and whole accounts that didn’t violate anything other than somebody’s sense of propriety.

Ditching “hate speech” became a popular goal for tech companies last summer, after lethal political violence in Charlottesville, Virginia. Twitter—which once described itself as “the free speech wing of the free speech party”—had already convened a creepy Trust and Safety Council to “ensure that people feel safe expressing themselves on Twitter.” Inevitably, that resulted in a purge not just of open bigots, but also of people with edgy politics or trollish behavior.

Facebook, which warns that speech that “attacks people based on their actual or perceived race, ethnicity, national origin, religion, sex, gender or gender identity, sexual orientation, disability or disease is not allowed,” ran into its own trouble trying to distinguish among vigorous debate, run-of-the-mill meanness, and forbidden hate speech. ProPublica reviewed more than 900 posts alleged to violate such content rules and found that Facebook’s “content reviewers often make different calls on items with similar content, and don’t always abide by the company’s complex guidelines.” In response, the social media giant noted the difficulty of telling the hateful from the merely heated in “content that may be controversial and at times even distasteful” but which “does not cross the line into hate speech.”