Every public communication platform you can name—from Facebook, Twitter and YouTube to Parler, Pinterest and Discord—is wrestling with the same two questions:
How do we make sure we’re not facilitating misinformation, violence, fraud or hate speech?
At the same time, how do we ensure we’re not censoring users?
The more these platforms moderate content, the more criticism they draw from those who think they are over-moderating. Meanwhile, every announcement of a fresh round of moderation prompts others to point out objectionable content that remains. As with any question of editorial or legal judgment, the results are guaranteed to displease someone, somewhere—including Congress, which this week summoned the chief executives of Facebook, Google and Twitter to a March 25 hearing on misinformation on their platforms.