There are two main genres of content moderation controversy. In the first, a platform takes down a post. Then there are the inevitable screams of “First Amendment!!” and protests that such infringements on speech are downright un-American. This is followed by others rolling their eyes, exasperated at the need to remind people yet again that the First Amendment only applies to the government and that private companies are free to moderate speech as they see fit, dummies! Eventually the controversy dies down. … Until a platform takes down another high-profile post. Then it’s rinse and repeat.
In the second genre of content moderation controversy, platforms do not take down a high-profile controversial post. The post sits there while commentators and politicians erupt into a furor and lambast the platforms for failing to remove such harmful speech. The First Amendment is then invoked in another way. The speech is obviously protected, say critics, and it would be un-American to suggest otherwise! Don’t you remember that the First Amendment is exceptional and extraordinary in its protection for the thought we hate?
What unites both genres of controversy is the tendency of many of those who participate to invoke the First Amendment as a conversation-ending, self-evident trump card. But these invocations often rest on two false assumptions about First Amendment law: first, that First Amendment precedents are unambiguous in how they apply; and second, that First Amendment rules are set in stone. Neither assumption holds. The First Amendment—in all cases, but especially with respect to new technologies—is anything but clear or fixed. As we will show in this series of blog posts, neither the past nor the future of the First Amendment, nor how it will apply to the internet, is certain. The result is a public debate that rests on an oversimplified understanding of what the First Amendment is and can do.