When we think of censorship, we think of totalitarian states exercising violent control over their populations, crushing dissent and stifling the press. Against such an adversary, technologies that provide censorship resistance seem like a positive step forward, since they promote individual liberty and human rights.
However, the adversary is often not a totalitarian state, but other users. Censorship resistance means that anybody can say anything, without suffering consequences. And unfortunately there are a lot of people out there who say and do rather horrible things. Thus, as soon as a censorship-resistant social network becomes sufficiently popular, I expect that it will be filled with messages from spammers, neo-nazis, and child pornographers (or whatever other type of content you consider despicable). One person's freedom from abuse is another person's censorship, and thus a system that emphasises censorship resistance will inevitably expose some people to abuse.
I fear that many decentralised web projects are designed for censorship resistance not so much because they deliberately want to become hubs for neo-nazis, but rather out of a kind of naive utopian belief that more speech is always better. But I think we have learnt in the last decade that this is not the case. If we want technologies to help build the kind of society that we want to live in, then certain abusive types of behaviour must be restricted. Thus, content moderation is needed.
The difficulty of content moderation
If we want to declare some types of content as unacceptable, we need a process for distinguishing between acceptable and unacceptable material. But this is difficult. Where do you draw the line between healthy scepticism and harmful conspiracy theory? Where do you draw the line between healthy satire, using exaggeration for comic effect, and harmful misinformation? Between legitimate disagreement and harassment? Between honest misunderstanding and malicious misrepresentation?