There is a popular line of reasoning in platform regulation discussions today that says, basically, “Platforms aren’t responsible for what their users say, but they are responsible for what the platforms themselves choose to amplify.” This provides a seemingly simple hook for regulating algorithmic amplification—the results for searches on a search engine like Google or within a platform like Wikipedia; the sequence of posts in the newsfeed on a platform like Twitter or Facebook; or the recommended items on a platform like YouTube or Eventbrite. There is some utility to that framing. In particular, it is useful for people who work at platforms building product features or refining algorithms.
For lawyers or policymakers trying to set rules for disinformation, hate speech, and other harmful or illegal content online, though, focusing on amplification won’t make life any easier. It may increase, rather than decrease, the number of problems to be solved before arriving at well-crafted regulation. Models for regulating amplification have a great deal in common with the more familiar models from intermediary liability law, which defines platforms’ responsibility for content posted by users. As with ordinary intermediary liability laws, the biggest questions may be practical: Who defines the rules for online speech, who enforces them, what incentives do they have, and what outcomes should we expect as a result? And as with those laws, some of the most important considerations—and, ultimately, limits on Congress’s power—come from the First Amendment. Some versions of amplification law would be flatly unconstitutional in the U.S., and would face serious hurdles under human or fundamental rights law in other countries. Others might have a narrow path to constitutionality, but would require far more work than anyone has put into them so far. Perhaps after doing that work, we will arrive at wise and nuanced laws regulating amplification. For now, I am largely a skeptic.