Social media have become the new custodians of knowledge. This matters.

Chad Wellmon:

In her first post-election public appearance, Hillary Clinton decried an “epidemic of fake news.” Salacious stories and fraudulent claims about politicians and their supporters had spread unfiltered and unconstrained through social media. With some concocted content from Macedonian teenagers and young American college graduates, Facebook, some suggested, threw the election to Trump. Mark Zuckerberg, Facebook’s chairman and co-founder, denied that his company had any responsibility. “More than 99 percent of what people see” on Facebook, he said shortly after the election, “is authentic.” It was a “pretty crazy idea” to suggest that Facebook could affect an election. Trust us, counseled Zuckerberg, we only give you facts and friends.

Zuckerberg’s refusal to acknowledge Facebook’s possible role in the US election is both disingenuous—Facebook has conducted experiments on the effects particular kinds of posts have on people’s voting decisions—and irresponsible. The social media behemoth is now the primary news medium in the United States. Zuckerberg casts his company as a neutral medium that simply connects friends, shares information, and facilitates democracy. But Facebook is now a social institution that people rely on and, however implicitly, trust.

Compare Zuckerberg’s initial response to Google’s recent attempts to reinvent its search engine as an arbiter of facts and trustworthiness. Acknowledging that most search engines rank web sources by their popularity, a team of Google engineers described their attempt to evaluate the “trustworthiness” of 119 million web pages in an article titled “Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources.” Google, so it seems, wants to automate trust.

Google’s method for extracting facts from the web, evaluating them, and then determining a score for individual web pages represents a significant shift from its earlier assumptions about how information is organized and transmitted in our digital age; I will return to these details later in this essay. But what I find most significant about the “Knowledge-Based Trust” project is Google’s interest in trust in the first place.
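
To make the contrast concrete: where popularity-based ranking scores a page by who links to it, a knowledge-based approach scores a page by how many of the factual claims extracted from it agree with a reference knowledge base. The Python sketch below is a minimal, hypothetical illustration of that idea, not Google’s actual method; `KNOWLEDGE_BASE`, `naive_trust_score`, and the sample triples are invented for the example, and the paper’s real model is a far more elaborate probabilistic one that must also decide whether an error lies with the page or with the fact extractor.

```python
# A factual claim as a (subject, predicate, object) triple, the unit of
# evidence the "Knowledge-Based Trust" paper works with.
Triple = tuple[str, str, str]

# Hypothetical reference knowledge base: triples taken to be true.
KNOWLEDGE_BASE: set[Triple] = {
    ("Paris", "capital_of", "France"),
    ("Barack Obama", "born_in", "USA"),
}

def naive_trust_score(extracted: list[Triple], prior: float = 0.5,
                      prior_weight: int = 5) -> float:
    """Score a page as the smoothed fraction of its extracted facts
    that the knowledge base confirms.

    This is only the naive baseline such a system would start from;
    smoothing keeps a page with one lucky hit from scoring 1.0.
    """
    if not extracted:
        return prior  # no evidence at all: fall back on the prior
    correct = sum(1 for triple in extracted if triple in KNOWLEDGE_BASE)
    return (correct + prior * prior_weight) / (len(extracted) + prior_weight)

# A page asserting one confirmed and one unconfirmed fact lands in between.
page_facts = [
    ("Paris", "capital_of", "France"),
    ("Barack Obama", "born_in", "Kenya"),
]
print(naive_trust_score(page_facts))  # 0.5
```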

The authors provide technical details for algorithms and machine-learning processes, but implicit in the entire project is a basic dissatisfaction with the current digital environment that Google helped create. And now Google wants to reform that media environment by redefining what it means to trust and what counts as authoritative knowledge in our digital age. In little more than a decade since its founding, Google has moved from helping us access the web pages we want to determining which web pages we should trust.

But what kind of custodian of knowledge and trust is Google? For centuries, universities and academies have served this cultural function. They were bulwarks against falsehood and institutions for truth. They did not always live up to the epistemic and ethical ideals they propounded, but one of their primary tasks was to make facts and beliefs correspond. In important ways, then, universities are the cultural forebears of Facebook and Google.