Commentary on software-generated writing and human learning

John Symons:

Three or four months into the COVID pandemic, depending on how one counts such things, the OpenAI corporation released its GPT-3 language model. GPT-3 is an automated system that, in response to prompts and questions, generates texts that are difficult to distinguish from those written by a human being. It consists of a machine learning model with 175 billion parameters trained on a vast corpus of data, including petabytes of information stored by Common Crawl, a non-profit that provides a free archive of the contents of the public internet.

Alan Turing originally conceived of a text-based imitation game as a way of thinking about our criteria for assigning intelligence to candidate machines. If something can pass what we now call the Turing Test, that is, if it can consistently and over a sustained period convince us that we are texting with an intelligent being, then we have no good reason to deny that it counts as intelligent. It shouldn’t matter that it doesn’t have a body like ours or that it wasn’t born of a human mother. If it passes, it is entitled to the kind of value and moral consideration that we would assign to any other intelligent being. Turing’s test was intended to remove irrelevant conditions on our judgments, such as the physical features or material composition of the interlocutors. Large language models (LLMs) like GPT-3 are likely to be a central part of projects to build artificial general intelligence systems, for reasons that Turing had foreseen.

Many philosophers were rightly impressed by the power of GPT-3 in the summer of 2020, but they focused on its consequences for traditional philosophical questions about intelligence, cognition, and the like. For me, GPT-3 represented a hack that potentially undermined the kind of writing-intensive course that had served as the backbone of my teaching for two decades. I was less worried about whether GPT-3 is genuinely intelligent and more worried about whether the development of these tools would make us less intelligent.

GPT-3 is impressive, and it has certainly impressed the media. While it’s difficult to know how much contemporary media coverage of a new technology is shaped by clever public relations efforts, there is something important about these systems independent of the usual Californian hype. The effects of LLMs of this kind are potentially significant, ranging from obvious commercial applications to less obvious consequences for our psychological well-being, relationships, political discourse, social inequality, child development, care for the elderly, and education. We are becoming increasingly sensitive to the ways that technology changes society.

The philosopher Bruno Latour argued that technology is “society made durable.” But rather than being simply the projection of culture onto the physical world, technology has reshaped culture, society, and politics. Just as mobile telephony had unexpected effects on love, friendship, and politics, LLMs will change the traditional relationship between writing and thinking. The initial effects will be obvious to teachers as we head into the coming school year. AI looms over the education system, and while LLMs have so far received relatively little attention, classroom teachers will soon see the early stages of what promises to be a transformation in our relationship to writing.