William Davies:

There are two broad narratives about what has happened to universities in the English-speaking world over the past forty years. They are very different from each other, yet both have some plausibility. The first runs roughly as follows. The rise of the New Right in the 1980s introduced a policy agenda for universities aimed at injecting enterprise and competition into a sector that had previously seen itself as somewhat insulated from the market. Measures such as the 1980 Bayh-Dole Act in the United States encouraged scientists and universities to treat their research as a private good, yielding financial returns on investment. In the UK, the Thatcher government’s introduction of the Research Assessment Exercise in 1986 (later the Research Excellence Framework) established a research scoring system in an effort to awaken the competitive instincts of universities and their managers.

The influence of ‘new growth theory’ on the policy agendas of the Democratic Party in the US and the Labour Party in the UK in the 1990s, when both parties were seeking to refashion themselves for a post-socialist age, placed universities firmly within the purview of economic policymaking. Universities would be tasked with building the ‘human capital’ that would generate productivity gains for the economy at large. They would also be at the centre of regional ‘clusters’ of innovation and enterprise, as their research was spun out into start-ups.

Meanwhile, the increasing prominence of national and international university league tables, often compiled by the business press, further heightened competition between institutions, and anxiety at the prospect of failure. Salaries for senior managers began to escalate as universities were reconceived as a highly profitable export industry; new postgraduate courses were dreamed up, along with debt-fuelled construction projects to house the students who would ‘consume’ them.