There is little doubt these days that social media has had a hugely toxic effect on political discourse. Cancel culture, dehumanisation and bullying, purity spirals and echo chambers: some of our most tribal behaviour has been supercharged and now plays out on an unprecedented scale. What is less clear is how much of this is driven by human dynamics playing out over a new communication medium, and how much is down to the ubiquitous algorithmic feed of recommendations and suggestions employed by the social media giants.
At their simplest, machine learning recommenders optimise for some specific goal. The classic and most easily understood example is monetisation, i.e. how to get people to spend as much as possible.
For example, it is common to target the conversion rate – the percentage of users who turn from non-spenders into spenders – and to treat increasing this figure as a primary goal of system optimisation. The approach is then to run experiments: changes to the user interface, the variety of recommendations presented, user-specific high-value deals, and the timing and rate at which these are shown, all of which produce different conversion rates. When an experiment produces a higher conversion rate, it is deemed a success and becomes the new default, further experiments are run against that, and so on, in a relentless, automated drive to optimise the target metric: converting non-spenders into spenders.
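To make that loop concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the variant names, the conversion probabilities, and the simulated users stand in for real traffic and logging. What it illustrates is the pattern described above: split users between variants, measure conversion, and promote the winner as the new default.

```python
import random

# Assumed per-variant conversion probabilities. In reality these are
# unknown; the experiment exists precisely to estimate them.
TRUE_CONVERSION = {"control": 0.020, "experiment": 0.024}

def run_experiment(n_users: int = 100_000) -> dict:
    """Randomly split simulated users between variants and count conversions."""
    counts = {v: {"users": 0, "conversions": 0} for v in TRUE_CONVERSION}
    for _ in range(n_users):
        variant = random.choice(list(TRUE_CONVERSION))
        counts[variant]["users"] += 1
        if random.random() < TRUE_CONVERSION[variant]:
            counts[variant]["conversions"] += 1
    return counts

def conversion_rate(stats: dict) -> float:
    return stats["conversions"] / stats["users"]

counts = run_experiment()
rates = {v: conversion_rate(s) for v, s in counts.items()}
print(rates)

# Promote whichever variant converted more users: it becomes the new
# default, and the next experiment is run against it.
winner = max(rates, key=rates.get)
print(f"new default: {winner}")
```

A production system would persist the split per user and gate the comparison on statistical significance before promoting a winner, but the promote-and-iterate loop is the same.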