When Algorithms Rule, Values Can Wither

Dirk Lindebaum, Vern Glaser, Christine Moser, and Mehreen Ashraf

Interest in the possibilities afforded by algorithms and big data continues to blossom as early adopters gain benefits from AI systems that automate decisions as varied as making customer recommendations, screening job applicants, detecting fraud, and optimizing logistical routes.1 But when AI applications fail, they can do so quite spectacularly.2

Consider the recent example of Australia’s “robodebt” scandal.3 In 2015, the Australian government established its Income Compliance Program with the goal of clawing back unemployment and disability benefits that had been paid to recipients inappropriately. It set out to identify overpayments by analyzing discrepancies between the annual income that individuals reported and the income assessed by the Australian Tax Office. Previously, the Department of Human Services, which administered the program, had used a data-matching technique to flag discrepancies, which government employees then investigated to determine whether the individuals had in fact received benefits to which they were not entitled.

Aiming to scale this process in order to recover more money and cut costs, the government developed a new, automated system that presumed every discrepancy reflected an overpayment. A notification letter demanding repayment was issued in every case, and the burden of proof fell on any individual who wished to appeal. If someone did not respond to the letter, their case was automatically forwarded to an external debt collector. By 2019, the program was estimated to have identified over 734,000 overpayments worth a total of 2 billion Australian dollars ($1.3 billion U.S.).4
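
To make the structural shift concrete, the sketch below illustrates, in Python, how an automated pipeline of this kind plausibly behaves. It is a hypothetical illustration, not the actual robodebt code: the names, fields, and functions are invented for exposition. What it shows is the design flaw the case turns on: every discrepancy, however small or explicable, is converted directly into a debt notice, with no human review between matching and enforcement.

```python
# Hypothetical sketch of an automated income-compliance pipeline.
# All names and values are illustrative; this is not the real system.

from dataclasses import dataclass


@dataclass
class Recipient:
    name: str
    reported_income: float    # income the individual reported to the benefits agency
    assessed_income: float    # income assessed by the tax office
    responded_to_letter: bool = False


def issue_repayment_letter(r: Recipient, amount: float) -> None:
    print(f"NOTICE to {r.name}: repay ${amount:,.2f}. Burden of proof is on you to appeal.")


def forward_to_debt_collector(r: Recipient, amount: float) -> None:
    print(f"ESCALATED: {r.name}'s case (${amount:,.2f}) sent to an external debt collector.")


def automated_compliance(recipients: list[Recipient]) -> None:
    for r in recipients:
        gap = r.assessed_income - r.reported_income
        if gap <= 0:
            continue  # no discrepancy, no action
        # The flawed presumption: every discrepancy IS an overpayment.
        # Previously, a caseworker checked whether the gap had a legitimate
        # explanation before any letter went out; that step is gone.
        issue_repayment_letter(r, amount=gap)
        if not r.responded_to_letter:
            forward_to_debt_collector(r, amount=gap)  # automatic escalation


if __name__ == "__main__":
    automated_compliance([
        Recipient("A. Citizen", reported_income=18_000, assessed_income=21_500),
        Recipient("B. Resident", reported_income=24_000, assessed_income=24_000),
    ])
```

Seen this way, the system’s efficiency gain came precisely from deleting the human judgment step, the point at which values had previously entered the process.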