Who’s to blame when a machine botches your surgery?

Robert David Hart:

Medicine is an imprecise art, and medical error, whether through negligence or honest mistake, is shockingly common. Some experts believe it to be the third-biggest killer in the US. In the UK, as many as one in six patients receives an incorrect diagnosis from the National Health Service.

One of the great promises of artificial intelligence is to drastically reduce the number of mistakes made in health care. For some conditions, the technology is already approaching, and in some cases matching or even exceeding, the success rates of the best specialists. Researchers at the John Radcliffe Hospital in Oxford, for instance, claim to have developed an AI system capable of outperforming cardiologists at identifying heart-attack risk from chest scans. The results of the study have yet to be published, but if the AI performs as claimed, the technology will be offered for free to NHS hospitals across the UK. And it is just the latest in a string of successful medical image-reading AIs, including one that can diagnose skin cancer, another that can identify an eye condition responsible for around 10% of childhood vision loss worldwide, and a third that can recognize certain kinds of lung cancer.

That’s all great, but even the most capable AI will still fail sometimes. And when the mistake is made by a machine or an algorithm rather than a human, who is to blame?

This is not an abstract discussion. Defining both ethical and legal responsibility in medical care is vital for building patients’ trust in the profession and its standards. It’s also essential for determining how to compensate individuals who fall victim to medical errors and for ensuring high-quality care. “Liability is supposed to discourage people from doing things they shouldn’t do,” says Michael Froomkin, a law professor at the University of Miami.