“The questions are, ‘Can humans say “no” to AI, and can AI say “no” to humans?’”

by Jamais Cascio

“There are two critical uncertainties as we imagine 2040 scenarios:

1. Do citizens have the ability to see the role AI plays in their day-to-day lives and, ideally, the ability to make choices about its use?
2. Does the AI have the capacity to recognize how its actions could lead to violations of law and human rights, and to refuse to carry out those actions, even if given a direct instruction?
“In other words, can humans say ‘no’ to AI, and can AI say ‘no’ to humans? Note that the existence of AIs that say ‘no’ does not depend upon the presence of AGI; a non-sapient autonomous system that can extrapolate likely outcomes from current instructions and current context could well identify results that would be illegal (or even unethical).

“A world in which most people can’t control or understand how AI affects their lives, and in which the AI itself cannot evaluate the legality or ethics of the consequences of its processes, is unlikely to be a happy one for more than a small number of people. I don’t believe that AI will lead to a cataclysm on its own; any AI apocalypse that might come about will be the probably-unintended consequence of the short-term decisions and greed of its operators.

“It’s uncertain, however, whether people would intentionally program AIs to refuse instructions without regulatory or legal pressure; it will likely require as a catalyst some awful event that could have been avoided had AIs been able to refuse illegal orders.

“As ever, it is up to us.”