Thinking machines are ripe for a world takeover

Anjana Ahuja:

If it looks like a duck and sounds like a duck, then it probably is a duck. That is the inelegant logic behind one of the best-known challenges in artificial intelligence: the Turing test, which sets out to answer the question, “can machines think?”

The stroke of genius from Alan Turing, the second world war codebreaker, was to recognise that while actual sentience in machines is virtually impossible to verify, the illusion of sentience is absolutely testable. He proposed that if a machine could “converse” with a person so convincingly that the person believed they were interacting with another human, then that machine could be said to think.

According to weekend reports, Turing’s benchmark for artificial intelligence, which dates back to 1950, has been met by a computer program posing as a teenager from Ukraine. In a test organised by the University of Reading, a third of the judges who held five-minute text conversations with “Eugene Goostman” believed they were talking to a 13-year-old boy rather than to an advanced natural-language computer program. Such advances, the organisers say, set the scene for a new and sinister kind of cybercrime, in which trusting people are duped by clever machines into handing over sensitive information.