When Alan Turing asked in 1950 whether machines could think, he likely never imagined the answer might arrive early in the next century in the form of a computer pretending to be a teenager.
Actually, the question of artificial life has been around for hundreds of years, beginning with René Descartes, who in 1637 proposed that machines (then known as automata) could respond to human interaction but could not respond appropriately to everything said in their presence the way any human can. As technology progressed over the following centuries, machines became more sophisticated, and the idea of autonomous robotic organisms became a pop culture favorite.
Turing, a mathematician-slash-philosopher who worked with various researchers on the exploration of artificial intelligence, devised a test in which a human judge engages in natural language conversations with both a human and a machine designed to perform indistinguishably from one. If the judge cannot reliably tell the machine from the human, the machine passes the test.
A Russian team is claiming such a thing has just happened.
In a chat-based challenge, a supercomputer fooled 33 percent of judges into believing it was Eugene Goostman, the program's fictional persona: a 13-year-old boy. That 33 percent figure may be significant because it sits just above the commonly cited Turing test threshold of 30 percent. Developers Vladimir Veselov and Eugene Demchenko say the key ingredients were a plausible personality (a teen who thinks he knows more than he does) and a dialog system adept at handling more than direct questions.
Many blogs are already going nuts over the news, but it is important to note that while 30 percent is "commonly accepted", Turing never specified 30 percent as the minimum rate for passing his test. If the Russian team is being fully honest, this result is really one modest, but important, step closer to a computer truly passing Turing's challenge. For a computer to genuinely meet Turing's criterion, we would want a success rate of at least 90-95 percent across a large sample of trained participants, with no restrictions placed on the AI.
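The threshold arithmetic here is simple enough to sketch in a few lines. Everything in the snippet below is illustrative: the judge count and individual verdicts are invented for the example, since the trial's full numbers were not published in detail.

```python
# Hypothetical sketch of tallying judge verdicts in a Turing-test trial.
# A verdict of 1 means the judge labeled the machine "human".

def pass_rate(verdicts):
    """Return the fraction of judges who were fooled by the machine."""
    return sum(verdicts) / len(verdicts)

# Invented example: 10 of 30 judges fooled -> one third.
verdicts = [1] * 10 + [0] * 20
rate = pass_rate(verdicts)

print(f"{rate:.1%} of judges fooled")       # 33.3% of judges fooled
print("Above 30% threshold:", rate > 0.30)  # True
```

Note that with a small judging panel, a result a few points above 30 percent is well within ordinary sampling noise, which is one reason a large sample of trained judges would make for a far more convincing claim.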
In other words, the computer needs to pass as a reasonably intelligent adult human being, not just a teenager. Of course, if computers do pass the Turing test only to take selfies, say “YOLO”, and yell unintelligibly at people online, we’re all doomed.