Turing test for Augmented Intelligence

This weekend, while many of us were enjoying the Pentecost holidays, chatterbot Eugene Goostman became the first computer program to pass the famous Turing test. The team of Russian computer scientists led by Vladimir Veselov managed to convince 33% of the jury members that they were instant messaging with a 13-year-old boy called Eugene, rather than with a computer program. Over the past few days, many of the major tech blogs have reported on this historic event. While this certainly is an enormous milestone in the development of artificial intelligence, I would like to propose a new test, one which in my opinion better suits the relation between human and artificial intelligence. But let’s first take a closer look at the original Turing test.

The Turing test

The Turing test was proposed by Alan Turing in his 1950 paper ‘Computing Machinery and Intelligence’. In this paper, Turing, considered one of the fathers of computer science and artificial intelligence, sets out to answer the question of whether machines can think. Because of the ambiguity of this question, he instead turns to the question of whether we could design machines that can do the same things that we (as thinking entities) can do.

In the version of the Turing test passed by Eugene (there actually exist several versions of the test), a computer has to hold a five-minute keyboard conversation with a human judge. The judge can ask the contestant questions about anything: personal experiences, their interpretation of a scientific paper or their political preferences. Afterwards, the judge has to decide whether he or she was talking to a person or a computer. When a computer program fools more than 30% of the judges, it is said to have passed this version of the Turing test.
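As a toy illustration, the pass criterion of this version of the test can be written down in a few lines of Python. The function name is my own, and the 10-of-30 split is an illustrative choice (10 of 30 judges corresponds to the reported 33%):

```python
def passes_turing_test(verdicts, threshold=0.30):
    """Each entry in `verdicts` is one judge's guess after a five-minute
    keyboard chat: 'human' if the judge believed the contestant was a
    person. The program passes this version of the test when it fools
    strictly more than `threshold` (30%) of the judges."""
    fooled = sum(1 for v in verdicts if v == "human")
    return fooled / len(verdicts) > threshold

# 10 of 30 judges fooled is 33%, just above the 30% bar:
print(passes_turing_test(["human"] * 10 + ["machine"] * 20))  # True
# 9 of 30 is exactly 30%, which does not clear the bar:
print(passes_turing_test(["human"] * 9 + ["machine"] * 21))   # False
```

Note that the criterion uses a strict inequality, which is why 33% passes while exactly 30% would not.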

Screenshot of the chat interface with chatterbot Eugene Goostman

Iconic and controversial

Turing’s reasoning was that a computer program sophisticated enough to pass the Turing test must possess intelligence at a fully human level. This reasoning has found much support, for example from futurologist and inventor Ray Kurzweil, who has expressed this view in his Long Bet with Mitchell Kapor that a computer will pass the Turing test before 2029. If Veselov’s attempt is accepted as a valid pass under the bet’s conditions, Kapor will have to pay the Kurzweil Foundation $20,000.

However, in the field of artificial intelligence the Turing test is not only the most iconic, but also the most controversial milestone. Ever since its introduction, the Turing test has been subject to many lines of criticism. The achievement of Veselov and his team can give rise to many different questions, such as:

  • Does passing the Turing test really mean that the computer program possesses intelligence equivalent to that of a human?
  • Can true intelligence even emerge from the “meaningless” processing of formal symbols? (See the Chinese Room argument.)
  • Does it matter that the computer’s approach to performing this task is very different from a human approach?
  • Why are the chatting sessions only 5 minutes long? And why does the computer program have to convince only 30% of the judges?
  • To what extent do the computer programs make use of clever tricks to successfully fool people?
  • Isn’t it odd that Eugene imitates a 13-year-old non-native English speaker, which must make the test considerably easier? Did Eugene truly pass the test as Turing imagined it?

Brains and computers

To me, the Turing test, and the equivalence to human intelligence it supposedly proves, represents a somewhat outdated view of artificial intelligence. The digital, modular, serial, disembodied processes of computers have turned out to be very different from the analogue, networked, parallel, adaptive and embodied processes of the human brain. Because of this, computers and humans have very different capabilities and specialties. In some tasks we’re completely outrun by computers, while in other disciplines computers are no match for us. To paraphrase Andy Clark: while humans are generally good at Frisbee and bad at logic, with computers it’s the other way around.

In the early days of computing, the differences between human and artificial intelligence were not yet well understood. The Turing test illustrates what the most important goal of artificial intelligence was in those days: to recreate human intelligence. Around the same time, however, pioneers such as Douglas Engelbart did research on how computers could augment human intelligence, rather than recreate it. His vision of knowledge workers flying through information space, solving complex problems together using new digital tools, became a dominant view in the mainstream introduction of information technology.

xkcd Extended Mind

Turing test for Augmented Intelligence

When you embrace the differences between artificial and human intelligence, it makes much more sense to study how these types of intelligence can supplement each other. Instead of recreating human intelligence, I’m more interested in how artificial intelligence can augment human intelligence in a symbiotic way. Instead of the Turing test, I would therefore like to propose the ‘Turing test for Augmented Intelligence’.1

In this test, the contestant has to convince a judge in a face-to-face conversation that she has knowledge about a certain subject, while her biological brain contains only very little information on that subject. The contestant can make use of all available technology, from brain implants to AR headsets. After a fixed period of time, the judge has to decide whether or not the contestant made use of any technological help.

This test gives rise to many challenges. The contestant has to use technology that provides information in a very ambient, non-intrusive way. To make the test more feasible, it could be agreed that all participants, aided or not, wear an Augmented Reality headset, for example. The device must be able to provide context-sensitive information at precisely the right moment. It has to find answers to the judge’s questions and present the information in small, simple bits that can be understood quickly. The contestant has to process this information very rapidly, show no signs that she is being presented with all sorts of new information (by vision, hearing or direct brain stimulation) and hold a convincing conversation at the same time.

Like, share and discuss

Quite a challenge indeed! Would you say we’ll be able to achieve this? If so, when? And how would this change our concept of what it means to know something? Join the discussion below!

If you’ve enjoyed this article, help this fledgling website by liking our Facebook page or following us on Twitter!

xkcd Imposter

Notes:

  1. Intelligence Amplification and Extended Cognition are other, quite similar concepts.
About the author

Robin de Lange is the founder of the Mind Extensions website and one of its main contributors. He is a researcher and entrepreneur doing his PhD research in the Media Technology group at Leiden University. Robin holds an MSc in Media Technology and a bachelor’s degree in Physics and Philosophy from Leiden University. He is the owner of BijlesinWassenaar, a homework guidance and tutoring company, and works as a freelance app developer, video producer and teacher. More information about Robin can be found on his personal website.

  • Comment by graus (http://www.graus.nu/):

    “Does passing the Turing test really mean that the the computer program possesses intelligence equivalent to that of a human?”

    “The Turing test illustrates what the most important goal was of artificial intelligence in those days: to recreate human intelligence.”

    I’d say: no and no. The value of the Turing test (in my eyes) is that it exactly points out the opposite — instead of trying to recreate human intelligence, it suffices to make us (humans, with intelligence) believe that output of algorithms is actually intelligence at work.

    Now whether or not you agree on the value of this achievement is another matter (personally, in the setting of the Turing test I don’t see much value in it — other than an interesting engineering effort).

    But the skeptical articles and blogs that appeared after the hyped ones were about as bad as the hyped ones. I read nitpickery about whether or not it was a supercomputer (as far as I know, Eugene runs on AWS, so I’d say it’s a supercomputer), and something about scripts versus AI (really?).

    Also, the skeptical articles that pick apart algorithms and functionality are funny in a bittersweet way, as the Turing test motivates the creation of stupid algorithms that fool people. And now people are whining about how stupid the algorithms are! I say, the more stupid and less supercomputer the chatbot, the more fool on us ;-)!