29 January 2015

Daden Chatbot Significantly Passes Modified Turing Test in a Virtual World

As reported in a peer-reviewed paper just published by Loyola Marymount University, a chatbot avatar based on our Discourse chatbot engine recently significantly passed a modified Turing Test run in the virtual world of Second Life. The highlights from the paper are: 

  • The first natural language Turing Test conducted in a 3D virtual environment. 
  • 78% of participants incorrectly judged a chatbot to be human. 
  • Agency judgments were due to the quality of the AI engine and psychological factors. 
  • Passage of the Turing Test involves a complex process of human–computer interaction. 

The test itself was based on one we set out in another paper back in 2009. There we described how most current Turing Tests were very unbalanced: both judges and "hidden humans" knew they were taking part in a test, and often the only party that didn't was the chatbot! We described how virtual environments such as Second Life offer an ideal level playing field, since everyone, human or chatbot, looks like an avatar and can have access to the same information about the environment, and, crucially, most people in the world expect the other avatars they talk to to be other humans. So all the bot has to do is avoid giving away the fact that it is a bot, rather than try to convince an already sceptical judge that it is a human!

In the modified Turing Test that we proposed - a "covert Turing Test" - subjects (aka judges) went into a virtual world for supposedly another purpose, and whilst there had a conversation with a bot (or human). Only after the event were they asked whether they thought the avatar they were talking to was a bot or a human.

This is exactly the test that Loyola Marymount set up. You can read the full details and results in the paper, but the essence was that subjects were invited into Second Life to comment on a new shop layout and design, and were told that there would be a shop assistant there to help them. 50% of the time the assistant was a chatbot, and 50% of the time a human. A nice touch in the double-blind design was that the humans playing the assistant also didn't know that they were taking part in a Turing Test, so they couldn't act "over-human", or even "more robotic", as often happens in physical Turing Tests.


The very simple avatar used by both the humans and the chatbot.

The bottom line was that 78% of the test subjects did not recognise the avatar as a chatbot when it was being run by the chatbot - well in excess of the 30% target set in the full Turing Test, or even a 50% random choice. 

So what does this mean? 

Well, first, we don't see the Turing Test as a measure of "intelligence" or "sentience" - it is purely a reasonable test of how good a chatbot is at mimicking human conversation. And since having "human-like" conversations with computers could be useful for a whole range of things (training, healthcare, etc.), it's a reasonable test of the state of the art. 

The problem has been that most current Turing Test implementations are VERY artificial - I've been a "hidden human" and I know how unnatural the conversations are on both sides. With the covert Turing Test we were trying to create a more "fit for purpose" test: can a human tell the difference between a human and a chatbot in a practical, realistic setting, with no preconceptions? That is the sort of test we need if we want to deploy chatbots into real-world applications. Yes, it is a lower bar than a full-on Turing Test (which itself is almost taking on the artificiality of a linguistic chess game), but for us it is a very valid waymark on the route to the full Turing Test.

If you'd like more information on Discourse, or our use of chatbots within our Trainingscapes then please get in touch.

PS. You may be able to get a free copy of the article by following the PDF link against the article's listing on the journal's contents page, rather than from the article page!

28 January 2015

Virtual Field Trip - Second Life Workshops

As part of the Virtual Field Trips project we're running a couple of introductory presentations/discussions/workshops with the Open University in Second Life. The first, on 3rd Feb, is kindly being hosted by The Science Circle, and the second, on 12th Feb, is kindly being hosted by the Virtual Worlds Education Roundtable. Check out their websites for locations and times. We will then be hosting a more focussed design workshop session on 3rd March on the Open University's Deep Think island - contact us for details.

Remember, we also still have the survey running - please take part.

26 January 2015

Virtual Reality for Immersive Learning - Presentation

We've just put our virtual reality presentation on Slideshare. It hopefully takes a balanced view of the potential, and the issues, for virtual reality in the training, learning, education and visualisation spaces, based on our own first-hand experience of the Oculus Rift DK1 and DK2 and of integrating them into our applications. You can also download the original white paper.

If you'd like a demo of the VR work we've done so far, let us know and we'll see what we can arrange.

15 January 2015

Virtual Field Trips Survey - Take Part!

As part of our InnovateUK-funded Virtual Field Trips as a Service project we are running a survey to find out how people currently approach physical field trips and how a virtual field trip service might be able to help them. The survey takes only 5-10 minutes, and there are separate versions for teachers/lecturers, for students, and for specialist fieldwork tutors. Please let us have your thoughts, and do let us know if you'd like to see the results.