In a previous blog post we introduced our AI Landscape diagram. In this post I want to look at how it helps us to identify the main challenges in the future development of AI.
On the diagram we’ve already identified how what is currently called “AI” by marketeers, the media and others is generally better thought of as automated intelligence or “narrow” AI. It takes AI techniques, such as natural language processing or machine learning, and applies them to a specific problem, without actually building the sort of full, integrated AI that we have come to expect from science fiction.
To grow the space currently occupied by today’s “AI” we can move in two directions – up the chart to make the entities seem more human, or across the chart to make them more intelligent.
The “more human” route represents Challenge 1. It is probably the easiest of the challenges, and the chart we showed previously (repeated below) gives an estimate of the relative maturity of some of the more important technologies involved.
There are two interesting effects related to work in this direction:
- Uncanny Valley - we're quite happy to deal with cartoons, and we're quite happy to deal with something that seems completely real, but there's a middle ground that we find very spooky. So in some ways the efficacy of developments rises as they get better, then plummets as they hit the valley, and finally improves again once you cannot tell them from the real thing. So whilst we've made a lot of progress in some areas over recent years (e.g. visual avatars, text-to-speech), we're now hitting the valley with them and progress may now seem a lot slower. Other elements, like emotion and empathy, we've barely started on, so they may take a long time to even reach the valley.
- Anthropomorphism - people rapidly attribute feelings and intent to even the most inanimate objects (a toaster, a printer). So a computer needs to do very little in the human direction for us to think of it as far more human than it really is. In some ways this can almost help us cross the valley, by letting human interpretation assume the system has crossed it even though it's still far more basic than is thought.
The upshot is that the next few years will certainly see systems that seem far more human than any around today, even though their fundamental tech is nowhere near being a proper "AI". The question is whether a system could pass the so-called "Gold" Turing Test (a Skype-like conversation with an avatar) without also showing significant progress along the intelligence dimension. Achieving that is probably more about the capability of the chat interface, as it seems that CGI and games will crack the visual and audio elements (although doing them in real time is still a challenge) - so it really remains the standard Turing challenge. An emotional/empathic version of the Turing Test will probably prove a far harder nut to crack.
We'll discuss the Intelligence dimension in Part 2.