28 September 2017

Fieldscapes and Midwifery Training at Bournemouth University

As you may have seen from our Fieldscapes Twitter stream, we've just reached the delivery stage of our first Midwifery Fieldscapes lesson (on Urinalysis) for Bournemouth University's Midwifery training team. We had a number of meetings with the team over the summer, then customised our existing Simple Clinic space into "Daden Community Hospital", adding more generic medical props and some BU-specific posters. Nash then created the first exercise based on an agreed flowchart/storyboard, and we're now getting to the end of the iterations with the team, stakeholders and students. The final step for us will be to train the BU team on how to use Fieldscapes to maintain and develop the exercise (and create other exercises), before they start their evaluation with the current student cohort.

Response so far from all involved has been excellent with comments such as:

  • "I had such a cool day at work recently – I got to play with the first of my VR healthcare education environments using Oculus Rift"
  • " I absolutely love this! A brilliant way to learn" - student feedback
  • "So amazing to see my project becoming a reality – I hope the students love this way of bridging the gap between classroom theory and clinical practice"
  • "That was brilliant loved it! can’t wait to do more. Very informative" - student feedback
  • "Delighted that the Oculus Rift dramatically altered the look and feel of the clinical room, and that the handheld Haptic feedback controls added to the experience"

Being built in Fieldscapes, the exercise can be experienced on a PC/Mac or Android device, and in VR on Oculus Rift or Google Cardboard on Android. One of our final tasks is integrating one of the £2-£3 hand controllers for the Cardboard to go along with the c.£15-£20 VR-BOX headsets that BU have (VR doesn't have to be expensive!).

We'll keep you posted as development and evaluation progress, and we're already talking to BU about other exciting ways to take the training.

You can read more about the BU view on the project on their blog posts at:

26 September 2017

The Three Big Challenges in AI Development: #2 Generalism and #3 Sentience

Following on from the previous post I now want to look at what happens when we try and move out of the "marketing AI" box and towards that big area of "science fiction" AI to the right of the diagram. Moving in this direction we face two major challenges, #2 and #3 of our overall AI challenges:

Challenge #2: Generalism

Probably the biggest "issue" with current "AI" is that it is very narrow. It's a programme to interpret data, or to drive a car, or to play chess, or to act as a carer, or to draw a picture. But almost any human can make a stab at doing all of those, and with a bit of training or learning can get better at them all. This just isn't the case with modern "AI". If we want to get closer to the SF ideal of AI, and also to make it a lot easier to use AI in the world around us, then what we really need is a "general purpose AI" - or what is commonly called Artificial General Intelligence (AGI). There is a lot of research going into AGI at the moment in academic institutions and elsewhere, but it is really early days. A lot of the ground work is just giving the bot what we would call common-sense - just knowing about categories of things, what they do, how to use them - the sort of stuff a kid picks up before they leave kindergarten. In fact one of the strategies being adopted is to try and build a virtual toddler and get it to learn in the same way that a human toddler does.

Whilst the effort involved in creating an AGI will be immense, the rewards are likely to be even greater - as we'd be able to just ask or tell the AI to do something and it would be able to do it, or ask us how to do it, or go away and ask another bot or research it for itself. In some ways we would cease to need to programme the bot.

Just as a trivial example, but one that is close to our heart: if we're building a training simulation and want a bunch of non-player characters filling roles, then we have to script each one, or create behaviour models and implement agents to operate within those behaviours. It takes a lot of effort. With an AGI we'd be able to treat those bots as though they were actors (well, extras) - we'd just give them the situation and their motivation, give some general direction, shout "action" and then leave them to get on with it.

Note also that moving to an AGI does not imply ANY linkage to the level of humanness. It is probably perfectly possible to have a fully fledged AGI that has only the bare minimum of humanness needed to communicate with us - think R2D2.

Challenge #3: Sentience

If creating an AGI is probably an order of magnitude greater problem than creating "humanness", then creating "sentience" is probably an order of magnitude greater again. There are, though, possibly two extremes of view here:

  • At one end, many believe that we will NEVER create artificial sentience. Even the smartest, most human-looking AI will essentially be a zombie - there'd be "nobody home", no matter how much it appears to show intelligence, emotion or empathy.
  • At the other, some believe that if we create a very human AGI then sentience might almost come with it. In fact just thinking back to the "extras" example above our anthropological instinct almost immediately starts to ask "well what if the extras don't want to do that..."
We also need to be clear about what we (well I) mean when I talk about sentience. This is more than intelligence, and is certainly beyond what almost all (all?) animals show. So it's more than emotion and empathy and intelligence. It's about self-awareness, self-actualisation and having a consistent internal narrative, internal dialogue and self-reflection. It's about being able to think about "me" and who I am, and what I'm doing and why, and then taking actions on that basis - self-determination.

Whilst I'm sure we could code a bot that "appears" to do much of that, would that mean we have created sentience - or does sentience have to be an emergent behaviour? We have a tough time pinning down what all this means in humans, so trying to understand what it might mean for a bot (and code it, or create the conditions for the AGI to evolve it) is never going to be easy.

So this completes our chart. To move from the "marketing" AI space of automated intelligence to the science-fiction promise of "true" AI, we face three big challenges, each probably an order of magnitude greater than the last:

  • Creating something that presents as 100% human across all the domains of "humanness"
  • Creating an artificial general intelligence that can apply itself to almost any task
  • Creating, or evolving, something that can truly think for itself, have a sense of self, and which shows self-determination and self-actualisation
It'll be an interesting journey!

25 September 2017

The Three Big Challenges in AI Development: #1 Humanness

In a previous blog post we introduced our AI Landscape diagram. In this post I want to look at how it helps us to identify the main challenges in the future development of AI.

On the diagram we’ve already identified how that stuff which is currently called “AI” by marketeers, media and others is generally better thought of as being automated intelligence or “narrow” AI. It is using AI techniques, such as natural language or machine learning, and applying them to a specific problem, but without actually building the sort of full, integrated, AI that we have come to expect from Science Fiction.

To grow the space currently occupied by today’s “AI” we can grow in two directions – moving up the chart to make the entities seem more human, or moving across the chart to make the entities more intelligent.


The “more human” route represents Challenge 1. It is probably the easiest of the challenges, and the chart we showed previously (and repeated below) shows an estimate of the relative maturity of some of the more important technologies involved.

There are two interesting effects related to work in this direction:

  • Uncanny Valley - we're quite happy to deal with cartoons, and we're quite happy to deal with something that seems completely real, but there's a middle ground that we find very spooky. So in some ways the efficacy of developments rises as they get better, then plummets as they hit the valley, and finally improves again once you cannot tell them from the real thing. So whilst we've made a lot of progress in some areas over recent years (e.g. visual avatars, text-to-speech), we're now hitting the valley with them and progress may seem a lot slower. Other elements, like emotion and empathy, we've barely started on, so they may take a long time even to reach the valley.
  • Anthropomorphism - People rapidly attribute feelings and intent to even the most inanimate object (toaster, printer). So in some ways a computer needs to do very little in the human direction for us to think of it as far more human than it really is. In some ways this can almost help us cross the valley by letting human interpretation assume the system has crossed the valley even though it's still a lot more basic than is thought.
The upshot is that the next few years will certainly see systems that seem far more human than any around today, even though their fundamental tech is nowhere near being a proper "AI". The question is whether a system could pass the so-called "Gold" Turing Test (a Skype-like conversation with an avatar) without also showing significant progress along the intelligence dimension. Achieving that is probably more about the capability of the chat interface, as it seems that CGI and games will crack the visual and audio elements (although doing them in real-time is still a challenge) - so it really remains the standard Turing challenge. An emotional/empathic version of the Turing Test will probably prove a far harder nut to crack.

We'll discuss the Intelligence dimension in Part 2.

18 September 2017

Automated Intelligence vs Automated Muscle

As previously posted I've long had an issue with the "misuse" of the term AI. I usually replace "AI" with "algorithms inside" and the marketing statement I'm reading still makes complete sense!

Jerry Kaplan speaking on the Today programme last week was using the term "automation" to refer to what a lot of current AI is doing - and actually that fits just as well, and also highlights that this is something more than just simple algorithms, even if it's a long way short of science-fiction AIs and Artificial General Intelligence.

So now I'm happy to go with "automated intelligence" as what modern AI does - it does automate some aspects of a very narrow "intelligence" - and the use of the word automated does suggest that there are some limits to the abilities (which "artificial" doesn't).

And seeing as I was at an AI and Robotics conference last week that also got me to thinking that robotics is in many ways just "automated muscle", giving us a nice dyad with advanced software manifesting itself as automated intelligence (AI), and advanced hardware manifesting as automated muscle (robots).

15 September 2017

AI & Robotics: The Main Event 2017

David spoke at the AI & Robotics: The Main Event 2017 conference yesterday. The main emphasis was far more on AI (well machine learning) rather than robotics. David talked delegates through the AI Landscape model before talking about the use of chatbots/virtual characters/AI within the organisation in roles such as teaching, training, simulation, mentoring and knowledge capture and access.

Other highlights from the day included:

  • Prof. Noel Sharkey talking about responsible robotics and his survey on robots and sex
  • Stephen Metcalfe MP and co-chair of the All Party Parliamentary Group on AI talking about the APPG and Government role
  • Prof. Philip Bond talking about the Government's Council for Science and Technology and its role in promoting investment in AI (apparently there's a lot of it coming!)
  • Pete Trainor from BIMA talking about using chatbots to help avoid male suicides by providing SU, a reflective companion - https://www.bima.co.uk/en/Article/05-May-2017/Meet-SU
  • Chris Ezekiel from Creative Virtual talking about their success with virtual customer service agents (Chris and I were around for the first chatbot boom!)
  • Intelligent Assistants showing the 2nd highest growth in interest from major brands in terms of engagement technologies
  • Enterprise chat market worth $1.9bn
  • 85% of enterprise customer engagement to be without human contact by 2020
  • 30% increase in virtual agent use (forecast or historic, timescale - not clear!)
  • 69% of consumers reported that they would choose to interact with a chatbot before a human because they wanted instant answers!
There was also a nice 2x2 matrix (below) looking at new/existing jobs and human/machine workers. 

This chimed nicely with a slide by another presenter which showed how, as automation comes in, workers initially resist, then accept, then as it takes their jobs over say the job wasn't worth doing anyway and that they've now found a better one - until that starts to be automated too. In a coffee chat we were wondering where all the people from the typing pools went when PCs came in. Our guess is that they went (notionally) to call centres - and guess where automation is now striking! Where will they go next?

14 September 2017

Daden at Number 10

Daden MD David Burden was part of a delegation of Midlands-based business owners and entrepreneurs to 10 Downing Street yesterday to meet with one of the PM's advisors on business policy. The group represented a wide range of businesses from watchmakers to construction industry organisations, and social enterprises and charity interests were also well represented. Whilst the meeting itself was quite short, it is hopefully the start of a longer engagement with Government for both this group and Daden (we also submitted evidence to the House of Lords' Select Committee on AI last week and are exploring some other avenues of engagement).

6 September 2017

An AI Landscape

In the old days there used to be a saying that "what we call 'artificial intelligence' is basically what computers can't do yet" - so as things that were thought to take intelligence, like playing chess, were mastered by a computer, they ceased to be things that needed "real" intelligence. Today, it's almost as though the situation has reversed: to read most press-releases and media stories, "what we call 'artificial intelligence'" is now basically anything that a computer can do today.

So in order to get a better handle on what we (should) mean by "artificial intelligence" we've come up with the landscape chart above. Almost any computer programme can be plotted on it - and so can the "space" that we might reasonably call "AI" - so we should be able to get a better sense of whether something has a right to be called AI or not.

The bottom axis shows complexity (which we'll also take as being synonymous with sophistication). We've identified 4 main points on this axis - although it is undoubtedly a continuum, boundaries will be blurred and even overlapping, and we are probably also mixing categories too:

  • Simple Algorithms - 99% of computer programmes; even complex ERP and CRM systems are highly linear and predictable
  • Complex Algorithms - things like (but not limited to) machine learning, deep learning, neural networks, Bayesian networks, fuzzy logic etc., where the complexity of the inner code starts to go beyond simple linear relationships. Lots of what is currently called AI is here - but it really falls short of a more traditional definition of an AI.
  • Artificial General Intelligence - the holy grail of AI developers, a system which can apply itself, using common sense and general knowledge, to a wide range of problems and solve them to a similar level as a human
  • Artificial Sentience - beloved of science-fiction, code which "thinks" and is "self-aware"

The vertical axis is about "presentation" - does the programme present itself as human (or indeed another animal or being) or as a computer? Our ERP or CRM system typically presents as a computer GUI - but if we add a chatbot in front of it, it instantly presents as more human. The position on the axis is influenced by the programme's capability in a number of dimensions of "humanness":

  • Text-to-speech: Does it sound human? TTS has plateaued in recent years, good but certainly recognisably synthetic
  • Speech Recognition: Can it recognise human speech without training? Systems like Siri have really driven this on recently.
  • Natural Language Generation: This tends to be template driven or parroting back existing sentences. Lots more work needed, especially on argumentation and story-telling
  • Avatar Body Realism: CGI work in movies has made this pretty much 100% except for skin tones
  • Avatar Face Realism: All skin and hair so a lot harder and very much stuck in uncanny valley for any real-time rendering
  • Avatar Body Animation: For gestures, movement etc. Again movies and decent motion-capture have pretty much solved this.
  • Avatar Expression (& lip sync): Static faces can look pretty good, but try to get them to smile or grimace or just sync to speech and all realism is lost
  • Emotion: Debatable about whether this should be on the complexity/sophistication axis (and/or is an inherent part of an AGI or artificial sentient), but it's a very human characteristic and a programme needs to crack it to be taken as really human. Games are probably where we're seeing the most work here.
  • Empathy: Having cracked emotion the programme then needs to be able to "read" the person it is interacting with and respond accordingly - lots of work here but face-cams, EEG and other technology is beginning to give a handle on it.
The chart gives a very rough assessment of the maturity of each.

There are probably some alternative vertical dimensions we could use other than "presentation" to give us a view on an interesting landscape - Sheridan's autonomy model could be a useful one, which we'll cover in a later post.

So back on the chart we can now plot where current "AI" technologies and systems might sit:

The yellow area shows the space that we typically see marketeers and others use the term AI to refer to!

But compare this to the more popular, science-fiction derived, view of what is an "AI".

Big difference - and zero overlap!

Putting them both on the same chart makes this clear.
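The zero-overlap point can be made concrete with a small sketch. The coordinates and region boundaries below are invented purely for illustration (the blog's chart gives no numbers), using a notional 0-10 scale on each axis:

```python
# Illustrative sketch of the AI landscape chart: each system gets a rough
# (complexity, humanness) position on a notional 0-10 scale. All coordinates
# and boundary values here are assumptions for illustration only.

def region(complexity, humanness):
    """Return which area of the landscape a given system falls into."""
    if complexity <= 5 and humanness <= 6:
        # Simple/complex algorithms with modest presentation: the yellow area
        return "marketing AI"
    if complexity >= 7 and humanness >= 7:
        # AGI/sentience presented with near-full humanness
        return "science-fiction AI"
    return "neither"

systems = {
    "ERP/CRM system": (1, 1),
    "Chess engine": (4, 1),
    "Voice assistant": (4, 5),
    "SF android": (9, 9),
}

for name, (c, h) in systems.items():
    print(f"{name}: {region(c, h)}")
```

Because the two regions are defined over disjoint ranges of the complexity axis, no system can fall into both - which is exactly the "zero overlap" the chart shows.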

So hopefully a chart like this will give you, as it has us, a better understanding of what the potential AI landscape is, and where the current systems, and the systems of our SF culture, sit. Interestingly it also raises a question about the blank spaces and the gaps, and in particular how we move from today's very "disappointing" marketing versions of AI to the ones we're promised in SF, from "Humans" to Battlestar Galactica!

4 September 2017

Hurricane Harvey SOS Data

Seeing as we're also doing a project at the moment about evacuation from major disasters, we were interested in seeing what data we could find around Hurricane Harvey. It so happens that volunteers have been co-ordinating efforts at @HarveyRescue and have been collating the SOS reports from various sources, from which the media has been building maps such as those on the New York Times.

We were able to download the raw data from the @HarveyRescue site and bring it pretty quickly into Datascape. Unfortunately the first ~5,000 of the ~11,000 records all showed the same date and time, so we couldn't use them for a space-time plot, but the remaining records were OK.
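The clean-up step amounts to spotting and dropping a degenerate timestamp value. A minimal sketch, assuming records loaded as dicts with a "timestamp" field (the field name and layout are our assumptions, not the actual @HarveyRescue schema):

```python
from collections import Counter

# Hypothetical clean-up for a batch of SOS records where a large block shares
# one placeholder timestamp. Field name "timestamp" is an assumption.

def drop_degenerate_times(records, field="timestamp"):
    """Remove records sharing the single most common timestamp value,
    but only if that value dominates the data set (likely a placeholder)."""
    counts = Counter(r[field] for r in records)
    value, n = counts.most_common(1)[0]
    if n > len(records) * 0.25:  # ~45% of the Harvey records shared one value
        return [r for r in records if r[field] != value]
    return records
```

The 25% threshold is arbitrary; the point is simply to avoid discarding a genuinely popular timestamp in a healthier data set.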

Our overview visualisation is shown above. You can launch it in WebGL in 3D in your own browser (and in mobile VR with Google Cardboard on your smartphone) by going to:


On the visualisation:

  • Height is time, newest at the top
  • Colour is:
    • Cyan: Normal SOS
    • Black: involves visually impaired people
    • Magenta: involves children
    • Green: involves elderly
  • Shape is priority:
    • Sphere = normal
    • Tetrahedron = semi-urgent
    • Cube = urgent/emergency
  • Size is # of people affected, roughly logarithmic
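The encoding above can be sketched as a simple record-to-channels mapping. The field names ("category", "priority", "people") are hypothetical stand-ins for whatever the real export uses:

```python
import math

# Sketch of the visual encoding described above. Field names and category
# labels are assumptions, not the actual data schema.

COLOURS = {
    "normal": "cyan",
    "visually_impaired": "black",
    "children": "magenta",
    "elderly": "green",
}
SHAPES = {
    "normal": "sphere",
    "semi_urgent": "tetrahedron",
    "urgent": "cube",
}

def encode(record):
    """Map one SOS record to Datascape-style visual channels."""
    return {
        "colour": COLOURS.get(record["category"], "cyan"),
        "shape": SHAPES.get(record["priority"], "sphere"),
        # Roughly logarithmic size, so a party of 100 isn't 100x a party of 1
        "size": 1.0 + math.log10(max(1, record["people"])),
    }
```

(Height-as-time is just the timestamp mapped straight onto the vertical axis, so it's omitted here.)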

You can fly all around and through the data, and hover on a point to see the summary info. We've removed the more detailed information for privacy reasons.

It's a pity that we haven't got the early events data, but you can still see the time effects in a variety of places:

  • The whole Port Arthur area kicks off way later than downtown Houston
  • There is another time limited cluster around Kingwood, peaking around 9/10am on 29th
  • And another lesser one around Baytown at 9/12am on 29th
  • There is some evidence of an over-night lull in reporting, about 2am-6am
The Port Arthur cluster
We're now looking at the Relief stage data and will hopefully get something up on that later in the week.

Don't forget to try the visualisation.