25 November 2020

Case Study - In-Patient Diabetes Management

With Bournemouth University we've been developing a training exercise for students to practise their response to a diabetes situation with an in-patient on the ward.

Students could access the exercise from a normal PC, tablet or smartphone, or using a Google Cardboard VR headset.


The exercise was used for an A-B comparison between traditional eLearning and the 3D/VR exercise, which found that the 3D/VR group performed "significantly better".

Several research papers are coming, but our favourite student response was "I really enjoyed it. Can we have more scenarios please?"!

You can read a bit more about the project and findings in our short case-study.


23 November 2020

Trainingscapes Design Decisions

 


I thought it might be useful to write a bit about the key design decision which informed the development of Trainingscapes - and which going forward may give us some novel ways of deploying the system.

An early PIVOTE exercise in Second Life

Trainingscapes has its roots in PIVOTE. When we first started developing 3D immersive training in Second Life we rapidly realised that we didn't want to code a training system in Second Life (Linden Scripting Language is fun, but not that scalable). Looking around we found that many Med Schools used "virtual patients" - which could just be a pen-and-paper exercise, but more often a 2D eLearning exercise - and that there was an emerging XML-based standard for exchanging these virtual patients called MedBiquitous Virtual Patients (MVP). We realised that we could use this as the basis of a new system - which we called PIVOTE.

A few years later PIVOTE was succeeded by OOPAL, which streamlined the engine and added a 2D web-based layout designer (which then put props in their correct places in SL), and then we built Fieldscapes/Trainingscapes, which refined the engine further.

The key point in PIVOTE is that the objects which the user sees in the world (which we call props) are, of themselves, dumb. They are just a set of 3D polygons. What is important is their semantic value - what they represent. In PIVOTE the prop is linked to its semantic value by a simple prop ID. The propID is what is in the exercise file, not the prop itself.

Within the exercise file we define how each prop responds to the standard user interactions - touch, collide, detect (we've also added sit and hear in the past) - any rules which control this, and what actions the prop should do in response to the interaction. So the PIVOTE engine just receives from the exercise environment a message that a particular propID has experienced a particular interaction, and then sends back to the environment a set of actions for a variety of props (and the UI, and even the environment) to do in response. The PIVOTE engine looks after any state tracking.
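As a rough illustration of that message flow (the field and action names here are invented for the example, not the actual PIVOTE schema), the exchange might look something like this:

// Hypothetical message from the environment to the PIVOTE engine
// (field names are illustrative only, not the real PIVOTE schema)
const interactionMessage = {
  exerciseId: "diabetes-ward-01",
  propId: "prop-insulin-pen",
  interaction: "touch"                 // touch | collide | detect | sit | hear
};

// Hypothetical set of actions the engine sends back for the environment to perform
const actionResponse = {
  actions: [
    { propId: "prop-insulin-pen", action: "highlight" },
    { propId: "prop-patient", action: "playAnimation", params: { name: "rollUpSleeve" } },
    { target: "ui", action: "showText", params: { text: "Check the prescription chart first." } }
  ]
};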

The main Trainingscapes exercise/prop authoring panel

This design completely separates the "look" of an exercise from its logic. This is absolutely key to Trainingscapes, and in day-to-day use has a number of benefits:

  • We can develop exercises with placeholders, and switch them out for the "final" prop just by changing the 3D model that's linked to the prop ID.
  • We can use the same prop in multiple exercises, and its behaviour can be different in every exercise.
Original SL version of the PIVOTE flow - but just change SL for any 3D (or audio) environment



More significantly though, it opens up a whole bunch of opportunities:
  • The PIVOTE engine can sit at the end of a simple web API; it doesn't have to be embedded in the player itself (as we currently do in Trainingscapes so that it can be used offline) - see the sketch after this list.
  • If a 3D environment can provide a library of objects and a simple scripting language with web API calls then we can use PIVOTE to drive exercises in it. This is what we did with Second Life and OpenSim - perhaps we can start to do this in AltSpaceVR, VirBela/Frame, Mozilla Hubs etc.
  • By the same measure we could create our own WebGL/WebXR/OpenXR environment and let users on the web play Trainingscapes exercises without downloading the Trainingscapes player.
  • There is no reason why props should be visual, digital 3D objects. They could be sound objects, making exercises potentially playable by users with a visual impairment - we've already done a prototype of an audio Trainingscapes player.
  • They could even be real-world objects - adding a tactile dimension!
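As a very rough sketch of what that web-API-driven approach could look like from inside a 3D environment's scripting layer (the endpoint URL, exercise ID and payload shape below are illustrative assumptions, not the actual PIVOTE API):

// Hypothetical in-world script calling a web-hosted PIVOTE engine
// (endpoint and payload are assumptions for illustration only)
const fetch = require("node-fetch");

async function reportInteraction(propId, interaction) {
  const res = await fetch("https://example.com/pivote/api/interaction", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ exerciseId: "demo-exercise", propId, interaction })
  });
  // The engine replies with the actions the environment should carry out
  const { actions } = await res.json();
  return actions;
}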

Sounds like that should keep us busy for a good few DadenU days - and if you'd like to see how the PIVOTE engine could integrate into your platform just ask!


20 November 2020

Revisiting AltSpaceVR

 



I haven't really been in AltSpaceVR much since we were doing some data visualisation experiments with its wonderful (but now I think discontinued) Enclosures feature - such as this pic of visualising USGS Earthquake data through an AFrame web page into AltSpaceVR. And this was back before the rescue of AltSpaceVR by Microsoft.

So it was fun to spend some time in there this week and see what's going on. The initial driver was an ImmerseUK informal meetup, held on the Burning Man site - some nice builds and all very reminiscent of SL circa 2008.



The avatars are less blocky than the originals - still deliberately cartoony, but better proportioned than the VirBela ones. Things I liked (this was all via Oculus Quest):

  • Voice seemed pretty seamless
  • Very much an "SL" type experience
  • Big spaces to explore
  • Point-and-click movement was functional
  • The simple interactables (eg fireworks) were fun
  • Tutorial was good, and quick and easy to install on Quest
  • The barebones Friends system let you message and send Teleport lures
  • The menu disc worked pretty well, and pics were available from the web and didn't need downloading from the Quest
  • There is flying
Things I was less keen on:
  • When you do joystick movement your field of view is blinkered to about half - totally wrecking any immersion and making it hard to keep track of people you are following. This one issue alone makes me reticent about doing much else in the world - which is a pity
  • Rotational movement is in fixed-degree steps, not smooth - again this destroys immersion and you temporarily lose your sense of location
  • When you fly you're still vertical
  • No run ability - lots of multiple point-and-clicks are the only answer
So a pity - if it wasn't for those basic movement issues it would be a pretty good place.

The next question for me of course was can I build there? AltSpaceVR has the concept of Universes and Worlds. The Burning Man thing was a Universe, and then each of the artistic sites you went to from it were Worlds. I think it's hierarchical, so one World has to be part of one Universe, and access/permissions are controlled by the Universe owner, but I think these can then vary between Worlds.

As a user (I think you have to click the Beta Programme and Worlds boxes) you get a default Universe which contains your "home" World where you start each time - so you can customise that, which is nice. From the web you can then create new Worlds (and I guess new Universes), choosing a starting template (you can't do this from VR). You then go into VR, and choose the World you want to go into and edit.

The main edit panel


Most templates have the same low-poly look as the rest of AltSpaceVR, and most are single-room size - although some are bigger. From the edit menu you can edit the objects in there, change the skybox, and then place new objects. There is quite a good library of objects already (arranged into sets), and you can import individual objects in glTF format. Textures/images can be loaded too - but a lot of this import looks like it's via the web interface.

I found most of the templates too claustrophobic so I loaded one which is just a skybox, then rezzed a plane object and made it 50m+, so I started to have a nice place to play with. I then tried placing some objects from the libraries to get a feel for the thing. Manipulation in 3D/VR was OK but not as quick as 2D, and I'd hate to do precision work in there. There's also no scripting at present. Anyway here's a few pics:




The really interesting thing (for us) is that since AltSpaceVR is built on Unity the object sets are just Unity Asset Bundles, and there is a community-built importer for them - as well as for location templates. And of course Trainingscapes also uses Unity, and Unity Asset Bundles for locations and inventories. So it might "just" be possible that we can import Trainingscapes assets into AltSpaceVR. Of course with no scripting there's not much we could do, but if there was some simple scripting and a basic API then possibly we could keep the Trainingscapes "engine" on the web, and play the exercise in AltSpaceVR - just as we used to with SL! Of course even though our builds tend to be relatively low poly they are nowhere near as low as AltSpaceVR - so that may be a blocker or just kill performance, but it might be worth a try.

Otherwise it's good to see how AltSpaceVR has come on, and if they fix that blinker issue I might be tempted to come in and build some meaningful spaces - and I'll certainly look out for some decent events and gatherings to join in-world - far closer to an SL experience than VirBela or EngageVR for me (but then that's more what it's designed to be).




13 November 2020

PIVOTE on Israeli TV

 


A lovely bit of video has just emerged that shows long-time friend and VR/VW colleague Dr Hanan Gazit demonstrating our old PIVOTE system (the forerunner of Trainingscapes) on Israeli TV about a decade ago! The system ran in Second Life and OpenSim, but crucially the exercise server was on the web, and training content could be edited on the web, not in Second Life. Exercises were defined by an XML file, and that heritage and approach is still in Trainingscapes - keep the exercise definition and player logic independent of the actual 3D (or 2D or audio) environment you use to experience it in. It's a logic that should let us develop a nice WebXR player for Trainingscapes relatively easily.

You can watch the video here (in Hebrew - although it's got sign language!):



11 November 2020

Virtual Humans - Out in Paperback!

 


David's book on Virtual Humans is now out in Paperback. Buy now at Amazon and all good (open) bookstores!

10 November 2020

DadenU Day: NodeJS NLP and Grakn


Over the last few months we've been looking at alternatives to Chatscript and Apache Jena for chatbot development. We really like the chatbot-knowledge graph architecture, but we've found Chatscript too idiosyncratic for our purposes, and lacking any solid scripting/programming capability, and Jena a bit buggy and limited.

On the Chatbot front we've had a play with all the standard offerings, all Machine Learning (ML) based, including Dialogflow, Microsoft LUIS, RASA, IBM Watson etc. In most cases it wasn't the performance as such that we found wanting but rather the authoring. All of them seemed very fiddly, with no proper "authoring" support (just coding or tables), and in several cases the intent-pattern-action grouping that is the core of the ML part was split over multiple files so authoring was a real pain (with RASA some people were even writing their own Python pre-processors so they could work with a more logical layout that then spat out the RASA format).

For the knowledge graph Neo4j looked a very good solution - it looks fantastic and has lots of capability. It doesn't have a "pure" semantic triples model, but through the neosemantics plug-in it could read Turtle files and work more like a triple store. But we then ran into deployment issues, with the plug-in not available on the community edition, the server hosting needing certificates for clients as well as the server, and the hosted version being very expensive as you're billed for all the time the server is up, not just when it's used. We might come back to it, but at this stage the challenges were too great.

So our front-runners have ended up as NodeJS for the NLP side of the bot, and Grakn for the knowledge graph/triple store, and DadenU gave me a day to play with them a bit more, and get them talking to each other!

NodeJS

NodeJS is the "server" form of JavaScript. There's a lot to love about NodeJS - it's good old familiar JavaScript and feels relatively lightweight, in the sense that you're not weighed down with IDEs, modules for everything and lines of code that you just have to trust. There is though a very full module library if you want to use it, and it's robust and eminently scalable.

For NLP there are libraries that cover both Machine Learning intent-action style chatbots and traditional part-of-speech NLP analysis. The nice thing is that you don't have to choose - you can use both!

For ML there is a wonderful library released into open-source by Axa (the insurance people) called simply nlp.js which takes a standard RASA/LUIS/Watson ML approach but everything is controlled by a simple single data object such as:

{
      "intent": "agent.age",
      "utterances": [
        "your age",
        "how old is your platform",
        "how old are you",
        "what's your age",
        "I'd like to know your age",
        "tell me your age"
      ],
      "answers": [
        "I'm very young",
        "I was created recently",
        "Age is just a number. You're only as old as you feel"
      ]
}

You can have as many of these as you want and spread them over multiple files, and having each intent's utterances and answers in one place makes things dead easy.
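As a minimal sketch (assuming the node-nlp package and intent objects shaped like the one above - this isn't our production code), loading and querying the model looks roughly like this:

// Minimal sketch: load intent objects like the one above into nlp.js (node-nlp)
const { NlpManager } = require("node-nlp");

async function buildModel(intents) {
  const manager = new NlpManager({ languages: ["en"] });
  for (const item of intents) {
    item.utterances.forEach(u => manager.addDocument("en", u, item.intent));
    item.answers.forEach(a => manager.addAnswer("en", item.intent, a));
  }
  await manager.train();      // trains the ML model over all registered intents
  return manager;
}

// Usage:
//   const manager = await buildModel(intents);
//   const result = await manager.process("en", "how old are you");
//   console.log(result.intent, result.score, result.answer);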

Then there's another library called Natural from NaturalNode which provides Part-of-Speech tagging, stemming and other NLP functions. Adding this to the bot meant that I could do the following (a rough sketch follows the list):

  • Use Natural to identify discourse act types (eg W5H question etc)
  • Use Natural to extract nouns (which will be the knowledge graph entities)
  • Use Natural to do any stemming I need
  • Use Axa-NLP to identify intent (using "it" and a dummy word in place of the noun) and pass back the predicate or module needed to answer the question
  • Assess NLP confidence scores to decide whether to get the answer from the knowledge graph (if needed) or a fallback response.
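Here's a rough sketch of the Natural steps above (the tag lists and the naive "first noun" logic are simplifying assumptions, not the full Discourse pipeline):

// Rough sketch of the Natural steps above: POS tagging to spot question words
// and nouns, plus stemming (simplified - not the full pipeline)
const natural = require("natural");

const lexicon = new natural.Lexicon("EN", "N", "NNP");
const ruleSet = new natural.RuleSet("EN");
const tagger = new natural.BrillPOSTagger(lexicon, ruleSet);
const tokenizer = new natural.WordTokenizer();

function analyse(utterance) {
  const tokens = tokenizer.tokenize(utterance);
  const { taggedWords } = tagger.tag(tokens);

  // W5H-type question words come back tagged WP/WDT/WRB (who/what/which/where/when/how)
  const isQuestion = taggedWords.some(w => ["WP", "WDT", "WRB"].includes(w.tag));

  // Nouns (NN*) are the candidate knowledge graph entities
  const nouns = taggedWords.filter(w => w.tag.startsWith("NN")).map(w => w.token);

  // Stems help match variant word forms against entity names
  const stems = nouns.map(n => natural.PorterStemmer.stem(n));

  return { isQuestion, nouns, stems };
}

// e.g. analyse("What is Discourse?")  ->  { isQuestion: true, nouns: ["Discourse"], ... }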

Grakn

Grakn is a knowledge graph database which you can install locally or on the cloud. Most of it is open source, just the enterprise management system is paid for. There is a command line database server (core), and a graphical visualiser and authoring tool (Workbase). 

With Grakn you MUST set up your ontology first, which is good practice anyway. It is defined in GRAQL, a cross between Turtle and SPARQL which is very readable, eg:

person sub entity,
  plays employee,
  has name,
  has description,
  has first-name,
  has last-name;

Data can then be imported from a similar file, or created programmatically (eg from a CSV) - and in fact the ontology can also be done programmatically.

$dc isa product,
    has description "Discourse is Daden's chatbot and conversational AI platform which uses a combination of machine-learning, semantic knowledge graphs, grammar and rules to deliver innovative conversational solutions.",
    has benefit "Discourse seperates out the tasks of language understanding from content management, so clients can focus on information management, not linguistics",
    has name "Discourse";

With the data entered you can then use Workbase to run queries and visualise the data or the ontology.


Interfacing NodeJS and Grakn

Luckily there is a nice simple interface for NodeJS to Grakn at https://dev.grakn.ai/docs/client-api/nodejs. This lets you make a GRAQL call to Grakn, retrieve a JSON structure with the result (often a graph fragment), and then use it as required.

const GraknClient = require("grakn-client");

// Run a read-only GRAQL query against a Grakn keyspace and return the collected answers
async function graql(keyspace, query) {
  // 48555 is Grakn's default gRPC port
  const client = new GraknClient("localhost:48555");
  const session = await client.session(keyspace);
  const readTransaction = await session.transaction().read();
  const answerIterator = await readTransaction.query(query);
  const response = await answerIterator.collect();
  await readTransaction.close();
  await session.close();
  client.close();
  return response;
}
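For example (the keyspace name and query below are purely illustrative), the function can then be called like this:

// Illustrative usage only - keyspace and query are examples
(async () => {
  const answers = await graql(
    "discourse",
    'match $p isa product, has name "Discourse", has description $d; get $d;'
  );
  // Each answer is a concept map from which attribute values can then be read
  console.log(answers.length + " answer(s) returned");
})();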

(Re)Building Abi

So I think I've got the makings of a very nice knowledge graph driven chatbot/conversational AI system, leveraging Machine Learning where it makes sense, but also having traditional linguistic NLP tools available when needed. The basic flow is an extension of that presented above:

  • Use Natural to identify discourse act types (eg W5H question etc)
  • Use Natural to extract nouns (which will be the knowledge graph entities; we can do our own tokenisation for compound nouns and proper names)
  • Use Natural to do any stemming I need
  • Use Axa-NLP to identify intent (using "it" and a dummy word in place of the noun) - typically the predicate needed of the noun/subject entity in order to answer the question, or a specialised module/function
  • Make the Graql call to Grakn for the object entity associated with the noun/subject and predicate
  • Give the answer back
Of course having a full programming language like NodeJS available means that we can make things a lot more complex than this.
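To make that flow concrete, here's a heavily simplified orchestration sketch tying the earlier pieces together (the intent naming, keyspace, threshold and graph query are all assumptions for illustration - this is not the actual Abi/Discourse code):

// Heavily simplified orchestration sketch - not the actual Abi/Discourse code.
// Assumes analyse() (Natural), buildModel() (nlp.js) and graql() (Grakn) from above.
async function answer(manager, utterance) {
  const { isQuestion, nouns } = analyse(utterance);
  const subject = nouns[0];                               // naive: take the first noun

  // Swap the noun for "it" so the ML intent captures the predicate, not the entity
  const generic = subject ? utterance.replace(subject, "it") : utterance;
  const result = await manager.process("en", generic);    // e.g. intent "ask.description"

  // Low confidence, no noun, or not a question -> fall back to the ML answer
  if (!isQuestion || !subject || result.score < 0.7) {
    return result.answer || "Sorry, I'm not sure about that.";
  }

  // Map the intent to a predicate and look the object entity up in the graph
  const predicate = result.intent.split(".").pop();        // e.g. "description"
  const query = `match $s isa entity, has name "${subject}", has ${predicate} $o; get $o;`;
  const answers = await graql("discourse", query);

  if (answers.length === 0) return "I don't have that information yet.";
  // In the real bot the attribute value would be pulled from the returned
  // concept maps and wrapped in a natural language sentence
  return answers;
}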

As a test case of an end-to-end bot I'm now rebuilding Abi, our old website virtual assistant, to see how it all fares, and if it looks good we'll plug this new code into our existing user-friendly back end, so that chatbot authors can focus on information and knowledge, and leave Discourse to look after the linguistics.










9 November 2020

A 3-Dimension UX Analysis of VR

 


Christian Schott and Stephen Marshall of the Victoria University of Wellington have quite a nice paper out in the Australasian Journal of Educational Technology, 2021, 37(1) and on open access at https://ajet.org.au/index.php/AJET/article/view/5166/1681.

Coming from an Experiential Education (EE) perspective they draw on a UX model by Hassenzahl and Tractinsky (2006) which consists "of three facets guided by a positive focus on creating outstanding emotional experiences, rather than on adopting a mostly instrumental, task-oriented view of interactive products, which is a core criticism of traditional HCI." 

  • "The first facet responds to this criticism of traditional HCI  and  is  termed  beyond  the  instrumental.  It  incorporates  aesthetics,  a  holistic  approach,  and  hedonic  qualities as features of the user experience. "
  • "The second facet builds on affective computing and extends emotion and affect to the user through positivity, subjectivity and the dynamic of antecedents and consequences. "
  • "The third facet is the experiential, which emphasises situatedness and temporality, and is characterised by user experiences that are complex, situated, dynamic, unique, and temporally-bounded".

The research used a VR experience of a Fijian island being developed to help better understand tourism economics on a Pacific Island, and the experimental subjects were Tourism Management students.

The UX evaluation identified 8 clusters of responses which map onto the three evaluation dimensions as shown above and were:

  • Sense of Place
  • Sensory Appeal
  • Natural Movement
  • Learning Enrichment
  • Comprehensive Vision
  • Hardware Concerns
  • Screen Resolution
  • Hyperreal Experience
  • Motion Sickness

Pity that Agency wasn't on there, and it would be interesting to see how a 2D/3D evaluation of the same experience would change the ratings and clusters identified.

The authors conclude that "The evolution of VR technology is increasingly enabling high fidelity and motivating EE learning activities to be offered at a relatively low cost, particularly when the logistical, resourcing, and ethical issues of alternative approaches are considered. The nuanced analysis of the identified positive and negative themes, through the lens of Hassenzahl and Tractinsky's (2006) adapted three UX facets, has provided valuable albeit indicative guidance where to concentrate refinement efforts. However, it is also evident that a great deal of further research on the user experience is required to extend our understanding of full-immersion VR technology as an important opportunity for EE and higher education more broadly."

Nice paper, well worth a read, and the UX model and dimensions could be useful ones to bear in mind when evaluating other 3D/VR experiences.

Hassenzahl, M., & Tractinsky, N. (2006). User experience - a research agenda. Behaviour & Information Technology, 25(2), 91-97. https://doi.org/10.1080/01449290500330331


2 November 2020

Time to Revisit 2D/3D?

The COVID pandemic is bringing a whole host of challenges to tutors and trainers in trying to deliver engaging teaching to students and staff – and in particular learning experiences which give the student a sense of context, place and shared experience.

With on-site and classroom learning facing the greatest problems, many are naturally turning to remote teaching by Zoom (and similar), eLearning systems, or video. The diagram below briefly highlights key challenges with each of these - and we'll dive into more detail next week.



Whilst the uptake of headset-based virtual reality (HMD-VR) for both entertainment and training/education has been increasing in the last year or so (and Oculus sold out of the Quest headset early in the pandemic, and its new Quest2 had 5 times the pre-orders of the Quest1), the HMD-VR approach has its own challenges during COVID:

  • Sharing headsets within a central location needs rigorous cleaning between uses.
  • Posting/couriering headsets around remote students is costly and risky.
  • Equipping every student with their own VR HMD is too expensive for most organisations/institutions.

And this is quite apart from the challenges of finding the space to use them (although we’ve found the garden is good – at least in summer!) and the discomfort that some users feel.

So given the current situation shouldn't tutors and trainers be re-considering the 2D/3D approach to immersive learning – the Sims/computer game style where you operate in a 3D environment but the experience is delivered on an ordinary 2D screen – be that on a PC/Mac, tablet or even smartphone?

 


Such systems have the advantages that:

  • Compared to Zoom you place students in the environment and they can learn when they want and at their own pace.
  • Compared to eLearning they have more agency, being able to tackle tasks in a variety of orders and ways, and can even undertake collaborative activities.
  • Compared to video the students again have more agency, more ability to explore the environment, and content can be changed on a regular basis for minimal or no cost.
  • Compared to HMD-VR there are no headsets to buy, clean or distribute, and students can potentially use their own smartphones or tablets alongside their own laptops.



Such 2D/3D approaches appear to have been forgotten and brushed aside in the race to the latest "shiny" VR experiences, despite the fact that whilst there is a lot of evidence comparing traditional eLearning to VR there is little that shows significant benefits from VR as against 2D/3D - and even some which shows a negative "benefit"!

So isn't it time to re-assess the 2D/3D approach (and perhaps find a better name for it!)?

And of course with Trainingscapes you can generate content for 2D/3D and VR at the same time, so if the situation changes and VR becomes more doable (for some students, some of the time) then you'll be ready.