21 December 2020

Daden Video Presentation(s) at the GIANT Health Conference

 





David presented at the GIANT Health Conference on 2nd Dec, giving an overview of our increasing number of health-related immersive 3D/VR projects. Due to some confusion in the GIANT arrangements there are TWO versions of the 20 min presentation and Q&A session - you can watch either by clicking on the images above. The second has an interesting chat about future VR ecosystems at the end of the Q&A.


(Both videos should start about 10-15 secs before David's piece)






9 December 2020

New JISC Report - Learning and Teaching Reimagined (not?)



JISC - the body which advises FE/HE on the use of technology - has issued an important new report, Learning and Teaching Reimagined.

"The flagship learning and teaching reimagined report is the result of a five-month higher education initiative to understand the response to COVID-19 and explore the future of digital learning and teaching. Learning and teaching reimagined, with the support of its advisory board, and more than 1,000 HE participants, provides university leaders with inspiration on what the future might hold, guidance on how to get there and practical tools to develop your plans."

As a member of JISC's Virtual Reality Framework we were fascinated to see what the report has to say about immersive learning technologies.

The report identifies 4 possible future scenarios - intended more to drive discussion than as options to choose between:
  • A very familiar learning experience on campus, for students who have already adapted to a socially-distanced world. Important and distinctive here is that institutional resilience is the sole driver for the use of technology-enhanced learning and teaching - the ability to adapt to significant change and move operations fully online as and when required. Learning and teaching is predominantly face to face and, as such, is far less reliant on technology-enhanced learning.
  • Technology-enhanced learning supplements a ‘traditional’, lecture-led, synchronous learning and teaching experience. It feels familiar to students, while offering a broader range of learning opportunities. Significantly it offers confidence in the resilience of the university experience. Leaders are increasingly aware of the benefits of technology-enhanced learning and recognise the efficiencies gained from supplementing their preferred campus-based models.
  • A ‘step change’ in the higher education offering. Students experience the flexibility and convenience of learning, increasingly enabling adaptive and self-directed learning, with more active learning opportunities. Leaders are increasingly fluent in technology-enhanced learning and appreciative of the opportunities to adapt their offering to reach a wider range of markets. Investment has begun to improve the quality and coherence of the learning and teaching experience, increasingly appealing to a diverse student population with a more inclusive and accessible experience.
  • Students embrace the fully-online experience, seeking greater flexibility and an increasingly personalised learning experience. With most higher education providers adapting and enhancing their technology-enhanced learning and teaching experience in some way, the imperative for leaders and staff in this scenario is to demonstrate the high quality, unique nature of their fully online offering.  
Their "primers" for senior leaders on topics such as digital learning and innovation mention no technologies specifically, but their case studies seem more to be about good 2D VLE implementations and blended learning (thought that was a given) than more innovative approaches, such as immersive 3D, despite such technology being used in HE learning for almost 15 years now! The report has one case study which includes VR - and that is 360degree video in surgical training.

"Mixed Reality" fares slightly better with 3 mentions (two the same). There is also mention of "Virtual Worlds" (twice, duplicates), but not definition of what they mean - so could just infer VLEs. The phrases "In 2030 UK higher education learning and teaching is regarded as world class because it is
attractive to all students, seamlessly spans the physical and virtual worlds and is of the highest
academic quality" and "Students move fluidly across physical, digital and social experiences. The integration of mixed reality technologies strengthens the strong sense of university identity and community, no matter how students choose to participate and learn." are nicely aspirational but could mean anything and the report gives no hint as to how they might be achieved - or even really paints a picture of what they might mean. The closest is "We also do field trips, using virtual reality and 3-D
walkthroughs, to places I’d never be able to go otherwise" - which seems a bit 2020 (at least for some) not 2030.

Interestingly the report does have a bit more to say (and paint) around the idea of students having an "AI" learning guide/coach to help them with their learning, guide them to things they need to know, and possibly support them with alternate routes/methods when they struggle - time to dust off our Virtual Tutor work!

Potentially confusingly the report also links to the QAA's Building a Taxonomy for Digital Learning, which describes immersive digital learning as an "Immersive digital engagement/experience where digital learning and teaching activities are designed by a provider as the only way in which students will engage, both with the programme and with each other. Students will be required to engage with all the digital activities and will not be offered the opportunity to engage with learning and teaching activities onsite at the provider." - so more like language learning by immersion than the use of any immersive technology. The same document has a glossary listing for AR, but none at all for VR!

So all in all it does seem a bit of a business as usual report. Whilst the scenarios are interesting there is very little in there that might encourage educational leaders to look at the potential of immersive 3D/VR - and indeed hardly anything to suggest to them that it might exist. Make better use of your VLE seems the bottom line to me!

I'd originally put a pic of a learner using a VR headset at the top of this post, but the new image seems more appropriate somehow!

If you want to be excited by what Learning and Teaching Reimagined might be, perhaps you'd be better off reading Chapter 4 of Ready Player One?

“During our world history lesson that morning, Mr. Avenovich loaded up a standalone simulation so that our class could witness the discovery of King Tut's tomb by archaeologists in Egypt in AD 1922. The day before, we visited the same spot in 1334 BC and had seen Tutankhamen's empire in all its glory. In my next class, biology, we traveled through a human heart and watched it pumping from the inside just like in that old movie Fantastic Voyage. In art class, we toured the Louvre while all of our avatars wore silly berets. In my astronomy class, we visited each of Jupiter's moons. We stood on the volcanic surface of Io while our teacher explained how the moon had originally formed. As our teacher spoke to us, Jupiter loomed behind her, filling half the sky, its Great Red Spot turning slowly just over her left shoulder.”

You can download a copy of the report from https://www.jisc.ac.uk/learning-and-teaching-reimagined

Of course it's pretty easy to criticise, so let's be a bit more constructive. What would my vision of 2030s learning, and specifically immersive 3D learning, look like?
  • Based on open-source technologies, probably WebGL/OpenXR and their descendants
  • Web-delivered, no downloads
  • Every experience available on desktop/laptop, smartphone/tablet or VR headset, albeit tweaked for each platform, and primarily a student choice as to which to use
  • A consistent identity system (like OpenID/OpenAvatar) across experiences, so you don't need a different account for each one
  • Consistent interfacing to VLEs/LMSs, e.g. via xAPI to manage student access and analytics (see the example statement after this list)
  • A way for teachers, tutors, and students to author their own content, and with the training to do so and the supporting pedagogy to design it well
  • Each school/tutor/student/subject can have its own "home space" where creativity and social interaction are encouraged
  • Exercises authored/encoded in a semantic way so accessible versions can be easily generated (e.g. a sound world for a student with visual impairment), and to more readily generate lesson variations (eg for learning and assessment) for the same location/props
  • A way of publishing experiences/exercises/assets to others, with IP control, eventually providing asset and exercises sets that cover the entire curriculum, and tutor/student developed
  • Some flagship centralised resources, such as a complete virtual hospital
  • A way of linking content together - portals from one experience to another, so the entire system can run on a federated system of servers
  • The ability to have experiences as single or multi-user, to cope with asynchronous and synchronous learning and differing student needs
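To make the xAPI point in the list above a little more concrete, here is a minimal, purely illustrative sketch of the kind of statement an immersive exercise might send to a Learning Record Store. The actor, verb and activity values are invented for this post, not taken from any actual Trainingscapes integration.

    // Hypothetical xAPI statement sent by a 3D exercise to a Learning Record Store
    // (names and URIs below are illustrative only)
    const statement = {
      actor: {
        name: "Jane Student",
        mbox: "mailto:jane.student@example.ac.uk"
      },
      verb: {
        id: "http://adlnet.gov/expapi/verbs/completed",
        display: { "en-GB": "completed" }
      },
      object: {
        id: "https://example.ac.uk/xapi/activities/urinalysis-appointment",
        definition: {
          name: { "en-GB": "Urinalysis appointment exercise" },
          type: "http://adlnet.gov/expapi/activities/simulation"
        }
      },
      result: {
        success: true,
        score: { scaled: 0.85 }
      }
    };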
2030 is only 9 years away, but most of this is doable now if we set our minds to it.

Perhaps it's time to write another White Paper?



 



7 December 2020

New Case Study - A VRLE For Midwifery Students

 


Over the last year or so we've delivered 3 simulations for use by midwifery students at Bournemouth University as part of a Virtual Reality Learning Environment (VRLE) project. The exercises cover:

  • A urinalysis appointment at a clinic with an expectant mother
  • A home visit to do a routine post-birth baby & mother check and a check on safeguarding issues
  • Dealing with a post-partum haemorrhage in a hospital ward
Student responses have included:

  • “It is a different and enjoyable way to learn"
  • “I think it is a good way of demonstrating what it is like to be in someone else’s shoes.”
  • "The VRLE is a really useful learning tool that I hope is continuously used throughout the duration of our course.”

You can view a video presentation on the exercises by BU project lead Denyse King on YouTube:



Detailed results will be published in Denyse's doctoral research but “the evidence gathered from 311 healthcare students demonstrates that VRLE have a statistically significant positive impact on: learning, confidence, clinical intuition, humanisation of healthcare and students’ perceived positive onward impact on service user safety.”

    Our latest case study summarising the project is also now available - just click to download.

    1 December 2020

    Video, eLearning and VR - and the 3 Fidelities

    One design model I've always liked is Prof Bob Stone's "3 fidelities" for designing immersive 3D/VR experiences.  The three fidelities are:

    • Context/Environment Fidelity - how well do the surroundings model the real environment and context - but more is not necessarily better;
• Task Fidelity - how well does the sequence of actions (and their consequences) model the real task;
    • Interaction Fidelity - how well do the manipulation and haptics mirror those experienced in real life.



My favourite example of Interaction Fidelity was talking to medics who gave students bananas to practice their incisions and suturing on - the bananas bruise and tear so easily that it's easy to spot mistakes. Absolutely no context fidelity, and as a part-task trainer no real task fidelity needed, but a far better experience than trying to reproduce the activity in even the latest VR gear!

So I was wondering recently how some of the current methods of distance learning map onto these fidelities. We know that it should be a case of blended learning - not one method to rule them all - but where does each of the current approaches have strengths or weaknesses?

The table below summarises our initial take on how video, video-conferencing, eLearning and immersive 3D/VR map onto each of these fidelities.

    Click image for a larger version

    Note just how low eLearning scores against all of these - it may be OK for more "knowledge" based learning, but for learning that relates to places, physical tasks or interactions it really isn't a good fit.

    Video can be very good for giving the sense of environment, but you can't explore beyond the frame. And whilst there is some great interactive branching video out there that explores the task dimension it is very costly to produce, and next to impossible to change.

Zoom-style video conferencing can leverage some of the advantages of ordinary video - such as video tours or how-tos - but again the learner tends to be a fairly passive participant. Where it scores more strongly is in the discussion/analysis phase of cementing learning.

Immersive 3D/VR can give a range of environmental fidelities, and is very good at modelling task fidelity and giving the user the options and agency to explore different routes. But like the others it falls short on interaction fidelity, although we are beginning to see improvements in that area - whether through haptics or features such as facial expression and body language - both of which contribute to the interaction.

    So we hope that this has given you a different lens with which to look at different types of remote learning technology, and that it will help you choose the appropriate technologies for the learning task at hand.








    25 November 2020

    Case Study - In-Patient Diabetes Management

    ​With Bournemouth University we've been developing a training exercise for students to practice their response to a diabetes situation with an in-patient on the ward. 

Students could access the exercise from a normal PC, tablet or smartphone, or using a Google Cardboard VR headset.


The exercise was used for an A-B comparison between traditional eLearning and the 3D/VR exercise, which found that the 3D/VR group performed "significantly better".

Several research papers are coming, but our favourite student response was “I really enjoyed it. Can we have more scenarios please?”!

    You can read a bit more about the project and findings in our short case-study.


    23 November 2020

    Trainingscapes Design Decisions

     


I thought it might be useful to write a bit about the key design decisions which informed the development of Trainingscapes - and which going forward may give us some novel ways of deploying the system.

    An early PIVOTE exercise in Second Life

    Trainingscapes has its roots in PIVOTE. When we first started developing 3D immersive training in Second Life we rapidly realised that we didn't want to code a training system in Second Life (Linden Scripting Language is fun, but not that scalable). Looking around we found that many Med Schools used "virtual patients" - which could just be a pen-and-paper exercise, but more often a 2D eLearning exercise, and that there was an emerging XML based standard for exchanging these virtual patients called Medbiquitous Virtual Patients (MVP). We realised that we could use this as a basis of a new system - which we called PIVOTE. 

A few years later PIVOTE was succeeded by OOPAL, which streamlined the engine and added a 2D web-based layout designer (which then put props in their correct places in SL), and then we built Fieldscapes/Trainingscapes, which refined the engine further.

The key point in PIVOTE is that the objects which the user sees in the world (which we call props) are, of themselves, dumb. They are just a set of 3D polygons. What is important is their semantic value - what they represent. In PIVOTE the prop is linked to its semantic value by a simple prop ID. The propID is what is in the exercise file, not the prop itself.

Within the exercise file we define how each prop responds to the standard user interactions - touch, collide, detect (we've also added sit and hear in the past), any rules which control this, and what actions the prop should do in response to the interaction. So the PIVOTE engine just receives from the exercise environment a message that a particular propID has experienced a particular interaction, and then sends back to the environment a set of actions for a variety of props (and the UI, and even the environment) to perform in response. The PIVOTE engine looks after any state tracking.
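As a purely illustrative sketch - the field names and action types below are invented for this post, not the actual PIVOTE message format - the exchange between the 3D environment and the engine might look something like this:

    // Hypothetical interaction message from the 3D environment to the PIVOTE engine
    // (propIDs, field names and action types are illustrative only)
    const interactionIn = {
      exerciseId: "midwifery-urinalysis",
      propId: "prop-023",        // e.g. the urine sample pot
      interaction: "touch"       // touch | collide | detect | sit | hear
    };

    // Hypothetical response: a set of actions for props, the UI and the environment
    // to carry out - the environment itself stays "dumb"
    const actionsOut = [
      { propId: "prop-023", action: "playSound", params: { clip: "pickup.wav" } },
      { propId: "prop-101", action: "show" },
      { target: "ui", action: "displayText", params: { text: "Now check the sample." } }
    ];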

    The main Trainingscapes exercise/prop authoring panel

    This design completely separates the "look" of an exercise from its logic. This is absolutely key to Trainingscapes, and in day-to-day use has a number of benefits:

    • We can develop exercises with placeholders, and switch them out for the "final" prop just by changing the 3D model that's linked to the prop ID.
    • We can use the same prop in multiple exercises, and its behaviour can be different in every exercise.
    Original SL version of the PIVOTE flow - but just change SL for any 3D (or audio) environment



More significantly though it opens up a whole bunch of opportunities:
    • The PIVOTE "player" can sit at the end of a simple web API, it doesn't have to be embedded in the player (as we currently do in Trainingscapes so that it can be off line)
    • If a 3D environment can provide a library of objects and a simple scripting language with web API calls then we can use PIVOTE to drive exercises in it. This is what we did with Second Life and OpenSim - perhaps we can start to do this in AltSpaceVR, VirBela/Frame, Mozilla Hubs etc.
    • By the same measure we could create our own WebGL/WebXR/OpenXR environment and let users on the web play Trainingscapes exercises without downloading the Trainingscapes player.
• There is no reason why props should be visual, digital 3D objects. They could be sound objects, making exercises potentially playable by users with a visual impairment - we've already done a prototype of an audio Trainingscapes player.
    • They could even be real-world objects - adding a tactile dimension!
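To make the web-API idea concrete, here is a minimal sketch of what a thin client in any 3D environment might do. The endpoint URL, payload shape and helper function are assumptions for illustration only, not a published Daden API.

    // Hypothetical thin client: forward an interaction to a web-hosted PIVOTE
    // engine and apply whatever actions come back (endpoint and payload invented)
    async function onPropInteraction(exerciseId, propId, interaction) {
      const res = await fetch("https://example.daden.co.uk/pivote/interact", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ exerciseId, propId, interaction })
      });
      const actions = await res.json();   // e.g. [{ propId, action, params }, ...]
      for (const a of actions) {
        applyActionInWorld(a);            // environment-specific: show/hide props, play sounds, etc.
      }
    }

    function applyActionInWorld(action) {
      // In a real client this would call the environment's own scripting API;
      // here we just log the action
      console.log("Would apply action:", action);
    }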

    Sounds like that should keep us busy for a good few DadenU days - and if you'd like to see how the PIVOTE engine could integrate into your platform just ask!


    20 November 2020

    Revisiting AltSpaceVR

     



I haven't really been in AltSpaceVR much since we were doing some data visualisation experiments with its wonderful (but now, I think, discontinued) Enclosures feature - such as this pic of visualising USGS Earthquake data through an A-Frame web page into AltSpaceVR. And this was back before the rescue of AltSpaceVR by Microsoft.

    So it was fun to spend some time in there this week and see what's going on. The initial driver was an ImmerseUK informal meetup, held on the Burning Man site - some nice builds and all very reminiscent of SL circa 2008.



The avatars are less blocky than the originals - still deliberately cartoony, but better proportioned than the VirBela ones. Things I liked (this was all via Oculus Quest):

    • Voice seemed pretty seamless
    • Very much an "SL" type experience
    • Big spaces to explore
    • Point-and-click movement was functional
    • The simple interactables (eg fireworks) were fun
    • Tutorial was good, and quick and easy to install on Quest
    • The barebones Friends system let you message and send Teleport lures
    • The menu disc worked pretty well, and pics were available from the web and didn't need downloading from the Quest
    • There is flying
    Things I was less keen on:
• When you do joystick movement your field of view is blinkered to about half - totally wrecking any immersion and making it hard to keep track of people you are following. This one issue alone makes me reluctant to do much else in the world - which is a pity
    • Rotational movement is in degree blocks not smooth - again destroys immersion and you temporarily lose your sense of location
• When you fly you're still vertical
• No run ability - lots of repeated point-and-clicks are the only answer
So a pity - if it wasn't for those basic movement issues it would be a pretty good place.

The next question for me of course was: can I build there? AltSpaceVR has the concept of Universes and Worlds. The Burning Man thing was a Universe, and then each of the artistic sites you went to from it was a World. I think it's hierarchical, so one World has to be part of one Universe, and access/permissions are controlled by the Universe owner, but I think can then vary between Worlds.

As a user (I think you have to tick the Beta Programme and Worlds boxes) you get a default Universe which contains your "home" World where you start each time - so you can customise that, which is nice. From the web you can then create new Worlds (and I guess new Universes), choosing a starting template (you can't do this from VR). You then go into VR, and choose the World you want to go into and edit.

    The main edit panel


Most templates have the same low-poly look as the rest of AltSpaceVR, and most are single-room size - although some are bigger. From the edit menu you can edit the objects in there, change the skybox, and then place new objects. There is quite a good library of objects already (arranged into sets), and you can import individual objects in glTF format. Textures/images can be loaded too - but a lot of this import looks like it's via the web interface.

I found most of the templates too claustrophobic so I loaded one which is just a skybox, then rezzed a plane object and made it 50m+, so I started to have a nice place to play with. I then tried placing some objects from the libraries to get a feel for the thing. Manipulation in 3D/VR was OK but not as quick as in 2D, and I'd hate to do precision work in there. There's also no scripting at present. Anyway here's a few pics:




The really interesting thing (for us) is that since AltSpaceVR is built on Unity, the object sets are just Unity Asset Bundles, and there is a community-built importer for them - as well as for location templates. And of course Trainingscapes also uses Unity, and Unity Asset Bundles for locations and inventories. So it might "just" be possible to import Trainingscapes assets into AltSpaceVR. Of course with no scripting there's not much we could do, but if there was some simple scripting and a basic API then possibly we could keep the Trainingscapes "engine" on the web, and play the exercise in AltSpaceVR - just as we used to with SL! Of course even though our builds tend to be relatively low poly they are nowhere near as low poly as AltSpaceVR's - so that may be a blocker or just kill performance, but it might be worth a try.

Otherwise it's good to see how AltSpaceVR has come on, and if they fix that blinker issue I might be tempted to come in and build some meaningful spaces - and I'll certainly look out for some decent events and gatherings to join in-world - far closer to an SL experience than VirBela or EngageVR for me (but then that's more what it's designed to be).




    13 November 2020

    PIVOTE on Israeli TV

     


A lovely bit of video has just emerged that shows long-time friend and VR/VW colleague Dr Hanan Gazit demonstrating our old PIVOTE system (the forerunner of Trainingscapes) on Israeli TV about a decade ago! The system ran in Second Life and OpenSim, but crucially the exercise server was on the web, and training content could be edited on the web, not in Second Life. Exercises were defined by an XML file, and that heritage and approach is still in Trainingscapes - keep the exercise definition and player logic independent of the actual 3D (or 2D or audio) environment you use to experience it in. It's a logic that should let us develop a nice WebXR player for Trainingscapes relatively easily.

    You can watch the video here (in Hebrew - although it's got sign language!):



    11 November 2020

    Virtual Humans - Out in Paperback!

     


    David's book on Virtual Humans is now out in Paperback. Buy now at Amazon and all good (open) bookstores!

    10 November 2020

    DadenU Day: NodeJS NLP and Grakn


Over the last few months we've been looking at alternatives to Chatscript and Apache Jena for chatbot development. We really like the chatbot-knowledge graph architecture, but we've found Chatscript too idiosyncratic for our purposes, and lacking any solid scripting/programming capability, and Jena a bit buggy and limited.

    On the Chatbot front we've had a play with all the standard offerings, all Machine Learning (ML) based, including Dialogflow, Microsoft LUIS, RASA, IBM Watson etc. In most cases it wasn't the performance as such that we found wanting but rather the authoring. All of them seemed very fiddly, with no proper "authoring" support (just coding or tables), and in several cases the intent-pattern-action grouping that is the core of the ML part was split over multiple files so authoring was a real pain (with RASA some people were even writing their own Python pre-processors so they could work with a more logical layout that then spat out the RASA format).

For the knowledge graph Neo4j looked a very good solution - it looks fantastic and has lots of capability. It doesn't have a "pure" semantic triples model, but through the neosemantics plug-in it can read in Turtle files and work more like a triple store. But we then ran into deployment issues, with the plug-in not available on the community edition, the server hosting needing certificates for clients as well as the server, and the hosted version being very expensive as you're billed all the time the server is up, not just when it's used. We might come back to it, but at this stage the challenges were too great.

So our front-runners have ended up as NodeJS for the NLP side of the bot, and Grakn for the knowledge graph/triple store, and DadenU gave me a day to play with them a bit more, and get them talking to each other!

    NodeJS

    NodeJS is the "server" form of Javascript. There's a lot to love about NodeJS - it's good old familiar Javascript and feels relatively lightweight - in the sense you're not weighed down with IDEs, modules for everything and lines of code that you just have to trust. There is though a very full module library if you want to use it, and its robust and eminently scalable.

For NLP there are libraries that cover both Machine Learning intent-action style chatbots and traditional part-of-speech NLP analysis. The nice thing is that you don't have to choose - you can use both!

For ML there is a wonderful library released into open source by Axa (the insurance people) called simply nlp.js, which takes a standard RASA/LUIS/Watson ML approach but where everything is controlled by a simple single data object such as:

    {
          "intent": "agent.age",
          "utterances": [
            "your age",
            "how old is your platform",
            "how old are you",
            "what's your age",
            "I'd like to know your age",
            "tell me your age"
          ],
          "answers": [
            "I'm very young",
            "I was created recently",
            "Age is just a number. You're only as old as you feel"
          ]
    }

You can have as many of these as you want, spread them over multiple files, and having everything in one place makes things dead easy.
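For anyone wanting to try it, a minimal sketch of driving nlp.js from NodeJS looks something like the code below (it follows the library's NlpManager pattern; the exact configuration you would want in production will differ):

    // Minimal nlp.js sketch - register an intent and process an utterance
    const { NlpManager } = require('node-nlp');

    async function demo() {
      const manager = new NlpManager({ languages: ['en'] });

      // Same intent / utterances / answers structure as the data object above
      manager.addDocument('en', 'how old are you', 'agent.age');
      manager.addDocument('en', "what's your age", 'agent.age');
      manager.addAnswer('en', 'agent.age', "I'm very young");

      await manager.train();    // trains the ML model
      const result = await manager.process('en', 'tell me your age');
      console.log(result.intent, result.score, result.answer);
    }

    demo();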

Then there's another library called Natural from NaturalNode which provides Parts-of-Speech tagging, stemming and other NLP functions. Adding this to the bot meant that I could:

    • Use Natural to identify discourse act types (eg W5H question etc)
    • Use Natural to extract nouns (which will be the knowledge graph entities)
    • Use Natural to do any stemming I need
    • Use Axa-NLP to identify intent (using "it" and a dummy word in place of the noun) and pass back the predicate or module needed to answer the question
    • Assess NLP confidence scores to decide whether to get the answer from the knowledge graph (if needed) or a fallback response.
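By way of illustration, a rough sketch of the Natural calls behind the tokenise / POS-tag / stem steps above might look like the following (the lexicon and ruleset setup follows the library's documented pattern; picking out nouns as candidate entities is simplified here):

    // Rough sketch of the Natural (NaturalNode) calls used in the flow above
    const natural = require('natural');

    const tokenizer = new natural.WordTokenizer();
    const lexicon = new natural.Lexicon('EN', 'N');   // default category: noun
    const ruleSet = new natural.RuleSet('EN');
    const tagger = new natural.BrillPOSTagger(lexicon, ruleSet);

    const utterance = 'what is the description of Discourse';
    const tokens = tokenizer.tokenize(utterance);

    // POS-tag the tokens and keep the noun-ish ones as candidate graph entities
    const tagged = tagger.tag(tokens).taggedWords;
    const nouns = tagged.filter(t => t.tag.startsWith('N')).map(t => t.token);

    // Stem anything we need to match against stored forms
    const stems = nouns.map(n => natural.PorterStemmer.stem(n));

    console.log(nouns, stems);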

    Grakn

    Grakn is a knowledge graph database which you can install locally or on the cloud. Most of it is open source, just the enterprise management system is paid for. There is a command line database server (core), and a graphical visualiser and authoring tool (Workbase). 

With Grakn you MUST set up your ontology first, which is good practice anyway. It is defined in Graql, a cross between Turtle and SPARQL, and very readable, eg:

person sub entity,
  plays employee,
  has name,
  has description,
  has first-name,
  has last-name;

Data can then be imported from a similar file, or created programmatically (eg from a CSV) - and in fact the ontology can also be defined programmatically.

    $dc isa product,
        has description "Discourse is Daden's chatbot and conversational AI platform which uses a combination of machine-learning, semantic knowledge graphs, grammar and rules to deliver innovative conversational solutions.",
        has benefit "Discourse seperates out the tasks of language understanding from content management, so clients can focus on information management, not linguistics",
        has name "Discourse";

    With the data entered you can then visualise it in Workbase and run queries, visualising the data or the ontology.


    Interfacing NodeJS and Grakn

Luckily there is a nice simple interface from NodeJS to Grakn at https://dev.grakn.ai/docs/client-api/nodejs. This lets you make a Graql call to Grakn, retrieve a JSON structure with the result (often a graph fragment), and then use it as required.

const GraknClient = require("grakn-client");

// Run a read-only Graql query against a Grakn keyspace and return the answers
async function graql(keyspace, query) {
  const client = new GraknClient("localhost:48555");
  const session = await client.session(keyspace);
  const readTransaction = await session.transaction().read();
  const answerIterator = await readTransaction.query(query);
  const response = await answerIterator.collect();
  await readTransaction.close();
  await session.close();
  client.close();
  return response;
}
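A hypothetical call, assuming the product data shown earlier has been loaded into a keyspace called "daden" (the keyspace name and query below are assumptions for this post), might then be:

    // Illustrative only - keyspace name and query are assumptions for this post
    graql('daden', 'match $p isa product, has name "Discourse", has benefit $b; get $b;')
      .then(answers => console.log(answers));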

    (Re)Building Abi

    So I think I've got the makings of a very nice knowledge graph driven chatbot/conversational AI system, leveraging Machine Learning where it makes sense, but also having traditional linguistic NLP tools available when needed. The basic flow is an extension of that presented above:

    • Use Natural to identify discourse act types (eg W5H question etc)
• Use Natural to extract nouns (which will be the knowledge graph entities; we can do our own tokenisation for compound nouns and proper names)
    • Use Natural to do any stemming I need
    • Use Axa-NLP to identify intent (using "it" and a dummy word in place of the noun) - typically the predicate needed of the noun/subject entity in order to answer the question, or a specialised module/function
    • Make the Graql call to Grakn for the object entity associated with the noun/subject and predicate
    • Give the answer back
    Of course having a full programming language like NodeJS available means that we can make things a lot more complex than this.

As a test case of an end-to-end bot I'm now rebuilding Abi, our old website virtual assistant, to see how it all fares, and if it looks good we'll plug this new code into our existing user-friendly back end, so that chatbot authors can focus on information and knowledge, and leave Discourse to look after the linguistics.










    9 November 2020

    A 3-Dimension UX Analysis of VR

     


    Christian Schott and Stephen Marshall of the Victoria University of Wellington have quite a nice paper out in the Australasian Journal of Educational Technology, 2021, 37(1) and on open access at https://ajet.org.au/index.php/AJET/article/view/5166/1681.

    Coming from an Experiential Education (EE) perspective they draw on a UX model by Hassenzahl and Tractinsky (2006) which consists "of three facets guided by a positive focus on creating outstanding emotional experiences, rather than on adopting a mostly instrumental, task-oriented view of interactive products, which is a core criticism of traditional HCI." 

    • "The first facet responds to this criticism of traditional HCI  and  is  termed  beyond  the  instrumental.  It  incorporates  aesthetics,  a  holistic  approach,  and  hedonic  qualities as features of the user experience. "
    • "The second facet builds on affective computing and extends emotion and affect to the user through positivity, subjectivity and the dynamic of antecedents and consequences. "
    • "The third facet is the experiential, which emphasises situatedness and temporality, and is characterised by user experiences that are complex, situated, dynamic, unique, and temporally-bounded".

    The research used a VR experience of a Fijian island being developed to help better understand tourism economics on a Pacific Island, and the experimental subjects were Tourism Management students.

The UX evaluation identified the following clusters of responses, which map onto the three evaluation dimensions as shown above:

    • Sense of Place
    • Sensory Appeal
    • Natural Movement
    • Learning Enrichment
    • Comprehensive Vision
    • Hardware Concerns
    • Screen Resolution
    • Hyperreal Experience
    • Motion Sickness

Pity that Agency wasn't on there, and it would be interesting to see how a 2D/3D evaluation of the same experience would change the ratings and the clusters given.

The authors conclude that "The evolution of VR technology is increasingly enabling high fidelity and motivating EE learning activities to be offered at a relatively low cost, particularly when the logistical, resourcing, and ethical issues of alternative approaches are considered. The nuanced analysis of the identified positive and negative themes, through the lens of Hassenzahl and Tractinsky’s (2006) adapted three UX facets, has provided valuable albeit indicative guidance where to concentrate refinement efforts. However, it is also evident that a great deal of further research on the user experience is required to extend our understanding of full-immersion VR technology as an important opportunity for EE and higher education more broadly."

    Nice paper, well worth a read, and the UX model and dimensions could be a useful one to bear in mind when looking at evaluating other 3D/VR experiences.

    Hassenzahl, M., & Tractinsky, N. (2006). User experience - a research agenda. Behaviour & Information Technology, 25(2), 91-97. https://doi.org/10.1080/01449290500330331


    2 November 2020

    Time to Revisit 2D/3D?

    The COVID epidemic is bringing a whole host of challenges to tutors and trainers in trying to deliver engaging teaching to students and staff – and in particular learning experiences which give the student a sense of context, place and shared experience. 

    With on-site and classroom learning facing the greatest problems, many are naturally turning to remote teaching by Zoom (and similar), eLearning systems, or video. The diagram below briefly highlights key challenges with each of these - and we'll dive into more detail next week.



    Whilst the uptake of headset-based virtual reality (HMD-VR) for both entertainment and training/education has been increasing in the last year or so (and Oculus sold out of the Quest headset early in the pandemic, and its new Quest2 had 5 times the pre-orders of the Quest1), the HMD-VR approach has its own challenges during COVID:

• Sharing headsets within a central location needs rigorous cleaning between uses
• Posting/couriering headsets around remote students is costly and risky
• The cost of equipping every student with their own VR HMD is too expensive for most organisations/institutions.

And this is quite apart from the challenges of finding the space to use them in (although we’ve found the garden is good – at least in summer!) and the discomfort that some users feel.

So given the current situation shouldn't tutors and trainers be re-considering the 2D/3D approach to immersive learning – the Sims/computer game style where you operate in a 3D environment but the experience is delivered on an ordinary 2D screen – be that on a PC/Mac, tablet or even smartphone?

     


    Such systems have the advantages that:

• Compared to Zoom, you place students in the environment and they can learn when they want and at their own pace.
    • Compared to eLearning they have more agency, being able to tackle tasks in a variety of orders and ways, and can even undertake collaborative activities.
    • Compared to video the students again have more agency, more ability to explore the environment, and content can be changed on a regular basis for minimal or no cost.
    • Compared to HMD-VR there are no headsets to buy, clean or distribute, and students can potentially use their own smartphones or tablets alongside their own laptops



    Such 2D/3D approaches appear to have been forgotten and brushed aside in the race to the latest "shiny" VR experiences, despite the fact that whilst there is a lot of evidence comparing traditional eLearning to VR there is little that shows significant benefits from VR as against 2D/3D - and even some which shows a negative "benefit"!

    So isn't it time to re-assess the 2D/3D approach (and perhaps find a better name for it!)?

And of course with Trainingscapes you can generate content for 2D/3D and VR at the same time, so if the situation changes and VR becomes more doable (for some students, some of the time) then you'll be ready.


    26 October 2020

    Live - Virtual - Constructive - Autonomous? - A Framework for Training Approaches

In military and defence training circles there is a very commonly used acronym, "LVC". It stands for Live, Virtual, Constructive:

    • Live training: Real soldiers exercising against real soldiers on a training ground (and more broadly real soldiers practicing with real kit anywhere physical)
    • Virtual training: Real soldiers exercising against real soldiers in a simulation, probably (but not necessarily) digital (and again more broadly real soldiers practicing with virtual kit anywhere non-physical, would also cover Fortnite!)
    • Constructive training: Real soldiers exercising against so-called "computer generated forces" (CGF) with a simulator (think almost any traditional first person shooter)
When I first came across the model I struggled to remember what was what, and so tried to put it into some sort of 2x2 matrix. The dimensions I eventually decided on were:
    • Whether the training environment was the real physical world, or a digital (or other) simulation of it, and
    • Whether the opposition was being controlled by a human or a computer
One of the reasons I struggled was that nobody ever talked about the 4th space on the matrix - where you train in a physical environment but the opposition is computer controlled - which I've labelled Autonomous - hence LVCA. This is odd since this mode is not as far-fetched as it sounds: fighter pilots have been practicing against UAVs for a while (although these may be remote-controlled rather than autonomous), and there is some emerging work on mobile semi-autonomous robot targets (see https://www.youtube.com/watch?v=tlqMlPQpeAo).

    So a complete matrix might look like this:


Now whilst it emerged from the military and defence world, it does seem to me that LVCA gives a useful model for thinking about skills and process type training within the civilian world. Of course we need to think about "non-player characters" (actors? - who may represent clients, patients, customers or colleagues) instead of the "opposition", but I think the model holds up pretty well:

    • Live training: Students learning with real people/kit – e.g. role-play, practical hands-on
    • Virtual training: Students learning with real people via a simulator (e.g. Trainingscapes) (or maybe Zoom) (virtual role-play)
    • Constructive training: Students learning by interacting with computer controlled NPCs in a simulation
    • Autonomous training: Students learning by interacting with computer controlled physical entities (e.g. hi-spec mannequins)

    As ever the key is that these approaches aren't competing, it's about finding the right blend of each given the subject, students, situation and budget in order to deliver the best possible training - and to help provide follow-up to keep that training fresh.

    Hopefully this LVCA matrix will give you some fresh insights into the training you are trying to deliver, and perhaps open up some new ideas as to how it could be delivered for the benefit of your organisation and, of course, your students.







    23 October 2020

    Daden Newsletter - October 2020

     


    In this latest issue of the Daden Newsletter we cover:

    • COVID19 and 3D Immersive Learning (still!) -  With COVID19 showing no signs of abating delivering corporate training and academic syllabuses out into 2021 remains challenging to say the least. Where do "traditional" approaches such as Zoom, eLearning and video fall short, what issues does a VR route face, and how can 2D/3D immersive learning improve the mix?
    • WebXR - With Facebook introducing stricter account and content controls on Oculus headsets WebXR offers a way to deliver VR content to VR headsets without needing to download anything. The same approach can also deliver 2D/3D content to an ordinary web browser. A promising way forward?
    • Plus snippets of other things we've been up to in the last 6 months - like starting work on a virtual hospital ward and experimenting with VR in the garden!

    Download your PDF copy of the newsletter here.

    We hope you enjoy the newsletter, and do get in touch if you would like to discuss any of the topics raised in the newsletter, or our products and services, in more detail!


    19 October 2020

    XR 2x2 Segmentation

    (click image for better resolution)

    Trying to unpick all the different emerging systems in the "extended reality" space such as Augmented Reality (AR), Mixed Reality (MR), Virtual Reality (VR) and then the "traditional" approaches such as 3D video games and virtual worlds can be a challenge, and we keep trying to find new ways to understand and communicate it.

    In the graphic above we've tried to show the difference between the systems in terms of the access device and what they are trying to do to reality.

In one dimension we have whether the system is trying to completely replace what the user is seeing/experiencing - they only see the digital world - or whether it is augmenting it - adding new content into what the user otherwise sees as the physical world.

In the other dimension we have whether the system is accessed through a flat screen or a headset. The flat screen might be an ordinary PC screen, or a tablet or smartphone, whilst the headset might be a VR one or an MR one. A key point is that for the AR/MR solutions the access device has some way of also showing the physical world which is being manipulated - via the phone camera for AR, or a transparent visor for MR.

    These two dimensions then give us our four main use cases:

• AR - which overlays reality and is viewed through a (conventional) flat screen
• MR - which overlays reality and is viewed through a headset (visor)
• 2D/3D systems like Second Life/computer games - which completely replace reality and are viewed through a flat 2D screen
• HMD VR - which completely replaces reality and is viewed through a headset (screen)
A key point is that the dimensions are about how we access the experience and what we are trying to achieve. They are not about the software technology being used to generate the environments, or how it's branded!

    Let us know if you've any comments on this segmentation, and/or if you find it of use.




     


    16 October 2020

    Video: David's talk on Virtual Personas at AI Tech North

    David's talk at AI Tech North on "Enriching Virtual Humans through the Semantic Web and Knowledge Graphs" is now available on video:



    It's a 20min watch, and followed by another interesting session on analysis of language in social media use.



    7 October 2020

MS&T Features Daden Thoughts on the impact of COVID19 on Virtual Training


Military Simulation and Training Magazine Editor Andy Fawkes has published a video drawing on the views of S&T industry leaders to discuss how the pandemic has accelerated existing digital trends and why this is the time to reimagine the management and delivery of simulation and training. Prompted by some of our posts here on the relative merits and experiences of 3D and VR immersive training, and the impact of COVID on the market, Andy includes some of our thoughts in the video.

    You can view the video at: https://www.halldale.com/articles/17613-digital-trends-accelerated


    MS&T Editor Andy Fawkes



    5 October 2020

    World Space Week - WebXR Solar System playground

     


Normally at this time of the year we'd be down at The Hive in Worcester for the BIS World Space Day event - the largest of its kind in the UK. Of course with COVID19 that's not happening, so here instead is a little "work in progress" - a Solar System playground in WebXR. You can't view this in an ordinary desktop browser, but if you have a VR headset that is WebXR compatible (most of them are now) you can use the headset's web browser to go to:

    • https://www.daden.co.uk/webxr

    Then follow the Solar System Playground link from that page to the WebXR page, click on the Enter VR button, and immerse yourself in the Solar System.

    This is a quick tour of what you should see and what you can do:




    The main features are:

    • Set sizes of planets to linear or log scale
    • Set orbit sizes to linear or log scale
    • Just grab hold of a planet to bring it up close to look at it, and turn it over in your "hands"
    • Display labels on planets and/or audio naming as you click on them
    • Randomise planet positions, sort them into order, and then have your solution scored
    • Hide the floor (not for those with vertigo!)
    • Move with joystick or by clicking on the footsteps
• Set sizes and orbit sizes to "real" values, scaled to the Sun. This makes things VERY small and VERY spread out. There's a guideline to help you find all the planets, and Pluto (and maybe Neptune) might actually be outside of the "star" bubble - it feels very otherworldly that far out!

    Note that this is still a work-in-progress and may have a few bugs, but with World Space Week happening and lots of us in some sort of lockdown we thought it a good time to get it out!

Any bug reports or suggestions for improvements in the comments, or by email to wsw@daden.co.uk.

    Enjoy, and hopefully we can meet physically for World Space Week next year!




    2 October 2020

    Daden make the Midlands Tech 50 for the 2nd Year in a row

     

We're pleased to announce that we've made it onto the Midlands Tech 50 list for the 2nd year in a row. The Tech50 awards, organised by BusinessCloud, celebrate the most innovative new tech companies for consumers, business and society at large.


    30 September 2020

    Daden MD talking about Virtual Humans


    Earlier during lockdown Daden MD David Burden recorded a short talk on virtual humans inspired by his book on the subject published last year.

    Taylor and Francis have now posted the talk on their web site in their AI Knowledge Hub strand. You can view it at https://aibusiness.com/video.asp?section_id=803&doc_id=763252 or below:


    David's co-author, Prof Maggi Savin-Baden has a similar piece on Digital Afterlife at https://aibusiness.com/video.asp?section_id=803&doc_id=764214

    28 September 2020

    Fire Extinguisher Exercise Walkthrough

     


We've just posted up a video walkthrough of our fire extinguisher exercise, which shows you how different fire extinguisher types are applicable to different types of fires. We find it's a great application to quickly get people immersed in 3D/VR immersive training and to "have a go" at the technology, as fires and fire extinguishers are something we can all relate to. This is not intended as a fire training exercise, but rather an opportunity to see how 3D/VR immersive training can be applied to a topic.



    The exercise was authored using our Trainingscapes authoring system, with a "Fire" widget to control the relationship between the fire types and extinguisher types.

    Just get in touch if you'd like to talk through how this sort of immersive (and remote) learning could be applied in your organisation, particularly at a time when it remains challenging to do face-to-face training and teaching.




    7 September 2020

    Aura Blogpost - "From Virtual Personas to a Digital Afterlife"

     




David has a blog post on "From Virtual Personas to a Digital Afterlife" on the blog of the new Aura website/service - "designed to help you prepare your memories, important information and connect with loved ones before you die" - part of the burgeoning digital afterlife/memorial sector, but also looking to open up the conversations ahead of time.

    You can read the blog post in full at: https://www.aura.page/articles/from-virtual-personas-to-a-digital-afterlife/

    It's also interesting to compare some of the ideas in the post with Channel 4's recent documentary on Peter: The Human Cyborg as there certainly seems to be some common ground worth exploring.





    27 August 2020

    Virtual Reality - A Future History?

     A few days ago I got together with a group of experts in 3D immersive learning and virtual reality and together we brainstormed what we saw as the major challenges facing 3D immersive learning/training and VR over the next decade or so.

    The graphic below summarises our thoughts.

    In the short-term (0-3 years) we identified the big challenges as being:

    a) To make access easy and seamless. If people are going to use these environments they’ve got to be dead easy to use, and for organisations dead easy to manage. Oculus Quest is a good step forward (totally self-contained, automatic roomscale sensing), as is WebXR (no need to download to a headset or PC). But even within the experience, the “grammar” of how you navigate and use and interact with the space has got to be self-evident and common-sense. And no crashes or glitches or other “odd” happenings otherwise any sense of immersion is totally lost. And they need to integrate with your other digital presences - be that your desktop or work/social media accounts.

b) To make the applications desired. People and organisations have got to want to use this stuff. There has got to be user pull. For entertainment the experience has got to be worth all the hassle of setting up and clearing a space and totally isolating yourself from reality for a while, otherwise a film on Netflix or a game on Steam is going to win out. For organisations the benefits of virtual learning and training have got to be clear and well understood. Yes there are lots of different case studies that show the benefits, but they aren’t well distributed or consolidated, and often aren’t too rigorous when comparing to BOTH of the main alternatives (physical training and 2D eLearning).


    In the medium-term (5-10 yrs) we identified two major challenges:

a) Mobility. Immersive learning needs to be available where and when people want to use it - it needs to be mobile. Yes an Oculus Quest is pretty mobile (I’ve even used it in my garden!), but in normal, non-COVID times it's not feasible to use it on the bus or train into college, or sat in a cafe (locked away from the outside world with a purse or laptop ready to be stolen by your side), or sat on the sofa with half an eye on Love Island. Head Mounted Display (HMD) VR needs to be complemented by mobile/tablet (and even laptop) based versions of the same experience. Yes there are advantages to HMDs (visceral immersion, scale, isolation), but there are also disadvantages (social isolation, nausea, convenience, cost). The user should decide HMD or non-HMD, not the software developer or trainer.

b) Integration. We need to move away from walled gardens and towards standards-based environments and applications. VR today is a bit like the pre-web Internet: different walled gardens, different access devices, multiple accounts. Users don’t want that - they want to hear about something and just access it with whatever device they have to hand, and with their long-standing personalised avatar. WebXR is helping here, and yes, even the web still has many of these issues, and I know that asset-wise we have broad portability between the platforms, but not in the scripting of the experience, or the management of the data associated with it. And does the “virtual world” approach of Second Life and Snow Crash offer a better model than the “app” approach of most current offerings? Many have talked about the Metaverse or Multiverse - being able to seamlessly (that word again) move from one virtual environment to another. There have been metaverse initiatives in the past - is it time for another one?


    For the long term the group again identified 2 main issues:

a) Radical Interfaces. The VR HMD is a great step forward, but they are still large, clunky, moderately uncomfortable for prolonged use, and not very portable. I’m pretty convinced that we need another big step change in HMDs before they become real consumer items that everyone owns in the way that they currently (probably) have a tablet. What I have in mind is more like the holobands of Caprica than the Quest. Something that integrates VR, AR and MR, lets us readily see the physical world, tracks our hands, and, perhaps most important, manages to give us the “feeling” of locomotion, and perhaps the other senses. My guess is that this is as much a neurological interface as it is a visual one, and hence probably a decade or more out.

b) Societal Change. VR is not just impacted by attitudes to it, but could also impact society itself. COVID has made us re-evaluate remote working and remote relationships. Popular media is full of stories based around virtualised people and places (Devs and Upload being just the latest examples). Even a decade ago virtual worlds were being used by hostile actors, and I doubt today’s environments are any different. How would a Caprica-style virtual world, readily accessible by all, and with the capacity to do almost anything, affect the way we all live and interact? Would it be for good or ill - and would it let us weather a second COVID that much better?


    So there you are, 6 perspectives, 2 each for the short, medium and long term. You may not agree with all the details, but I hope that you can appreciate the general thrust of each, and each offers a timely call to action for the VR community.


    Now scroll down a bit.











OK, I told a little white lie at the beginning there. The gathering of immersive learning experts wasn’t a few days ago, it was about 3,285 days ago, at the ReLive (Research into Learning in Virtual Environments) Conference held at the Open University way back in 2011. Here's the original graphic - and you can find a fuller presentation I did later that year based upon it at https://www.slideshare.net/davidburden/virtual-worlds-a-future-history



    But I think you’ll agree that the general vision and issues being raised back in 2011 differ little from what a similar analysis would yield in 2020 or even early in 2021 - ten years later! Some of the specifics might be different, and my commentary above reflects a contemporary take, but the big picture items are pretty much the same:

    • This stuff still isn’t seamless, although with Quest and WebXR we’re taking some great strides
    • The entertainment and business case is still struggling to be made. I know that Quests sold out early in lockdown, but I’ve also seen numerous reviews of technology to help with lockdown that haven’t even mentioned VR and immersive 3D.
    • We’ve actually made great strides in mobility if you consider non-HMD VR, I can now run avatar style experiences quite happily on my phone or tablet if they’re not too high-rez, and Quest again helps with instant set-up, but it’s still much of an either-or choice.
• Integration seems further away than ever as VirBela, Immerse, AltSpaceVR, Somnium Space, Hubs etc all compete for users.
• Radical interfaces is actually the one we achieved first - I was in SL on an Oculus DK1 in 2013, only 2 years after ReLive2011 - but as mentioned above there is still a long way to go for the ordinary consumer.
• Societal change may be driven as much by COVID (and the fear of similar future outbreaks) and climate change, but VR is having far more of an impact on popular culture than it did a decade ago, and that triumvirate of VR capability, external pressures and cultural exemplars may well be driving change more quickly - although perhaps not as quickly as we thought back in 2011.


So I hope you’ll forgive my little deception, but I thought it might be a nice way not only to illustrate how many things have stayed the same despite the apparent “improvements” in technology, but also to highlight how much current VR practitioners can learn from the work on immersive environments that was being done a decade ago. For inspiration just check out the agenda and papers from ReLive11 (and the earlier ReLive08), still available on the OU website.

    10 August 2020

    Virtual Reality vs Immersive 3D - the Search for the Right Words!



As a company that has been creating immersive experiences for over 15 years we find that the contemporary obsession with headset-based virtual reality (HMD-VR) is often at risk of a) forgetting what valuable work has been done in the past in non-HMD immersive 3D environments and b) not highlighting to potential clients that a lot of the benefits of "VR" can be obtained without an HMD, and that not having the funds for, or access to (esp. in COVID times), HMDs does not need to stop a VR project in its tracks.


    One problem is that we just don't have the right terminology, and what terminology we have is constantly changing.

    "VR" has almost always been assumed to mean HMD-based experiences - using headsets like the Quest, Rift or Vive - or even their forerunners like the old Virtuality systems.

But in that fallow period between Virtuality and the Oculus DK1, 3D virtual worlds such as Second Life, There.com and ActiveWorlds were enjoying a boom-time, and often found themselves labelled as "virtual reality".


One problem is that there seems to be no commonly accepted term for the classic Second Life (or even Fortnite) experience, where you can freely roam a 3D environment but you have a 3rd (or sometimes 1st) person avatar view of it. It's certainly not 2D. It's sort of 3D - but not as 3D as the stereoscopic experience using a VR-HMD. I've seen 2D/3D or "3D in 2D" but both are cumbersome. We sometimes refer to it as "first-person-shooter" style (but that doesn't go down well with some audiences), or "The Sims-like".

    There's also a qualitative difference between say a 3D CAD package where you're rotating a 3D model on screen (called an allocentric view) and the experience of running through Fortnite, Grand Theft Auto, or Second Life (called an egocentric view).  You feel "immersed" in the latter group, not just because of the egocentric view point but also because of the sense of agency and emotional engagement.

At a recent Engage event I went to, I'd guess (from avatar hand positions) that about 50% of attendees were in a VR-HMD and 50% were using the immersive-3D desktop client. So should it be described as a VR or an immersive 3D system? Our Trainingscapes is the same - we can have users on mobile, PC and VR-HMD devices all in-world, all interacting. And Second Life is often "dismissed" as not being "proper VR" - but when the Oculus DK1 was around I went into SL in VR - see below - so did it stop being VR when they went from DK1 to DK2?


So if a system can support both - is it a 2D/3D system or a VR system? That is why we tend to refer to both the 2D/3D approach and the VR-HMD approach as being "immersive 3D" - as long as you have a sense of agency and presence and the egocentric view. It's the experience and not the technology that counts.

    And don't get me started on what "real" and "virtual" mean!

No wonder clients get confused if even we can't sort out what the right terms are, and it's far too late for some de jure pronouncement. But perhaps we could all try and be a little bit more precise about what terms we do use, and whether they are just referring to the means by which you access an experience (e.g. VR-HMD) or to the underlying experience itself (such as a virtual world or virtual training exercise).

    In later posts I'll try and look more closely at the relative affordances of the 2D/3D approach (better name please!) vs the VR approach, what researchers experiences of virtual worlds can teach us about VR, and also how "virtual worlds" sit against other immersive 3D experiences.