11 January 2021

Immersive 3D Visual Analytics in WebXR


Whilst this application was originally built as a side project, it shows some principles and ideas which are probably worth sharing here. The application uses WebXR to display data in VR. The data is fetched from a web service, and the user can choose which data parameters are mapped to two of the point features - in this case size and height. The key thing about WebXR is that it lets this visualisation run in a VR headset without any downloads or installs. Just point the in-built browser at the URL, click on Enter VR, and the browser fades away and you are left in a 3D space with the data.
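
For those wondering what the no-install "Enter VR" step involves under the hood, it is essentially a couple of WebXR Device API calls. Here is a minimal sketch - the 'enter-vr' button id and the hand-off to a renderer such as three.js are illustrative assumptions, not our actual code:

```typescript
// A minimal sketch of the "Enter VR" flow in WebXR.
const xr = (navigator as any).xr; // WebXR Device API, where the browser supports it

async function enterVR(): Promise<void> {
  if (!xr || !(await xr.isSessionSupported('immersive-vr'))) {
    console.log('No immersive-vr support - staying in the 2D page view');
    return;
  }
  // requestSession must be called from a user gesture, e.g. a button click
  const session = await xr.requestSession('immersive-vr');
  // Hand the session to the rendering engine, e.g. with three.js:
  //   renderer.xr.setSession(session);
  session.addEventListener('end', () => console.log('Left VR'));
}

document.getElementById('enter-vr')?.addEventListener('click', () => {
  void enterVR();
});
```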

The data itself is the star map used in the classic GDW Traveller game (think D&D in space) - but the data isn't really the point here, the novel interaction is.

To try out our Data Visualisation in WebXR just use the browser in your WebXR compatible VR headset to get to this page, and then click on the link below. Once the new page loads just click on the Enter VR button. 

Traveller 3D Map

Note that whilst the page will open in an ordinary browser you won't be able to do much. But watch the video here to get an idea:


You can also read more about 3D Immersive Visual Analytics and the differences between allocentric and egocentric visualisation in our white paper.

21 December 2020

Daden Video Presentation(s) at the GIANT Health Conference

 





David presented at the GIANT Health Conference on 2nd Dec, giving an overview of our increasing number of health-related immersive 3D/VR projects. Due to some confusion in the GIANT arrangements there are TWO versions of the 20 min presentation and Q&A session, which you can watch by clicking on the images above; the second has an interesting chat about future VR ecosystems at the end of the Q&A.


(Both videos should start about 10-15 secs before David's piece)






9 December 2020

New JISC Report - Learning and Teaching Reimagined (not?)



JISC - the body which advises FE/HE on the use of technology - has issued an important new report on Learning and Teaching Reimagined.

"The flagship learning and teaching reimagined report is the result of a five-month higher education initiative to understand the response to COVID-19 and explore the future of digital learning and teaching. Learning and teaching reimagined, with the support of its advisory board, and more than 1,000 HE participants, provides university leaders with inspiration on what the future might hold, guidance on how to get there and practical tools to develop your plans."

As a member of JISC's Virtual Reality Framework we were fascinated to see what the report has to say about immersive learning technologies.

The report identifies 4 possible future scenarios - intended more to drive discussion than as predictions:
  • A very familiar learning experience on campus, for students who have already adapted to a socially-distanced world. Important and distinctive here is that institutional resilience - the ability to adapt to significant change and move operations fully online as and when required - is the sole driver for the use of technology-enhanced learning and teaching. Learning and teaching is predominantly face to face and, as such, is far less reliant on technology-enhanced learning.
  • Technology-enhanced learning supplements a ‘traditional’, lecture-led, synchronous learning and teaching experience. It feels familiar to students, while offering a broader range of learning opportunities. Significantly it offers confidence in the resilience of the university experience. Leaders are increasingly aware of the benefits of technology-enhanced learning and recognise the efficiencies gained from supplementing their preferred campus-based models.
  • A ‘step change’ in the higher education offering. Students experience the flexibility and convenience of learning, increasingly enabling adaptive and self-directed learning, with more active learning opportunities. Leaders are increasingly fluent in technology-enhanced learning and appreciative of the opportunities to adapt their offering to reach a wider range of markets. Investment has begun to improve the quality and coherence of the learning and teaching experience, making it increasingly appealing to a diverse student population through a more inclusive and accessible experience.
  • Students embrace the fully-online experience, seeking greater flexibility and an increasingly personalised learning experience. With most higher education providers adapting and enhancing their technology-enhanced learning and teaching experience in some way, the imperative for leaders and staff in this scenario is to demonstrate the high quality, unique nature of their fully online offering.  
Their "primers" for senior leaders on topics such as digital learning and innovation mention no technologies specifically, and their case studies seem to be more about good 2D VLE implementations and blended learning (though I thought that was a given) than about more innovative approaches, such as immersive 3D, despite such technology being used in HE learning for almost 15 years now! The report has one case study which includes VR - and that is 360-degree video in surgical training.

"Mixed Reality" fares slightly better with 3 mentions (two the same). There is also mention of "Virtual Worlds" (twice, duplicates), but not definition of what they mean - so could just infer VLEs. The phrases "In 2030 UK higher education learning and teaching is regarded as world class because it is
attractive to all students, seamlessly spans the physical and virtual worlds and is of the highest
academic quality" and "Students move fluidly across physical, digital and social experiences. The integration of mixed reality technologies strengthens the strong sense of university identity and community, no matter how students choose to participate and learn." are nicely aspirational but could mean anything and the report gives no hint as to how they might be achieved - or even really paints a picture of what they might mean. The closest is "We also do field trips, using virtual reality and 3-D
walkthroughs, to places I’d never be able to go otherwise" - which seems a bit 2020 (at least for some) not 2030.

Interestingly, the report does have a bit more to say - and paint - around the idea of students having an "AI" learning guide/coach to help them with their learning, guide them to things they need to know, and possibly support them with alternative routes/methods when they struggle - time to dust off our Virtual Tutor work!

Potentially confusingly, the report also links to the QAA's Building a Taxonomy for Digital Learning, which describes immersive digital learning as an "Immersive digital engagement/experience where digital learning and teaching activities are designed by a provider as the only way in which students will engage, both with the programme and with each other. Students will be required to engage with all the digital activities and will not be offered the opportunity to engage with learning and teaching activities onsite at the provider." - so more like language learning by immersion than the use of any immersive technology. The same document has a glossary entry for AR, but none at all for VR!

So all in all it does seem a bit of a business-as-usual report. Whilst the scenarios are interesting, there is very little in there that might encourage educational leaders to look at the potential of immersive 3D/VR - and indeed hardly anything to suggest to them that it might even exist. "Make better use of your VLE" seems to be the bottom line to me!

I'd originally put a pic of a learner using a VR headset at the top of this post, but the new image seems more appropriate somehow!

If you want to be excited by what Learning and Teaching Reimagined might be, perhaps you'd be better off reading Chapter 4 of Ready Player One?

“During our world history lesson that morning, Mr. Avenovich loaded up a standalone simulation so that our class could witness the discovery of King Tut's tomb by archaeologists in Egypt in AD 1922. The day before, we visited the same spot in 1334 BC and had seen Tutankhamen's empire in all its glory. In my next class, biology, we traveled through a human heart and watched it pumping from the inside just like in that old movie Fantastic Voyage. In art class, we toured the Louvre while all of our avatars wore silly berets. In my astronomy class, we visited each of Jupiter's moons. We stood on the volcanic surface of Io while our teacher explained how the moon had originally formed. As our teacher spoke to us, Jupiter loomed behind her, filling half the sky, its Great Red Spot turning slowly just over her left shoulder.”

You can download a copy of the report from https://www.jisc.ac.uk/learning-and-teaching-reimagined

Of course it's pretty easy to criticise, so let's be a bit more constructive. What would my vision of 2030s learning, and specifically immersive 3D learning, look like?
  • Based on open-source technologies, probably WebGL/OpenXR and their descendants
  • Web-delivered, no downloads
  • Every experience available on desktop/laptop, smartphone/tablet or VR headset, albeit tweaked for each platform, and primarily a student choice as to which to use
  • A consistent identity system (like OpenID/OpenAvatar) across experiences, so you don't need a different account for each one
  • Consistent interfacing to VLEs/LMSs, e.g. via xAPI to manage student access and analytics (see the sketch after this list)
  • A way for teachers, tutors, and students to author their own content, and with the training to do so and the supporting pedagogy to design it well
  • Each school/tutor/student/subject can have its own "home space" where creativity and social interaction is encouraged
  • Exercises authored/encoded in a semantic way so accessible versions can be easily generated (e.g. a sound world for a student with visual impairment), and to more readily generate lesson variations (eg for learning and assessment) for the same location/props
  • A way of publishing experiences/exercises/assets to others, with IP control, eventually providing asset and exercises sets that cover the entire curriculum, and tutor/student developed
  • Some flagship centralised resources, such as a complete virtual hospital
  • A way of linking content together - portals from one experience to another, so the entire system can run on a federated system of servers
  • The ability to have experiences as single or multi-user, to cope with asynchronous and synchronous learning and differing student needs
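
As an illustration of the xAPI point above, here is a minimal sketch of how an immersive exercise might report a completion to an LRS - the endpoint, activity ID and credentials are all hypothetical:

```typescript
// A minimal sketch of an immersive exercise reporting a completion to an
// LRS over xAPI. The endpoint, activity ID and credentials are hypothetical.
interface XapiStatement {
  actor: { mbox: string; name: string };
  verb: { id: string; display: Record<string, string> };
  object: { id: string; definition: { name: Record<string, string> } };
}

async function reportCompletion(studentEmail: string): Promise<void> {
  const statement: XapiStatement = {
    actor: { mbox: `mailto:${studentEmail}`, name: 'Example Student' },
    verb: {
      // A standard ADL verb - the LRS just stores what it is sent
      id: 'http://adlnet.gov/expapi/verbs/completed',
      display: { 'en-GB': 'completed' },
    },
    object: {
      id: 'https://vle.example.ac.uk/xapi/activities/virtual-ward-round',
      definition: { name: { 'en-GB': 'Virtual Ward Round exercise' } },
    },
  };

  await fetch('https://lrs.example.ac.uk/xapi/statements', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Experience-API-Version': '1.0.3', // required by the xAPI spec
      Authorization: 'Basic <credentials>', // however the LRS is provisioned
    },
    body: JSON.stringify(statement),
  });
}
```
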
2030 is only 9 years away, but most of this is doable now if we set our minds to it.

Perhaps it's time to write another White Paper?



 



7 December 2020

New Case Study - A VRLE For Midwifery Students

 


Over the last year or so we've delivered 3 simulations for use by midwifery students at Bournemouth University as part of a Virtual Reality Learning Environment (VRLE) project. The exercises cover:

  • A Urinalysis appointment at a clinic with an expectant mother
  • A home visit to do a routine post-birth baby & mother check and a check on safeguarding issues
  • Dealing with a post-partum haemorrhage in a hospital ward
Student responses have included:

  • “It is a different and enjoyable way to learn.”
  • “I think it is a good way of demonstrating what it is like to be in someone else’s shoes.”
  • “The VRLE is a really useful learning tool that I hope is continuously used throughout the duration of our course.”

You can view a video presentation on the exercises by BU project lead Denyse King on YouTube:



Detailed results will be published in Denyse's doctoral research, but "the evidence gathered from 311 healthcare students demonstrates that VRLE have a statistically significant positive impact on: learning, confidence, clinical intuition, humanisation of healthcare and students’ perceived positive onward impact on service user safety."

Our latest case study summarising the project is also now available - just click to download.

1 December 2020

Video, eLearning and VR - and the 3 Fidelities

One design model I've always liked is Prof Bob Stone's "3 fidelities" for designing immersive 3D/VR experiences. The three fidelities are:

  • Context/Environment Fidelity - how well do the surroundings model the real environment and context - though more is not necessarily better;
  • Task Fidelity - how well does the sequence of actions (and their consequences) model the real task;
  • Interaction Fidelity - how well do the manipulation and haptics mirror those experienced in real life.



My favourite example of Interaction Fidelity was talking to medics who gave students bananas to practise their incisions and suturing on - the bananas bruise and tear so easily that it's easy to spot mistakes. Absolutely no context fidelity, and a part-task trainer so no real task fidelity needed, but a far better experience than trying to reproduce the activity in even the latest VR gear!

So I was wondering recently how some of the current methods of distance learning map onto these fidelities. We know that it should be a case of blended learning - not one method to rule them all - but where does each of the current approaches have its strengths and weaknesses?

The table below summarises our initial take on how video, video-conferencing, eLearning and immersive 3D/VR map onto each of these fidelities.

Click image for a larger version

Note just how low eLearning scores against all of these - it may be OK for more "knowledge" based learning, but for learning that relates to places, physical tasks or interactions it really isn't a good fit.

Video can be very good for giving the sense of environment, but you can't explore beyond the frame. And whilst there is some great interactive branching video out there that explores the task dimension, it is very costly to produce, and next to impossible to change.

Zoom-style video conferencing can leverage some of the advantages of ordinary video - such as video tours or how-tos - but again the learner tends to be a fairly passive participant. Where it scores more strongly is in the discussion/analysis phase of cementing learning.

Immersive 3D/VR can give a range of environmental fidelities, and is very good at modelling task fidelities and giving the user the options and agency to explore different routes. But like the others it falls down on interaction fidelity, although we are beginning to see improvements in that area - whether through haptics or features such as facial expression and body language, both of which contribute to the interaction.

So we hope that this has given you a different lens with which to look at different types of remote learning technology, and that it will help you choose the appropriate technologies for the learning task at hand.








25 November 2020

Case Study - In-Patient Diabetes Management

With Bournemouth University we've been developing a training exercise for students to practise their response to a diabetes situation with an in-patient on the ward.

Students could access the exercise from a normal PC, tablet or smartphone, or using a Google Cardboard VR headset.


The exercise was used for an A-B comparison between traditional eLearning and the 3D/VR exercise, which found that the students in the latter group performed "significantly better".

Several research papers are coming, but our favourite student response was "I really enjoyed it. Can we have more scenarios please?"!

You can read a bit more about the project and findings in our short case study.


23 November 2020

Trainingscapes Design Decisions

     


I thought it might be useful to write a bit about the key design decisions which informed the development of Trainingscapes - and which, going forward, may give us some novel ways of deploying the system.

An early PIVOTE exercise in Second Life

Trainingscapes has its roots in PIVOTE. When we first started developing 3D immersive training in Second Life we rapidly realised that we didn't want to code a training system in Second Life itself (Linden Scripting Language is fun, but not that scalable). Looking around we found that many Med Schools used "virtual patients" - which could just be a pen-and-paper exercise, but more often a 2D eLearning exercise - and that there was an emerging XML-based standard for exchanging these virtual patients called Medbiquitous Virtual Patients (MVP). We realised that we could use this as the basis of a new system - which we called PIVOTE.

A few years later PIVOTE was succeeded by OOPAL, which streamlined the engine and also added a 2D web-based layout designer (which then put props in their correct places in SL), and then by Fieldscapes/Trainingscapes, which refined the engine further.

The key point in PIVOTE is that the objects which the user sees in the world (which we call props) are, of themselves, dumb. They are just a set of 3D polygons. What is important is their semantic value - what they represent. In PIVOTE the prop is linked to its semantic value by a simple prop ID. It is the prop ID that lives in the exercise file, not the prop itself.

Within the exercise file we define how each prop responds to the standard user interactions - touch, collide, detect (we've also added sit and hear in the past) - any rules which control this, and what actions the prop should perform in response to the interaction. So the PIVOTE engine just receives from the exercise environment a message that a particular prop ID has experienced a particular interaction, and then sends back to the environment a set of actions for a variety of props (and the UI, and even the environment) to perform in response. The PIVOTE engine looks after any state tracking.
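
To make that flow concrete, here is a rough sketch of what those messages might look like - the type names, interaction set and example props are illustrative, not the actual Trainingscapes schema:

```typescript
// A rough sketch of the PIVOTE message flow.
type Interaction = 'touch' | 'collide' | 'detect' | 'sit' | 'hear';

// What the 3D environment sends to the engine: just a prop ID and an
// interaction. The environment knows nothing about the exercise logic.
interface InteractionMessage {
  propId: string;
  interaction: Interaction;
}

// What the engine sends back: actions for props, the UI or the environment.
interface Action {
  target: string; // a propId, 'ui' or 'environment'
  command: string; // e.g. 'showText', 'playSound', 'move'
  params: Record<string, unknown>;
}

// The engine looks up the prop's semantic meaning in the exercise file,
// applies any rules and state tracking, and returns the resulting actions.
function handleInteraction(msg: InteractionMessage): Action[] {
  // e.g. touching a (hypothetical) blood-pressure cuff prop in a ward exercise:
  if (msg.propId === 'prop-bp-cuff' && msg.interaction === 'touch') {
    return [
      { target: 'ui', command: 'showText', params: { text: 'BP 140/90' } },
      { target: 'prop-patient', command: 'playSound', params: { clip: 'sigh' } },
    ];
  }
  return []; // nothing defined for this prop/interaction pair
}
```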

The main Trainingscapes exercise/prop authoring panel

This design completely separates the "look" of an exercise from its logic. This is absolutely key to Trainingscapes, and in day-to-day use has a number of benefits:

  • We can develop exercises with placeholders, and switch them out for the "final" prop just by changing the 3D model that's linked to the prop ID.
  • We can use the same prop in multiple exercises, and its behaviour can be different in every exercise.
Original SL version of the PIVOTE flow - but just change SL for any 3D (or audio) environment



More significantly though, it opens up a whole bunch of opportunities:
  • The PIVOTE engine can sit at the end of a simple web API - it doesn't have to be embedded in the player (as we currently do in Trainingscapes so that it can work offline). See the sketch after this list.
  • If a 3D environment can provide a library of objects and a simple scripting language with web API calls then we can use PIVOTE to drive exercises in it. This is what we did with Second Life and OpenSim - perhaps we can start to do this in AltSpaceVR, VirBela/Frame, Mozilla Hubs etc.
  • By the same measure we could create our own WebGL/WebXR/OpenXR environment and let users on the web play Trainingscapes exercises without downloading the Trainingscapes player.
  • There is no reason why props should be visual, digital 3D objects. They could be sound objects, making exercises potentially playable by users with a visual impairment - we've already done a prototype of an audio Trainingscapes player.
  • They could even be real-world objects - adding a tactile dimension!
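
As a taste of that first option, here is a hedged sketch of how a thin client in any 3D environment might drive an exercise through such a web API - the URL and payload shapes are assumptions, as no such public endpoint currently exists:

```typescript
// A sketch of a thin PIVOTE client: report an interaction to a remote
// engine and apply whatever actions come back. The client itself holds
// no exercise logic at all.
interface PivoteAction {
  target: string;
  command: string;
  params: Record<string, unknown>;
}

// Host-environment specific: a Unity component, LSL script or WebXR scene
// would translate each action into an actual change in the world.
function applyAction(action: PivoteAction): void {
  console.log(`(${action.target}) ${action.command}`, action.params);
}

async function sendInteraction(
  exerciseId: string,
  propId: string,
  interaction: string,
): Promise<void> {
  const response = await fetch(
    `https://pivote.example.com/api/exercises/${exerciseId}/interactions`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ propId, interaction }),
    },
  );
  const actions: PivoteAction[] = await response.json();
  actions.forEach(applyAction);
}
```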

Sounds like that should keep us busy for a good few DadenU days - and if you'd like to see how the PIVOTE engine could integrate into your platform, just ask!