7 September 2020

Aura Blogpost - "From Virtual Personas to a Digital Afterlife"

 


David has a blog post on "From Virtual Personas to a Digital Afterlife" on the blog of the new Aura website/service - "designed to help you prepare your memories, important information and connect with loved ones before you die" - part of the burgeoning digital afterlife/memorial sector, but also looking to open up the conversations ahead of time.

You can read the blog post in full at: https://www.aura.page/articles/from-virtual-personas-to-a-digital-afterlife/

It's also interesting to compare some of the ideas in the post with Channel 4's recent documentary Peter: The Human Cyborg, as there certainly seems to be some common ground worth exploring.





27 August 2020

Virtual Reality - A Future History?

 A few days ago I got together with a group of experts in 3D immersive learning and virtual reality and together we brainstormed what we saw as the major challenges facing 3D immersive learning/training and VR over the next decade or so.

The graphic below summarises our thoughts.

In the short-term (0-3 years) we identified the big challenges as being:

a) To make access easy and seamless. If people are going to use these environments they've got to be dead easy to use, and for organisations dead easy to manage. Oculus Quest is a good step forward (totally self-contained, automatic roomscale sensing), as is WebXR (no need to download to a headset or PC). But even within the experience, the "grammar" of how you navigate, use and interact with the space has got to be self-evident and common-sense. And no crashes or glitches or other "odd" happenings, otherwise any sense of immersion is totally lost. And they need to integrate with your other digital presences - be that your desktop or work/social media accounts.

b) To make the applications desired. People and organisations have got to want to use this stuff. There has got to be user pull. For entertainment the experience has got to be worth all the hassle of setting up and clearing a space and totally isolating yourself from reality for a while, otherwise a film on Netflix or a game on Steam is going to win out. For organisations the benefits of virtual learning and training have got to be clear and well understood. Yes there are lots of different case studies that show the benefits, but they aren't well distributed or consolidated, and often aren't too rigorous when comparing against BOTH of the main alternatives (physical training and 2D eLearning).


In the medium-term (5-10 yrs) we identified two major challenges:

a) Mobility. Immersive learning needs to be available where and when people want to use it - it needs to be mobile. Yes an Oculus Quest is pretty mobile (I've even used it in my garden!), but in normal, non-COVID times, it's not feasible to use it on the bus or train into college, or sat in a cafe (locked away from the outside world with a purse or laptop ready to be stolen by your side), or sat on the sofa with half an eye on Love Island. Head Mounted Display (HMD) VR needs to be complemented by mobile/tablet (and even laptop) based versions of the same experience. Yes there are advantages to HMDs (visceral immersion, scale, isolation), but there are also disadvantages (social isolation, nausea, convenience, cost). The user should decide HMD or non-HMD, not the software developer or trainer.

b) Integration. We need to move away from walled gardens and towards standards-based environments and applications. VR today is a bit like the pre-web Internet: different walled gardens, different access devices, multiple accounts. Users don't want that - they want to hear about something and just access it with whatever device they have to hand, and with their long-standing personalised avatar. WebXR is helping here, and yes even the web still has many of these issues, and I know that asset-wise we have broad portability between the platforms, but not in the scripting of the experience, or the management of the data associated with it. And does the "virtual world" approach of Second Life and Snow Crash offer a better model than the "app" approach of most current offerings? Many have talked about the Metaverse or Multiverse, being able to seamlessly (that word again) move from one virtual environment to another. There have been metaverse initiatives in the past - is it time for another one?


For the long term the group again identified 2 main issues:

a) Radical Interfaces. The VR HMD is a great step forward, but they are still large, clunky, moderately uncomfortable for prolonged use, and not very portable. I'm pretty convinced that we need another big step change in HMDs before they become real consumer items that everyone owns in the way that they currently (probably) have a tablet. What I have in mind is more like the holobands of Caprica than the Quest. Something that integrates VR, AR and MR, lets us readily see the physical world, tracks our hands, and, perhaps most important, manages to give us the "feeling" of locomotion, and perhaps the other senses. My guess is that this is as much a neurological interface as it is a visual one, and hence probably a decade or more out.

b) Societal Change. VR is not just impacted by attitudes to it, but could also impact society itself. COVID has made us re-evaluate remote working and remote relationships. Popular media is full of stories based around virtualised people and places (Devs and Upload being just the latest examples). Even a decade ago virtual worlds were being used by hostile actors, and I doubt today's environments are any different. How would a Caprica-style virtual world, readily accessible by all, and with the capacity to do almost anything, affect the way we all live and interact? Would it be for good or ill - and would it let us weather a second COVID that much better?


So there you are, 6 perspectives, 2 each for the short, medium and long term. You may not agree with all the details, but I hope that you can appreciate the general thrust of each, and that each offers a timely call to action for the VR community.


Now scroll down a bit.











OK, I told a little white lie at the beginning there. The gathering of immersive learning experts wasn't a few days ago, it was about 3,285 days ago, at the ReLive (Research Into Learning in Virtual Environments) Conference held at the Open University way back in 2011. Here's the original graphic - and you can find a fuller presentation I did later that year based upon it at https://www.slideshare.net/davidburden/virtual-worlds-a-future-history



But I think you'll agree that the general vision and issues raised back in 2011 differ little from what a similar analysis would yield in 2020, or even in early 2021 - ten years later! Some of the specifics might be different, and my commentary above reflects a contemporary take, but the big picture items are pretty much the same:

  • This stuff still isn’t seamless, although with Quest and WebXR we’re taking some great strides
  • The entertainment and business case is still struggling to be made. I know that Quests sold out early in lockdown, but I’ve also seen numerous reviews of technology to help with lockdown that haven’t even mentioned VR and immersive 3D.
  • We've actually made great strides in mobility if you consider non-HMD VR: I can now run avatar-style experiences quite happily on my phone or tablet if they're not too high-rez, and Quest again helps with instant set-up, but it's still very much an either-or choice.
  • Integration seems further away than ever as VirBela, Immerse, AltSpaceVR, Somnium Space, Hubs etc all compete for users.
  • Radical interfaces are actually the one we achieved first - I was in SL in an Oculus DK1 in 2013, only 2 years after ReLive2011 - but as mentioned above there is still a long way to go for the ordinary consumer.
  • Societal change may be driven as much by COVID (and the fear of similar future outbreaks) and climate change, but VR is having far more of an impact on popular culture than it did a decade ago, and that triumvirate of VR capability, external pressures and cultural exemplars may well be driving change more quickly - although perhaps not as quickly as we thought back in 2011.


So I hope you'll forgive my little deception, but I thought it might be a nice way not only to illustrate how many things have stayed the same despite the apparent "improvements" in technology, but also to highlight how much current VR practitioners can learn from the work on immersive environments that was being done a decade ago. For inspiration just check out the agenda and papers from ReLive11 (and the earlier ReLive08), still available on the OU website.

10 August 2020

Virtual Reality vs Immersive 3D - the Search for the Right Words!



As a company that has been creating immersive experiences for over 15 years we find that the contemporary obsession with headset-based virtual reality (HMD-VR) is often at risk of a) forgetting what valuable work has been done in the past in non-HMD immersive 3D environments, and b) not highlighting to potential clients that a lot of the benefits of "VR" can be obtained without an HMD, and that not having the funds for, or access to (esp. in COVID times), HMDs does not need to stop a VR project in its tracks.


One problem is that we just don't have the right terminology, and what terminology we have is constantly changing.

"VR" has almost always been assumed to mean HMD-based experiences - using headsets like the Quest, Rift or Vive - or even their forerunners like the old Virtuality systems.

But in that fallow period between Virtuality and the Oculus DK1, 3D virtual worlds such as Second Life, There.com and ActiveWorlds were enjoying a boom-time, and often found themselves labelled as "virtual reality".


One problem is that there seems to be no commonly accepted term for the classic Second Life (or even Fortnite) experience, where you can freely roam a 3D environment but you have a 3rd (or sometimes 1st) person avatar view of it. It's certainly not 2D. It's sort of 3D - but not as 3D as the stereoscopic experience using a VR-HMD. I've seen 2D/3D or "3D in 2D" but both are cumbersome. We sometimes refer to it as "first-person-shooter" style (but that doesn't go down well with some audiences), or "The Sims-like".

There's also a qualitative difference between, say, a 3D CAD package where you're rotating a 3D model on screen (called an allocentric view) and the experience of running through Fortnite, Grand Theft Auto, or Second Life (called an egocentric view). You feel "immersed" in the latter group, not just because of the egocentric viewpoint but also because of the sense of agency and emotional engagement.

At a recent Engage event I attended I'd guess (from avatar hand positions) that about 50% of attendees were in a VR-HMD and 50% were using the immersive-3D desktop client. So should it be described as a VR or an immersive 3D system? Our Trainingscapes is the same: we can have users on mobile, PC and VR-HMD devices all in-world, all interacting. And Second Life is often "dismissed" as not being "proper VR" - but when the Oculus DK1 was around I went into SL in VR - see below - so did it stop being VR when Oculus went from DK1 to DK2?


So if a system can support both - is it a 2D/3D system or a VR system? That is why we tend to refer to both the 2D/3D approach and the VR-HMD approach as being "immersive 3D" - as long as you have a sense of agency and presence, and the egocentric view. It's the experience and not the technology that counts.

And don't get me started on what "real" and "virtual" mean!

No wonder clients get confused if even we can't sort out what the right terms are, and it's far too late for some de jure pronouncement. But perhaps we could all try and be a little bit more precise about what terms we do use, and whether they are just referring to the means by which you access an experience (e.g. VR-HMD) or to the underlying experience itself (such as a virtual world or virtual training exercise).

In later posts I'll try and look more closely at the relative affordances of the 2D/3D approach (better name please!) vs the VR approach, what researchers' experiences of virtual worlds can teach us about VR, and also how "virtual worlds" sit against other immersive 3D experiences.



30 July 2020

Garden VR



OK, why has it taken me 4 months of lockdown to realise that I've got the ideal room-scale VR space out in my garden! Having thought of the idea I did have some doubts about a) whether there were too few straight lines for the tracking to manage, b) whether rough grass would be flagged as intruders in the scene, and c) what would happen if the dog walked through, but in the end it all worked swimmingly.

Wifi reaches about half way down - so that may be an issue, although I found it hard to draw out more than the first half of the garden as a space. Oculus kept putting its "draw boundary" panel right where I was looking, and walking and drawing didn't help - but I'll see if I can do better another time. I ended up with a space 15 paces by 7 - far bigger than the attic (and no slopey ceilings).

The image below shows a rough mapping of the space to the WebXR demo room - so I could walk about half of it (Oculus hides the warning barriers in photos - annoying in this case as I'd set myself up to show the exact extent!)



After that everything worked just as though I was indoors - apart from the occasional need to walk back closer to the house to recover the wifi. I certainly lost all sense of where I was in the garden and of my alignment, the soft grass didn't interfere with the immersion, and the slight slope up to the house end was a useful warning!

Not related to being in the garden, I did notice that I felt more latency/unease with the 3D photospheres (and even more with the stereo photospheres) than with the true 3D spaces - where I felt none at all. Perhaps one reason why there were a lot of reports of unease with VR is that a lot of people were having photosphere experiences - although admittedly true latency issues remain (but are made worse by doing "crazy" things in VR - like rollercoasters - rather than just walking around in a room!).

One experience which was heightened was the Mozilla vertigo experience - walking on ever smaller blocks over nothing. I suppose because I could move more in the garden I could better explore it and fully immerse myself in it - and it certainly made me check I could feel grass under my feet before I stepped - particularly when I just stepped off the blocks and into space.

Anyway, all the space allowed me to have a good walk around the solar system model without using teleports, and actually get the planets lined up for the first time! Even in the garden they are proving too big, so I need to at least halve the sizes!






13 July 2020

Further Adventures in WebXR - Playing with the Solar System



Having a bit more of a play with WebGL/WebXR, and I now have a nice draggable solar system! It could be a neat learning tool once finished - getting the planets in the right order, looking at their globes in more detail, and perhaps accessing further information about them. With World Space Day/Week going virtual it might be time to set up a gallery of experiences for people to try that week.

Need to sort a few more things first though - like rings for Saturn, a starscape backdrop, and changing the highlight colours. Maybe also the option to check your ordering and show you the right one. Also need to add a sun, and shrink the planets even further!



The more we play with WebGL/WebXR the more excited we are by it as a tactical solution for quickly creating small but powerful bespoke VR experiences that can be instantly accessed by anyone with a WebXR-compatible VR headset without any need for an install!


9 July 2020

Virtual Archaeology Review publishes Virtual Avebury Paper




The Virtual Archaeology Review has just published Professor Liz Falconer's paper on the Virtual Avebury project we did last year. The paper looks at the response to the VR experience by visitors to the National Trust's Avebury Visitor Centre - where two people at a time could collaboratively explore the Avebury site as it was - i.e. without the village that has since been built in the middle of it, and with all the missing stones replaced!

You can read the paper at: https://polipapers.upv.es/index.php/var/article/view/12924/12360


Key findings included:

  • More than 1200 members of the public experienced a 3D, fully immersive simulation of Avebury Henge, Wiltshire, UK over a nine-month period.
  • Patterns of use of, and familiarity with, information technology (IT), and of using mobile technologies for gaming, were found that did not follow age and gender stereotypes.
  • There was little correlation between age, gender and IT familiarity with reactions to Virtual Avebury, suggesting that such simulations might have wide appeal for heritage site visitors.

Some of the key data are shown below:


Emotional Responses to Virtual Avebury



Experiences of Virtual Avebury


Responses to the Virtual Avebury Soundscape


Read the full paper at: https://polipapers.upv.es/index.php/var/article/view/12924/12360




6 July 2020

DadenU Day: WebXR


MozVR Hello WebXR Demo Room

For my DadenU Day I decided to get to grips with WebXR. WebXR is a new standard (well, an evolution of WebVR) designed to enable web-based 3D/VR applications to detect and run on any connected VR or AR hardware, and to detect user input controls (both 6DOF tracking and hand controllers). This should mean that:

  • You can write and host VR applications natively on the web and launch them from a VR headset's built-in web browser
  • Not worry whether it's an Oculus, HTC or A.N.Other headset, both for display and for reading controllers
  • Have a 2D/3D view automatically available in the web browser for people without a VR HMD.
What WebXR does NOT do is actually build the scene - you use existing WebGL for that (essentially a 3D graphics standard for the web, not to be confused with WebXR or WebVR!) through something like the Three.js or A-Frame frameworks.


To get a good sense of what web-delivered VR (via WebXR) can do I headed over to Mozilla's demo at https://blog.mozvr.com/hello-webxr/. This room has a bunch of different demos, and a couple of "doorways" to additional spaces with further demos. If you view it in a 2D browser you just see the room, but can't navigate or interact (though I don't see why WebXR shouldn't pick up WASD keys the same way as it picks up a 6DOF controller). If you go to the page in your Oculus Quest (or other) browser you also see the same 3D scene in 2D. BUT it also offers you an "Enter VR" button - click this and your VR lobby and the 2D browser disappear and you are fully in the VR space as though you'd loaded a dedicated VR app. Awesome. In the space you can:

  • Play a virtual xylophone (2 sticks and sounds)
  • Spray virtual graffiti
  • Zoom in on some art
  • View 360 photospheres - a lovely interface: clicking on a small sphere replaces the VR room with a full 360/720 photosphere. I'd always been dubious about mixing photospheres and full 3D models in the same app but this works well
  • View a stereoscopic 360 photosphere - so you can sense depth, pretty awesome
  • Enter a room to chase sound and animation effects
  • View a really nice photogrammetry statue which proves that web VR doesn't need to mean angular low-rez graphics 
MozVR Photogrammetry Demo

There's a really good "how we did it" post by the Mozilla team at: https://blog.mozvr.com/visualdev-hello-webxr/

Having seen just what you can do with WebXR, the next step was to learn how it's done. For that I went to the WebXR sample pages at https://immersive-web.github.io/webxr-samples/

Although these are a lot simpler than the MozVR one, each shows how to do a particular task - such as user interaction, photospheres etc. You can also download the code and libraries for each from GitHub at https://github.com/immersive-web/webxr-samples.

Movement demo

Controller demo

The only downside of these seems to be that they use Cottontail - a small WebGL/WebXR library/framework purely developed for these demos and not recommended for general use - so adapting them to your own needs is not as simple as it would be if they were written in Three.js or A-Frame.

Keen to actually start making my own WebXR experiences, I started by copying the GitHub repository to my own server and running the demos up. Issue #1 was that any link from a web page to the WebXR page MUST use https - using http fails!

Starting simply, I took the photosphere demo and replaced the image with one of my own. The image had worked fine in the photosphere display in Tabletop Simulator but refused to work in WebXR. Eventually I found that the image had to be 2048x1024 - higher resolutions (with the same ratio) fail. Also the photosphere demo is for stereoscopic photospheres, so you have to remove the displayMode: 'stereoTopBottom' parameter.

Hougoumont Farm at Waterloo in WebXR Photosphere

Next up was to try and add my own 3D object. I liked the block room in one of the demos and worked out how to remove their demo blocks from the middle, and how to hide the stats screen. Nice empty room loaded up. Then I hit the bump that I usually write in Three.js or A-Frame, and I couldn't just cut-and-paste into their WebXR/Cottontail template. Then I ran out of time (it was Friday after all!).

I've now found a page of really basic Three.js WebXR demos at https://threejs.org/examples/?q=webxr so the aim for this week is to get those working and start on my own WebXR spaces.

It's obviously early days for WebXR, but given the MozVR demo this really could be a lovely download-free way of delivering both 2D/3D to ordinary browsers, and full VR to headsets without any downloads. Joy!




29 June 2020

9 Business as Usual Uses for VR and AR in College Classrooms - Our take!


Saw an interesting-looking article on "9 Amazing Uses for VR and AR in College Classrooms" on Immersive Learning News the other day - although it was actually a retweet of a 2019 article. But reading it I was struck by how most of the uses they talk about are things that we've been doing for years.

So here's their Top 9 uses, and what we've done that's identical or close.

1) Grasping Concepts



When we built a virtual lab for the University of Leicester we also built 3D animations of what happens at a molecular level. Students had found it hard to link the theory of a process with the mechanics of using the kit, and the combination of both really helped them to link and understand the two.

In another example a finance trainer we helped build for the University of Central Florida represented financial flows as tanks of water and piles of virtual money so as to better enable students to grasp more complex financial concepts.


2) Recreating Past Experiences for New Learners



Not one of ours, but there was an awesome recreation of the WW1 trenches, augmented by the poetry of the war, created by the University of Oxford back in the 2000s. We have though also used immersive 3D to recreate conversations between analysts and patients so that new learners can revisit these and actually sit in the virtual shoes of the analyst or patient.


3) Stagecraft for Theater Students



One of the first projects we got involved with was helping theatre educators at Coventry University make use of immersive 3D to teach stagecraft and even create new cross-media pieces. There was also the wonderful Theatron project back in the 2000s that recreated a set of ancient theatres in order to better understand how they were used by staging virtual plays. And we did the Theatrebase project, where we built Birmingham's Hippodrome Theatre and digitised a set of scenery from their archives to show how virtual environments could be used to teach stagecraft, but also to act as an interactive archive and to help plan and share stage sets between venues.

4) Virtual Reconstruction of History



With Bournemouth University and the National Trust we recreated Avebury Ring as part of an AHRC funded project and ran it for the summer at the visitors centre so that visitors could explore the Ring as it was 5000 years ago in VR - and without the village that has now been built in the middle of it!


5) Going on Space Walks



We've done the Apollo 11 Tranquility Base site 3 times now, in Second Life, Open Sim and now Trainingscapes. We've also done an exploration of the 67P comet and a whole Solar System explorer.


6) Reimagining the Future

                               

Back in 2010 we built the new Library of Birmingham virtually (hence VLOB) for Birmingham City Council so they could use it to plan the new building and to engage with the public and later subcontractors. The multi-user space even had a magic carpet ride!

7) Practicing Clinical Care



We have done almost a dozen immersive 3D exercises for health and care workers, ranging from paramedics and urinalysis to end of life pathway care and hospitalised diabetic patients.


8) Hands-on Railroading



OK, hands-up, we've never built a virtual railroad - but we have done equipment operation simulations on things ranging from air conditioners to jet engines!


9) Feeling the Impact of Decisions




In the article this is actually about team-work and collaboration within virtual spaces. Whilst we have had some "fun" builds - for instance virtual snowballs for Christmas parties - we're also really interested in how to use these spaces to discuss issues and approaches through tools like walk-maps and 3D post-it notes. The classic though has got to be the fire demo where, if you choose the wrong extinguisher, the fire blows up in your face - and as can be seen from the image above, your body flinches away exactly as it would do in real life!


So there you are, 9 business as usual use cases for immersive 3D and VR as far as we're concerned!



25 June 2020

Daden joins Team iMAST



We're pleased to announce that Daden has been selected as a member of Team iMAST, the Babcock and QinetiQ led team which is bidding to support the modernisation of the UK Royal Navy's individual maritime training.

Down-selected to bid earlier this year, the bespoke Team iMAST collaboration – led by Babcock and comprising QinetiQ and Centerprise International along with the Universities of Portsmouth and Strathclyde – has recently been joined by Thales and Learning Technologies Group to further bolster its highly-experienced offering. And boasting its Innovation Ecosystem of more than 50 Small to Medium sized Enterprises (SMEs) - including Daden - Team iMAST is ready to deliver training to the Royal Navy when and where it is required, if selected.

Team iMAST and the Innovation Ecosystem will enable critical technology integration, backed by proven naval training resources, to drive future-ready training solutions for all elements of the Royal Navy. To launch this Ecosystem, two successful events have already been held with the most recent hosted by Team iMAST at the Digital Catapult, the UK’s leading agency for the early adoption of advanced digital technologies.

With its wealth of proven expertise, Team iMAST is uniquely placed to support this training outsource programme through its unrivalled industry know-how. The programme will provide an opportunity to help shape the future of Royal Navy training as a strategic partner and drive efficiencies and new technology. 

Daden is focusing on a variety of use cases of virtual humans in support of the project.



23 June 2020

Intelligent Virtual Personas and Digital Immortality




David's just done a guest post for the VirtualHumans.org site on "Intelligent Virtual Personas and Digital Immortality", pulling together some of our current work on Virtual Personas with David's writings on Digital Immortality and the site's interest in Virtual Influencers.

You can read the full article here: https://www.virtualhumans.org/article/intelligent-virtual-personas-and-digital-immortality




11 June 2020

Daden Newsletter - June 2020




In the latest issue of the Daden Newsletter we cover:

  • COVID19 and 3D Immersive Learning - With corporate training and academic syllabuses and delivery being revised to cope with the challenges of social distancing stretching out until mid-2021 at least, to what extent will trainers and educators look again at the potential of 3D immersive learning and virtual reality - or will they fall back on the more "traditional" approaches of VLEs and Zoom?

  • Virtual Conferences - It's not just in virtual training and learning that immersive 3D can help - several organisations are now using immersive 3D conference and meeting environments to give participants more of a sense of "being there" and to encourage more serendipitous networking than yet another Zoom webinar. David reports on two recent events he attended.

  • Trainingscapes 2.0 Sneak Peek - We're getting close to the launch of version 2.0 of Trainingscapes - see some screenshots of the new-look application.

  • Plus snippets of other things we've been up to in the last 6 months - like being named one of the West Midlands Top 50 most innovative companies.

We hope you enjoy the newsletter, and do get in touch if you would like to discuss any of the topics raised, or our products and services, in more detail!

8 June 2020

Daden U Day: My Beautiful Soup

From Darrell Smith:

On a recent project we had difficulties in scraping the summary paragraph from Wikipedia article pages, and Beautiful Soup was suggested as a possible tool to help with this. The Beautiful Soup Python library has functions to iterate, search and update the elements in the parsed tree of an HTML (or XML) document.


So, having downloaded and installed the library, a quick test was to fetch the web page we're interested in, using the 'requests' HTTP library to make things easy. The HTML document is then passed to BeautifulSoup to create a 'soup' object:

import requests
from bs4 import BeautifulSoup

result = requests.get("https://en.wikipedia.org/wiki/HMS_Sheffield_(D80)")
src = result.content
soup = BeautifulSoup(src, 'lxml')
print(soup)

 

The prettify() function makes the HTML more readable by indenting the parent and sibling structure:

print(soup.prettify())

 

Searching for tag types (such as 'a' for anchor links) is simple using 'find' (first instance) or 'find_all'. This shows all internal (Wikimedia) links and external links ("https://"):
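The original screenshot of this code is lost, so here's a minimal sketch of the sort of thing it looked like (the exact filtering may have differed):

links = soup.find_all('a')
# External links carry a full URL in their href; internal Wikimedia links start with "/wiki/"
external = [a.get('href') for a in links if a.get('href', '').startswith('https://')]
internal = [a.get('href') for a in links if a.get('href', '').startswith('/wiki/')]
print(len(internal), 'internal,', len(external), 'external')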


 

Let's just get the links that refer to "HMS …":
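Again, a sketch along the lines of the lost screenshot:

# Keep only the anchors whose link text starts with "HMS"
hms_links = [a for a in soup.find_all('a') if a.get_text().startswith('HMS')]
for a in hms_links:
    print(a.get_text(), '->', a.get('href'))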


 

Now let's get the text paragraphs we're interested in - this can be done using the 'p' tag:
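Roughly:

paragraphs = soup.find_all('p')
print(len(paragraphs), 'paragraphs found')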


Then index to the 2nd paragraph in the list to get the summary paragraph we're after (n.b. the first paragraph is blank):
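For example:

# The first <p> on the page is blank, so the summary text is the second one
summary = paragraphs[1].get_text()
print(summary)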


Dedicated Wikipedia Library

While Beautiful Soup is a good generic tool for parsing web pages, it turns out that for Wikipedia there are dedicated Python utilities for dealing with the content, such as the Wikipedia library (https://pypi.org/project/wikipedia/), which wraps the Wikimedia API simply.


wp.search("HMS Sheffield") returns the Wikipedia pages for all incarnations of HMS Sheffield, and we can use wp.summary("HMS Sheffield (D80)") to give the element of the page we're interested in.

The wp.page("HMS Sheffield (D80)") call also gives the full text content in a readable form with headings.
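Pulling those together - a minimal sketch, with the library imported as wp to match the calls above:

import wikipedia as wp

print(wp.search("HMS Sheffield"))          # pages for all incarnations of HMS Sheffield
print(wp.summary("HMS Sheffield (D80)"))   # just the summary paragraph
page = wp.page("HMS Sheffield (D80)")
print(page.content)                        # full text content, with headings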




Again we can select the first paragraph for the summary (excluding the URL), and possibly use the other paragraphs, using the headings as index/topic markers.

 

Smart Quotes! While trying this out I also found a useful function to get rid of those pesky Microsoft smart quotes that were causing trouble in RDF definitions on the same task. Unicode, Dammit converts Microsoft smart quotes to HTML or XML entities:
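The example from the Beautiful Soup documentation shows the idea:

from bs4 import UnicodeDammit

markup = b"<p>I just \x93love\x94 Microsoft Word\x92s smart quotes</p>"
dammit = UnicodeDammit(markup, ["windows-1252"], smart_quotes_to="xml")
print(dammit.unicode_markup)
# <p>I just &#x201C;love&#x201D; Microsoft Word&#x2019;s smart quotes</p>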






1 June 2020

Choices in Immersive Learning Design




When designing a new immersive learning experience we find that there are a number of dichotomies or spectra that it is helpful to talk through with a client in order to ensure that all parties have a good idea of what is driving the immersive learning design and what the experience might feel like. Often there is a lot taken for granted, a lot discounted or assumed, and it's not until you start talking about all these options that some of the preconceptions on both sides emerge.

To help us talk these through with clients we've even realised them as cubes within our virtual campus, so that we can go in remotely with clients and move the boxes around as we talk about them, and typically lay them out on a cost/effort vs importance floor map - the sheer act of doing that helps to create visual and spatial cues which aid recall and even help to show the thinking that is going on.

So here are what we think are some of the key dichotomies, and you can find a fuller list and discussion of the remaining items in our Immersive Learning White Paper.




- Simulation vs Serious Game

In recent years this has become the big one – to what extent do you want the immersive experience to be a “simulation” of reality (so high on accuracy), and to what extent do you want it to be game-like (and so highly motivating)? The situation gets even further confused when people start talking about “gamification”. Having been involved in games design since before the days of personal computers we know that this really all comes down to game mechanics. To us something becomes a “game” as soon as you start to introduce (or exclude) rules or features that do not exist in the real world. Those things you introduce are called game mechanics – and might range from a simple countdown timer or scoring system to highly artificial features such as power-ups and upgrades.


- Linear vs Freeform

When we first engage with tutors and learning designers who have been used to working on eLearning projects we find that they tend to come with a very linear mindset. The learning is a sequence of actions and tasks, and each screen only provides a few options as you don't want to crowd the screen or confuse the learner. Coming from a virtual worlds background we are far more used to open learning spaces with lots of possibilities – trying to get tutors and designers to “unlearn” can be hard. One of the best approaches we have found is to get them to think of a learning exercise in terms of drama, or even e-drama. In fact, it's not even scripted drama we're often after, it's improvised drama. It's telling the student: this is the scene, here are the props, the actors are going to do something and you need to respond.

- Single vs Multi User

A major design decision is whether an environment is designed to be used by a single user (so they only see themselves) or by multiple users (so everyone sees and can interact with everyone else). Obviously multi-user is essential if you are looking at team and collaborative learning, or you want staff (or actors) to role-play characters in the simulation “live”. But multi-user suggests an element of scheduling, and also requires the users to have a network connection, so doesn't give the individual learner the maximum flexibility (e.g. learning on the underground), or let them practice in private.


- Synchronous vs Asynchronous

This choice is only relevant in multi-user mode – should the environment be designed for asynchronous use – i.e. everyone uses it at their own time and pace, or for synchronous use – more like a physical world team learning session where the team (and the tutor/assessor) are all present at the same time. In asynchronous mode we are really talking about lots of individual single-user experiences, people using the environment as and when. With synchronous mode we are talking about timetabling and co-ordination, but the benefit is that we get to practice those team tasks that it may just not be feasible to practice and rehearse in the physical world due to limitations of time or distance.





We hope that's made you think through our ideas for immersive learning in a new way, and don't forget to check out the white paper for more details, or contact us if you'd like to talk them through - or even play around with the box set in our virtual collaborative 3D space.


22 May 2020

A Tale of Two Seminars





Yesterday I attended two seminars in "3D", one in EngageVR and one in Second Life. Whilst in many ways they shared similar features, and both were miles away from your typical Zoom webinar, they couldn't have been more different.

EngageVR



The EngageVR event was an ImmerseUK event on the Future of Work in VR: Training the Next Workforce. The format was very conventional, with 4 speakers each presenting their work/views, then a panel session to take questions, followed by networking. Some attendees (~50%?) were using VR HMDs, and the rest the 2D/3D interface from their PCs. There was also a livestream to YouTube. No idea what the level of knowledge or background of attendees was - but just knowing of the event and getting into VR suggests a reasonably knowledgeable crowd - about 30 I'd guess.

I don't want to dwell on the session itself; all the presentations were interesting and had something to say, although some were going back over decades-old arguments about the affordances and use cases of immersive 3D/VR. There was some nice multi-user/multi-site virtual surgery, and a new use of an "in their shoes" perspective for safeguarding training, where trainees played the exercise as the counsellor, and then sat in the "victim's" position and saw their own avatar replay what they did! Mind you, one speaker talked about how they "brought down their project costs into 5 figures", whereas our clients tend to get upset if costs go up into 5 figures!

What I do want to do is just collect my reflections from the event - my first "proper" VR seminar, and at 2h15m probably the longest I've had my Quest on in one go. So, in no particular order:

  • The whole space was very reminiscent of 2010 SL, with a nice distant backdrop of the Golden Gate to set it in "reality"
  • No on-boarding experience - I only found out afterwards how to "hold" my tablet, and I'm not sure there was any way to "sit"; I kept being sat by an organiser
  • When I heard a helicopter I started looking for it in the sky - but it was in RL. Truly immersed.
  • Attendee avatars were just trunk/head/hands, whereas presenters (at least when on the panel) were full body, which also seems to be the Engage default; I assume the lo-rez attendee avatars are there to keep performance up
  • If you wanted to ask a question you just stuck your hand up - no need for a "raise hand" button, very natural
  • Not being able to go into 3rd person in order to see myself made me very self-conscious - I couldn't remember what outfit I had on, and was I sat half in the concrete like some people? I had to use the selfie mode on the camera to at least check my outfit. My sense of proprioception was completely gone without 3rd person or proper arms or legs. I almost felt embarrassed - a bit like newbie SL users wondering what happens to their avatars when they log out
  • In VR you currently can't multi-task - no checking emails or Twitter, or working on a document whilst half listening to the seminar. I could take some notes (which this post is derived from) using the in-world tablet, but with a point-and-pick keyboard it was very slow. It also means that the content has got to be ace to keep your attention - and whilst this was OK it wasn't ace, and I did find myself almost wishing it was on Zoom so I could also do some other stuff - being in VR, or at least HMD VR, didn't really add a lot at this stage
  • The absence of any text chat made the whole event seem very passive. I'm used to SL conferences (and see below) where text chat and voice run in parallel (as at a good RL/twitter event) so people can side-comment and research and have private one-to-ones.
  • This whole text entry in VR thing is an ongoing issue. As one speaker said, voice may be part of the solution, but it wouldn't cope with multiple streams very easily. Thinking back, the "classic" image from Neuromancer-era cyberpunk is of the cyber-hacker with a "deck" (keyboard) and VR headset or jack. So why haven't we gone down this route - why can't I get my Bluetooth keyboard to hook up to my VR HMD? Probably still a faster option than finger tracking and virtual keyboards (UPDATE: See Facebook announcement).
  • I may be able to solve this next time I go in, but why couldn't I just sit my virtual tablet on my knees so it doesn't block the view?
  • It would be really useful if avatars had an HMD/No-HMD icon on them. In SL days we also experimented with things like icons to show what timezone you were in, so you knew if you were talking to someone for whom it was the middle of the night.
  • When the presenters switched to the panel session it was very "realistic", since they now had full bodies, and as they naturally moved their arms and heads it just looked so natural. For this reason I think they should also have been given full bodies when they did their own presentations.
  • Really need "teleport" effect as each presenter popped on stage
  • Certainly with a Quest it was impossible to read the smaller print on the slides - just a pixel blur. KEEP TEXT BIG on VR slidedecks.
  • I really missed the SL fly-cam so I could zoom in on slides or presenters, or to get an overview of the site.
  • Why stick to conventional PPT one slide at a time? My standard slide viewer in SL leaves a copy of each viewed slide visible so that people can refer back, and also in questions lets me quickly hop back to slides.
  • The headset weight was noticeable, but bearable for the 2hrs. I noticed the fan humming a lot (it was 23 degrees outside), but it actually gave a bit of a cool breeze. I got a power warning about 2h in, but the cable is long enough to plug in and carry on.
  • No user profiles so I couldn't click on people to find out more about them - either from scripted event badges or their Engage profile.
  • You need a straw if you want to drink an RL drink when wearing a VR HMD!
When the formal session ended and it opened up into networking, the whole space suddenly felt far more "real". There was certainly that RL anxiety over who to speak to, whether to hang on the edge of a group already engaged with each other, or to button-hole the 1 person you may have briefly met. Spatial voice was switched on so it was a very noisy place. In the end I started talking to Christophe, one of the speakers and CEO of Bodyswop VR. We actually had to walk over to a quieter part of the build to talk due to the noise of other discussions (I don't think the cocktail party effect works in VR). Again, animation of the hands and head and a pretty good "gabber" for the mouth all made it seem sort of natural - probably the weight of the HMD was the thing that anchored me most back to this being "virtual". In the end Christophe and I both noticed how quiet it had got, and looking around found we were the only people left - so we obviously had a good chat, just as good as we'd have managed in RL.



So overall a seminar of two halves - some basic SL-era lessons to be learned, some affordances and challenges of HMDs to be dealt with, but apart from the desire to multi-task, an improvement on most Zoom calls - at least I had the sense of being there in that place with everyone.

Second Life

Pity the SL images came out so dark - should have turned the sun up!


Two hours later, post the 8pm clap, and I'm in Second Life for a meeting of the Virtual Worlds Education Roundtable to listen to Mark Childs of the OU (and long-time SL/RL colleague) talk about "Choose your reality".

A slightly smaller crowd (~20), most long-term SL residents, some newer, all passionate RL educators and SL enthusiasts. Only a few had ever tried HMD VR. This session was on the standard (only) SL interface: 2D/3D on a PC screen, driving your avatar in 1st or 3rd person.

In contrast to the formal amphitheatre space in Engage, everyone started sat around a large outdoor table, or in deck chairs around the edge. But once Mark got going we never saw our chairs (or sat) again!

Mark had a number of questions for us to rate out of 5 and discuss - all various combinations of how good RL/SL/VR are in terms of ease of use/creativity/fun. Mark used a novel variation of a walk-map. A walk-map is where you put a big graphic on the floor and have people walk to the relevant point on it that reflects what is being discussed or their view (so it could be a process map, a 2x2 matrix, a set of post-it note topics etc). But in this particular system (the Opinionator?) you walked into the coloured and numbered "pen" for your choice, and then a central pie chart dynamically adjusted itself to show the relevant spread of people. Neat! And remember that this was built and scripted by an SL user, not by Linden Lab. One enterprising attendee was then taking photos of the central pie and posting them onto panels around the event to provide a history.



Most of the discussion was in text chat, and only some on voice. This is not only often easier to follow, but encourages more people to contribute and provides an immediate record to circulate and analyse afterwards. A few people used voice, and the balance was about right.

Every avatar looked unique, not the clones of Engage, and being SL many looked incredibly stylish (my jeans and jacket, based on my RL ones, looking pretty faded in comparison). The costuming really helps with 2 things: 1) remembering who is who, and 2) getting some sense of how the person wants to project themselves - which may of course be different to (or a more honest reflection of?) their RL personality.

I could happily multi-task, so whilst I was typing furiously I could also keep one eye on the Clueless Tweetalong streaming by on my iPad.

As well as the main text chat a couple of people DM'd me so we could have side chats and renew Friend links.

It was only a 1 hour session, but it was full-on the whole time, and made pretty full use of the capabilities of the platform and of "virtual reality".

Thoughts


Of course this is a bit of an apples and pears comparison in terms of events, and possibly in terms of platforms, and I know that Engage does have some collaboration tools (although they do seem to slavishly follow RL). At the moment (and probably forever) Engage is a meeting, conferencing, training and collaboration space, whereas SL is a true virtual world - with more or less full agency and with full persistence. One of the presenters at the Engage event talked about the different platforms identifying their own furrow and sticking to it, and I'm sure that that's what Engage and Linden Lab will do, and it's certainly what we're doing with Trainingscapes.

One final point though was around Mark's comparison of SL to RL and VR. One participant talked about how we should think in terms of multiple realities rather than extended realities. SL in particular isn't just an extension of RL, it is its own reality. The point that got me though was comparing SL to VR. Back in the day SL was defined as VR, in the sense of being a "virtual" reality. Nowadays of course VR is associated with HMD-delivered experiences - but back in the Oculus DK1 days there was an HMD-enabled browser for Second Life, and I've certainly experienced SL through a VR HMD (walking through our Virtual Library of Birmingham), and it was as good as any other VR experience at that time.

Virtual Library of Birmingham in SL through Oculus DK1

So to me comparing SL to "VR" is perhaps a category error (but an acceptable - if lazy :-) - shorthand). The distinction is perhaps between true virtual worlds (VWs - with full agency/persistence, as in SL) and more constrained Virtual Environments (CVEs, the MUVEs beloved of academics) - SL being just the most real-life-like MUVE and the least constrained VE. I feel a 2x2 grid coming on...