22 May 2020

A Tale of Two Seminars





Yesterday I attended two seminars in "3D", one in EngageVR and one in Second Life. Whilst in many ways they shared similar features, and both were miles away from your typical Zoom webinar, they couldn't have been more different.

EngageVR



The Engage VR event was an ImmerseUK event on the Future of Work in VR: Training the Next Workforce. The format was very conventional, with 4 speakers, each presenting their work/views, then a panel session to take questions, followed by networking. Some attendees (~50%?) were using VR HMDs, and the rest the 2D/3D interface from their PCs. There was also a livestream to YouTube. No idea what the level of knowledge or background of attendees was - but just knowing of the event and getting into VR suggests a reasonably knowledgeable crowd - about 30 I'd guess.

I don't want to dwell on the session itself; all the presentations were interesting and had something to say, although some went back over decades of arguments about the affordances and use cases of immersive 3D/VR. There was some nice multi-user/multi-site virtual surgery, and a new use of an "in their shoes" perspective for safeguarding training, where trainees played the exercise as the counsellor and then sat in the "victim's" position and saw their own avatar replay what they did! Mind you, one speaker talked about how they "brought down their project costs into 5 figures", whereas our clients tend to get upset if they go up into 5 figures!

What I do want to do is just collect my reflections from the event - my first "proper" VR seminar, and at 2h15m probably the longest I've had my Quest on for in one go. So, in no particular order:

  • The whole space was very reminiscent of 2010 SL, with a nice distant backdrop of the Golden Gate to set it in "reality"
  • No on-boarding experience - I only found out afterwards how to "hold" my tablet, and I'm not sure there was any way to "sit"; I kept being sat by an organiser
  • When I heard a helicopter I started looking for it in the sky - but it was in RL. Truly immersed.
  • Attendee avatars were just trunk/head/hands, whereas presenters (at least when on the panel) were full body, which also seems to be the Engage default; I assume the lo-res attendee avatars are there to keep performance up
  • If you wanted to ask a question you just stuck your hand up - no need for a "raise hand" button, very natural
  • Not being able to go into 3rd person in order to see myself made me very self-conscious - I couldn't remember what outfit I had on, and was I sat half in the concrete like some people? I had to use the selfie mode on the camera to at least check my outfit. My sense of proprioception was completely gone without 3rd person or proper arms or legs. I almost felt embarrassed - a bit like newbie SL users wondering what happens to their avatars when they log out
  • In VR you currently can't multi-task - no checking emails or Twitter or working on a document whilst half listening to the seminar. I could take some notes (which this post is derived from) using the in-world tablet, but with a pick keyboard it was very slow. It also means that the content has got to be ace to keep the attention - and whilst this was OK it wasn't ace, and I did find myself almost wishing it was on Zoom so I could also do some other stuff. Being in VR, or at least HMD VR, didn't really add a lot at this stage
  • The absence of any text chat made the whole event seem very passive. I'm used to SL conferences (and see below), where text chat and voice run in parallel (as at a good RL/twitter event), so people can side-comment, research, and have private one-to-ones.
  • This whole text entry in VR is an ongoing issue. As one speaker said, voice may be part of the solution, but it wouldn't cope with multiple streams very easily. Thinking back, the "classic" image from Neuromancer era Cyberpunk is of the cyber-hacker with a "deck" (keyboard) and VR headset or jack. So why haven't we gone down this route - why can't I get my Bluetooth keyboard to hook up to my VR HMD? Probably still a faster option than finger tracking and virtual keyboards (UPDATE: See Facebook announcement).
  • Maybe I can solve this next time I go in, but why couldn't I just sit my virtual tablet on my knees so it doesn't block the view?
  • Would be really useful if avatars had an HMD/No HMD icon on them. In SL days we also experimented with things like icons to show what timezone you were in, so you knew whether you were talking to someone for whom it was the middle of the night.
  • When the presenters switched to the panel session it was very "realistic", since they now had full bodies and moved their arms and heads naturally. I think they should have been given full bodies for their own presentations too, for this reason.
  • Really need a "teleport" effect as each presenter popped onto the stage
  • Certainly with a Quest it was impossible to read the smaller print on the slides - just a pixel blur. KEEP TEXT BIG on VR slidedecks.
  • I really missed the SL fly-cam so I could zoom in on slides or presenters, or to get an overview of the site.
  • Why stick to conventional PPT one slide at a time? My standard slide viewer in SL leaves a copy of each viewed slide visible so that people can refer back, and also in questions lets me quickly hop back to slides.
  • The headset weight was noticeable, but bearable for the 2hrs. I noticed the fan humming a lot (it was 23 degrees outside), but it actually gave a bit of a cool breeze. I got a power warning about 2h in, but the cable is long enough to plug in.
  • No user profiles so I couldn't click on people to find out more about them - either from scripted event badges or their Engage profile.
  • You need a straw if you want to drink an RL drink when wearing a VR HMD!
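One of the points above - my persistent slide viewer - is simple enough to sketch. This is my own illustrative reconstruction in Python (the real thing is an LSL script in SL; the class and method names are mine): advancing never hides earlier slides, and during questions you can hop straight back to any slide already shown.

```python
class PersistentSlideViewer:
    """Advance through a deck; every viewed slide stays on display."""

    def __init__(self, deck):
        self.deck = list(deck)   # ordered slides, e.g. texture names
        self.pos = -1            # index of the current slide
        self.visible = []        # slides left on show for the audience

    def advance(self):
        """Show the next slide, leaving earlier ones visible."""
        if self.pos + 1 < len(self.deck):
            self.pos += 1
            slide = self.deck[self.pos]
            if slide not in self.visible:
                self.visible.append(slide)
            return slide

    def jump_back(self, slide):
        """During questions, hop straight back to an earlier slide."""
        if slide in self.visible:
            self.pos = self.deck.index(slide)
            return slide


viewer = PersistentSlideViewer(["intro", "demo", "results"])
viewer.advance()
viewer.advance()
print(viewer.visible)   # both viewed slides stay on show
```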
When the formal session ended and it opened up into networking, the whole space suddenly felt far more "real". There was certainly that RL anxiety over who to speak to, whether to hang on the edge of a group already engaged with each other or to button-hole the 1 person you may have briefly met. Spatial voice was switched on, so it was a very noisy place. In the end I started talking to Christophe, one of the speakers and CEO of Bodyswop VR. We actually had to walk over to a quieter part of the build to talk due to the noise of other discussions (I don't think the cocktail party effect works in VR). Again, animation of the hands and head and pretty good "gabber" for the mouth all made it seem sort of natural - probably the weight of the HMD was the thing that anchored me most back to this being "virtual". In the end Christophe and I both noticed how quiet it had got and, looking around, found we were the only people left - so we obviously had a good chat, just as good as we'd have managed in RL.



So overall a seminar of two halves - some basic SL era lessons to be learned, some affordances and challenges of HMDs to be dealt with, and apart from the desire to multi-task an improvement on most Zoom calls - at least I had the sense of being there in that place with everyone.

Second Life

Pity the SL images came out so dark - should have turned the sun up!


Two hours later, after the 8pm clap, and I'm in Second Life for a meeting of the Virtual Worlds Education Roundtable to listen to Mark Childs of the OU (and long time SL/RL colleague) talk about "Choose your reality".

A slightly smaller crowd (~20), most long term SL residents, some newer, all passionate RL educators and SL enthusiasts. Only a few had ever tried HMD VR. This session was on the standard (only) SL interface: 2D/3D on a PC screen driving your avatar in 1st or 3rd person.

In contrast to the formal amphitheatre space in Engage, everyone started off sat around a large outdoor table, or in deck chairs around the edge. But once Mark got going we never saw our chairs (or sat) again!

Mark had a number of questions for us to rate out of 5 and discuss - all combinations of how good RL/SL/VR are in terms of ease of use/creativity/fun. Mark used a novel variation of a walk-map. A walk-map is where you put a big graphic on the floor and have people walk to the relevant point on it that reflects what is being discussed or their view (so it could be a process map, a 2x2 matrix, a set of post-it note topics etc). But in this particular system (the Opinionator?) you walked into the coloured and numbered "pen" for your choice, and then a central pie chart dynamically adjusted itself to show the spread of people. Neat! And remember that this was built and scripted by an SL user, not by Linden Lab. One enterprising attendee was then taking photos of the central pie and posting them onto panels around the event to provide a history.
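To make the mechanic concrete, here's a minimal sketch of the pie-chart logic as I understood it. This is my own reconstruction in Python, not the actual Opinionator (which is an LSL script inside SL); the function name and the idea of representing each avatar's choice as the pen number they stand in are my assumptions.

```python
from collections import Counter


def pie_segments(votes, num_pens):
    """Map each pen number to its share of the pie chart in degrees.

    votes    - one pen number per avatar currently standing in a pen
    num_pens - how many numbered pens are painted on the floor
    """
    counts = Counter(votes)
    total = sum(counts.values())
    if total == 0:
        # Nobody has voted yet: every segment collapses to zero.
        return {pen: 0.0 for pen in range(1, num_pens + 1)}
    return {pen: 360.0 * counts.get(pen, 0) / total
            for pen in range(1, num_pens + 1)}


# e.g. seven avatars spread across five rating pens (1-5)
segments = pie_segments([5, 4, 4, 3, 5, 5, 2], 5)
print(segments)   # pen 5, with three avatars, gets the largest segment
```

Each time an avatar steps into or out of a pen, the script just recomputes the counts and redraws the segments - which is why the chart could adjust dynamically as people wandered about.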



Most of the discussion was in text chat, and only some on voice. Text is not only often easier to follow, it encourages more people to contribute and provides an immediate record to circulate and analyse afterwards. A few people used voice, and the balance was about right.

Every avatar looked unique, not the clones of Engage, and being SL many looked incredibly stylish (my jeans and jacket, based on my RL ones, looked pretty faded in comparison). The costuming really helps with 2 things: 1) remembering who is who, and 2) getting some sense of how the person wants to project themselves - which may of course be different to (or a more honest reflection of?) their RL personality.

I could happily multi-task, so whilst I was typing furiously I could also keep one eye on the Clueless Tweetalong streaming by on my iPad.

As well as the main text chat a couple of people DM'd me so we could have side chats and renew Friend links.

It was only a 1 hour session, but it was full-on the whole time, making pretty full use of the capabilities of the platform and of "virtual reality".

Thoughts


Of course this is a bit of an apples and pears comparison in terms of events, and possibly in terms of platforms, and I know that Engage does have some collaboration tools (although they do seem to slavishly follow RL). At the moment (and probably forever) Engage is a meeting, conferencing, training and collaboration space, whereas SL is a true virtual world - with more or less full agency and full persistence. One of the presenters at the Engage event talked about the different platforms identifying their own furrow and sticking to it, and I'm sure that's what Engage and Linden Lab will do, and it's certainly what we're doing with Trainingscapes.

One final point though was around Mark's comparison of SL to RL and VR. One participant talked about how we should think in terms of multiple realities rather than extended realities. SL in particular isn't just an extension of RL, it is its own reality. The point that got me though was comparing SL to VR. Back in the day SL was defined as VR, in the sense of being a "virtual" reality. Nowadays of course VR is associated with HMD delivered experiences - but back in the Oculus DK-1 days there was an HMD enabled browser for Second Life, and I've certainly experienced SL through a VR HMD (walking through our Virtual Library of Birmingham), and it was as good as any other VR experience at that time.

Virtual Library of Birmingham in SL through Oculus DK1

So to me comparing SL to "VR" is perhaps a category error (but an acceptable - if lazy :-) - shorthand). The distinction is perhaps between true virtual worlds (VWs - with full agency/persistence, as in SL) and more constrained Virtual Environments (CVEs, the MUVEs beloved of academics) - SL being just the most real-life-like MUVE and least constrained VE. I feel a 2x2 grid coming on...






