22 May 2020

A Tale of Two Seminars





Yesterday I attended two seminars in "3D", one in EngageVR and one in Second Life. Whilst in many ways they shared similar features, and both were miles away from your typical Zoom webinar, they couldn't have been more different.

EngageVR



The Engage VR event was an ImmerseUK event on the Future of Work in VR: Training the Next Workforce. The format was very conventional with 4 speakers, each presenting their work/views, then a panel session to take questions, followed by networking. Some attendees (~50%?) were using VR HMDs, and the rest the 2D/3D interface from their PCs. There was also a livestream to YouTube. No idea what the level of knowledge or background of attendees was - but just knowing of the event and getting into VR suggests a reasonably knowledgeable crowd - about 30 I'd guess.

I don't want to dwell on the session itself; all the presentations were interesting and had something to say, although some went back over what are now decades-old arguments about the affordances and use cases of immersive 3D/VR. There was some nice multi-user/multi-site virtual surgery, and a new use of an "in their shoes" perspective for safeguarding training, where trainees played the exercise as the counsellor and then sat in the "victim's" position and saw their own avatar replay what they did! Mind you, one speaker talked about how they "brought down their project costs into 5 figures", whereas our clients tend to get upset if they go up into 5 figures!

What I do want to do is just collect my reflections from the event - my first "proper" VR seminar, and at 2h15m probably the longest I've had my Quest on for in one go. So, in no particular order:

  • The whole space was very reminiscent of 2010 SL, with a nice distant backdrop of the Golden Gate to set it in "reality"
  • No on-boarding experience - I only found out afterwards how to "hold" my tablet, and I'm not sure there was any way to "sit"; I kept being sat by an organiser
  • When I heard a helicopter I started looking for it in the sky - but it was in RL. Truly immersed.
  • Attendee avatars were just trunk/head/hands, whereas presenters (at least when on the panel) were full body, which also seems to be the Engage default - I guess the lo-rez attendees were there to keep performance up
  • If you wanted to ask a question you just stuck your hand up - no need for a "raise hand" button, very natural
  • Not being able to go into 3rd person in order to see myself made me very self-conscious - I couldn't remember what outfit I had on, and was I sat half in the concrete like some people? I had to use the selfie mode on the camera to at least check my outfit. My sense of proprioception was completely gone without 3rd person or proper arms or legs. I almost felt embarrassed - a bit like newbie SL users wondering what happens to their avatars when they log out
  • In VR you currently can't multi-task - no checking emails or Twitter or working on a document whilst half listening to the seminar. I could take some notes (which this post is derived from) using the in-world tablet, but with a pick keyboard it was very slow. It also means that the content has got to be ace to keep the attention - and whilst this was OK it wasn't ace, and I did find myself almost wishing it was on Zoom so I could also do some other stuff - being in VR, or at least HMD VR, didn't really add a lot at this stage
  • The absence of any text chat made the whole event seem very passive. I'm used to SL conferences (and see below) where text chat and voice run in parallel (as at a good RL/Twitter event) so people can side-comment and research and have private one-to-ones.
  • This whole text entry in VR is an ongoing issue. As one speaker said, voice may be part of the solution, but it wouldn't cope with multiple streams very easily. Thinking back, the "classic" image from Neuromancer-era cyberpunk is of the cyber-hacker with a "deck" (keyboard) and VR headset or jack. So why haven't we gone down this route - why can't I get my Bluetooth keyboard to hook up to my VR HMD? Probably still a faster option than finger tracking and virtual keyboards (UPDATE: See Facebook announcement).
  • May be able to solve this next time I go in, but why couldn't I just sit my virtual tablet on my knees so it doesn't block the view?
  • Would be really useful if avatars had an HMD/No HMD icon on them. In SL days we also experimented with things like icons to show what timezone you were in so you know if you were talking to someone for whom it was the middle of the night.
  • When the presenters switched to the panel session it was very "realistic" since they now had full bodies and as they naturally moved their arms and head it just looked so natural. I think they should have been given full bodies when they did their own bits for this reason.
  • Really needed a "teleport" effect as each presenter popped on stage
  • Certainly with a Quest it was impossible to read the smaller print on the slides - just a pixel blur. KEEP TEXT BIG on VR slidedecks.
  • I really missed the SL fly-cam so I could zoom in on slides or presenters, or to get an overview of the site.
  • Why stick to conventional PPT one slide at a time? My standard slide viewer in SL leaves a copy of each viewed slide visible so that people can refer back, and also in questions lets me quickly hop back to slides.
  • The headset weight was noticeable, but bearable for the 2hrs. I noticed the fan humming a lot (it was 23 degrees outside), but it actually gave a bit of a cool breeze. I got a power warning about 2h in, but the cable is long enough to plug in.
  • No user profiles so I couldn't click on people to find out more about them - either from scripted event badges or their Engage profile.
  • You need a straw if you want to drink an RL drink when wearing a VR HMD!
When the formal session ended and it opened up into networking the whole space suddenly felt far more "real". There was certainly that RL anxiety over who to speak to, whether to hang on the edge of a group already engaged with each other or to button-hole the one person you may have briefly met. Spatial voice was switched on so it was a very noisy place. In the end I started talking to Christophe, one of the speakers and CEO of Bodyswop VR. We actually had to walk over to a quieter part of the build to talk due to the noise of other discussions (I don't think the cocktail party effect works in VR). Again the animation of the hands and head, and pretty good "gabber" for the mouth, all made it seem sort of natural - probably the weight of the HMD was the thing that anchored me most back to this being "virtual". In the end Christophe and I both noticed how quiet it had got and, looking around, found we were the only people left - so we obviously had a good chat, just as good as we'd have managed in RL.



So overall a seminar of two halves - some basic SL era lessons to be learned, some affordances and challenges of HMDs to be dealt with, and apart from the desire to multi-task an improvement on most Zoom calls - at least I had the sense of being there in that place with everyone.

Second Life

Pity the SL images came out so dark - should have turned the sun up!


Two hours later, post 8pm clap, and I'm in Second Life for a meeting of the Virtual Worlds Education Roundtable to listen to Mark Childs of the OU (and long time SL/RL colleague) talk about "Choose your reality".

A slightly smaller crowd (~20), most long term SL residents, some newer, all passionate RL educators and SL enthusiasts. Only a few had ever tried HMD VR. This session was on the standard (only) SL interface of 2D/3D on a PC screen driving your avatar in 1st or 3rd person.

In contrast to the formal amphitheatre space in Engage, everyone started sat around a large outdoor table, or in deck chairs around the edge. But once Mark got going we never saw our chairs (or sat) again!

Mark had a number of questions for us to rate out of 5 and discuss - various combinations of how good RL/SL/VR are in terms of ease of use/creativity/fun. Mark used a novel variation of a walk-map. A walk-map is where you put a big graphic on the floor and have people walk to the relevant point on it that reflects what is being discussed or their view (so it could be a process map, a 2x2 matrix, a set of post-it note topics etc). But in this particular system (the Opinionator?) you walked into the coloured and numbered "pen" for your choice and then a central pie chart dynamically adjusted itself to show the relevant spread of people. Neat! And remember that this was built and scripted by an SL user, not by Linden Lab. One enterprising attendee was then taking photos of the central pie and posting them onto panels around the event to provide a history.



Most of the discussion was on text-chat, and only some on voice. This is not only often easier to follow but encourages more people to contribute and provides an immediate record to circulate and analyse afterwards. A few people used voice, and the balance was about right.

Every avatar looked unique, not the clones of Engage, and being SL many looked incredibly stylish (my jeans and jacket, based on my RL ones, looking pretty faded in comparison). The costuming really helps with 2 things: 1) remembering who is who, and 2) getting some sense of how the person wants to project themselves - which may of course be different to (or a more honest reflection of?) their RL personality.

I could happily multi-task, so whilst I was typing furiously I could also keep one eye on the Clueless Tweetalong streaming by on my iPad.

As well as the main text chat a couple of people DM'd me so we could have side chats and renew Friend links.

It was only a 1 hour session, but full-on the whole time and making pretty full use of the capabilities of the platform and of "virtual reality".

Thoughts


Of course this is a bit of an apples and pears comparison in terms of events, and possibly in terms of platforms, and I know that Engage does have some collaboration tools (although they do seem to slavishly follow RL). At the moment (and probably forever) Engage is a meeting, conferencing, training and collaboration space, whereas SL is a true virtual world - with more or less full agency and with full persistence. One of the presenters at the Engage event talked about the different platforms identifying their own furrow and sticking to it, and I'm sure that that's what Engage and Linden Lab will do, and certainly what we're doing with Trainingscapes.

One final point though was around Mark's comparison of SL to RL and VR. One participant talked about how we should think in terms of multiple realities rather than extended realities. SL in particular isn't just an extension of RL, it is its own reality. The point that got me though was comparing SL to VR. Back in the day SL was defined as VR, in the sense of being a "virtual" reality. Nowadays of course VR is associated with HMD-delivered experiences - but back in the Oculus DK-1 days there was an HMD-enabled browser for Second Life, and I've certainly experienced SL through a VR HMD (walking through our Virtual Library of Birmingham), and it was as good as any other VR experience at that time.

Virtual Library of Birmingham in SL through Oculus DK1

So to me comparing SL to "VR" is perhaps a category error (but an acceptable - if lazy :-) - shorthand). The distinction is perhaps between true virtual worlds (VWs - with full agency/persistence as in SL) and more constrained Virtual Environments (CVEs, the MUVEs beloved of academics) - SL being just the most real-life-like MUVE and least constrained VE. I feel a 2x2 grid coming on........







19 May 2020

DadenU Day: GraphQL

From Steven:

GraphQL is a data query language designed to make APIs easier and more intuitive to use. Existing APIs use fixed endpoints to give the user what they want; however, their fixed nature means that an endpoint can return too much or too little information. These problems are called over-fetching and under-fetching. Over-fetching means more intensive database calls on the server and more bandwidth, whereas under-fetching requires more API calls and more complexity on the client. GraphQL allows the user to define exactly what data they want.

I used a prewritten example to experiment with this in the C# library GraphQL for .NET. The example includes a nice GUI playground to test queries. An example GraphQL query is shown in Figure 1.



Figure 1: An example query from the playground.

The GraphQL query consists of a type ("query"), a name ("TestQuery"), the query being called ("reservations"), a list of arguments to that query, and a list of fields to include in the result. This lets you include only the fields you want and no more. It is passed in a standard GET or POST request to /graphql by default, so it is very easy to use.
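To make that concrete, a query of this shape against the hotel example might look something like the following (the argument and field names here are illustrative, not necessarily those used in the playground example):

    query TestQuery {
      reservations(checkedIn: false) {
        id
        checkinDate
        room {
          number
        }
        guest {
          name
        }
      }
    }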

The result of this GraphQL query is shown in Figure 2. It is in JSON format so is easily read by humans and easily parsed with standard libraries. It looks similar in structure to the query, which allows quick checking of the results.
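By way of illustration, the response to a query like the one above comes back in this sort of shape (the values here are invented):

    {
      "data": {
        "reservations": [
          {
            "id": 1,
            "checkinDate": "2020-05-18",
            "room": { "number": 101 },
            "guest": { "name": "A Guest" }
          }
        ]
      }
    }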




Figure 2: The result from the query in Figure 1.

The example uses a simple hotel reservation system with rooms, reservations and guests. Each of these has its own C# class as normal. To make the GraphQL classes, we need to define a new ObjectGraphType that takes the existing type as a generic parameter as in Figure 3.



Figure 3: The reservation ObjectGraphType. It takes the existing Reservation type as a generic parameter, but fields still need to be defined in the constructor.


Each property in that type must then be given in the constructor by applying the Field method to it. Complex properties are defined manually by giving the type as a generic parameter and the string for querying, though this seems like a library limitation.
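As a rough sketch of what such a graph type looks like in GraphQL for .NET (the property and type names follow the hotel example but are my own guesses, not necessarily those in the prewritten sample):

    using GraphQL.Types;

    public class ReservationType : ObjectGraphType<Reservation>
    {
        public ReservationType()
        {
            // Simple properties can be mapped straight from the C# class...
            Field(x => x.Id);
            Field(x => x.CheckinDate);

            // ...but complex properties have to be given their graph type and
            // query name explicitly.
            Field<RoomType>("room");
            Field<GuestType>("guest");
        }
    }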

A GraphQL query is defined as another ObjectGraphType where the Field method defines the name of the query, a list of arguments to pick up on, and a resolve method that is called to resolve the GraphQL query using the defined arguments. This is shown in Figure 4. The return type is a List of Reservations as expected, but each type is its counterpart in GraphQL.
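A sketch of such a query type, assuming a hypothetical checkedIn argument and an injected EF Core context (HotelContext is a stand-in for however the sample actually gets at its data):

    using System.Linq;
    using GraphQL;
    using GraphQL.Types;

    public class ReservationQuery : ObjectGraphType
    {
        public ReservationQuery(HotelContext db)
        {
            Field<ListGraphType<ReservationType>>(
                "reservations",
                arguments: new QueryArguments(
                    new QueryArgument<BooleanGraphType> { Name = "checkedIn" }),
                resolve: context =>
                {
                    // Pick up the optional argument and only filter if it was supplied.
                    var checkedIn = context.GetArgument<bool?>("checkedIn");
                    var query = db.Reservations.AsQueryable();
                    if (checkedIn.HasValue)
                        query = query.Where(r => r.CheckedIn == checkedIn.Value);
                    return query.ToList();
                });
        }
    }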



Figure 4: The reservation query. The argument and resolve parameters are defined, unlike in Figure 3.

The resolve function gets passed a context, which contains all the arguments and subfields from the GraphQL query. In the function you can then build up a query to access data depending on what is in the GraphQL query. The arguments can end up triggering complex filters, not just equality checks, and could include sorting.

In GraphQL there are not just "queries"; there are also "mutations", which change data, and "subscriptions", which provide real-time updates. These are all wrapped up in a schema, although I am only using queries here. The schema is shown in Figure 5.
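A minimal sketch of the schema class, assuming the GraphQL for .NET 3.x style of resolving the root query through the service provider (only a query is wired up, as in the example):

    using System;
    using GraphQL.Types;
    using Microsoft.Extensions.DependencyInjection;

    public class ReservationSchema : Schema
    {
        public ReservationSchema(IServiceProvider provider) : base(provider)
        {
            // Mutation and Subscription would be assigned here too if they were used.
            Query = provider.GetRequiredService<ReservationQuery>();
        }
    }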



Figure 5: The schema used in the example.

All these classes are made available through dependency injection in Startup.cs as shown in Figure 6. Every GraphQL type needs to be added, and the Query is retrieved in the Schema through the service provider.
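A sketch of the sort of registrations this means in Startup.ConfigureServices (the exact GraphQL server/middleware wiring is omitted, and the lifetimes shown are just one reasonable choice):

    public void ConfigureServices(IServiceCollection services)
    {
        // Every graph type the schema uses has to be registered individually...
        services.AddScoped<ReservationType>();
        services.AddScoped<RoomType>();
        services.AddScoped<GuestType>();
        services.AddScoped<ReservationQuery>();

        // ...and the schema itself, which pulls the Query out of the service provider.
        services.AddScoped<GraphQL.Types.ISchema, ReservationSchema>();
    }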



Figure 6: The Startup.cs showing the level of dependency injection required.

The resolve function was originally a list of if-else statements with similar structure. I changed it to a series of chained calls to the same function to remove code duplication and improve readability. I also added conditional inclusion so that the extra fields are only included if they exist in the GraphQL query. This cuts down on the cost of database calls. These methods are shown in Figure 7.
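For illustration, the resolve body in the query sketch above might end up looking something like this after the refactor (AddEqual and IncludeIfRequested are illustrative names for the extension methods discussed in the Expressions section below, not the exact names in Figure 7):

    resolve: context => db.Reservations
        .IncludeIfRequested(context, r => r.Room)          // join only if "room" is in the query
        .IncludeIfRequested(context, r => r.Guest)         // likewise for "guest"
        .AddEqual(context, "checkedIn", r => r.CheckedIn)  // filter only if the argument was given
        .ToList()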

Overall GraphQL is something I will consider in a future project. Once the initial hurdle has been overcome it is powerful, and can be backed up by a standard API if necessary.

Expressions



Figure 7: IQueryable extensions to enable conditional including and generic equal functions.

I wanted to be able to pass a lambda to select a field in the class, a string for the argument, and a list of validation functions with messages. I am using Entity Framework Core (EF) to do the database calls, so to avoid execution of the query until the end I had to write the lambda as an expression. Expressions are the form in which the C# compiler keeps the function logic until it is compiled, and they are also what EF uses to translate queries into SQL rather than executing them in C#.

In C# you can define a lambda as an Expression<Func<>> and C# will automatically convert the lambda into an expression. However, building on an expression requires you to use the expression tree API and define every little part of the lambda, including parameters, to build up the tree. In the AddEqual method, you can see the method I wanted to write commented out at the end of the return line. The expression tree building took four statements, each defining the next small bit of the function in its own statement. In the end though, it allows the same method to be used for a generic type where you want to check equality in an EF query.

As an aside, checking equality generically for all types in a function (value vs reference, nullable vs non-nullable) required a call to EqualityComparer<T>.Default.Equals(x, default) instead of just using == or .Equals().

In the Include function I had to pass an expression and examine the PropertyInfo to check if that property exists in the subfields.
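Pulling that together, here is a rough sketch of what such IQueryable extensions could look like (the method names, and the use of GraphQL for .NET's IResolveFieldContext and SubFields, are my assumptions about the approach rather than the exact code in Figure 7):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Linq.Expressions;
    using System.Reflection;
    using GraphQL;
    using Microsoft.EntityFrameworkCore;

    public static class GraphQlQueryableExtensions
    {
        // Adds a clause equivalent to source.Where(x => selector(x) == value), but only
        // when the GraphQL argument was actually supplied. The comparison is built as an
        // expression tree so EF Core can still translate it to SQL.
        public static IQueryable<T> AddEqual<T, TProp>(
            this IQueryable<T> source,
            IResolveFieldContext context,
            string argumentName,
            Expression<Func<T, TProp>> selector)
        {
            var value = context.GetArgument<TProp>(argumentName);

            // Generic "was this argument given?" check - == and .Equals() don't work
            // for every combination of value/reference/nullable types.
            if (EqualityComparer<TProp>.Default.Equals(value, default))
                return source;

            // Build the predicate by hand, reusing the selector's parameter.
            var constant = Expression.Constant(value, typeof(TProp));
            var equal = Expression.Equal(selector.Body, constant);
            var predicate = Expression.Lambda<Func<T, bool>>(equal, selector.Parameters);
            return source.Where(predicate); // i.e. roughly source.Where(x => selector(x) == value)
        }

        // Includes a navigation property only when the matching subfield appears in the
        // GraphQL query, so unneeded joins (and their database cost) are avoided.
        public static IQueryable<T> IncludeIfRequested<T, TProp>(
            this IQueryable<T> source,
            IResolveFieldContext context,
            Expression<Func<T, TProp>> navigation) where T : class
        {
            var property = (navigation.Body as MemberExpression)?.Member as PropertyInfo;
            var requested = property != null
                && context.SubFields != null
                && context.SubFields.Keys.Any(k =>
                       string.Equals(k, property.Name, StringComparison.OrdinalIgnoreCase));

            return requested ? source.Include(navigation) : source;
        }
    }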

18 May 2020

DadenU Day: Virtual World/VR Conferencing


For my DadenU Day I decided to check out some of the VR virtual conferencing environments which are around at the moment ("apps" seems too limiting a term, but they lack the agency and persistence to be worlds). I managed to squeeze 4 into the day: EngageVR, VirBela, SomniumSpace and Mozilla Hubs. All comments are based on about an hour's play in each, so I may well have missed or misunderstood features.

EngageVR



EngageVR (https://engagevr.io/) is the one that seems to be getting the most attention at the moment, and it is pretty slick. Although primarily aimed at VR, the 2D/3D interface is fine, recognising that a lot of people who want to do this sort of virtual conference don't yet (and maybe never will) have a VR headset.

After a nice avatar designer you're dropped into a virtual lecture theatre for a basic tutorial. All "experiences"/environments are accessed through an Oculus-style home space and tiled menu. As well as watching a recording (see below) of a panel debate I also wandered around Mars and the Moon (obligatory for VR).

The basics work really well, but the positives that really stood out were:

  • Your avatar has arms as well as hands, and the IK is pretty good
  • You have feet, and legs! When you walk your legs sometimes hang doll-like, but other times they get traction and try to walk (although often more slowly than you!). The neatest trick though is that if you bend down your knees bend - even though there is no knee/leg tracking!
  • You can record a session. This records the 3D data, not the visuals. This means if you replay it you are still in the space and see the action unfold around you (as in the panel session). Really weirdly, if you've recorded yourself moving around you see yourself moving - from your "current" avatar - you have a clone! Not sure how many times you can repeat this trick!
  • Load times impressively fast

On the negatives:
  • All content is built by/through Engage, so you have minimal agency beyond what the experience lets you have. An "author" mode is promised. So whilst on Mars I could rez a dinosaur and click on points of interest on Curiosity, but that was it.
  • There is a big difference between the 2D/3D graphics (great) and the Oculus Quest graphics (lower rez) - particularly noticeable on Mars. Obvious impact of the lower memory and power of the Quest headset, but a reasonable way of addressing it.
  • Had to sideload onto Quest - but manageable through Sidequest.
  • Didn't spot text chat (which would be hard in VR, but near essential in 2D/3D)

Overall a very impressive platform, and one which we'd probably have clients consider if their focus is more on 3D/VR events than training.

VirBela


Seen quite a few VirBela (https://www.virbela.com/) events being held recently - Laval Virtual in particular. The whole place has a deliberately cartoony look, and feels a bit like ActiveWorlds did in the late 90s, but with more doll-like avatars. Again there's a quick avatar designer and then you arrive at a central plaza (very AW). No tutorial or anything. Information boards tell you a bit about the place and what you can do, and you can TP to example private office suites.

Pros:

  • Very easy set up and access
Cons:
  • VR only on pro
  • Cartoony look probably won't suit most corporate (UK) users
  • No agency or building, very much for virtual meetings/conferences

Overall simple but works well, but not sure the look is for most of our clients.


Somnium Space


I keep hearing about this one (https://www.somniumspace.com/), and wasn't sure what to expect. Overall it felt a bit like Second Life Alpha c.2004 (actually not that good) or early High Fidelity. It is very much aiming at being a persistent VW rather than a conference/training space. Huge download, and VR promised for 2020. There is scripting (LUA?) and building a la SL, but if that's what you want why not just go to SL? They also have an SL-like land model, although backed by blockchain. I guess they'd say their differentiation is blockchain (so what) and VR building (when it comes). If SL had VR then I think any need for SS would just disappear. Couldn't really get much to work, and very laggy. Maybe check back in a year.

Trying to build/terraform

Mozilla Hubs



Again, heard a bit about this one (https://hubs.mozilla.com/#/). Feels like Google Lively more than anything else. Very angular and low rez, and a weird perspective, so it's hard to tell if an object is small and close, or big and far away! Trying to build anything complex would be a pain. You can though just drop in objects from Google Poly or Sketchfab. Probably more of a fun space to get kids to build things in, and although people are using it for more serious work, it's probably not for most of our clients. It would be an interesting challenge to build a very "serious" space in it though.

A more "serious" space - til I brought the Elephant in!

The avatars are trunk and head only.



Comparing to Trainingscapes

Coming back to Trainingscapes after those 4 actually felt quite liberating. Would love to have EngageVR's recording and avatars, but would probably trade that for the ability to build and create things (which any non-student account can do) and have agency. And for persistence SL still seems streaks ahead. But I can certainly see that EngageVR (and possibly for the right client VirBela) would be a good option for clients after virtual meetings and conferences. But then again if they want to do more than sit and listen they're stuck - so not really for collaborative activities; you couldn't do this for instance:




There's also no sign of any of them working on mobile/tablets which Trainingscapes does!

I'll try and test out a few more worlds this week, and then dive back into SL/Open Sim.


11 May 2020

The "New Normal" - an Opportunity for 3D/VR Immersive Learning




For the HE/FE and training sector I am sure that the last month or so has been about coping with rapid changes needed to deliver the summer term's learning on-line, and that most of this has been focused around VLEs, Zoom and the like.

However, as it increasingly seems that the "new normal" will impact teaching and learning for probably at least a year, to what extent will FE and HE institutions and commercial training organisations take the opportunity to have a fresh look at immersive 3D and VR solutions? Whilst VLEs, video lectures and Zoom tutorials can deliver a lot of useful content, they will struggle with the sense and context of being there, with learning vocational skills, and with working on physical collaborative tasks. Physical teaching time may be even more limited, and social distancing measures may further reduce its effectiveness.

Of course institutions (and other organisations) are likely to be feeling a real impact on budgets, particularly for HE from reduced numbers of overseas students, and possibly even of UK students, and the media are reporting costs of £1-2m per degree to move to distance learning. And in these stressful times it may be that IT departments, eLearning teams and tutors just fall back on the comfort of familiar (if not totally prevalent) technologies of video lectures and lessons/seminars by video conferencing.

Having been involved in using immersive 3D (and lately VR) for training and education for over a decade, it does very much feel like a technology whose time has come. We are already seeing an increase in the use of the technology for virtual events (where there is no need to generate content as such), and using it for remote training and learning is surely the next step. That said, the focus may well be on the "immersive 3D" aspect rather than VR unless institutions want to pay for a headset per student - the days of sharing may well be over! Also our key argument remains that you need to be able to create your own content so you can adapt to student need - rather than commissioning "one-offs" from design-led VR agencies.

As we at Daden get ready to launch Trainingscapes 2.0 (the updated version of our Fieldcapes application) we feel that Trainingscapes is uniquely positioned to help deliver 3D immersive learning within the "new normal", in particular:

  • It is not a VR-only, or even VR-led, tool, and is happy supporting use on smartphones, tablets, laptops and PCs, as well as by those lucky enough to have (home) access to a VR headset
  • It has an intuitive content authoring system so that eLearning teams, tutors and even students can create their own 3D/VR content - so you can iterate and scale quickly and at low cost


We're trying to get a feel from those we know working at the "coal face" of education in HE and FE as to how they see the next year panning out - and to what extent they see immersive 3D and VR solutions as being a part of the "new normal" mix - and we'll see about passing on some of the feedback we get. But we're always interested in the wider view - so if you're reading this then please feel free to add your view in the comments.

These are interesting, challenging and distressing times, but perhaps a richer educational and training environment can be one of the legacies.




8 May 2020

DadenU Day: The Design of Everyday Things

For his DadenU Day the other week, Krish looked at Don Norman's influential book, The Design of Everyday Things.



If you have ever pulled a door instead of pushing it, if you have ever been unable to figure out how your microwave works beyond its basic functions and you struggle with those too, and if you get frustrated with software because you cannot get it to do what you want it to do then you are certainly not alone. These experiences can often leave you feeling a little stupid and give the impression that it is your fault for not using things correctly.

Don Norman, in his book, The Design of Everyday Things, would argue that it is not your fault and you should not be the one feeling stupid. He goes on to say that it is the fault of the designer for failing to design for humans. Originally the book was named The Psychology of Everyday Things because it was challenging designers to think about human psychology or the way humans think and behave in their everyday interactions with everyday things. If designers could understand this, they would make better designed products to account for this human behaviour.

Don Norman thinks that design should go beyond finding the solution to a problem, although this is a good starting point. Designers, engineers and software developers need to take into account that our thought is very much guided by our emotions rather than being purely rational. Emotions allow us to make value judgements, help us prioritise what is important, and give us the ability to think intuitively as well. These emotional thoughts may be visceral, which means they provoke strong feelings within us; they may be behavioural, which means that we react positively, more often than not, to the familiar and less positively to the unfamiliar; and finally they can be reflective, which means we look back on past experiences, good and bad, which inform our choices, while we also have the insight to look at future possibilities.

So a successful product not only has to function correctly to solve a problem, but we must consider how we can make a product or design software that evokes positive emotions so that users enjoy the product, so that there is enough familiarity built in to make them feel safe and know how to use it intuitively, and finally it should evoke reflective emotions that are positive.

Don Norman would be the first  to admit that this challenge is not easy to respond to otherwise we would all be making amazing products. However Don Norman does suggest that we should spend more time observing people doing the things that they do with the tools that they use. We may find that often the problems that people have are not the root problem but a symptomatic problem due to poorly designed systems and tools. Spending time in observation may help us as designers to get a better insight into the root problem. It will also give us more insight into the way people behave providing us with the information we need to design better products. Designing for user experience is therefore a research based discipline at its heart.

Although Don Norman’s book is now more than thirty years old, the principles it outlines are far from out of date. Some of the best products in the world follow the principles outlined in his book.