25 February 2021

Introducing Daden Hub - Daden's new virtual home in Mozilla Hubs

 


We've just launched Daden Hub - Daden's virtual home in Mozilla Hubs. 

This 3D environment lets you find out about Daden and what we do, play with some of the Mozilla Hubs tools, and follow teleports to some of our other builds in Mozilla Hubs. 

We are increasingly using it instead of Zoom for some of our internal meetings, and don't be surprised if we invite you into it for external meetings too!

You can visit Daden Hub now in 2D or VR at https://hubs.mozilla.com/SJy5Hwn/daden-hub

Here's a short video giving you a tour of the Hub.



Give us a shout if you'd like to meet up in the space and get a more personal demo!





19 February 2021

A Tale of Two Venues - AI/SF and Wrestling in VR!

 


Last night I attended a fascinating discussion on Science Fiction, Tech, and Games put on by GamesBeat/VentureBeat. It touched on loads of topics of interest like AI, Virtual Sentience, Digital Immortality and whether NPCs should have free will (shades of a recent Richard Bartle talk). What matters for this post though is that although the event was streamed on Zoom, I attended it in the beta of Oculus Venues in VR.

I went to the Oculus Quest 2 launch event in an earlier beta, and it was OK but very basic: just move your avatar to a row of cinema seats and sit there. So what has changed?

You start with an avatar design session in your own room (not much choice of anything, but just enough to minimise identikit avatars), and then some very basic movement and interaction instruction. There is a very nice menu UI on your inside wrist which opens up to give you access to the main tools. You can navigate either by a local teleport method (move a circle to where you want to go) or free physical/joystick-based movement.

When you leave the room a new space loads and you enter the Venues atrium.


There's space for, I think, about 8 "suites", 4 down each side. I don't know how widely this atrium is shared between events. My venue had the poster outside and declared 38 attendees.


Entering the suite, another space loads. I say "suite" as there is a ground-level space and a balcony space, both looking out onto a big 2D screen where the Zoom relay was - 3 talking-head webcams.

Now as an event experience it was pretty poor. The difference between the 3D/avatar "us" and the 2D/video "them" is big. I know you might pick up fewer nuances if the presenters were avatars, but at least it would feel more like a seminar than a cinema. And if you were on Zoom you could ask questions by chat, but there was no chat facility in VR (and without a Bluetooth keyboard typing questions would be a pain). The lack of chat also meant that there was no side channel between the attending avatars - amplifying points, sharing links, getting to know each other. If someone tried to talk to you it just got in the way of the presentation - just as in the physical world. I know I keep saying it, but c.2010 Second Life events felt far better.

Unfortunately the Venues screenshot camera doesn't show the video content!

The other big issue was my co-attendees. I think I was the only person on the ground level who was there from beginning to end. Maybe a handful of others were there for 20 minutes plus. I'd say peak occupancy was a dozen, more often half that. Most of the people, though, were there to play and have fun; several were kids (mics on the whole time, talking over the presenters etc), and were just having a laugh. Imagine trying to attend a seminar whilst a group of tweens charges through. Luckily at least one group decided to "take the party upstairs", and when I checked in at the end the balcony certainly seemed busier - but that was after the talk finished.

So not convinced.

On my way out I decided to check out the other suites. Only one was in use with 24hr Wrestling. Same layout. But a couple of big differences.

First, the video was 360 style, in fact I think it was even stereoscopic, so it really did feel like you were ringside watching the fight. It filled the whole of the space in front of you, and had a real sense of depth.

Second, as there was no commentary as such - it was just the fight - all the avatars chatting, shouting and fooling around with the cameras and confetti felt entirely appropriate for the event: everyone was ringside!

The Wrestling crowd - not that you can tell with no video!


So I hadn't expected that a wrestling event would beat an SF/Tech/Games event as a good demo of using VR for events - but it did. It just goes to show you need to think about the complete experience when looking at how to use VR (and immersive 3D) for different kinds of events.






18 February 2021

NASA's Perseverance Rover in Mozilla Hubs

 


What better way to celebrate the (hopefully successful) landing of NASA's Perseverance rover on Mars than by looking at it in Mozilla Hubs!

NASA's wonderful model library at https://nasa3d.arc.nasa.gov/ not only has a nice model of Perseverance, but it's also a) a reasonable size and b) in the .glb format preferred by WebXR apps like Hubs - this is great news if NASA is now standardising on this format. The model is about 4x the recommended Hubs size, so mobiles may have issues, but it loads in 20s or so.
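If you want to sanity-check a model like this yourself before dropping it into Hubs, here's a minimal sketch using Python and the trimesh library (the local file name is just an assumption for illustration) that reports a .glb's file size and total triangle count:

```python
# Sketch: sanity-check a .glb before uploading it to Mozilla Hubs.
# Requires the trimesh library (pip install trimesh); "perseverance.glb"
# is a hypothetical local copy of the NASA model.
import os
import trimesh

GLB_PATH = "perseverance.glb"

# File size drives download time, especially on mobile devices.
size_mb = os.path.getsize(GLB_PATH) / (1024 * 1024)
print(f"File size: {size_mb:.1f} MB")

# A .glb usually loads as a Scene containing one or more meshes.
loaded = trimesh.load(GLB_PATH)
meshes = loaded.geometry.values() if isinstance(loaded, trimesh.Scene) else [loaded]
triangles = sum(len(m.faces) for m in meshes)
print(f"Total triangles: {triangles:,}")
```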

As this is a quick build we've just dropped the model into our existing "mars-like" terrain. We checked it for scale and it looks pretty much spot on. We've not added any interpretation boards - we may do that later. 

If you're in VR then you can have great fun getting on your hands and knees to look under the rover - although we couldn't spot Ingenuity.

The model is at: https://hubs.mozilla.com/o3c26nJ/perseverance-mars-rover with a room code of 817861 if you're using a headset.

Remember that since this is WebXR you can just click immediately on the link above to go to Mars in your browser, no need to download or register, and it works with or without a VR headset.

Have fun, and do let us know how you get on in the comments and/or post images on Social Media and share with friends.

And don't forget to also check out our SpaceX Starship Hub room to look at the next generation of Mars spacecraft. That's at https://hubs.mozilla.com/SMUKcDy/starship-test

Update: Here's a video of the rover from inside our VR headset:



16 February 2021

Virtual Humans interview on the 1202 Podcast



Daden MD David Burden is interviewed on the 1202 Human Factors podcast about our work on Virtual Humans. You can listen to it at:

https://podcasts.apple.com/gb/podcast/virtual-humans-should-we-be-concerned-an-interview/id1478605029?i=1000538768732

12 February 2021

SpaceX Starship in Mozilla Hubs

 


Mozilla Hubs is designed to be a relatively low-poly environment so as to have speedy downloads and to work on mobile devices and standalone VR headsets.

Having been quite pleased with the size of terrain mesh we could bring into Mozilla Hubs, we decided to try a more substantial 3D model - in this case SpaceX's Starship spacecraft. The model is by MartianDays on Sketchfab, and whilst not the highest rez (we were trying to avoid that!) it does look pretty good. For the record it's 20,600 triangles, which compares to 1.2 million (!) for a hi-rez model.

Again we're quite impressed at how well it's come in. Load times are pretty variable, from 30s to 1 minute, sometimes 2 minutes, but it displays very smoothly once you're in there. And with a VR headset on, looking all the way up to the top you get a real sense of the scale of the thing. We've lifted it off the ground so you can walk underneath to see the Raptor engine bells.

If you want to take a look - in a browser or in VR - go to:

https://hubs.mozilla.com/SMUKcDy/starship-test

The hub.link code is 416509 for easier access in VR.





10 February 2021

Introducing VEDS - A Mozilla Hubs based Virtual Experience Design Space

 


iLRN Attendees: The link to the session is: https://hubs.mozilla.com/PNaoEvs/daden-virtual-experience-design-space - this will launch you straight into Mozilla Hubs. If you're in VR the hub.link code is 563553.

If you haven't been in Mozilla Hubs before we also have a tutorial space at: https://hubs.mozilla.com/fFVfvrq/onboarding-tutorial


On the basis of practice-what-you-preach we've created a prototype Virtual Experience Design Space (VEDS) in Mozilla Hubs and opened it up to public use. The space takes a number of design tools that we've used for many years, and which we already have within our Daden Campus space in Trainingscapes, and makes them available to anyone with a browser (including mobile) and any WebXR-capable VR headset (such as Oculus Quest). Of course the nice thing about WebXR (and so Hubs) is that everything runs in the browser, so there is no software to download - even on the VR headset.

Here's a video walkthrough of the VR experience.


Within the space we have four different 3D immersive experience design tools:

  • A dichotomies space, where you can rank different features (such as linear/freeform or synchronous/asynchronous) against a variety of different design factors (such as cost, importance, time, risk).
  • Sara de Freitas's 4D framework for vLearning design
  • Bob Stone's 3 Fidelities model
  • Bybee's 5E eLearning design model 
Each is represented by a suitable floor graphic, and then how you use them is up to you. There are labelled boxes for the dichotomies space - but you can use them anywhere. There is also a set of markers, so you can get people to "vote" by placing markers (or even their avatar) where they think the greatest importance/biggest issue/biggest risk lies. It's all about using VEDS as a social, collaborative space to do immersive 3D and VR design.

We've linked out in the world to our white paper on the subject for more background on the tools, and you can even browse this in-world (at least from a 2D device - we can't get it to work in VR yet!).

Of course Hubs has its own set of generic collaboration tools to use in-world and you're free to make use of those too: we've provided a media frame and a couple of whiteboards to draw on, and you can rez any of the standard Hubs objects - why not mark choices with cheeseburgers!

Here's a couple more images of the space:






Want to give it a try? Just point your 2D or VR browser to:


The hub.link code is 149638 for easier access from VR.

No sign-up is necessary, but remember that if you use our room, rather than spawning your own, then you may well meet other users (including us) in there. We're setting the room to "Remix allowed" so you can also create your own version of it - do let us know if you do!

If there's interest we might even see about holding some live events in there to talk about the tools in more detail.

We'd love to hear what you think of VEDS - just drop us a line in the comments, or email us at info@daden.co.uk if your organisation could do with a similar space structured around the tools you use.



8 February 2021

Social Virtual Worlds/Social VR



The last couple of years have seen a dramatic increase in the number of social virtual worlds around. We define such platforms as internet based 3D environments which are open to general use, are multi-user (so you can see and talk to other people) and within which users are able to create their own spaces. The worlds are typically accessible by both a flat screen (PC/Mac, and sometimes mobile/tablet) and through a VR headset.

At Daden we've been using these sorts of environments since the early 2000s. What is really beginning to change the game now is the emergence of WebXR social virtual worlds. These are completely web-delivered, so you only need your browser to access them (from PC, mobile and even a VR headset): there is no download, minimal difference between platforms, and really easy access. They are emerging as a great way to get people off of Zoom and into somewhere more engaging for whatever social or collaborative task you need to do. The key examples at the moment are probably Mozilla Hubs and Framevr.io.

Below we highlight some of the key affordances, benefits and uses of these WebXR Social Virtual Worlds.

  • Fully 3D, multi-user, avatar based, with full freedom of exploration​
  • In-world audio (often spatial) and text chat
  • Runs without any download – and even on locked down desktops
  • Graphics optimised for lower bandwidths and less powerful devices
  • Out-the-box set of collaboration tools, e.g.*: screen-share, document-share, whiteboard, shared web browser (*depends on world)
  • Free access model for low usage, many also open source
  • Developer ability (us) to build custom environments
  • Limited scripting ability at the moment
  • Give meetings more of a sense of “space” than video calls
  • Use environment and movement to help anchor the sessions and learning in memory
  • Help train what you can’t usually teach in the classroom
  • Excite and engage students and employees

We're also working up an analysis summary tool to describe the capabilities of social virtual worlds (and similar spaces), and a comparison between them - watch this space and read the earlier blog post.

If you'd like to know more about business and educational use of Social Virtual Worlds and their underlying technologies then do get in touch and we can arrange a chat and live demo. We'll also be posting some live spaces here shortly, and you can also read about (and see some demos of) the WebXR technology that underlies these worlds on our WebXR page.

3 February 2021

DadenU and SocialVR Try-Out: Spatial

 


As part of my DadenU day I thought I'd try out another SocialVR environment - this time Spatial (not to be confused with vSpatial which is a VR "virtual desktop" app). Spatial is downloaded as an app onto the Quest. The web client only works in "spectator" mode rather than giving you a full avatar presence.

A really nice touch is that your available "rooms" show as little mini 3D models to choose from (see above). However this doesn't look to be scalable, as they are a fixed selection of shapes, and it actually only emphasises the pre-canned nature of the environment.

Another "USP" is that it takes a webcam image to generate your avatar. OK I didn't try to optimise it and was probably look up too much but the result was very uncanny valley... And you can see from the arm position why many apps have gone for only head-body-hands, missing the arms.


There are a number of fixed office/meeting-room type environments to use, but no option I can see at the moment to make your own. There is a big display board, but it wasn't really clear how that worked. "Spectators" appeared in-world as their webcam feed above the board. You could separately rez post-its, a shared web browser, images from your desktop etc. There's also a library of 3D models which I guess you can add to - but it's not too obvious how.

Navigation seemed very clunky, with step-wise rotation on the joystick - it was hard to manoeuvre in a space which, albeit small, is still larger than the physical room I have to wander around freely in VR.

And that's sort of it.

As a step towards a very business-orientated VR space it's not bad. If they can improve the typical avatar capture to the quality that seems to be in the promotional shots that would be great, and a scalable version of the room models would be wonderful to see more widely. But why be a slave to the real-life model? Their "Mars" planning room has a small model of a crater, and a small model of Curiosity - why not have Curiosity full scale (and the crater small scale) and have the meeting on Mars?

Here's my initial take on Spatial plotted on our radar diagram, and below some key thoughts on each measure.


  • Accessibility/Usability - Headset download, reasonably intuitive but clunky movement
  • Avatars - nice try at scanning but pretty uncanny valley
  • Environment - nice looking but fixed and few
  • Collab Functions - reasonable starter set but could improve
  • Multi-Platform - really only VR for full functionality
  • Object Rez - fixed library only? 
  • Open Source/Standards - doesn't seem it
  • World Fidelity - none, non-persistent rooms
  • Privacy/Security - room ownership/invites only
  • Scripting - none






1 February 2021

Evaluating 3D Immersive Environment and VR Platforms

 


As part of our work on a forthcoming white paper looking at the ever-growing range of 3D immersive environment platforms available we thought it would be useful to identify some criteria/function sets to help make comparison easier. If you were to use these as part of a selection exercise then you'd probably want to weight them appropriately as different use-cases would need different features. We've also assumed that all offer a common feature set, including multi-user and text/voice communications, walk/run, screengrabs etc. 

A lot of these categories are quite encompassing, and a further analysis may split them down into sub-categories - or create additional higher-level categories where a particular feature becomes important. We may well revise this list as we work through the platforms (at least 25 at the current count), but here's our starter for 10 (in alphabetical order).


A/accessibility (with a lowercase a) and Usability

How easy is it for people to access and then use the environment? The gold standard is probably now WebXR environments that run in a browser. If the environment needs a download then is it a reasonable size (<500MB), does it install easily, and crucially will it cause issues in a locked-down corporate or education environment? Once you've got the app installed, is it easy to get to the content you want, or do you have to wait for another long download? And then, once in-world, are all the controls obvious and the core UI (whether on screen or in an HMD) easy to use? Also, how hard is it to navigate with your avatar (an area where many new users still struggle)? For HMDs in particular, is immersion maintained throughout the experience? Upper-case A Accessibility is a whole other ball game and we'd love to do some work on access to VR for those with disabilities (we've done a demo of an audio virtual world).

Avatar Fidelity & Customisation

People get very focussed on what their avatar looks like. It's interesting that many of the current crop of SocialVR worlds have a) gone for quite cartoony, low rez avatars, and b) gone for a variation of the head-body-hands combination, with all things in between missed out (an inverse kinematics problem). Whilst the low-rez model might work for social gatherings and even business meetings/conferences does it also work for high-skills training? If you've got a lot of people in a multi-user environment then they are more likely to want a distinctive avatar (and often even if only single user!), so how easily and extensively can they customise their avatar? For added points how easily can your avatar emote, gesture, animate, show facial expressions and lip-sync to your voice?

Environment Fidelity and Flexibility

Just as people obsess about their avatar, they can also obsess about every last pixel in the detail of the environment - too much Grand Theft Auto and not enough Pokemon Red in their childhood? Whilst a certain level of environmental fidelity is useful, too much can place too high a demand on the device they are using, or have them spend all their time looking at the scenery and not at the task. What the right balance is will very much depend on the application, so can the platform support the range of fidelities you are likely to use? A nice touch with Mozilla Hubs is a dashboard of the impact of what you are building on its usability by a device - in terms of factors like polygon counts, lighting effects, texture sizes etc - at least it keeps it front of mind. Generally most worlds will support most fidelities, but some have hard limits on object and image upload sizes.

We also might need flexibility in the environment. We might be happy with some out-of-the-box templates for a meeting room, conference or classroom, or just the ability to upload 360 photosphere backdrops. Or we might want to build our own, or import from Blender, 3D Studio Max or elsewhere.

Meeting and Collaboration Functions

One of the biggest use cases for SocialVR is around social and business meetings and gatherings, from a bunch of friends getting together to a big international conference. There is a definite feature set which, whilst it could be customised afresh by each user/creator, is often made available out-the-box to get people started on collaboration and social tasks. This is likely to include items like image, file and screen sharing and virtual white boards, and may include shared web-screens, tilt-brush style in-world 3D drawing/graffiti and 3D object import/rezzing. It is taken as read that the world supports text chat and voice chat. Perhaps one of the most impressive features in this space is the 3D scene recording in EngageVR - nothing weirder than watching a replay of an avatar of yourself from within the same room!


Multi-Platform Support

The key issue for us is whether the world supports access in a 2D/3D (aka first-person-shooter/Minecraft/Sims) mode from an ordinary PC as well as from a VR head-mounted display (HMD). Most people still don't have a VR headset, yet immersive 3D is of benefit to all. And even if they do have a VR headset there are a lot of use cases (in a cafe, on the train/bus, sat on the sofa half-watching TV) where they'd probably prefer the 2D/3D approach. There's also the question of which HMDs are supported - will it run on an untethered device like the Quest, or does it need the power of a PC behind it like a Rift or Vive? And does it work on some of the "third-party" headsets like the Valve Index, HP Reverb or Pico? Of course if a world has cracked the accessibility challenge through using WebXR then multi-platform support may be more easily achieved (but still isn't a given).

If the world offers a 2D/3D mode then we assume it runs on Windows and Macs, but what about Linux or Chromebooks? We also find that many students want to use iOS and Android devices, so does it also work on tablets and even smartphones - and this is where the issues of environment fidelity are likely to bite.

Object Rez, Build and Import

As with the environment we might want some ability to build our own objects in the world, or be happy with the provided libraries. Many worlds offer different permission levels, so "visitors" might not be able to rez things, or only rez a limited range, whereas "owners" can build anything. If I want something that isn't in the in-world library then how do I build it? Do I do this in-world through some sort of virtual Lego, or through what we used to call prim-torture in Second Life? Or do I buy it in from a site like Sketchfab (where there may be IP, size or format issues), or can I import meshes from a 3D design tool like Blender (in which case what formats are supported - glb/gltf seems to be the emerging format for WebXR worlds)?
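As a quick illustration of that last route, here's a minimal sketch of exporting a scene from Blender to .glb via its Python API - run from Blender's scripting workspace, and the output filename is just an example:

```python
# Sketch: export the current Blender scene as a binary glTF (.glb),
# the format WebXR worlds such as Mozilla Hubs prefer.
# Run this inside Blender's scripting workspace; the path is illustrative.
import bpy

bpy.ops.export_scene.gltf(
    filepath="//my_model.glb",   # "//" = relative to the saved .blend file
    export_format="GLB",         # single binary file rather than .gltf + .bin
)
```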

Open Source/Standards Based/Portability

A lot of early cyberspace was about walled gardens - AOL, CompuServe etc. The web, driven by standards like HTML and HTTP, blew that all away. In particular people could a) build their own content, b) link their content to other people's content and c) host their own content on their own hardware (outside or inside a firewall) if they really wanted to. And apart from any hosting costs you didn't need to pay anybody anything. If we really want 3D environments to take off to the same degree as the web then we need something similar. The software that drives it needs to be open source, the different software and hardware components need to talk to common open standards, and assets (not only 3D objects but also more nebulous things like identities and avatars) need to be portable (or linkable) between spaces. The OpenSim HyperGrid is probably the closest we've had to this, and the move to WebXR might give us another way in to this model.

Persistency/Shared World/In-World Creativity (World Fidelity?)

Probably one of the biggest differences between something like Second Life and the current generation of SocialVR spaces is that the latter lack the persistency and single-world model of Second Life. Most of the current platforms (Dual Universe may be an interesting exception) work on a spaces/room model, where you develop in a series of spaces, which you might then open up to visitors, and/or link to other spaces. This is a long way from the one-world model of Second Life (and also Somnium Space) where you just buy land and develop it (which is also closer to the original Snow Crash model). These spaces also tend to have a default state which they revert to once everyone has left - you might be able to rez things whilst you are in-world, but those objects disappear when you leave - this is particularly true (and probably desirable) in training- and learning-orientated spaces. If your Object Build model supports in-world building then persistency sort of becomes essential, as that is how the world is built. I'm starting to like the idea of "World Fidelity" to describe all this - how much does the virtual world actually feel and behave like the real, peopled, physical world?

Privacy and Security

In a lot of applications you need to have some control over the privacy and security of your part of the virtual environment. You may only want your employees, students or clients in your space. Mind you, there are also occasions when you want visitors to have rapid, anonymous access with no sign-up to put them off. Most platforms offer variations of access control, and perhaps it can be taken as read - but it's worth double-checking that the implementation will meet your specific needs.

There is also an emerging interest in using blockchain technologies to secure and manage the rights of ownership and IP in the objects created in the virtual world - such as in Somnium Space. This may be overkill for many applications, but is an interesting area to watch.

Scripting and Bots

Being able to walk around a space and look at things is nice. But being able to interact with objects (particularly if they can link through to Web APIs) is better, and being able to see and talk to bots within the space, which makes the whole place seem more alive, is better still. Scripting seems to be a big divide between platforms. Many (most?) of the SocialVR spaces don't support scripting - certainly not as an ordinary user, or even as a "developer". The more training-orientated ones do have scripting (but what language and how easy is it?), and some platforms have it only through a developer SDK (e.g. MREs in AltSpaceVR). Once every world supports the equivalent of JavascriptXR then we can really begin to see some innovative use of virtual space.

Cost

Cost is always likely to be a factor in any selection, but I was always taught that it should be on an orthogonal axis to the feature selection, and it may also be related to the Open Source/Portability assessment.

Radar Plot

When we've got a set of parameters like this we find radar plots a nice way to represent target systems. The diagrams below show some prototype radar diagrams for Second Life and Mozilla Hubs.


Radar Plot - Mozilla Hubs


Radar Plot - Second Life
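For anyone who wants to reproduce this sort of plot, here's a minimal matplotlib sketch using short labels for the ten criteria - the scores are illustrative placeholders, not our actual assessments of any platform:

```python
# Sketch: radar plot of the ten evaluation criteria using matplotlib.
# The scores below are placeholders, not our real ratings of any platform.
import numpy as np
import matplotlib.pyplot as plt

criteria = [
    "Accessibility/Usability", "Avatars", "Environment", "Collab Functions",
    "Multi-Platform", "Object Rez", "Open Source/Standards",
    "World Fidelity", "Privacy/Security", "Scripting",
]
scores = [4, 2, 3, 3, 4, 2, 3, 2, 3, 1]  # placeholder ratings on a 0-5 scale

# One spoke per criterion, closing the loop back to the first point.
angles = np.linspace(0, 2 * np.pi, len(criteria), endpoint=False).tolist()
angles += angles[:1]
closed_scores = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, closed_scores)
ax.fill(angles, closed_scores, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(criteria, fontsize=8)
ax.set_ylim(0, 5)
ax.set_title("Radar Plot - Example Platform")
plt.show()
```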


A Starter for 10

We are well aware that there are some things we've left off this list that others might see as vital - things like latency, for instance. But it is our starter for 10. I'm sure your list will vary, and we'd be interested to hear variations in the comments.