11 November 2021

An AI Landscape — What Do You Mean by AI?


I've updated my AI Landscape piece, as I find myself using it a lot, and have posted it to Medium.


I think I'll post all my long-form pieces to Medium as it's quite a nice platform to write and read things in, and probably gets more reach than the blog! But I'll always post links here as things get published.

10 November 2021

3D Immersive Visual Analytics White Paper - Just Updated


We last updated our 3D Immersive Visual Analytics White Paper in 2017! A lot has happened since then so it was well due an update - even though 80% of the paper covers the fundamentals, which a) haven't changed and b) still need to be learnt by many people.

Although we stopped selling and using Datascape a couple of years ago, the use of VR for data visualisation is still something that is very close to my heart and that I'd like to do more work on. My first steps in using WebXR are a move in this direction, and the current Traveller Map WebXR demonstrator is actually a data-driven app which has many of the key features needed for a WebXR IVA system. I'll hopefully get some development time over the next few months to pull a proper demo together.

In the meantime you can revisit some of our immersive 3D visualisations of the past at:

Read the updated 3D Immersive Visual Analytics White Paper now!

8 November 2021

David speaking at CogX on Virtual Humans and Digital Immortality


Somehow I forgot to blog this when I did the talk at CogX back in 2019, but it's still highly relevant and I've been linking to it from the updated website, so I thought I might as well post it up here as well.

1 November 2021

Jon Radoff's 7 Layer Metaverse Model and Market Map


Jon Radoff has an interesting "7 layer model of the Metaverse" at https://medium.com/building-the-metaverse/the-metaverse-value-chain-afcf9e09e3a7 and a mapping of the recent Facebook/Meta announcements at https://medium.com/building-the-metaverse/facebook-as-a-metaverse-company-d5712198b22d.

Needless to say we all have our own variations/comments on this. For me, Infrastructure/Decentralisation/Spatial should be the bottom 3 layers, and the Human Interface should really be the outer one - you don't (or shouldn't) build your virtual experience on the human interface; you should build the experience and then open it up to as many interface types as you can. Discovery could perhaps be split between internal discovery (how you find out about stuff in the world) and external discovery (how you find out about experiences, and particularly how one experience links to another). And should the Creator economy sit on top of the experience? In a true Virtual World people are using the experience to create the economy, which then enhances the experience (so it's circular!). Perhaps something more like a network model would be more realistic (if less eye-catching) given the interconnectivities involved - it's not quite the OSI 7 layer model diagram.

I'll have to dig out one of my old models.

Jon also has a Market Map of the Metaverse at https://medium.com/building-the-metaverse/market-map-of-the-metaverse-8ae0cde89696 - which is likely to be quite dynamic!

Jon also has a Metaverse Canon Reading Guide at https://medium.com/building-the-metaverse/the-metaverse-canon-reading-guide-9eb1b371b505

29 October 2021

Facebook (aka Meta) Connect2021

Me at Connect21 in Venues*

*although you could see the video in-world, it doesn't show on the photos you take

I attended the Facebook Connect2021 event in Oculus (now Horizons) Venues yesterday afternoon. It struck me that there were a lot more people in VR than last time - around 1,200 were meant to be there - although we were all sharded into ~20-person pods (and they were on two levels so you really only saw ~10).

Needless to say the whole name change thing and the relentless focus on the Metaverse made it a bit more weighty than the last one I went to which just introduced the Quest 2, but apart from Mark Zuckerberg finally "getting" virtual worlds there was really little there that wasn't being talked about 10 years ago. Sure the tech is moving on but there are still lots of areas which need to be sorted.

So here are my highlights/comments on the talk, in roughly timeline order.

  • Yep, pretty much everything talked about in terms of what you could do in the VW element of the Metaverse you could do in SL 15 years ago
  • The one big difference, and which would be cool, is the better integration of your desktop into VR and bringing your phone into VR so that you can stay in world whilst checking "RL" information or having chats with people in "RL"
  • Zuckerberg talked about the metaverse as being the "replacement" for the mobile internet, but I think despite the fancy nature of the "metaverse", even when implemented through MR glasses we'll still find the unobtrusive nature of the smartphone (along with battery life etc) to be valuable, so I think they'll just be parallel streams into the meta-metaverse!
  • He talked about interoperability in terms of being able to bring avatars and virtual objects between worlds - but it wasn't clear if this was between FB/Meta ecosphere applications only, or out to third party virtual worlds. I fear the former. At the ReLive conference in 2011 we highlighted interoperability as the key medium term need - and we're still not there!
  • Some nice quick videos of the various Horizons platforms, with Rooms going from single to multi-user, and it looks like Spaces will have a Scratch-type scripting system, which will be great news as long as it's powerful enough for decent work.
  • Also some nice hints about better tools to enable 2D progressive web applications to display within the 3D/VR environment, further breaking the barrier between the 3D/VR and 2D/RL environments!
  • The whole presentation kept switching from RL to greenscreen to Horizons to smoke and mirrors, by the end I wasn't sure if Mark was really an avatar or not and you certainly couldn't tell what was real tech and what was marketingware.
  • There seemed to be a lot of emphasis on "holograms" and going to real concerts/meetings as holograms with real friends/colleagues. It wasn't at all clear how that was ever going to work, although the AR glasses would sort of make it feasible. Mind you, having tried to stream a bit of video out of a concert recently I can't see how the bandwidth will ever be there! Oh, virtual after parties - tick, done that, a great one after The Wall show in SL "back in the day", actually on the stage set.
  • The lack of a VR keyboard in Venues itself (and most VR apps) was keenly felt, as it meant that the dozen or so of us there couldn't really interact whilst the session was going on. Meta may have a solution - see below.
  • A big announcement for the short term was that Oculus products will not now mandate FB accounts, so the concerns that many corporate/academic clients had might go away. They will also continue to allow sideloading onto Quest, so it looks like the Quest will remain a more open platform and not be locked down to FB/Meta - hurrah!
  • Lots of stuff on their "presence" SDKs to help developers better integrate controllers, MR/pass-through features and voice control. "Presence" was emphasised a lot, more than immersion (just like we've been telling a recent client!)
  • Project Cambria will be their next-generation high-end headset, non-tethered but more expensive than the Quest, although they hope features will gradually drop down to the Quests. Hi-rez colour pass-through to better enable MR applications, plus sensors to detect facial expression and eye movement for more natural avatar interaction, seem to be the key differences - nice.
  • Project Nazare is their AR/MR glasses project, heading for full Hololens-type MR but in a spectacles form factor. This is what would make their "holograms" believable - as long as you put glasses on all the RL people.
  • A list of future "breakthrough" areas for VW tech looked very generic - everything talked about was evolutionary - and was interestingly missing neuro-interface tech.
  • Then he talked about neurotech (!), or specifically electromyography (EMG) and a device which looked suspiciously like the old Thalmic Labs MYO. Still have mine (see photo below). This senses what your fingers are doing from the electro-neural impulses picked up at the wrist. Some nice demos (?) of subtle gesture control, including typing. In theory you hardly need to actually move your fingers at all. One of the more interesting things shown. Does it work for feet and locomotion too?
  • The Codec photo-realistic avatars looked quite good - volumetric and based on face/body scans, but with the ability to then change hair, clothes etc. The clothing demo and talk about hair modelling reminded me of how much we wrote about those areas in our Virtual Humans book.

My old MYO

Needless to say all the bad press side of Facebook, privacy and corporate greed, didn't get a look in (although there was a snide comment about high taxes (!) stifling growth and innovation), but he did use the words "humble" or "humbling" more times than I could count, and Nick Clegg popped up for about 30 seconds to say not very much. As some of our recent work for a client has highlighted, VR offers unparalleled insights into personal behaviour, and it's vital that our explicit and implicit data is secure and not being exploited, and that the "metaverse" is not under the control of one company. If Facebook was really serious about the metaverse they'd open source the whole lot right now.

You can watch the presentation for yourself at https://www.youtube.com/watch?v=Uvufun6xer8&ab_channel=Meta

Full set of Connect21 presentations on the more technical side at https://www.facebook.com/watch/113556640449808/901458330492319/

Update: Looking at the World Building in Horizons video, it really is very low rez at the moment, possibly even less than Hubs, and there is talk of strict performance limits. So good for some stuff but a whole load of use cases wiped right out (for now) :-(

27 October 2021

WebXR Wars

Interesting piece in Immersive Web Weekly on the "rift" between the Oculus/Chrome implementation of WebXR and the Firefox/HTC/others version. 

Today XR headset makers face a tough choice among immersive browsers: invest a considerable number of people to maintain a custom Google Chromium-based browser with great WebXR support (as Facebook does for the Quest) or port a Mozilla Gecko-based browser like Firefox Reality and deal with XR support that hasn't changed in any substantial way since Mozilla's Mixed Reality team was laid off more than a year ago.

Last week Shen Ye followed the path of the Pico team in revealing that HTC's new Vive Flow headset will ship with Firefox Reality and Stan Larroque also announced in a YouTube livestream that the Lynx team is working on a Firefox Reality port for their headset.

HTC, Lynx, Pico and Mozilla do not intend, as far as I can tell, to update Gecko's WebXR implementation. The divergence between Chromium- and Gecko-based browsers has already fragmented the fetal immersive web, forcing developers to choose between supporting only one browser engine or writing what is effectively two separate rendering and input handling paths for their code. If we want a healthy and open immersive web then this must change.

(editor's note for transparency: I was the original product manager for Firefox Reality but left Mozilla before the layoffs. I have donated to the Mozilla Foundation for more than a decade and continue to do so but have no equity or other financial ties.)

We've bumped into this at Daden when looking at using the Pico for a potential project. Pretty typical of the industry: agree a new standard and then immediately split it in two!
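In practice, the defensive move for developers caught between divergent WebXR implementations is to feature-detect at runtime rather than assume a particular browser engine. A minimal sketch of that approach is below - `pickSessionMode` is a hypothetical helper name of my own, but `navigator.xr.isSessionSupported()` and `navigator.xr.requestSession()` are the standard WebXR Device API calls:

```javascript
// Sketch: picking the best available XR session mode at runtime,
// falling back gracefully when a browser's WebXR support is partial.
// pickSessionMode() is a pure helper: given a map of which session modes
// the browser reports as supported, return the most capable one, or
// 'none' when there is no WebXR at all (render a plain 2D page instead).
function pickSessionMode(supported) {
  const preference = ['immersive-ar', 'immersive-vr', 'inline'];
  for (const mode of preference) {
    if (supported[mode]) return mode;
  }
  return 'none'; // no usable WebXR - fall back to a 2D experience
}

// In a real page you would populate `supported` with the standard API,
// which returns a Promise<boolean> per mode, and only then call
// navigator.xr.requestSession(mode) inside a user gesture.
async function detectSupport() {
  const supported = {};
  if (typeof navigator !== 'undefined' && navigator.xr) {
    for (const mode of ['immersive-ar', 'immersive-vr', 'inline']) {
      supported[mode] = await navigator.xr
        .isSessionSupported(mode)
        .catch(() => false); // treat errors as "not supported"
    }
  }
  return supported;
}
```

The point of the pure helper is that the fallback policy can be unit-tested without a headset, while the browser-specific probing stays isolated in one function.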

8 July 2021

RAF's Stories from the Future - Complete with Virtual Personas

The Royal Air Force has just released a second edition of their "Stories from the Future" - fiction pieces designed to get people thinking about the future of the Services and Air Defence.

One of the stories, "Heads Together" draws on some of the work we've done for MOD around the concept of virtual personas:

"Diverse viewpoints make for better decisions, so imagine a world where the whole of society engages with Defence through some form of service and the friends that you make there can convene virtually when you need to discuss a problem, whether they now work in Defence, in industry, in academia – or at all. In this tale, we look at how our people might benefit from this in the future"

The question that the article explicitly poses at the end is:

How would you feel about being perpetuated in virtual form after you had changed jobs, left Defence or even died? Would advice from your virtual self be a liability to your real self?

You can read this, and the other stories at https://www.raf.mod.uk/documents/pdf/stories-from-the-future-second-edition/