29 February 2016

The VR Landscape Map

In trying to explain VR to people it has been interesting to see how certain notions get fixed in people's minds. In particular there seems to be a prevailing view that Cardboard-type solutions are for simple VR, such as 2D photospheres, and that Oculus is for complex VR, such as 3D interactive games. One such example is a recent CBC blog post, "Virtual reality: The difference between a $20 and $1,100 VR headset", which said that "Google Cardboard is passive VR" and then, talking of the Samsung Gear VR, said "you can look around, but you can't move around in the virtual space." Now it may be that the author was talking about lateral tracking (though even that could be done by those headsets), but the lay reader is bound to come away with the sense that Cardboard and its ilk are all about static experiences and photospheres, and that Oculus and co are the "real" VR.

To help better explain the VR landscape to people we've come up with a simple 2x2 grid. It featured in our recent white paper, but we've had such a good response to it that we thought it was worth giving it its own blog post.


The grid is based on two key features of a VR solution:

  • Is the content natively 2D (such as photospheres), or natively 3D (3D CGI)?
  • Is the device being used to experience the content based around a smartphone-plus-holder solution, or an integrated headset?

Of course there are blurred edges between the 2D and 3D content categories (stereoscopic imagery, 3D objects against photosphere backgrounds), and even on the device side the categories may not be quite so clear-cut as different headsets emerge, but it hopefully gets people thinking about the important distinctions, and the saliency of each option against their needs.

The grid also shows where some typical VR use cases may best sit. The important point is that, in theory, almost any of them could be implemented in any quadrant, but some approaches will be far more cost-efficient and offer better capabilities and affordances than approaches from other quadrants.

As with all 2x2 matrices I'm sure we could make it more complex to deal with all sorts of special cases, but for now we think it's at the right level to get the important message across.

And the next challenge is to work out a nice diagram to explain the difference between 2D screen based 3D, VR and Augmented Reality!

26 February 2016

DNS Visualisations


In putting together some demos for Datascape2XL we've been diving back into our archives to revisit some of our favourite datasets and give them the "XL" treatment. This dataset shows DNS activity on a computer network, with time wrapped around a cylinder (left to right), and the DNS addresses being plotted "up" the screen, the data being sourced courtesy of our friends at Assuria.
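The geometry behind that layout is simple enough to sketch. The snippet below is purely our own illustration (the function names and parameters are made up; this is not Datascape's actual code): a timestamp is turned into an angle around the cylinder, the angle into x/z coordinates, and the DNS address index gives the height.

```javascript
// Hypothetical sketch of wrapping time around a cylinder - not the actual
// Datascape code, just the underlying geometry.
// One sweep of sweepDegrees covers the whole time span [tMin, tMax],
// and the DNS address index gives the height "up" the screen.
function cylinderPoint(timestamp, tMin, tMax, addressIndex, radius, sweepDegrees) {
  const fraction = (timestamp - tMin) / (tMax - tMin);    // 0..1 through the span
  const angle = fraction * sweepDegrees * Math.PI / 180;  // radians around cylinder
  return {
    x: radius * Math.sin(angle),  // around the cylinder
    z: radius * Math.cos(angle),
    y: addressIndex,              // DNS address plotted "up"
  };
}

// Folding all the data onto a single 24-hour clock is then just a modulo
// on the timestamp before the same mapping.
function clockFraction(timestampMs) {
  const DAY_MS = 24 * 60 * 60 * 1000;
  return (timestampMs % DAY_MS) / DAY_MS;  // 0..1 through the day
}
```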

The image below shows the original plot. This had about 20k data points covering about a week. The vertical "stacks" or "bands" correspond to heightened activity during days of the week. We're not quite sure why there's such a big gap; probably the PCs we selected (to keep the data volume down) were off on those days.


Even with so few points there were a lot of features of interest, such as the obvious "beaconing" going on against some DNS addresses pretty much 24/7.

The images below and at the top of the article show the result with Datascape2XL and all 848,000 data points plotted, covering 100+ PCs over about 8 weeks (you can see the 8 blocks of 5 days of weekday activity).


We set the radius to about 180 degrees, but could easily have opened it out to the full 360 degrees. The black background works great on a big monitor, possibly less so on this web page, but is (like everything else in the visualisation) user selectable. We also did a version where we plotted all the data against a single 24-hour clock so you could see if things happen at the same time every day.

The image below shows a couple of days in detail.


So every point is a DNS request being made. The colour is assigned by the DNS request type: red is normal "A" IPv4 requests, cyan is "PTR" requests, orange is SOA, yellow is SRV, etc.

Beaconing on both A and PTR requests clearly stands out, as do some of the vertical "DNS cascades" caused by web pages which pull in resources (aka "ads") from lots of servers. It's interesting that of the 3 days in the lower image only the first day shows SOA/SRV activity. It also looks like there was a bit of activity late on Sunday night!

The final image shows a close up zoom of the individual requests, with hover text on a particular record.


We'll post a video in the next few weeks to give you a real sense of what it is like to fly through this visualisation. The video will also show what the visualisation is like with audio added - a "sonification" - since with the new version of Datascape we can map data fields to sounds as well, leveraging the fact that the brain actually parses audio information quicker than visual information (especially text).
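As a sketch of what mapping a data field to sound might involve (our own illustration, not Datascape's actual audio code; the function name and frequency range are made up): a field value can be mapped onto a pitch range, with an exponential mapping so that equal steps in the data sound like equal musical intervals.

```javascript
// Illustrative sonification sketch only - not Datascape's implementation.
// Map a numeric field value onto a pitch between two frequencies,
// e.g. 220 Hz (A3) to 880 Hz (A5), so higher values sound higher.
function valueToPitch(value, vMin, vMax, fMin = 220, fMax = 880) {
  const t = (value - vMin) / (vMax - vMin);  // normalise to 0..1
  return fMin * Math.pow(fMax / fMin, t);    // exponential = equal musical steps
}
```

The resulting frequency would then be fed to whatever synthesis layer the application uses (in a browser, for instance, a Web Audio oscillator).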

19 February 2016

Latest Newsletter out - VR and Fieldscapes

Our February 2016 Newsletter is out. In this latest issue of the Daden Newsletter we look at:

  • What is going on behind the hype of virtual reality, with a neat 2x2 matrix which we think nicely summarises what the current options are through 2016, and probably beyond.
  • Fieldscapes - the new name for our InnovateUK funded Virtual Field Trips Phase 2 project, as part of their Design for Impact programme. We're now working closely with the Field Studies Council and the schools using their Preston Montford field studies centre in order to ensure that Fieldscapes is designed to meet the needs of teachers and pupils.

We also provide an update on the development of our 2nd Generation Datascape 3D immersive data visualisation application, with some sneak-peek screenshots, and the planned beta and live release dates.

Read it now!

We hope you enjoy the newsletter, and do get in touch if you would like to discuss any of the topics raised in the newsletter, or our products and services, in more detail!


18 February 2016


By: Joe Robbins


For our recent Daden-U day, I decided to look into A-Frame, the new open-source framework from Mozilla that is designed to make producing websites containing virtual reality content quick and simple.


Examples of the kind of content that is possible through A-Frame can be found at https://aframe.io. To experience the examples as they were intended you will either need to visit them on a PC with an Oculus Rift or on a mobile device with an attached VR viewer such as Google Cardboard.

If you are using the Oculus, you may need to download a browser capable of handling VR content. I opted for Mozilla's own Nightly browser, and it worked fine.

My Experience

Given the number of Daden’s projects that utilise VR I thought that it would be a good idea to try building a small VR website informing visitors about who Daden are and what we do here.

I started by downloading the Nightly browser and going through the examples presented on the A-Frame website. This served to give me an overview of what could be accomplished using A-Frame and also gave me some inspiration as to what I wanted my website to look like.

I then grabbed the office's Oculus Rift Development Kit and hooked it up to my PC, and was pleasantly surprised by how painless it was to begin viewing the example sites through the headset.

It was time to pull back the cover and see how these pages were made. I had expected that to achieve even the simple results shown in the image below, one would have to write code of nightmarish proportions. The reality was quite different, and the process was no more complicated than adding any kind of content to a website.


For instance, the body of the page shown above consisted of nothing more than the listing below.


Nothing here should look too peculiar to anyone who has covered even the basics of web development. A-Frame introduces its own set of HTML tags, but fortunately all of the properties for these tags are fairly self-explanatory.
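Since the listing itself was shown only as an image, here is a hedged sketch of the kind of A-Frame page body being described. These are the standard A-Frame primitives from its hello-world example, not the exact page above, and the release number in the script URL is just one current at the time; check aframe.io for the latest.

```html
<!-- A hedged sketch of a minimal A-Frame page, not the exact page shown above. -->
<html>
  <head>
    <script src="https://aframe.io/releases/0.2.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```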

I read further into the framework’s documentation and found that it utilises an entity-component paradigm that will be familiar to anyone who has ever developed using a game engine like Unity. For those who haven’t seen this system, it simply involves having an entity, which is a general-purpose object within a scene, and a series of components attached to said entity. The components house properties that affect the behaviour of the entity.
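To illustrate the pattern (a generic sketch of entity-component, not A-Frame's actual internals - the class and method names here are our own): an entity is just a general-purpose object, and the components attached to it hold the properties that drive its behaviour.

```javascript
// Generic entity-component sketch (not A-Frame's internal implementation).
// An entity is a general-purpose object within a scene; components
// attached to it house the properties that affect its behaviour.
class Entity {
  constructor(name) {
    this.name = name;
    this.components = new Map();
  }
  attach(componentName, properties) {
    this.components.set(componentName, properties);
    return this; // allow chaining, much like stacking attributes on a tag
  }
  get(componentName) {
    return this.components.get(componentName);
  }
}

// A "box" entity built from position and geometry components,
// analogous to how A-Frame's HTML attributes configure a tag.
const box = new Entity("box")
  .attach("position", { x: -1, y: 0.5, z: -3 })
  .attach("geometry", { primitive: "box", width: 1 });
```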

Whilst the documentation for A-Frame isn’t complete at the time of writing, the content that was there did a very good job of introducing the different building blocks with which one produces a VR website.

It was finally time for me to try my own hand at writing a VR webpage. After rolling up my sleeves, I created a new project in Visual Studio, where I would be writing my website. Adding the A-Frame script to my site was accomplished with a single line referencing the location of the latest version.

I planned to produce a very simple page to begin with, featuring the Daden logo, a few paragraphs and images illustrating what we do, and a video showing some of our work as the cherry on top.

I hit a roadblock immediately: I attempted to import a PNG file containing the Daden logo into my page and tried to run it. All that was waiting for me was a black rectangle. I tried a couple of other images, with varying degrees of success. Visiting the relevant page in the documentation I found a message to the effect of "Yeah, images don't always work, but we're going to try and fix it some day, maybe".

With the image not working out, I moved onto the text. I crawled through the documentation looking for some page that would tell me how to present text and came up empty. It would appear that, at least for now, there is no functionality in A-Frame for presenting a paragraph of text. The workaround that sprang to mind was to import the text inside an image file and then just display the image. This would have been fine, had I not already realised that images don't tend to present themselves correctly.

At this point I was holding out very little hope that the video would work, but to my surprise I managed to import it using a single HTML tag.

So in the end my page consisted of nothing but a floating video, which is not much use to anyone. Perhaps with a bit more time spent working out how to successfully bring pictures into my site I could begin to produce something impressive. But we may never know, because soon after, my graphics drivers decided to go haywire and I had to waste time getting my screens back up and running.

The Good

  • Easy to utilise - 3D shapes can be produced with a single HTML tag, and the attributes attached to those tags are simple and easily manipulated. Bringing in the A-Frame script is also handled with just one tag.
  • Impressive example pieces - A few of the example sites I encountered did impress me, if not in terms of functionality, then at least in terms of design. One that stood out to me was the clothing shop pictured below.


The Bad

  • Importing images rarely worked - I could only seem to import images that were already hosted online, rather than from within my own Visual Studio project. While I’m certain I could have found a workaround, I was disappointed that such a fundamental task was not as hassle-free as the rest of the framework.
  • Requires custom-built content - It shouldn’t surprise anyone that we can’t simply load up our old web pages, plug in a headset and immediately be treated to a great VR experience. A VR website needs to be built from the ground up, and needs to be designed to take advantage of the extra dimension granted to its users. This means that 3D models are infinitely preferable to flat images, and somebody is going to have to create all those models.
  • Reading text just doesn’t work - One of the examples on the A-Frame site displays some articles from Wikipedia. The problem with that is that the resolution of the displays in VR headsets simply isn’t high enough to display that text clearly at a distance. This means that the user has to move very close to the text and ends up craning their neck just to read one line of text. I think we can all agree that this is not the kind of progress we imagined the VR revolution to bring about.
  • Poor community support - As is to be expected with any new technology, there is not yet a wide array of resources available in terms of support. This meant that when I ran into problems, such as my images not rendering correctly, I was pretty much on my own. Coupled with the incomplete documentation on the official A-Frame website, this means that a fair amount of perseverance is required when debugging with the framework.


I cannot deny that A-Frame is an intriguing new technology, and that it is a promising start for VR web development. I am also very impressed with how simple it is to make use of the framework. However, I still have my doubts about viewing web pages in virtual reality. A lot more design work needs to be put in if we are going to come up with sensible ways to present information to users.

Ultimately, I feel that the future of the VR web is in the hands of content creators: any compelling virtual reality website is going to need a wealth of 3D models, images and videos that have been specifically crafted for viewing in this new format. Time will tell if A-Frame is the start of a VR web revolution or just a nifty way of drawing some cubes in your browser.

12 February 2016

3D Immersive Data Visualisation Video Talk

David's video presentation this week on BrightTALK is now available to view. It's a 45-minute intro into how we got involved with 3D data visualisation, and an introduction to the capabilities of our Datascape 3D visual analytics application. Note that you need to have or create an account on the BrightTALK site to view it.

Datascape 2 - The Countdown Starts


After a big internal review this week we're now going full steam ahead to get Datascape2 XL out into beta by the end of March, with a target of the live release by the end of May.

Based on our experiences with Datascape 1 the new version of Datascape will be coming in two flavours:

  • Datascape2XL - a PC "download and install" programme
  • Datascape2GL - a web-based software-as-a-service version, using WebGL.

Our current focus is on Datascape2XL, with GL following perhaps a quarter later.

We'll be gradually releasing more information on Datascape2XL over the next month or so, and updating the web site pages, wiki pages, and videos. But as a taster, the big differences between Datascape1 and Datascape2XL are:

  • A move from Unity3D to DirectX and WPF
  • Our own graphics engine written in DirectX giving us over 10 million selectable data points on screen
  • Completely rewritten data import system, including Excel import and easy adjustment of field types
  • A completely rewritten user interface, making the system probably an order of magnitude easier to use
  • Drag and drop on field mapping
  • Smart mappings - so you should get something plottable without doing anything other than importing the data
  • Dynamic switching between mappings
  • Oculus VR support (other HMD to come)
  • Audio support, so you can map data to audio features (pitch, timbre) as well as visual features of a point (almost essential in VR use)

But the core of the system remains absolutely the same - mapping any fields (or combinations of fields) in your data to any (one or more) features of the plotted 3D point, and then the ability to fly through that data to find the trends and anomalies using the eye/brain's optimisation for detecting patterns in 3D space.
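That core idea can be sketched in a few lines (our own illustration, not Datascape's actual code - the field names, colour scheme and function are invented for the DNS example above): a mapping table assigns each visual feature of a plotted point a function of the data row, and each row is turned into a renderable point.

```javascript
// Illustrative sketch of field-to-feature mapping - not Datascape's code.
// Each visual feature of a plotted point is a function of the data row;
// swapping functions in this table is what "dynamic switching between
// mappings" amounts to.
const mapping = {
  x: row => row.hour,                        // e.g. time of day along one axis
  y: row => row.addressIndex,                // DNS address "up" the screen
  size: row => 1,                            // constant size
  colour: row =>
    ({ A: "red", PTR: "cyan", SOA: "orange", SRV: "yellow" }[row.type] || "grey"),
};

function toPoint(row, mapping) {
  const point = {};
  for (const [feature, fn] of Object.entries(mapping)) {
    point[feature] = fn(row);
  }
  return point;
}
```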

If you'd like to be considered for the beta trial then please fill out the form - but note that priority will be given to existing Datascape users and partners.

And by the way, that image at the top of the post shows 13 million selectable data points (not 13!) plotted in a navigable 3D space - just add Oculus Rift!

8 February 2016

Virtual Reality - Without The Hype

Download our new white paper on Virtual Reality, in which we attempt to cut through the hype and let you know what you can do today, and what might need to wait until later in 2016, or even 2017. We also try to clarify the differences between the Smartphone and Integrated Headset approaches to VR, and the differences between 2D photosphere and 3D model content. Let us know your thoughts through our Facebook or LinkedIn Groups.

Virtual Reality - Without the Hype

A lot has been written about Virtual Reality in the last year or so. This short white paper is an attempt to get beneath the hype and identify what the key issues, opportunities and challenges are with deploying Virtual Reality solutions in 2016, and how that might change in 2017 and beyond. Download the Virtual Reality - Without the Hype White Paper.

1 February 2016

Social VR

Friday was the second DadenU day, when each of us gets to do whatever we want, as long as it's within the broad range of tech and activities that Daden is involved with. My initial aim was to go in and try out some "Social VR" worlds. The plan was scuppered when a certain TV programme put in a last-minute plea to borrow our DK2 headset, so all I could do was sign up, read about them, and try to get them to work with the DK1 (fail). Hopefully I'll get a chance to try them out for real later in the week, in which case I'll post more, but for now here's a quick intro to SocialVR.

Most of the hype and demos around VR have been about solo experiences, whether it's playing FPS or space combat games, explorations like Google Expeditions, documentary films or cyber-sex. With a heritage in virtual worlds such as AlphaWorld, There.com and Second Life we've always been far more interested in 3D environments as social spaces, as well as learning spaces of course. The "SocialVR" label has been applied to those VR environments which focus on having relatively large (~4-20) numbers of people in the same virtual space, all interacting with each other, but not confined to playing a single game or carrying out a single undertaking. Second Life with VR, if you like.

Needless to say, both Linden Lab and Philip Rosedale (founder of Second Life) are active in the SocialVR space, the former with Project Sansar and the latter with High Fidelity. But there are also some new kids on the block. The two I've looked at so far are JanusVR and AltSpaceVR.


JanusVR describes itself as "an immersive, collaborative, multi-dimensional internet". Users are represented by very cartoony avatars, and the spaces range in 3D quality from very basic to reasonable. What is evident from the screenshots is that the 2D web plays a very important part in JanusVR, with the ability to bring web content onto surfaces within the 3D environment. Whilst this gives access to lots of ready-made content, it will be interesting to see how well this works, as we find that reading small 2D text in VR is very hard. It might make sense for a YouTube video, a PowerPoint or an image gallery, and that may indeed be where they are focussing it, but don't think about doing a Google Docs spreadsheet just yet. JanusVR also apparently has talk (audio?) and chat, customisable avatars, 3D hand gesture control (Leap Motion?) and the ability to "author content using existing 3D modeling tools, with our extended HTML and javascript".

That authoring looks very interesting and follows an HTML syntax to create "FireBoxRooms", specifying assets and then placing them in the room - with 16 room templates available to get you started. All the standard 3D asset types are available, including 3D objects, 2D images and video, sounds, particles, shaders, skyboxes, NPC avatars etc. Can't wait to have a play.
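Going by the JanusVR examples, a FireBoxRoom looks roughly like the sketch below. This is our reconstruction for illustration only - the asset files are invented, and the exact element and attribute names should be checked against the JanusVR documentation before use.

```html
<!-- Rough sketch of JanusVR FireBoxRoom markup, reconstructed from their
     examples; asset filenames are invented and names may not be exact. -->
<html>
  <body>
    <FireBoxRoom>
      <Assets>
        <AssetObject id="chair" src="chair.obj" mtl="chair.mtl" />
        <AssetImage id="logo" src="logo.png" />
      </Assets>
      <Room>
        <Object id="chair" pos="0 0 5" />
        <Image id="logo" pos="2 2 5" />
      </Room>
    </FireBoxRoom>
  </body>
</html>
```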

Whilst the cartoon style is not likely to win any business converts yet (and more sober avatars may be available), it certainly looks like an interesting socialVR playground.


AltSpaceVR is probably getting the most press of all the SocialVR plays. It's interesting that their tag-line shows a similar focus to JanusVR - "Experience the web, from anywhere, with anyone, through virtual reality" - it looks like it's all about the 2D web in 3D. Leap Motion or Kinect can be used for avatar control, and the system is also designed for use with normal 2D screens as well as VR headsets. Initially AltSpaceVR avatars were simple "robots" (shades of Qwak), but more human - although again cartoony - avatars are also appearing. AltSpaceVR appears to have two different elements - a professionally crafted "container" world or room, and instances of 3D content. It's not clear how much control you have over the containers (which may well just be bigger, controlled versions of the standard 3D content), but the 3D content (which typically rezzes in a space in the room) is written using JavaScript and the THREE.js (WebGL) library (which we're already using for DatascapeGL). In one really cool demo they even show editing the THREE.js content in a web-based code editor on an in-world web screen, and then having the resultant 3D object/scene change each time the updated code is saved (which I suppose is what we used to have in SL!). So as a development and sharing environment for 3D "widgets" this looks like great news. And of course that has already got us thinking that if we've already got a dataviz app in THREE.js then what would AltSpaceVR be like as a collaborative 3D data viewer? Watch this space.....

The other thing I really like about AltSpaceVR is their Dungeons and Dragons in VR. Using, I assume, the THREE.js 3D widget functionality, they have created a social VR space for D&D players to gather around the table and play out a D&D fantasy role-playing session. It may seem odd that people would want to do that rather than putting themselves into an immersive 3D fantasy world, but tabletop and board games are having a resurgence, and the idea of playing them in VR or virtual worlds is something I've been toying with since the early days of SL (although for me it's more likely to be Traveller, an SPI hex war classic or a proper miniatures game than D&D). So if that's the sort of thing you can do with AltSpaceVR then it might be time to junk the Unity/Oculus prototype I was working on and start work on an AltSpaceVR/Oculus version instead!

So, only a quick look, and informed by reading not doing, but hopefully I can bring you updates on what each is like to use and code for in the coming weeks.