The industry has been excited about the idea of Virtual Reality, brought into the spotlight through innovative headsets like the Oculus Rift, where, by wearing a pair of goggles, you can be transported to a digital world.
In order to bring VR to the masses more quickly, many companies have released low-cost headsets that use a mobile phone as the screen. DodoCase’s VR Cardboard Toolkit, Google Cardboard, and Samsung’s Gear VR are all cases that mount your phone to your head, using its screen and orientation data to create the VR experience. This is all an effort to test the waters of what some say is the next technological revolution.
The illusion of immersing yourself in a digital world is achieved partly through a stereoscopic 3D view. When we see, we’re able to perceive depth and 3D space because we have two eyes at slightly different positions with offset viewpoints; these two images are stitched together in our brain. If you’ve ever used a stereogram or looked at Magic Eye images, you know how easily the brain can be tricked into creating interesting visual effects. This means that for content to be displayed in a believable way, we need it rendered with two views, one per eye.
To bring VR to the web, libraries already exist that work with Three.js to create a stereoscopic rendering of your 3D scenes. For the view to move with your head, we also need access to the device’s orientation in the browser. To that end, Mozilla and Google are pushing the WebVR API, giving web developers the ability to tap into the headset’s data. You can experiment with it in Firefox Nightly and special builds of Chrome.
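Even without WebVR, browsers on mobile devices already expose the standard `deviceorientation` event, which is the kind of data head tracking builds on. A rough sketch, assuming you want the angles in radians for a library like Three.js (the `degToRad` helper and the yaw/pitch/roll names are illustrative):

```javascript
// The deviceorientation event reports angles in degrees; most 3D
// libraries, including Three.js, expect radians.
function degToRad(degrees) {
  return degrees * (Math.PI / 180);
}

// Guarded so the snippet can also run outside a browser.
if (typeof window !== 'undefined') {
  window.addEventListener('deviceorientation', (event) => {
    // alpha: rotation around the z axis (0 to 360)
    // beta:  front-to-back tilt (-180 to 180)
    // gamma: left-to-right tilt (-90 to 90)
    const yaw   = degToRad(event.alpha);
    const pitch = degToRad(event.beta);
    const roll  = degToRad(event.gamma);
    // ...apply yaw/pitch/roll to your camera here
  });
}
```

In practice the raw angles need reordering and smoothing before driving a camera, which is exactly the plumbing those Three.js helper libraries and the WebVR API take care of for you.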
Augmented Reality, similarly, has been around for some time, though mostly as something of a gimmick. Augmented Reality is the digital altering of the real world, often done by taking video and overlaying digital content on it. Libraries like JSARToolkit let people use their webcam to map graphics onto a recognizable marker. Recently, though, Microsoft announced Project HoloLens, a headset that maps virtual content onto the real world. This may very well be vaporware, but the vision is interesting and should excite anyone involved in technology.
Bringing VR and AR to the web in a headset introduces a lot of interesting opportunities. The obvious example is gaming, but many possibilities beyond it are being explored, like virtual film experiences and new ways of communicating.
These opportunities also bring a lot of new challenges. As a technology still in its infancy, interaction is probably one of the most problematic areas in this space. Gestural interaction models like those seen with the Leap Motion and the Kinect seem to make the most sense, but common patterns still have to be established. For example, what is the pinch-to-zoom or home button of VR? How do these translate across different kinds of VR applications?
User interfaces also remain to be explored. Some models can be carried over from past interfaces, but we need to consider what new paradigms might better fit this different way of interacting. When building on the web in particular, what might markup look like for a heads-up display in virtual reality, or for a mapped user interface in augmented reality? This is something Mozilla has been exploring by trying to bring web markup into virtual reality experiences.
Virtual and Augmented Reality have long been dreams of science fiction but are finally starting to get mainstream attention. It’s interesting to think about how this might affect the way we interact with digital media and consume content. Will it change the face of the web completely? Force it to grow and adapt? Or simply live alongside the web as a native component?