After spending some time digging through example code, random blog posts, etc. to get more information on the actual API, I can only conclude that there is none. Well, none that's documented. I've got some bits and pieces though.
As far as I understand it right now (and that may be wrong) the main idea behind WebVR is to find a VR device, gotten through navigator.getVRDevices(), and request fullscreen on a canvas element, passing on the VR device object.
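That flow, as far as I've pieced it together, looks something like the sketch below. Keep in mind this is the experimental API from the dev builds, so names like getVRDevices and the vrDisplay fullscreen option may well change; the isHMD predicate is my own stand-in for whatever check the browser actually wants (instanceof HMDVRDevice, last I looked).

```javascript
// Sketch of the early WebVR flow: get the devices, find a head-mounted
// display, and request fullscreen on a canvas while passing the device.
// The exact names come from experimental builds and may change.
function enterVR(canvas, nav, isHMD) {
  return nav.getVRDevices().then(function (devices) {
    var hmd = devices.filter(isHMD)[0];
    if (!hmd) throw new Error('no HMD found');
    // Firefox spells it mozRequestFullScreen; Chrome uses
    // webkitRequestFullscreen. Same idea either way.
    canvas.mozRequestFullScreen({ vrDisplay: hmd });
    return hmd;
  });
}
```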
Of course Chrome and Firefox still have to duke it out over whether navigator.getVRDevices is gonna use promises or callbacks. Personally I don't really care because promises will be a language feature soon anyways (I would have minded if it were based on a library only). For callbacks you can specify two callbacks (pass and fail). For promises you can register then and catch handlers for the same. Besides the paradigm, which is a personal preference, the only real advantage of promises is that you can easily attach multiple handlers to the same promise. I don't think that's going to be a useful use case here, but hey.
The current problem appears to be that, without an actual VR device, you're SOL. In the current landscape that means you can't do WebVR without an Oculus. To make matters worse, it seems you can't even do WebVR without connecting your phone to a desktop (but that's the case with Oculus anyways right now, so that may also be the reason). Firefox currently has no mobile build with WebVR; Chrome's build does return "a" VR device on a regular phone, but the build crashes when I try to pass it on in the fullscreen request call. Total wipeout.
I'll get my Oculus soon and that may evaporate some of these complaints. But until then let me explain what I had initially expected from WebVR.
I think WebVR should fundamentally be aimed at a Cardboard approach. That is, using just a smartphone as the VR screen in a generic holder like Cardboard (or the Gear VR if you don't plug it in). So as I understand it, the current WebVR needs WebGL to do the whole camera thing. That means 2d canvas is pretty much out the window, though I don't think it has to be.
I would like WebVR to have this flow: take any canvas, request VR fullscreen on it. Done! I would expect WebVR to cut the canvas in half, apply barrel distortion to both sides, and display the result fullscreen as normal. It would ignore anything else it usually uses. The app itself would be responsible for updating the canvas, applying different camera positions, etc.
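The "cut in half and barrel distort" step isn't magic, it's just math on each pixel. A simple polynomial barrel distortion pushes a point at radius r from the lens center outward by a factor of (1 + k1·r² + k2·r⁴). The k coefficients below are made-up placeholders, not anybody's real lens values.

```javascript
// Barrel-distort a point (x, y), given in coordinates relative to the
// lens center, using a simple two-coefficient polynomial model.
function barrelDistort(x, y, k1, k2) {
  var r2 = x * x + y * y;
  var scale = 1 + k1 * r2 + k2 * r2 * r2;
  return [x * scale, y * scale];
}

// Each eye would get half the canvas, distorted around its own center:
// the left eye centered at (0.25, 0.5), the right at (0.75, 0.5) in
// normalized canvas coordinates.
```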
This is what I would imagine with WebVR: a website that can make something work in VR anywhere, without needing a VR device. Or WebGL. Or your phone connected to a desktop. Or even canvas. We'll get to a point where people use CSS to make amazing VR animations. Environments even.
Since this is basically asking for a double barrel distortion on a 2d canvas I'm going to try and see if I can make a mock for this in WebGL as proof of concept.
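The mock would boil down to uploading the 2d canvas as a texture and warping it per eye in a fragment shader, something like this sketch (k1/k2 are placeholder coefficients again, not tuned to any real lens):

```glsl
// Sample the flat 2d canvas as a texture, barrel distortion per eye.
precision mediump float;
uniform sampler2D uCanvas;   // the app's ordinary 2d canvas as a texture
uniform vec2 uEyeCenter;     // (0.25, 0.5) left eye, (0.75, 0.5) right eye
varying vec2 vUV;

void main() {
  float k1 = 0.22;           // placeholder distortion coefficients
  float k2 = 0.24;
  vec2 d = vUV - uEyeCenter;
  float r2 = dot(d, d);
  vec2 warped = uEyeCenter + d * (1.0 + k1 * r2 + k2 * r2 * r2);
  gl_FragColor = texture2D(uCanvas, warped);
}
```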
The more I think about it, the more I realize that this whole VR thing has been possible for a long time; these lenses are an innovation much like the Wiimote was for the first Wii. But that's still great :) Even better that we can do it now, anyways.