Virtual reality, and even augmented reality, is a very personal experience in the sense that only the person wearing the headset can truly appreciate what's happening. In fact, only the wearer can actually see what's going on, leaving spectators to imagine the rest. With some age-old "green screen" Hollywood magic, some 3D modeling, and eye tracking, Google has devised a way to give viewers a better sense of the virtual world and, in a way, to make VR an easier sell.
Google has a rather odd way of defining “mixed reality”. For companies like Microsoft and Magic Leap, mixed reality is an offshoot (or really just a renaming) of augmented reality, where virtual objects are projected onto the real world and users are able to interact with them as if they were actual real-world objects.
In this particular context, however, Google describes mixed reality as a way for "the audience" to see what's happening in the virtual world by combining what's being rendered in the virtual world with what's actually in the real world, but not from the wearer's point of view. To put it simply, mixed reality for Google means showing the audience what the VR headset wearer looks like, or what he or she is doing in VR, with the VR world in the background (or foreground).
In that sense, the problems with VR immediately surface. For one, you don't get to see the VR world from a third-person point of view, much less with the wearer actually inside it. At worst, you'll see a person wearing oversized goggles flailing his or her arms around; at best, you'll see the VR world from the wearer's first-person view. With some green screen magic, cameras, and compositing, Google is able to track the wearer's movement and combine it with a rendering of the VR world from the viewpoint of a spectator. But that still leaves one other problem: the headset.
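For readers curious how the green-screen step works in principle, here is a minimal chroma-key compositing sketch in Python with NumPy. The function name, the key color, and the tolerance are illustrative assumptions, not Google's actual pipeline, which also handles motion tracking and camera calibration.

```python
import numpy as np

def chroma_key_composite(camera_frame, vr_background, key_rgb=(0, 255, 0), tol=80):
    """Replace green-screen pixels in the camera frame with a VR render.

    camera_frame, vr_background: HxWx3 uint8 RGB arrays of the same shape.
    key_rgb: the chroma-key color to remove (pure green here).
    tol: per-pixel color-distance threshold; pixels closer than this
         to the key color are treated as green-screen background.
    """
    cam = camera_frame.astype(np.int16)
    key = np.array(key_rgb, dtype=np.int16)
    # Euclidean distance of each pixel from the key color.
    dist = np.sqrt(((cam - key) ** 2).sum(axis=-1))
    mask = dist < tol  # True where the green screen shows through
    out = camera_frame.copy()
    out[mask] = vr_background[mask]
    return out

# Toy example: a 2x2 "camera" frame; the green pixels get replaced
# by the corresponding pixels of a flat gray "VR render".
cam = np.array([[[0, 255, 0], [200, 50, 50]],
                [[0, 250, 5], [30, 30, 30]]], dtype=np.uint8)
vr = np.full((2, 2, 3), 90, dtype=np.uint8)
result = chroma_key_composite(cam, vr)
```

In the real system, `vr_background` would be the VR scene rendered from the spectator camera's tracked position, so the wearer appears to stand inside the virtual world.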
Unlike, say, the HoloLens, a VR headset like the HTC Vive is completely opaque. It hides the wearer's eyes, which makes spectators miss out on subtle things like expressions or, more importantly, what the wearer is actually looking at. After all, one's face can point in one direction while the eyes look in another. To solve this, Google employs more sophisticated techniques, like 3D modeling a person's face and overlaying it on top of the headset to make it look "invisible". More importantly, Google tracks the wearer's eyes and adjusts the 3D-rendered face as needed. The result is a more realistic representation of the wearer's face, though one that comes close to the uncanny valley.
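To give a rough idea of the "invisible headset" step, here is a heavily simplified sketch: a pre-rendered face patch is blended over the detected headset region, with the patch translated by the tracked gaze offset. All names and parameters are hypothetical; Google's actual approach re-renders a 3D face model rather than shifting a flat image.

```python
import numpy as np

def overlay_face(frame, face_patch, headset_box, gaze_shift, alpha=0.85):
    """Blend a pre-rendered face patch over the headset region of a frame.

    frame: HxWx3 uint8 RGB video frame.
    face_patch: hxwx3 uint8 render of the wearer's face, sized to the box.
    headset_box: (top, left) corner of the detected headset region.
    gaze_shift: (dy, dx) pixel offset from the eye tracker, used to move
        the rendered eyes to where the wearer is actually looking.
    alpha: blend factor; below 1.0 leaves the headset faintly visible,
        which softens the uncanny-valley effect.
    """
    top, left = headset_box
    h, w = face_patch.shape[:2]
    dy, dx = gaze_shift
    # Translate the patch to follow the tracked gaze (a stand-in for
    # re-rendering the 3D face model with updated eye pose).
    shifted = np.roll(np.roll(face_patch, dy, axis=0), dx, axis=1)
    region = frame[top:top + h, left:left + w].astype(np.float32)
    blended = alpha * shifted.astype(np.float32) + (1 - alpha) * region
    out = frame.copy()
    out[top:top + h, left:left + w] = blended.astype(np.uint8)
    return out

# Toy example: blend a flat 4x4 "face" into a black 10x10 frame.
frame = np.zeros((10, 10, 3), dtype=np.uint8)
face = np.full((4, 4, 3), 200, dtype=np.uint8)
out = overlay_face(frame, face, headset_box=(2, 3), gaze_shift=(0, 0))
```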
While all of this happens at near real-time speeds, it's something that can really only be experienced on another screen, like a TV or monitor, since it relies on post-processing and compositing to bring everything together. So while it might not be useful when the VR user is right in front of you, Google envisions this technology being used for things like VR conferences. Or maybe for giving other people a clue of how not-so-crazy virtual reality really is. Or how crazy it actually is.
SOURCE: Google (1), (2)