If you’ve ever used a VR headset like the Quest or PlayStation VR, you were certainly impressed by the immersion. But you were never in doubt about whether what you saw was “real” or just “virtual”.
Will we ever be able to genuinely confuse the two? At Meta, this challenge is known as the “Visual Turing Test”. Alan Turing, the father of computing, predicted that a computer would one day be perfectly capable of passing itself off as human; Meta repurposed his idea, asking whether the virtual can pass for the real.
And the company has made steady strides toward that goal. Mark Zuckerberg, Meta’s CEO, met virtually with the press last Thursday (16) to present prototypes and research from Reality Labs, the division dedicated to VR and the metaverse.
What follows is a summary of how the team aims to overcome the most common flaws in VR headset technology, so that the devices can reproduce the way our own eyes see ordinary reality.
1- Focus variation
In real life, our eyes and brain adjust quickly as we choose a point in our field of vision to focus on. Right now you are looking at your computer or phone, able to make out each of these small letters. But if you look further ahead, your vision quickly adjusts to focus on a window, people on the street, or other objects in the environment.
VR glasses still can’t make this transition smoothly. But since 2017, Meta has been working on a series of prototypes called Half Dome, whose latest iteration, Half Dome 3, introduces an innovation dubbed Varifocal.
In short, it’s a technology that tracks the eyes to understand where you intend to focus, adjusting the displayed image instantly. Imagine a zombie game: you clearly see the shotgun in your “hands”, but also the undead that appears at the end of the hall, catching your attention with a grunt.
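The geometry behind this kind of eye tracking can be sketched in a few lines. This is a simplified illustration, not Meta’s actual algorithm: the two eyes rotate inward to converge on whatever you look at, and the convergence angle reveals the focus distance. The function name and the sample angles below are invented for the example; 63 mm is an average interpupillary distance.

```python
import math

def vergence_depth(ipd_m, left_angle_rad, right_angle_rad):
    """Estimate focus distance from how far each eye rotates inward.

    ipd_m: interpupillary distance in meters (~0.063 on average).
    *_angle_rad: each eye's gaze angle toward the nose, measured
    from straight ahead. Returns the distance (in meters) at which
    the two gaze rays converge.
    """
    vergence = left_angle_rad + right_angle_rad  # total convergence angle
    if vergence <= 0:
        return float("inf")  # eyes parallel: focused at infinity
    return (ipd_m / 2) / math.tan(vergence / 2)

# The shotgun in your "hands" vs. the zombie down the hall:
near = vergence_depth(0.063, math.radians(3.6), math.radians(3.6))    # ~0.5 m
far = vergence_depth(0.063, math.radians(0.18), math.radians(0.18))   # ~10 m
```

A varifocal system would feed an estimate like this to the optics, shifting the focal plane toward the near or the far object as your gaze moves.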
2- Distortion
The curvature of VR lenses, and the way the virtual image is produced and projected through them, still warps or stretches elements at the edges of the field of view.
Currently, headsets try to solve the problem with a static correction, but it doesn’t account for all the variables, since the distortion changes depending on where the user is looking.
Meta built another prototype: a 3D TV paired with software that emulates a VR lens, generating distortion simulations that the team can study.
It works as a quick test lab, where Reality Labs researchers can try different correction algorithms without having to wear a headset.
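The kind of correction being tested can be sketched with a classic radial-distortion model (Brown–Conrady). This is a generic textbook formula, not Reality Labs’ algorithm, and the coefficients below are invented for illustration:

```python
def prewarp_radial(x, y, k1, k2):
    """Radially warp normalized image coordinates (x, y), with (0, 0)
    at the lens center.

    k1, k2: radial coefficients of the Brown-Conrady model. A renderer
    pre-warps the image to counteract the lens's own distortion, so
    that straight lines still look straight after passing through it.
    """
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Points near the center barely move; points at the edge shift visibly,
# matching the article's point that distortion is worst at the margins.
center = prewarp_radial(0.05, 0.0, k1=0.22, k2=0.05)
edge = prewarp_radial(0.9, 0.0, k1=0.22, k2=0.05)
```

A static correction like this uses one fixed set of coefficients; the research described above is about making the correction respond to where the eye is actually looking.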
3- Retinal resolution
Some TV and cell phone models have already passed the mark of 60 pixels per degree of viewing angle, which allows for “retinal” resolution, that is, detail comparable to what the human retina can resolve. Why aren’t VR headsets there yet?
Because their pixels are spread out to cover a much larger area in order to simulate depth of field. That lowers the effective resolution.
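The arithmetic behind that gap is simple. The figures below (roughly 1832 horizontal pixels per eye and a field of view of about 90 degrees for the Quest 2) are approximate public specs assumed for the example:

```python
def pixels_per_degree(horizontal_pixels, horizontal_fov_deg):
    """Angular resolution: how many pixels cover one degree of vision."""
    return horizontal_pixels / horizontal_fov_deg

# Quest 2: ~1832 px per eye spread across a ~90 degree field of view.
quest2_ppd = pixels_per_degree(1832, 90)   # ~20 pixels per degree
retinal_ppd = 60                           # the "retinal" benchmark

# A phone packs a similar pixel count into a far narrower visual angle,
# which is why it can exceed 60 ppd while a headset falls well short.
```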
The clearest giveaway is text: any virtual object with very small letters is hard to read, as if you were taking one of those tests at the eye doctor’s office.
The Reality Labs team managed to overcome the hurdle in a prototype called Butterscotch, with a new type of hybrid lens, at the cost of reducing the field of view to about half that of the Quest 2 (the headset Meta currently sells).
According to Zuckerberg, the machine practically solves the resolution issue; the problem now is fitting all this technology into a lightweight, ergonomic piece of equipment so it can work commercially.
4- Brightness
According to Reality Labs metrics, nature is between 10 and 100 times brighter than the images on the best high-definition televisions currently on the market. And, according to the company’s qualitative research, this lack of liveliness is the main obstacle to perceiving a virtual image as being as convincing as the real one.
That’s where the Starburst test device comes in: the first VR headset with HDR (High Dynamic Range). The feature, already present in some TVs and monitors, produces sharper, brighter images with striking realism.
In HDR, light output is measured in nits. The Quest 2 can emit up to 100 nits. Certain TV models come close to 10,000 nits, considered the benchmark that best represents everyday reality. Starburst reaches 20,000.
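Using the article’s own figures, the brightness gap is easy to quantify; this is just back-of-the-envelope arithmetic on the numbers cited above:

```python
# Peak brightness figures cited in the text, in nits.
quest2_nits = 100
tv_benchmark_nits = 10_000    # the "real world" reference mentioned above
starburst_nits = 20_000

# Starburst's peak output relative to the Quest 2's:
gap = starburst_nits / quest2_nits   # 200x brighter at peak
```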
The problem is that the device is still impractical and cumbersome: researchers hold it up to their faces by straps, as if it were a pair of binoculars.
5- Weight and size
Let’s face it: no image created in VR will “fool” users as long as a huge headset is squeezing their head, constantly reminding them that they are “stuck” in a machine.
Meta is making headway in miniaturization: the Holocake prototype is the smallest and lightest VR headset the company has ever created. It is fully functional, capable of running any existing VR program, as long as it is connected (still by cable) to a PC.
To reach the dimensions of ordinary sunglasses, the engineers behind Holocake used two new techniques. The first, called polarization-based optical folding, shrinks the space between the display panel and the lens.
The second is even more innovative: instead of using a conventional curved lens, it uses a flat holographic lens. It is a completely new proposition on the market, with enormous potential to become standard in the future.
Will it be possible, then, to combine the advances of Half Dome, Butterscotch, and Starburst in a single lightweight, portable device? That is the goal of another project, Mirror Lake. For now it isn’t even a prototype: it’s just a design, shaped like the goggles worn by snow skiers.
It would compile Meta’s progress over the past seven years toward passing the Visual Turing Test. But it will still take a few more years before it can be produced and sold on a large scale at affordable prices.