Over the past couple of years, with the introduction of the Apple Vision Pro and the Meta Quest 3, I've become a believer in the potential of mixed reality.
First, and this was a big concern for me, it's possible to use VR headsets without barfing. Second, some of the applications are truly amazing, especially the entertainment. While the ability to watch a movie on a giant screen is awesome, the fully immersive 3D experiences on the Vision Pro are really quite compelling.
In this article, I'm going to show you a technology that has the potential to render VR devices like the Vision Pro and Quest 3 obsolete. But first, I want to recount an experience I had with the Vision Pro that had a bit of a reality-altering effect.
Then later, when we discuss the Stanford research, you'll see how the researchers might expand on something like what I experienced and take it to a whole new level.
Also: These XR glasses gave me a 200-inch screen to work with
There's a Vision Pro experience called Wild Life. I watched the Rhino episode from early 2024 that told the story of a wildlife refuge in Africa. While watching, I really felt like I could reach out and touch the animals; they were that close.
But here's where it gets interesting. Whenever something on TV shows someplace I've actually been to in real life, I have an internal dialog box pop up in my brain that says, "I've been there."
So, some time after I watched the Vision Pro episode on the rhino refuge, we saw a news story about the place. And wouldn't you know it? My brain said, "I've been there," even though I've never been to Africa. Something about the VR immersion indexed that episode in my brain as an actual lived experience, not just something I watched.
To be clear, I knew at the time it wasn't a real experience. I currently know that it wasn't a real-life lived experience. Yet some little bit of internal brain parameterization still indexes it in the lived experiences table rather than the viewed experiences table.
Also: I finally tried Samsung's XR headset, and it beats my Apple Vision Pro in meaningful ways
But there are a few widely known problems with the Vision Pro. It's way too expensive, but it's not just that. I own one. I purchased it to be able to write about it for you. Even though I have one right here and movies are insanely awesome on it, I only use it when I have to for work.
Why? Because it's also quite uncomfortable. It's like strapping a brick to your face. It's heavy, hot, and so intrusive you can't even take a sip of coffee while using it.
Stanford research
All that brings us to some Stanford research that I first covered last year.
A team of scientists led by Gordon Wetzstein, a professor of electrical engineering and director of the Stanford Computational Imaging Lab, has been working to solve both the immersion problem and the comfort problem by using holography instead of conventional flat-panel display technology.
Using optical nanostructures called waveguides, augmented by AI, the team managed to construct a prototype device. By controlling the intensity and phase of light, the device can manipulate light at the nanoscale. The challenge is making real-time adjustments to all of those nanoscale light patterns based on the environment.
Also: We tested the best AR and MR glasses: Here's how the Meta Ray-Bans stack up
All of that took a ton of AI to improve image formation, optimize wavefront manipulation, handle wildly complex calculations, perform pattern recognition, deal with the thousands of variables involved in light propagation (phase shifts, interference patterns, diffraction effects, and more), and then correct for changes dynamically.
Add to that real-time processing and optimization at the micro level: managing light for each eye, running machine learning models that constantly refine the holographic images, handling the non-linear, high-dimensional data that comes from changing surface geometry, and then tying it all together with optical data, spatial data, and environmental information.
It was a lot. But it was not enough.
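To give you a flavor of the kind of computation involved, here's a heavily simplified sketch of one piece of it: using gradient descent to find a phase pattern for a spatial light modulator (SLM) so that the light it shapes forms a target image. To be clear, this is my own toy illustration, not the Stanford team's pipeline, and the "propagation" step here is reduced to a simple Fourier transform.

```python
# Toy sketch (not the Stanford pipeline): optimize an SLM phase pattern by
# gradient descent so its far-field intensity matches a target image.

import torch

def optimize_phase(target: torch.Tensor, steps: int = 500, lr: float = 0.1) -> torch.Tensor:
    """Find an SLM phase pattern whose far-field intensity matches `target`."""
    phase = torch.zeros_like(target, requires_grad=True)    # start with a flat wavefront
    optimizer = torch.optim.Adam([phase], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        field = torch.exp(1j * phase)                        # unit-amplitude field carrying our phase
        far_field = torch.fft.fftshift(torch.fft.fft2(field))  # crude stand-in for light propagation
        intensity = far_field.abs() ** 2
        intensity = intensity / intensity.max()              # normalize for comparison
        loss = torch.mean((intensity - target) ** 2)         # how far are we from the target image?
        loss.backward()
        optimizer.step()

    return phase.detach()

# Usage: aim for a bright square in the middle of a dark field.
target = torch.zeros(128, 128)
target[48:80, 48:80] = 1.0
slm_phase = optimize_phase(target)
print(slm_phase.shape)  # torch.Size([128, 128])
```

The real system has to do something far harder than this, for two eyes, in real time, while accounting for the physical optics in front of the display.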
Visual Turing Test
The reason I mentioned the rhinos earlier in this article is that the Stanford team has just issued a new research report, published in Nature Photonics, showing how they are trying to exceed the perception of reality that's possible with screen-based display technology.
Back in 1950, computing pioneer Alan Turing proposed what has become known as the Turing Test. Basically, if a human conversing with an unseen partner can't tell whether that partner is a machine or another human, the machine is said to pass the Turing Test.
The Stanford folks are proposing the idea of a visual Turing Test, where a mixed reality device would pass the test if you can't tell whether what you're looking at is real or computer-generated.
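To make that idea concrete, here's one way such a test could be scored: show viewers real and computer-generated objects, ask them to pick which is which, and see whether they can beat chance. This framing and the numbers are my own illustration, not the protocol in the Stanford paper.

```python
# A minimal sketch of scoring a "visual Turing Test" as a forced-choice
# experiment. My own illustrative framing, not the paper's methodology.

import random

def run_trial(viewer_accuracy: float) -> bool:
    """Simulate one trial: did the viewer correctly pick the real object?"""
    return random.random() < viewer_accuracy

def passes_visual_turing_test(viewer_accuracy: float, trials: int = 1000,
                              chance: float = 0.5, margin: float = 0.05) -> bool:
    """The display 'passes' if viewers can't beat chance by more than `margin`."""
    correct = sum(run_trial(viewer_accuracy) for _ in range(trials))
    return (correct / trials) <= (chance + margin)

# A conventional stereoscopic display: viewers spot the fake 90% of the time.
print(passes_visual_turing_test(0.90))  # False: clearly distinguishable
# A hypothetical display good enough that viewers are merely guessing.
print(passes_visual_turing_test(0.50))  # (almost always) True
```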
Also: The day reality became unbearable: A peek beyond Apple's AR/VR headset
Putting aside all the nightmares of uber-deep fakes and my little story here, the Stanford team contends that no matter how high-resolution stereoscopic LED technology is, it's still flat. The human brain, they say, will always be able to distinguish 3D represented on a flat display from true reality.
As real as such an image might look, there's still an uncanny valley that lets the brain sense distortions.
But holography bends light the same way physical objects do. The Stanford scientists contend that they can build holographic displays that produce 3D objects that are every bit as dimensional as real objects. By doing so, they'll pass their visual Turing Test.
"A visual Turing Test then means, ideally, one cannot distinguish between a physical, real thing as seen through the glasses and a digitally created image being projected on the display surface," says Suyeon Choi, a postdoctoral scholar in Wetzstein's lab and first author of the paper.
Also: Meta just launched a $400 Xbox Edition Quest 3S headset - and it's full of surprises
I'm not sure about this. Yes, I support the idea that they will be capable of producing eyewear that bends light to replicate reality. But I wear glasses. There's always a periphery outside the edge of my glasses that I can see and sense.
Unless they create headsets that block that peripheral vision, they won't be able to truly emulate reality. It's probably doable. The Meta Quest 3 and the Vision Pro both wrap around the eyes. But if Stanford's goal is to make holographic glasses that feel like normal glasses, then peripheral vision could complicate matters.
In any case, let's talk about how far they've come in a year.
That was then, this is now
Let's start by defining the technical term "étendue." According to Dictionnaires Le Robert and translated into English by The Goog, étendue is the "Property of bodies to be located in space and to occupy part of it."
Ocular scientists use it to combine two characteristics of a visual experience: the field of view (or how wide an image appears) and the eyebox (the area in which a pupil can move and still see the entire image).
A large étendue would both provide a wide field of view and allow the eye to move around enough for real life while still seeing the generated image.
Since we reported on the project in 2024, the Stanford team has increased the field of view (FOV) from 11 degrees to 34.2 degrees horizontally and 20.2 degrees vertically.
This is still a far cry from the Quest 3's 110‑degree horizontal and 96‑degree vertical FOV, or even the estimated 100‑degree FOV of the Vision Pro. Of course, human eyes each have a field of view of about 140 degrees and, when combined, give us vision of about 200 degrees.
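To put those numbers in perspective, here's a quick back-of-the-envelope comparison using the standard formula for the solid angle of a rectangular field of view. The device figures are the ones cited above (the Quest 3's specs are approximate, and the Vision Pro's vertical FOV isn't published, so it's left out), and the comparison is mine, not the researchers'.

```python
# Back-of-the-envelope: how much solid angle each device's FOV covers.

import math

def rect_fov_solid_angle(h_deg: float, v_deg: float) -> float:
    """Solid angle (steradians) of a rectangular field of view h_deg x v_deg."""
    a = math.radians(h_deg) / 2
    b = math.radians(v_deg) / 2
    return 4 * math.asin(math.sin(a) * math.sin(b))

stanford_2025 = rect_fov_solid_angle(34.2, 20.2)   # the new holographic prototype
quest3 = rect_fov_solid_angle(110, 96)             # Meta Quest 3 (approximate specs)

print(f"Stanford prototype: {stanford_2025:.2f} sr")
print(f"Quest 3:            {quest3:.2f} sr")
print(f"Quest 3 covers roughly {quest3 / stanford_2025:.0f}x more solid angle")
```

In other words, the holographic prototype still covers only a small slice of what a consumer headset shows you today, which is why the field-of-view progress matters so much.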
Also: This AR headset is changing how surgeons see inside their patients
This year, the team developed a custom-designed angle-encoded holographic waveguide. Instead of the surface relief gratings (SRGs) used in 2024, the new prototype's couplers are constructed from volume Bragg gratings (VBGs). VBGs prevent the "world-side light leakage" and visual noise that degraded contrast in previous designs, and they also suppress stray light and ghost images.
Both SRGs and VBGs are used to control how light bends or splits. SRGs work via a tiny pattern etched on the surface of a material; light bounces off that surface. VBGs rely on changes inside the material itself, reflecting or filtering light based on how that internal structure interacts with light waves. In practice, VBGs provide finer control over how light moves.
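If you want a feel for how a grating's geometry sets the angle light is bent by, the classic grating equation does the job. The numbers below are purely illustrative; the actual grating parameters of the Stanford prototype aren't something I have.

```python
# Illustrative only: how a grating's period sets the diffraction angle.
# Made-up parameters, not the prototype's actual design values.

import math

def diffraction_angle(wavelength_nm: float, period_nm: float,
                      incidence_deg: float = 0.0, order: int = 1) -> float:
    """Diffraction angle from the grating equation:
    sin(theta_m) = sin(theta_i) + m * wavelength / period."""
    s = math.sin(math.radians(incidence_deg)) + order * wavelength_nm / period_nm
    if abs(s) > 1:
        raise ValueError("No propagating diffraction order for these parameters")
    return math.degrees(math.asin(s))

# Green laser light (532 nm) through a 600 nm period grating at normal incidence:
print(f"{diffraction_angle(532, 600):.1f} degrees")  # ~62 degrees
```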
Another key element of the newest prototype is the MEMS (Micro-Electromechanical System) mirror. This mirror is integrated into the illumination module along with a collimated fiber-coupled laser and the holographic waveguide we discussed above. It is another tool for steering light, in this case the illumination angles incident on the Spatial Light Modulator (SLM).
This, in turn, creates what the team calls a "synthetic aperture," which has the benefit of increasing the eyebox. Recall that the bigger the eyebox, the more a user's eye can move while using the mixed-reality system.
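Here's a deliberately simplified model of why that works: each illumination angle steers the small exit pupil sideways, and the union of those shifted pupils behaves like one larger eyebox. This is my own back-of-the-napkin approximation with made-up numbers, not the math from the paper.

```python
# Toy model (my simplification, not the paper's math) of eyebox expansion:
# each MEMS steering angle shifts the exit pupil, and the union of those
# shifted pupils acts as one larger "synthetic" eyebox.

import math

def synthetic_eyebox_width(pupil_mm: float, eye_relief_mm: float,
                           steering_angles_deg: list[float]) -> float:
    """Width of the union of exit pupils shifted by each steering angle."""
    centers = [eye_relief_mm * math.tan(math.radians(a)) for a in steering_angles_deg]
    left = min(centers) - pupil_mm / 2
    right = max(centers) + pupil_mm / 2
    return right - left

# Made-up numbers: a 3 mm exit pupil, 20 mm eye relief, five steering angles.
angles = [-6, -3, 0, 3, 6]
print(f"{synthetic_eyebox_width(3.0, 20.0, angles):.1f} mm")  # ~7.2 mm vs 3 mm static
```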
Also: HP just turned Google Beam's hologram calls into reality - and you can buy it this year
AI continues to play a key role in the dynamic functionality of the display, compensating for real-world conditions and helping to create a seamless blend of real reality and constructed reality. AI also optimizes the image quality and three-dimensionality of the holographic images.
Last year, the team did not specify the size of the prototype eyewear, except to say it was smaller than typical VR displays. This year, the team says they've achieved a "total optical stack thickness of less than 3 mm (panel to lens)." By contrast, the lenses on my everyday eyeglasses are about 2 mm thick.
"We want this to be compact and lightweight for all-day use, basically. That's problem number one, the biggest problem," Wetzstein said.
The thrillogy of the trilogy
The Stanford team describes these reports on their progress as a trilogy. Last year's report was Volume One. This year, we're learning about their progress in Volume Two.
It's not clear how far away Volume Three is, which the team describes as real-world deployment. But with the improvements they've been making, I'm guessing we'll see some more progress (and possibly Volumes Four and Five) sooner, rather than later.
Also: I wore Google's XR glasses, and they already beat my Ray-Ban Meta in 3 ways
I'm not entirely sure that blending reality with holographic images to the point where you can't tell the difference is healthy. On the other hand, real reality can be pretty disturbing, so constructing our own bubble of holographic reality might offer a respite (or a new pathology).
It's all just so very weird and ever so slightly creepy. But this is the world we live in.
What do you think about the idea of a "visual Turing Test"? Do you believe holographic displays could truly fool the brain into thinking digital imagery is real? Have you tried any of the current‑gen mixed reality headsets like the Vision Pro or Quest 3? How immersive did they feel to you? Do you think Stanford's waveguide-based holographic approach could overcome the comfort and realism barriers holding back mainstream XR adoption? Let us know in the comments below.
You can follow my day‑to‑day project updates on social media. Be sure to subscribe to my weekly update newsletter and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.