Researchers in the field of spatial computing have developed a prototype augmented reality (AR) headset that projects dynamic, full-color 3D images onto lenses resembling ordinary glasses. Unlike the bulkier AR systems currently available, this new design promises a visually immersive 3D experience in a sleek, comfortable form factor suitable for all-day wear.

“Our headset looks like everyday glasses to the outside world, but the wearer sees an enriched environment overlaid with vibrant, full-color 3D computed imagery,” said Gordon Wetzstein, an associate professor of electrical engineering and a leading expert in spatial computing. Wetzstein and his team introduced their device in a recent Nature paper.

Although still a prototype, this technology has the potential to revolutionize various fields, from gaming and entertainment to training and education—essentially, any domain where computed imagery could enhance or inform the user’s understanding of their surroundings.

“Imagine a surgeon wearing these glasses to plan a complex surgery, or an airplane mechanic using them to learn about the latest jet engine,” said Manu Gopakumar, a doctoral student in the Wetzstein-led Stanford Computational Imaging Lab and co-first author of the paper.

The new approach is the first to navigate the complex engineering trade-offs that have previously forced a choice between cumbersome headsets and unsatisfactory 3D visual experiences, which often cause visual fatigue or nausea.

“No other augmented reality system currently available matches our 3D image quality or compact form factor,” said Gun-Yeal Lee, a postdoctoral researcher in the Stanford Computational Imaging Lab and co-first author of the paper.

The researchers overcame technical barriers through a combination of AI-enhanced holographic imaging and novel nanophotonic device approaches. Traditional AR systems use complex optical setups that capture the real world with exterior cameras, blend this imagery with computed visuals, and project the combined image to the user’s eyes stereoscopically. These systems are inherently bulky due to the need for magnifying lenses between the eyes and projection screens.

“These limitations not only contribute to bulkiness but also lead to unsatisfactory perceptual realism and visual discomfort,” explained Suyeon Choi, a doctoral student in the Stanford Computational Imaging Lab and co-author of the paper.

To produce more visually satisfying 3D images, Wetzstein’s team employed holography, a Nobel Prize-winning visual technique developed in the late 1940s. In practice, however, holographic displays have been limited by their inability to portray accurate 3D depth cues, often resulting in underwhelming and sometimes nausea-inducing visual experiences.

The team used AI to enhance the depth cues in holographic images. Advances in nanophotonics and waveguide display technologies allowed them to project computed holograms onto the glasses’ lenses without bulky additional optics. A waveguide, created by etching nanometer-scale patterns into the lens surface, guides light from small holographic displays mounted at each temple. These displays project imagery into the etched patterns; the light bounces within the lens before being delivered to the viewer’s eye. The result is a seamless view of both the real world and full-color, 3D computed images.
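The core computation behind any holographic display is a model of how light diffracts from the display to the eye. The Nature paper’s specific pipeline is not reproduced here; as a minimal illustrative sketch, the widely used angular spectrum method propagates a phase-only hologram through free space (all parameter values below, such as the laser wavelength and pixel pitch, are hypothetical placeholders):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, distance):
    """Propagate a complex optical field by `distance` meters using the
    angular spectrum method, a standard free-space diffraction model
    in computational holography."""
    ny, nx = field.shape
    # Spatial frequency grids (cycles per meter)
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function of free space; evanescent components are zeroed out
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * distance) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Propagate a phase-only hologram and examine the intensity it forms
wavelength = 532e-9   # green laser, meters (illustrative value)
pitch = 8e-6          # display pixel pitch, meters (illustrative value)
phase = np.random.uniform(0, 2 * np.pi, (256, 256))
slm_field = np.exp(1j * phase)   # phase-only modulator: unit amplitude
image_field = angular_spectrum_propagate(slm_field, wavelength, pitch, 0.05)
intensity = np.abs(image_field) ** 2
```

In a real system, an optimizer (or, as in this work, a learned model) adjusts the phase pattern so that the propagated intensity matches a target image at each depth; the propagation model above is the differentiable building block such methods iterate through.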

The 3D effect is enhanced by combining traditional stereoscopic imaging, where each eye sees a slightly different image, with holography, which provides a full 3D volume in front of each eye.
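The stereoscopic half of that combination rests on simple geometry: each eye views a virtual object from a slightly different position, so the object appears horizontally shifted between the two eyes by an amount that shrinks with depth. A minimal sketch under a pinhole-camera model (the interpupillary distance and focal length below are hypothetical placeholder values, not parameters from the paper):

```python
def parallax_shift_pixels(depth_m, ipd_m=0.063, focal_px=1400.0):
    """Horizontal disparity, in pixels, between the left- and right-eye
    views of a point at `depth_m` meters, for a pinhole camera with
    focal length `focal_px` pixels and interpupillary distance `ipd_m`.

    disparity = focal_length * baseline / depth
    """
    return focal_px * ipd_m / depth_m

# Nearer objects shift more between the eyes, which the brain reads as depth
near = parallax_shift_pixels(0.5)   # object half a meter away
far = parallax_shift_pixels(5.0)    # object five meters away
```

Stereoscopy alone supplies only this between-eye cue; the holographic component adds within-eye focus cues, which is what lets the combined system avoid the eye strain of conventional stereoscopic displays.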

“Holography delivers a lifelike 3D image quality, making the visual experience both satisfying and free from the fatigue that plagued earlier approaches,” said Brian Chao, a doctoral student in the Stanford Computational Imaging Lab and co-author of the paper.

“Holographic displays have long been considered the ultimate 3D technique, but they’ve never quite achieved commercial success,” Wetzstein remarked. “Maybe now, with this killer app, they have the breakthrough they’ve been waiting for all these years.”

By Impact Lab