
Did Stanford just prototype the future of AR glasses?


Stanford’s Computational Imaging Lab has developed advanced holographic imaging tech that could improve AR headset capabilities.


Stanford’s holographic AR glasses prototype.

Image: Andrew Brodhead / Stanford

A research team at Stanford is developing a new AI-assisted holographic imaging technology it claims is thinner, lighter, and higher quality than anything its researchers have seen. Could it take augmented reality (AR) headsets to the next level?

For now, the lab version has an anemic field of view of just 11.7 degrees, far smaller than that of a Magic Leap 2 or even a Microsoft HoloLens.

But Stanford’s Computational Imaging Lab has an entire page with visual aid after visual aid that suggests it could be onto something special: a thinner stack of holographic components that could nearly fit into standard glasses frames, and be trained to project realistic, full-color, moving 3D images that appear at varying depths.

A comparison of the optics between existing AR glasses (a) and the prototype one (b), with the 3D-printed prototype (c).

Image: Stanford Computational Imaging Lab

Like other AR eyeglasses, these use waveguides, components that guide light through the glasses and into the wearer’s eyes. But the researchers say they’ve developed a unique “nanophotonic metasurface waveguide” that can “eliminate the need for bulky collimation optics,” along with a “learned physical waveguide model” that uses AI algorithms to drastically improve image quality. The study says the models “are automatically calibrated using camera feedback.”
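That camera-in-the-loop idea, fitting a differentiable model of the optics to what a camera actually captures, is a common approach in computational holography. The sketch below shows roughly what such a calibration loop could look like; the class names, parameters, and the simplified propagation model are illustrative assumptions, not the Stanford team’s actual code.

```python
import torch

class LearnedWaveguideModel(torch.nn.Module):
    """Hypothetical differentiable stand-in for the physical waveguide:
    maps a phase pattern on the spatial light modulator (SLM) to the
    image a camera placed at the eye position would capture."""

    def __init__(self, resolution=(512, 512)):
        super().__init__()
        # Learnable correction terms for the aberrations the real optics introduce.
        self.phase_correction = torch.nn.Parameter(torch.zeros(resolution))
        self.amplitude = torch.nn.Parameter(torch.ones(resolution))

    def forward(self, slm_phase):
        # Build the optical field from the SLM phase plus learned corrections,
        # then use a Fourier transform as a toy stand-in for propagation.
        field = self.amplitude * torch.exp(1j * (slm_phase + self.phase_correction))
        return torch.fft.fft2(field).abs() ** 2  # intensity seen by the camera


def calibrate(model, slm_patterns, camera_captures, steps=200, lr=1e-2):
    """Fit the model so its predictions match what the camera actually saw
    for a set of displayed test patterns."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = sum(
            torch.nn.functional.mse_loss(model(pattern), capture)
            for pattern, capture in zip(slm_patterns, camera_captures)
        )
        loss.backward()
        opt.step()
    return model
```

Once a model like this is calibrated, it is typically run in the other direction: rather than predicting what a given phase pattern will look like, it is used to optimize the phase pattern that produces a desired image, which is where the image-quality gains the researchers describe would come from.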

Objects, both real and augmented, can have varying depths.

GIF: Stanford Computational Imaging Lab

Although the Stanford tech is currently just a prototype, with working models that appear to be attached to a bench or housed in 3D-printed frames, the researchers are looking to disrupt the current spatial computing market, which also includes bulky passthrough mixed reality headsets like Apple’s Vision Pro and Meta’s Quest 3.

Postdoctoral researcher Gun-Yeal Lee, who helped write the paper published in Nature, says there’s no other AR system that compares both in capability and compactness.

Companies like Meta have spent billions buying and building AR glasses technology, in the hopes of eventually producing a “holy grail” product the size and shape of normal glasses. Currently, Meta’s Ray-Bans have no on-board display, but the leaked Meta hardware roadmap we obtained last year showed a 2027 target date for Meta’s first true AR glasses.
