Holographic display prototype used in the experiments
Not only can this technique run on a smartphone, but it also requires less than 1 megabyte of memory.
Virtual reality has been around for decades, and every year, headlines all over the internet announce it to be the next big thing. However, those predictions are yet to become reality, and VR technologies are far from widespread. There are many reasons for that, but the fact that VR can make users feel sick is certainly one of the culprits.
Better 3D visualization could help with that, and now MIT researchers have developed a new way to produce holograms: a deep learning-based method so efficient that it slashes the required computational power in an instant, according to a press release by the university.
A hologram is an image that resembles a 2D window looking onto a 3D scene, and this 60-year-old technology, remade for the digital world, can deliver an outstanding image of the 3D world around us.
“People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations,” explains Liang Shi, the study’s lead author and a Ph.D. student in MIT’s Department of Electrical Engineering and Computer Science. “It’s often been said that commercially available holographic displays will be around in 10 years, yet this statement has been around for decades.”
Generating real-time 3D holograms
This new approach, called “tensor holography”, will bring that goal closer, allowing holography to reach into the realms of VR and 3D printing. “Everything worked out magically, which really exceeded all of our expectations,” Shi told IEEE Spectrum.
The study, published in the journal Nature and funded in part by Sony, explains how the researchers used deep learning to accelerate computer-generated holography, enabling real-time hologram generation.
The researchers designed a convolutional neural network, a processing technique that uses a chain of trainable tensors to roughly mimic how humans process visual information. Training it required a large, high-quality dataset, which didn’t exist for 3D holograms. So the team built a custom database of 4,000 pairs of computer-generated images, each pair matching a picture, with color and depth information for each pixel, to its corresponding hologram.
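To make the structure of such a training pair concrete, here is a minimal Python sketch. It is not the researchers' actual data format: the shapes, field names, and the choice to store the hologram as amplitude and phase per color channel are all assumptions for illustration, and the random arrays merely stand in for the computer-generated scenes.

```python
# Minimal sketch (not the authors' code): one training sample pairs an
# RGB-D image with its precomputed hologram. Shapes and field names are
# assumptions for illustration only.
import numpy as np

def make_sample(height=192, width=192):
    """Build one hypothetical training pair.

    rgbd:     4 channels -- red, green, blue, and per-pixel depth.
    hologram: 6 channels -- amplitude and phase for each color channel,
              one common way to encode a complex-valued hologram.
    """
    rgbd = np.random.rand(4, height, width).astype(np.float32)
    hologram = np.random.rand(6, height, width).astype(np.float32)
    return {"rgbd": rgbd, "hologram": hologram}

# A dataset in the spirit of the article: many such pairs, here just a few
# random placeholders standing in for the 4,000 computer-generated scenes.
dataset = [make_sample() for _ in range(8)]
print(dataset[0]["rgbd"].shape, dataset[0]["hologram"].shape)
```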
Using this data, the convolutional neural network learned how best to generate holograms for the images, and it could then produce new holograms from images with depth information. The network was much faster than physics-based calculations and ran with an efficiency that “amazed” the team members.
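The sketch below illustrates the general idea in PyTorch, under heavy assumptions: a toy fully convolutional network maps a 4-channel RGB-D input to a 6-channel hologram and is trained with a plain mean-squared-error loss. The published network and its training losses are considerably more sophisticated; `ToyHologramNet` and every parameter here are hypothetical.

```python
# Minimal sketch (assumptions, not the published model): a small fully
# convolutional network that maps a 4-channel RGB-D image to a 6-channel
# hologram (amplitude + phase per color), trained with a plain MSE loss.
import torch
import torch.nn as nn

class ToyHologramNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 6, kernel_size=3, padding=1),
        )

    def forward(self, rgbd):
        return self.net(rgbd)

model = ToyHologramNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One illustrative training step on random stand-in data.
rgbd = torch.rand(2, 4, 192, 192)    # batch of RGB-D inputs
target = torch.rand(2, 6, 192, 192)  # matching precomputed holograms
pred = model(rgbd)
loss = loss_fn(pred, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"loss after one step: {loss.item():.4f}")
```

The appeal of this kind of approach, as the article notes, is speed: once trained, producing a hologram is a single forward pass through the network rather than a physics simulation of light propagation for every point in the scene.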
Not only does the new system require less than 620 kilobytes of memory, but it can also create 60 color 3D holograms per second at a resolution of 1,920 by 1,080 pixels on a single consumer-grade GPU. The team even ran it on an iPhone 11 Pro, at a rate of 1.1 holograms per second.
This suggests that the new system could one day create holograms in real time on future VR and AR mobile headsets, helping users feel more immersed in realistic scenery while avoiding the side effects of long-term VR use. 3D printing, microscopy, the visualization of medical data, and the design of surfaces with unique optical properties are other fields where the system could find applications.
“It’s a considerable leap that could completely change people’s attitudes toward holography,” said co-author Wojciech Matusik. “We feel like neural networks were born for this task.”
Via InterestingEngineering.com