NVIDIA’s Instant NeRF is a neural rendering model that can produce a 3D scene from 2D data inputs in seconds and can render images of that scene in milliseconds.
The process, known as inverse rendering, uses AI to approximate how light behaves in the real world, turning a collection of still images into a digital 3D scene in seconds. NVIDIA’s research team has developed an approach that accomplishes the task almost instantly, making it one of the first models of its kind to combine ultra-fast neural network training with rapid rendering.
What is a NeRF?
NVIDIA simplifies the explanation, saying that NeRFs use neural networks to represent and render 3D scenes based on an input collection of 2D images. The neural network requires a few dozen images taken from multiple positions around the scene, as well as the camera position for each of those shots.
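At its core, the idea is that the trained network acts as a function: query it with a 3D position and a viewing direction, and it returns a color and a density, which a renderer then accumulates along camera rays into pixels. The sketch below illustrates only that interface — it uses a tiny, randomly initialized network rather than anything trained, and all layer sizes and names (`query_field`, the 64-unit hidden layer) are illustrative assumptions, not NVIDIA’s implementation.

```python
import numpy as np

# Illustrative sketch (not NVIDIA's code): a NeRF represents a scene as a
# function F(position, view_direction) -> (RGB color, density). A tiny,
# randomly initialized MLP stands in here for the trained network.

rng = np.random.default_rng(0)

# Hypothetical layer sizes: 5-D input (x, y, z, theta, phi) -> 64 hidden
# units -> 4-D output (r, g, b, sigma).
W1 = rng.standard_normal((5, 64))
b1 = np.zeros(64)
W2 = rng.standard_normal((64, 4))
b2 = np.zeros(4)

def query_field(xyz, view_dir):
    """Map a 3-D point and a 2-D viewing direction to (color, density)."""
    x = np.concatenate([xyz, view_dir])
    h = np.maximum(x @ W1 + b1, 0.0)        # ReLU hidden layer
    out = h @ W2 + b2
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))    # colors squashed into [0, 1]
    sigma = np.maximum(out[3], 0.0)         # density is non-negative
    return rgb, sigma

# A renderer would call this for many samples along each camera ray and
# blend the results into a pixel color.
rgb, sigma = query_field(np.array([0.1, 0.2, 0.3]), np.array([0.0, 1.0]))
print(rgb, sigma)
```

Training such a network means adjusting its weights until the images it renders from the known camera positions match the captured photos — which is why the camera pose for each shot is a required input.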
“In a scene that includes people or other moving elements, the quicker these shots are captured, the better. If there’s too much motion during the 2D image capture process, the AI-generated 3D scene will be blurry,” NVIDIA says.