Nvidia is set to launch Omniverse Cloud Sensor RTX, a suite of microservices designed to provide physically accurate sensor simulation and accelerate the development of fully autonomous machines. According to Nvidia, developers using Omniverse Cloud Sensor RTX can test sensor perception and AI software at scale in realistic, physically accurate virtual environments long before deploying them in the real world.
In addition to aiding developers, Omniverse Cloud Sensor RTX will enable sensor manufacturers to validate and integrate digital twins of their sensors in virtual environments. This capability is expected to reduce the time and cost associated with physical prototyping.
Omniverse Cloud Sensor RTX is built on the OpenUSD framework and powered by Nvidia’s RTX ray-tracing and neural-rendering technologies. It enhances the creation of simulated environments by combining real-world data from sources like videos, cameras, radar, and lidar with synthetic data.
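Nvidia has not published the microservices' programming interface in this announcement, but the OpenUSD foundation it cites is open source. As a rough sketch of what an OpenUSD scene description looks like, the example below uses the open-source pxr Python bindings to define a stand-in piece of factory geometry and a camera prim; the file name, prim paths, and values are invented for illustration and are not part of Nvidia's product.

    from pxr import Gf, Usd, UsdGeom

    # Create a new USD stage on disk (path and prim names are hypothetical)
    stage = Usd.Stage.CreateNew("factory_cell.usda")
    UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.z)

    # A cube standing in for factory geometry, e.g. a conveyor segment
    conveyor = UsdGeom.Cube.Define(stage, "/World/ConveyorStandIn")
    conveyor.AddTranslateOp().Set(Gf.Vec3d(0.0, 0.0, 0.5))

    # A camera prim representing one sensor whose output would be simulated
    cam = UsdGeom.Camera.Define(stage, "/World/InspectionCam")
    cam.CreateFocalLengthAttr(24.0)
    cam.AddTranslateOp().Set(Gf.Vec3d(0.0, -3.0, 1.5))

    stage.GetRootLayer().Save()

A scene authored this way is plain data: renderers and simulators that understand OpenUSD, including Nvidia's RTX-based ones, can consume the same file, which is what makes the format a natural substrate for shared digital twins.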
Even in scenarios where real-world data is scarce, these microservices can simulate a wide range of activities, such as the operation of a robotic arm, the movement of a factory conveyor belt, or the presence of a robot or person nearby.
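Where real logs are unavailable, synthetic sensor data is generated from the simulated scene instead. The toy sketch below assumes nothing about Nvidia's actual RTX sensor models; it only illustrates the basic idea for a 2D lidar: compute ideal ranges against known geometry, apply a noise model, and randomize scene parameters between runs.

    import numpy as np

    rng = np.random.default_rng(7)

    def synthetic_lidar_scan(wall_dist, n_beams=360, noise_sigma=0.02):
        """Toy 2D lidar: every beam hits a flat wall at x = wall_dist."""
        angles = np.linspace(-np.pi / 4, np.pi / 4, n_beams)  # 90-degree field of view
        true_range = wall_dist / np.cos(angles)               # exact distance along each beam
        measured = true_range + rng.normal(0.0, noise_sigma, n_beams)
        # Convert noisy ranges back to 2D points, as a perception stack would see them
        return np.stack([measured * np.cos(angles), measured * np.sin(angles)], axis=1)

    # Randomizing scene parameters between runs stands in for scenario variation
    for episode in range(3):
        cloud = synthetic_lidar_scan(wall_dist=rng.uniform(3.0, 8.0))
        print(f"episode {episode}: {cloud.shape[0]} points, first = {cloud[0]}")

In the product Nvidia describes, the geometry would come from a ray-traced OpenUSD scene rather than an analytic wall, but the pipeline shape is the same: simulated scene in, labeled sensor data out.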
“Developing safe and reliable autonomous machines powered by generative physical AI requires training and testing in physically based virtual worlds,” said Rev Lebaredian, vice president of Omniverse and simulation technology at Nvidia. “Omniverse Cloud Sensor RTX microservices will enable developers to easily build large-scale digital twins of factories, accelerating the next wave of AI.”
Among the first to gain access to Omniverse Cloud Sensor RTX are software developers Foretellix and MathWorks, which are using it to advance autonomous vehicle development.
By Impact Lab