Nvidia's new AI converts real video into 3D rendering


While playing games or experiencing virtual reality, we have often wondered: how could this be made closer to the real world? Nvidia may have an answer. The company has developed an artificial intelligence capable of transforming video into a virtual landscape.

Nvidia set up a demonstration area at the NeurIPS AI conference in Montreal to show off this technology. The company used its own supercomputer, the DGX-1, powered by Tensor Core GPUs, to convert videos captured from the dashcam of a self-driving car. This setup made it possible to turn the theory into tangible results.

The research team first extracted a high-level semantic map of each scene using a neural network, then used Unreal Engine 4 to generate colorized, high-level representations of the frames. In the final step, Nvidia's AI converts these representations into realistic images. Developers can easily modify the final result according to their needs.
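To make the idea concrete, here is a minimal, hypothetical sketch in PyTorch of the kind of network that could sit at the last stage of such a pipeline: a generator that turns a per-pixel semantic label map (the high-level representation) into an RGB frame. The class name SemanticToImageGenerator, the number of semantic classes, and the layer sizes are illustrative assumptions for this sketch, not Nvidia's actual vid2vid model.

```python
# Hypothetical, simplified sketch: render an RGB frame from a semantic label map.
# Not Nvidia's model; architecture and sizes are assumptions for illustration.
import torch
import torch.nn as nn

NUM_CLASSES = 20  # assumed number of semantic classes (road, car, building, ...)

class SemanticToImageGenerator(nn.Module):
    """Small encoder-decoder mapping a one-hot semantic map to an RGB image."""
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),            # downsample
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),   # upsample
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),                                                          # RGB in [-1, 1]
        )

    def forward(self, semantic_map: torch.Tensor) -> torch.Tensor:
        return self.net(semantic_map)

# Usage: one-hot encode a label map (e.g. produced by the game engine), render a frame.
labels = torch.randint(0, NUM_CLASSES, (1, 256, 512))            # batch x H x W label map
one_hot = torch.nn.functional.one_hot(labels, NUM_CLASSES)       # 1 x H x W x C
one_hot = one_hot.permute(0, 3, 1, 2).float()                    # 1 x C x H x W
frame = SemanticToImageGenerator()(one_hot)                      # 1 x 3 x H x W
print(frame.shape)
```

In the real system the rendering network is far larger, conditioned on previous frames for temporal consistency, and trained adversarially, but the input/output contract is the same: semantic layout in, photorealistic frame out.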

"Nvidia has been creating new ways to generate interactive graphics for 25 years. This is the first time we can do it with a neural network, "said Bryan Catanzaro, vice president of Applied Deep Learning at Nvidia. "Neural networks, in particular, generative models will change the way graphics are created."

He added that this technology will help developers and artists create virtual content at a much lower cost than before.

This is particularly interesting for game developers and virtual reality content creators, as they can explore new possibilities using ordinary video footage. However, the technology is still in development and currently requires a supercomputer, so we may have to wait a while before it reaches our consoles and desktops.

You can read more about Nvidia's research here.
