Nvidia claims it has developed software that can take a series of input videos and generate a unique, but visually similar, 3D representation of the world depicted in them. While Unreal Engine 4 is used somewhere in the process, the heart of the software is a neural network that extracts "high-level descriptions of a scene" from footage and fills in the details. The company created a demo out of city-driving footage, and claims it "allows attendees to navigate a virtual urban environment that is being rendered by this neural network."

This AI breakthrough will allow developers and artists to create new interactive 3D virtual worlds for automotive, gaming, or virtual reality by training models on videos from the real world, lowering the cost and time it takes to create virtual worlds.

The work was developed by a team of NVIDIA researchers led by Bryan Catanzaro, Vice President of Applied Deep Learning at NVIDIA. For training, the team used NVIDIA Tesla V100 GPUs on a DGX-1 with the cuDNN-accelerated PyTorch deep learning framework, and thousands of videos from the Cityscapes and Apolloscapes datasets.
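To make the idea concrete, here is a minimal PyTorch sketch of the core concept described above: a network that takes a high-level description of a scene (here, a per-pixel semantic label map) and "fills in the details" by producing an RGB frame. This is not NVIDIA's actual model; the architecture, layer sizes, class count, and names (`ToyNeuralRenderer`, `NUM_CLASSES`) are illustrative assumptions only.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 8  # assumed number of semantic classes (road, car, building, ...)


class ToyNeuralRenderer(nn.Module):
    """Toy stand-in for a neural renderer conditioned on a semantic layout."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        # Convolutions map the one-hot semantic layout to an RGB frame.
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # RGB values in [-1, 1], a common GAN convention
        )

    def forward(self, label_map: torch.Tensor) -> torch.Tensor:
        # label_map: (N, H, W) integer class ids -> one-hot -> (N, C, H, W)
        one_hot = torch.nn.functional.one_hot(label_map, NUM_CLASSES)
        one_hot = one_hot.permute(0, 3, 1, 2).float()
        return self.net(one_hot)


if __name__ == "__main__":
    renderer = ToyNeuralRenderer()
    # Fake segmentation map standing in for the "high-level description".
    labels = torch.randint(0, NUM_CLASSES, (1, 64, 128))
    frame = renderer(labels)
    print(tuple(frame.shape))  # (1, 3, 64, 128)
```

The real system adds adversarial training on driving footage and temporal consistency across frames, but the input/output contract is the same: semantic layout in, rendered imagery out.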