Nvidia Trains Robot Arms with a Virtual World

AlphaAtlas

Building on previous research, Nvidia taught a robot to pick up objects using a virtual environment. While similar commercial neural nets are usually trained on real-world data, Nvidia developed an Unreal Engine plugin to train its deep learning algorithm in a game-like virtual space. Using a "standard RGB camera," the robot can recognize the position and orientation of objects in the real world and use that data to manipulate them.
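For a sense of what "recognize the position and orientation" means in practice: one common way to get a 6-DoF pose out of a single RGB image is to have the network predict the 2D pixel locations of an object's 3D bounding-box corners, then solve a Perspective-n-Point problem against the known object model. Here's a minimal Python/OpenCV sketch of that last step; the keypoints and camera intrinsics are made-up placeholder values, and this isn't necessarily how Nvidia's network does it:

```python
import numpy as np
import cv2

# Known 3D corners of a 10 cm cube in the object's own frame (meters).
object_points = np.array([
    [-0.05, -0.05, -0.05], [ 0.05, -0.05, -0.05],
    [ 0.05,  0.05, -0.05], [-0.05,  0.05, -0.05],
    [-0.05, -0.05,  0.05], [ 0.05, -0.05,  0.05],
    [ 0.05,  0.05,  0.05], [-0.05,  0.05,  0.05],
], dtype=np.float64)

# 2D pixel coordinates a network might predict for those corners
# (placeholder numbers, not real detections).
image_points = np.array([
    [310, 240], [390, 238], [395, 300], [305, 305],
    [318, 200], [382, 198], [388, 262], [312, 266],
], dtype=np.float64)

# Camera intrinsics (fx, fy, cx, cy) -- assumed, would come from calibration.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Solve for the object's rotation and translation relative to the camera.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    print("object position in camera frame (m):", tvec.ravel())
```

With the pose in the camera frame, a hand-eye calibration is all that's left to turn it into a grasp target for the arm.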

Check out the robot in action here.

The algorithm, which performs more robustly than leading methods, aims to solve a disconnect in computer vision and robotics, namely, that most robots currently do not have the perception they need to be able to handle disturbances in the environment. This work is important because it is the first time in computer vision that an algorithm trained only on synthetic data (generated by a computer) is able to beat a state-of-the-art network trained on real images for object pose estimation on several objects of a standard benchmark. Synthetic data has the advantage over real data in that it is possible to generate an almost unlimited amount of labeled training data for deep neural networks.
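That last point is the whole trick: because the engine places the object, the ground-truth label comes for free, and you can randomize everything else (lighting, backgrounds, camera) so the network can't latch onto rendering artifacts. Here's a rough Python sketch of that domain-randomization loop, with render() standing in for the actual engine plugin; the scene parameters are illustrative, not Nvidia's:

```python
import random

def random_scene():
    """Randomized scene parameters for one synthetic training frame."""
    return {
        "object_position": [random.uniform(-0.5, 0.5) for _ in range(3)],   # meters
        "object_rotation": [random.uniform(0.0, 360.0) for _ in range(3)],  # degrees
        "light_intensity": random.uniform(0.2, 5.0),
        "background_id": random.randrange(10_000),   # random distractor backdrop
        "camera_fov_deg": random.uniform(40.0, 90.0),
    }

def generate_dataset(n_images, render):
    """Render n_images frames; labels come free because the engine
    already knows the exact pose it placed the object at."""
    dataset = []
    for _ in range(n_images):
        scene = random_scene()
        image = render(scene)  # `render` stands in for the engine plugin
        label = {
            "position": scene["object_position"],
            "rotation": scene["object_rotation"],
        }
        dataset.append((image, label))
    return dataset
```

Calling generate_dataset(1_000_000, render=my_renderer) would hand back as many perfectly labeled frames as you have GPU-hours to render, which is exactly the "almost unlimited" advantage the quote is describing.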
 
The algorithm, which performs more robustly than leading methods, aims to solve a disconnect in computer vision and robotics, namely, that most robots currently do not have the perception they need to be able to handle disturbances in the environment.


Reading that, I was left wondering more about self-driving cars than anything. How is the car's AI going to handle the vehicle in front blowing out a tire, etc., AKA "disturbances in the environment"?
 
They need to teach it how to open pickle jars and add a can opener to it.
 