AlphaAtlas
[H]ard|Gawd
- Joined
- Mar 3, 2018
- Messages
- 1,713
The field of machine learning has been advancing rapidly these past few years, while scientists have also made significant progress in robotics. The ANYmal robot is one of the most impressive examples of the latter, as it has already found work inspecting remote power stations. But to improve its performance even further, scientists have come up with a system that lets ANYmal teach itself. Without any pre-programmed commands, the robot learned to catch itself when it falls, to get back up when it gets knocked over, and to run at a surprisingly brisk pace. The researchers suggest that performance is ultimately limited by the mechanics of the robot itself, but that there is still room for improvement on the software side too. I suggest watching the longer video in the full research paper, but in case you aren't worried about AI-controlled robodogs already, I embedded a short video of ANYmal running. Thanks to Techxplore for spotting the paper.
Check out the video here.
We applied the presented methodology to learning several complex motor skills that were deployed on the physical quadruped. First, the controller enabled the ANYmal robot to follow base velocity commands more accurately and energy-efficiently than the best previously existing controller running on the same hardware. Second, the controller made the robot run faster, breaking the previous speed record of ANYmal by 25%. The controller could operate at the limits of the hardware and push performance to the maximum. Third, we learned a controller for dynamic recovery from a fall. This maneuver is exceptionally challenging for existing methods because it involves multiple unspecified internal and external contacts. It requires fine coordination of actions across all limbs and must use momentum to dynamically flip the robot. To the best of our knowledge, such recovery skill has not been achieved on a quadruped of comparable complexity... Even at 100 Hz, evaluation of the network uses only 0.25% of the computation available on a single CPU core.
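That last figure is worth dwelling on: a learned controller that runs at 100 Hz yet uses only 0.25% of a single CPU core. Locomotion policies of this kind are typically small feed-forward neural networks, so each control step is just a handful of matrix multiplies. Here is a minimal sketch of such a forward pass; the layer sizes, observation dimension, and joint count below are illustrative assumptions, not numbers from the paper.

```python
import numpy as np

def mlp_policy(obs, weights):
    """Forward pass of a small tanh MLP mapping observations to joint targets."""
    x = obs
    for W, b in weights[:-1]:
        x = np.tanh(x @ W + b)  # hidden layers with tanh activations
    W, b = weights[-1]
    return x @ W + b  # linear output: one target per actuated joint

# Illustrative dimensions (assumed): ~60-D observation, two hidden
# layers of 128 units, 12 joint outputs for a quadruped.
rng = np.random.default_rng(0)
dims = [60, 128, 128, 12]
weights = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
           for m, n in zip(dims[:-1], dims[1:])]

obs = rng.standard_normal(60)       # stand-in for sensor readings
action = mlp_policy(obs, weights)   # one evaluation per 10 ms control step
print(action.shape)                 # (12,)
```

At 100 Hz, a network this size performs on the order of a few million floating-point operations per second, which is a vanishingly small fraction of what a modern CPU core can do, consistent with the quoted 0.25% figure.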