Columbia Engineering researchers have used deep learning, a machine learning (ML) technique, to teach a robot to become self-aware. A four-degree-of-freedom articulated robotic arm moved randomly and collected data from one thousand trajectories, each comprising one hundred points. The arm was given no prior knowledge of physics, geometry, or motor dynamics. Its initial deep-learned self-model was inaccurate: the robot did not know what it was or how its joints were connected. After nearly 35 hours of training, however, the self-model was accurate to within 4 cm.

This allowed the robot to perform "pick and place" tasks in a closed loop, recalibrating itself against its self-model as it moved. With the target objects placed on the ground at specific locations each time, the robot grasped them and deposited them in a receptacle with a 100% success rate. When it performed the same task in an open loop, relying solely on its internal self-model without external feedback, its success rate fell to 44%. "That's like trying to pick up a glass of water with your eyes closed, a process difficult even for humans," observed the study's lead author, Kwiatkowski, a PhD student in the computer science department who works in Lipson's lab.

The robot can also detect and compensate for broken parts, re-imagining and retraining its self-model when damaged. The robot arm can even write text messages to researchers. To date, robots have operated on models explicitly built for them by humans. "But if we want robots to become independent, to adapt quickly to scenarios unforeseen by their creators, then it's essential that they learn to simulate themselves," says Hod Lipson, professor of mechanical engineering and director of the Creative Machines lab, where the research was done.
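The pipeline the article describes, random "motor babbling" trajectories feeding a learned self-model with no hand-coded kinematics, can be sketched in miniature. The snippet below is an illustration under invented assumptions, not the Columbia team's code: the planar four-link arm in `observe` stands in for the real robot, and the network size and training settings are arbitrary. A tiny neural network learns to predict the arm's end-effector position purely from observed (joint angles, position) pairs, mirroring the one thousand 100-point trajectories described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the physical robot: a planar 4-link arm.
# The learner never sees these equations -- it only observes samples,
# the way the real arm observed its own random trajectories.
LINKS = np.array([0.4, 0.3, 0.2, 0.1])

def observe(q):
    """Ground-truth end-effector (x, y) for joint angles q of shape (..., 4)."""
    cum = np.cumsum(q, axis=-1)
    return np.stack([(LINKS * np.cos(cum)).sum(axis=-1),
                     (LINKS * np.sin(cum)).sum(axis=-1)], axis=-1)

# 1000 random trajectories x 100 points each, as in the article.
Q = rng.uniform(-np.pi, np.pi, size=(1000 * 100, 4))
P = observe(Q)

# Tiny one-hidden-layer self-model: joint angles -> predicted position.
H = 64
W1 = rng.normal(0, 0.5, (4, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 2)); b2 = np.zeros(2)

def predict(q):
    h = np.tanh(q @ W1 + b1)
    return h @ W2 + b2, h

def mse(q, p):
    pred, _ = predict(q)
    return float(((pred - p) ** 2).mean())

err_before = mse(Q, P)  # the untrained self-model is badly wrong

lr = 0.05
for step in range(2000):
    idx = rng.integers(0, len(Q), 256)   # minibatch of observed points
    q, p = Q[idx], P[idx]
    pred, h = predict(q)
    g = 2 * (pred - p) / len(q)          # gradient of MSE w.r.t. predictions
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)       # backprop through tanh
    gW1, gb1 = q.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

err_after = mse(Q, P)
print(err_before, err_after)
```

A closed-loop controller would query such a self-model at every step and correct against sensor feedback; the open-loop case trusts the model's predictions alone, which is why its accuracy drops.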
Lipson, who is also a member of the Data Science Institute, notes that self-imaging is key to enabling robots to move away from the confines of so-called "narrow AI" towards more general abilities. "This is perhaps what a newborn child does in its crib, as it learns what it is," he says. "We conjecture that this advantage may have also been the evolutionary origin of self-awareness in humans. While our robot's ability to imagine itself is still crude compared to humans, we believe that this ability is on the path to machine self-awareness."

Lipson and Kwiatkowski are aware of the ethical implications. "Self-awareness will lead to more resilient and adaptive systems, but also implies some loss of control," they warn. "It's a powerful technology, but it should be handled with care." The researchers are now exploring whether robots can model not just their own bodies but also their own minds, that is, whether robots can think about thinking.