Robot Imagination

Researchers are working on brain-inspired algorithms that could revolutionize how robots are deployed. By simulating the learning processes of a biological brain, robots could learn to handle unfamiliar situations rather than having to be programmed for each specific task.

One example is Darwin, a “robot toddler” that has already learned to perform various actions by “imagining” how to do them and then practicing a series of tasks. A high-level deep-learning network models the dynamics of the robot’s actuators and sensors, including their responses to environmental factors, and provides overall guidance for carrying out the required movements.
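The article does not describe Darwin’s architecture in detail, but the general idea of a high-level learned network guiding low-level motor control can be sketched roughly as follows. All names, layer sizes, and sensor/actuator counts here are illustrative assumptions, not the researchers’ actual implementation.

```python
# Illustrative sketch (not the Berkeley team's code): a high-level learned
# network proposes a guidance signal from sensor readings, and a simple
# low-level controller turns that guidance into actuator commands.
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS = 12    # assumed: joint angles, velocities, IMU readings, etc.
N_ACTUATORS = 6   # assumed number of motors

# --- High-level network: maps sensor state to target joint configuration ---
W1 = rng.normal(scale=0.1, size=(32, N_SENSORS))
W2 = rng.normal(scale=0.1, size=(N_ACTUATORS, 32))

def high_level_guidance(sensors: np.ndarray) -> np.ndarray:
    """Two-layer network producing normalized target joint angles."""
    hidden = np.tanh(W1 @ sensors)
    return np.tanh(W2 @ hidden)          # targets in [-1, 1]

# --- Low-level controller: PD loop tracking the high-level targets ---
KP, KD = 2.0, 0.1

def low_level_torques(targets, joint_angles, joint_velocities):
    """Proportional-derivative control toward the network's targets."""
    return KP * (targets - joint_angles) - KD * joint_velocities

# One control step with made-up sensor values
sensors = rng.normal(size=N_SENSORS)
joint_angles = sensors[:N_ACTUATORS]
joint_velocities = sensors[N_ACTUATORS:]

targets = high_level_guidance(sensors)
torques = low_level_torques(targets, joint_angles, joint_velocities)
print("commanded torques:", np.round(torques, 3))
```

In this sketch the network only sets targets; the fixed PD loop handles the fast, low-level corrections, which is one common way to split guidance from motor control.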

So far, Darwin has learned how to stand, reach with its hand, and keep its balance on an inclined surface, and researchers continue to add variability to these tasks. By giving robots the ability to learn “on the fly,” these algorithms may open up a wide range of new applications, enabling robots to act more reliably and efficiently in the real world; a toy version of this practice loop is sketched below. In the shorter term, deep learning will be particularly useful for improving the locomotion of humanoid robots, which have historically struggled to walk on uneven surfaces or to keep their balance when reaching for objects.
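To make the trial-and-practice idea concrete, here is a minimal, hypothetical sketch: a policy is repeatedly perturbed, each variant is evaluated on a crude simulated balance task, and better-performing parameters are kept. The one-dimensional dynamics and reward below are toy stand-ins, not Darwin’s actual training setup.

```python
# Toy illustration of learning by "practice": random-search hill climbing
# on a linear balance policy. The dynamics are a crude stand-in for a robot
# leaning on an inclined surface, not the actual Darwin simulator.
import numpy as np

rng = np.random.default_rng(1)

def rollout(weights, steps=200):
    """Simulate a 1-D tilt angle; the policy pushes against the tilt."""
    angle, velocity, total_reward = 0.1, 0.0, 0.0
    for _ in range(steps):
        action = float(np.clip(weights @ np.array([angle, velocity]), -1, 1))
        velocity += 0.02 * (angle - action)   # gravity-like pull minus correction
        angle += 0.02 * velocity
        total_reward += 1.0 - abs(angle)      # reward staying upright
        if abs(angle) > 1.0:                  # fell over
            break
    return total_reward

weights = np.zeros(2)
best = rollout(weights)
for trial in range(300):                       # "practice" iterations
    candidate = weights + rng.normal(scale=0.1, size=2)
    score = rollout(candidate)
    if score > best:                           # keep only improvements
        weights, best = candidate, score
print("learned weights:", np.round(weights, 2), "score:", round(best, 1))
```

The same keep-what-works loop can run against either a simulator (the “imagining” stage) or the physical robot (the practicing stage), which is what lets such systems adapt when the task changes.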

For information: Pieter Abbeel, University of California at Berkeley, Electrical Engineering and Computer Sciences, 746 Sutardja Dai Hall #1758, Berkeley, CA 94720; phone: 510-642-3214; email: pabbeel@cs.berkeley.edu; Web site: http://www.berkeley.edu/ or http://www.eecs.berkeley.edu/