
A Robot’s Body Image

Image courtesy of Pixabay.

How do people catch balls flying through the air on unpredictable trajectories? Humans have an innate spatial awareness of their arms and automatically work out how each joint must move to meet the ball at exactly the right time and place. Although this feels intuitive, the brain performs many complex calculations to make it happen: the size of the glove, the length of each arm bone, the angle through which each joint must rotate, and the speed of the motion must all be computed accurately for the catch to succeed. Translating this skill to robots requires one essential ingredient: self-awareness.
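To make that hidden arithmetic concrete, here is a minimal sketch of the interception computation, assuming a simple ballistic model with no air resistance; the launch position, launch velocity, and catch height below are invented purely for illustration:

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2, z points up

def ball_position(p0, v0, t):
    """Ballistic position at time t, ignoring air resistance."""
    return p0 + v0 * t + 0.5 * GRAVITY * t**2

def predict_catch_point(p0, v0, catch_height):
    """Solve for when the ball descends to catch_height, then return
    the point the hand must reach at that moment."""
    # Solve 0.5*g*t^2 + vz*t + (z0 - catch_height) = 0; since the
    # quadratic coefficient is negative, this root is the later one,
    # on the ball's way down.
    a, b, c = 0.5 * GRAVITY[2], v0[2], p0[2] - catch_height
    t = (-b - np.sqrt(b**2 - 4 * a * c)) / (2 * a)
    return ball_position(p0, v0, t), t

p0 = np.array([0.0, 0.0, 1.5])  # launch position in meters (assumed)
v0 = np.array([3.0, 1.0, 4.0])  # launch velocity in m/s (assumed)
point, t = predict_catch_point(p0, v0, catch_height=1.0)
print(f"Meet the ball at {point.round(2)} after {t:.2f} s")
```

A brain, of course, does nothing so explicit, and a real controller would also fold in air drag and sensor noise; the sketch only shows how much geometry hides inside one "effortless" catch.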

In a recent Science Robotics paper, Boyuan Chen, formerly a PhD student at Columbia University’s Creative Machines Lab and now an Assistant Professor at Duke University, created a visual self-modeling robot. Chen built a physical model that maps the 3D coordinates of the robot’s arm and a kinematic model that describes how each of those points moves as the joints rotate. By incorporating machine learning, Chen designed the robot to build continuously on what it has already learned: once it has determined its exact dimensions and range of motion, it can move on to complex tasks instead of relearning everything from scratch. The robot can also simulate its own future motion, predicting in real time where its body will be so that it can execute exactly the movement a task requires.
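The paper’s full training pipeline is beyond the scope of this article, but the core interface of such a learned self-model can be sketched: a network that takes the robot’s joint angles and a 3D query point and predicts whether that point is occupied by the robot’s body. Everything below (the 4-joint arm, the layer sizes, the example inputs) is an illustrative assumption rather than the paper’s actual configuration:

```python
import torch
import torch.nn as nn

class SelfModel(nn.Module):
    """Maps (joint angles, 3D query point) to an occupancy logit:
    'is this point inside my body at this pose?'"""
    def __init__(self, num_joints=4, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints + 3, hidden),  # joint angles + (x, y, z)
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # occupancy logit for the query point
        )

    def forward(self, joint_angles, query_points):
        x = torch.cat([joint_angles, query_points], dim=-1)
        return self.net(x)

model = SelfModel()
angles = torch.zeros(1, 4)               # a resting pose (assumed 4-joint arm)
point = torch.tensor([[0.1, 0.0, 0.3]])  # a 3D point near the arm, in meters
occupied = torch.sigmoid(model(angles, point))
print(f"Probability the point lies inside the body: {occupied.item():.2f}")
```

Once trained from self-observation, a model with this kind of interface lets the robot answer spatial questions about its own body at any pose without consulting a hand-written blueprint, which is what makes the continuous, cumulative learning described above possible.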

Self-modeling alone, however, is not enough. For practical use, these robots also need spatial awareness: they must model outside variables and adapt to each specific scenario. A robot should be able to recognize a motor malfunction, for example, recalculate its movements, and still carry out its task. To test the robot’s ability to recognize and touch objects, Chen suspended a red ball near it. Because the robot can detect the ball at a specific position, it can perceive where the ball sits relative to its own body, refining its model of itself (its self-image). Designing robots to be aware of outside variables extends that self-image outward, which is essential in the real world, where a careless motion can break things. In the paper, the ball thus tests both self-awareness and environmental awareness: the robot must detect a 3D object in its surroundings and relate it to its own body.
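As one illustration of the perception step, here is a minimal sketch of a conventional way to locate a red ball in a camera frame, using OpenCV color thresholding. This stands in for whatever pipeline the actual robot uses; the file name and HSV thresholds are assumptions:

```python
import cv2

def find_red_ball(frame_bgr):
    """Return the ball's pixel centroid (u, v), or None if not found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))

frame = cv2.imread("frame.jpg")  # hypothetical camera frame
if frame is not None:
    print("Ball centroid in pixels:", find_red_ball(frame))
```

A pixel coordinate alone is not enough to reach for the ball; combined with camera calibration and the robot’s self-model, though, it can be converted into a 3D position relative to the body, which is exactly the link between environmental awareness and self-awareness the experiment probes.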

Chen’s goal, through his new General Robotics Lab at Duke University, is to create “generalist” self-modeling robots that can handle any task, whether playing catch or watering a plant, in contrast to today’s “specialist” robots built for a single job. By learning to map the outside world as well as themselves, robots could take on a vastly more diverse set of tasks, increasing their potential to improve our lives.