Scientists from the United States trained a robot to hear and found that its efficiency almost doubled. The robot was able to predict the physical properties of objects based on sound alone.
Researchers at Carnegie Mellon University (CMU) have found that a robot’s perception can be dramatically improved by adding hearing to the device.
In the first large-scale study of the interaction between sound and robot action, conducted at the CMU Robotics Institute, researchers found that sounds can help a robot distinguish even similar objects, such as a metal screwdriver and a metal key. Hearing can also help robots determine which actions produce which sounds and use those sounds to predict the physical properties of new objects. The researchers noted that performance roughly doubled, and the devices classified objects by sound alone 76% of the time.
To conduct their research, the scientists created a large dataset by recording video and audio of 60 common objects – toys, hand tools, shoes, apples, and tennis balls. For each object, they recorded a whole set of sounds: how it slid, rolled, or collided with other objects. The robot then analyzed this dataset and learned to work with it.
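The article does not describe the CMU team's actual models, but the core idea – telling objects apart from the sounds they make – can be illustrated with a toy sketch. The code below is an assumption-laden stand-in: it synthesizes tones in place of real recordings, extracts two simple audio features (zero-crossing rate and energy), and classifies a new clip by nearest feature match. All names (`make_tone`, `rolling_ball`, `metal_key`) are hypothetical and not from the study.

```python
import math

def zero_crossing_rate(signal):
    # Fraction of consecutive sample pairs whose signs differ;
    # a rough proxy for how high-pitched the sound is.
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return crossings / (len(signal) - 1)

def energy(signal):
    # Mean squared amplitude: a rough proxy for loudness.
    return sum(x * x for x in signal) / len(signal)

def features(signal):
    return (zero_crossing_rate(signal), energy(signal))

def make_tone(freq, n=1000, rate=8000, amp=1.0):
    # Hypothetical stand-in for a recorded sliding/rolling/impact sound.
    return [amp * math.sin(2 * math.pi * freq * t / rate) for t in range(n)]

# "Training" clips: a low-pitched rolling ball vs. a high-pitched metal clink.
train = {
    "rolling_ball": features(make_tone(200)),
    "metal_key": features(make_tone(2000, amp=0.5)),
}

def classify(signal):
    # Assign the label whose stored feature vector is closest.
    f = features(signal)
    return min(train,
               key=lambda label: sum((a - b) ** 2
                                     for a, b in zip(f, train[label])))

# A new high-pitched, quiet clip lands closer to the metal-key profile.
print(classify(make_tone(1800, amp=0.6)))  # → metal_key
```

A real system would replace the synthetic tones with recorded waveforms and the two hand-picked features with learned representations, but the pipeline shape – record, featurize, match – is the same.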
This is the largest sound dataset that robots have been trained on. The researchers were surprised by how much more efficient the devices became. They found, for example, that a robot can use what it has learned about the sounds of one set of objects to make predictions about the physical properties of objects it has never seen or touched.