A new model helps robots perceive the physical environment

American researchers from the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology have developed a model that lets robots perceive their surroundings much as people do. The work is described on the institute's website.

The team introduced a spatial perception model called 3D Dynamic Scene Graphs. It allows a robot to quickly build a 3D map of its environment in which everything around it is broken down into objects and their locations in space, for example a bed opposite a wardrobe.
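As a rough illustration of the idea, a scene graph of this kind can be thought of as layers of labeled nodes (objects, rooms, and so on) connected by containment links. The Python sketch below is purely illustrative; its class names and layer names are assumptions, not the researchers' actual data structures.

    from dataclasses import dataclass, field
    from typing import Optional

    # Illustrative sketch only: these classes and layers are assumptions,
    # not the actual 3D Dynamic Scene Graphs implementation.

    @dataclass
    class SceneNode:
        node_id: int
        label: str        # e.g. "bed", "wardrobe", "bedroom"
        position: tuple   # (x, y, z) centroid in the world frame

    @dataclass
    class SceneGraph:
        # Each layer groups nodes by level of abstraction,
        # e.g. objects -> rooms -> building.
        layers: dict = field(default_factory=dict)  # layer name -> list of nodes
        edges: list = field(default_factory=list)   # (parent, child) containment links

        def add(self, layer: str, node: SceneNode, parent: Optional[int] = None):
            self.layers.setdefault(layer, []).append(node)
            if parent is not None:
                self.edges.append((parent, node.node_id))

    # A bedroom containing a bed opposite a wardrobe, as in the article's example.
    g = SceneGraph()
    g.add("rooms", SceneNode(1, "bedroom", (0.0, 0.0, 0.0)))
    g.add("objects", SceneNode(2, "bed", (1.0, 2.0, 0.0)), parent=1)
    g.add("objects", SceneNode(3, "wardrobe", (1.0, -2.0, 0.0)), parent=1)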

Beyond producing a conventional map, the new model may also let the robot query the locations of objects and rooms, or track the movement of people in its path.
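Extending the sketch above, a location query could be as simple as scanning the graph for a node with a matching semantic label. The helper below is hypothetical, not part of the actual system.

    def find(graph: SceneGraph, label: str):
        # Walk every layer and return the position of the first node
        # whose semantic label matches the query.
        for nodes in graph.layers.values():
            for node in nodes:
                if node.label == label:
                    return node.position
        return None

    print(find(g, "bed"))  # -> (1.0, 2.0, 0.0)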

According to an associate professor in the department, the new model will help the robot make decisions quickly and plan its movements, much the way people plan a route.

To build its model of the environment, the system relies on Kimera, an open-source library. Kimera accepts image streams from the robot's camera, along with inertial measurements from onboard sensors, estimates the trajectory of the robot or camera, and reconstructs the scene as a three-dimensional mesh, all in real time.
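Conceptually, the per-frame loop described above looks something like the sketch below. Kimera itself is a C++ library, and none of the class or function names here are its real API; this is only a minimal illustration, under stated assumptions, of a visual-inertial pipeline that estimates one pose per frame and grows a scene mesh.

    # Conceptual sketch of the pipeline the article describes.
    # None of these names belong to Kimera's actual API.

    class VisualInertialPipeline:
        """Fuses camera images and IMU readings into a trajectory and a 3D mesh."""

        def __init__(self):
            self.trajectory = []   # estimated poses, one per processed frame
            self.mesh_faces = []   # triangles of the reconstructed scene

        def process_frame(self, image, imu_readings):
            # 1. Visual-inertial odometry: combine tracked image features
            #    with inertial measurements to estimate the current pose.
            pose = self._estimate_pose(image, imu_readings)
            self.trajectory.append(pose)
            # 2. Mesh reconstruction: triangulate the geometry visible from
            #    this pose and stitch it into the growing scene mesh.
            self.mesh_faces.extend(self._triangulate(image, pose))
            return pose

        def _estimate_pose(self, image, imu_readings):
            # Stub: a real system would run feature tracking plus IMU
            # preintegration here; we return a fixed placeholder pose.
            return (0.0, 0.0, 0.0)

        def _triangulate(self, image, pose):
            # Stub: a real system would emit new mesh triangles here.
            return []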

The result is an approximate map of the space in the form of a three-dimensional mesh, with objects, structures, and people each color-coded.
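One common way to render such a semantic mesh is a fixed lookup from class label to color; the mapping below is an arbitrary assumption for illustration, not the colors the researchers actually use.

    # Illustrative only: arbitrary RGB values for each semantic class.
    SEMANTIC_COLORS = {
        "object":    (0, 128, 255),    # e.g. furniture
        "structure": (128, 128, 128),  # walls, floors, ceilings
        "human":     (255, 64, 64),    # detected people
    }

    def color_for(label):
        # Unknown labels fall back to white.
        return SEMANTIC_COLORS.get(label, (255, 255, 255))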

The new development will be useful not only for everyday household tasks that require coordinated awareness, but also in manufacturing and in surveying disaster sites.