The new AI model lets devices (primarily mobile phones) be controlled using natural language. The feature will be especially useful for visually impaired users.
Google has introduced new AI-based models that allow smartphones to be controlled with natural language. The work was presented at the 2020 conference of the Association for Computational Linguistics (ACL), where the researchers propose a method for training models that will be primarily useful for people with visual impairments.
The researchers built an initial corpus of commands to help the AI interact with devices. The models process an incoming request and predict the sequence of actions to perform in the application, as well as the screens and interactive elements needed to move from one screen to the next.
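The pipeline described above can be sketched in two stages: split an instruction into step phrases, then ground each phrase to a concrete action on an interactive element of the current screen. The sketch below is a toy, rule-based illustration of that idea; all function names, the splitting rules, and the matching logic are assumptions for demonstration, not the researchers' actual models.

```python
# Toy sketch of a two-stage instruction-to-action pipeline (illustrative only):
# (1) split a natural-language instruction into step phrases,
# (2) ground each phrase to an operation on a matching screen element.

from dataclasses import dataclass
from typing import List


@dataclass
class UIAction:
    operation: str  # e.g. "tap" or "toggle"
    target: str     # name of the on-screen element


def split_into_steps(instruction: str) -> List[str]:
    # Naive phrase extraction: treat ", then " and " and " as step boundaries.
    parts = instruction.replace(", then ", "|").replace(" and ", "|")
    return [p.strip() for p in parts.split("|") if p.strip()]


def ground_step(step: str, screen_elements: List[str]) -> UIAction:
    # Naive grounding: pick the first screen element mentioned in the step.
    for element in screen_elements:
        if element in step.lower():
            verb_is_switch = step.lower().startswith(("enable", "disable", "turn"))
            return UIAction("toggle" if verb_is_switch else "tap", element)
    raise ValueError(f"no matching element for step: {step!r}")


def instruction_to_actions(instruction: str, screen_elements: List[str]) -> List[UIAction]:
    return [ground_step(s, screen_elements) for s in split_into_steps(instruction)]


if __name__ == "__main__":
    screen = ["settings", "wi-fi", "airplane mode"]
    for action in instruction_to_actions("open settings, then turn on wi-fi", screen):
        print(action.operation, action.target)
```

A real system would replace both stages with learned models: a sequence model that extracts action phrase spans, and a grounding model that scores phrases against the UI elements actually present on screen.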
With the help of AI, they have already created three instruction datasets for multi-step smartphone tasks. In addition, the researchers have nearly 300,000 one-step commands related to the user interface, and these work on almost all Android devices.
The researchers report that in their experiments, the model translated users' natural-language commands into actions with 89.21% accuracy. However, when the phrasing became more complex, or when noise was introduced while commands were spoken, accuracy dropped sharply to 70.59%. Google is confident that the model will handle the task better over time.
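Accuracy figures like those above are typically computed as a complete match: a prediction counts only if the entire predicted action sequence equals the reference sequence. The snippet below is a hypothetical illustration of that scoring rule, not the researchers' evaluation code; the example sequences are invented.

```python
# Hypothetical "complete match" scoring: a prediction is correct only if
# the whole predicted action sequence equals the reference sequence.

def complete_match_accuracy(predicted, reference):
    assert len(predicted) == len(reference), "one prediction per reference"
    hits = sum(p == r for p, r in zip(predicted, reference))
    return hits / len(reference)


# Invented example: the first sequence matches fully, the second does not.
preds = [["tap settings", "toggle wi-fi"], ["tap home"]]
refs = [["tap settings", "toggle wi-fi"], ["tap back"]]
print(complete_match_accuracy(preds, refs))  # 0.5
```

Under this metric a single wrong step anywhere in a multi-step sequence makes the whole example count as a miss, which is one reason accuracy falls as instructions grow more complex.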
The researchers will make all datasets, models, and results publicly available on GitHub. They invite other scientists to contribute and hope this will be a first step toward solving the problem of controlling devices with natural language.