MIT algorithm teaches AI systems skepticism

A new deep learning algorithm developed by researchers at MIT teaches AI systems to be skeptical of the inputs they receive.

A team from MIT has paired a reinforcement learning algorithm with a deep neural network, a combination used, for example, to train algorithms to play video games.

To make AI systems robust against adversarial inputs, researchers have tried building safeguards into supervised learning.

Traditionally, a neural network is trained to associate specific labels or actions with given inputs. For example, a network fed thousands of images labeled as cats, along with images labeled as houses and hot dogs, should correctly label a new image of a cat as a cat.
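As a rough illustration of this input-to-label association (a toy sketch with made-up data, not the MIT system's deep network), a classifier can memorize one representative vector per label and assign any new input to the nearest one:

```python
# A toy stand-in for supervised learning: tiny 2-D feature vectors
# replace real images, and a nearest-centroid rule replaces the deep
# neural network. All data here is invented for illustration.
import numpy as np

features = np.array([[0.9, 0.1], [0.8, 0.2],   # "cat"-like inputs
                     [0.1, 0.9], [0.2, 0.8]])  # "house"-like inputs
labels = np.array([0, 0, 1, 1])                # 0 = cat, 1 = house

# "Training": remember one mean vector (centroid) per label.
centroids = {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(x):
    # Assign the label whose centroid lies closest to the input.
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

print(classify(np.array([0.85, 0.15])))  # -> 0, i.e. "cat"
```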

In robust artificial intelligence systems, the same supervised learning technique can be tested with slightly modified versions of the image. If the network assigns the same label – a cat – to every modified version, there is a high chance that the image, modified or not, really does show a cat.
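A minimal sketch of that consistency test might look as follows, reusing the toy classify() function from the previous example (a real test would perturb images and query a deep network; the epsilon and trial count here are arbitrary assumptions):

```python
import numpy as np

def is_robust(x, classify, epsilon=0.05, trials=100, seed=0):
    """Return True if the predicted label survives small random perturbations."""
    rng = np.random.default_rng(seed)
    base = classify(x)
    for _ in range(trials):
        noise = rng.uniform(-epsilon, epsilon, size=x.shape)
        if classify(x + noise) != base:
            return False  # a tiny change flipped the label
    return True

# True here means every slightly modified copy still maps to the same
# label, raising confidence that the prediction is genuine.
print(is_robust(np.array([0.85, 0.15]), classify))
```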

"To use neural networks in safety-critical scenarios, we had to figure out how to make real-time decisions based on worst-case assumptions," the authors explain.

The team therefore turned to another form of machine learning, reinforcement learning, which does not require binding labeled inputs to outputs but instead reinforces certain actions in response to inputs. This approach is commonly used to teach computers to play chess and Go.
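One way to picture how these two ideas – worst-case reasoning and reinforcement learning – fit together is pessimistic action selection: for each candidate action, estimate the lowest value it could have over slightly perturbed observations, and choose the action with the best lower bound. The sketch below is only illustrative: q_value() is a made-up placeholder for a trained deep Q-network, and random sampling stands in for the certified bounds the researchers actually compute.

```python
import numpy as np

def q_value(obs, action):
    # Placeholder value function; a real agent would query a trained
    # deep Q-network here.
    return float(-np.abs(obs - action).sum())

def worst_case_action(obs, actions, epsilon=0.1, samples=50, seed=1):
    """Pick the action with the best worst-case value over an
    epsilon-ball of perturbed observations (estimated by sampling)."""
    rng = np.random.default_rng(seed)
    best_action, best_lower_bound = None, -np.inf
    for a in actions:
        # Pessimistic estimate: the minimum Q-value seen across
        # sampled perturbations of the observation.
        lower = min(
            q_value(obs + rng.uniform(-epsilon, epsilon, obs.shape), a)
            for _ in range(samples)
        )
        if lower > best_lower_bound:
            best_action, best_lower_bound = a, lower
    return best_action

obs = np.array([0.2, 0.4])
print(worst_case_action(obs, actions=[0, 1]))  # acts on the pessimistic estimate
```

Acting on the pessimistic estimate rather than the nominal one is the "skepticism" the headline refers to: the agent assumes its observation may have been nudged and still picks a safe action.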

The authors believe the new CARRL (Certified Adversarial Robustness for Deep Reinforcement Learning) algorithm could help robots safely handle unpredictable interactions in the real world.
