Although the brain can conjure imperfect memories, including memories of events that never happened, the algorithms and strategies of human neural networks store information better and more efficiently than AI systems do. The research is published in Physical Review Letters.
In recent decades, artificial intelligence has performed well in many areas of science and technology. Even at chess, AI algorithms now play better than humans: it is worth remembering how, in 1996, the Deep Blue computer first beat the reigning chess champion, Garry Kasparov. New research shows that the brain's strategy for storing memories can lead to imperfect memories, but in turn allows it to store more memories, at a lower resource cost, than AI can. The work was carried out by SISSA scientists in collaboration with the Kavli Institute for Systems Neuroscience and the Centre for Neural Computation.
Neural networks, whether real or artificial, learn by tuning the connections between neurons. By making them stronger or weaker, some neurons become more active, some less active, until a particular pattern of activity emerges. We call this pattern a "memory". AI's strategy is to use complex, lengthy algorithms that iteratively tune and optimize the connections between neurons. The brain does this much more simply: each connection between two neurons changes only according to how active those two neurons are at the same time. For a long time, it was believed that, compared with AI algorithms, this allows fewer memories to be stored.
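The local rule described above is the classic Hebbian principle, and its best-known textbook form is the Hopfield network. The sketch below is a minimal, generic illustration of that idea, not the specific model used in the study: each weight is set in one shot from the joint activity of its two neurons, with no iterative optimization, yet a corrupted cue can still settle back onto a stored pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_patterns = 100, 5

# Random binary (+1/-1) activity patterns to be stored as memories.
patterns = rng.choice([-1, 1], size=(n_patterns, n_neurons))

# Hebbian (one-shot) learning: each weight depends only on the joint
# activity of its two neurons -- no iterative optimization loop.
W = np.zeros((n_neurons, n_neurons))
for p in patterns:
    W += np.outer(p, p) / n_neurons
np.fill_diagonal(W, 0)  # no self-connections

# Retrieval: start from a corrupted cue and let the network settle.
state = patterns[0].copy()
state[:10] *= -1  # flip 10% of the neurons

for _ in range(5):  # a few asynchronous update sweeps
    for i in rng.permutation(n_neurons):
        state[i] = 1 if W[i] @ state >= 0 else -1

# Overlap of 1.0 means the original memory was recovered exactly.
overlap = (state @ patterns[0]) / n_neurons
print(f"overlap with stored memory: {overlap:.2f}")
```

With only 5 patterns stored across 100 neurons, the load is far below the network's capacity, so retrieval from the corrupted cue is essentially perfect.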
A new study paints a different picture: when the relatively simple strategy the brain uses to change neural connections is combined with biologically plausible models of individual neurons' responses, that strategy works as well as, or even better than, AI algorithms.
The reason for this paradox lies in the introduction of errors: when a memory is retrieved efficiently, it can be identical to, or merely correlated with, the original input to be remembered. The brain's strategy yields retrieved memories that are not identical to the original inputs, because it suppresses the activity of the neurons that are barely active in each pattern. These damped neurons play no critical role in distinguishing between the different memories stored in the same network. By ignoring them, the network focuses its neural resources on the neurons that matter for the input to be remembered, which yields a higher storage capacity.
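A toy numerical example makes the trade-off concrete. This is a hypothetical illustration, not the paper's model: we generate graded activity patterns, zero out the weakly active neurons as a stand-in for the suppression described above, and check that the "retrieved" version, while no longer identical to its input, is still easily told apart from the other stored memories.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_patterns = 200, 4

# Graded, nonnegative activity patterns: many neurons are barely active.
patterns = rng.exponential(scale=1.0, size=(n_patterns, n_neurons))

# "Retrieved" memories: suppress weakly active neurons (a toy stand-in
# for the thresholding the brain's strategy introduces).
threshold = 0.5
retrieved = np.where(patterns > threshold, patterns, 0.0)

def corr(a, b):
    """Pearson correlation between two activity vectors."""
    return np.corrcoef(a, b)[0, 1]

# The retrieved memory is no longer identical to its input...
same = corr(patterns[0], retrieved[0])
# ...but remains far more similar to it than to any other stored memory.
other = max(corr(patterns[k], retrieved[0]) for k in range(1, n_patterns))
print(f"correlation with own input:   {same:.2f}")
print(f"best correlation with others: {other:.2f}")
```

The within-memory correlation stays close to 1 while cross-memory correlations hover near 0: discarding the barely-active neurons loses little discriminative information, which is exactly why those resources can be reallocated.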
Overall, the study highlights how biologically plausible, self-organizing learning procedures can be just as effective as slow, biologically implausible learning algorithms.