AI can learn new objects more like a human brain does if it is programmed to use similar learning strategies.
In a new study, a team of scientists explains how a new approach dramatically improves the ability of AI software to quickly learn new visual concepts.
“We can make AI learn much better if we train it the way our brain perceives information.”
Maximilian Riesenhuber, Ph.D., Professor of Neurobiology, Georgetown University Medical Center
Humans can learn new visual concepts quickly and well from little data — sometimes a single example is enough. Even three- to four-month-old babies can easily learn to recognize zebras and distinguish them from cats, horses, and giraffes. Computers, by contrast, usually need to “see” many examples of the same object before they can recognize it, Riesenhuber explains.
The team therefore developed software that defines relationships between entire visual categories, rather than taking the more standard approach of identifying objects using only low- and intermediate-level information such as shape and color.
The team found that artificial neural networks that represent new objects in terms of previously learned concepts learn new visual concepts much faster.
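The article does not describe the team's software, but the idea of representing a new object in terms of previously learned concepts can be illustrated with a common few-shot technique: keep a feature space learned from earlier categories and classify by distance to stored class prototypes, so a single example of a new class is enough. The sketch below is a hypothetical toy version in which random vectors stand in for features from a pretrained network; all names and numbers are illustrative assumptions, not the study's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" concept space: each class clusters around a
# prototype vector, standing in for features a network learned earlier.
def sample(prototype, n):
    # Draw n noisy feature vectors around a class prototype.
    return prototype + 0.1 * rng.standard_normal((n, prototype.size))

dim = 32
prototypes = {name: rng.standard_normal(dim) for name in ["cat", "horse"]}

# One-shot learning: a single "zebra" example becomes the stored
# representation for the new concept -- no retraining of the feature space.
true_zebra = rng.standard_normal(dim)
prototypes["zebra"] = sample(true_zebra, 1)[0]

def classify(x):
    # Nearest-prototype rule: label by the closest stored concept.
    return min(prototypes, key=lambda name: np.linalg.norm(x - prototypes[name]))

# Fresh zebra images are recognized from that single stored example.
test_zebras = sample(true_zebra, 20)
accuracy = np.mean([classify(x) == "zebra" for x in test_zebras])
```

Because the new class is expressed in the same feature space as the old ones, learning it costs only one stored vector; this is the sense in which reusing previously learned concepts makes new learning fast.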
The brain architecture underlying human visual concept learning builds on the neural networks involved in object recognition. The anterior temporal lobe is believed to contain “abstract” representations that go beyond shape. These complex neural hierarchies for visual recognition allow people to learn new tasks and, crucially, to draw on previously acquired knowledge.
Despite advances in artificial intelligence, the human visual system remains the gold standard for generalizing from a few examples: it copes reliably with variations in images and robustly interprets the surrounding scene.