An artificial intelligence system has translated brain signals into sentences with almost no errors. In the future, such a system could help restore speech to patients who have lost the ability to speak because of illness.
Joseph Makin of the University of California and his colleagues used deep learning algorithms to study the brain signals of four patients. All of them suffered from epilepsy, so they already had electrodes implanted in their brains to monitor seizures.
Each woman was asked to read a set of sentences aloud while the team recorded her brain activity. The largest set of sentences contained 250 unique words. The team fed the recorded activity into a neural network, training it to identify regularly occurring patterns likely associated with repeating aspects of speech, such as combinations of vowels and consonants. These patterns were then fed into a second neural network, which tried to turn them into words and assemble them into sentences.
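The two-stage pipeline described above can be sketched in miniature. This is a hypothetical illustration only: the layer shapes, the toy vocabulary, and the untrained random weights are all assumptions, standing in for the recurrent networks the researchers actually trained on electrode recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["tina", "turner", "is", "a", "singer"]  # toy vocabulary (assumed)

def encode(brain_activity, W_enc):
    """Stage 1: map raw activity frames to compact 'pattern' vectors."""
    return np.tanh(brain_activity @ W_enc)

def decode(patterns, W_dec):
    """Stage 2: score each pattern against the vocabulary and pick a word."""
    scores = patterns @ W_dec      # shape: (frames, vocabulary size)
    return scores.argmax(axis=1)   # greedy word choice per frame

# Simulated recording: 6 time frames from 16 electrode channels.
activity = rng.standard_normal((6, 16))
W_enc = rng.standard_normal((16, 8))           # untrained, for illustration
W_dec = rng.standard_normal((8, len(VOCAB)))

patterns = encode(activity, W_enc)
word_ids = decode(patterns, W_dec)
sentence = " ".join(VOCAB[i] for i in word_ids)
print(sentence)
```

With trained weights, the first stage would learn the recurring speech-related patterns and the second would learn to order them into sentences; here both stages merely demonstrate the data flow.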
Each time a person speaks the same sentence, the brain activity is similar but not identical, the researchers explain. “Memorizing a person’s brain activity while reading sentences will not help, so the algorithm should instead understand what is similar in the patterns and generalize from these data,” says Makin.
During testing, the AI’s best results had an error rate of only 3%. The researchers believe the algorithm was helped by the fact that the patients read simple sentences with a small number of unique words. In some cases, though, the AI managed to distinguish similar-sounding words (for example, Tina and Turner) from brain activity alone.
The team also tried decoding the brain signal data directly into sentences, skipping the intermediate step, but the error rate immediately rose to 38%. The researchers note that AI cannot yet cope with this task at scale. “People typically know and use up to 350,000 words, but the algorithm cannot decode them all. Extending its capabilities will be incredibly difficult,” the scientists say.
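Error rates like the 3% and 38% figures quoted in this article are conventionally measured as word error rate: the minimum number of word substitutions, insertions, and deletions needed to turn the decoded sentence into the spoken one, divided by the number of spoken words. A minimal sketch of that metric (pure Python, illustrative only, not the researchers’ code):

```python
def word_error_rate(reference, hypothesis):
    """Word error rate via a classic Levenshtein dynamic programme over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[-1][-1] / len(ref)

# One substituted word out of five gives a 20% error rate.
print(word_error_rate("tina turner is a singer",
                      "tina turner was a singer"))  # → 0.2
```

On this scale, a 3% rate means roughly one wrong word in every 33, while 38% leaves more than a third of the words wrong.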