New brain-computer interfaces decode “brain language” faster and more accurately

Two new brain-computer interface (BCI) devices, reported by independent research teams in the United States, can decode brain activity into language more quickly and accurately than existing technology while covering a larger vocabulary than previous devices. The studies demonstrate advances in technology designed to help severely paralyzed people regain the ability to communicate. The two studies were recently published simultaneously in Nature.

People with neurological disorders, including brainstem stroke and amyotrophic lateral sclerosis, often lose the ability to speak because of muscle paralysis. Previous studies have shown that language can be decoded from the brain activity of paralyzed patients, but only as text, and with limited speed, accuracy, and vocabulary.

Francis Willett and colleagues at Stanford University developed a BCI that records the activity of individual neurons through thin electrode arrays inserted into the brain, then trains an artificial neural network to decode the words the patient is attempting to speak. With the device, a patient with amyotrophic lateral sclerosis could communicate at 62 words per minute, 3.4 times faster than previous comparable devices and closer to the pace of natural conversation (about 160 words per minute). The device achieved a 9.1% error rate on a 50-word vocabulary, 2.7 times lower than the previous state-of-the-art speech BCI, and a 23.8% error rate on a 125,000-word vocabulary, which the authors believe may be the first successful demonstration of large-vocabulary decoding.
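The decoding idea described above can be illustrated with a deliberately simplified toy: map a vector of per-electrode firing-rate features to a speech token. This is only a minimal sketch with simulated data and a nearest-centroid classifier, not the authors' actual model (which uses a recurrent neural network plus a language model); the phoneme labels, channel count, and noise levels here are all illustrative assumptions.

```python
import numpy as np

# Toy sketch (NOT the study's real decoder): classify attempted-speech
# tokens from simulated neural firing-rate features.

rng = np.random.default_rng(0)
phonemes = ["AA", "B", "K", "S"]   # hypothetical token set
n_channels = 96                    # assumed electrode count, Utah-array-like

# Each phoneme gets a characteristic (simulated) firing pattern.
templates = {p: rng.normal(0.0, 1.0, n_channels) for p in phonemes}

def simulate_trial(phoneme, noise=0.5):
    """One trial: the phoneme's template pattern plus measurement noise."""
    return templates[phoneme] + rng.normal(0.0, noise, n_channels)

# "Training": estimate a mean firing pattern (centroid) per phoneme.
centroids = {
    p: np.mean([simulate_trial(p) for _ in range(50)], axis=0)
    for p in phonemes
}

def decode(features):
    """Return the phoneme whose centroid is nearest to the feature vector."""
    return min(centroids, key=lambda p: np.linalg.norm(features - centroids[p]))

# Decode a held-out simulated trial.
trial = simulate_trial("K")
predicted = decode(trial)
print(predicted)
```

In the real system the classifier is far more powerful and works over continuous time, but the core step is the same: turn multichannel neural features into a probability over speech units, then assemble units into words.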

In another study, Edward Chang and colleagues at the University of California, San Francisco, developed a device that captures brain activity differently: its electrodes rest on the surface of the brain and detect the activity of many cells at once. This BCI converts brain signals into three forms of output simultaneously: text, synthesized speech, and control of an avatar. The researchers trained a deep-learning model on neural data recorded from a patient severely paralyzed by a brainstem stroke while the patient attempted to silently speak sentences. Brain signals were translated to text at a median of 78 words per minute (25% error rate). When translating brain signals to speech, the error rate was 28.2% on a 372-word vocabulary, and smaller vocabularies yielded lower error rates. The device can also translate neural activity into facial expressions, presented through an animated avatar. Together, these capabilities give this multimodal BCI the potential to let paralyzed patients communicate more naturally and expressively.
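The multimodal idea, one decoded neural representation driving several outputs at once, can be sketched in miniature: a single shared feature vector feeds two linear "heads", one producing a text token and one producing an avatar articulation value. Everything here (the vocabulary, feature size, and random weights) is a hypothetical stand-in, not the study's architecture.

```python
import numpy as np

# Toy sketch (NOT the study's real model): one shared feature vector
# decoded by two output heads, mimicking multimodal text + avatar output.

rng = np.random.default_rng(1)
n_features = 32
tokens = ["hello", "thanks", "yes", "no"]   # hypothetical vocabulary

features = rng.normal(0.0, 1.0, n_features)               # one decoded time step
W_text = rng.normal(0.0, 1.0, (len(tokens), n_features))  # text head weights
w_jaw = rng.normal(0.0, 1.0, n_features)                  # articulation head weights

# Head 1: pick the most likely word from the text head's scores.
logits = W_text @ features
token = tokens[int(np.argmax(logits))]

# Head 2: squash the articulation score to [0, 1] as a jaw-opening amount.
jaw_open = 1.0 / (1.0 + np.exp(-(w_jaw @ features)))

print(token, round(float(jaw_open), 3))
```

The design point this illustrates is that the expensive part, decoding a useful representation from brain signals, is shared, while each output modality only needs its own lightweight readout.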

In an accompanying News & Views article, Nick Ramsey of University Medical Center Utrecht in the Netherlands and Nathan Crone of Johns Hopkins University wrote that the two BCI devices represent a major advance in neuroscience and neural-engineering research, with great potential to relieve the suffering of people who have lost the ability to speak because of paralyzing neurological injury and disease. They noted that further work is needed to achieve wider adoption. (Source: Feng Weiwei, China Science News)

(Image: patient Pat, a participant in the Stanford University study. Photo: Steve Fisch)

