Brain signals converted into words ‘speak’ for person with paralysis


A man unable to speak after a stroke has produced sentences through a system that reads electrical signals from speech production areas of his brain, researchers report today. The approach has previously been used in nondisabled volunteers to reconstruct spoken or imagined sentences. But this first demonstration in a person who is paralyzed “tackles really the main issue that was left to be tackled—bringing this to the patients that really need it,” says Christian Herff, a computer scientist at Maastricht University who was not involved in the new work.

The participant had a stroke more than a decade ago that left him with anarthria—an inability to control the muscles involved in speech. Because his limbs are also paralyzed, he communicates by selecting letters on a screen using small movements of his head, producing roughly five words per minute. To enable faster, more natural communication, neurosurgeon Edward Chang of the University of California, San Francisco, tested an approach that uses a computational model known as a deep-learning algorithm to interpret patterns of brain activity in the sensorimotor cortex, a brain region involved in producing speech. Until now, the approach had been tested only in volunteers who already had electrodes surgically implanted for nonresearch reasons, such as monitoring epileptic seizures.

In the new study, Chang’s team temporarily removed a portion of the participant’s skull and laid a thin sheet of electrodes smaller than a credit card directly over his sensorimotor cortex. To “train” a computer algorithm to associate brain activity patterns with the onset of speech and with particular words, the team needed reliable information about what the man intended to say and when.

So the researchers repeatedly presented one of 50 words on a screen and asked the man to attempt to say it on cue. Once the algorithm was trained with data from the individual word task, the man tried to read sentences built from the same set of 50 words, such as “Bring my glasses, please.” To improve the algorithm’s guesses, the researchers added a processing component called a natural language model, which uses common word sequences to predict the likely next word in a sentence. With that approach, the system got only about 25% of the words in a sentence wrong, they report today in The New England Journal of Medicine. That’s “pretty impressive,” says Stephanie Riès-Cornou, a neuroscientist at San Diego State University. (The error rate for chance performance would be 92%.)
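The idea of combining a neural decoder with a language model can be illustrated in miniature. The sketch below is not the team's actual method: the vocabulary, probabilities, and bigram language model are all invented for illustration, and the real system used a much larger deep-learning decoder over the full 50-word set. Here, a Viterbi search simply picks the word sequence that best balances (hypothetical) per-word decoder evidence against (hypothetical) word-sequence probabilities—so a noisy decoder reading can be overruled by a likelier phrase.

```python
import math

# Hypothetical per-word probabilities from a neural decoder for a
# three-word attempt, over a toy vocabulary (the study used 50 words).
vocab = ["bring", "my", "glasses", "water"]
decoder_probs = [
    {"bring": 0.6, "my": 0.1, "glasses": 0.1, "water": 0.2},
    {"bring": 0.1, "my": 0.4, "glasses": 0.2, "water": 0.3},
    {"bring": 0.05, "my": 0.05, "glasses": 0.5, "water": 0.4},
]

# Hypothetical bigram language model: P(next word | previous word),
# with "<s>" marking the start of a sentence.
bigram = {
    ("<s>", "bring"): 0.5, ("<s>", "my"): 0.2,
    ("<s>", "glasses"): 0.1, ("<s>", "water"): 0.2,
    ("bring", "my"): 0.6, ("bring", "water"): 0.3,
    ("bring", "bring"): 0.05, ("bring", "glasses"): 0.05,
    ("my", "glasses"): 0.7, ("my", "water"): 0.2,
    ("my", "bring"): 0.05, ("my", "my"): 0.05,
    ("glasses", "bring"): 0.25, ("glasses", "my"): 0.25,
    ("glasses", "glasses"): 0.25, ("glasses", "water"): 0.25,
    ("water", "bring"): 0.25, ("water", "my"): 0.25,
    ("water", "glasses"): 0.25, ("water", "water"): 0.25,
}

def viterbi(decoder_probs, bigram, vocab):
    # Combine decoder evidence with language-model transitions in log space.
    best = {w: math.log(bigram[("<s>", w)]) + math.log(decoder_probs[0][w])
            for w in vocab}
    backpointers = []
    for step in decoder_probs[1:]:
        new_best, ptr = {}, {}
        for w in vocab:
            prev = max(vocab, key=lambda p: best[p] + math.log(bigram[(p, w)]))
            new_best[w] = (best[prev] + math.log(bigram[(prev, w)])
                           + math.log(step[w]))
            ptr[w] = prev
        backpointers.append(ptr)
        best = new_best
    # Trace back the highest-scoring word sequence.
    last = max(vocab, key=best.get)
    seq = [last]
    for ptr in reversed(backpointers):
        seq.append(ptr[seq[-1]])
    return list(reversed(seq))

print(viterbi(decoder_probs, bigram, vocab))
# → ['bring', 'my', 'glasses']
```

Note what the language model buys here: at the second position the decoder alone slightly favors "my" anyway, but at the third it is nearly split between "glasses" and "water," and the strong bigram P("glasses" | "my") settles the choice—the same kind of correction the article describes.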

Because the brain reorganizes over time, it wasn’t clear that speech production areas would give interpretable signals after more than 10 years of anarthria, notes Anne-Lise Giraud, a neuroscientist at the University of Geneva. The signals’ preservation “is surprising,” she says. And Herff says the team made a “gigantic” step by generating sentences as the man was attempting to speak rather than from previously recorded brain data, as most studies have done.

With the new approach, the man could produce sentences at a rate of up to 18 words per minute, Chang says. That’s roughly comparable to the speed achieved with another brain-computer interface, described in Nature in May. That system decoded individual letters from activity in a brain area responsible for planning hand movements while a person who was paralyzed imagined handwriting. These speeds are still far from the 120 to 180 words per minute typical of conversational English, Riès-Cornou notes, but they far exceed what the participant can achieve with his head-controlled device.

The system isn’t ready for use in everyday life, Chang notes. Future improvements will include expanding its repertoire of words and making it wireless, so the user isn’t tethered to a computer roughly the size of a minifridge.

14 July 2021