A Tech Clairvoyant for Paralyzed Voices: A New Prosthesis that Translates Brain Activity to Speech

He had not been able to speak for sixteen years. At the age of twenty, the patient, known as BRAVO-1, experienced a severe stroke resulting in paralysis and anarthria, the loss of the ability to articulate speech. But now, after the implantation of a novel neuroprosthesis, BRAVO-1 can communicate efficiently with the world—using only his brainwaves. Edward Chang, neurosurgeon and Chair of Neurological Surgery at the University of California San Francisco (UCSF), spearheaded this decades-long effort to successfully decode words and sentences from neural activity.

Chang’s journey with the brain began in medical school at UCSF, where he observed brain-mapping surgeries performed while the patients were awake. “It dawned on me that there was a huge, huge need to better understand how the human brain works to treat neurological conditions that we don’t necessarily have cures for yet,” Chang said. “I decided to go into neurosurgery because it not only allowed me to work directly with the brain, but also take care of patients in a way that’s hard to do in other fields.”

In addition to his clinical practice, Chang conducts research as co-director of the Center for Neural Engineering and Prostheses, a collaboration between UCSF and UC Berkeley that develops biomedical technology to help people with neurological disabilities such as paralysis and speech disorders.

Over the last decade, Chang’s lab intently studied the region of the brain that controls the vocal tract. “What we found was a map of the different parts of the vocal tract and kinematic properties that give rise to speech,” Chang said. This neural code for every consonant and vowel is composed of precise, highly coordinated elemental movements, such as the tongue moving forward. Armed with this knowledge, Chang and his research group set out to build a “neuroprosthesis”: a device that records and decodes a participant’s brain activity, then displays their “speech” on screen.

Helping to lead these efforts is post-doctoral researcher David Moses, whose interest in programming, bioengineering, and their intersection with medicine and neuroprosthetics led him to the Chang lab. Thus began the BRAVO (Brain-Computer Interface Restoration of Arm and Voice) clinical trial, in which Chang and his team enrolled their first participant, BRAVO-1, to begin testing the potential speech neuroprosthesis.

The neural implant, an array of 128 electrodes that records neural activity from the surface of the brain, was placed in BRAVO-1 over the brain region that controls the vocal tract. Unlike the telepathic transmission commonly depicted in sci-fi movies, this technology relies on the patient actively trying to speak: the implant detects the signals generated by these attempts, which are then analyzed. “This isn’t like mind reading or any internal monologue… it has to be controlled by volitional attempts to speak,” Moses said. Alongside the development of the hardware, Chang’s research group focused primarily on creating and programming the software behind this new device.

In February of 2019, the team implanted the device over the patient’s sensorimotor cortex, the region that controls speech. Two months later, BRAVO-1 began attending what would become fifty data-recording sessions over a span of eighty-one weeks. “[BRAVO-1] is an incredible person and truly a pioneer. Even though we had a lot of proof of principle, there’s a lot of reasons it might not have worked,” Chang said.

One such concern was that, because the patient had not spoken in over fifteen years, there was no telling how much information about his speech attempts would still be represented in the expected part of his brain. During each session, the participant performed many trials of two different tasks: an isolated-word task and a sentence task. Twenty-two hours of data were collected from over 9,800 trials of the former, in which the participant attempted to say one word from a predefined set of fifty common English vocabulary words. In addition, the participant performed 250 trials of the sentence task, in which he attempted to produce word sequences drawn from the same set. Both tasks helped the researchers train, fine-tune, and evaluate their computational models.

Finally, the conversational variant of the sentence task was implemented, in hopes of demonstrating real-time sentence decoding. The participant was first visually prompted with a question or statement onscreen. He then attempted to respond using words from the same fifty-word vocabulary. The electrode arrays in the implant picked up the resulting brain signals, which were streamed in real time to the computational processing system.

Within this system, a speech detection model first identifies when the participant is attempting to speak. This algorithm detects the onsets and offsets of the participant’s word-production attempts directly from brain activity, limiting the temporal window of relevant signals analyzed in later steps. Next, a word classification algorithm predicts the probability that each of the fifty words has been attempted. However, this is not as simple as matching one signal to one word. “There isn’t one particular part of my brain that only lights up when I’m saying just that word,” Moses said. Instead, when we pronounce a word, the brain relays signals to the vocal tract, which then performs particular articulatory gestures, such as opening the mouth. Thus, the brain activity processed by the neural implant is not tied to individual words or phrases so much as to the pattern of articulations associated with each word.
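
The study’s actual detector and classifier are models trained on the recorded sessions, and none of their code is reproduced here. Purely as an illustration of the two-stage idea described above, the sketch below, written in Python with made-up thresholds, a stand-in vocabulary, and randomly initialized weights, flags a window of simulated electrode data as a speech attempt and then assigns a probability to each candidate word.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["hello", "thirsty", "family", "good", "water"]  # stand-in for the 50-word set
N_ELECTRODES = 128   # matches the implant's electrode count
WINDOW = 200         # samples per analysis window (hypothetical)

def detect_speech_attempt(window, threshold=1.0):
    """Toy onset detector: flag a window whose mean signal power exceeds a
    threshold. The study's real detector is a learned model, not a threshold."""
    return np.mean(window ** 2) > threshold

def classify_word(window, weights, bias):
    """Toy word classifier: a linear readout of per-electrode power,
    followed by a softmax over the vocabulary."""
    features = np.mean(window ** 2, axis=1)     # one power value per electrode
    logits = weights @ features + bias          # one score per vocabulary word
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                      # probability for each word

# Randomly initialized stand-ins for trained parameters.
weights = rng.normal(size=(len(VOCAB), N_ELECTRODES)) * 0.01
bias = np.zeros(len(VOCAB))

# Simulated neural window: (electrodes, samples).
window = rng.normal(scale=1.2, size=(N_ELECTRODES, WINDOW))

if detect_speech_attempt(window):
    probs = classify_word(window, weights, bias)
    print("most likely word:", VOCAB[int(np.argmax(probs))])
```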

A third algorithm yields the probabilities for the next word in a sentence given the previous ones. This language model is based on English linguistic structure; for instance, “I am very good” is more likely to be said than “I am very going.” Finally, the predicted word or sentence is displayed onscreen as feedback, demonstrating the newfound possibility of “speech” for the paralyzed patient.
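
To make the interplay between the word classifier and the language model concrete, here is a minimal, hypothetical sketch: a toy five-word vocabulary, a hand-written bigram model, and Viterbi decoding that combines the two. The real system uses a far larger language model trained on English text, and every number below is invented for illustration. Note how the language model pulls the ambiguous final attempt toward “good” rather than “going,” echoing the example above.

```python
import numpy as np

VOCAB = ["i", "am", "very", "good", "going"]   # tiny stand-in vocabulary

# Hypothetical per-attempt word probabilities from the classifier
# (rows = word attempts in sequence, columns = vocabulary entries).
classifier_probs = np.array([
    [0.70, 0.10, 0.05, 0.05, 0.10],   # attempt 1: probably "i"
    [0.10, 0.60, 0.10, 0.10, 0.10],   # attempt 2: probably "am"
    [0.05, 0.05, 0.70, 0.10, 0.10],   # attempt 3: probably "very"
    [0.05, 0.05, 0.05, 0.40, 0.45],   # attempt 4: "good" vs. "going" is ambiguous
])

# Toy bigram language model: P(next word | previous word), with a flat back-off.
BIGRAM = {("very", "good"): 0.6, ("very", "going"): 0.01}

def lm_prob(prev: str, word: str) -> float:
    return BIGRAM.get((prev, word), 0.1)

# Viterbi decoding: find the word sequence that maximizes
# classifier evidence multiplied by language-model probability.
n_steps, n_words = classifier_probs.shape
score = np.log(classifier_probs[0])                  # best log-score ending in each word
backptr = np.zeros((n_steps, n_words), dtype=int)    # best previous word at each step
for t in range(1, n_steps):
    new_score = np.empty(n_words)
    for w in range(n_words):
        trans = [score[p] + np.log(lm_prob(VOCAB[p], VOCAB[w])) for p in range(n_words)]
        best_prev = int(np.argmax(trans))
        backptr[t, w] = best_prev
        new_score[w] = trans[best_prev] + np.log(classifier_probs[t, w])
    score = new_score

# Trace the highest-scoring path backwards to recover the sentence.
path = [int(np.argmax(score))]
for t in range(n_steps - 1, 0, -1):
    path.append(int(backptr[t, path[-1]]))
print(" ".join(VOCAB[w] for w in reversed(path)))    # prints "i am very good"
```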

Chang’s system comes closer to real-time speech, in both accuracy and pace, than earlier assistive communication approaches, achieving a median decoding rate of 15.2 words per minute and a median word error rate of 25.6 percent. The research team’s next steps include replicating these results in more than one participant: as long as a patient is cognitively intact and can attempt to produce speech, this neuroprosthesis could potentially help people with a variety of injuries or disabilities, interpreting their brain signals and allowing them to communicate once more.
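
For readers unfamiliar with the metric, word error rate is borrowed from automatic speech recognition: the minimum number of word substitutions, insertions, and deletions needed to turn the decoded sentence into the intended one, divided by the number of intended words. A short Python sketch of that calculation, with invented example sentences:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Standard WER: minimum word-level edits (substitutions, insertions,
    deletions) to turn the hypothesis into the reference, divided by the
    number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word out of four gives a 25 percent error rate,
# close to the study's reported 25.6 percent median.
print(word_error_rate("i am very good", "i am very thirsty"))  # 0.25
```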

However, while this device is certainly groundbreaking, the current system still has limitations. “It seems very unlikely that we could just expand this current form to a thousand words,” Moses said. The team intends to keep refining its initial proof of concept, and exploring alternative approaches, to expand the neuroprosthesis’ potential. The ultimate vision is a brain-computer interface that is convenient, portable, and minimally invasive, and that can decode words and sentences quickly and accurately enough for fluent communication with the outside world.

“Now that we even have this initial proof of concept, and this first shred of evidence that this is feasible, it’s really quite motivating to see how far we can go with it,” Moses said. The researchers describe this project as a unique opportunity to tangibly help paralyzed people reconnect and communicate with the outside world, work the team finds deeply rewarding and is committed to pursuing. Ultimately, Chang and his research team hope to restore each patient’s voice, reaffirming both their autonomy and their fundamental connection to humanity.