Brain Signals Turned Into Speech


An example of the electrode array used to record brain activity in the study. The findings hold promise for people who can't speak.

Scientists have harnessed artificial intelligence to translate brain signals into speech, in a step toward brain implants that one day could let people with impaired abilities speak their minds, according to a new study.
 In findings published Wednesday in the journal Nature, a research team at the University of California, San Francisco, introduced an experimental brain decoder that combined direct recording of signals from the brains of research subjects with artificial intelligence, machine learning and a speech synthesizer.
 When perfected, the system could give people who can't speak, such as stroke patients, cancer victims, and those suffering from amyotrophic lateral sclerosis — or Lou Gehrig's disease — the ability to conduct conversations at a natural pace, the researchers said.
 “Our plan was to make essentially an artificial vocal tract — a computer one — so that paralyzed people could use their brains to animate it to get speech out,” said UCSF neurosurgery researcher Gopala K. Anumanchipalli, lead author of the study.
 It may be a decade or more before any workable neural speech system based on this research is available for clinical use, said Boston University neuroscientist Frank H. Guenther, who has tested an experimental wireless brain implant to aid speech synthesis. But “for these people, this system could be life-changing,” said Dr. Guenther, who wasn't involved in the project.
To translate brain signals to speech, the UCSF scientists used the motor-nerve impulses the brain generates to control the muscles that articulate our thoughts once we've decided to express them aloud.
 “We are tapping into the parts of the brain that control movement,” said UCSF neurosurgeon Edward Chang, the senior scientist on the study. “We are trying to decipher movement to produce sound.” As their first step, the scientists placed arrays of electrodes across the brains of volunteers who can speak normally.
The five men and women, all suffering from severe epilepsy, had undergone neurosurgery to expose the surface of their brains as part of a procedure to map and then surgically remove the source of their crippling seizures. The speech experiments took place while the patients waited for spontaneous seizures that would reveal which brain tissue was triggering them and should be removed.
 As the patients spoke dozens of test sentences aloud, the scientists recorded the neural impulses from the brain's motor cortex to the 100 or so muscles in the lips, jaw, tongue and throat that shape breath into words. In essence, the researchers recorded a kind of musical score of muscle movements — a score generated in the brain to produce each sentence, like the fingering of notes on a wind instrument.
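For readers curious what such a recording looks like computationally, the sketch below shows one common way to turn a multichannel cortical recording into the kind of time-by-electrode “score” described above. It is a minimal illustration only: the channel count, sampling rate, and frequency band are assumptions, not the study's published parameters.

```python
# Hypothetical sketch: extracting a time-by-electrode activity "score"
# from a multichannel cortical recording. Channel count, sampling rate,
# and the high-gamma band are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 1000          # assumed sampling rate, in Hz
N_CHANNELS = 256   # assumed number of electrodes on the array
DURATION_S = 4     # a few seconds, roughly one spoken sentence

# Stand-in for a real recording: (samples, channels) of cortical voltage.
rng = np.random.default_rng(0)
ecog = rng.standard_normal((FS * DURATION_S, N_CHANNELS))

# Band-pass filter to the high-gamma range often used in speech-decoding
# research (an assumption here, not a detail confirmed by the article).
b, a = butter(4, [70 / (FS / 2), 150 / (FS / 2)], btype="band")
filtered = filtfilt(b, a, ecog, axis=0)

# The analytic amplitude (envelope) gives a smooth activity trace per
# electrode: one "staff line" in the musical-score analogy.
envelope = np.abs(hilbert(filtered, axis=0))

print(envelope.shape)  # (4000, 256): time steps x electrodes
```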

The system could one day help those who can't speak to talk at a normal pace.  

In the second step, they turned those brain signals into audible speech with the help of an artificial intelligence system that can match the signals to a database of muscle movements — and then match the resulting muscle configuration to the appropriate sound.
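The article describes that two-stage design only at a high level. As a rough illustration, the idea of mapping neural activity first to vocal-tract movements and then to sound features might look like the following sketch. The class names, layer sizes, and feature counts are assumptions for illustration, not the study's published architecture.

```python
# Illustrative two-stage decoder: brain activity -> articulator
# movements -> acoustic features for a speech synthesizer. All names
# and sizes here are assumptions, not the study's published model.
import torch
import torch.nn as nn

class ArticulationDecoder(nn.Module):
    """Stage 1: map cortical activity to vocal-tract movements."""
    def __init__(self, n_electrodes=256, n_articulators=33):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, 128, batch_first=True,
                           bidirectional=True)
        self.out = nn.Linear(2 * 128, n_articulators)

    def forward(self, neural):      # (batch, time, electrodes)
        hidden, _ = self.rnn(neural)
        return self.out(hidden)     # (batch, time, articulators)

class AcousticDecoder(nn.Module):
    """Stage 2: map vocal-tract movements to sound features that a
    vocoder could turn into audible speech."""
    def __init__(self, n_articulators=33, n_acoustic=32):
        super().__init__()
        self.rnn = nn.LSTM(n_articulators, 128, batch_first=True,
                           bidirectional=True)
        self.out = nn.Linear(2 * 128, n_acoustic)

    def forward(self, movements):
        hidden, _ = self.rnn(movements)
        return self.out(hidden)

# One decoded utterance: 400 time steps of 256-channel neural features.
neural = torch.randn(1, 400, 256)
movements = ArticulationDecoder()(neural)
acoustic = AcousticDecoder()(movements)
print(movements.shape, acoustic.shape)  # intermediate and final features
```

Splitting the problem this way mirrors the researchers' reasoning: the motor cortex encodes movements, not sounds, so decoding movements first gives the system a more natural intermediate target.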
 The resulting speech reproduced the sentences with about 70% accuracy, the researchers wrote, at about 150 words a minute, which is the speed of normal speech. “It was able to work reasonably well,” said study coauthor Josh Chartier. “We found that in many cases the gist of the sentence was understood.”
 Columbia University neuroscientist Nima Mesgarani, who last month demonstrated a different computer prototype that turns neural recordings into speech, called the advance announced Wednesday “a significant improvement.” He wasn't part of the research team.
 Translating the signals took over a year, and the researchers don't know how quickly the system could work in a normal interactive conversation. Nor is there a way to collect the neural signals without major surgery, Dr. Chang said.
It hasn't yet been tested on patients whose speech muscles are paralyzed. The scientists also asked the epilepsy patients to just “think” some of the test sentences without saying them aloud. They couldn't detect any difference between those brain signals and the ones recorded when the sentences were spoken.
“There is a very fundamental question of whether or not the same algorithms will work in the population who cannot speak,” Dr. Chang said. “We want to make the technology better, more natural, and the speech more intelligible. There is a lot of engineering going on.”
Dr. Kristina Simonyan of Harvard Medical School, who studies speech disorders and the neural mechanisms of human speech and wasn't involved in the project, called the results encouraging. “This is not the final step, but there is a hope on the horizon,” she said.

Virtual Voice

Researchers were able to synthesize speech by decoding brain signals from spoken sentences.


1 Researchers put an array of electrodes on the exposed brains of volunteers while they spoke dozens of sentences.

2 They recorded brain signals that control the movements of the larynx, tongue and other vocal muscles that make sounds for speech.

3 Using a neural network, they translated the signals into muscle movements and then into speech sounds.

4 The sound wave of the re-created speech closely resembled that of the naturally spoken sentences.

Illustrations: Nature; 4vector (brain)
Sources: Nature; University of California, San Francisco


BY ROBERT LEE HOTZ
