Brain-to-Speech Interface

In an unprecedented advancement for neuroscience, engineers have created a computer interface that can monitor and decode brain activity and reconstruct the signals directly into intelligible speech. The development could help people who have lost the ability to speak – such as stroke survivors or those living with amyotrophic lateral sclerosis (ALS) – communicate once again with the outside world.

For decades, researchers have worked on ways to decode the patterns of brain activity that appear when people speak – or think about speaking. Similar patterns appear when we listen to others speak – or imagine listening. So the first step was to monitor brain activity in patients undergoing brain surgery while they listened to spoken sentences and the digits zero through nine. These signals were used to “train” a vocoder – a computer algorithm that can synthesize speech after learning from recordings of people talking, similar to the technology behind the Amazon Echo and Apple Siri. The voice output was tested by asking listeners to hear recordings of synthesized digit sequences and repeat what they heard. Listeners understood the digits about 75 percent of the time, well above the intelligibility rates achieved by previous reconstruction methods.
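To make the pipeline concrete, here is a minimal sketch in Python of the decode-then-synthesize idea: a model is fit to map frames of recorded neural activity onto the acoustic features a vocoder needs. Everything below is illustrative assumption – the electrode count, feature sizes, and synthetic data are invented, and a simple linear decoder stands in for the more sophisticated system used in the actual study.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the study's data: each row is one time frame
# of neural recordings (here, 128 electrodes), and the targets are the
# acoustic features a vocoder would need to synthesize that frame of speech.
n_frames, n_electrodes, n_acoustic = 5000, 128, 32
neural = rng.normal(size=(n_frames, n_electrodes))
mixing = rng.normal(size=(n_electrodes, n_acoustic))
acoustic = neural @ mixing + 0.5 * rng.normal(size=(n_frames, n_acoustic))

X_train, X_test, y_train, y_test = train_test_split(
    neural, acoustic, test_size=0.2, random_state=0)

# A linear decoder standing in for the deep network used in the real work:
# it learns a mapping from brain activity to vocoder input features.
decoder = Ridge(alpha=1.0).fit(X_train, y_train)
predicted_acoustic = decoder.predict(X_test)

# In the real system, predicted_acoustic would drive a speech vocoder;
# here we simply report how well the decoder recovers the acoustic features.
corr = np.mean([
    np.corrcoef(predicted_acoustic[:, k], y_test[:, k])[0, 1]
    for k in range(n_acoustic)
])
print(f"mean per-feature correlation: {corr:.2f}")
```

Note that the 75 percent figure reported in the study comes from human listeners repeating back the synthesized digits, not from a feature-correlation score like the one printed here.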

The next step will be to test the system on more complicated words and sentences, and on brain signals generated when a person speaks or imagines speaking. Ultimately, the system could become part of an implantable device that translates a wearer’s thoughts directly into words.
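If such a decoder were embedded in an implantable device, the surrounding software would amount to a streaming loop: buffer a short window of neural samples, decode it continuously, and emit any recognized words. The sketch below is purely hypothetical – `read_neural_frame` and `decode_to_words` are placeholder names, not part of any published system.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

def read_neural_frame():
    """Placeholder for one frame of electrode data from the implant."""
    return rng.normal(size=128)

def decode_to_words(window):
    """Placeholder decoder: maps a window of frames to zero or more words."""
    return ["yes"] if window.mean() > 0.0 else []

# The device-side loop: keep a sliding window of recent neural activity,
# decode it continuously, and speak or display whatever words come out.
window = deque(maxlen=50)            # e.g. half a second at 100 frames/s
for _ in range(500):                 # stands in for "run forever"
    window.append(read_neural_frame())
    if len(window) == window.maxlen:
        for word in decode_to_words(np.stack(window)):
            print(word)
```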

For information: Nima Mesgarani, Columbia University, Department of Electrical Engineering, Jerome L. Greene Science Center, 3229 Broadway, New York, NY 10027; phone: 212-854-8013; email: nima@ee.columbia.edu; Web site: http://naplab.ee.columbia.edu/ or https://www.columbia.edu/