AI used to translate brain signals into speech

Scientists have developed a device based on artificial intelligence (AI) which can transform brain signals into speech. In a paper published in Nature, an international journal of science, scientists at the University of California, San Francisco, explain how they developed a speech-generating device able to facilitate communication for people who have lost their ability to speak. In developing this device, the scientists trained a deep-learning algorithm on a wide variety of data: brain activity recorded as individuals read hundreds of sentences aloud, as well as data determining how movements of the tongue, lips, jaw and larynx created sound. The algorithm was then incorporated into a decoder which transforms brain signals into estimated movements of the vocal tract, and turns these movements into synthetic speech. In other words, the device creates speech by mapping brain activity to movements of the vocal tract and translating those movements into sound. While this development is encouraging, commentators say it is not clear whether the device would work with ‘words that people only think’.
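The two-stage pipeline the article describes (brain activity → estimated vocal-tract movements → sound) can be sketched in code. This is only a minimal illustration, not the researchers' method: the actual system used deep neural networks trained on intracranial recordings, whereas here simple linear maps stand in for each stage, and every dimension, weight and name is a made-up placeholder.

```python
import numpy as np

# Hypothetical feature sizes, chosen only for illustration.
N_NEURAL = 64      # neural features recorded per time step
N_KINEMATIC = 12   # vocal-tract parameters (tongue, lips, jaw, larynx)
N_ACOUSTIC = 32    # acoustic features a vocoder could turn into audio

rng = np.random.default_rng(0)

# Stage 1: brain activity -> estimated vocal-tract movements.
W_brain_to_tract = rng.normal(size=(N_NEURAL, N_KINEMATIC))
# Stage 2: vocal-tract movements -> acoustic features.
W_tract_to_sound = rng.normal(size=(N_KINEMATIC, N_ACOUSTIC))

def decode(neural_activity: np.ndarray) -> np.ndarray:
    """Map a (time, N_NEURAL) neural recording to acoustic features."""
    vocal_tract = neural_activity @ W_brain_to_tract   # estimated movements
    acoustic = vocal_tract @ W_tract_to_sound          # features for synthesis
    return acoustic

# A short recording: 100 time steps of neural activity.
signals = rng.normal(size=(100, N_NEURAL))
features = decode(signals)
print(features.shape)  # (100, 32)
```

The point of the intermediate stage is the one the article makes: rather than decoding sound directly from the brain, the system first recovers the movements of the vocal tract and only then converts those movements into speech.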