Researchers explore brain signals to restore speech for paralysed patients
A new brain-computer system can read inner speech in paralysis patients, potentially replacing slow and tiring attempts at spoken communication.

Researchers have developed a brain-computer interface (BCI) that can decode ‘inner speech’ in patients with severe paralysis, potentially enabling faster and more comfortable communication.
The system, tested by a team led by Stanford University’s Frank Willett, records brain activity from the motor cortex using microelectrode arrays smaller than a baby aspirin, translating neural patterns into words via machine learning.
Unlike earlier BCIs that rely on attempted speech, which can be slow or tiring, the new approach focuses on silent imagined speech. Tests with four participants showed that inner speech produces clear, consistent brain signals, though at a smaller scale than attempted speech.
While accuracy is lower, the findings suggest that future systems could restore rapid communication through thought alone.
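Conceptually, the decoding step maps a pattern of neural activity to the word most likely to have produced it. The toy sketch below illustrates that idea with a nearest-template classifier over made-up feature vectors; the actual Stanford system uses far more sophisticated machine-learning models, and every name and number here is illustrative rather than drawn from the research.

```python
import numpy as np

# Toy illustration only: pretend each word evokes a characteristic
# 16-dimensional firing-rate pattern, and decode an observation by
# finding the closest word "template". Real BCIs use recurrent neural
# networks and language models, not nearest-template matching.

rng = np.random.default_rng(0)
words = ["yes", "no", "water"]
templates = {w: rng.normal(size=16) for w in words}  # synthetic patterns

def decode(features: np.ndarray) -> str:
    """Return the word whose template is closest to the observed features."""
    return min(words, key=lambda w: np.linalg.norm(features - templates[w]))

# Even a noisy observation of "water" lands nearest its own template:
observation = templates["water"] + 0.1 * rng.normal(size=16)
print(decode(observation))
```

The sketch also hints at why inner speech is harder to decode than attempted speech: when the signal is weaker, observations sit closer to the boundaries between word templates, so more classifications go wrong.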
To address privacy concerns, the researchers developed methods that prevent unintended decoding. Current BCIs can be trained to ignore inner speech, and a ‘password’ approach proposed for next-generation devices means decoding begins only after the user imagines a specific phrase.
Such safeguards are designed to avoid accidental capture of thoughts the user never intended to express.
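The password safeguard can be pictured as a simple gate over the decoder's word stream: nothing is emitted until the unlock phrase is imagined. The sketch below is a hypothetical illustration of that gating logic; the wake phrase, function names, and the idea of treating the decoder output as a plain word stream are all assumptions for the sake of the example, not details from the study.

```python
from typing import Iterable, List

# Illustrative wake phrase; the real devices would use a phrase chosen
# by the user, and would gate at the neural-decoding level, not on text.
WAKE_PHRASE = ["open", "sesame"]

def gated_decode(decoded_words: Iterable[str]) -> List[str]:
    """Emit decoded words only after the wake phrase has been imagined.

    `decoded_words` stands in for the stream of words a neural decoder
    produces; gating is a small state machine over that stream.
    """
    output: List[str] = []
    recent: List[str] = []
    unlocked = False
    for word in decoded_words:
        if unlocked:
            output.append(word)
            continue
        # Track only the most recent words, enough to match the phrase.
        recent = (recent + [word])[-len(WAKE_PHRASE):]
        if recent == WAKE_PHRASE:
            unlocked = True  # words imagined before this point stay private
    return output

# Inner speech before the wake phrase is never emitted:
stream = ["private", "thought", "open", "sesame", "hello", "world"]
print(gated_decode(stream))  # → ['hello', 'world']
```

The design choice the sketch captures is fail-closed behaviour: if the wake phrase never occurs, the gate emits nothing, which is exactly the property the safeguard is meant to guarantee.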
The technology remains in early development and is subject to strict regulation.
Researchers are now exploring improved, wireless hardware and additional brain regions linked to language and hearing, aiming to enhance decoding accuracy and make the systems more practical in everyday life.