A new brain interface decodes inner monologues with surprising accuracy

Scientists can now decipher the brain activity behind the silent inner monologue in people’s heads with up to 74% accuracy, according to a new study.
In new research published today in Cell, scientists from Stanford University decoded words imagined by four participants with severe paralysis caused by ALS or a brainstem stroke. Besides being absolutely wild, the results could help people who cannot speak communicate more easily using brain-computer interfaces (BCIs), according to the researchers.
“This is the first time we’ve managed to understand what brain activity looks like when you just think about talking,” said study author Erin Kunz, a graduate student in electrical engineering at Stanford University, in a statement. “For people with severe speech and motor impairments, BCIs capable of decoding inner speech could help them communicate much more easily and more naturally.”
Scientists have previously managed to decode attempted speech using BCIs. When people physically try to speak out loud by engaging their speech muscles, these technologies can interpret the resulting brain activity and type out what they are trying to say. But while effective, current BCI-assisted communication methods can still be exhausting for people with limited muscle control. The new study is the first to directly tackle inner speech.
To do this, the researchers used microelectrodes implanted in the motor cortex — the brain region responsible for controlling voluntary movements, including speech — to record neural activity in the four participants.
The researchers found that attempted and imagined speech activated similar but not identical patterns of brain activity. They trained an AI model to interpret these imagined speech signals, decoding sentences from a vocabulary of up to 125,000 words with up to 74% accuracy. In some cases, the system even picked up unprompted inner thoughts, such as numbers participants counted silently during a task.
For people who want to use the new technology but don’t always want their inner reflections on full display, the team added a password-controlled mechanism that keeps the BCI from decoding inner speech unless the participants think of a password (“Chitty Chitty Bang Bang,” in this case). The system recognized the password with more than 98% accuracy.
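Conceptually, the password works like a lock on the decoder’s output: imagined speech is only translated into text after the system has recognized the unlock phrase. The study’s actual implementation is not public; the sketch below is a purely hypothetical illustration of that gating logic, with made-up names and a plain string comparison standing in for the neural decoder and password recognizer.

```python
# Hypothetical sketch of password-gated decoding (not the study's code).
# A stand-in decoder maps "neural signals" to text; real output is
# suppressed until the imagined password phrase is recognized.

PASSWORD = "chitty chitty bang bang"  # the study's example unlock phrase


def decode_inner_speech(neural_signal: dict) -> str:
    # Placeholder for the AI model that maps brain activity to text.
    return neural_signal["decoded_text"]


class GatedDecoder:
    def __init__(self, password: str = PASSWORD):
        self.password = password
        self.unlocked = False

    def process(self, neural_signal: dict):
        text = decode_inner_speech(neural_signal)
        if not self.unlocked:
            # While locked, only check for the password; nothing is output.
            if text.lower() == self.password:
                self.unlocked = True
            return None
        return text
```

In this toy version, calls to `process` return `None` until the password phrase arrives, after which subsequent imagined sentences are passed through as text.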
While 74% accuracy is high, the current technology still makes a substantial number of errors. But the researchers hope that more sensitive recording devices and better algorithms could soon boost its performance further.
“The future of BCIs is bright,” said Frank Willett, assistant professor in Stanford’s Department of Neurosurgery and senior author of the study. “This work gives real hope that speech BCIs can one day restore communication that is as fluent, natural, and comfortable as conversational speech.”