This AI model can turn your thoughts into text, here’s how – Times of India
What is the AI system?
The system, called a semantic decoder, was developed by researchers at The University of Texas at Austin. The researchers said that the system relies in part on a transformer model similar to the ones that power OpenAI’s ChatGPT and Google’s Bard chatbot.
How does this system work?
Unlike other language decoding systems in development, the semantic decoder does not require subjects to have surgical implants, making the process noninvasive. Furthermore, participants do not need to use only words from a prescribed list.
Brain activity is measured using an fMRI scanner after extensive training of the decoder, during which the individual listens to hours of podcasts in the scanner.
Later, when the participant listens to a new story or imagines telling one, the machine can generate corresponding text from brain activity alone, researchers said in the paper published in the journal Nature Neuroscience.
The trained system does not produce a word-for-word transcript but delivers text that closely or precisely matches the intended meaning of the participant’s original words.
For example, when a participant heard the words “I don’t have my driver’s licence yet” during an experiment, the thoughts were translated to, “She has not even started to learn to drive yet.”
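That gap between the heard words and the decoded text reflects how such a decoder operates: rather than reading out words directly, it can be thought of as a guess-and-check loop in which a language model proposes candidate word sequences, an encoding model predicts the brain activity each candidate would evoke, and the decoder keeps whichever candidates best match the measured fMRI signal. The toy sketch below illustrates only that general idea; `propose_next_words`, `predict_brain_response`, and the feature values are made-up stand-ins, not the study's actual models.

```python
# Toy stand-in for the "brain feature" a word evokes (illustrative values only).
FEATURES = {"drive": 0.9, "cook": 0.2, "sing": 0.5}

def propose_next_words(prefix):
    # Stand-in for a language model suggesting plausible continuations.
    return ["drive", "cook", "sing"]

def predict_brain_response(words):
    # Stand-in for an encoding model: predicts the activity a sequence evokes.
    return [FEATURES[w] for w in words]

def similarity(predicted, measured):
    # Higher is better: negative squared error between responses.
    return -sum((p - m) ** 2 for p, m in zip(predicted, measured))

def decode(measured_response, beam_width=2, length=3):
    """Keep the candidate word sequences whose predicted brain
    responses best match the measured signal (a small beam search)."""
    beams = [[]]
    for _ in range(length):
        candidates = [b + [w] for b in beams for w in propose_next_words(b)]
        candidates.sort(
            key=lambda c: similarity(predict_brain_response(c),
                                     measured_response[:len(c)]),
            reverse=True,
        )
        beams = candidates[:beam_width]
    return beams[0]
```

Because the decoder selects whichever candidate sequence best explains the measured activity, its output matches the meaning of the stimulus rather than reproducing it word for word, which is consistent with the paraphrase-like results the researchers reported.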
How will it help humans?
The system may help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, to communicate intelligibly again.
It should be noted that the system is not yet practical for use outside the laboratory because it requires time on an fMRI machine. But this work could transfer to other, more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).
“fNIRS measures where there’s more or less blood flow in the brain at different points in time, which, it turns out, is exactly the same kind of signal that fMRI is measuring,” one of the researchers said.
Risks with this system
The researchers have also addressed questions about the potential misuse of the technology. According to the paper, decoding worked only with cooperative participants who had willingly taken part in training the decoder.
“Results for individuals on whom the decoder had not been trained were unintelligible, and if participants on whom the decoder had been trained later put up resistance — for example, by thinking other thoughts — results were similarly unusable,” according to a press release.