Researchers at the University of Texas have developed a new AI system that can read a person's mind. Remarkably, it does so without using a brain implant. The findings are a significant advance that could help paralyzed patients communicate again.
The new speech-decoding system differs from previous ones because it doesn't require invasive surgery. It deciphers thoughts from non-invasive brain scans with the help of a semantic decoder built on technology similar to ChatGPT.
Reading Minds With AI
According to Science News, scientists have already developed brain implants that read brain activity and help people who have lost the power of speech to communicate again. But the new system is revolutionary, and slightly sinister, because it can read minds without surgery.
The team at the University of Texas uses functional magnetic resonance imaging (fMRI) to scan the brain. They feed the data into a semantic decoder, which translates the scans into a continuous stream of text.
According to New Atlas, the semantic decoder uses an encoding model similar to OpenAI's ChatGPT and Google's Bard and can predict how a person's brain responds to language.
fMRI produces precise images of brain activity, but the process is slow compared with real-time speech. Because of this time delay, a single fMRI image reflects the response to as many as 20 words, which makes word-by-word decoding difficult; the new AI system can nevertheless translate that activity into coherent text.
In the study, researchers trained the decoder on three volunteers while they listened to 16 hours of spoken-word stories. The system tracked their brain activity and learned to predict how each participant's brain would respond to word sequences.
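To make the mechanism a little more concrete, below is a deliberately toy sketch of this kind of candidate-scoring approach: a stand-in "language model" proposes word sequences, a simple encoding model predicts the brain activity each sequence should evoke, and the sequence whose prediction best matches the observed scan is kept. Everything in it, from the tiny vocabulary and random embeddings to names like predict_response and decode, is hypothetical and for illustration only; it is not the research team's actual code, model, or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary, embeddings, and a linear "encoding model" standing in
# for the real system's language model and fMRI encoding model.
VOCAB = ["i", "don't", "have", "my", "driver's", "license", "yet",
         "she", "has", "not", "started", "to", "learn", "drive"]
VOXELS = 64   # stand-in for the number of fMRI voxels
EMBED = 16    # stand-in word-embedding size

word_vecs = {w: rng.normal(size=EMBED) for w in VOCAB}
encoding_weights = rng.normal(size=(EMBED, VOXELS))

def predict_response(words):
    """Predicted brain-activity pattern for a word sequence (mean embedding -> voxels)."""
    feats = np.mean([word_vecs[w] for w in words], axis=0)
    return feats @ encoding_weights

def score(words, observed):
    """Cosine similarity between predicted and observed brain activity."""
    pred = predict_response(words)
    return float(np.dot(pred, observed) /
                 (np.linalg.norm(pred) * np.linalg.norm(observed) + 1e-9))

def propose_continuations(prefix, k=5):
    """Stand-in for a GPT-style language model suggesting possible next words."""
    return rng.choice(VOCAB, size=k, replace=False).tolist()

def decode(observed, length=6, beam_width=3):
    """Beam search: keep the word sequences whose predicted response
    best matches the observed brain-activity pattern."""
    beams = [([w], score([w], observed)) for w in VOCAB]
    beams = sorted(beams, key=lambda b: b[1], reverse=True)[:beam_width]
    for _ in range(length - 1):
        candidates = []
        for words, _ in beams:
            for nxt in propose_continuations(words):
                seq = words + [nxt]
                candidates.append((seq, score(seq, observed)))
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
    return " ".join(beams[0][0])

# Simulate "hearing" a sentence and decoding text back from the noisy scan.
heard = ["i", "don't", "have", "my", "driver's", "license", "yet"]
observed_scan = predict_response(heard) + rng.normal(scale=0.1, size=VOXELS)
print(decode(observed_scan))
```

Because the scoring compares whole candidate sequences against the scan rather than forcing a word-by-word match, a sketch like this also hints at why such systems recover the gist of a sentence rather than its exact wording.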
How Effective Is The New Technology?

Previous non-invasive methods have only been able to produce single words or short phrases because of the time delay in imaging. The new system doesn't aim for word-for-word accuracy; instead, it picks up on the gist of what a person hears or thinks, which lets it decode continuous language.
According to The Guardian, instead of focusing on single words, the system maps out how the brain responds to groups of words. For example, the decoder translated "I don't have my driver's license yet" as "She has not even started to learn to drive yet".
The volunteers also watched short, silent videos, and the decoder correctly described some of the scenes. The system was close to or entirely accurate around 50 percent of the time, but it struggled with pronouns and sometimes got things completely wrong.
Furthermore, the decoder didn't work when participants resisted the testing, for example by deliberately thinking about other things. It was also ineffective at reading the minds of people who hadn't trained it on their own brain activity.
Should We Be Worried About Mind Reading AI?

There's no doubt that AI can benefit human health. The new system could enable many people to communicate again and help us discover more about how the brain works. But the technology certainly raises moral questions.
Jerry Tang, a lead author of the study, says: "We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that. We want to ensure people only use these technologies when they want to and that it helps them."
The system currently depends on fMRI, so it will only work in a laboratory setting, but scientists hope to use the technology with portable brain imaging systems in the future.
There are also serious concerns about the risks involved with AI technology. As reported by Time magazine earlier this year, leaders in the technology industry, including Elon Musk, signed an open letter calling for a pause in AI development until it can be managed and controlled appropriately.