AI’s next terrifying advancement is reading your mind

This is actually happening: AI is starting to read minds.

Researchers at the University of Texas at Austin have successfully created an artificial intelligence system that “can translate a person’s brain activity” into plain, readable language.

The high-tech system, known as a semantic decoder, does so using technology similar to that behind ChatGPT and Google’s Bard — no surgery or special implant is required.

“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” professor and research lead Alex Huth said.

“We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”


The breakthrough pairs an fMRI scanner with the decoder to transcribe a person’s thoughts, though the process is cumbersome.

A test subject is instructed to listen to hours of podcasts inside the scanner before having their own “thoughts decoded” by silently telling or imagining a story, according to the university. They are also shown silent video clips.

This is what “allows the machine to generate corresponding text from brain activity alone.”

Admittedly, the AI can’t yet produce a word-for-word translation, but “instead, researchers designed it to capture the gist of what is being said or thought, albeit imperfectly.”


Transcriptions from a person’s mind to actual words were accurate nearly half the time.
University of Texas at Austin

Still, nearly half the time, the decoded text closely or exactly matches a person’s thoughts.

“For example, in experiments, a participant listening to a speaker say, ‘I don’t have my driver’s license yet’ had their thoughts translated as, ‘She has not even started to learn to drive yet,’ ” the release added.

Although the researchers intend for the technology to be used by mentally conscious individuals who can’t speak — like stroke victims — they are well aware of what could happen if this AI were put into the wrong hands.

“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” fellow lead researcher Jerry Tang said. “We want to make sure people only use these types of technologies when they want to and that it helps them.”


Researchers Alex Huth and Jerry Tang made breakthroughs on artificial intelligence scanning brains.
Nolan Zunk / University of Texas at Austin

At the moment, nefarious uses of the decoder are extremely limited as “a person needs to spend up to 15 hours lying in an MRI scanner, being perfectly still, and paying good attention to stories that they’re listening to before this really works well on them,” Huth said.

The research team also shared ways test subjects were able to “easily and completely thwart” the decoding — thinking of animals proved a major block.

“I think right now, while the technology is in such an early state, it’s important to be proactive by enacting policies that protect people and their privacy,” said Tang. “Regulating what these devices can be used for is also very important.”