A brain scan can translate a person’s thoughts into words


In a separate experiment designed to test whether the decoder could recover the content of what a participant was watching, the researchers showed participants short Pixar videos with no dialogue and recorded their brain responses. It could.

Romain Brette, a theoretical neuroscientist at the Vision Institute in Paris who was not involved in the study, is not entirely convinced of the technology’s effectiveness at this stage. “The way the algorithm works is basically that an AI model produces sentences from fuzzy information about the semantic field of the sentences, estimated from the brain scans,” he says. “At a general level, there may be some interesting use cases, such as guessing what you have been thinking about. But I’m a little skeptical that we’re really getting to the point of mind-reading.”
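The mechanism Brette describes can be illustrated with a toy sketch (all vectors and sentences below are made up for illustration, not from the study): a decoder estimates a rough semantic feature vector from the scan, and candidate sentences produced by a language model are ranked by how similar their own semantic embeddings are to that vector.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(decoded_semantics, candidates):
    """Rank candidate sentences by similarity of their (toy) embeddings
    to the semantic vector estimated from the brain scan."""
    scored = [(cosine(decoded_semantics, emb), text) for text, emb in candidates]
    return [text for _, text in sorted(scored, reverse=True)]

# Toy 3-d "semantic" vectors; a real system would use learned embeddings.
decoded = np.array([0.9, 0.1, 0.0])  # pretend this was estimated from fMRI
candidates = [
    ("the dog ran across the yard", np.array([0.8, 0.2, 0.1])),
    ("stock prices fell sharply",   np.array([0.0, 0.9, 0.4])),
]
print(rank_candidates(decoded, candidates)[0])  # the most semantically similar sentence
```

The point of the sketch is Brette’s caveat: the scan constrains only the rough semantic neighborhood, and a language model fills in the actual wording.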

It may not work well yet, but the experiment raises ethical questions about future uses of brain decoders for monitoring and diagnosis. With this in mind, the team tested whether a decoder could be trained and run without a person’s cooperation: they tried to decode perceived speech from each participant using decoder models trained on other people’s data. These cross-subject decoders performed barely above chance.
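The cross-subject test can be sketched with synthetic data (everything below is a hypothetical toy, not the study’s actual model): if each subject has their own idiosyncratic mapping from meaning to brain activity, a linear decoder fit on subject A identifies A’s trials well but transfers to subject B at roughly chance level.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_decoder(X, Y):
    # Least-squares linear map from brain features X to semantic features Y.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def score(W, X, Y):
    # Identification accuracy: fraction of trials whose predicted semantics
    # are closer to the true target than to any other trial's target.
    pred = X @ W
    d = np.linalg.norm(pred[:, None, :] - Y[None, :, :], axis=2)
    return float(np.mean(d.argmin(axis=1) == np.arange(len(Y))))

# Toy data: each subject has a private mapping from semantics to "voxels",
# so a decoder fit on subject A should transfer poorly to subject B.
n, sem_dim, vox_dim = 200, 5, 40
S = rng.normal(size=(n, sem_dim))            # shared semantic features
A_map = rng.normal(size=(sem_dim, vox_dim))  # subject A's brain mapping
B_map = rng.normal(size=(sem_dim, vox_dim))  # subject B's mapping differs
X_A = S @ A_map + 0.1 * rng.normal(size=(n, vox_dim))
X_B = S @ B_map + 0.1 * rng.normal(size=(n, vox_dim))

W = fit_decoder(X_A, S)  # trained with subject A's cooperation
print("within-subject:", score(W, X_A, S))  # high
print("cross-subject: ", score(W, X_B, S))  # near chance (1/n)
```

Under these assumptions, the within-subject score is high while the cross-subject score hovers near chance, mirroring the article’s point that a decoder trained on one brain does not carry over to another.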

This suggests that a decoder cannot be applied to a person’s mental activity unless that person is willing and first helps train it.

Jerry Tang, a PhD student at the university who worked on the project, said: “We think that brain privacy is very important and that no one’s brain should be decoded without their cooperation. We believe it is important to continue to explore the privacy implications of brain decoding and to develop policies that protect each person’s mental privacy.”


