Meta Is Building AI That Reads Brainwaves. The Present Reality Is a Mess

Meta, the parent company of Facebook, has researchers working on ways to read people’s thoughts. On August 31, the company announced that its artificial intelligence lab had developed an AI that can “hear” what a person is hearing by analyzing their brainwaves.

While this study is still in its infancy, it has the potential to pave the way for future technologies that could aid those with traumatic brain injuries who are unable to use traditional modes of communication like speech or typing.

Most crucially, the scientists are attempting to record this brain activity without resorting to invasive techniques like implanting electrodes in the brain. The Meta AI study involved 169 healthy adults who listened to stories and sentences read aloud.

Researchers monitored brain activity by attaching electrodes to participants’ scalps and recording the electrical signals their brains produced.

The data was then used to train an artificial intelligence model in the hope that it would identify patterns and learn to “hear,” from the brain activity alone, the sounds participants were listening to.
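
To make the setup concrete, here is a minimal sketch in Python of the kind of data pairing such a study implies: cutting a continuous brain recording and the audio heard at the same time into aligned windows, so a model can later learn which sound goes with which response. The function name, sampling rates, and window length are illustrative assumptions, not details from the study.

```python
import numpy as np

def make_aligned_windows(brain, audio, brain_sr=120, audio_sr=16000, win_s=3.0):
    """Split a (channels, time) brain recording and the mono audio heard at the
    same time into matching fixed-length windows (all rates are assumptions)."""
    total_s = min(brain.shape[1] / brain_sr, len(audio) / audio_sr)
    n_windows = int(total_s // win_s)
    pairs = []
    for i in range(n_windows):
        b = brain[:, int(i * win_s * brain_sr): int((i + 1) * win_s * brain_sr)]
        a = audio[int(i * win_s * audio_sr): int((i + 1) * win_s * audio_sr)]
        pairs.append((b, a))
    return pairs

# Toy example: one minute of a 64-channel recording and the audio it accompanied.
pairs = make_aligned_windows(np.random.randn(64, 120 * 60), np.random.randn(16000 * 60))
print(len(pairs), pairs[0][0].shape, pairs[0][1].shape)  # 20 windows of (64, 360) and (48000,)
```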

Jean Remi King, a research scientist at Facebook’s Artificial Intelligence Research (FAIR) lab, spoke with TIME about the study’s motivations, obstacles, and ethical considerations. The study has not yet been peer-reviewed by other experts in the field.

This interview has been condensed and edited for clarity.

TIME: Can you explain, for a lay audience, what this study set out to do and what your team found?

Jean Remi King: Traumatic brain injury, anoxia (a lack of oxygen), and other conditions can leave people unable to communicate. Over the past two decades, brain-computer interfaces have emerged as a potential solution for these individuals.

Placing an electrode on a patient’s motor cortex lets us decode neural activity and gives the patient a way to interact with the outside world.

However, implanting an electrode in the brain is a highly invasive procedure, so we decided to try noninvasive recordings of brain activity instead. The objective was to train a computer to analyze how a listener’s brain processed a story being told to them.

What were some of the difficulties you faced in carrying out this study?

I’d like to highlight two difficulties. For one thing, the brain signals we work with are very “noisy.” The sensors sit a significant distance from the brain, and the skull and skin distort the signal we pick up, so detecting it with a sensor requires extremely sophisticated technology.

The other major issue is more conceptual: we don’t know how language is represented in the brain. Even if the signal were perfectly clear, without machine learning it would be very difficult to say, “OK, this brain activity means this word, or this phoneme, or an intention to act.”

The aim of this AI project is to tackle these two challenges together, by training a system to connect representations of speech with representations of the brain activity recorded in response to that speech.
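
The underlying paper isn’t quoted here, so the following is only a minimal sketch, in Python/PyTorch, of one common way to “connect” two kinds of signals as described above: embed windows of brain activity and the corresponding speech into a shared space and train with a contrastive (CLIP-style) objective, so that matching pairs score higher than mismatched ones. All module names, shapes, and hyperparameters are assumptions for illustration, not Meta’s actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BrainEncoder(nn.Module):
    """Maps a window of multi-channel brain signals to a fixed-size embedding."""
    def __init__(self, n_channels=208, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 128, kernel_size=5, padding=2),
            nn.GELU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the time dimension
            nn.Flatten(),
            nn.Linear(128, dim),
        )

    def forward(self, x):  # x: (batch, channels, time)
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(brain_emb, speech_emb, temperature=0.1):
    """CLIP-style loss: each brain window should best match its own speech segment."""
    logits = brain_emb @ speech_emb.T / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(len(logits))              # the diagonal holds the true pairs
    return F.cross_entropy(logits, targets)

# Toy usage: random tensors standing in for real recordings and for precomputed
# speech embeddings (which could come from a pretrained speech model).
brain = torch.randn(8, 208, 360)                     # 8 windows of 208-channel signals
speech_emb = F.normalize(torch.randn(8, 256), dim=-1)
loss = contrastive_loss(BrainEncoder()(brain), speech_emb)
loss.backward()
```

In a setup like this, decoding what someone heard amounts to asking which candidate speech segment’s embedding lies closest to the embedding of their brain activity.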

In what ways could this research be expanded upon? Can you give us an idea of how long it might be before this AI is used to help people with neurological injuries communicate again?

What patients will ultimately need is a bedside device that also supports language production; our research so far focuses only on the perception of speech. One possible next step would be to look at how individuals direct their attention during conversations, to see whether they can follow what people are saying.

It would also be ideal if we could decode what people are trying to say, but we expect that to be extremely difficult: asking even a healthy volunteer to produce speech triggers many rapid facial movements that these sensors pick up immediately. It will be hard to confirm that we are decoding brain activity rather than muscle activity. That’s the plan, and it will be difficult, but we’re up for the challenge.

How else could this research be used?

That’s hard to judge, because we are focused on just one thing: using brain activity to decode what people heard while they were in the scanner.

Many of our peers and reviewers have asked, “Where is the value in that?”, since there isn’t much to be gained by deciphering something people have already heard. But I view this more as a proof of principle that these signals may carry richer representations than we initially assumed.

Is there anything else you think is important for people to know about this study?

I want to emphasize that this study was conducted within FAIR; it is not directed top-down by Meta, and it is not product-focused.