UPDATED 09:30 EDT / OCTOBER 18 2023

AI

Meta’s newest AI system can decode images from human brain activity

Artificial intelligence researchers from Meta Platforms Inc. have made another key breakthrough, designing an algorithm that can decode the images people are seeing from measurements of their brain activity.

In a blog post today, Meta researchers explained that they use a non-invasive neuroimaging technique known as magnetoencephalography, or MEG, to collect thousands of brain activity measurements every second. The AI system they have developed is then able to decode this activity to reconstruct the visual representations the brain forms when a person views an image.

Deployed in real time, the AI system can “reconstruct extraordinarily rich images from these recordings of brain activity,” the researchers said.

The research builds upon an earlier system Meta created that can decode speech from MEG signals. The new AI system they created is made up of three parts, namely an image encoder, a brain encoder and an image decoder. First, the image encoder creates a rich set of representations of an image, independently of the brain. Then, the brain encoder learns to align the MEG signals to those image embeddings. Finally, the image decoder creates a plausible image based on those brain representations.
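The pipeline described above can be illustrated with a toy sketch. Everything below is an illustrative assumption, not Meta's actual architecture: the dimensions are made up, the "image embeddings" are synthetic, and the brain encoder is reduced to a simple ridge regression mapping flattened MEG measurements onto embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions standing in for real ones (assumptions, not Meta's values).
n_trials, n_meg_features, embed_dim = 200, 500, 64

# Synthetic stand-ins: Z plays the role of the image encoder's output
# (one embedding per viewed image), and X plays the role of the MEG
# recordings, generated here as a noisy linear mixture of Z.
Z = rng.standard_normal((n_trials, embed_dim))           # image embeddings
A = rng.standard_normal((embed_dim, n_meg_features))     # unknown mixing
X = Z @ A + 0.1 * rng.standard_normal((n_trials, n_meg_features))  # "MEG"

# Brain encoder: ridge regression that learns to align MEG signals with
# the image embeddings (the second stage of the pipeline).
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_meg_features), X.T @ Z)
Z_hat = X @ W  # decoded embeddings, which an image decoder would render

# Alignment score: mean per-dimension correlation between true and
# predicted embeddings.
corr = float(np.mean(
    [np.corrcoef(Z[:, j], Z_hat[:, j])[0, 1] for j in range(embed_dim)]
))
print(f"mean embedding correlation: {corr:.2f}")
```

In the real system, a generative image decoder would then render a plausible image from `Z_hat`; the sketch stops at the alignment step, which is the part the regression captures.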

Meta’s researchers said the system was trained on a public dataset of MEG signals acquired from healthy volunteers by a consortium of academic researchers, recorded while those volunteers viewed images drawn from the same image database.

“We first compare the decoding performance obtained with a variety of pretrained image modules and show that the brain signals best align with modern computer vision AI systems like DINOv2, a recent self-supervised architecture able to learn a rich set of representations without any human annotations,” the researchers said. “This result confirms that self-supervised learning leads AI systems to learn brain-like representations: The artificial neurons in the algorithm tend to be activated similarly to the physical neurons of the brain in response to the same image.”

According to the researchers, this functional alignment between such AI systems and the brain can then be used to guide the generation of plausible images.
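One simple way such a decoded embedding can "guide" image generation is retrieval: comparing it against a library of candidate image embeddings and picking the closest match, which a generative decoder could then refine. The sketch below is a hypothetical illustration with synthetic embeddings, not Meta's method; the library, noise level and cosine-similarity choice are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

embed_dim, n_candidates = 64, 10

# A library of candidate image embeddings (stand-ins for encoder outputs).
library = rng.standard_normal((n_candidates, embed_dim))

# A decoded brain embedding: modeled here as a noisy copy of candidate 3,
# as if the subject had been viewing that image.
decoded = library[3] + 0.2 * rng.standard_normal(embed_dim)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank candidates by similarity to the decoded embedding.
scores = np.array([cosine(decoded, c) for c in library])
best = int(np.argmax(scores))
print(f"retrieved candidate {best} with similarity {scores[best]:.2f}")
```

Because the decoded embedding sits close to the viewed image's embedding, retrieval recovers candidate 3; in the full system, the image decoder plays this role generatively rather than by lookup.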

The system can still be improved. It has proven reliable enough to preserve high-level features, such as the categories of objects present in each image, but it often gets low-level features wrong, misplacing or mis-orienting some objects in the generated images. And although better reconstructions can be obtained using existing magnetic resonance imaging technology, the advantage of MEG is its temporal resolution: it captures brain activity moment to moment, producing a continuous stream of images decoded from that activity.

Meta’s researchers believe their work is an important development that can help the scientific community understand the foundations of human intelligence. In the longer term, it can be a stepping stone towards non-invasive brain-computer interfaces that can be used to help treat people with brain injuries. It can also pave the way to building AI systems with greater potential to learn and reason like humans.

