Meta AI announces long-term study on human brain and language processing


The human brain has long been a puzzle, and remains one: how it evolved, how it continues to evolve, and which of its abilities are tapped or still unused.

The same is true for Artificial Intelligence (AI) and Machine Learning (ML) models.

And just as human brains produced the AI and machine learning models that grow more sophisticated by the day, those systems are now being applied to the study of the brain itself. In particular, such studies seek to enhance the capabilities of AI systems and to model them more closely on brain function, so that they can operate with increasing autonomy.

Researchers at Meta AI have launched one such initiative. The research arm of Facebook’s parent company today announced a long-term study to better understand how the human brain processes language. Researchers are examining how brains and AI language models respond to the same spoken or written sentences.

“We’re trying to compare the AI system with the brain,” said Jean-Remy King, a senior research scientist at Meta AI.

He noted that spoken language is part of what makes human beings unique, and that understanding how the brain works remains a challenging, ongoing process. The underlying question is: “What makes humans more powerful or more efficient than these machines? We want to identify not only the similarities but also the remaining differences.”

Brain imaging and human-level AI

Meta AI is working with Neurospin (CEA), the Paris-based research center for innovation in brain imaging, and the French National Institute for Research in Digital Science (INRIA). This work is part of Meta AI’s broader focus on human-level AI that can learn with little or no human supervision.

By better understanding how the human brain processes language, the researchers hope to gather insights that can guide the development of AI systems that learn and process speech as efficiently as people do.

“Developing, training and using specialized learning algorithms to perform a variety of tasks is becoming increasingly easy,” King said. “But these AI systems remain far from the efficiency of the human brain. What is clear is that something is missing in these systems that would let them understand and learn a language as efficiently as humans do. This is obviously the million-dollar question.”

In deep learning, multiple layers of neural networks work together to learn. The Meta AI researchers have applied this approach to pinpoint when and where word and sentence comprehension arises in the brain as volunteers read or listen to stories.

Over the past two years, researchers have applied deep learning techniques to public neuroimaging datasets derived from magnetic resonance imaging (MRI) and computerized tomography (CT) scans of volunteers’ brain activity. The datasets were collected and shared by several academic institutions, including Princeton University and the Max Planck Institute for Psycholinguistics.

The team modeled thousands of these brain scans, also using a magnetoencephalography (MEG) scanner that captures images every millisecond. Working with INRIA, they compared various language models with the brain responses of 345 volunteers, recorded with functional magnetic resonance imaging (fMRI) as they listened to complex stories.

The same narratives that were read or played to the human subjects were then fed to the AI systems. “We can compare these two sets of data to see when and where they match or diverge,” King said.
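The article does not detail the comparison method, but studies of this kind typically fit a linear mapping from model activations to brain responses and score the fit on held-out data. The sketch below illustrates that idea on simulated data; the array shapes, the ridge penalty, and the "brain score" name are assumptions for illustration, not Meta AI's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: model activations (n_words x n_dims) for a story,
# and brain responses (n_words x n_voxels) recorded for the same story.
n_words, n_dims, n_voxels = 200, 16, 8
model_acts = rng.standard_normal((n_words, n_dims))

# Simulate brain data that partially reflects the model activations.
true_map = rng.standard_normal((n_dims, n_voxels))
brain = model_acts @ true_map + 0.5 * rng.standard_normal((n_words, n_voxels))

# Split into train/test halves and fit a ridge regression
# from model space to brain space on the training half.
half = n_words // 2
lam = 1.0  # ridge penalty (assumed value)
X_tr, X_te = model_acts[:half], model_acts[half:]
Y_tr, Y_te = brain[:half], brain[half:]
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_dims), X_tr.T @ Y_tr)

# "Brain score": per-voxel correlation between predicted and actual
# responses on the held-out half. Higher means a closer match.
pred = X_te @ W
pred_c = pred - pred.mean(axis=0)
Y_c = Y_te - Y_te.mean(axis=0)
scores = (pred_c * Y_c).sum(axis=0) / (
    np.linalg.norm(pred_c, axis=0) * np.linalg.norm(Y_c, axis=0)
)
print("mean brain score:", scores.mean())
```

Running this with different language models against the same brain recordings would let one rank the models by how closely their representations track neural activity.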

What researchers have discovered so far

Researchers have already uncovered valuable insights. Notably, the language models that most closely match brain activity are those that best predict the next word from context (such as “on a dark and stormy night…” or “once upon a time…”), King explained. Such predictions, based in part on observable inputs, are at the heart of self-supervised learning (SSL) in AI.
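The next-word objective the article describes can be illustrated with a deliberately tiny stand-in: a bigram model that, for each word, remembers which word most often followed it. This toy corpus and the `predict_next` helper are hypothetical; real systems use large neural networks, but the self-supervised objective, predicting the next word from what came before, is the same.

```python
from collections import Counter, defaultdict

# Toy corpus echoing the article's examples of predictable continuations.
corpus = (
    "once upon a time there was a princess . "
    "it was a dark and stormy night . "
    "once upon a time there was a dragon ."
).split()

# Count, for each word, which words followed it in the corpus.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen after `word`."""
    return counts[word].most_common(1)[0][0]

print(predict_next("upon"))    # "a"
print(predict_next("stormy"))  # "night"
```

Training needs no labels: the next word in the text is its own supervision signal, which is what makes the approach self-supervised.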

However, certain regions of the brain anticipate words and ideas far ahead of time, whereas language models are typically trained to predict only the next word. This limits their ability to anticipate complex ideas, plots and narratives.

“[Humans] systematically predict what’s going to happen next,” King said. “But it’s not just prediction at the word level; it’s at a more abstract level.”

By further contrast, the human brain can learn from a few million sentences while constantly adapting and storing information across its trillions of synapses. AI language models, meanwhile, are trained on billions of sentences and can comprise 175 billion artificial synapses (parameters).

King pointed out that infants are exposed to only thousands of sentences, yet quickly come to understand language. For example, from just a few examples, children learn that “orange” can refer to both a fruit and a color. Modern AI systems still find this task difficult.

“It’s very clear that today’s AI systems, no matter how good or effective, are extremely inefficient,” King said. While AI models are performing increasingly complex tasks, “it is becoming very clear that in many ways they do not fully understand things.”

To further their studies, Meta AI researchers and Neurospin are now building an original neuroimaging dataset. It will be open-sourced, along with code, deep learning models and research papers, to help advance research in AI and neuroscience. “The idea is to provide a set of tools that our colleagues in academia and other fields can use and build on,” King said.

By studying these long-range predictive capabilities in more depth, researchers may be able to improve modern AI language models, he said. Augmenting the algorithms with long-range predictions could make them correlate more closely with the brain.
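One simple way to picture "long-range prediction" is to roll a one-step predictor forward several positions instead of stopping at the next word. The transition table and `predict_ahead` helper below are hypothetical illustrations of that idea, not the augmentation the researchers propose, which would operate on abstract representations rather than literal word chains.

```python
# A fixed one-step transition table standing in for a trained
# next-word predictor (hypothetical, for illustration only).
transitions = {
    "once": "upon", "upon": "a", "a": "time", "time": "there",
    "dark": "and", "and": "stormy", "stormy": "night",
}

def predict_ahead(word, steps):
    """Predict the word `steps` positions ahead by chaining one-step predictions."""
    for _ in range(steps):
        word = transitions[word]
    return word

print(predict_ahead("once", 3))  # "time"
```

Chaining one-step predictions compounds errors quickly; the article suggests the brain instead anticipates distant content at a more abstract level, which is what long-range training objectives aim to capture.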

“What is clear now is that these systems can be compared to the human brain, which was not the case a few years ago,” King insisted.

He added that neuroscience and AI need to be brought together for scientific progress, and that over time the two fields will evolve in closer collaboration.

“This exchange between neuroscience and AI is not just a metaphorical exchange of abstract ideas,” King said. “We are trying to understand what the architecture is, what the principles of learning in the brain are. And we’re trying to implement these architectures and these principles in our models.”

