Our brains encode sentences differently from words alone

April 10, 2021

Brain scans show that we represent the meaning of a sentence differently from the meanings of the words that compose it. (Unsplash/Brett Jordan)

Scientists from multiple universities and Facebook AI have used a deep artificial neural network to predict brain activation in people reading sentences, discovering that the activation patterns diverged from those associated with the words themselves.

"The meaning that you understand from each sentence I say is more than the sum of its constituent word parts," Andrew James Anderson, lead author of the study, published March 22 in the Journal of Neuroscience, told The Academic Times. "If you take the two sentences 'The car ran over the cat' or 'The cat ran over the car,' despite both sentences containing precisely the same words, they have entirely different meanings, especially for the cat."

Anderson and his colleagues used functional magnetic resonance imaging (fMRI) to scan the brains of 14 participants as they read 240 sentences that had been encoded by a well-established deep neural network called InferSent. Taking ordered word sequences as its input, InferSent combines those words into sentence-level representations capable of solving an entailment task: deciding whether two sentences entail each other, contradict each other or are neutral to each other. "'The car ran over the cat' does not entail 'The cat ran over the car,' whereas 'The car ran over the cat' and 'The cat was squashed by the fast-moving Volkswagen' do have the entailment relationship," Anderson said.
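The InferSent code and pretrained weights are publicly available, so a minimal encoding run looks roughly like the sketch below. It follows the usage pattern of Facebook Research's InferSent repository; the file paths and parameter values are placeholder assumptions for a local download, not settings from the study.

```python
import torch
from models import InferSent  # models.py ships with the facebookresearch/InferSent repo

# Hyperparameters mirroring the repo's README (assumed defaults, not the study's settings).
params = {"bsize": 64, "word_emb_dim": 300, "enc_lstm_dim": 2048,
          "pool_type": "max", "dpout_model": 0.0, "version": 2}
model = InferSent(params)
model.load_state_dict(torch.load("infersent2.pkl"))  # hypothetical local path to pretrained weights
model.set_w2v_path("crawl-300d-2M.vec")              # hypothetical local path to fastText vectors

sentences = ["The car ran over the cat.", "The cat ran over the car."]
model.build_vocab(sentences, tokenize=True)
u, v = model.encode(sentences, tokenize=True)  # one 4096-dimensional vector per sentence

# Same bag of words, but word order pushes the sentence embeddings apart.
cosine = float(u @ v / ((u @ u) ** 0.5 * (v @ v) ** 0.5))
print(f"cosine similarity: {cosine:.3f}")
```

In the natural language inference setup that trained these weights, pairs of such vectors, concatenated with their elementwise difference and product, feed a classifier that labels each pair as entailment, contradiction or neutral.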

"Are these sentence representations confined to a particular localized brain region, or do they represent a more network-level operation, spanning multiple brain regions?" Anderson asked. "Our results point more towards the latter — that a network of brain regions, including the temporal, parietal, and frontal cortex, are all involved."

By using InferSent to capture sentence-level semantics, the team overcame a limitation of previous imaging studies, which have typically used "bag-of-words" approaches that model sentences as unordered collections of words. Anderson said that InferSent and similar models are better able to capture the factors that make the same words mean different things in different sentences.
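The limitation is easy to demonstrate: under a bag-of-words model, Anderson's two car-and-cat sentences are literally indistinguishable, as a few lines of plain Python show.

```python
from collections import Counter

a = "the car ran over the cat".split()
b = "the cat ran over the car".split()

# A bag-of-words representation keeps only word counts and discards order,
# so the two sentences collapse to the same representation.
print(Counter(a) == Counter(b))  # True
```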

"Consider a word such as bat," he said. "That word could either be a piece of sports equipment or a flying mammal, depending upon the context it's in. These models have begun to be able to accommodate things like the context that words are in and also, importantly, the order in which words appear."

According to Anderson, the complexity of InferSent and similar neural networks makes it hard for even experts to understand exactly how they do what they do. "There are so many moving parts — thousands, millions, trillions of parameters," he explained. "If you end up with a model with so many thousands and millions of measurements, it's difficult to look under the hood and try to have intuition on just how it's doing what it's doing."

"In the same way, if you record a set of brain activity — the activity of thousands of neurons firing away — it's difficult to know just by looking at them exactly how they are solving particular tasks," he continued. "To gain a thorough understanding of how some of these models are actually working would probably take a concerted experimental effort, just like trying to understand how the brain works."

The team also compared InferSent's performance to two major neural networks for natural language processing: BERT, or Bidirectional Encoder Representations from Transformers, developed by Google researchers; and ELMo, or Embeddings from Language Models, developed at the University of Washington and the Allen Institute for AI. ELMo and BERT performed similarly to InferSent.
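Anderson's "bat" example is straightforward to reproduce with a contextual model like BERT. The sketch below uses the Hugging Face transformers library, an implementation choice assumed here for illustration rather than one named in the study, to pull out the vector BERT assigns to "bat" in two different sentences.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def bat_vector(sentence):
    """Return BERT's contextual embedding of the token 'bat' in a sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # one vector per token
    return hidden[tokens.index("bat")]

u = bat_vector("he swung the bat at the ball")
v = bat_vector("a bat flew out of the cave at dusk")

# The same word gets a different vector in each context; a static word
# embedding would assign both uses an identical representation.
print(torch.cosine_similarity(u, v, dim=0).item())
```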

"It was of interest just to put these results with InferSent into broader context with these newer models," Anderson said. "There's a bit of a trade-off. These newer models are becoming increasingly complex. Although they're proving to be the more powerful approach, it's not necessarily a given that that is how the brain is also operating. It is also more difficult for the likes of fMRI researchers to experiment with BERT, just because it's so large and complicated and you have to make additional decisions about how you're going to use it."

The models these researchers use all ultimately rely on text to represent meaning. Anderson noted that the human brain may not depend entirely on language to represent meaning, and that many animals clearly solve problems even though they lack language. "No matter how good these computational models that use only text get, it's arguable that they're ever going to capture this sort of experiential knowledge that comes from living in the world," he said. "The deep network models may be currently impoverished in that respect."

Anderson also highlighted the limitations of fMRI, which measures a blood-flow signal that peaks roughly four seconds after a brain region becomes active. "Not only does fMRI have a relatively slow sample rate, it's also slightly delayed," he said.
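That lag is conventionally modeled with a hemodynamic response function. The sketch below uses a common double-gamma textbook approximation, an assumption rather than the study's preprocessing, whose peak a few seconds after stimulus onset is why stimuli and the fMRI signal must be realigned before analysis.

```python
import numpy as np
from scipy.stats import gamma

# A common double-gamma approximation of the canonical hemodynamic
# response function: an early peak followed by a smaller undershoot.
t = np.arange(0, 30, 0.5)  # seconds after stimulus onset
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6

print(f"signal peaks ~{t[np.argmax(hrf)]:.1f} s after the stimulus")
```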

Anderson thinks the methods he has tested could eventually have clinical value, including in neurodegenerative diseases such as Alzheimer's. "Alzheimer's disease is associated with a signature buildup of pathological proteins," he explained. "These build up in brain regions that are associated with long-term memory, so these new AI-based methods may provide a means to assess how well the diseased brain regions are functioning in encoding meaning."

"Furthermore, these sorts of methods may provide the opportunity to test whether brain networks rewire in the face of disease so as to enable less diseased brain regions to take on the roles of diseased ones," he said. "Finding this out could be useful for helping to characterize the progression of diseases."

The paper, "Deep artificial neural networks reveal a distributed cortical network encoding propositional sentence-level meaning," published in the Journal of Neuroscience on March 22, was authored by Andrew James Anderson, Rajeev D. S. Raizada, and Scott Grimm of the University of Rochester; Douwe Kiela of Facebook AI Research; Jeffrey R. Binder, Leonardo Fernandino, Colin J. Humphries, and Lisa L. Conant of the Medical College of Wisconsin; and Edmund S. Lalor of the University of Rochester and Trinity College Dublin.
