Summary: The brain acts as a prediction machine, constantly comparing incoming sensory information with internal predictions.
source: Max Planck Institute
This is in line with a recent theory about how our brain works: it is a prediction machine, constantly comparing the sensory information we capture (such as images, sounds, and language) with internal predictions.
“This theoretical idea is very popular in neuroscience, but the evidence for it is often indirect and limited to artificial situations,” says lead author Micha Heilbron.
“I would really like to understand exactly how this works and test it in different situations.”
Heilbron explains that brain research into this phenomenon is usually done in an artificial setting. To evoke predictions, participants are asked to stare at a single pattern of moving dots for half an hour, or to listen to simple sound sequences such as “beep beep boop, beep beep boop.”
“Studies of this kind do show that our brain can make predictions, but not that this also happens amid the complexity of everyday life. We are trying to take it out of the lab: we study the same phenomenon, how the brain deals with unexpected information, but in natural situations that are often unpredictable.”
Hemingway and Holmes
The researchers analyzed the brain activity of people listening to stories by Hemingway or about Sherlock Holmes. At the same time, they analyzed the texts of the same books using computer models known as deep neural networks. In this way, they could calculate how unpredictable each word was.
For each word or sound, the brain makes detailed statistical predictions and turns out to be extremely sensitive to the degree of unpredictability: the brain’s response is stronger when a word is unexpected in its context.
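The per-word unpredictability the researchers computed is commonly measured as surprisal: the negative log probability a language model assigns to a word given its context. A minimal sketch of that measure, using made-up next-word probabilities in place of the GPT-2 model the study actually used:

```python
import math

def surprisal(prob: float) -> float:
    """Surprisal in bits: -log2 of the probability of a word given its context."""
    return -math.log2(prob)

# Hypothetical next-word probabilities from a language model,
# given a context such as "The cat sat on the ...".
next_word_probs = {"mat": 0.40, "floor": 0.25, "roof": 0.05, "piano": 0.001}

for word, p in sorted(next_word_probs.items(), key=lambda kv: -kv[1]):
    print(f"{word:>6}: {surprisal(p):6.2f} bits")
```

An expected word like “mat” yields a low surprisal, while an implausible one like “piano” yields a high surprisal; the study found that brain responses scale with exactly this quantity.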
“In itself, this is not so surprising: after all, everyone knows that you can sometimes anticipate upcoming language. For example, your brain sometimes automatically ‘fills in the blank’ and mentally finishes someone else’s sentence, for instance if the speaker talks very slowly, stutters, or cannot think of a word. But what we have shown here is that this happens continuously. Our brain is constantly guessing at words; the predictive machinery is always switched on.”
More than just software
“In fact, our brain does something similar to speech recognition software. Speech recognizers that use artificial intelligence also constantly make predictions and let themselves be guided by them, much like the autocomplete function on your phone.
“However, we noticed a big difference: brains don’t just predict words, they make predictions at many different levels, from abstract meaning and grammar to specific sounds.”
Insights of this kind are of ongoing interest to tech companies that want to use them to build better language and image recognition software, for example. But such applications are not Heilbron’s main goal.
“I would really like to understand how our predictive machinery works at the most fundamental level. I am now using the same research setup to study visual and auditory perception, such as music.”
About this research in Neuroscience News
author: press office
source: Max Planck Institute
Contact: Press Office – Max Planck Institute
Image: The image is credited to DALL-E (OpenAI) and Micha Heilbron
Original research: Closed access.
“A hierarchy of linguistic predictions during natural-language comprehension” by Micha Heilbron et al. PNAS
A hierarchy of linguistic predictions during natural-language comprehension
Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input.
However, the role of prediction in language processing remains disputed, with disagreement over both the ubiquity of prediction and its representational nature.
Here, we address both issues by analyzing brain recordings of participants listening to audiobooks, and using a deep neural network (GPT-2) to precisely quantify contextual predictions.
First, we demonstrate that the brain’s responses to words are modulated by ubiquitous predictions. Next, we separate model-based predictions into distinct dimensions, revealing separable neural signatures for predictions about grammatical category (parts of speech), phonemes, and semantics.
Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing.
Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.