
June 6 (UPI) — Brains apply “data compression” to maximize performance and minimize cost when making decisions, according to a study released Monday that could affect future research into artificial intelligence.

The study, in the journal Nature Neuroscience, used an experiment with mice to study adaptive behavior in decision-making and “how reward expectations are affected by differences in internal representations.”

The mice were challenged to estimate whether two tones were separated by an interval greater than 1.5 seconds while the researchers recorded the activity of dopaminergic neurons, which are known to play a key role in learning the value of actions.

“If the animal incorrectly estimated the interval length on a given trial, then the activity of these neurons would produce a ‘prediction error’ which should help improve performance on future trials,” said Christian Machens, one of the lead authors of the study, in a press release.

The researchers noted in a preprint of the study that the mice almost always made the right choice, but that their responses became more variable the closer the interval was to the 1.5-second boundary. Previous research has shown that animals estimate their own ability to correctly classify different stimuli.

The researchers created models using the concepts of reinforcement learning and temporal difference learning, areas of machine learning associated with artificial intelligence, and compared these results with recorded data on mouse behavior.
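To give a sense of how temporal difference learning generates the “prediction error” signal attributed to dopaminergic neurons, here is a minimal, generic textbook sketch. It is not the study’s actual model; the learning rate, discount factor, and reward values are invented for illustration.

```python
# Minimal sketch of a temporal-difference (TD) value update. The TD
# "prediction error" is the gap between what the agent expected and
# what it actually observed; this error nudges future estimates.
# Parameters (alpha, gamma, reward) are illustrative, not from the study.

def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """Return the updated value estimate and the TD prediction error."""
    prediction_error = reward + gamma * next_value - value
    return value + alpha * prediction_error, prediction_error

# A mis-estimated trial yields a large prediction error; repeated
# updates drive the value estimate toward the observed outcome.
v = 0.0
for _ in range(50):
    v, delta = td_update(v, reward=1.0, next_value=0.0)
print(v)  # converges toward the true reward of 1.0
```

Early trials produce large errors (and, in the brain, strong dopaminergic responses); as the estimate converges, the error shrinks, which is the learning dynamic the models in the study were built around.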

The study noted that by comparing these models to recorded responses, researchers “were able to infer the nature of internal representations animals might use during a task.”

“Data compression” here refers to the brain discarding just enough information, through “tunnel vision” and the animal’s own actions, that the mice can still reliably reach a solution without wasting effort on irrelevant details.

“Compressing representations of the outside world is akin to eliminating all irrelevant information and adopting a temporary ‘tunnel view’ of the situation,” said Machens, head of the Theoretical Neuroscience Laboratory at the Champalimaud Foundation in Portugal.

The researchers noted that the findings have “broad implications for neuroscience, as well as artificial intelligence.”

“While the brain has clearly evolved to process information efficiently, AI algorithms often solve problems through brute force: using lots of data and lots of parameters,” said lead author Joe Paton, director of the Champalimaud neuroscience research program.

“Our work provides a set of principles to guide future studies of how internal representations of the world can support intelligent behavior in the context of biology and AI.”