New Gen AI model to help explain human memory, imagination

New Delhi: Recent advances in generative AI can help explain how memories enable us to learn about the world, re-live old experiences, and construct totally new experiences for imagination and planning, according to a new study. The study, published in Nature Human Behaviour, uses an AI computational model - known as a generative neural network - to simulate how neural networks in the brain learn from and remember a series of events (each one represented by a simple scene).

The model featured networks representing the hippocampus and the neocortex, allowing the researchers to investigate how the two interact. Both parts of the brain are known to work together during memory, imagination and planning.

“Recent advances in the generative networks used in AI show how information can be extracted from experience so that we can both recollect a specific experience and also flexibly imagine what new experiences might be like,” said lead author Eleanor Spens, a doctoral student at University College London’s (UCL) Institute of Cognitive Neuroscience. “We think of remembering as imagining the past based on concepts, combining some stored details with our expectations about what might have happened,” Spens said.

Humans need to make predictions to survive (e.g. to avoid danger or to find food), and the AI networks suggest how, when we replay memories while resting, our brains pick up on patterns from past experiences that can be used to make these predictions.

The researchers played 10,000 images of simple scenes to the model. The hippocampal network rapidly encoded each scene as it was experienced, then replayed the scenes over and over again to train the generative neural network in the neocortex.
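The replay process can be pictured with a minimal Python sketch. This is an illustration of the idea described above, not the authors' code: the HippocampalStore class, the scene size and the replay batch size are placeholder assumptions.

```python
# Illustrative sketch of one-shot hippocampal encoding followed by replay.
# A fast "hippocampal" store memorises each scene as it arrives, then replays
# stored scenes repeatedly so a slower-learning "neocortical" network can train on them.
import numpy as np

rng = np.random.default_rng(0)

class HippocampalStore:
    """Fast, single-exposure storage of experienced scenes."""
    def __init__(self):
        self.scenes = []

    def encode(self, scene):
        # Rapid encoding: each scene is stored after a single exposure.
        self.scenes.append(scene)

    def replay(self, n_samples):
        # Draw stored scenes at random to replay them to the neocortical network.
        idx = rng.integers(len(self.scenes), size=n_samples)
        return np.stack([self.scenes[i] for i in idx])

# Placeholder "scenes": flattened images with a few thousand pixels each.
scenes = rng.random((100, 4096))

store = HippocampalStore()
for scene in scenes:
    store.encode(scene)          # hippocampus encodes each scene as it is experienced

replay_batch = store.replay(32)  # replayed scenes are then used to train the
print(replay_batch.shape)        # generative network in the neocortex
```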

The neocortical network learned to pass the activity of the thousands of input neurons (neurons that receive visual information) representing each scene through smaller intermediate layers of neurons (the smallest containing only 20 neurons), to recreate the scenes as patterns of activity in its thousands of output neurons (neurons that predict the visual information).
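The architecture described above resembles an autoencoder-style generative network: a wide input layer squeezed through a 20-unit bottleneck and expanded back to a wide output layer. The PyTorch sketch below shows how such a network could be set up and trained on replayed scenes; the layer sizes other than the 20-unit bottleneck, and the training details, are illustrative assumptions rather than the published model.

```python
# Illustrative autoencoder-style "neocortical" network with a 20-unit bottleneck,
# trained to reconstruct (predict) replayed scenes.
import torch
import torch.nn as nn

class NeocorticalNetwork(nn.Module):
    def __init__(self, n_pixels=4096, bottleneck=20):
        super().__init__()
        self.encoder = nn.Sequential(            # input neurons -> small code
            nn.Linear(n_pixels, 256), nn.ReLU(),
            nn.Linear(256, bottleneck), nn.ReLU(),
        )
        self.decoder = nn.Sequential(            # small code -> output neurons
            nn.Linear(bottleneck, 256), nn.ReLU(),
            nn.Linear(256, n_pixels), nn.Sigmoid(),
        )

    def forward(self, scene):
        # Pass the scene through the bottleneck and recreate it at the output layer.
        return self.decoder(self.encoder(scene))

model = NeocorticalNetwork()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

replay_batch = torch.rand(32, 4096)              # stand-in for replayed scenes
for _ in range(100):                             # repeated replay drives slow learning
    reconstruction = model(replay_batch)
    loss = loss_fn(reconstruction, replay_batch)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

The narrow bottleneck is the key design choice: because only 20 units carry each scene, the network is forced to keep the statistical regularities shared across many scenes rather than every detail, which is what lets it both reconstruct familiar experiences and generate plausible new ones.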
