What a New Theory on Memory Formation Means for Machine Learning

Andrii Vodolazhskyi | Shutterstock.com

A recent study challenges an accepted concept of how the brain forms and stores memories. The new learning mechanism could inform future AI development.

One of the many enigmas of the human brain concerns memory and the physical traces it leaves in neural tissue.

As our knowledge of the human brain grows, its secrets gradually enter common understanding. For example, we now know that the brain does not record memories as mere mental images.


The development of new methodologies, based on brain observations in humans and animals, will give new impetus to neuroscience research.

If our understanding of how neural pathways form robust connections changes, that knowledge could inform the development of more efficient AI systems, which are currently built on our existing models of the brain.

Neurons Wire Together and Fire Longer

There’s a learning rule in neuroscience stating that when two neurons are activated at nearly the same time, the bond (synapse) connecting them is reinforced.

That’s the principle of Hebbian theory, which holds that memories result from strengthened neural networks formed by linked neurons.

To form or recall a memory, neurons fire in bursts of activity on the order of tens of milliseconds; the lasting strengthening of synapses that results is called long-term potentiation (LTP).

The neural network also has to eliminate some connections (a process called pruning) to accommodate the formation and storage of new memories.
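The Hebbian coincidence rule described above can be sketched in a few lines of code. Everything here (function names, the learning rate) is an illustrative assumption, not code from the study:

```python
# Toy Hebbian update, a hypothetical sketch:
# the weight between two neurons grows when their activity coincides.

def hebbian_update(w, pre, post, lr=0.1):
    """Strengthen synapse w in proportion to coincident pre/post activity."""
    return w + lr * pre * post

w = 0.5
w = hebbian_update(w, pre=1.0, post=1.0)  # both neurons fire: w increases
w = hebbian_update(w, pre=1.0, post=0.0)  # only one fires: w unchanged
print(round(w, 2))  # 0.6
```

The update depends only on near-simultaneous activity: if the two neurons do not fire together, the product `pre * post` is zero and the synapse is untouched.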

However, a new study by neuroscientists at Howard Hughes Medical Institute suggests otherwise.

HHMI’s researchers have identified a new memory and learning mechanism that they called “behavioral time scale synaptic plasticity” (BTSP).

Per the authors of the study, the longer time scale of BTSP, which spans seconds, “[implies] that no causal relationship of interconnected neurons may be necessary to form long-lasting associations between them.”
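The contrast with the millisecond-scale Hebbian rule can be made concrete with a toy eligibility-trace simulation, a common way such seconds-long plasticity is modeled in the literature. All names and parameters below are illustrative assumptions, not the study’s actual model:

```python
# Illustrative sketch of BTSP-style plasticity: a slowly decaying eligibility
# trace lets a synapse that was active several seconds BEFORE an instructive
# "plateau" signal still be strengthened.

dt = 0.1       # time step in seconds
tau = 2.0      # trace decays over seconds, not tens of milliseconds
trace = 0.0
w = 0.0

for t in range(100):                              # simulate 10 seconds
    spike = 1.0 if t == 20 else 0.0               # presynaptic input at t = 2 s
    trace += dt * (-trace / tau + spike / dt)     # decaying eligibility trace
    plateau = 1.0 if t == 60 else 0.0             # instructive signal at t = 6 s
    w += 0.5 * trace * plateau                    # potentiation gated by plateau

print(w > 0)  # True
```

Because the trace decays over seconds rather than milliseconds, an input that fired four seconds before the instructive signal is still potentiated; no millisecond-scale coincidence between the two events is required.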

Human Self-Reflection to Inform Human Creations

The human brain, and its many functions such as memory, remains the subject of debate on which no consensus has yet been reached.

A better understanding of human intelligence could drive future developments in AI: current systems still lack adaptability, and most rely on constant, prior human input.

With the advancement of brain imaging techniques, scientists have the opportunity to acquire new knowledge to inform AI development.

Discoveries in neuroscience could be applied to the construction of advanced artificial neural networks capable of learning, growing, and adapting.

A better understanding of how strong synaptic connections are formed could help build an AI platform efficient enough to approach the much-discussed “unsupervised” AI.

Memory is remarkably efficient, taking only a few seconds to record a personal event, its date, and its context, among other data that we still strive to understand.

However, in the human brain, there is no specific region that controls all memory. Different memory systems are based on distinct but interconnected neural networks that work in close collaboration.

Billions of neurons, each connected to other neurons, form a neural machine made of billions of synaptic connections allowing us to form memories, to perceive reality, to predict, to decide, and to act.

The key to all these capabilities is the amazing property of the human neural network to extend, reshape itself, and reconfigure its own circuits all the time. An artificial neural network with this property could approach the true “unsupervised” AI with human-like intelligence mentioned earlier.

How exactly could this discovery inform the construction of artificial neural networks?
