Columbia scientists have developed a new mathematical model that helps to explain how the human brain’s biological complexity allows it to lay down new memories without wiping out old ones, illustrating how the brain maintains the fidelity of memories for years, decades or even a lifetime.
This model could help neuroscientists design more targeted studies of memory, and also spur advances in neuromorphic hardware, powerful computing systems inspired by the human brain.
“The brain is continually receiving, organizing and storing memories. These processes, which have been studied in countless experiments, are so complex that scientists have been developing mathematical models in order to fully understand them,” said Stefano Fusi, the paper’s senior author.
“The model that we have developed finally explains why the biology and chemistry underlying memory are so complex, and how this complexity drives the brain’s ability to remember,” he added.
“The problem with a simple, dial-like model of how synapses function was the assumption that their strength could be dialed up or down indefinitely,” said Dr. Fusi, adding, “But in the real world this can’t happen. Whether it’s the volume knob on a stereo or any biological system, there has to be a physical limit to how far it can turn.”
When these limits were imposed, the memory capacity of these models collapsed.
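That collapse can be illustrated with a toy simulation. What follows is a hedged sketch, not the published model: the synapse count, binary states, and update probability are all arbitrary illustrative choices. Each "memory" is a random pattern, and every bounded, single-dial synapse snaps toward the new pattern with some probability, so the traces of earlier memories shrink geometrically with age.

```python
import random

random.seed(1)
N = 20_000   # number of synapses (arbitrary)
T = 100      # memories stored one after another
q = 0.2      # chance each synapse snaps to the new pattern (arbitrary)

def signal(synapses, pattern):
    """Overlap between the synaptic state and a stored pattern,
    rescaled so chance level is 0 and a perfect match is 1."""
    matches = sum(s == p for s, p in zip(synapses, pattern))
    return 2.0 * matches / len(synapses) - 1.0

synapses = [random.randint(0, 1) for _ in range(N)]
patterns = []
for _ in range(T):
    pattern = [random.randint(0, 1) for _ in range(N)]
    patterns.append(pattern)
    # Bounded "dial": a synapse cannot keep accumulating strength,
    # so writing a new memory overwrites part of the old state.
    synapses = [p if random.random() < q else s
                for s, p in zip(synapses, pattern)]

newest = signal(synapses, patterns[-1])  # trace of the newest memory
oldest = signal(synapses, patterns[0])   # trace of the first memory
# A memory stored `age` steps ago leaves a trace of roughly
# q * (1 - q)**age, so old memories quickly sink into the noise.
```

In this toy setting the experimenter faces the dilemma directly: a large update probability writes strong new memories but erases old ones fast, while a small one preserves old memories that were barely written in the first place.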
So Dr. Fusi, in collaboration with fellow Zuckerman Institute investigator Larry Abbott, offered an alternative: each synapse is more complex than a single dial, and should instead be described as a system with multiple dials.
In 2005, Drs. Fusi and Abbott published research explaining this idea. They described how different dials within a synapse could operate in tandem to form new memories while protecting old ones. But even that model, the authors later realized, fell short of what they believed the brain, particularly the human brain, could hold.
Memories are widely believed to be stored in synapses, tiny structures on the surface of neurons. These synapses act as conduits, transmitting information via the electrical pulses that pass from neuron to neuron. In the earliest memory models, the strength of the electrical signals passing through a synapse was compared to the volume knob on a stereo: it could be dialed up to boost (or down to lower) the connection strength between neurons. This allowed for the formation of memories.
These models worked extremely well, accounting for an enormous memory capacity. But they also posed an intriguing dilemma.
“We came to realize that the various synaptic components, or dials, not only functioned at different timescales, but were also likely communicating with each other,” said Marcus Benna, the first author of today’s Nature Neuroscience paper. “Once we added the communication between components to our model, the storage capacity increased by an enormous factor, becoming far more representative of what is achieved inside the living brain.”
Dr. Benna likened the components of this new model to a system of beakers connected to each other through a series of tubes.
“In a set of interconnected beakers, each filled with different amounts of water, the liquid will tend to flow between them such that the water levels become equalized. In our model, the beakers represent the various components within a synapse,” explained Dr. Benna. “Adding liquid to one of the beakers — or removing some of it — represents the encoding of new memories. Over time, the resulting flow of liquid will diffuse across the other beakers, corresponding to the long-term storage of memories.”
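The beaker analogy can be sketched numerically. The snippet below is an illustration of the analogy only, not the published model: the beaker sizes, tube conductance, and step count are all hypothetical. New memories are poured into the first beaker, and liquid then diffuses toward progressively wider beakers whose levels change more slowly, so the trace persists there longer.

```python
N_BEAKERS = 4
# Beaker cross-sections grow down the chain (values are illustrative):
# a wider beaker changes level slowly, so it holds traces far longer.
sizes = [1.0, 2.0, 4.0, 8.0]
FLOW = 0.1                      # tube conductance (illustrative)

levels = [0.0] * N_BEAKERS      # water level in each beaker

def write_memory(amount):
    """Encode a new memory: pour liquid into the first beaker."""
    levels[0] += amount / sizes[0]

def step():
    """One time step: liquid flows through each tube in proportion
    to the level difference across it, pushing levels to equalize."""
    deltas = [0.0] * N_BEAKERS
    for k in range(N_BEAKERS - 1):
        f = FLOW * (levels[k] - levels[k + 1])  # volume through tube k
        deltas[k] -= f / sizes[k]
        deltas[k + 1] += f / sizes[k + 1]
    for k in range(N_BEAKERS):
        levels[k] += deltas[k]

write_memory(1.0)               # store one memory
for _ in range(200):
    step()
# The trace has now spread into the deep, slow beakers, where it
# decays far more gradually than in the fast first beaker.
```

Note that the total volume of liquid is conserved as it flows, which is why a memory written into the fast first beaker is not lost but gradually redistributed into the slow components.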