44 Publications
We have used simulations to study the learning dynamics of an autonomous, biologically realistic recurrent network of spiking neurons connected via plastic synapses, subjected to a stream of stimulus-delay trials, in which one of a set of stimuli is presented followed by a delay. Long-term plasticity, produced by the neural activity experienced during training, structures the network and endows it with active (working) memory, i.e. enhanced, selective delay activity for every stimulus in the training set. Short-term plasticity produces transient synaptic depression. Each stimulus used in training excites a selective subset of neurons in the network, and stimuli can share neurons (overlapping stimuli). Long-term plasticity dynamics are driven by presynaptic spikes and coincident postsynaptic depolarization; stability is ensured by a refresh mechanism. In the absence of stimulation, the acquired synaptic structure persists for a very long time. The dependence of long-term plasticity dynamics on the characteristics of the stimulus response (average emission rates, time course and synchronization), and on the single-cell emission statistics (coefficient of variation) is studied. The study clarifies the specific roles of short-term synaptic depression, NMDA receptors, stimulus representation overlaps, selective stimulation of inhibition, and spike asynchrony during stimulation. Patterns of network spiking activity before, during and after training reproduce most of the in vivo physiological observations in the literature.
Neurons in multiple brain regions fire trains of action potentials anticipating specific movements, but this 'preparatory activity' has not been systematically compared across behavioral tasks. We compared preparatory activity in auditory and tactile delayed-response tasks in male mice. Skilled, directional licking was the motor output. The anterior lateral motor cortex (ALM) is necessary for motor planning in both tasks. Multiple features of ALM preparatory activity during the delay epoch were similar across tasks. First, a majority of neurons showed direction-selective activity, and spatially intermingled neurons were selective for either movement direction. Second, many cells showed mixed coding of sensory stimulus and licking direction, with a bias toward licking direction. Third, delay activity was monotonic and low-dimensional. Fourth, pairs of neurons with similar direction selectivity showed high spike-count correlations. Our study forms the foundation to analyze the neural circuit mechanisms underlying preparatory activity in a genetically tractable model organism.

Short-term memories link events separated in time. Neurons in frontal cortex fire trains of action potentials anticipating specific movements, often seconds before the movement. This 'preparatory activity' has been observed in multiple brain regions, but has rarely been compared systematically across behavioral tasks in the same brain region. To identify common features of preparatory activity, we developed auditory and tactile delayed-response tasks in mice and compared preparatory activity between them. The same cortical area is necessary for both tasks. Multiple features of preparatory activity, measured with high-density silicon probes, were similar across tasks. We find that preparatory activity is low-dimensional and monotonic. Our study forms the foundation to analyze the circuit mechanisms underlying preparatory activity in a genetically tractable model organism.
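The claim that delay activity is "low-dimensional" is the kind of statement typically quantified from the eigenspectrum of population covariance. A minimal sketch, using synthetic data rather than the paper's recordings (all sizes and signals below are invented for illustration), is the participation-ratio estimate of dimensionality:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "population activity" (illustrative, not the paper's recordings):
# 100 neurons driven by 2 latent delay-epoch signals plus private noise.
n_neurons, n_timepoints = 100, 200
t = np.linspace(0.0, 1.0, n_timepoints)
latents = np.stack([t, np.sin(np.pi * t)])                # a ramp and a slow bump
mixing = rng.normal(size=(n_neurons, 2))
activity = mixing @ latents + 0.05 * rng.normal(size=(n_neurons, n_timepoints))

# The participation ratio of the covariance eigenspectrum,
# (sum of eigenvalues)^2 / sum of squared eigenvalues,
# is a standard estimate of effective dimensionality.
centered = activity - activity.mean(axis=1, keepdims=True)
cov = centered @ centered.T / n_timepoints
eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
participation_ratio = float(eigvals.sum() ** 2 / (eigvals ** 2).sum())

print(participation_ratio)  # far below n_neurons: the activity is low-dimensional
```

With two dominant latent signals the participation ratio lands near 2, far below the 100-neuron population size, which is the sense in which such activity is called low-dimensional.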
Many animals rely on persistent internal representations of continuous variables for working memory, navigation, and motor control. Existing theories typically assume that large networks of neurons are required to maintain such representations accurately; networks with few neurons are thought to generate discrete representations. However, analysis of two-photon calcium imaging data from tethered flies walking in darkness suggests that their small head-direction system can maintain a surprisingly continuous and accurate representation. We thus ask whether it is possible for a small network to generate a continuous, rather than discrete, representation of such a variable. We show analytically that even very small networks can be tuned to maintain continuous internal representations, but this comes at the cost of sensitivity to noise and variations in tuning. This work expands the computational repertoire of small networks, and raises the possibility that larger networks could represent more and higher-dimensional variables than previously thought.
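The mechanism at issue is a ring attractor: a recurrently coupled ring of neurons that holds an activity bump after the input is removed. A minimal rate-model sketch of a small network of this kind follows; the cell count, coupling strength, and firing-rate nonlinearity are illustrative choices, not the fly circuit's:

```python
import numpy as np

# Minimal rate-model ring attractor: a small network holds a heading
# after the cue is removed. All parameters are illustrative.
n = 16                                           # deliberately small network
theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
J1 = 8.0                                         # cosine recurrent coupling
W = (J1 / n) * np.cos(theta[:, None] - theta[None, :])

def f(x):
    # Rectified, saturating firing-rate function.
    return np.tanh(np.maximum(x, 0.0))

# Weak cue at heading pi/2, then purely autonomous dynamics (no input).
r = 0.1 * np.maximum(np.cos(theta - np.pi / 2), 0.0)
dt, tau = 0.1, 1.0
for _ in range(1000):
    r = r + (dt / tau) * (-r + f(W @ r))

# Population-vector decode: the activity bump persists at the cued heading.
decoded = float(np.angle(np.sum(r * np.exp(1j * theta))))
```

Because the connectivity is a smooth function of angular distance, the bump can settle anywhere on the ring, which is what makes the representation continuous; the paper's point is that tuning a network this small to achieve that costs robustness to noise and heterogeneity.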
Empirical estimates of the dimensionality of neural population activity are often much lower than the population size. Similar phenomena are also observed in trained and designed neural network models. These experimental and computational results suggest that mapping low-dimensional dynamics to high-dimensional neural space is a common feature of cortical computation. Despite the ubiquity of this observation, the constraints arising from such mapping are poorly understood. Here we consider a specific example of mapping low-dimensional dynamics to high-dimensional neural activity: the neural engineering framework. We analytically solve the framework for the classic ring model, a neural network encoding a static or dynamic angular variable. Our results provide a complete characterization of the success and failure modes for this model. Based on similarities between this and other frameworks, we speculate that these results could apply to more general scenarios.
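In the neural engineering framework, a low-dimensional variable is encoded by heterogeneous tuning curves and read out with least-squares linear decoders. A toy sketch for a static angular variable (every tuning parameter below is invented for illustration, not taken from the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                                   # neurons (illustrative count)

# Random preferred directions on the ring; rectified-linear tuning curves
# with heterogeneous gains and biases.
pref = rng.uniform(0.0, 2 * np.pi, n)
E = np.stack([np.cos(pref), np.sin(pref)], axis=1)        # n x 2 encoders
gain = rng.uniform(0.5, 2.0, n)
bias = rng.uniform(-0.5, 0.5, n)

def rates(phi):
    # Encode the angle as a 2-vector, then apply each neuron's tuning curve.
    x = np.array([np.cos(phi), np.sin(phi)])
    return np.maximum(gain * (E @ x) + bias, 0.0)

# Fit linear decoders by least squares over sampled angles.
phis = np.linspace(0.0, 2 * np.pi, 100, endpoint=False)
A = np.stack([rates(p) for p in phis])                    # 100 x n activities
X = np.stack([np.cos(phis), np.sin(phis)], axis=1)        # 100 x 2 targets
D, *_ = np.linalg.lstsq(A, X, rcond=None)                 # n x 2 decoders

# Decode a held-out angle from the population response.
phi_test = 1.234
xy = rates(phi_test) @ D
phi_hat = float(np.arctan2(xy[1], xy[0]))
```

The success and failure modes the abstract refers to concern when such encoder/decoder pairs can reproduce the desired (static or rotating) ring dynamics, which this static-decoding sketch only hints at.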
Mean-field theory is extended to recurrent networks of spiking neurons endowed with short-term depression (STD) of synaptic transmission. The extension involves the use of the distribution of interspike intervals of an integrate-and-fire neuron receiving as input a Gaussian current with a given mean and variance. This, in turn, is used to obtain an accurate estimate of the resulting postsynaptic current in the presence of STD. The stationary states of the network are obtained by requiring self-consistency between the currents: those driving the emission processes and those generated by the emitted spikes. The model network stores a randomly composed set of external stimuli in the distribution of two-state efficacies of excitatory-to-excitatory synapses. The resulting synaptic structure allows the network to exhibit selective persistent activity for each stimulus in the set. The theory predicts the onset of selective persistent, or working memory (WM), activity upon varying the constitutive parameters (e.g. the ratio of potentiated to depressed long-term efficacy, parameters associated with STD), and provides the average emission rates in the various steady states. Theoretical estimates are in remarkably good agreement with data "recorded" in computer simulations of the microscopic model.
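Short-term depression of this kind is commonly modeled along the lines of the Tsodyks-Markram synapse, in which each presynaptic spike consumes a fraction of a recovering resource variable. A minimal event-based sketch (parameter values are illustrative, not the paper's):

```python
import numpy as np

def std_efficacies(spike_times, U=0.5, tau_rec=800.0):
    """Relative efficacy (U * available resources x) of each spike in a
    presynaptic train under short-term synaptic depression.
    Times and tau_rec are in ms; U and tau_rec are illustrative values."""
    x, t_prev, eff = 1.0, None, []
    for t in spike_times:
        if t_prev is not None:
            # Resources recover exponentially toward 1 between spikes.
            x = 1.0 - (1.0 - x) * np.exp(-(t - t_prev) / tau_rec)
        eff.append(U * x)
        x *= (1.0 - U)       # each spike consumes a fraction U of the resources
        t_prev = t
    return eff

# A regular 20 Hz train: successive synaptic responses depress
# toward a steady-state level.
eff = std_efficacies(np.arange(0.0, 500.0, 50.0))
```

The steady-state efficacy that this iteration converges to is exactly the quantity the mean-field treatment needs in order to estimate the postsynaptic current at a given presynaptic rate.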
Hippocampal CA3 is central to memory formation and retrieval. Although various network mechanisms have been proposed, direct evidence is lacking. Using intracellular Vm recordings and optogenetic manipulations in behaving mice, we found that CA3 place-field activity is produced by a symmetric form of behavioral timescale synaptic plasticity (BTSP) at recurrent synapses among CA3 pyramidal neurons but not at synapses from the dentate gyrus (DG). Additional manipulations revealed that excitatory input from the entorhinal cortex (EC) but not the DG was required to update place cell activity based on the animal's movement. These data were captured by a computational model that used BTSP and an external updating input to produce attractor dynamics under online learning conditions. Theoretical analyses further highlight the superior memory storage capacity of such networks, especially when dealing with correlated input patterns. This evidence elucidates the cellular and circuit mechanisms of learning and memory formation in the hippocampus.
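What distinguishes a symmetric behavioral-timescale rule from classic spike-timing-dependent plasticity is that potentiation depends only on the temporal proximity of presynaptic activity to a plateau potential, with a seconds-long window on both sides. The kernel shape and constants below are purely illustrative, not the paper's fitted rule:

```python
import numpy as np

def btsp_weight_change(spike_times, plateau_time, tau=2.0, eta=0.1):
    """Schematic symmetric BTSP-like update: presynaptic spikes within a
    seconds-long window on EITHER side of a dendritic plateau potentiate
    the synapse. Kernel and constants (tau in s) are illustrative."""
    dt = np.asarray(spike_times, dtype=float) - plateau_time
    return eta * float(np.sum(np.exp(-np.abs(dt) / tau)))

# Symmetry: spikes 1 s before or 1 s after the plateau contribute equally,
# unlike the millisecond, sign-asymmetric window of classic STDP.
before = btsp_weight_change([-1.0], plateau_time=0.0)
after = btsp_weight_change([+1.0], plateau_time=0.0)
```

A symmetric kernel of this sort, applied at recurrent synapses, strengthens connections between cells active near the same place, which is what lets the model network develop attractor dynamics under online learning.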
The dilemma that neurotheorists face is that (1) detailed biophysical models that can be constrained by direct measurements, while being of great importance, offer no immediate insights into cognitive processes in the brain, and (2) high-level abstract cognitive models, on the other hand, while relevant for understanding behavior, are largely detached from neuronal processes and typically have many free, experimentally unconstrained parameters that have to be tuned to a particular data set and, hence, cannot be readily generalized to other experimental paradigms. In this contribution, we propose a set of "first principles" for neurally inspired cognitive modeling of memory retrieval that has no biologically unconstrained parameters and can be analyzed mathematically both at neuronal and cognitive levels. We apply this framework to the classical cognitive paradigm of free recall. We show that the resulting model accounts well for puzzling behavioral data on human participants and makes predictions that could potentially be tested with neurophysiological recording techniques.
The brain plans and executes volitional movements. The underlying patterns of neural population activity have been explored in the context of movements of the eyes, limbs, tongue, and head in nonhuman primates and rodents. How do networks of neurons produce the slow neural dynamics that prepare specific movements and the fast dynamics that ultimately initiate these movements? Recent work exploits rapid and calibrated perturbations of neural activity to test specific dynamical systems models that are capable of producing the observed neural activity. These joint experimental and computational studies show that cortical dynamics during motor planning reflect fixed points of neural activity (attractors). Subcortical control signals reshape and move attractors over multiple timescales, causing commitment to specific actions and rapid transitions to movement execution. Experiments in rodents are beginning to reveal how these algorithms are implemented at the level of brain-wide neural circuits.
Human memory can store a large amount of information. Nevertheless, recall is often a challenging task. In a classical free recall paradigm, where participants are asked to repeat a briefly presented list of words, people make mistakes for lists as short as 5 words. We present a model for memory retrieval based on a Hopfield neural network where transitions between items are determined by similarities in their long-term memory representations. Mean-field analysis of the model reveals stable states of the network corresponding to (1) single memory representations and (2) intersections between memory representations. We show that oscillating feedback inhibition in the presence of noise induces transitions between these states, triggering the retrieval of different memories. The network dynamics qualitatively predict the distribution of time intervals required to recall new memory items observed in experiments. The model shows that items having a larger number of neurons in their representation are statistically easier to recall, and it reveals possible bottlenecks in our ability to retrieve memories. Overall, we propose a neural network model of information retrieval that is broadly compatible with experimental observations and consistent with our recent graphical model (Romani et al., 2013).
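A minimal Hopfield-network retrieval sketch, assuming uncorrelated binary items and standard Hebbian storage (sizes and cue corruption are invented for illustration); the paper's model adds correlated representations and oscillating inhibition on top of dynamics like these:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 5                             # neurons and stored items (illustrative)
patterns = rng.choice([-1, 1], size=(p, n))

# Standard Hebbian storage; no self-connections.
W = (patterns.T @ patterns).astype(float) / n
np.fill_diagonal(W, 0.0)

# Cue with a corrupted version of item 0: flip 25% of its bits.
state = patterns[0].astype(float).copy()
flip = rng.choice(n, size=n // 4, replace=False)
state[flip] *= -1.0

# Synchronous sign updates: the state falls into the nearest attractor.
for _ in range(20):
    new = np.sign(W @ state)
    new[new == 0] = 1.0
    if np.array_equal(new, state):
        break
    state = new

overlap = float(state @ patterns[0]) / n  # overlap of 1.0 means perfect retrieval
```

In the paper's setting the stored representations overlap, so the network also has stable states at intersections of items; noise plus oscillating inhibition then knocks the state from one attractor to the next, producing the sequence of recalls.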