44 Publications
Uncertainty is a fundamental aspect of the natural environment, requiring the brain to infer and integrate noisy signals to guide behavior effectively. Sampling-based inference has been proposed as a mechanism for dealing with uncertainty, particularly in early sensory processing. However, it is unclear how to reconcile sampling-based methods with operational principles of higher-order brain areas, such as attractor dynamics of persistent neural representations. In this study, we present a spiking neural network model for the head-direction (HD) system that combines sampling-based inference with attractor dynamics. To achieve this, we derive the required spiking neural network dynamics and interactions to perform sampling from a large family of probability distributions - including variables encoded with Poisson noise. We then propose a method that allows the network to update its estimate of the current head direction by integrating angular velocity samples - derived from noisy inputs - with a pull towards a circular manifold, thereby maintaining consistent attractor dynamics. This model makes specific, testable predictions about the HD system that can be examined in future neurophysiological experiments: it predicts correlated subthreshold voltage fluctuations; distinctive short- and long-term firing correlations among neurons; and characteristic statistics of the movement of the neural activity "bump" representing the head direction. Overall, our approach extends previous theories on probabilistic sampling with spiking neurons, offers a novel perspective on the computations responsible for orientation and navigation, and supports the hypothesis that sampling-based methods can be combined with attractor dynamics to provide a viable framework for studying neural dynamics across the brain.
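The interplay of noise-driven sampling with a pull toward a circular manifold can be illustrated with a toy Langevin-style simulation (a minimal sketch, not the authors' spiking model; all parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def ring_langevin(n_steps=5000, dt=0.01, sigma=0.3, k=10.0):
    """Toy sampler: 2-D noisy dynamics with a restoring pull toward
    the unit circle, so samples concentrate on a circular manifold
    while the angle (the 'head direction') diffuses freely."""
    x = np.array([1.0, 0.0])
    out = np.empty((n_steps, 2))
    for t in range(n_steps):
        r = np.linalg.norm(x)
        pull = -k * (r - 1.0) * x / r          # attract toward radius 1
        x = x + pull * dt + np.sqrt(dt) * sigma * rng.normal(size=2)
        out[t] = x
    return out

samples = ring_langevin()
radii = np.linalg.norm(samples[1000:], axis=1)
print(round(radii.mean(), 2))   # radius stays near 1; angle wanders
```

The radial coordinate behaves like a mean-reverting process (samples concentrate near the ring), while the angular coordinate is unconstrained, which is the qualitative signature of sampling on a circular attractor.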
A cognitive compass enabling spatial navigation requires neural representation of heading direction (HD), yet the neural circuit architecture enabling this representation remains unclear. While various network models have been proposed to explain HD systems, these models rely on simplified circuit architectures that are incompatible with empirical observations from connectomes. Here we construct a novel network model for the fruit fly HD system that satisfies both connectome-derived architectural constraints and the functional requirement of continuous heading representation. We characterize an ensemble of continuous attractor networks where compass neurons providing local mutual excitation are coupled to inhibitory neurons. We discover a new mechanism where continuous heading representation emerges from combining symmetric and anti-symmetric activity patterns. Our analysis reveals three distinct realizations of these networks that all match observed compass neuron activity but differ in their predictions for inhibitory neuron activation patterns. Further, we found that deviations from these realizations can be compensated by cell-type-specific rescaling of synaptic weights, which could be potentially achieved through neuromodulation. This framework can be extended to incorporate the complete fly central complex connectome and could reveal principles of neural circuits representing other continuous quantities, such as spatial location, across insects and vertebrates.
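The core ingredient - local excitation among compass neurons balanced by broader inhibition, sustaining a localized activity "bump" - can be sketched with a standard rate-model ring attractor (a generic textbook construction, not the connectome-constrained model of the paper; all parameters are illustrative):

```python
import numpy as np

N = 32
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
# cosine-shaped recurrence: local excitation plus uniform inhibition
J0, J1 = -0.5, 1.0
W = (2 * np.pi / N) * (J0 + J1 * np.cos(theta[:, None] - theta[None, :]))

def f(x):
    return np.clip(x, 0.0, 5.0)   # threshold-linear with saturation

r = 0.1 + 0.1 * np.cos(theta)     # seed a small bump at angle 0
for _ in range(1000):
    r = r + 0.1 * (-r + f(W @ r + 1.0))

vec = (r * np.exp(1j * theta)).sum()
print(round(np.angle(vec), 2))    # bump stays centered near angle 0
```

Because the connectivity depends only on the difference of preferred angles, a bump of this shape can sit at any heading, which is what makes the representation continuous.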
Hippocampal CA3 is central to memory formation and retrieval. Although various network mechanisms have been proposed, direct evidence is lacking. Using intracellular Vm recordings and optogenetic manipulations in behaving mice, we found that CA3 place-field activity is produced by a symmetric form of behavioral timescale synaptic plasticity (BTSP) at recurrent synapses among CA3 pyramidal neurons but not at synapses from the dentate gyrus (DG). Additional manipulations revealed that excitatory input from the entorhinal cortex (EC) but not the DG was required to update place cell activity based on the animal's movement. These data were captured by a computational model that used BTSP and an external updating input to produce attractor dynamics under online learning conditions. Theoretical analyses further highlight the superior memory storage capacity of such networks, especially when dealing with correlated input patterns. This evidence elucidates the cellular and circuit mechanisms of learning and memory formation in the hippocampus.
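The symmetric character of the plasticity rule can be sketched as a weight update driven by a kernel that depends only on the absolute time between presynaptic activity and the plateau potential (a hypothetical functional form with illustrative time constants, not the fitted rule from the paper):

```python
import numpy as np

def btsp_update(w, pre_times, plateau_time, tau=2.0, eta=0.3, w_max=2.0):
    """One plateau event: each synapse is potentiated by an amount set by
    a kernel symmetric around the plateau time (seconds-long timescale),
    with soft bounding so weights saturate at w_max."""
    dt = np.asarray(pre_times) - plateau_time     # one entry per synapse
    kernel = np.exp(-np.abs(dt) / tau)            # symmetric in time
    return w + eta * kernel * (w_max - w)

w0 = np.full(3, 0.5)
w1 = btsp_update(w0, pre_times=[-1.0, 0.0, 1.0], plateau_time=0.0)
# synapses active 1 s before and 1 s after the plateau potentiate equally
```

The symmetry (identical potentiation before and after the plateau) distinguishes this rule from classical asymmetric spike-timing-dependent plasticity.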
Many animals rely on persistent internal representations of continuous variables for working memory, navigation, and motor control. Existing theories typically assume that large networks of neurons are required to maintain such representations accurately; networks with few neurons are thought to generate discrete representations. However, analysis of two-photon calcium imaging data from tethered flies walking in darkness suggests that their small head-direction system can maintain a surprisingly continuous and accurate representation. We thus ask whether it is possible for a small network to generate a continuous, rather than discrete, representation of such a variable. We show analytically that even very small networks can be tuned to maintain continuous internal representations, but this comes at the cost of sensitivity to noise and variations in tuning. This work expands the computational repertoire of small networks, and raises the possibility that larger networks could represent more and higher-dimensional variables than previously thought.
As we move through the world, we see the same visual scenes from different perspectives. Although we experience perspective deformations, our perception of a scene remains stable. This raises the question of which neuronal representations in visual brain areas are perspective-tuned and which are invariant. Focusing on planar rotations, we introduce a mathematical framework based on the principle of equivariance, which asserts that an image rotation results in a corresponding rotation of neuronal representations, to explain how the same representation can range from being fully tuned to fully invariant. We applied this framework to large-scale simultaneous neuronal recordings from four visual cortical areas in mice, where we found that representations are both tuned and invariant but become more invariant across higher-order areas. While common deep convolutional neural networks show similar trends in orientation-invariance across layers, they are not rotation-equivariant. We propose that equivariance is a prevalent computation of populations of biological neurons to gradually achieve invariance through structured tuning.
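The principle of equivariance - an image rotation producing a corresponding, predictable transformation of the representation - can be illustrated for 90-degree rotations with a toy feature map (quadrant means; an assumption of this sketch, not the neuronal representations analyzed in the paper):

```python
import numpy as np

def quadrant_features(img):
    """Feature vector of quadrant means, ordered counterclockwise
    (I, II, III, IV). A 90-degree counterclockwise image rotation
    cyclically shifts this vector: the map is rotation-equivariant."""
    h, w = img.shape
    quads = [img[:h // 2, w // 2:],   # I: top-right
             img[:h // 2, :w // 2],   # II: top-left
             img[h // 2:, :w // 2],   # III: bottom-left
             img[h // 2:, w // 2:]]   # IV: bottom-right
    return np.array([q.mean() for q in quads])

img = np.random.default_rng(1).random((6, 6))
lhs = quadrant_features(np.rot90(img))        # rotate, then encode
rhs = np.roll(quadrant_features(img), 1)      # encode, then shift
print(np.allclose(lhs, rhs))                  # True: equivariance holds
```

A fully invariant representation would be the special case in which the shift has no effect; equivariance spans the whole range between tuned and invariant codes.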
Flying insects exhibit remarkable navigational abilities controlled by their compact nervous systems. Optic flow, the pattern of changes in the visual scene induced by locomotion, is a crucial sensory cue for robust self-motion estimation, especially during rapid flight. Neurons that respond to specific, large-field optic flow patterns have been studied for decades, primarily in large flies, such as houseflies, blowflies, and hoverflies. The best-known optic-flow sensitive neurons are the large tangential cells of the dipteran lobula plate, whose visual-motion responses, and to a lesser extent, their morphology, have been explored using single-neuron neurophysiology. Most of these studies have focused on the large Horizontal and Vertical System neurons, yet the lobula plate houses a much larger set of 'optic-flow' sensitive neurons, many of which have been challenging to unambiguously identify or to reliably target for functional studies. Here we report the comprehensive reconstruction and identification of the Lobula Plate Tangential (LPT) neurons in an Electron Microscopy (EM) volume of a whole Drosophila brain. This catalog of 58 LPT neurons (per brain hemisphere) contains many neurons that are described here for the first time and provides a basis for systematic investigation of the circuitry linking self-motion to locomotion control. Leveraging computational anatomy methods, we estimated the visual motion receptive fields of these neurons and compared their tuning to the visual consequences of body rotations and translational movements. We also matched these neurons, in most cases on a one-for-one basis, to stochastically labeled cells in genetic driver lines, to the mirror-symmetric neurons in the same EM brain volume, and to neurons in an additional EM data set.
Using cell matches across data sets, we analyzed the integration of optic flow patterns by neurons downstream of the LPTs and find that most central brain neurons establish sharper selectivity for global optic flow patterns than their input neurons. Furthermore, we found that self-motion information extracted from optic flow is processed in distinct regions of the central brain, pointing to diverse foci for the generation of visual behaviors.
Neocortical spiking dynamics control aspects of behavior, yet how these dynamics emerge during motor learning remains elusive. Activity-dependent synaptic plasticity is likely a key mechanism, as it reconfigures network architectures that govern neural dynamics. Here, we examined how the mouse premotor cortex acquires its well-characterized neural dynamics that control movement timing, specifically lick timing. To probe the role of synaptic plasticity, we have genetically manipulated proteins essential for major forms of synaptic plasticity, Ca2+/calmodulin-dependent protein kinase II (CaMKII) and Cofilin, in a region and cell-type-specific manner. Transient inactivation of CaMKII in the premotor cortex blocked learning of new lick timing without affecting the execution of learned action or ongoing spiking activity. Furthermore, among the major glutamatergic neurons in the premotor cortex, CaMKII and Cofilin activity in pyramidal tract (PT) neurons, but not intratelencephalic (IT) neurons, is necessary for learning. High-density electrophysiology in the premotor cortex uncovered that neural dynamics anticipating licks are progressively shaped during learning, which explains the change in lick timing. Such reconfiguration in behaviorally relevant dynamics is impeded by CaMKII manipulation in PT neurons. Altogether, the activity of plasticity-related proteins in PT neurons plays a central role in sculpting neocortical dynamics to learn new behavior.
The brain plans and executes volitional movements. The underlying patterns of neural population activity have been explored in the context of movements of the eyes, limbs, tongue, and head in nonhuman primates and rodents. How do networks of neurons produce the slow neural dynamics that prepare specific movements and the fast dynamics that ultimately initiate these movements? Recent work exploits rapid and calibrated perturbations of neural activity to test specific dynamical systems models that are capable of producing the observed neural activity. These joint experimental and computational studies show that cortical dynamics during motor planning reflect fixed points of neural activity (attractors). Subcortical control signals reshape and move attractors over multiple timescales, causing commitment to specific actions and rapid transitions to movement execution. Experiments in rodents are beginning to reveal how these algorithms are implemented at the level of brain-wide neural circuits.
To flexibly navigate, many animals rely on internal spatial representations that persist when the animal is standing still in darkness, and update accurately by integrating the animal's movements in the absence of localizing sensory cues. Theories of mammalian head direction cells have proposed that these dynamics can be realized in a special class of networks that maintain a localized bump of activity via structured recurrent connectivity, and that shift this bump of activity via angular velocity input. Although there are many different variants of these so-called ring attractor networks, they all rely on large numbers of neurons to generate representations that persist in the absence of input and accurately integrate angular velocity input. Surprisingly, in the fly, Drosophila melanogaster, a head direction representation is maintained by a much smaller number of neurons whose dynamics and connectivity resemble those of a ring attractor network. These findings challenge our understanding of ring attractors and their putative implementation in neural circuits. Here, we analyzed failures of angular velocity integration that emerge in small attractor networks with only a few computational units. Motivated by the peak performance of the fly head direction system in darkness, we mathematically derived conditions under which small networks, even with as few as 4 neurons, achieve the performance of much larger networks. The resulting description reveals that by appropriately tuning the network connectivity, the network can maintain persistent representations over the continuum of head directions, and it can accurately integrate angular velocity inputs. We then analytically determined how performance degrades as the connectivity deviates from this optimally-tuned setting, and we find a trade-off between network size and the tuning precision needed to achieve persistence and accurate integration. 
This work shows how even small networks can accurately track an animal's movements to guide navigation, and it informs our understanding of the functional capabilities of discrete systems more broadly.
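That a handful of units can, in principle, carry a continuous heading variable is easiest to see at the level of encoding: four cosine-tuned neurons with preferred directions 90 degrees apart determine any heading exactly via a population-vector readout (a toy encoding argument, separate from the persistence and integration dynamics analyzed in the paper):

```python
import numpy as np

phi = np.arange(4) * np.pi / 2           # preferred directions, 90 deg apart

def encode(theta):
    """Rates of four cosine-tuned units for heading theta."""
    return np.cos(theta - phi)

def decode(rates):
    """Population-vector readout of the encoded heading."""
    return np.angle((rates * np.exp(1j * phi)).sum())

theta = 2.0                              # any heading in (-pi, pi]
print(round(decode(encode(theta)), 6))   # recovers 2.0 exactly
```

The hard part, which the paper addresses, is not encoding but dynamics: making such a small network hold this representation stably over time and update it accurately, which requires precisely tuned connectivity.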