Filter
Associated Lab
- Druckmann Lab (3)
- Fitzgerald Lab (1)
- Hermundstad Lab (4)
- Jayaraman Lab (5)
- Lee (Albert) Lab (1)
- Leonardo Lab (1)
- Magee Lab (2)
- Pachitariu Lab (1)
- Pastalkova Lab (1)
- Reiser Lab (4)
- Romani Lab (44)
- Rubin Lab (1)
- Spruston Lab (1)
- Svoboda Lab (5)
- Voigts Lab (1)
Publication Date
- 2025 (1)
- 2024 (4)
- 2023 (2)
- 2022 (3)
- 2021 (4)
- 2020 (2)
- 2019 (3)
- 2018 (3)
- 2017 (6)
- 2016 (2)
- 2015 (4)
- 2014 (2)
- 2013 (1)
- 2011 (1)
- 2010 (1)
- 2008 (2)
- 2007 (1)
- 2006 (1)
- 2005 (1)
44 Publications
Showing 1-10 of 44 results

Flying insects exhibit remarkable navigational abilities controlled by their compact nervous systems. Optic flow, the pattern of changes in the visual scene induced by locomotion, is a crucial sensory cue for robust self-motion estimation, especially during rapid flight. Neurons that respond to specific, large-field optic flow patterns have been studied for decades, primarily in large flies such as houseflies, blowflies, and hoverflies. The best-known optic-flow-sensitive neurons are the large tangential cells of the dipteran lobula plate, whose visual-motion responses, and to a lesser extent their morphology, have been explored using single-neuron neurophysiology. Most of these studies have focused on the large Horizontal and Vertical System neurons, yet the lobula plate houses a much larger set of optic-flow-sensitive neurons, many of which have been challenging to unambiguously identify or to reliably target for functional studies. Here we report the comprehensive reconstruction and identification of the Lobula Plate Tangential (LPT) neurons in an Electron Microscopy (EM) volume of a whole Drosophila brain. This catalog of 58 LPT neurons (per brain hemisphere) contains many neurons described here for the first time and provides a basis for systematic investigation of the circuitry linking self-motion to locomotion control. Leveraging computational anatomy methods, we estimated the visual motion receptive fields of these neurons and compared their tuning to the visual consequences of body rotations and translational movements. We also matched these neurons, in most cases on a one-for-one basis, to stochastically labeled cells in genetic driver lines, to the mirror-symmetric neurons in the same EM brain volume, and to neurons in an additional EM data set.
Using cell matches across data sets, we analyzed the integration of optic flow patterns by neurons downstream of the LPTs and found that most central brain neurons establish sharper selectivity for global optic flow patterns than their input neurons. Furthermore, we found that self-motion information extracted from optic flow is processed in distinct regions of the central brain, pointing to diverse foci for the generation of visual behaviors.
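The comparison between a neuron's motion receptive field and the optic flow caused by a particular self-motion can be sketched as a matched-filter computation: score each candidate rotation axis by the cosine similarity between its predicted flow field and the receptive field. Everything below (the grid, the yaw-built receptive field, the function names) is a hypothetical illustration, not the paper's data or methods:

```python
import numpy as np

def rotational_flow(az, el, axis):
    """Optic flow (d_az, d_el) at viewing directions (az, el) induced by
    unit angular velocity about `axis`, on the unit sphere of directions."""
    # Unit viewing-direction vectors
    d = np.stack([np.cos(el) * np.cos(az),
                  np.cos(el) * np.sin(az),
                  np.sin(el)], axis=-1)
    # Apparent motion of a distant point: v = -omega x d
    v = -np.cross(axis, d)
    # Local basis vectors along azimuth and elevation
    e_az = np.stack([-np.sin(az), np.cos(az), np.zeros_like(az)], axis=-1)
    e_el = np.stack([-np.sin(el) * np.cos(az), -np.sin(el) * np.sin(az),
                     np.cos(el)], axis=-1)
    return np.stack([(v * e_az).sum(-1), (v * e_el).sum(-1)], axis=-1)

def matched_filter_score(rf_flow, axis, az, el):
    """Cosine similarity between a receptive-field flow template and the
    rotational flow field for `axis` (1 = perfectly matched)."""
    f = rotational_flow(az, el, axis)
    return (rf_flow * f).sum() / (np.linalg.norm(rf_flow) * np.linalg.norm(f))

# Coarse grid of viewing directions
az, el = np.meshgrid(np.linspace(-np.pi, np.pi, 36),
                     np.linspace(-np.pi / 3, np.pi / 3, 12))
# A hypothetical receptive field built from a yaw (z-axis) rotation
rf = rotational_flow(az, el, np.array([0.0, 0.0, 1.0]))
print(matched_filter_score(rf, np.array([0.0, 0.0, 1.0]), az, el))  # ~1 for yaw
print(matched_filter_score(rf, np.array([1.0, 0.0, 0.0]), az, el))  # near 0 for roll
```

A real analysis would weight the similarity by the neuron's measured local motion sensitivities rather than an idealized template.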
To support cognitive function, the CA3 region of the hippocampus performs computations involving attractor dynamics. Understanding how cellular and ensemble activities of CA3 neurons enable computation is critical for elucidating the neural correlates of cognition. Here we show that CA3 comprises not only classically described pyramid cells with thorny excrescences, but also includes previously unidentified 'athorny' pyramid cells that lack mossy-fiber input. Moreover, the two neuron types have distinct morphological and physiological phenotypes and are differentially modulated by acetylcholine. To understand the contribution of these athorny pyramid neurons to circuit function, we measured cell-type-specific firing patterns during sharp-wave synchronization events in vivo and recapitulated these dynamics with an attractor network model comprising two principal cell types. Our data and simulations reveal a key role for athorny cell bursting in the initiation of sharp waves: transient network attractor states that signify the execution of pattern completion computations vital to cognitive function.
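The modeling ingredient described above, a burst igniting a recurrent transient that adaptation then terminates, can be caricatured in a few lines. The rate equations, parameters, and the `simulate_sharp_wave` name are all an illustrative toy, not the study's two-population attractor model:

```python
import numpy as np

def simulate_sharp_wave(t_burst=5.0, dt=0.01, t_total=40.0,
                        w=2.0, tau_a=5.0, b=2.0):
    """Rate of a recurrently connected 'thorny' population with slow
    adaptation. A brief burst of drive (standing in for athorny-cell
    bursting) at t_burst ignites a self-sustained transient that
    adaptation terminates - a cartoon of sharp-wave initiation with
    illustrative, not fitted, parameters."""
    r, a = 0.0, 0.0
    rates = []
    for t in np.arange(0.0, t_total, dt):
        burst = 1.0 if t_burst <= t < t_burst + 1.0 else 0.0
        drive = max(0.0, np.tanh(w * r - a + burst))  # rectified recurrent drive
        r += dt * (-r + drive)                        # population rate
        a += dt * (-a + b * r) / tau_a                # slow adaptation
        rates.append(r)
    return np.array(rates)

rates = simulate_sharp_wave()
print(rates.max())   # transient event well above the zero baseline
print(rates[-1])     # activity returns to baseline after the event
```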
To flexibly navigate, many animals rely on internal spatial representations that persist when the animal is standing still in darkness, and update accurately by integrating the animal's movements in the absence of localizing sensory cues. Theories of mammalian head direction cells have proposed that these dynamics can be realized in a special class of networks that maintain a localized bump of activity via structured recurrent connectivity, and that shift this bump of activity via angular velocity input. Although there are many different variants of these so-called ring attractor networks, they all rely on large numbers of neurons to generate representations that persist in the absence of input and accurately integrate angular velocity input. Surprisingly, in the fly, Drosophila melanogaster, a head direction representation is maintained by a much smaller number of neurons whose dynamics and connectivity resemble those of a ring attractor network. These findings challenge our understanding of ring attractors and their putative implementation in neural circuits. Here, we analyzed failures of angular velocity integration that emerge in small attractor networks with only a few computational units. Motivated by the peak performance of the fly head direction system in darkness, we mathematically derived conditions under which small networks, even with as few as 4 neurons, achieve the performance of much larger networks. The resulting description reveals that by appropriately tuning the network connectivity, the network can maintain persistent representations over the continuum of head directions, and it can accurately integrate angular velocity inputs. We then analytically determined how performance degrades as the connectivity deviates from this optimally tuned setting, and we found a trade-off between network size and the tuning precision needed to achieve persistence and accurate integration.
This work shows how even small networks can accurately track an animal's movements to guide navigation, and it informs our understanding of the functional capabilities of discrete systems more broadly.
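The core claim, that a small system can both hold a heading without input and integrate angular velocity, can be illustrated with an idealized planar sketch of a circular attractor: the unit circle is a continuum of fixed points, velocity input rotates the state along it, and a stabilizing term pulls the state back onto the manifold. This is a toy abstraction of ring-attractor dynamics, not the paper's derivation:

```python
import numpy as np

def integrate_heading(v, dt=1e-3, lam=5.0):
    """Integrate an angular-velocity sequence v on a circular attractor.

    State s lives in the plane; the unit circle is the attracting 'ring',
    v rotates s along it, and the lam term pulls s back onto the circle.
    Returns the decoded heading angle in radians.
    """
    A = np.array([[0.0, -1.0], [1.0, 0.0]])  # 90-degree rotation generator
    s = np.array([1.0, 0.0])                 # heading starts at 0
    for vt in v:
        ds = vt * (A @ s) + lam * (1.0 - s @ s) * s
        s = s + dt * ds
    return np.arctan2(s[1], s[0])

# A constant 1 rad/s turn for 1 s moves the decoded heading by ~1 rad
print(integrate_heading(np.full(1000, 1.0)))  # close to 1.0

# With zero input the representation persists at its initial value
print(integrate_heading(np.zeros(1000)))      # stays at 0.0
```

A discrete implementation with a handful of rate units, as analyzed in the paper, adds the tuning-precision issues the abstract describes; this continuous sketch only shows the target behavior.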
Many animals maintain an internal representation of their heading as they move through their surroundings. Such a compass representation was recently discovered in a neural population in the Drosophila melanogaster central complex, a brain region implicated in spatial navigation. Here, we use two-photon calcium imaging and electrophysiology in head-fixed walking flies to identify a different neural population that conjunctively encodes heading and angular velocity, and is excited selectively by turns in either the clockwise or counterclockwise direction. We show how these mirror-symmetric turn responses combine with the neurons' connectivity to the compass neurons to create an elegant mechanism for updating the fly's heading representation when the animal turns in darkness. This mechanism, which employs recurrent loops with an angular shift, bears a resemblance to those proposed in theoretical models for rodent head direction cells. Our results provide a striking example of structure matching function for a broadly relevant computation.
Decisions are held in memory until enacted, which makes them potentially vulnerable to distracting sensory input. Gating of information flow from sensory to motor areas could protect memory from interference during decision-making, but the underlying network mechanisms are not understood. Here, we trained mice to detect optogenetic stimulation of the somatosensory cortex, with a delay separating sensation and action. During the delay, distracting stimuli lost influence on behavior over time, even though distractor-evoked neural activity percolated through the cortex without attenuation. Instead, choice-encoding activity in the motor cortex became progressively less sensitive to the impact of distractors. Reverse engineering of neural networks trained to reproduce motor cortex activity revealed that the reduction in sensitivity to distractors was caused by a growing separation in the neural activity space between attractors that encode alternative decisions. Our results show that communication between brain regions can be gated via attractor dynamics, which control the degree of commitment to an action.
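The proposed gating mechanism, in which a growing separation between decision attractors makes a fixed-size distractor ineffective, can be caricatured with a one-dimensional double-well model. The dynamics and numbers below are illustrative stand-ins, not the trained networks from the study:

```python
import numpy as np

def final_choice(a, distractor=0.7, dt=0.01, t_total=20.0):
    """Decision variable x with two attractors at +/-sqrt(a); the parameter
    a sets how separated (and deep) the attractors are. A fixed-size
    distractor kick arrives mid-delay; return the sign of the final state."""
    x = np.sqrt(a)                      # network committed to choice +1
    for t in np.arange(0.0, t_total, dt):
        if abs(t - 5.0) < dt / 2:       # brief distractor input
            x -= distractor
        x += dt * (a * x - x**3)        # double-well attractor dynamics
    return np.sign(x)

print(final_choice(a=0.25))  # shallow, close attractors: distractor flips the choice
print(final_choice(a=1.0))   # well-separated attractors: the choice survives
```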
Learning is primarily mediated by activity-dependent modifications of synaptic strength within neuronal circuits. We discovered that place fields in hippocampal area CA1 are produced by a synaptic potentiation notably different from Hebbian plasticity. Place fields could be produced in vivo in a single trial by potentiation of input that arrived seconds before and after complex spiking. The potentiated synaptic input was not initially coincident with action potentials or depolarization. This rule, named behavioral time scale synaptic plasticity, abruptly modifies inputs that were neither causal nor close in time to postsynaptic activation. In slices, five pairings of subthreshold presynaptic activity and calcium (Ca2+) plateau potentials produced a large potentiation with an asymmetric seconds-long time course. This plasticity efficiently stores entire behavioral sequences within synaptic weights to produce predictive place cell activity.
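The seconds-long, asymmetric timing dependence described above can be sketched as a toy kernel: inputs active before the plateau are weighted with a longer time constant than inputs active after it. The time constants and function name are illustrative choices, not the measured values:

```python
import numpy as np

def btsp_potentiation(input_times, plateau_time,
                      tau_pre=1.3, tau_post=0.7, amp=1.0):
    """Toy weight change for inputs active around a dendritic plateau.

    Inputs that fire within seconds BEFORE the plateau are potentiated with
    a longer time constant than inputs firing AFTER it, giving an
    asymmetric, seconds-long kernel. Parameters are illustrative only.
    """
    dt = plateau_time - np.asarray(input_times)   # > 0: input preceded plateau
    return np.where(dt >= 0,
                    amp * np.exp(-dt / tau_pre),
                    amp * np.exp(dt / tau_post))

# Inputs 2 s before, coincident with, and 2 s after a plateau at t = 10 s
dw = btsp_potentiation([8.0, 10.0, 12.0], 10.0)
print(dw)  # largest at coincidence; before > after at equal offsets
```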
Learning requires neural adaptations thought to be mediated by activity-dependent synaptic plasticity. A relatively non-standard form of synaptic plasticity driven by dendritic calcium spikes, or plateau potentials, has been reported to underlie place field formation in rodent hippocampal CA1 neurons. Here we found that this behavioral timescale synaptic plasticity (BTSP) can also reshape existing place fields via bidirectional synaptic weight changes that depend on the temporal proximity of plateau potentials to pre-existing place fields. When evoked near an existing place field, plateau potentials induced less synaptic potentiation and more depression, suggesting BTSP might depend inversely on postsynaptic activation. However, manipulations of place cell membrane potential and computational modeling indicated that this anti-correlation actually results from a dependence on current synaptic weight such that weak inputs potentiate and strong inputs depress. A network model implementing this bidirectional synaptic learning rule suggested that BTSP enables population activity, rather than pairwise neuronal correlations, to drive neural adaptations to experience.
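A minimal reading of the weight-dependent rule, in which each plateau moves eligible synapses toward a target weight so that weak inputs potentiate and strong inputs depress, can be sketched as follows (the target, learning rate, and names are hypothetical):

```python
import numpy as np

def btsp_bidirectional(w, eligibility, w_target=1.0, eta=0.6):
    """Weight-dependent BTSP-style update: a plateau moves each eligible
    synapse toward a target weight, so weak synapses potentiate and strong
    ones depress. A minimal sketch with illustrative parameters."""
    return w + eta * eligibility * (w_target - w)

w = np.array([0.2, 1.0, 1.8])        # weak, at-target, and strong synapses
elig = np.ones(3)                    # all fully eligible (active near plateau)
w_new = btsp_bidirectional(w, elig)
print(w_new)  # weak synapse moves up, strong synapse moves down
```

This single rule reproduces the anti-correlation noted in the abstract without any explicit dependence on postsynaptic activation: strong synapses are exactly those driving existing place fields.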
As we move through the world, we see the same visual scenes from different perspectives. Although we experience perspective deformations, our perception of a scene remains stable. This raises the question of which neuronal representations in visual brain areas are perspective-tuned and which are invariant. Focusing on planar rotations, we introduce a mathematical framework based on the principle of equivariance, which asserts that an image rotation results in a corresponding rotation of neuronal representations, to explain how the same representation can range from being fully tuned to fully invariant. We applied this framework to large-scale simultaneous neuronal recordings from four visual cortical areas in mice, where we found that representations are both tuned and invariant but become more invariant across higher-order areas. While common deep convolutional neural networks show similar trends in orientation-invariance across layers, they are not rotation-equivariant. We propose that equivariance is a prevalent computation of populations of biological neurons to gradually achieve invariance through structured tuning.
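The equivariance principle, a stimulus rotation producing a corresponding shift of the population response, can be demonstrated with a toy population of rotation-tuned units. The tuning curves and population size here are illustrative, not the recorded data:

```python
import numpy as np

N = 16
preferred = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

def response(stim_angle, kappa=2.0):
    """Von-Mises-tuned population response to a stimulus at stim_angle."""
    return np.exp(kappa * np.cos(preferred - stim_angle))

r0 = response(0.0)
# Rotate the stimulus by one tuning step: equivariance predicts the same
# population pattern, circularly shifted by one unit.
r1 = response(2.0 * np.pi / N)
print(np.allclose(r1, np.roll(r0, 1)))  # True: the representation is equivariant

# An invariant readout (e.g., total activity) is unchanged by the rotation
print(np.isclose(r0.sum(), r1.sum()))   # True
```

Tuned and invariant readouts thus coexist in one equivariant population, which is the reconciliation the abstract proposes.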
Neocortical spiking dynamics control aspects of behavior, yet how these dynamics emerge during motor learning remains elusive. Activity-dependent synaptic plasticity is likely a key mechanism, as it reconfigures network architectures that govern neural dynamics. Here, we examined how the mouse premotor cortex acquires its well-characterized neural dynamics that control movement timing, specifically lick timing. To probe the role of synaptic plasticity, we genetically manipulated proteins essential for major forms of synaptic plasticity, Ca2+/calmodulin-dependent protein kinase II (CaMKII) and Cofilin, in a region- and cell-type-specific manner. Transient inactivation of CaMKII in the premotor cortex blocked learning of new lick timing without affecting the execution of learned action or ongoing spiking activity. Furthermore, among the major glutamatergic neurons in the premotor cortex, CaMKII and Cofilin activity in pyramidal tract (PT) neurons, but not intratelencephalic (IT) neurons, is necessary for learning. High-density electrophysiology in the premotor cortex uncovered that neural dynamics anticipating licks are progressively shaped during learning, which explains the change in lick timing. Such reconfiguration in behaviorally relevant dynamics is impeded by CaMKII manipulation in PT neurons. Altogether, the activity of plasticity-related proteins in PT neurons plays a central role in sculpting neocortical dynamics to learn new behavior.
Uncertainty is a fundamental aspect of the natural environment, requiring the brain to infer and integrate noisy signals to guide behavior effectively. Sampling-based inference has been proposed as a mechanism for dealing with uncertainty, particularly in early sensory processing. However, it is unclear how to reconcile sampling-based methods with operational principles of higher-order brain areas, such as attractor dynamics of persistent neural representations. In this study, we present a spiking neural network model for the head-direction (HD) system that combines sampling-based inference with attractor dynamics. To achieve this, we derive the required spiking neural network dynamics and interactions to perform sampling from a large family of probability distributions - including variables encoded with Poisson noise. We then propose a method that allows the network to update its estimate of the current head direction by integrating angular velocity samples - derived from noisy inputs - with a pull towards a circular manifold, thereby maintaining consistent attractor dynamics. This model makes specific, testable predictions about the HD system that can be examined in future neurophysiological experiments: it predicts correlated subthreshold voltage fluctuations; distinctive short- and long-term firing correlations among neurons; and characteristic statistics of the movement of the neural activity "bump" representing the head direction. Overall, our approach extends previous theories on probabilistic sampling with spiking neurons, offers a novel perspective on the computations responsible for orientation and navigation, and supports the hypothesis that sampling-based methods can be combined with attractor dynamics to provide a viable framework for studying neural dynamics across the brain.

Competing Interest Statement: The authors have declared no competing interest.
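The sampling side of the proposal in the abstract above, representing a posterior over head direction by fluctuating samples, can be sketched with a simple Metropolis chain targeting a von Mises posterior. This chain stands in for, and greatly simplifies, the spiking dynamics derived in the paper; the parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_posterior(mu, kappa, n_samples=20000, step=0.5):
    """Metropolis sampler for a von Mises posterior over head direction.
    The chain's fluctuations around mu play the role of the activity
    bump's jitter in a sampling-based HD representation (toy model)."""
    theta = mu
    samples = np.empty(n_samples)
    for i in range(n_samples):
        prop = theta + step * rng.standard_normal()
        # Accept with the von Mises log-density ratio
        if np.log(rng.random()) < kappa * (np.cos(prop - mu) - np.cos(theta - mu)):
            theta = prop
        samples[i] = np.mod(theta + np.pi, 2.0 * np.pi) - np.pi  # wrap to [-pi, pi)
    return samples

s = sample_posterior(mu=0.7, kappa=8.0)
est = np.angle(np.mean(np.exp(1j * s)))   # circular mean of the samples
print(est)  # close to 0.7; the spread of s encodes the uncertainty
```

Higher kappa (less sensory noise) concentrates the samples, mirroring the prediction that bump jitter should shrink as input reliability grows.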