Memories are believed to be stored in synapses and retrieved by reactivating neural ensembles. Learning alters synaptic weights, which can interfere with previously stored memories that share the same synapses, creating a trade-off between plasticity and stability. Interestingly, neural representations change even in stable environments, without apparent learning or forgetting, a phenomenon known as representational drift. Theoretical studies have suggested that multiple neural representations can correspond to a memory, with postlearning exploration of these representation solutions driving drift. However, it remains unclear whether representations explored through drift differ from those learned or offer unique advantages. Here, we show that representational drift uncovers noise-robust representations that are otherwise difficult to learn. We first define the nonlinear solution space manifold of synaptic weights for fixed input-output mappings, which allows us to disentangle drift from learning and forgetting and simulate drift as diffusion within this manifold. Solutions explored by drift have many inactive and saturated neurons, making them robust to weight perturbations due to noise or continual learning. Such solutions are prevalent and entropically favored by drift, but their lack of gradients makes them difficult to learn and nonconducive to future learning. To overcome this, we introduce an allocation procedure that selectively shifts representations for new stimuli into a learning-conducive regime. By combining allocation with drift, we resolve the trade-off between learnability and robustness.
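The idea of drift as diffusion within a solution manifold can be illustrated with a minimal linear toy model (the paper's manifold is nonlinear; this sketch, with all variable names and sizes chosen for illustration, only shows the core mechanism): random weight perturbations are projected onto the null space of the fixed inputs, so the weights wander while the input-output mapping is preserved.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 20, 50, 5           # fewer constraints than weights -> a solution manifold
X = rng.standard_normal((n, d_in))   # fixed inputs
W = rng.standard_normal((d_in, d_out))
Y = X @ W                            # the input-output mapping to preserve

# Orthogonal projector onto the null space of X: for any perturbation dW,
# X @ (P @ dW) = 0, so W + P @ dW produces exactly the same outputs.
P = np.eye(d_in) - np.linalg.pinv(X) @ X

W0 = W.copy()
for _ in range(1000):                # drift = diffusion within the solution manifold
    W = W + P @ (0.01 * rng.standard_normal((d_in, d_out)))

print("weights moved by:", np.linalg.norm(W - W0))
print("output error:", np.linalg.norm(X @ W - Y))
```

After many steps the weights have moved far from their starting point, yet the outputs are unchanged, which is the sense in which drift explores alternative solutions without learning or forgetting.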
Single-wavelength fluorescent reporters allow visualization of specific neurotransmitters with high spatial and temporal resolution. We report variants of intensity-based glutamate-sensing fluorescent reporter (iGluSnFR) that are functionally brighter; detect submicromolar to millimolar amounts of glutamate; and have blue, cyan, green, or yellow emission profiles. These variants could be imaged in vivo in cases where original iGluSnFR was too dim, resolved glutamate transients in dendritic spines and axonal boutons, and allowed imaging at kilohertz rates.
In order to understand the connectivity of neuronal networks, their constituent neurons should ideally be studied in a common framework. Since morphological data from physiologically characterized and stained neurons usually arise from different individual brains, this can only be performed in a virtual standardized brain that compensates for interindividual variability. The desert locust, Schistocerca gregaria, is an insect species used widely for the analysis of olfactory and visual signal processing, endocrine functions, and neural networks controlling motor output. To provide a common multi-user platform for neural circuit analysis in the brain of this species, we have generated a standardized three-dimensional brain of this locust. Serial confocal images from whole-mount locust brains were used to reconstruct 34 neuropil areas in ten brains. For standardization, we compared two different methods: an iterative shape-averaging (ISA) procedure by using affine transformations followed by iterative nonrigid registrations, and the Virtual Insect Brain (VIB) protocol by using global and local rigid transformations followed by local nonrigid transformations. Both methods generated a standard brain, but for different applications. Whereas the VIB technique was designed to visualize anatomical variability between the input brains, the purpose of the ISA method was the opposite, i.e., to remove this variability. A novel individually labeled neuron, connecting the lobula to the midbrain and deutocerebrum, has been registered into the ISA atlas and demonstrates its usefulness and accuracy for future analysis of neural networks. The locust standard brain is accessible at http://www.3d-insectbrain.com.
Fluorescence is magical. Shine one color of light on a fluorophore and it glows in another color. This property allows imaging of biological systems with high sensitivity: we can visualize individual fluorescent molecules in an ocean of nonfluorescent ones. Fluorescence microscopy has long been used to study isolated cells, both living and dead, but the development of newer, tailored fluorophores is swiftly expanding the use of fluorescence imaging to more complicated systems such as intact animals. In the latest in a long string of transformative work, Sletten and co-workers introduce dyes shrouded with multiple polymer chains, effectively star polymers with a bright fluorophore at the center.
An approaching predator and self-motion toward an object can generate similar looming patterns on the retina, but these situations demand different rapid responses. How central circuits flexibly process visual cues to activate appropriate, fast motor pathways remains unclear. Here we identify two descending neuron (DN) types that control landing and contribute to visuomotor flexibility in Drosophila. For each, silencing impairs visually evoked landing, activation drives landing, and spike rate determines leg extension amplitude. Critically, visual responses of both DNs are severely attenuated during non-flight periods, effectively decoupling visual stimuli from the landing motor pathway when landing is inappropriate. The flight-dependence mechanism differs between DN types. Octopamine exposure mimics flight effects in one, whereas the other probably receives neuronal feedback from flight motor circuits. Thus, this sensorimotor flexibility arises from distinct mechanisms for gating action-specific descending pathways, such that sensory and motor networks are coupled or decoupled according to the behavioral state.
Depending on the behavioral state, hippocampal CA1 pyramidal neurons receive very distinct patterns of synaptic input and likewise produce very different output patterns. We have used simultaneous dendritic and somatic recordings and multisite glutamate uncaging to investigate the relationship between synaptic input pattern, the form of dendritic integration, and action potential output in CA1 neurons. We found that when synaptic input arrives asynchronously or highly distributed in space, the dendritic arbor performs a linear integration that allows the action potential rate and timing to vary as a function of the quantity of the input. In contrast, when synaptic input arrives synchronously and spatially clustered, the dendritic compartment receiving the clustered input produces a highly nonlinear integration that leads to an action potential output that is extraordinarily precise and invariant. We also present evidence that both of these forms of information processing may be independently engaged during the two distinct behavioral states of the hippocampus such that individual CA1 pyramidal neurons could perform two different state-dependent computations: input strength encoding during theta states and feature detection during sharp waves.
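The contrast between linear integration of distributed input and nonlinear integration of clustered input is often captured by a two-layer model in which each dendritic branch applies a sigmoidal nonlinearity to its summed input before the branch outputs sum at the soma. The sketch below is a generic toy of that framework, not the recording or uncaging analysis described above; the threshold and gain values are arbitrary.

```python
import numpy as np

def sigmoid(x, thresh=5.0, gain=2.0):
    # branch nonlinearity: near-zero below threshold, saturating above it
    return 1.0 / (1.0 + np.exp(-gain * (x - thresh)))

def soma_response(inputs_per_branch):
    # two-layer model: each branch sums its synaptic inputs, passes the sum
    # through the sigmoid, and the branch outputs sum linearly at the soma
    return sum(sigmoid(np.sum(b)) for b in inputs_per_branch)

# eight unitary inputs, either spread across eight branches or clustered on one
distributed = [[1.0]] * 8
clustered = [[1.0] * 8] + [[0.0]] * 7

weak = soma_response(distributed)   # each branch stays far below threshold
strong = soma_response(clustered)   # one branch crosses threshold and saturates
print(weak, strong)
```

The same total synaptic drive yields a weak response when distributed and a large, step-like response when clustered, mirroring the qualitative distinction between the two integration modes.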
Sensory function is mediated by interactions between external stimuli and intrinsic cortical dynamics that are evident in the modulation of evoked responses by cortical state. A number of recent studies across different modalities have demonstrated that the patterns of activity in neuronal populations can vary strongly between synchronized and desynchronized cortical states, i.e., in the presence or absence of intrinsically generated up and down states. Here we investigated the impact of cortical state on the population coding of tones and speech in the primary auditory cortex (A1) of gerbils, and found that responses were qualitatively different in synchronized and desynchronized cortical states. Activity in synchronized A1 was only weakly modulated by sensory input, and the spike patterns evoked by tones and speech were unreliable and constrained to a small range of patterns. In contrast, responses to tones and speech in desynchronized A1 were temporally precise and reliable across trials, and different speech tokens evoked diverse spike patterns with extremely weak noise correlations, allowing responses to be decoded with nearly perfect accuracy. Restricting the analysis of synchronized A1 to activity within up states yielded similar results, suggesting that up states are not equivalent to brief periods of desynchronization. These findings demonstrate that the representational capacity of A1 depends strongly on cortical state, and suggest that cortical state should be considered as an explicit variable in all studies of sensory processing.
The central nervous system can generate various behaviors, including motor responses, which we can observe through video recordings. Recent advances in gene manipulation, automated behavioral acquisition at scale, and machine learning enable us to causally link behaviors to their underlying neural mechanisms. Moreover, in some animals, such as the Drosophila melanogaster larva, this mapping is possible at the unprecedented scale of single neurons, allowing us to identify the neural microcircuits generating particular behaviors. These high-throughput screening efforts, linking the activation or suppression of specific neurons to behavioral patterns in millions of animals, provide a rich dataset to explore the diversity of nervous system responses to the same stimuli. However, important challenges remain in identifying subtle behaviors, including immediate and delayed responses to neural activation or suppression, and understanding these behaviors on a large scale. In response to these challenges, we introduce several statistically robust methods for analyzing behavioral data: 1) a generative physical model that regularizes the inference of larval shapes across the entire dataset; 2) an unsupervised kernel-based method for statistical testing in learned behavioral spaces, aimed at detecting subtle deviations in behavior; 3) a generative model for larval behavioral sequences, providing a benchmark for identifying higher-order behavioral changes; 4) a comprehensive analysis technique using suffix trees to categorize genetic lines into clusters based on common action sequences. We showcase these methodologies through a behavioral screen focused on responses to an air puff, analyzing data from 280,716 larvae across 569 genetic lines. Preprint: https://www.biorxiv.org/content/10.1101/2024.05.03.591825v1
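A standard instance of kernel-based two-sample testing, in the spirit of method 2 above, is the maximum mean discrepancy (MMD) with a permutation null. The sketch below is a generic illustration of that idea, not the paper's implementation; the RBF bandwidth, sample sizes, and toy data are all assumptions chosen for clarity.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between two point sets
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # biased MMD^2 estimate: large when X and Y come from different distributions
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2 * rbf_kernel(X, Y, sigma).mean())

def permutation_test(X, Y, n_perm=100, sigma=1.0, seed=0):
    # p-value: fraction of label shuffles with MMD^2 at least as large as observed
    rng = np.random.default_rng(seed)
    observed = mmd2(X, Y, sigma)
    Z = np.vstack([X, Y])
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(Z))
        count += mmd2(Z[idx[:len(X)]], Z[idx[len(X):]], sigma) >= observed
    return observed, (count + 1) / (n_perm + 1)

# toy usage: behavioral features from a control and a shifted "hit" line
rng = np.random.default_rng(1)
ctrl = rng.standard_normal((40, 2))
treat = rng.standard_normal((40, 2)) + 1.5
obs, p = permutation_test(ctrl, treat)
print(f"MMD^2 = {obs:.3f}, p = {p:.3f}")
```

Because the test compares whole distributions rather than means, it can flag subtle shifts in a learned behavioral space that a univariate test would miss.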
