2691 Janelia Publications
Single-beam scanning electron microscopes (SEMs) are widely used to acquire massive datasets for biomedical study, material analysis, and fabrication inspection. Datasets are typically acquired with uniform acquisition: applying the electron beam with the same power and duration to all image pixels, even though pixels vary greatly in their importance for eventual use. Many SEMs are now able to move the beam to any pixel in the field of view without delay, enabling them, in principle, to invest their time budget more effectively with non-uniform imaging.
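As a rough illustration of the non-uniform imaging idea, the sketch below spreads a fixed beam-time budget over pixels in proportion to a per-pixel importance map; the function name, importance weighting, and budget values are hypothetical and not taken from the paper.

```python
import numpy as np

def allocate_dwell_times(importance, total_budget, min_dwell=1e-7):
    """Distribute a fixed imaging time budget (seconds) across pixels.

    Every pixel gets at least `min_dwell`; the remaining budget is split
    in proportion to each pixel's importance score.
    """
    importance = np.asarray(importance, dtype=float)
    n_pixels = importance.size
    remaining = total_budget - n_pixels * min_dwell
    if remaining < 0:
        raise ValueError("Budget too small for the minimum dwell time")
    weights = importance / importance.sum()
    return min_dwell + remaining * weights

# Example: concentrate beam time on a hypothetical central region of interest.
importance_map = np.ones((64, 64))
importance_map[24:40, 24:40] = 10.0
dwell = allocate_dwell_times(importance_map.ravel(), total_budget=0.5)
dwell = dwell.reshape(importance_map.shape)
print(dwell[32, 32], dwell[0, 0])   # ROI pixels receive ~10x more dwell time
```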
Previously, we developed a novel model of anxiety during motivated behavior by training rats to perform a task in which actions executed to obtain a reward were probabilistically punished, and we observed that, after learning, neuronal activity in the ventral tegmental area (VTA) and dorsomedial prefrontal cortex (dmPFC) represents the relationship between action and punishment risk (Park & Moghaddam, 2017). Here we used male and female rats to expand on that work, focusing on neural changes in the dmPFC and VTA associated with the learning of probabilistic punishment and with anxiolytic treatment with diazepam after learning. We find that adaptive neural responses of the dmPFC and VTA during the learning of anxiogenic contingencies are independent of the punisher experience and occur primarily during the peri-action and reward periods. Our results also identify peri-action ramping of VTA neural calcium activity, and VTA-dmPFC correlated activity, as potential markers of the anxiolytic properties of diazepam.
Cognitive maps confer animals with flexible intelligence by representing spatial, temporal and abstract relationships that can be used to shape thought, planning and behaviour. Cognitive maps have been observed in the hippocampus [1], but their algorithmic form and learning mechanisms remain obscure. Here we used large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different linear tracks in virtual reality. Throughout learning, both animal behaviour and hippocampal neural activity progressed through multiple stages, gradually revealing improved task representation that mirrored improved behavioural efficiency. The learning process involved progressive decorrelation of initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine that captures the inherent structure of the task. This decorrelation was driven by individual neurons acquiring task-state-specific responses (that is, 'state cells'). Although various standard artificial neural networks did not naturally capture these dynamics, the clone-structured causal graph, a hidden Markov model variant, uniquely reproduced both the final orthogonalized states and the learning trajectory seen in animals. The observed cellular and population dynamics constrain the mechanisms underlying cognitive map formation in the hippocampus, pointing to hidden state inference as a fundamental computational principle, with implications for both biological and artificial intelligence.
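The clone-structured causal graph mentioned above is a hidden Markov model variant in which each observation symbol is emitted deterministically by its own block of hidden 'clone' states, so temporal context is carried by which clone is active. The following is a minimal sketch of a forward-pass log-likelihood for such a deterministic-emission model, with names and toy parameters chosen for illustration; it is not the implementation used in the study.

```python
import numpy as np

def forward_loglik(obs, T, clones_per_obs):
    """Log-likelihood of an observation sequence under a clone-structured HMM.

    Each observation symbol o owns a block of `clones_per_obs` hidden states
    that emit o deterministically; T is the transition matrix over all clones.
    """
    n_states = T.shape[0]

    def block(o):
        return slice(o * clones_per_obs, (o + 1) * clones_per_obs)

    alpha = np.zeros(n_states)
    alpha[block(obs[0])] = 1.0 / clones_per_obs    # uniform start within the block
    loglik = 0.0
    for o in obs[1:]:
        alpha = alpha @ T                          # propagate through transitions
        masked = np.zeros(n_states)
        masked[block(o)] = alpha[block(o)]         # keep only clones that emit o
        norm = masked.sum()
        loglik += np.log(norm + 1e-300)
        alpha = masked / (norm + 1e-300)
    return loglik

# Toy example: 2 observation symbols, 3 clones each, random transition matrix.
rng = np.random.default_rng(0)
T = rng.random((6, 6))
T /= T.sum(axis=1, keepdims=True)
print(forward_loglik([0, 1, 1, 0], T, clones_per_obs=3))
```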
We propose a framework for detecting action patterns from motion sequences and modeling the sensory-motor relationship of animals, using a generative recurrent neural network. The network has a discriminative part (classifying actions) and a generative part (predicting motion), whose recurrent cells are laterally connected, allowing higher levels of the network to represent high-level phenomena. We test our framework on two types of data, fruit fly behavior and online handwriting. Our results show that 1) taking advantage of unlabeled sequences, by predicting future motion, significantly improves action detection performance when training labels are scarce, 2) the network learns to represent high-level phenomena such as writer identity and fly gender without supervision, and 3) simulated motion trajectories, generated by treating motion prediction as input to the network, look realistic and may be used to qualitatively evaluate whether the model has learnt generative control rules.
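A minimal sketch of the discriminative-plus-generative idea, assuming a single shared recurrent trunk with two output heads; the paper's laterally connected, multi-level architecture is not reproduced, and all module names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class ActionDetectAndPredict(nn.Module):
    """Shared recurrent trunk with a discriminative head (action labels)
    and a generative head (next-frame motion prediction)."""
    def __init__(self, motion_dim, hidden_dim, n_actions):
        super().__init__()
        self.rnn = nn.GRU(motion_dim, hidden_dim, batch_first=True)
        self.classify = nn.Linear(hidden_dim, n_actions)   # discriminative part
        self.predict = nn.Linear(hidden_dim, motion_dim)   # generative part

    def forward(self, motion):                 # motion: (batch, time, motion_dim)
        h, _ = self.rnn(motion)
        return self.classify(h), self.predict(h)

# Semi-supervised flavor: motion prediction is trained on every (unlabeled)
# frame, while the classification loss would use only the labeled frames.
model = ActionDetectAndPredict(motion_dim=8, hidden_dim=64, n_actions=5)
x = torch.randn(2, 100, 8)
logits, next_motion = model(x)
prediction_loss = nn.functional.mse_loss(next_motion[:, :-1], x[:, 1:])
```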
Biological neural networks seem to efficiently select and represent task-relevant features of their inputs, an ability that is also highly sought after in artificial networks. Much work has gone into identifying such representations in both sensory and motor systems; however, less is understood about how representations form under complex learning conditions to support behavior, especially in higher associative brain areas. Our work shows that the hippocampus maintains a robust hierarchical representation of task variables and that this structure can support new learning through minimal changes to the neural representations. bioRxiv Preprint: https://www.doi.org/10.1101/2024.08.21.608911
An agglomerative clustering algorithm merges the most similar pair of clusters at every iteration. The function that evaluates similarity is traditionally hand-designed, but there has been recent interest in supervised or semi-supervised settings in which ground-truth clustered data is available for training. Here we show how to train a similarity function by regarding it as the action-value function of a reinforcement learning problem. We apply this general method to segment images by clustering superpixels, an application that we call Learning to Agglomerate Superpixel Hierarchies (LASH). When applied to a challenging dataset of brain images from serial electron microscopy, LASH dramatically improved segmentation accuracy when clustering supervoxels generated by state-of-the-art boundary detection algorithms. The naive strategy of directly training only supervoxel similarities and applying single-linkage clustering produced less improvement.
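A minimal sketch of the agglomerative loop this method builds on: repeatedly merge the pair of clusters with the highest score under a similarity function, which in LASH would be the learned action-value function (replaced here by a toy stand-in over 1-D points).

```python
import numpy as np
from itertools import combinations

def agglomerate(items, similarity, stop_below=0.5):
    """Greedy agglomerative clustering: repeatedly merge the most similar pair.

    `similarity(a, b)` plays the role of the learned action-value function;
    here it can be any callable scoring how good merging clusters a and b is.
    """
    clusters = [[x] for x in items]
    while len(clusters) > 1:
        (i, j), best = max(
            ((pair, similarity(clusters[pair[0]], clusters[pair[1]]))
             for pair in combinations(range(len(clusters)), 2)),
            key=lambda t: t[1])
        if best < stop_below:        # no remaining merge looks good enough
            break
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Toy stand-in similarity: clusters of 1-D points score high when close together.
sim = lambda a, b: 1.0 / (1.0 + abs(np.mean(a) - np.mean(b)))
print(agglomerate([0.1, 0.2, 5.0, 5.1, 9.0], sim))
# -> [[0.1, 0.2], [5.0, 5.1], [9.0]]
```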
Manifold attractors are a key framework for understanding how continuous variables, such as position or head direction, are encoded in the brain. In this framework, the variable is represented along a continuum of persistent neuronal states that forms a manifold attractor. Neural networks with symmetric synaptic connectivity that can implement manifold attractors have become the dominant model in this framework. In addition to a symmetric connectome, these networks imply homogeneity of individual-neuron tuning curves and symmetry of the representational space; these features are largely inconsistent with neurobiological data. Here, we developed a theory for computations based on manifold attractors in trained neural networks and show how these manifolds can cope with diverse neuronal responses, imperfections in the geometry of the manifold and a high level of synaptic heterogeneity. In such heterogeneous trained networks, a continuous representational space emerges from a small set of stimuli used for training. Furthermore, we find that the network response to external inputs depends on the geometry of the representation and on the level of synaptic heterogeneity in an analytically tractable and interpretable way. Finally, we show that an overly complex geometry of the neuronal representation impairs the attractiveness of the manifold and may lead to its destabilization. Our framework reveals that continuous features can be represented in the recurrent dynamics of heterogeneous networks without assuming unrealistic symmetry. It suggests that the representational space of putative manifold attractors in the brain dictates the dynamics in their vicinity.
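For contrast with the heterogeneous trained networks studied here, the sketch below simulates the classical symmetric ring-attractor construction that the abstract argues is inconsistent with neurobiological data: cosine-tuned excitation plus uniform inhibition yields a localized activity bump that is equally stable at any angle. Parameters are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
# Symmetric connectivity: cosine-tuned excitation plus uniform inhibition.
W = (3.0 * np.cos(theta[:, None] - theta[None, :]) - 1.0) / n

r = 0.1 * rng.random(n)                       # random initial firing rates
dt, tau, drive = 0.1, 1.0, 0.5
for _ in range(3000):
    r += dt / tau * (-r + np.clip(W @ r + drive, 0.0, 1.0))

# The steady state is a localized bump; by rotational symmetry, a bump centered
# at any angle is equally stable, forming a continuous (ring) manifold attractor.
print("bump center (rad):", theta[np.argmax(r)])
```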
Cortical neurons form specific circuits, but the functional structure of this microarchitecture and its relation to behaviour are poorly understood. Two-photon calcium imaging can monitor activity of spatially defined neuronal ensembles in the mammalian cortex. Here we applied this technique to the motor cortex of mice performing a choice behaviour. Head-fixed mice were trained to lick in response to one of two odours, and to withhold licking for the other odour. Mice routinely showed significant learning within the first behavioural session and across sessions. Microstimulation and trans-synaptic tracing identified two non-overlapping candidate tongue motor cortical areas. Inactivating either area impaired voluntary licking. Imaging in layer 2/3 showed neurons with diverse response types in both areas. Activity in approximately half of the imaged neurons distinguished trial types associated with different actions. Many neurons showed modulation coinciding with or preceding the action, consistent with their involvement in motor control. Neurons with different response types were spatially intermingled. Nearby neurons (within approximately 150 μm) showed pronounced coincident activity. These temporal correlations increased with learning within and across behavioural sessions, specifically for neuron pairs with similar response types. We propose that correlated activity in specific ensembles of functionally related neurons is a signature of learning-related circuit plasticity. Our findings reveal a fine-scale and dynamic organization of the frontal cortex that probably underlies flexible behaviour.
Two simple models, vaulting over stiff legs and rebounding over compliant legs, are employed to describe the mechanics of legged locomotion. It is agreed that compliant legs are necessary for describing running and that legs are compliant while walking. Despite this agreement, stiff legs continue to be employed to model walking. Here, we show that leg compliance is necessary to model walking and, in the process, identify the principles that underpin two important features of legged locomotion. First, at the same speed, step length, and stance duration, multiple gaits that differ in the number of leg contraction cycles are possible. Among them, humans and other animals choose a gait with M-shaped vertical ground reaction forces because it is energetically favored. Second, the transition from walking to running occurs because of the inability to redirect the vertical component of the velocity during the double-stance phase. Additionally, we examine the limits of the double spring-loaded inverted pendulum (DSLIP) as a quantitative model for locomotion and conclude that DSLIP is limited as a model for walking. However, insights gleaned from the analytical treatment of DSLIP are general and will inform the construction of more accurate models of walking.
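A minimal sketch of the single-leg spring-loaded inverted pendulum stance phase that models such as DSLIP build on (DSLIP couples two compliant legs and adds a double-stance phase); masses, stiffness, and initial conditions below are illustrative, not the paper's.

```python
import numpy as np

# One compliant leg in stance: point mass on a massless spring leg.
m, g, k, L0 = 70.0, 9.81, 15000.0, 1.0      # mass (kg), gravity, stiffness (N/m), rest length (m)
foot = np.array([0.0, 0.0])                  # foot contact point
pos = np.array([-0.15, 0.97])                # center of mass just after touchdown (leg slightly compressed)
vel = np.array([1.2, -0.3])                  # forward and slightly downward velocity

dt = 1e-4
grf = []                                     # vertical ground reaction force over stance
for _ in range(20000):
    leg = pos - foot
    L = np.linalg.norm(leg)
    if L >= L0:                              # leg back to rest length: takeoff
        break
    force = k * (L0 - L) * leg / L           # spring force directed along the leg
    grf.append(force[1])
    acc = force / m + np.array([0.0, -g])
    vel += acc * dt
    pos += vel * dt

print("stance duration (s):", len(grf) * dt, "peak vertical GRF (N):", max(grf))
```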
Leptin is an adipose tissue hormone that maintains homeostatic control of adipose tissue mass by regulating the activity of specific neural populations controlling appetite and metabolism [1]. Leptin regulates food intake by inhibiting orexigenic agouti-related protein (AGRP) neurons and activating anorexigenic pro-opiomelanocortin (POMC) neurons [2]. However, while AGRP neurons regulate food intake on a rapid time scale, acute activation of POMC neurons has only a minimal effect [3–5]. This has raised the possibility that there is a heretofore unidentified leptin-regulated neural population that suppresses appetite on a rapid time scale. Here, we report the discovery of a novel population of leptin-target neurons expressing basonuclin 2 (Bnc2) that acutely suppress appetite by directly inhibiting AGRP neurons. In contrast to the effect of AGRP activation, BNC2 neuronal activation elicited a place preference indicative of positive valence in hungry but not fed mice. The activity of BNC2 neurons is finely tuned by leptin, sensory food cues, and nutritional status. Finally, deleting leptin receptors in BNC2 neurons caused marked hyperphagia and obesity, similar to that observed with leptin receptor knockout in AGRP neurons. These data indicate that BNC2-expressing neurons are a key component of the neural circuit that maintains energy balance, thus filling an important gap in our understanding of the regulation of food intake and leptin action.