2800 Janelia Publications
Layer 6b (L6b), the deepest neocortical layer, projects to cortical targets and higher-order thalamus and is the only layer responsive to the wake-promoting neuropeptide orexin/hypocretin. These characteristics suggest that L6b can strongly modulate brain state, but projections to L6b and their influence remain unknown. Here, we examine the inputs to L6b ex vivo in the mouse primary somatosensory cortex with rabies-based retrograde tracing and channelrhodopsin-assisted circuit mapping in brain slices. We find that L6b receives its strongest excitatory input from intracortical long-range projection neurons, including those in the contralateral hemisphere. In contrast, local intracortical input and thalamocortical input were significantly weaker. Moreover, our data suggest that L6b receives far less thalamocortical input than other cortical layers. L6b was most strongly inhibited by PV and SST interneurons. This study shows that L6b integrates long-range intracortical information and is not part of the traditional thalamocortical loop.
We present a method, open-source software, and experiments that embed arbitrary deformation vector fields produced by any method (e.g., ANTs or VoxelMorph) in the Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework. This decouples formal diffeomorphic shape analysis from image registration, which has many practical benefits. Shape analysis can be added to study designs without modifying already-chosen image registration methods, and existing databases of deformation fields can be reanalyzed within the LDDMM framework without repeating image registrations. Pairwise time-series studies can be extended to full time-series regression with minimal added computing. The diffeomorphic rigor of image registration methods can be compared by embedding their deformation fields and comparing projection distances. Finally, the added value of formal diffeomorphic shape analysis can be more fairly evaluated when it is derived from and compared to a baseline set of deformation fields. In brief, the method is a straightforward use of geodesic shooting in diffeomorphisms with a deformation field, rather than an image, as the target. This is simpler than the image registration case, leading to a faster implementation that requires fewer user-derived parameters.
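To make the core idea concrete, here is a minimal, heavily simplified PyTorch sketch of fitting a flow whose exponential matches a given target deformation field, with the fitted velocity and the residual serving as stand-ins for a projection onto the space of flows. It uses a stationary velocity field with scaling-and-squaring rather than the paper's full time-varying LDDMM geodesic shooting, and every name in it (`warp`, `exp_disp`, `project_to_flow`) is illustrative, not from the paper's software.

```python
import torch
import torch.nn.functional as F

def warp(field, disp):
    # Sample `field` at x + disp(x); both are (1, 2, H, W) pixel-unit
    # displacement fields. grid_sample wants coordinates in [-1, 1].
    _, _, H, W = field.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32),
                            indexing="ij")
    px = xs[None, None] + disp[:, 0:1]          # absolute x positions
    py = ys[None, None] + disp[:, 1:2]          # absolute y positions
    grid = torch.cat([2 * px / (W - 1) - 1,
                      2 * py / (H - 1) - 1], dim=1).permute(0, 2, 3, 1)
    return F.grid_sample(field, grid, padding_mode="border", align_corners=True)

def exp_disp(v, K=6):
    # Scaling and squaring: exponentiate v via K self-compositions of
    # v / 2^K, giving an (approximately) diffeomorphic displacement.
    d = v / (2 ** K)
    for _ in range(K):
        d = d + warp(d, d)
    return d

def project_to_flow(phi_target, steps=200, lam=1e-3, lr=0.1):
    # Fit a velocity field v so that exp(v) matches the target deformation
    # field; the residual plays the role of a projection distance.
    v = torch.zeros_like(phi_target, requires_grad=True)
    opt = torch.optim.Adam([v], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((exp_disp(v) - phi_target) ** 2).mean() + lam * (v ** 2).mean()
        loss.backward()
        opt.step()
    residual = ((exp_disp(v.detach()) - phi_target) ** 2).mean().sqrt()
    return v.detach(), residual
```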
Naïve Bayes Nearest Neighbour (NBNN) is a simple and effective framework that addresses many of the pitfalls of K-Nearest Neighbour (KNN) classification, and it has yielded competitive results on several computer vision benchmarks. Its central tenet is that, during NN search, a query should not be compared to every example in the database while ignoring class information; instead, a separate NN search is performed within each class, generating one score per class. A key problem with NN techniques, including NBNN, is that they fail when the data representation does not capture perceptual (e.g., class-based) similarity. NBNN circumvents this by using independent, engineered descriptors (e.g., SIFT). To extend its applicability beyond image-based domains, we propose to learn a metric that captures perceptual similarity. Just as Neighbourhood Components Analysis optimizes a differentiable form of KNN classification, we propose 'Class Conditional' metric learning (CCML), which optimizes a soft form of the NBNN selection rule. Typical metric learning algorithms learn either a global or a local metric; our proposed method can instead be adjusted to a particular level of locality by tuning a single parameter. An empirical evaluation on classification and retrieval tasks demonstrates that our method clearly outperforms existing learned distance metrics across a variety of image and non-image datasets.
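As an illustration of what "softening the selection rule" means, here is a short PyTorch sketch (our construction, not the authors' code) of hard NBNN scoring and a differentiable soft-min relaxation under a learned linear metric `L`; the function names and the temperature `tau` are assumptions for the example.

```python
import torch

def class_distances(query_desc, class_descs, L):
    # query_desc: (Q, D) descriptors of one query item
    # class_descs: list of (N_c, D) descriptor banks, one per class
    # L: (D, D) linear map defining the metric ||L(x - y)||^2
    q = query_desc @ L.T
    dists = []
    for bank in class_descs:
        b = bank @ L.T
        dists.append(torch.cdist(q, b) ** 2)   # (Q, N_c) squared distances
    return dists

def nbnn_predict(query_desc, class_descs, L):
    # Hard NBNN: per class, sum each descriptor's distance to its nearest
    # neighbour within that class; predict the class with the smallest sum.
    dists = class_distances(query_desc, class_descs, L)
    scores = torch.stack([d.min(dim=1).values.sum() for d in dists])
    return scores.argmin()

def soft_nbnn_scores(query_desc, class_descs, L, tau=1.0):
    # Soft NBNN: replace the hard min with a soft-min (a scaled negative
    # log-sum-exp), making the rule differentiable in L so the metric can
    # be learned by gradient descent, analogous to how NCA softens KNN.
    dists = class_distances(query_desc, class_descs, L)
    softmins = [-tau * torch.logsumexp(-d / tau, dim=1) for d in dists]
    return torch.stack([s.sum() for s in softmins])   # lower = more likely
```

As `tau` shrinks toward zero the soft-min approaches the hard NBNN minimum, so the hard rule is recovered as a limiting case.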
Animals infer when and where a reward is available from experience with informative sensory stimuli and their own actions. In vertebrates, this is thought to depend upon the release of dopamine from midbrain dopaminergic neurons. Studies of the role of dopamine have focused almost exclusively on the encoding of informative sensory stimuli by these neurons; however, many dopaminergic neurons are active just prior to movement initiation, even in the absence of sensory stimuli. How should current frameworks for understanding the role of dopamine incorporate these observations? To address this question, we review recent anatomical and functional evidence for action-related dopamine signaling. We conclude by proposing a framework in which dopaminergic neurons encode subjective signals of action initiation to solve an internal credit-assignment problem.
Single-beam scanning electron microscopes (SEMs) are widely used to acquire massive datasets for biomedical study, materials analysis, and fabrication inspection. Datasets are typically acquired uniformly: the electron beam is applied with the same power and duration to every image pixel, even though pixels vary greatly in their importance for eventual use. Many SEMs can now move the beam to any pixel in the field of view without delay, enabling them, in principle, to invest their time budget more effectively through non-uniform imaging.
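To make the budgeting idea concrete, here is a hypothetical NumPy sketch of one way to split a fixed dwell-time budget across pixels according to an importance map; the proportional rule and the minimum-dwell floor are our illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def allocate_dwell_times(importance, total_budget, min_dwell=1e-7):
    """importance: (H, W) non-negative per-pixel importance map;
    total_budget: total imaging time in seconds;
    min_dwell: floor so every pixel still receives some signal."""
    imp = np.asarray(importance, dtype=float)
    floor = min_dwell * imp.size
    assert total_budget > floor, "budget must cover the per-pixel floor"
    if imp.sum() > 0:
        weights = imp / imp.sum()
    else:
        weights = np.full(imp.shape, 1.0 / imp.size)  # fall back to uniform
    # Spend the floor uniformly, then distribute the remainder by importance.
    dwell = min_dwell + (total_budget - floor) * weights
    return dwell  # (H, W) seconds per pixel; sums to total_budget

# Uniform acquisition is the special case of a constant importance map:
# allocate_dwell_times(np.ones((512, 512)), total_budget=1.0)
```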
Previously, we developed a novel model of anxiety during motivated behavior by training rats to perform a task in which actions executed to obtain a reward were probabilistically punished, and we observed that, after learning, neuronal activity in the ventral tegmental area (VTA) and dorsomedial prefrontal cortex (dmPFC) represents the relationship between action and punishment risk (Park & Moghaddam, 2017). Here, we used male and female rats to expand on that work, focusing on neural changes in the dmPFC and VTA associated with learning the probabilistic punishment and with anxiolytic treatment with diazepam after learning. We find that adaptive neural responses of the dmPFC and VTA during the learning of anxiogenic contingencies are independent of the punisher experience and occur primarily during the peri-action and reward periods. Our results also identify peri-action ramping of VTA neural calcium activity, and correlated VTA-dmPFC activity, as potential markers of the anxiolytic properties of diazepam.
Cognitive maps confer animals with flexible intelligence by representing spatial, temporal and abstract relationships that can be used to shape thought, planning and behaviour. Cognitive maps have been observed in the hippocampus, but their algorithmic form and learning mechanisms remain obscure. Here we used large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different linear tracks in virtual reality. Throughout learning, both animal behaviour and hippocampal neural activity progressed through multiple stages, gradually revealing improved task representation that mirrored improved behavioural efficiency. The learning process involved progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. This decorrelation process was driven by individual neurons acquiring task-state-specific responses (that is, 'state cells'). Although various standard artificial neural networks did not naturally capture these dynamics, the clone-structured causal graph, a hidden Markov model variant, uniquely reproduced both the final orthogonalized states and the learning trajectory seen in animals. The observed cellular and population dynamics constrain the mechanisms underlying cognitive map formation in the hippocampus, pointing to hidden state inference as a fundamental computational principle, with implications for both biological and artificial intelligence.
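The orthogonalization described above can be summarized with a simple statistic; the sketch below (our illustration, not the study's analysis code) correlates trial-averaged population vectors between the two tracks, a number expected to decay toward zero across sessions as representations decorrelate.

```python
import numpy as np

def cross_track_correlation(act_a, act_b):
    """act_a, act_b: (n_neurons, n_positions) trial-averaged activity on
    track A and track B in one session. Returns the mean Pearson
    correlation between matched position bins, computed across neurons."""
    a = act_a - act_a.mean(axis=0, keepdims=True)
    b = act_b - act_b.mean(axis=0, keepdims=True)
    num = (a * b).sum(axis=0)
    den = np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0) + 1e-12
    return float(np.mean(num / den))

# Across learning, an orthogonalizing code shows this value falling toward
# zero, e.g.: [cross_track_correlation(a, b) for a, b in sessions]
```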
We propose a framework for detecting action patterns from motion sequences and modeling the sensory-motor relationship of animals, using a generative recurrent neural network. The network has a discriminative part (classifying actions) and a generative part (predicting motion), whose recurrent cells are laterally connected, allowing higher levels of the network to represent high-level phenomena. We test our framework on two types of data: fruit fly behavior and online handwriting. Our results show that (1) taking advantage of unlabeled sequences by predicting future motion significantly improves action detection performance when training labels are scarce, (2) the network learns to represent high-level phenomena such as writer identity and fly gender without supervision, and (3) simulated motion trajectories, generated by feeding motion predictions back in as network input, look realistic and may be used to qualitatively evaluate whether the model has learnt generative control rules.
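A minimal PyTorch sketch of this kind of architecture follows (our paraphrase of the description above; the paper's exact wiring may differ): a discriminative GRU for per-frame action labels, a generative GRU for next-step motion prediction, and a lateral connection feeding the discriminative hidden state into the generative cell.

```python
import torch
import torch.nn as nn

class ActionMotionRNN(nn.Module):
    def __init__(self, motion_dim, n_actions, hidden=128):
        super().__init__()
        self.disc_rnn = nn.GRU(motion_dim, hidden, batch_first=True)
        # Lateral connection: the generative cell also receives the
        # discriminative cell's hidden state at each timestep.
        self.gen_rnn = nn.GRU(motion_dim + hidden, hidden, batch_first=True)
        self.action_head = nn.Linear(hidden, n_actions)
        self.motion_head = nn.Linear(hidden, motion_dim)

    def forward(self, motion):                  # motion: (B, T, motion_dim)
        h_disc, _ = self.disc_rnn(motion)       # (B, T, hidden)
        h_gen, _ = self.gen_rnn(torch.cat([motion, h_disc], dim=-1))
        action_logits = self.action_head(h_disc)   # classify each frame
        next_motion = self.motion_head(h_gen)      # predict motion at t+1
        return action_logits, next_motion

# Semi-supervised training: cross-entropy on the (few) labeled frames plus
# a prediction loss on all frames, e.g.
#   logits, pred = model(x)
#   loss = ce(logits[mask], labels[mask]) + mse(pred[:, :-1], x[:, 1:])
```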
Biological neural networks seem to efficiently select and represent task-relevant features of their inputs, an ability that is also highly sought after in artificial networks. Much work has gone into identifying such representations in both sensory and motor systems; however, less is understood about how representations form under complex learning conditions to support behavior, especially in higher associative brain areas. Our work shows that the hippocampus maintains a robust hierarchical representation of task variables and that this structure can support new learning through minimal changes to the neural representations. bioRxiv preprint: https://www.doi.org/10.1101/2024.08.21.608911
