2638 Janelia Publications
Showing 2541-2550 of 2638 results

Modern supervised learning algorithms can learn very accurate and complex discriminating functions. But when these classifiers fail, this complexity becomes a drawback: there is no easy, intuitive way to diagnose why they are failing and remedy the problem. This important question has received little attention. To address it, we propose a novel method for analyzing and understanding a classifier's errors. Our method centers on a measure of how much influence a training example has on the classifier's prediction for a test example. To understand why a classifier mispredicts the label of a given test example, the user can find and review the most influential training examples that caused the misprediction, focusing their attention on the relevant areas of the data space. This helps the user determine whether, and how, the training data is inconsistently labeled or lacking in diversity, or whether the feature representation is insufficient. As exactly computing the influence of each training example is impractical, we propose a novel distance metric that approximates influence for boosting classifiers and is fast enough to be used interactively. We also show several novel use paradigms for our distance metric. Through experiments, we show that it can be used to find incorrectly or inconsistently labeled training examples, to find areas of the data space that need more training data, and to gain insight into which features are missing from the current representation. Code is available at https://github.com/kristinbranson/InfluentialNeighbors.
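The influence-based diagnosis described in this abstract can be illustrated with a minimal sketch. The function name and the use of plain Euclidean distance are simplifications of my own; the paper's actual metric is tailored to boosting classifiers and is not reproduced here:

```python
import numpy as np

def most_influential_neighbors(X_train, y_train, x_test, k=3):
    """Hypothetical stand-in for the influence measure: rank training
    examples by Euclidean proximity to a mispredicted test example."""
    d = np.linalg.norm(X_train - x_test, axis=1)
    idx = np.argsort(d)[:k]
    return idx, y_train[idx]

X = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.1], [5.0, 5.0]])
y = np.array([0, 1, 1, 1])
idx, labels = most_influential_neighbors(X, y, np.array([0.05, 0.05]), k=2)
# the two nearest training examples (indices 0 and 2) carry
# conflicting labels -- a hint of inconsistent labeling
```

Reviewing the labels of the returned neighbors (here they conflict) is the kind of inspection the abstract proposes for diagnosing inconsistently labeled training data.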
In practice, understanding the spatial relationships between the surfaces of an object can significantly improve the performance of object recognition systems. In this paper we propose a novel framework for recognizing objects in pictures taken from arbitrary viewpoints. The idea is to maintain the frontal views of the major faces of objects in a global flat map. An unfolding warping technique is then used to change the pose of the query object in the test view so that all visible surfaces of the object can be observed from a frontal viewpoint, improving the handling of severe occlusions and large viewpoint changes. We demonstrate the effectiveness of our approach through recognition trials on complex objects, with comparison to popular methods.
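As a rough illustration of mapping a slanted object face to a frontal view, the sketch below estimates a planar homography with the standard direct linear transform (DLT). This is only a stand-in for the paper's unfolding warp; the point coordinates and function name are illustrative:

```python
import numpy as np

def homography(src, dst):
    """DLT estimate of the 3x3 matrix H mapping the four src points
    to the four dst points (both (4, 2) arrays)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the null vector of A (last right-singular vector)
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# slanted quadrilateral (a face seen at an angle) -> frontal unit square
src = np.array([[0, 0], [2, 0.5], [2, 1.5], [0, 1]], float)
dst = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
H = homography(src, dst)
p = H @ np.array([2, 0.5, 1.0])
p = p[:2] / p[2]   # the second corner maps to (1, 0)
```

Warping every pixel through such a transform is what "observing a surface from a frontal viewpoint" amounts to for a planar face.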
It is known that sensory deprivation, including postnatal whisker trimming, can lead to severe deficits in the firing rate properties of cortical neurons. Recent results indicate that the development of synchronous discharge among cortical neurons is also influenced by activity, and that correlated discharge is significantly impaired following loss of bilateral sensory input in rats. Here we investigate whether unilateral whisker trimming (unilateral deprivation, UD) after birth interferes in the same way with the development of synchronous discharge in cortex. We measured the coincidence of spikes among pairs of neurons recorded under urethane anesthesia in one whisker barrel field deprived by trimming all contralateral whiskers for 60 days after birth (UD), and in untrimmed controls (CON). In the septal columns around barrels, UD significantly increased coincident discharge among cortical neurons compared with CON, most notably in layers II/III. In contrast, synchronous discharge was normal between layer IV UD barrel neurons, i.e., not different from CON. Thus, while bilateral whisker deprivation (BD) produced a global deficit in the development of synchrony in layer IV, UD did not block the development of synchrony between neurons in layer IV barrels and increased synchrony within septal circuits. We conclude that changes in synchronous discharge after UD are unexpectedly different from those recorded after BD, and we speculate that this effect may be due to driven activity from active commissural inputs arising from the contralateral hemisphere, which received normal activity levels during postnatal development.
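A spike-coincidence measurement of the kind described above can be sketched as a windowed pair count. The 5 ms window and the function below are illustrative assumptions, not the study's exact statistic:

```python
import numpy as np

def coincident_spikes(t1, t2, window=0.005):
    """Count pairs of spikes from two neurons falling within +/-window
    seconds of each other (t1, t2: sorted spike times in seconds)."""
    count = 0
    j = 0
    for t in t1:
        # advance j past spikes in t2 too early to coincide with t
        while j < len(t2) and t2[j] < t - window:
            j += 1
        k = j
        while k < len(t2) and t2[k] <= t + window:
            count += 1
            k += 1
    return count

a = np.array([0.010, 0.050, 0.200])
b = np.array([0.012, 0.049, 0.300])
n = coincident_spikes(a, b)
# two coincidences: 0.010~0.012 and 0.050~0.049 fall within 5 ms
```

In practice such raw counts are compared against a chance level (e.g., from shuffled spike trains) before concluding that synchrony differs between conditions.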
Gaining independent genetic access to discrete cell types is critical to interrogate their biological functions as well as to deliver precise gene therapy. Transcriptomics has allowed us to profile cell populations with extraordinary precision, revealing that cell types are typically defined by a unique combination of genetic markers. Given the lack of adequate tools to target cell types based on multiple markers, most cell types remain inaccessible to genetic manipulation. Here we present CaSSA, a platform to create unlimited genetic switches based on CRISPR/Cas9 (Ca) and the DNA repair mechanism known as single-strand annealing (SSA). CaSSA allows engineering of independent genetic switches, each responding to a specific gRNA. Expressing multiple gRNAs in specific patterns enables multiplex cell-type-specific manipulations and combinatorial genetic targeting. CaSSA is a new genetic tool that conceptually works as an unlimited number of recombinases and will facilitate genetic access to cell types in diverse organisms.
The analysis of single particle trajectories plays an important role in elucidating dynamics within complex environments such as those found in living cells. However, the characterization of intracellular particle motion is often confounded by confinement of the particles within non-trivial subcellular geometries. Here, we focus specifically on the case of particles undergoing Brownian motion within a tubular network, as found in some cellular organelles. An unraveling algorithm is developed to uncouple particle motion from the confining network structure, allowing for an accurate extraction of the diffusion coefficient, as well as differentiating between Brownian and fractional Brownian dynamics. We validate the algorithm with simulated trajectories and then highlight its application to an example system: analyzing the motion of membrane proteins confined in the tubules of the peripheral endoplasmic reticulum in mammalian cells. We show that these proteins undergo diffusive motion with a well-characterized diffusivity. Our algorithm provides a generally applicable approach for disentangling geometric morphology and particle dynamics in networked architectures.
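As a sketch of the final estimation step only: once a trajectory is unraveled onto a one-dimensional arc-length coordinate, the diffusion coefficient follows from the mean squared displacement, MSD(Δt) = 2DΔt in one dimension. The simulation parameters below are arbitrary, and this omits the unraveling algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)
D_true, dt, n = 0.5, 0.01, 100_000

# 1D Brownian motion along the unraveled arc-length coordinate:
# each increment is Gaussian with variance 2 * D * dt
x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D_true * dt), n))

# estimate D from the mean squared displacement at lag one step
msd1 = np.mean(np.diff(x) ** 2)
D_est = msd1 / (2 * dt)
```

Fitting the MSD over a range of lags (rather than one) is what distinguishes ordinary Brownian motion (linear MSD) from fractional Brownian motion (power-law MSD), the distinction the abstract mentions.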
Segmentation of objects in microscopy images is required for many biomedical applications. We introduce object-centric embeddings (OCEs), which embed image patches such that the spatial offsets between patches cropped from the same object are preserved. These learnt embeddings can be used to delineate individual objects and thus obtain instance segmentations. Here, we show theoretically that, under assumptions commonly found in microscopy images, OCEs can be learnt through a self-supervised task that predicts the spatial offset between image patches. Together, this forms an unsupervised cell instance segmentation method, which we evaluate on nine diverse large-scale microscopy datasets. Segmentations obtained with our method lead to substantially improved results compared to state-of-the-art baselines on six out of nine datasets, and perform on par on the remaining three. If ground-truth annotations are available, our method serves as an excellent starting point for supervised training, reducing the amount of ground truth needed by one order of magnitude and thus substantially increasing the practical applicability of our method. Source code is available at github.com/funkelab/cellulus.
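The offset-preserving property can be caricatured in a few lines: penalize the mismatch between the difference of two patch embeddings and the true spatial offset between the patches. This toy loss (names are my own) also shows why a degenerate "embedding = position" solution attains zero loss; the actual method is built around object-centric patches rather than this naive global objective:

```python
import numpy as np

def offset_loss(emb_a, emb_b, pos_a, pos_b):
    """Squared error between the predicted offset (embedding
    difference) and the true spatial offset. All inputs are
    2D vectors."""
    pred_offset = emb_a - emb_b
    true_offset = pos_a - pos_b
    return float(np.sum((pred_offset - true_offset) ** 2))

# an embedding that simply returns patch position satisfies the
# objective exactly: the loss is zero
pa, pb = np.array([3.0, 4.0]), np.array([1.0, 1.0])
loss = offset_loss(pa, pb, pa, pb)
```

In the full method the embeddings are produced by a network over patches cropped from the same object, which is what makes the representation object-centric rather than a trivial coordinate map.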
Background: Segmenting electron microscopy (EM) images of cellular and subcellular processes in the nervous system is a key step in many bioimaging pipelines involving classification and labeling of ultrastructures. However, fully automated techniques to segment images are often susceptible to noise and heterogeneity in EM images (e.g., different histological preparations, different organisms, different brain regions, etc.). Supervised techniques to address this problem are often helpful but require large sets of training data, which are difficult to obtain in practice, especially across many conditions. Results: We propose a new, principled unsupervised algorithm to segment EM images using a two-step approach: edge detection via salient watersheds followed by robust region merging. We performed experiments to gather EM neuroimages of two organisms (mouse and fruit fly) using different histological preparations and generated manually curated ground-truth segmentations. We compared our algorithm against several state-of-the-art unsupervised segmentation algorithms and found superior performance using two standard measures of under- and over-segmentation error. Conclusions: Our algorithm is general and may be applicable to other large-scale segmentation problems for bioimages.
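The two-step structure (edge detection, then region grouping) can be sketched in pure NumPy on a toy image. Simple thresholding and 4-connected labeling below are crude stand-ins for the paper's salient watersheds and robust region merging:

```python
import numpy as np
from collections import deque

def label_regions(mask):
    """4-connected component labeling of region pixels (mask True =
    non-edge). Returns the label image and the number of regions."""
    lab = np.zeros(mask.shape, int)
    cur = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and lab[i, j] == 0:
                cur += 1
                lab[i, j] = cur
                q = deque([(i, j)])
                while q:  # breadth-first flood fill
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and lab[ny, nx] == 0):
                            lab[ny, nx] = cur
                            q.append((ny, nx))
    return lab, cur

# toy image: two flat regions separated by a bright "membrane" column
img = np.zeros((5, 7))
img[:, 3] = 1.0
edges = img > 0.5          # crude edge-detection stand-in
labels, n_regions = label_regions(~edges)
# pixels left and right of the membrane form two separate regions
```

A merging step would then fuse adjacent labeled regions whose shared boundary is weak, which is where the robustness of the paper's method lies.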
Neuroendocrine systems in animals maintain organismal homeostasis and regulate stress response. Although a great deal of work has been done on the neuropeptides and hormones that are released and act on target organs in the periphery, the synaptic inputs onto these neuroendocrine outputs in the brain are less well understood. Here, we use the transmission electron microscopy reconstruction of a whole central nervous system in the larva to elucidate the sensory pathways and the interneurons that provide synaptic input to the neurosecretory cells projecting to the endocrine organs. Predicted by network modeling, we also identify a new carbon dioxide-responsive network that acts on a specific set of neurosecretory cells and that includes those expressing corazonin (Crz) and diuretic hormone 44 (Dh44) neuropeptides. Our analysis reveals a neuronal network architecture for combinatorial action based on sensory and interneuronal pathways that converge onto distinct combinations of neuroendocrine outputs.