37 Janelia Publications
We address the problem of inferring the number of independently blinking fluorescent light emitters when only their combined intensity contributions can be observed at each time point. This problem occurs regularly in light microscopy of objects that are smaller than the diffraction limit, where one wishes to count the number of fluorescently labelled subunits. Our proposed solution directly models the photo-physics of the system, as well as the blinking kinetics of the fluorescent emitters, as a fully differentiable hidden Markov model. Given a trace of intensity over time, our model jointly estimates the parameters of the intensity distribution per emitter, their blinking rates, as well as a posterior distribution of the total number of fluorescent emitters. We show that our model is consistently more accurate and increases the range of countable subunits by a factor of two compared to current state-of-the-art methods, which count based on autocorrelation and blinking frequency. Furthermore, we demonstrate that our model can be used to investigate the effect of blinking kinetics on counting ability, and therefore can inform experimental conditions that will maximize counting accuracy.
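The counting idea described above can be illustrated compactly. The sketch below (NumPy/SciPy) is not the authors' differentiable implementation: the blinking rates, per-emitter brightness mu, and noise sigma are treated as fixed and known rather than jointly optimized, and the posterior over the emitter count is obtained simply by comparing forward-algorithm likelihoods across candidate counts. The hidden state is the number of emitters currently in the "on" state.

```python
import numpy as np
from scipy.stats import binom, norm

def transition_matrix(n, p_on, p_off):
    # T[i, j] = P(j emitters 'on' at t+1 | i 'on' at t), assuming each
    # emitter switches on/off independently with rates p_on / p_off.
    T = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        for j in range(n + 1):
            # j = survivors among the i 'on' emitters
            #     + newly activated emitters among the n - i 'off' emitters
            T[i, j] = sum(
                binom.pmf(s, i, 1.0 - p_off) * binom.pmf(j - s, n - i, p_on)
                for s in range(min(i, j) + 1))
    return T

def trace_log_likelihood(trace, n, p_on, p_off, mu, sigma):
    # Forward algorithm over the hidden 'number of emitters on' state;
    # the observed intensity given k emitters on is Normal(k * mu, sigma).
    states = np.arange(n + 1)
    T = transition_matrix(n, p_on, p_off)
    emit = lambda y: norm.pdf(y, loc=states * mu, scale=sigma)
    alpha = np.full(n + 1, 1.0 / (n + 1)) * emit(trace[0])
    log_l, alpha = np.log(alpha.sum()), alpha / alpha.sum()
    for y in trace[1:]:
        alpha = (alpha @ T) * emit(y)
        c = alpha.sum()
        log_l, alpha = log_l + np.log(c), alpha / c
    return log_l

def count_posterior(trace, candidates, p_on, p_off, mu, sigma):
    # Posterior over the total emitter count, assuming a uniform prior.
    logs = np.array([trace_log_likelihood(trace, n, p_on, p_off, mu, sigma)
                     for n in candidates])
    p = np.exp(logs - logs.max())
    return p / p.sum()

# Toy example: simulate 3 blinking emitters and score candidate counts.
rng = np.random.default_rng(0)
n_true, p_on, p_off, mu, sigma = 3, 0.1, 0.3, 1.0, 0.3
on = rng.random(n_true) < 0.5
trace = []
for _ in range(500):
    on = np.where(on, rng.random(n_true) > p_off, rng.random(n_true) < p_on)
    trace.append(on.sum() * mu + rng.normal(0.0, sigma))
candidates = [1, 2, 3, 4, 5, 6]
print(dict(zip(candidates,
               np.round(count_posterior(np.array(trace), candidates,
                                        p_on, p_off, mu, sigma), 3))))
```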
Animal behavior is principally expressed through neural control of muscles. Therefore, understanding how the brain controls behavior requires mapping neuronal circuits all the way to motor neurons. We have previously established technology to collect large-volume electron microscopy data sets of neural tissue and fully reconstruct the morphology of the neurons and their chemical synaptic connections throughout the volume. Using these tools, we generated a dense wiring diagram, or connectome, for a large portion of the Drosophila central brain. However, in most animals, including the fly, the majority of motor neurons are located outside the brain in a neural center closer to the body, i.e., the mammalian spinal cord or insect ventral nerve cord (VNC). In this paper, we extend our effort to map full neural circuits for behavior by generating a connectome of the VNC of a male fly.
The recent assembly of the adult Drosophila melanogaster central brain connectome, containing more than 125,000 neurons and 50 million synaptic connections, provides a template for examining sensory processing throughout the brain. Here we create a leaky integrate-and-fire computational model of the entire Drosophila brain, on the basis of neural connectivity and neurotransmitter identity, to study circuit properties of feeding and grooming behaviours. We show that activation of sugar-sensing or water-sensing gustatory neurons in the computational model accurately predicts neurons that respond to tastes and are required for feeding initiation. In addition, using the model to activate neurons in the feeding region of the Drosophila brain predicts those that elicit motor neuron firing, a testable hypothesis that we validate by optogenetic activation and behavioural studies. Activating different classes of gustatory neurons in the model makes accurate predictions of how several taste modalities interact, providing circuit-level insight into aversive and appetitive taste processing. Additionally, we applied this model to mechanosensory circuits and found that computational activation of mechanosensory neurons predicts activation of a small set of neurons comprising the antennal grooming circuit, and accurately describes the circuit response upon activation of different mechanosensory subtypes. Our results demonstrate that modelling brain circuits using only synapse-level connectivity and predicted neurotransmitter identity generates experimentally testable hypotheses and can describe complete sensorimotor transformations.
The forthcoming assembly of the adult Drosophila melanogaster central brain connectome, containing over 125,000 neurons and 50 million synaptic connections, provides a template for examining sensory processing throughout the brain. Here, we create a leaky integrate-and-fire computational model of the entire Drosophila brain, based on neural connectivity and neurotransmitter identity, to study circuit properties of feeding and grooming behaviors. We show that activation of sugar-sensing or water-sensing gustatory neurons in the computational model accurately predicts neurons that respond to tastes and are required for feeding initiation. Computational activation of neurons in the feeding region of the Drosophila brain predicts those that elicit motor neuron firing, a testable hypothesis that we validate by optogenetic activation and behavioral studies. Moreover, computational activation of different classes of gustatory neurons makes accurate predictions of how multiple taste modalities interact, providing circuit-level insight into aversive and appetitive taste processing. Our computational model predicts that the sugar and water pathways form a partially shared appetitive feeding initiation pathway, which our calcium imaging and behavioral experiments confirm. Additionally, we applied this model to mechanosensory circuits and found that computational activation of mechanosensory neurons predicts activation of a small set of neurons comprising the antennal grooming circuit that do not overlap with gustatory circuits, and accurately describes the circuit response upon activation of different mechanosensory subtypes. Our results demonstrate that modeling brain circuits purely from connectivity and predicted neurotransmitter identity generates experimentally testable hypotheses and can accurately describe complete sensorimotor transformations.
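Both versions of this study rest on the same core ingredient: a leaky integrate-and-fire simulation driven by a signed connectivity matrix. The sketch below is a minimal NumPy illustration of that ingredient only, not the published model; the membrane parameters, weight scaling, and random connectivity used here are placeholders, whereas the actual model derives weights and excitatory/inhibitory signs from connectome synapse counts and predicted neurotransmitters.

```python
import numpy as np

def simulate_lif(weights, driven, t_steps=2000, dt=0.1, tau=10.0,
                 v_rest=-52.0, v_thresh=-45.0, v_reset=-52.0, i_drive=2.0):
    """Leaky integrate-and-fire network simulation.

    weights[i, j] : effect of a spike in neuron j on neuron i (mV);
                    positive = excitatory, negative = inhibitory.
    driven        : indices of neurons given constant input current,
                    mimicking optogenetic activation of one cell class.
    Returns the spike count of every neuron.
    """
    n = weights.shape[0]
    v = np.full(n, v_rest)
    i_ext = np.zeros(n)
    i_ext[np.asarray(list(driven))] = i_drive
    spikes = np.zeros(n, dtype=int)
    for _ in range(t_steps):
        fired = v >= v_thresh
        spikes += fired
        v[fired] = v_reset
        # leak toward rest + synaptic kicks from this step's spikes + drive
        v = v + dt / tau * (v_rest - v) + weights @ fired + dt * i_ext
    return spikes

# Toy network: 200 neurons, sparse random connectivity, ~20% inhibitory.
rng = np.random.default_rng(1)
n = 200
w = (rng.random((n, n)) < 0.03) * rng.uniform(1.0, 6.0, (n, n))
w[:, rng.random(n) < 0.2] *= -1.0           # inhibitory presynaptic neurons
np.fill_diagonal(w, 0.0)
spikes = simulate_lif(w, driven=range(10))  # "activate" neurons 0-9
print("neurons recruited downstream:", int(np.sum(spikes[10:] > 0)))
```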
Automatic image segmentation is critical to scale up electron microscope (EM) connectome reconstruction. To this end, segmentation competitions, such as CREMI and SNEMI, exist to help researchers evaluate segmentation algorithms with the goal of improving them. Because generating ground truth is time-consuming, these competitions often fail to capture the challenges in segmenting larger datasets required in connectomics. More generally, the common metrics for EM image segmentation do not emphasize impact on downstream analysis and are often not very useful for isolating problem areas in the segmentation. For example, they do not capture connectivity information and often overrate the quality of a segmentation, as we demonstrate later. To address these issues, we introduce a novel strategy to enable evaluation of segmentation at large scales, in both a supervised setting, where ground truth is available, and an unsupervised setting. To achieve this, we first introduce new metrics more closely aligned with the use of segmentation in downstream analysis and reconstruction. In particular, these include synapse connectivity and completeness metrics that provide both meaningful and intuitive interpretations of segmentation quality as it relates to the preservation of neuron connectivity. Also, we propose measures of segmentation correctness and completeness with respect to the percentage of "orphan" fragments and the concentrations of self-loops formed by segmentation failures, which are helpful in analysis and can be computed without ground truth. The introduction of new metrics intended to be used for practical applications involving large datasets necessitates a scalable software ecosystem, which is a critical contribution of this paper. To this end, we introduce a scalable, flexible software framework that enables integration of several different metrics and provides mechanisms to evaluate and debug differences between segmentations. We also introduce visualization software to help users interpret the various metrics collected. We evaluate our framework on two relatively large public ground truth datasets, providing novel insights into example segmentations.
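As a ground-truth-free illustration of two of the quantities discussed above, the sketch below computes the fraction of predicted synapses whose pre- and post-synaptic sides fall into the same segment (self-loops, a symptom of merge errors) and the fraction of segments that carry almost no synapses (orphan fragments, a symptom of split errors or dust). The input arrays are hypothetical, and the definitions and thresholds are simplified stand-ins for the metrics implemented in the paper's framework.

```python
import numpy as np

def connectivity_diagnostics(segment_ids, pre_segments, post_segments,
                             min_synapses=1):
    """Ground-truth-free segmentation diagnostics.

    segment_ids      : IDs of all segments in the volume
    pre_segments[k]  : segment containing the pre-synaptic site of synapse k
    post_segments[k] : segment containing the post-synaptic site of synapse k
    """
    pre = np.asarray(pre_segments)
    post = np.asarray(post_segments)
    # self-loops: both sides of a synapse map to one segment (merge errors)
    self_loop_fraction = float(np.mean(pre == post))
    # orphans: segments with fewer than min_synapses attached synapses
    synaptic, counts = np.unique(np.concatenate([pre, post]),
                                 return_counts=True)
    per_segment = dict(zip(synaptic.tolist(), counts.tolist()))
    orphans = sum(per_segment.get(s, 0) < min_synapses for s in segment_ids)
    return self_loop_fraction, orphans / len(segment_ids)

# Toy example: 6 segments, 5 predicted synapses; the last synapse is a
# self-loop and segments 5 and 6 carry no synapses at all.
segments = [1, 2, 3, 4, 5, 6]
pre  = [1, 1, 2, 3, 4]
post = [2, 3, 3, 1, 4]
print(connectivity_diagnostics(segments, pre, post))
# -> (0.2, 0.333...), i.e. 1/5 self-loop synapses, 2/6 orphan segments
```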
We present a method to automatically identify and track nuclei in time-lapse microscopy recordings of entire developing embryos. The method combines deep learning and global optimization. On a mouse dataset, it reconstructs 75.8% of cell lineages spanning 1 h, as compared to 31.8% for the competing method. Our approach improves understanding of where and when cell fate decisions are made in developing embryos, tissues, and organs.
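The global optimization in this method links candidate nuclei across all frames jointly; as a much-simplified, hypothetical stand-in, the sketch below links detections between two consecutive frames only, by solving a linear assignment problem on centroid distances with SciPy. It is meant to illustrate the linking step in isolation, not to reproduce the published tracker (which scores candidates with a learned network and handles divisions).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def link_frames(centroids_t, centroids_t1, max_distance=15.0):
    """Match nucleus detections in frame t to frame t+1 by minimizing total
    centroid distance; matches farther apart than max_distance are dropped
    and treated as track ends / starts (divisions, deaths, missed detections).
    """
    cost = cdist(centroids_t, centroids_t1)
    rows, cols = linear_sum_assignment(cost)
    return [(int(i), int(j)) for i, j in zip(rows, cols)
            if cost[i, j] <= max_distance]

# Toy example: three nuclei, one of which moves too far to be linked.
frame_t  = np.array([[10.0, 10.0], [40.0, 12.0], [80.0, 30.0]])
frame_t1 = np.array([[12.0, 11.0], [43.0, 14.0], [140.0, 90.0]])
print(link_frames(frame_t, frame_t1))  # -> [(0, 0), (1, 1)]
```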
The study of neural circuits requires the reconstruction of neurons and the identification of synaptic connections between them. To scale the reconstruction to the size of whole-brain datasets, semi-automatic methods are needed to solve those tasks. Here, we present an automatic method for synaptic partner identification in insect brains, which uses convolutional neural networks to identify post-synaptic sites and their pre-synaptic partners. The networks can be trained from human-generated point annotations alone and require only simple post-processing to obtain final predictions. We used our method to extract 244 million putative synaptic partners in the fifty-teravoxel full adult fly brain (FAFB) electron microscopy (EM) dataset and evaluated its accuracy on 146,643 synapses from 702 neurons with a total cable length of 312 mm in four different brain regions. The predicted synaptic connections can be used together with a neuron segmentation to infer a connectivity graph with high accuracy: 96% of edges between connected neurons are correctly classified as weakly connected (fewer than five synapses) or strongly connected (at least five synapses). Our synaptic partner predictions for the FAFB dataset are publicly available, together with a query library allowing automatic retrieval of up- and downstream neurons.
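The weak/strong edge classification reported above reduces to a simple aggregation once each predicted pre- and post-synaptic site has been mapped onto a neuron segmentation. The snippet below sketches only that last step; the partner pairs are hypothetical inputs, and producing them is the job of the networks described in the paper.

```python
from collections import Counter

def connectivity_graph(partner_pairs, strong_threshold=5):
    """Aggregate predicted synaptic partner pairs (pre_neuron, post_neuron)
    into a neuron-level connectivity graph. Edges with at least
    strong_threshold synapses are labelled 'strong', the rest 'weak',
    mirroring the weak/strong split evaluated in the paper."""
    counts = Counter(partner_pairs)
    return {edge: ("strong" if n >= strong_threshold else "weak", n)
            for edge, n in counts.items()}

# Toy example: three predicted partner pairs between neurons 7 and 12,
# six between neurons 7 and 31.
pairs = [(7, 12)] * 3 + [(7, 31)] * 6
print(connectivity_graph(pairs))
# -> {(7, 12): ('weak', 3), (7, 31): ('strong', 6)}
```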
DaCapo is a specialized deep learning library tailored to expedite the training and application of existing machine learning approaches on large, near-isotropic image data. In this correspondence, we introduce DaCapo's unique features optimized for this specific domain, highlighting its modular structure, efficient experiment management tools, and scalable deployment capabilities. We discuss its potential to improve access to large-scale, isotropic image segmentation and invite the community to explore and contribute to this open-source initiative.
Imaging neuronal networks provides a foundation for understanding the nervous system, but resolving dense nanometer-scale structures over large volumes remains challenging for light microscopy (LM) and electron microscopy (EM). Here we show that X-ray holographic nano-tomography (XNH) can image millimeter-scale volumes with sub-100-nm resolution, enabling reconstruction of dense wiring in Drosophila melanogaster and mouse nervous tissue. We performed correlative XNH and EM to reconstruct hundreds of cortical pyramidal cells and show that more superficial cells receive stronger synaptic inhibition on their apical dendrites. By combining multiple XNH scans, we imaged an adult Drosophila leg with sufficient resolution to comprehensively catalog mechanosensory neurons and trace individual motor axons from muscles to the central nervous system. To accelerate neuronal reconstructions, we trained a convolutional neural network to automatically segment neurons from XNH volumes. Thus, XNH bridges a key gap between LM and EM, providing a new avenue for neural circuit discovery.