35 Janelia Publications
What can we learn from a connectome? We constructed a simplified model of the first two stages of the fly visual system, the lamina and medulla. The resulting hexagonal lattice convolutional network was trained using backpropagation through time to perform object tracking in natural scene videos. Networks initialized with weights from connectome reconstructions automatically discovered well-known orientation and direction selectivity properties in T4 neurons and their inputs, while networks initialized at random did not. Our work is the first demonstration that knowledge of the connectome can enable in silico predictions of the functional properties of individual neurons in a circuit, leading to an understanding of circuit function from structure alone.
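The core idea lends itself to a compact illustration. The sketch below is my own toy construction, not the authors' model: the synapse counts, network size, and tracking target are invented stand-ins. It contrasts a connectome-derived weight initialization with a random one and trains the unrolled dynamics with backpropagation through time.

```python
# Toy sketch: connectome-derived vs. random weight initialization, trained with BPTT.
# All quantities (synapse counts, sizes, targets) are invented for illustration.
import torch
import torch.nn as nn

n_inputs, n_cells, T = 8, 4, 20  # hypothetical input cells, downstream cells, time steps

# Hypothetical connectome reconstruction: synapse counts from inputs to cells.
synapse_counts = torch.randint(1, 10, (n_cells, n_inputs)).float()

class TinyCircuit(nn.Module):
    def __init__(self, init_from_connectome: bool):
        super().__init__()
        self.w_in = nn.Parameter(
            synapse_counts / synapse_counts.max() if init_from_connectome
            else torch.randn(n_cells, n_inputs) * 0.1
        )
        self.tau = nn.Parameter(torch.ones(n_cells))   # per-cell time constant
        self.readout = nn.Linear(n_cells, 1)           # e.g. tracked object position

    def forward(self, x):                              # x: (T, n_inputs)
        v = torch.zeros(n_cells)
        outs = []
        for t in range(x.shape[0]):                    # unrolled in time -> BPTT
            v = v + (torch.relu(self.w_in @ x[t]) - v) / torch.clamp(self.tau, min=0.1)
            outs.append(self.readout(v))
        return torch.stack(outs)

model = TinyCircuit(init_from_connectome=True)
optim = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.randn(T, n_inputs)                           # stand-in for a natural-scene clip
target = torch.randn(T, 1)                             # stand-in tracking target
loss = ((model(x) - target) ** 2).mean()
loss.backward()                                        # gradients flow back through time
optim.step()
```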
The field of connectomics has recently produced neuron wiring diagrams from relatively large brain regions in multiple animals. Most of these neural reconstructions were computed from isotropic (e.g., FIBSEM) or near-isotropic (e.g., SBEM) data. Despite remarkable progress on algorithms in recent years, automatic dense reconstruction from anisotropic data remains a challenge for the connectomics community. One significant hurdle in the segmentation of anisotropic data is the difficulty of generating a suitable initial over-segmentation. In this study, we present a segmentation method for anisotropic EM data that agglomerates a 3D over-segmentation computed from 3D affinity predictions. A 3D U-net is trained to predict 3D affinities using the MALIS approach. Experiments on multiple datasets demonstrate the strength and robustness of the proposed method for anisotropic EM segmentation.
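As a rough illustration of the pipeline described above (affinity prediction, over-segmentation, agglomeration), the sketch below uses random numbers in place of the 3D U-net output and a simple watershed plus greedy merging. It is a simplification under my own assumptions, not the paper's implementation.

```python
# Rough sketch: affinities -> seeded-watershed over-segmentation -> greedy agglomeration.
# Random numbers stand in for the 3D U-net affinity prediction.
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

rng = np.random.default_rng(0)
aff = rng.random((3, 8, 64, 64))            # stand-in z/y/x affinities for an 8x64x64 volume
boundary = 1.0 - aff.mean(axis=0)           # high where affinity (same-object evidence) is low

# Over-segmentation: seed at low-boundary voxels, flood with watershed.
seeds, _ = ndimage.label(boundary < 0.25)
fragments = watershed(boundary, seeds)

# Score each touching fragment pair by the mean affinity across their shared faces.
scores = {}
for axis in range(3):
    a = np.moveaxis(fragments, axis, 0)
    c = np.moveaxis(aff[axis], axis, 0)
    u, v = a[:-1], a[1:]
    touching = (u != v) & (u > 0) & (v > 0)
    for p, q, s in zip(u[touching], v[touching], c[1:][touching]):
        scores.setdefault((min(p, q), max(p, q)), []).append(s)

# Greedy agglomeration with a simple union-find.
parent = {int(l): int(l) for l in np.unique(fragments) if l > 0}
def find(x):
    while parent[x] != x:
        x = parent[x]
    return x

for (p, q), s in sorted(scores.items(), key=lambda kv: -np.mean(kv[1])):
    if np.mean(s) > 0.6:                    # merge threshold; would be tuned per dataset
        parent[find(int(p))] = find(int(q))

segmentation = np.vectorize(lambda l: find(int(l)) if l > 0 else 0)(fragments)
```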
The study of neural circuits requires the reconstruction of neurons and the identification of synaptic connections between them. To scale reconstruction to whole-brain datasets, semi-automatic methods are needed for these tasks. Here, we present an automatic method for synaptic partner identification in insect brains, which uses convolutional neural networks to identify post-synaptic sites and their pre-synaptic partners. The networks can be trained from human-generated point annotations alone and require only simple post-processing to obtain final predictions. We used our method to extract 244 million putative synaptic partners from the fifty-teravoxel full adult fly brain (FAFB) electron microscopy (EM) dataset and evaluated its accuracy on 146,643 synapses from 702 neurons with a total cable length of 312 mm in four different brain regions. The predicted synaptic connections can be used together with a neuron segmentation to infer a connectivity graph with high accuracy: 96% of edges between connected neurons are correctly classified as either weakly connected (fewer than five synapses) or strongly connected (at least five synapses). Our synaptic partner predictions for the FAFB dataset are publicly available, together with a query library allowing automatic retrieval of up- and downstream neurons.
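The edge classification criterion is simple to state in code. The sketch below is illustrative only; the coordinates, neuron IDs, and segmentation lookup are fabricated. It shows how predicted pre/post synaptic partner pairs plus a neuron segmentation yield a weighted connectivity graph whose edges are binned as weak (fewer than five synapses) or strong (at least five).

```python
# Illustrative sketch: predicted synaptic partner pairs + segmentation -> connectivity graph.
from collections import Counter

# Each prediction is a (pre-synaptic site, post-synaptic site) pair of voxel coordinates.
predicted_partners = [((10, 4, 7), (11, 4, 7)), ((10, 4, 7), (12, 5, 7)),
                      ((30, 2, 1), (31, 2, 1))]

# Toy segmentation: map every voxel we care about to a neuron id.
segmentation = {(10, 4, 7): 1, (11, 4, 7): 2, (12, 5, 7): 3, (30, 2, 1): 1, (31, 2, 1): 2}

def neuron_id(coord, segmentation):
    """Look up the neuron (segment) containing a voxel coordinate."""
    return segmentation[coord]

synapse_counts = Counter(
    (neuron_id(pre, segmentation), neuron_id(post, segmentation))
    for pre, post in predicted_partners
)

edges = {
    (pre, post): ("strong" if n >= 5 else "weak", n)
    for (pre, post), n in synapse_counts.items()
}
print(edges)   # {(1, 2): ('weak', 2), (1, 3): ('weak', 1)}
```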
Brains encode behaviors using neurons amenable to systematic classification by gene expression. The contribution of molecular identity to neural coding is not understood because of the challenges involved in measuring neural dynamics and molecular information from the same cells. We developed CaRMA (calcium and RNA multiplexed activity) imaging, which records in vivo single-neuron calcium dynamics followed by gene expression analysis. We simultaneously monitored activity in hundreds of neurons in the mouse paraventricular hypothalamus (PVH). Combinations of cell-type marker genes had predictive power for neuronal responses across 11 behavioral states. The PVH uses combinatorial assemblies of molecularly defined neuron populations for grouped-ensemble coding of survival behaviors. The neuropeptide Y receptor type 1 (Npy1r) amalgamated multiple cell types with similar responses. Our results show that molecularly defined neurons are important processing units for brain function.
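The decoding question at the heart of the study can be illustrated with a toy example. The sketch below is not the CaRMA analysis itself; the gene labels and responses are randomly generated stand-ins. It asks whether a combination of binary marker-gene labels predicts a neuron's response in one behavioral state.

```python
# Toy sketch: do combinations of marker-gene labels predict a neuron's response?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_neurons, n_marker_genes = 300, 12

# Binary marker-gene expression per neuron (e.g. from post hoc multiplexed FISH).
genes = rng.integers(0, 2, size=(n_neurons, n_marker_genes))

# Fake "responsive vs. non-responsive" label for one behavioral state, constructed
# to depend on a combination of two marker genes so the decoder has signal.
response = ((genes[:, 0] & genes[:, 3]) | (rng.random(n_neurons) < 0.1)).astype(int)

decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, genes, response, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```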
In recent years, two-photon calcium imaging has become a standard tool to probe the function of neural circuits and to study computations in neuronal populations. However, the acquired signal is only an indirect measurement of neural activity because of the comparatively slow dynamics of fluorescent calcium indicators. Different algorithms for estimating spike rates from noisy calcium measurements have been proposed in the past, but it remains an open question how far performance can be improved. Here, we report the results of the spikefinder challenge, launched to catalyze the development of new spike rate inference algorithms through crowdsourcing. We present ten of the submitted algorithms, which show improved performance compared to previously evaluated methods. Interestingly, the top-performing algorithms are based on a wide range of principles, from deep neural networks to generative models, yet provide highly correlated estimates of the neural activity. The competition shows that benchmark challenges can drive algorithmic developments in neuroscience.
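As a toy illustration of the benchmark setup (not one of the submitted algorithms), the sketch below simulates a calcium trace from a spike train, infers a spike rate with a trivial baseline, and scores it by correlation with the ground truth; the indicator model and noise level are invented.

```python
# Toy sketch of the spike-inference task: simulate calcium, infer a rate, score by correlation.
import numpy as np

rng = np.random.default_rng(0)
T, dt, tau = 2000, 0.01, 0.5                      # samples, sample period (s), indicator decay (s)

spikes = (rng.random(T) < 0.02).astype(float)     # ground-truth spike train
kernel = np.exp(-np.arange(0, 5 * tau, dt) / tau) # toy calcium-indicator impulse response
calcium = np.convolve(spikes, kernel)[:T] + 0.2 * rng.standard_normal(T)

# Trivial baseline inference: positive part of the first difference of the trace.
inferred = np.clip(np.diff(calcium, prepend=calcium[0]), 0, None)

def correlation(a, b):
    return float(np.corrcoef(a, b)[0, 1])

print(f"correlation with true spikes: {correlation(inferred, spikes):.2f}")
```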
Molecular profiles of neurons influence information processing, but bridging the gap between genes, circuits, and behavior has been very difficult. Furthermore, the behavioral state of an animal continuously changes across development and as a result of sensory experience. How behavioral state influences molecular cell state is poorly understood. Here we present a complete atlas of the Drosophila larval central nervous system composed of over 200,000 single cells across four developmental stages. We develop polyseq, a Python package, to perform cell-type analyses. We use single-molecule RNA-FISH to validate our scRNAseq findings. To investigate how internal state affects cell state, we optogenetically altered internal state with high-throughput behavior protocols designed to mimic wasp sting and overactivation of the memory system. We found nervous-system-wide and neuron-specific gene expression changes. This resource is valuable for developmental biology and neuroscience, and it advances our understanding of how genes, neurons, and circuits generate behavior.
The availability of both anatomical connectivity and brain-wide neural activity measurements in C. elegans makes the worm a promising system for learning detailed, mechanistic models of an entire nervous system in a data-driven way. However, constructing such a model faces several challenges. We often do not have direct experimental access to important modeling details such as single-neuron dynamics and the signs and strengths of the synaptic connectivity. Further, neural activity can only be measured in a subset of neurons, often indirectly via calcium imaging, and significant trial-to-trial variability has been observed. To address these challenges, we introduce a connectome-constrained latent variable model (CC-LVM) of the unobserved voltage dynamics of the entire C. elegans nervous system and the observed calcium signals. We used the framework of variational autoencoders to fit the parameters of the mechanistic simulation constituting the generative model of the LVM to calcium imaging observations. A variational approximate posterior distribution over latent voltage traces for all neurons is efficiently inferred using an inference network and constrained by a prior distribution given by the biophysical simulation of neural dynamics. We applied this model to an experimental whole-brain dataset and found that connectomic constraints enable our LVM to predict the activity of neurons whose activity was withheld significantly better than models unconstrained by a connectome. We explored models with different degrees of biophysical detail and found that models with realistic conductance-based synapses provide markedly better predictions than current-based synapses for this system.
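The structure of such a model can be sketched compactly. The code below is a heavily simplified stand-in for the CC-LVM (toy shapes, a made-up observation model, and current-based rather than conductance-based dynamics): an inference network maps observed calcium to a Gaussian posterior over voltage traces, and the ELBO combines a calcium likelihood with a prior given by one-step connectome-constrained dynamics.

```python
# Simplified stand-in for a connectome-constrained latent variable model (CC-LVM).
# Shapes, dynamics, and the observation model are toy choices, not the published model.
import torch
import torch.nn as nn

N, T = 10, 50                                         # neurons, time steps
W = torch.randn(N, N) * (torch.rand(N, N) < 0.2)      # stand-in signed, sparse "connectome" weights

class InferenceNet(nn.Module):                        # q(v | calcium)
    def __init__(self):
        super().__init__()
        self.net = nn.GRU(input_size=N, hidden_size=64, batch_first=True)
        self.mu = nn.Linear(64, N)
        self.logvar = nn.Linear(64, N)
    def forward(self, calcium):                       # calcium: (1, T, N)
        h, _ = self.net(calcium)
        return self.mu(h), self.logvar(h)

def dynamics_prior_logp(v):
    """Log-probability of voltages under one-step connectome-constrained dynamics."""
    pred = v[:, :-1] + 0.1 * (torch.tanh(v[:, :-1]) @ W.T - v[:, :-1])
    return -((v[:, 1:] - pred) ** 2).sum()

def calcium_loglik(calcium, v):
    return -((calcium - torch.sigmoid(v)) ** 2).sum() # crude observation model

q = InferenceNet()
optim = torch.optim.Adam(q.parameters(), lr=1e-3)
calcium = torch.randn(1, T, N)                        # stand-in recording (in reality, only observed neurons)

mu, logvar = q(calcium)
v = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterized voltage sample
elbo = calcium_loglik(calcium, v) + dynamics_prior_logp(v) + 0.5 * logvar.sum()  # last term: q entropy (up to a constant)
(-elbo).backward()
optim.step()
```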
We can now measure the connectivity of every neuron in a neural circuit, but we cannot measure other biological details, including the dynamical characteristics of each neuron. The degree to which measurements of connectivity alone can inform the understanding of neural computation is an open question. Here we show that, with experimental measurements of only the connectivity of a biological neural network, we can predict the neural activity underlying a specified neural computation. We constructed a model neural network with the experimentally determined connectivity for 64 cell types in the motion pathways of the fruit fly optic lobe, but with unknown parameters for the single-neuron and single-synapse properties. We then optimized the values of these unknown parameters using techniques from deep learning to allow the model network to detect visual motion. Our mechanistic model makes detailed, experimentally testable predictions for each neuron in the connectome. We found that model predictions agreed with experimental measurements of neural activity across 26 studies. Our work demonstrates a strategy for generating detailed hypotheses about the mechanisms of neural circuit function from connectivity measurements. We show that this strategy is more likely to be successful when neurons are sparsely connected, a universally observed feature of biological neural networks across species and brain regions.
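The optimization strategy can be sketched at toy scale. In the code below (my own schematic, not the published model), the wiring pattern is a fixed sparse mask standing in for the connectome, while synapse strengths, signs, and time constants are free parameters trained by gradient descent on a stand-in motion-detection loss.

```python
# Schematic sketch: fixed connectome wiring, trainable neuron/synapse parameters,
# optimized by gradient descent on a task loss. All quantities are toy stand-ins.
import torch
import torch.nn as nn

n_cells, T = 16, 30
connectome_mask = (torch.rand(n_cells, n_cells) < 0.1).float()   # fixed sparse wiring (stand-in)

class ConnectomeConstrainedNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.strength = nn.Parameter(torch.full((n_cells, n_cells), 0.1))  # trainable synapse strengths
        self.sign = nn.Parameter(torch.zeros(n_cells))                     # trainable per-presynaptic-cell sign
        self.tau = nn.Parameter(torch.ones(n_cells))                       # trainable time constants
    def forward(self, stim):                                               # stim: (T, n_cells)
        W = connectome_mask * self.strength * torch.tanh(self.sign)        # wiring fixed, parameters free
        r = torch.zeros(n_cells)
        out = []
        for t in range(stim.shape[0]):
            drive = torch.relu(W @ r + stim[t])
            r = r + (drive - r) / torch.clamp(self.tau, min=0.1)
            out.append(r.mean())                                           # crude stand-in readout
        return torch.stack(out)

model = ConnectomeConstrainedNet()
optim = torch.optim.Adam(model.parameters(), lr=1e-2)
stim = torch.randn(T, n_cells)          # stand-in visual input
target = torch.randn(T)                 # stand-in motion signal to detect
loss = ((model(stim) - target) ** 2).mean()
loss.backward()
optim.step()
```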
Numerous efforts to generate "connectomes," or synaptic wiring diagrams, of large neural circuits or entire nervous systems are currently underway. These efforts promise an abundance of data to guide theoretical models of neural computation and test their predictions. However, there is not yet a standard set of tools for incorporating the connectivity constraints that these datasets provide into the models typically studied in theoretical neuroscience. This article surveys recent approaches to building models with constrained wiring diagrams and the insights they have provided. It also describes challenges and the need for new techniques to scale these approaches to ever more complex datasets.