35 Publications
Dopaminergic neurons with distinct projection patterns and physiological properties compose memory subsystems in a brain. However, it is poorly understood whether or how they interact during complex learning. Here, we identify a feedforward circuit formed between dopamine subsystems and show that it is essential for second-order conditioning, an ethologically important form of higher-order associative learning. The Drosophila mushroom body comprises a series of dopaminergic compartments, each of which exhibits distinct memory dynamics. We find that a slow and stable memory compartment can serve as an effective “teacher” by instructing other faster and transient memory compartments via a single key interneuron, which we identify by connectome analysis and neurotransmitter prediction. This excitatory interneuron acquires enhanced response to reward-predicting odor after first-order conditioning and, upon activation, evokes dopamine release in the “student” compartments. These hierarchical connections between dopamine subsystems explain distinct properties of first- and second-order memory long known by behavioral psychologists.
Deep neural networks trained to inpaint partially occluded images show a deep understanding of image composition and have even been shown to remove objects from images convincingly. In this work, we investigate how this implicit knowledge of image composition can be used to separate cells in densely populated microscopy images. We propose a measure for the independence of two image regions given a fully self-supervised inpainting network and separate objects by maximizing this independence. We evaluate our method on two cell segmentation datasets and show that cells can be separated in a completely unsupervised manner. Furthermore, combined with simple foreground detection, our method yields instance segmentation of similar quality to fully supervised methods.
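The independence measure above can be sketched in miniature. Here a trivial mean-value predictor (`inpaint_mean`, a hypothetical stand-in for the trained self-supervised inpainting network) scores how much seeing one region improves reconstruction of the other; regions belonging to the same cell should be more informative about each other than regions of different objects:

```python
import numpy as np

def inpaint_mean(image, visible_mask):
    """Toy stand-in for a self-supervised inpainting network:
    predicts every hidden pixel as the mean of the visible pixels."""
    return np.full_like(image, image[visible_mask].mean(), dtype=float)

def independence_score(image, region_a, region_b):
    """Higher score = region_b carries more information about region_a,
    i.e. the two regions are *less* independent (likely the same object)."""
    context = ~(region_a | region_b)
    # Inpaint A from the context alone, then from context plus B.
    pred_without_b = inpaint_mean(image, context)
    pred_with_b = inpaint_mean(image, context | region_b)
    # Improvement in reconstructing A once B is made visible.
    err_without = np.abs(pred_without_b[region_a] - image[region_a]).mean()
    err_with = np.abs(pred_with_b[region_a] - image[region_a]).mean()
    return err_without - err_with
```

With a real inpainting network the same comparison is made with learned predictions; separating objects then amounts to choosing the region partition that minimizes this dependence.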
We present a method combining affinity prediction with region agglomeration, which significantly improves upon the state of the art in neuron segmentation from electron microscopy (EM) in both accuracy and scalability. Our method consists of a 3D U-net, trained to predict affinities between voxels, followed by iterative region agglomeration. We train using a structured loss based on MALIS, encouraging topologically correct segmentations obtained from affinity thresholding. Our extension consists of two parts: first, we present a quasi-linear method to compute the loss gradient, improving over the original quadratic algorithm; second, we compute the gradient in two separate passes to avoid spurious gradient contributions in early training stages. Our predictions are accurate enough that simple learning-free percentile-based agglomeration outperforms more involved methods used earlier on inferior predictions. We present results on three diverse EM datasets, achieving relative improvements over previous results of 27%, 15%, and 250%. Our findings suggest that a single method can be applied to both nearly isotropic block-face EM data and anisotropic serial-section EM data. The runtime of our method scales linearly with the size of the volume and achieves a throughput of ~2.6 seconds per megavoxel, qualifying our method for the processing of very large datasets.
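The final learning-free agglomeration step can be illustrated with a minimal union-find sketch, assuming a boundary score per fragment pair (e.g. a percentile of the predicted affinities on their shared boundary) has already been computed. This is not the paper's implementation, which operates on full 3D affinity volumes, but it shows the merge criterion:

```python
def agglomerate(num_fragments, scored_edges, threshold):
    """Percentile-style agglomeration sketch: merge fragment pairs in
    order of decreasing boundary score until the score drops below
    `threshold`.  scored_edges: iterable of (score, u, v) tuples.
    Returns a fragment-index -> segment-label list."""
    parent = list(range(num_fragments))

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for score, u, v in sorted(scored_edges, reverse=True):
        if score < threshold:
            break  # all remaining boundaries are weaker
        parent[find(u)] = find(v)

    return [find(i) for i in range(num_fragments)]
```

For example, with fragments 0-1 joined by a strong boundary (0.9) and 1-2 by a weak one (0.2), a threshold of 0.5 merges the first pair only.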
We present an auxiliary learning task for the problem of neuron segmentation in electron microscopy volumes. The auxiliary task consists of the prediction of local shape descriptors (LSDs), which we combine with conventional voxel-wise direct neighbor affinities for neuron boundary detection. The shape descriptors capture local statistics about the neuron to be segmented, such as diameter, elongation, and direction. In a study comparing several existing methods across various specimens, imaging techniques, and resolutions, auxiliary learning of LSDs consistently increases the segmentation accuracy of affinity-based methods over a range of metrics. Furthermore, the addition of LSDs brings affinity-based segmentation methods on par with the current state of the art for neuron segmentation (flood-filling networks) while being two orders of magnitude more efficient, a critical requirement for the processing of future petabyte-sized datasets.
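The flavor of a local shape descriptor can be conveyed with a simplified 2D version: statistics of the same-label pixels inside a window around each point. This toy descriptor (size, mean offset, diagonal second moments) is an assumption for illustration; the published descriptor is 10-dimensional, 3D, and Gaussian-weighted:

```python
import numpy as np

def local_shape_descriptor(labels, point, radius):
    """Toy 2D local shape descriptor at `point` (y, x): statistics of
    same-label pixels in a (2r+1)^2 window -- local size, mean offset
    (centre of mass relative to the point), and per-axis variances,
    which capture elongation and direction."""
    y, x = point
    lab = labels[y, x]
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    win = labels[y - radius:y + radius + 1, x - radius:x + radius + 1]
    mask = win == lab
    size = mask.sum() / mask.size          # fraction of window covered
    dy, dx = ys[mask].mean(), xs[mask].mean()
    vy, vx = ys[mask].var(), xs[mask].var()
    return np.array([size, dy, dx, vy, vx])
```

For a thin horizontal process, the descriptor reports near-zero offset but a much larger variance along x than along y, exactly the kind of local geometry the auxiliary task asks the network to predict.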
We present a method for microtubule tracking in electron microscopy volumes. Our method first identifies a sparse set of voxels that likely belong to microtubules. Similar to prior work, we then enumerate potential edges between these voxels, which we represent in a candidate graph. Tracks of microtubules are found by selecting nodes and edges in the candidate graph by solving a constrained optimization problem incorporating biological priors on microtubule structure. For this, we present a novel integer linear programming formulation, which results in speed-ups of three orders of magnitude and an increase of 53% in accuracy compared to prior art (evaluated on three 1.2 × 4 × 4 µm volumes of Drosophila neural tissue). We also propose a scheme to solve the optimization problem in a block-wise fashion, which allows distributed tracking and is necessary to process very large electron microscopy volumes. Finally, we release a benchmark dataset for microtubule tracking, here used for training, testing, and validation, consisting of eight 30 × 1000 × 1000 voxel blocks (1.2 × 4 × 4 µm) of densely annotated microtubules in the CREMI data set (https://github.com/nilsec/micron).
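The constrained selection can be demonstrated on a tiny candidate graph, with brute force standing in for the integer linear program and a simple degree constraint standing in for the paper's fuller set of biological priors (tracks are chains, so a voxel joins at most two selected edges). All names here are illustrative:

```python
from itertools import combinations

def best_tracks(nodes, edges, max_degree=2):
    """Brute-force stand-in for the ILP: choose the subset of candidate
    edges maximising total evidence score, subject to each node joining
    at most `max_degree` selected edges.  edges: (score, u, v) tuples;
    scores may be negative (weak evidence acts as a cost).  Feasible
    only for tiny graphs -- the paper solves this at scale with an
    integer linear programming solver."""
    best, best_score = [], 0.0
    for k in range(len(edges) + 1):
        for subset in combinations(edges, k):
            deg = {n: 0 for n in nodes}
            for _, u, v in subset:
                deg[u] += 1
                deg[v] += 1
            if any(d > max_degree for d in deg.values()):
                continue  # violates the chain-structure prior
            score = sum(s for s, _, _ in subset)
            if score > best_score:
                best, best_score = list(subset), score
    return best, best_score
```

The optimum keeps the well-supported edges and drops the negatively scored shortcut, yielding a single chain; the block-wise scheme in the paper solves many such subproblems and stitches them at block boundaries.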
Understanding learning through synaptic plasticity rules in the brain is a grand challenge for neuroscience. Here we introduce a novel computational framework for inferring plasticity rules from experimental data on neural activity trajectories and behavioral learning dynamics. Our methodology parameterizes the plasticity function to provide theoretical interpretability and facilitate gradient-based optimization. For instance, we use Taylor series expansions or multilayer perceptrons to approximate plasticity rules, and we adjust their parameters via gradient descent over entire trajectories to closely match observed neural activity and behavioral data. Notably, our approach can learn intricate rules that induce long nonlinear time-dependencies, such as those incorporating postsynaptic activity and current synaptic weights. We validate our method through simulations, accurately recovering established rules, like Oja’s, as well as more complex hypothetical rules incorporating reward-modulated terms. We assess the resilience of our technique to noise and, as a tangible application, apply it to behavioral data from Drosophila during a probabilistic reward-learning experiment. Remarkably, we identify an active forgetting component of reward learning in flies that enhances the predictive accuracy of previous models. Overall, our modeling framework provides an exciting new avenue to elucidate the computational principles governing synaptic plasticity and learning in the brain.
Animal brains are complex organs composed of thousands of interconnected neurons. Characterizing the network properties of these brains is a requisite step towards understanding mechanisms of computation and information flow. With the completion of the Flywire project, we now have access to the connectome of a complete adult Drosophila brain, containing 130,000 neurons and millions of connections. Here, we present a statistical summary and data products of the Flywire connectome, delving into its network properties and topological features. To gain insights into local connectivity, we computed the prevalence of two- and three-node network motifs, examined their strengths and neurotransmitter compositions, and compared these topological metrics with wiring diagrams of other animals. We uncovered a population of highly connected neurons known as the “rich club” and identified subsets of neurons that may serve as integrators or broadcasters of signals. Finally, we examined subnetworks based on 78 anatomically defined brain regions. The freely available data and neuron populations presented here will serve as a foundation for models and experiments exploring the relationship between neural activity and anatomical structure.
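Two-node motif prevalence, the simplest of the statistics above, reduces to classifying each connected neuron pair as unidirectional or reciprocal; a minimal sketch over a directed edge list:

```python
def two_node_motifs(edges):
    """Count two-node motif classes in a directed graph given as an
    iterable of (pre, post) edges: pairs connected in one direction
    only vs. reciprocally connected pairs.  Self-loops are ignored."""
    es = set(edges)
    uni = rec = 0
    seen = set()
    for u, v in es:
        pair = frozenset((u, v))
        if u == v or pair in seen:
            continue  # count each neuron pair once
        seen.add(pair)
        if (v, u) in es:
            rec += 1
        else:
            uni += 1
    return {"unidirectional": uni, "reciprocal": rec}
```

Comparing the observed reciprocal count against a degree-matched random graph is the usual way such motifs are declared over- or under-represented.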
Animals communicate using sounds in a wide range of contexts, and auditory systems must encode behaviorally relevant acoustic features to drive appropriate reactions. How feature detection emerges along auditory pathways has been difficult to solve due to challenges in mapping the underlying circuits and characterizing responses to behaviorally relevant features. Here, we study auditory activity in the Drosophila melanogaster brain and investigate feature selectivity for the two main modes of fly courtship song, sinusoids and pulse trains. We identify 24 new cell types of the intermediate layers of the auditory pathway, and using a new connectomic resource, FlyWire, we map all synaptic connections between these cell types, in addition to connections to known early and higher-order auditory neurons; this represents the first circuit-level map of the auditory pathway. We additionally determine the sign (excitatory or inhibitory) of most synapses in this auditory connectome. We find that auditory neurons display a continuum of preferences for courtship song modes and that neurons with different song-mode preferences and response timescales are highly interconnected in a network that lacks hierarchical structure. Nonetheless, we find that the response properties of individual cell types within the connectome are predictable from their inputs. Our study thus provides new insights into the organization of auditory coding within the Drosophila brain.
As observed in human language learning and song learning in birds, the fruit fly Drosophila melanogaster changes its auditory behaviors according to prior sound experiences. This phenomenon, known as song preference learning in flies, requires GABAergic input to pC1 neurons in the brain, with these neurons playing a key role in mating behavior. The neural circuit basis of this GABAergic input, however, is not known. Here, we find that GABAergic neurons expressing the sex-determination gene doublesex are necessary for song preference learning. In the brain, only four doublesex-expressing GABAergic neurons exist per hemibrain, identified as pCd-2 neurons. pCd-2 neurons directly, and in many cases mutually, connect with pC1 neurons, suggesting the existence of reciprocal circuits between them. Moreover, GABAergic and dopaminergic inputs to doublesex-expressing GABAergic neurons are necessary for song preference learning. Together, this study provides a neural circuit model that underlies experience-dependent auditory plasticity at a single-cell resolution.
Connections between neurons can be mapped by acquiring and analyzing electron microscopic (EM) brain images. In recent years, this approach has been applied to chunks of brains to reconstruct local connectivity maps that are highly informative, yet inadequate for understanding brain function more globally. Here, we present the first neuronal wiring diagram of a whole adult brain, containing 5×10 chemical synapses between ∼130,000 neurons reconstructed from a female . The resource also incorporates annotations of cell classes and types, nerves, hemilineages, and predictions of neurotransmitter identities. Data products are available by download, programmatic access, and interactive browsing and made interoperable with other fly data resources. We show how to derive a projectome, a map of projections between regions, from the connectome. We demonstrate the tracing of synaptic pathways and the analysis of information flow from inputs (sensory and ascending neurons) to outputs (motor, endocrine, and descending neurons), across both hemispheres, and between the central brain and the optic lobes. Tracing from a subset of photoreceptors all the way to descending motor pathways illustrates how structure can uncover putative circuit mechanisms underlying sensorimotor behaviors. The technologies and open ecosystem of the FlyWire Consortium set the stage for future large-scale connectome projects in other species.
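Deriving a projectome from a connectome is, at its simplest, a group-by over synapses: collapse neuron-to-neuron edges into region-to-region weights. A minimal sketch with hypothetical neuron and region names:

```python
def projectome(synapses, region_of):
    """Collapse a synapse table into a region-by-region projection map:
    weight[(ra, rb)] = number of synapses from neurons assigned to
    region ra onto neurons assigned to region rb.

    synapses:  iterable of (pre_neuron, post_neuron) pairs
    region_of: dict mapping neuron id -> region name
    """
    proj = {}
    for pre, post in synapses:
        key = (region_of[pre], region_of[post])
        proj[key] = proj.get(key, 0) + 1
    return proj
```

The real derivation additionally weights synapses by their location and handles neurons spanning multiple regions, but the aggregation pattern is the same.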