36 Publications
Showing 11-20 of 36 results

Reconstructing neuronal circuits at the level of synapses is a central problem in neuroscience and a growing focus of the emerging field of connectomics. To date, electron microscopy (EM) is the most proven technique for identifying and quantifying synaptic connections. As advances in EM make it possible to acquire larger datasets, subsequent manual synapse identification (i.e., proofreading) for deciphering a connectome becomes a major time bottleneck. Here we introduce a large-scale, high-throughput, semi-automated methodology to efficiently identify synapses. We successfully applied our methodology to the Drosophila medulla optic lobe, annotating many more synapses than previous connectome efforts. Our approaches are extensible and will make the often complicated process of synapse identification accessible to a wider community of potential proofreaders.
The most established method of reconstructing neural circuits from animals involves slicing tissue very thin, then taking mosaics of electron microscope (EM) images. To trace neurons across different images and through different sections, these images must be accurately aligned, both with the others in the same section and with the sections above and below. Unfortunately, sectioning and imaging are not ideal processes: some of the problems that make alignment difficult include lens distortion, tissue shrinkage during imaging, tears and folds in the sectioned tissue, and dust and other artifacts. In addition, the data sets are large (hundreds of thousands of images) and each image must be aligned with many neighbors, so the process must be automated and reliable. This paper discusses methods of dealing with these problems, with numeric results describing the accuracy of the resulting alignments.
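The alignment idea above can be illustrated with a toy sketch (hypothetical, not the paper's code): for a pure-translation model between two overlapping tiles, the least-squares offset is simply the mean displacement of matched feature points, and the RMS residual after applying it serves as a simple accuracy measure of the kind the paper reports.

```python
def estimate_translation(pts_a, pts_b):
    """Least-squares 2D translation mapping pts_a onto pts_b."""
    n = len(pts_a)
    dx = sum(b[0] - a[0] for a, b in zip(pts_a, pts_b)) / n
    dy = sum(b[1] - a[1] for a, b in zip(pts_a, pts_b)) / n
    return dx, dy

def residual(pts_a, pts_b, t):
    """RMS residual after applying translation t: a simple alignment
    accuracy metric."""
    dx, dy = t
    sq = [(a[0] + dx - b[0]) ** 2 + (a[1] + dy - b[1]) ** 2
          for a, b in zip(pts_a, pts_b)]
    return (sum(sq) / len(sq)) ** 0.5

# matched features between two tiles, offset by roughly (10, -4)
a = [(0, 0), (5, 2), (3, 7)]
b = [(10.1, -4.0), (15.0, -2.1), (12.9, 3.1)]
t = estimate_translation(a, b)
print(t, residual(a, b, t))
```

Real pipelines fit richer models (affine, elastic) and solve all tiles jointly, but the residual-after-fit idea is the same.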
Understanding the circuit mechanisms behind motion detection is a long-standing question in visual neuroscience. Recent synapse-level connectomes in the optic lobe, particularly of ON-pathway (T4) receptive-field circuits, in concert with physiological studies, suggest an increasingly intricate motion model compared with the ubiquitous Hassenstein-Reichardt model, while our knowledge of the OFF pathway (T5) has remained incomplete. Here we present a conclusive and comprehensive connectome that, for the first time, integrates detailed connectivity information for inputs to both the T4 and T5 pathways in a single EM dataset covering the entire optic lobe. With novel reconstruction methods using automated synapse prediction suited to such a large connectome, we successfully corroborate previous findings in the T4 pathway and comprehensively identify inputs and receptive fields for T5. While the two pathways are likely evolutionarily linked and indeed exhibit many similarities, we uncover interesting differences and interactions that may underlie their distinct functional properties.
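For readers unfamiliar with the Hassenstein-Reichardt model mentioned above, a minimal correlator can be sketched in a few lines (an illustration only, not the paper's model): each arm multiplies one photoreceptor signal by a delayed copy of the other, and the detector output is the difference, which changes sign with motion direction.

```python
def hr_detector(a, b, delay=1):
    """Per-timestep Hassenstein-Reichardt output for signals a, b."""
    return [a[t - delay] * b[t] - b[t - delay] * a[t]
            for t in range(delay, len(a))]

# a brightness pulse moving from receptor A to receptor B...
a = [0, 1, 0, 0]
b = [0, 0, 1, 0]
forward = sum(hr_detector(a, b))
# ...versus the same pulse moving from B to A
backward = sum(hr_detector(b, a))
print(forward, backward)  # opposite signs for the two directions
```

The connectomic findings in the paper suggest the biological circuit is considerably more elaborate than this two-arm correlator.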
Electronic and biological systems both perform complex information processing, but they use very different techniques. Though electronics has the advantage in raw speed, biological systems have the edge in many other areas. They can be produced, and indeed self-reproduce, without expensive and finicky factories. They are tolerant of manufacturing defects, and learn and adapt for better performance. In many cases they can self-repair damage. These advantages suggest that biological systems might be useful in a wide variety of tasks involving information processing. So far, all attempts to use the nervous system of a living organism for information processing have involved selective breeding of existing organisms. This approach, largely independent of the details of internal operation, is used since we do not yet understand how neural systems work, nor exactly how they are constructed. However, as our knowledge increases, the day will come when we can envision useful nervous systems and design them based upon what we want them to do, as opposed to variations on what has been already built. We will then need tools, corresponding to our Electronic Design Automation tools, to help with the design. This paper is concerned with what such tools might look like.
A central problem in neuroscience is reconstructing neuronal circuits at the synapse level. Due to the wide range of scales in brain architecture, such reconstruction requires imaging that is both high-resolution and high-throughput. Existing electron microscopy (EM) techniques possess the required resolution in the lateral plane and either high throughput or high depth resolution, but not both. Here, we exploit recent advances in unsupervised learning and signal processing to obtain high depth-resolution EM images computationally without sacrificing throughput. First, we show that brain tissue can be represented as a sparse linear combination of localized basis functions that are learned using high-resolution datasets. We then develop compressive sensing-inspired techniques that can reconstruct the brain tissue from very few (typically 5) tomographic views of each section. This enables tracing of neuronal processes and, hence, high-throughput reconstruction of neural circuits at the level of individual synapses.
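The compressive-sensing idea of recovering a sparse representation from fewer measurements than unknowns can be sketched with a toy matching-pursuit recovery (a hypothetical illustration, far simpler than the paper's method): greedily pick the dictionary column that best explains the residual.

```python
def matching_pursuit(A, y, k):
    """Recover a k-sparse x from y = A @ x; A given as a list of rows."""
    n = len(A[0])
    cols = [[row[j] for row in A] for j in range(n)]
    x = [0.0] * n
    resid = list(y)
    for _ in range(k):
        # pick the column with the largest |correlation| with the residual
        corr = [abs(sum(c * r for c, r in zip(cols[j], resid)))
                for j in range(n)]
        j = corr.index(max(corr))
        # least-squares coefficient for that single column
        coef = (sum(c * r for c, r in zip(cols[j], resid))
                / sum(c * c for c in cols[j]))
        x[j] += coef
        resid = [r - coef * c for r, c in zip(resid, cols[j])]
    return x

# 2 measurements of a 4-dimensional, 1-sparse signal x = [0, 0, 3, 0]
A = [[1.0, 0.0, 1.0, 0.0],
     [0.0, 1.0, -1.0, 1.0]]
y = [3.0, -3.0]  # A @ x
x_hat = matching_pursuit(A, y, 1)
print(x_hat)
```

In the paper the unknowns are voxel intensities, the sparse basis is learned from high-resolution FIB-SEM data, and the measurements are the few tomographic views per section.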
A new method allows researchers to automatically assign cells into different cell types and tissues, a step which is critical for understanding complex organisms.
Extracting a connectome from an electron microscopy (EM) data set requires identification of neurons and determination of synapses between neurons. As manual extraction of this information is very time-consuming, there has been extensive research effort to automatically segment the neurons to help guide and eventually replace manual tracing. Until recently, there has been comparatively less research on automatically detecting the actual synapses between neurons. This discrepancy can, in part, be attributed to several factors: obtaining neuronal shapes is a prerequisite first step in extracting a connectome, manual tracing is much more time-consuming than annotating synapses, and neuronal contact area can be used as a proxy for synapses in determining connections.
However, recent research has demonstrated that contact area alone is not a sufficient predictor of synaptic connection. Moreover, as segmentation has improved, we have observed that synapse annotation consumes a more significant fraction of overall reconstruction time. This ratio will only get worse as segmentation improves, gating the overall possible speed-up. Therefore, we address this problem by developing algorithms that automatically detect pre-synaptic neurons and their post-synaptic partners. In particular, pre-synaptic structures are detected using a Deep and Wide Multiscale Recursive Network, and post-synaptic partners are detected using an MLP with features conditioned on the local segmentation.
This work is novel because it requires a minimal amount of training, leverages advances in image segmentation directly, and provides a complete solution for polyadic synapse detection. We further introduce novel metrics to evaluate our algorithm on connectomes of meaningful size. These metrics demonstrate that fully automatic prediction can be used to characterize most connectivity correctly.
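A connectome-level evaluation of the kind mentioned above can be illustrated with a toy precision/recall computation over predicted neuron-to-neuron connections (a hypothetical sketch; the paper's own metrics are more involved):

```python
def precision_recall(pred, truth):
    """Precision and recall over sets of (pre, post) connection pairs."""
    pred, truth = set(pred), set(truth)
    tp = len(pred & truth)  # correctly predicted connections
    return tp / len(pred), tp / len(truth)

pred = {("A", "B"), ("A", "C"), ("B", "D")}   # predicted connections
truth = {("A", "B"), ("B", "D"), ("C", "D")}  # ground-truth connections
p, r = precision_recall(pred, truth)
print(p, r)
```

Weighting pairs by synapse count, as connectome studies often do, is a natural refinement of this toy version.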
The aim in high-resolution connectomics is to reconstruct complete neuronal connectivity in a tissue. Currently, the only technology capable of resolving the smallest neuronal processes is electron microscopy (EM). Thus, a common approach to network reconstruction is to perform (error-prone) automatic segmentation of EM images, followed by manual proofreading by experts to fix errors. We have developed an algorithm and software library to not only improve the accuracy of the initial automatic segmentation, but also point out the image coordinates where it is likely to have made errors. Our software, called gala (graph-based active learning of agglomeration), improves the state of the art in agglomerative image segmentation. It is implemented in Python and makes extensive use of the scientific Python stack (numpy, scipy, networkx, scikit-learn, scikit-image, and others). We present here the software architecture of the gala library, and discuss several design decisions that we consider generally useful for other segmentation packages. We also discuss the current limitations of the gala library and how we intend to address them.
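The agglomeration loop at the heart of this approach can be sketched in miniature (a hypothetical illustration, not gala's API): start from an oversegmentation, score each boundary in the region adjacency graph, and merge the lowest-scoring boundaries until every remaining score exceeds a threshold. gala's contribution is learning that scoring function via active learning; here the scores are simply given.

```python
def agglomerate(edges, threshold):
    """edges: {(u, v): boundary_score}; merge fragments while the lowest
    boundary score is below threshold. Returns fragment -> region label."""
    parent = {}

    def find(x):  # union-find with path halving
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:  # register every fragment
        find(u)
        find(v)
    for (u, v), score in sorted(edges.items(), key=lambda e: e[1]):
        if score >= threshold:
            break  # all remaining boundaries look real
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv  # merge the two regions
    return {x: find(x) for x in parent}

# fragments 1 and 2 share a weak (likely false) boundary; 3 is a real neighbor
labels = agglomerate({(1, 2): 0.1, (2, 3): 0.9, (1, 3): 0.8}, threshold=0.5)
print(labels)
```

A production implementation would also re-score merged boundaries after each merge, which is where a learned agglomeration policy earns its keep.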
The challenge of recovering the topology of massive neuronal circuits can potentially be met by high-throughput electron microscopy (EM) imagery. Segmenting a 3-dimensional stack of EM images into individual neurons is difficult due to the low depth resolution of existing high-throughput EM technology, such as serial section Transmission EM (ssTEM). In this paper we propose methods for detecting the high-resolution locations of membranes from low depth-resolution images. We approach this problem using both a method that learns a discriminative, over-complete dictionary and a kernel SVM. We test this approach on tomographic sections produced in simulations from high-resolution Focused Ion Beam (FIB) images and on low depth-resolution images acquired with ssTEM, and evaluate our results by comparing them to manual labeling of the data.