We develop methods and tools for the automatic analysis of microscopy image datasets that are too large for manual inspection alone.
Current imaging methods produce vast amounts of data. An electron microscopy volume of a fruit fly brain, for example, comprises hundreds of terabytes of image data. Annotating the biologically relevant structures in such volumes by hand would take decades of labor.
We develop computer vision and machine learning methods and tools to automate these tasks. What makes this line of research exciting for us are the specific requirements that have to be met for the analysis of large microscopy image datasets. In particular, the methods we aim to develop are:
- Very accurate
Often, structures of interest (like neurons in electron microscopy volumes or cell lineage tracks in light-sheet microscopy) span distances several orders of magnitude larger than the resolution required to resolve them. Many correct decisions have to be made simultaneously along the full extent of these structures before they can confidently be used for subsequent analysis. A handful of errors per reconstructed structure can already render the result unusable (for example, for connectome reconstruction from neuron morphologies).
- Able to deal with noise
Many structures of interest are only visible at the resolution limit of the imaging method. We therefore often face low signal-to-noise ratios and ambiguous situations that are challenging even for humans to resolve.
- Fast
A successful solution has to be fast and parallelizable to scale easily to the size of real-world datasets (a minimal sketch of this block-parallel pattern follows this list).
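To illustrate the kind of scalability we have in mind, here is a minimal sketch of block-parallel processing. The `segment_block` function and its thresholding body are hypothetical stand-ins for a trained model, not our actual pipeline; a production system would read blocks from chunked storage (e.g. zarr or n5) rather than ship the whole array to every worker.

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product, repeat

import numpy as np

BLOCK = (64, 64, 64)  # block shape in voxels; real pipelines add overlap for context


def segment_block(volume, offset):
    """Stand-in for per-block inference: a simple threshold plays the role
    of a trained network applied to one block of the volume."""
    slices = tuple(slice(o, o + b) for o, b in zip(offset, BLOCK))
    block = volume[slices]
    return offset, (block > block.mean()).astype(np.uint8)


def segment_volume(volume, workers=4):
    """Process all blocks independently and stitch the results together.
    Since blocks do not depend on each other, throughput scales with the
    number of workers."""
    result = np.zeros(volume.shape, dtype=np.uint8)
    offsets = list(product(*(range(0, s, b) for s, b in zip(volume.shape, BLOCK))))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for offset, labels in pool.map(segment_block, repeat(volume), offsets):
            slices = tuple(slice(o, o + b) for o, b in zip(offset, BLOCK))
            result[slices] = labels
    return result


if __name__ == "__main__":
    volume = np.random.rand(128, 128, 128).astype(np.float32)
    print(segment_volume(volume).shape)  # (128, 128, 128)
```

The essential design choice here is that blocks share no state during processing; reconciling decisions across block boundaries is where much of the harder research lies.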
On large datasets, high accuracy and robustness to noise are unlikely to be achieved without at least some human proofreading. We are interested in developing methods and tools that do not just produce a reconstruction, but work hand in hand with annotators. For that, we are looking to answer the following questions:
- How can automatic methods identify uncertain decisions for human inspection?
Human attention is expensive, but likely needed to verify the output of automatic methods. Proofreaders should be guided to the most uncertain automatic decisions (see the first sketch after this list).
- How can we make best use of human feedback?
Automatic methods should not be seen as black boxes. Instead, we seek to develop methods that use human feedback to refine reconstructions and continue learning during proofreading.
- How can we quantify accuracy?
Eventually, statistics extracted from automatic reconstructions will be used to support or reject hypotheses. We are interested in finding principled ways to quantify the confidence of statistics derived from structures that have not been proofread (see the second sketch after this list).
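As a concrete (if simplified) sketch of the first question: when a model exposes per-decision probabilities, ranking decisions by their entropy yields a triage order for proofreading. The flat list of binary merge decisions below is an illustrative assumption, not a description of our actual tools.

```python
import numpy as np


def entropy(probs, axis=-1, eps=1e-12):
    """Shannon entropy of per-decision class probabilities; higher means
    the model is less certain about that decision."""
    p = np.clip(probs, eps, 1.0)
    return -(p * np.log(p)).sum(axis=axis)


def triage(probs, budget):
    """Return the indices of the `budget` most uncertain decisions,
    i.e. the ones a proofreader should inspect first."""
    order = np.argsort(entropy(probs))[::-1]
    return order[:budget]


# Example: five binary decisions (e.g. "do these two fragments belong to
# the same neuron?"), each with a predicted probability distribution.
probs = np.array([
    [0.99, 0.01],  # confident
    [0.55, 0.45],  # ambiguous -- should be inspected
    [0.90, 0.10],
    [0.48, 0.52],  # ambiguous -- should be inspected
    [0.02, 0.98],
])
print(triage(probs, budget=2))  # -> [3 1], the two most uncertain rows
```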
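And for the last question, a percentile bootstrap is one simple, principled starting point: resample the reconstructed structures and report an interval for a derived statistic rather than a point estimate. The synthetic cable lengths below are made up for illustration; note that the interval captures only sampling variability, not systematic reconstruction errors, which is precisely the gap this research question targets.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical statistic derived from an unproofread reconstruction:
# cable length (in microns) of each reconstructed neuron.
lengths = rng.lognormal(mean=5.0, sigma=0.5, size=200)


def bootstrap_ci(values, stat=np.mean, n_resamples=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for `stat(values)`.
    `stat` must accept an `axis` argument (np.mean, np.median, ...)."""
    resamples = rng.choice(values, size=(n_resamples, len(values)), replace=True)
    stats = stat(resamples, axis=1)
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi


low, high = bootstrap_ci(lengths)
print(f"mean cable length: {lengths.mean():.1f} um, 95% CI [{low:.1f}, {high:.1f}]")
```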