Stringer Lab: 29 Janelia Publications
Showing 1-10 of 29 results
Artificial neural networks learn faster if they are initialized well. Good initializations can generate high-dimensional macroscopic dynamics with long timescales. It is not known if biological neural networks have similar properties. Here we show that the eigenvalue spectrum and dynamical properties of large-scale neural recordings in mice (two-photon and electrophysiology) are similar to those produced by linear dynamics governed by a random symmetric matrix that is critically normalized. An exception was hippocampal area CA1: population activity in this area resembled an efficient, uncorrelated neural code, which may be optimized for information storage capacity. Global emergent activity modes persisted in simulations with sparse, clustered or spatial connectivity. We hypothesize that the spontaneous neural activity reflects a critical initialization of whole-brain neural circuits that is optimized for learning time-dependent tasks.
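As an illustration of the kind of model this comparison is made against, the sketch below builds a random symmetric matrix, normalizes it so its top eigenvalue equals 1 (a "critical" normalization), and shows that the leading eigenmodes of the corresponding linear dynamics have long timescales. This is a hypothetical illustration, not the paper's analysis code; the matrix size and the discrete-time dynamics are assumptions.

```python
# Hypothetical illustration (not the paper's code): eigenvalues and mode
# timescales of a critically normalized random symmetric matrix.
import numpy as np

rng = np.random.default_rng(0)
N = 2000                                      # number of "neurons" (assumption)

A = rng.standard_normal((N, N))
J = (A + A.T) / np.sqrt(2 * N)                # symmetric; eigenvalues follow a semicircle on ~[-2, 2]

eigvals = np.linalg.eigvalsh(J)
lam = np.sort(eigvals / eigvals.max())[::-1]  # critical normalization: top eigenvalue = 1

# For discrete-time linear dynamics x_{t+1} = lambda * x_t along each eigenmode,
# the mode's autocorrelation timescale is -1/log(lambda), which diverges as
# lambda -> 1: the leading modes are slow, high-variance "macroscopic" modes.
top = np.clip(lam[:10], None, 1 - 1e-9)
timescales = -1.0 / np.log(top)
print("top 10 eigenvalues:", np.round(lam[:10], 3))
print("timescales (steps):", np.round(timescales, 1))
```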
Genetically encoded fluorescent calcium indicators allow cellular-resolution recording of physiology. However, bright, genetically targetable indicators that can be multiplexed with existing tools in vivo are needed for simultaneous imaging of multiple signals. Here we describe WHaloCaMP, a modular chemigenetic calcium indicator built from bright dye-ligands and protein sensor domains. Fluorescence change in WHaloCaMP results from reversible quenching of the bound dye via a strategically placed tryptophan. WHaloCaMP is compatible with rhodamine dye-ligands that fluoresce from green to near-infrared, including several that efficiently label the brain in animals. When bound to a near-infrared dye-ligand, WHaloCaMP shows a 7× increase in fluorescence intensity and a 2.1-ns increase in fluorescence lifetime upon calcium binding. We use WHaloCaMP1a to image Ca²⁺ responses in vivo in flies and mice, to perform three-color multiplexed functional imaging of hundreds of neurons and astrocytes in zebrafish larvae and to quantify Ca²⁺ concentration using fluorescence lifetime imaging microscopy (FLIM).
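For readers unfamiliar with lifetime-based quantification, the sketch below shows one simple way a measured fluorescence lifetime can be converted to a Ca²⁺ estimate, assuming the measured lifetime mixes linearly between the Ca²⁺-free and Ca²⁺-bound sensor states and 1:1 binding. The lifetimes and Kd below are placeholders, not WHaloCaMP's published calibration, and the paper's FLIM analysis may differ.

```python
# Hypothetical sketch of lifetime-based Ca2+ estimation. Assumes the measured
# lifetime is a linear mixture of the free and bound sensor lifetimes and
# 1:1 binding; constants are placeholders, not published calibration values.
import numpy as np

TAU_FREE = 1.0    # ns, Ca2+-free sensor lifetime (placeholder)
TAU_BOUND = 3.1   # ns, Ca2+-bound sensor lifetime (placeholder)
KD = 200.0        # nM, apparent dissociation constant (placeholder)

def ca_from_lifetime(tau_measured_ns):
    """Estimate [Ca2+] in nM from a measured fluorescence lifetime."""
    f_bound = (tau_measured_ns - TAU_FREE) / (TAU_BOUND - TAU_FREE)
    f_bound = np.clip(f_bound, 1e-6, 1 - 1e-6)   # keep the fraction in (0, 1)
    return KD * f_bound / (1.0 - f_bound)        # 1:1 binding isotherm

print(np.round(ca_from_lifetime(np.array([1.2, 2.0, 2.9])), 1))
```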
Simultaneous recordings from hundreds or thousands of neurons are becoming routine because of innovations in instrumentation, molecular tools, and data processing software. Such recordings can be analyzed with data science methods, but it is not immediately clear what methods to use or how to adapt them for neuroscience applications. We review, categorize, and illustrate diverse analysis methods for neural population recordings and describe how these methods have been used to make progress on longstanding questions in neuroscience. We review a variety of approaches, ranging from the mathematically simple to the complex, from exploratory to hypothesis-driven, and from recently developed to more established methods. We also illustrate some of the common statistical pitfalls in analyzing large-scale neural data.
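As a concrete taste of the simpler end of this methods spectrum, the sketch below runs PCA (via SVD) on a neurons-by-timepoints activity matrix to extract low-dimensional population trajectories. The data here are synthetic and the dimensions are made up for illustration; a real recording would replace the matrix X.

```python
# Minimal example of a simple population analysis: PCA on a neurons x time
# activity matrix. Synthetic data stand in for a real recording.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_time = 500, 2000

# Synthetic population activity: 3 shared latent signals plus private noise.
latents = rng.standard_normal((3, n_time))
mixing = rng.standard_normal((n_neurons, 3))
X = mixing @ latents + 0.5 * rng.standard_normal((n_neurons, n_time))

X = X - X.mean(axis=1, keepdims=True)             # center each neuron
U, S, Vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD
var_explained = S**2 / np.sum(S**2)
trajectories = Vt[:3]                             # top 3 population trajectories over time

print("variance explained by first 5 PCs:", np.round(var_explained[:5], 3))
```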
Pretrained neural network models for biological segmentation can provide good out-of-the-box results for many image types. However, such models do not allow users to adapt the segmentation style to their specific needs and can perform suboptimally for test images that are very different from the training images. Here we introduce Cellpose 2.0, a new package that includes an ensemble of diverse pretrained models as well as a human-in-the-loop pipeline for rapid prototyping of new custom models. We show that models pretrained on the Cellpose dataset can be fine-tuned with only 500-1,000 user-annotated regions of interest (ROI) to perform nearly as well as models trained on entire datasets with up to 200,000 ROI. A human-in-the-loop approach further reduced the required user annotation to 100-200 ROI, while maintaining high-quality segmentations. We provide software tools such as an annotation graphical user interface, a model zoo and a human-in-the-loop pipeline to facilitate the adoption of Cellpose 2.0.
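A hedged sketch of how a custom model produced by this human-in-the-loop workflow might be used from Python is shown below. The model path and image file are hypothetical, and the API shown follows Cellpose 2.x conventions, which may differ in later releases; check the current documentation before use.

```python
# Sketch of running a custom model produced by the human-in-the-loop workflow.
# Paths and filenames are hypothetical; API follows Cellpose 2.x conventions.
from cellpose import io, models

# Fine-tuning itself is typically done in the GUI, or from the command line,
# e.g. (Cellpose 2.x flags; check `python -m cellpose --help` for your version):
#   python -m cellpose --train --dir path/to/annotated_images \
#       --pretrained_model cyto2 --n_epochs 100

img = io.imread("example_image.tif")                              # hypothetical file
model = models.CellposeModel(pretrained_model="path/to/my_custom_model")
masks, flows, styles = model.eval(img, diameter=30, channels=[0, 0])
print("segmented ROIs:", int(masks.max()))
```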
Generalist methods for cellular segmentation have good out-of-the-box performance on a variety of image types; however, existing methods struggle for images that are degraded by noise, blurring or undersampling, all of which are common in microscopy. We focused the development of Cellpose3 on addressing these cases and here we demonstrate substantial out-of-the-box gains in segmentation and image quality for noisy, blurry and undersampled images. Unlike previous approaches that train models to restore pixel values, we trained Cellpose3 to output images that are well segmented by a generalist segmentation model, while maintaining perceptual similarity to the target images. Furthermore, we trained the restoration models on a large, varied collection of datasets, thus ensuring good generalization to user images. We provide these tools as 'one-click' buttons inside the graphical interface of Cellpose as well as in the Cellpose API.
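The sketch below shows roughly how this restore-then-segment workflow is exposed in Python. The class and argument names follow the Cellpose 3 documentation as best recalled (denoise.CellposeDenoiseModel with a restore_type such as "denoise_cyto3") and should be treated as assumptions to confirm against the installed version; the image file is hypothetical.

```python
# Sketch of Cellpose3-style "restore then segment" usage. Names below are
# assumptions based on the Cellpose 3 docs; verify against your installed version.
from cellpose import denoise, io

img = io.imread("noisy_image.tif")        # hypothetical file
model = denoise.CellposeDenoiseModel(
    gpu=False,
    model_type="cyto3",                   # segmentation model
    restore_type="denoise_cyto3",         # restoration model applied before segmentation
)
masks, flows, styles, img_restored = model.eval(img, diameter=None, channels=[0, 0])
print("segmented ROIs:", int(masks.max()))
```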
Many biological applications require the segmentation of cell bodies, membranes and nuclei from microscopy images. Deep learning has enabled great progress on this problem, but current methods are specialized for images that have large training datasets. Here we introduce a generalist, deep learning-based segmentation algorithm called Cellpose, which can very precisely segment a wide range of image types out-of-the-box and does not require model retraining or parameter adjustments. We trained Cellpose on a new dataset of highly varied images of cells, containing over 70,000 segmented objects. To support community contributions to the training data, we developed software for manual labelling and for curation of the automated results, with optional direct upload to our data repository. Periodically retraining the model on the community-contributed data will ensure that Cellpose improves constantly.
Many biological applications require the segmentation of cell bodies, membranes and nuclei from microscopy images. Deep learning has enabled great progress on this problem, but current methods are specialized for images that have large training datasets. Here we introduce a generalist, deep learning-based segmentation method called Cellpose, which can precisely segment cells from a wide range of image types and does not require model retraining or parameter adjustments. Cellpose was trained on a new dataset of highly varied images of cells, containing over 70,000 segmented objects. We also demonstrate a three-dimensional (3D) extension of Cellpose that reuses the two-dimensional (2D) model and does not require 3D-labeled data. To support community contributions to the training data, we developed software for manual labeling and for curation of the automated results. Periodically retraining the model on the community-contributed data will ensure that Cellpose improves constantly.
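A minimal out-of-the-box usage sketch, including the 2D-model-based 3D mode mentioned above, is shown below. The filenames are hypothetical and the API follows Cellpose 1.x/2.x (the "cyto" model); newer releases may have changed these names.

```python
# Minimal out-of-the-box Cellpose usage sketch, 2D and 3D. Filenames are
# hypothetical; API follows Cellpose 1.x/2.x and may differ in newer releases.
from cellpose import io, models

model = models.Cellpose(gpu=False, model_type="cyto")

# 2D image: diameter=None lets the built-in size model estimate cell diameter.
img2d = io.imread("cells_2d.tif")
masks2d, flows, styles, diams = model.eval(img2d, diameter=None, channels=[0, 0])

# 3D stack (z, y, x): the 2D model is run on orthogonal slices and combined.
img3d = io.imread("cells_3d_stack.tif")
masks3d, *_ = model.eval(img3d, diameter=30, channels=[0, 0], do_3D=True)

print("2D ROIs:", int(masks2d.max()), "| 3D ROIs:", int(masks3d.max()))
```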
Electrophysiology has long been the workhorse of neuroscience, allowing scientists to record with millisecond precision the action potentials generated by neurons in vivo. Recently, calcium imaging of fluorescent indicators has emerged as a powerful alternative. This technique has its own strengths and weaknesses and unique data processing problems and interpretation confounds. Here we review the computational methods that convert raw calcium movies to estimates of single neuron spike times with minimal human supervision. By computationally addressing the weaknesses of calcium imaging, these methods hold the promise of significantly improving data quality. We also introduce a new metric to evaluate the output of these processing pipelines, which is based on the cluster isolation distance routinely used in electrophysiology.
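To make the core step concrete, the toy sketch below simulates a fluorescence trace under an AR(1) (exponential-decay) calcium model and recovers spike estimates by inverting that model with a non-negativity constraint. It is a simplified illustration of the deconvolution idea, not the algorithm of any specific pipeline.

```python
# Toy illustration of calcium-trace deconvolution under an AR(1) calcium model.
# Simplified sketch, not the algorithm of any particular pipeline.
import numpy as np

rng = np.random.default_rng(2)
T, gamma = 1000, 0.95              # timepoints; per-frame calcium decay factor

# Simulate: sparse spikes -> calcium with exponential decay -> noisy fluorescence.
spikes = (rng.random(T) < 0.02).astype(float)
calcium = np.zeros(T)
for t in range(1, T):
    calcium[t] = gamma * calcium[t - 1] + spikes[t]
fluor = calcium + 0.1 * rng.standard_normal(T)

# Simplest deconvolution: invert the AR(1) model, keep non-negative residuals.
deconv = np.maximum(fluor[1:] - gamma * fluor[:-1], 0.0)
est_spike_frames = np.flatnonzero(deconv > 0.5) + 1

hits = np.isin(est_spike_frames, np.flatnonzero(spikes))
print(f"{hits.mean():.0%} of detected events coincide with true spikes")
```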
Limited color channels in fluorescence microscopy have long constrained spatial analysis in biological specimens. Here, we introduce cycle Hybridization Chain Reaction (HCR), a method that integrates multicycle DNA barcoding with HCR to overcome this limitation. cycleHCR enables highly multiplexed imaging of RNA and proteins using a unified barcode system. Whole-embryo transcriptomics imaging achieved precise three-dimensional gene expression and cell fate mapping across a specimen depth of ~310 μm. When combined with expansion microscopy, cycleHCR revealed an intricate network of 10 subcellular structures in mouse embryonic fibroblasts. In mouse hippocampal slices, multiplex RNA and protein imaging uncovered complex gene expression gradients and cell-type-specific nuclear structural variations. cycleHCR provides a quantitative framework for elucidating spatial regulation in deep tissue contexts for research and potentially diagnostic applications.
bioRxiv preprint: 10.1101/2024.05.17.594641
Representation learning in neural networks may be implemented with supervised or unsupervised algorithms, distinguished by the availability of feedback. In sensory cortex, perceptual learning drives neural plasticity, but it is not known if this is due to supervised or unsupervised learning. Here we recorded populations of up to 90,000 neurons simultaneously from the primary visual cortex (V1) and higher visual areas (HVA), while mice learned multiple tasks as well as during unrewarded exposure to the same stimuli. Similar to previous studies, we found that neural changes in task mice were correlated with their behavioral learning. However, the neural changes were mostly replicated in mice with unrewarded exposure, suggesting that the changes were in fact due to unsupervised learning. The neural plasticity was concentrated in the medial HVAs and obeyed visual, rather than spatial, learning rules. In task mice only, we found a ramping reward prediction signal in anterior HVAs, potentially involved in supervised learning. Our neural results predict that unsupervised learning may accelerate subsequent task learning, a prediction which we validated with behavioral experiments.