34 Janelia Publications
Artificial neural networks learn faster if they are initialized well. Good initializations can generate high-dimensional macroscopic dynamics with long timescales. It is not known if biological neural networks have similar properties. Here we show that the eigenvalue spectrum and dynamical properties of large-scale neural recordings in mice (two-photon and electrophysiology) are similar to those produced by linear dynamics governed by a random symmetric matrix that is critically normalized. An exception was hippocampal area CA1: population activity in this area resembled an efficient, uncorrelated neural code, which may be optimized for information storage capacity. Global emergent activity modes persisted in simulations with sparse, clustered or spatial connectivity. We hypothesize that the spontaneous neural activity reflects a critical initialization of whole-brain neural circuits that is optimized for learning time-dependent tasks.
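The "critically normalized random symmetric matrix" in this abstract can be illustrated with a short sketch (a hypothetical toy construction, not the authors' code): draw a random symmetric matrix, rescale it so its spectral radius equals 1 (the edge of stability for the linear dynamics x_{t+1} = J x_t), and note that modes with eigenvalues near magnitude 1 decay slowly, which is what produces long timescales.

```python
import numpy as np

# Toy sketch (assumed setup, not the paper's code): random symmetric matrix,
# critically normalized so the linear dynamics x_{t+1} = J_crit @ x_t sit at
# the edge of stability.
rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))
J = (A + A.T) / 2                       # random symmetric matrix

evals = np.linalg.eigvalsh(J)
radius = np.abs(evals).max()
J_crit = J / radius                     # critical normalization: spectral radius = 1
evals_crit = evals / radius             # normalized eigenvalue spectrum

# A mode with eigenvalue lam decays with timescale tau = -1 / log(|lam|),
# so near-critical modes (|lam| close to 1) persist for many time steps.
taus = -1.0 / np.log(np.abs(evals_crit[np.abs(evals_crit) < 1]))
```

Iterating x_{t+1} = J_crit @ x_t from a random initial condition then yields high-dimensional activity dominated by these slowly decaying modes, the qualitative signature the abstract compares against neural recordings.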
Simultaneous recordings from hundreds or thousands of neurons are becoming routine because of innovations in instrumentation, molecular tools, and data processing software. Such recordings can be analyzed with data science methods, but it is not immediately clear what methods to use or how to adapt them for neuroscience applications. We review, categorize, and illustrate diverse analysis methods for neural population recordings and describe how these methods have been used to make progress on longstanding questions in neuroscience. We review a variety of approaches, ranging from the mathematically simple to the complex, from exploratory to hypothesis-driven, and from recently developed to more established methods. We also illustrate some of the common statistical pitfalls in analyzing large-scale neural data.
Artificial activation of anatomically localized, genetically defined hypothalamic neuron populations is known to trigger distinct innate behaviors, suggesting a hypothalamic nucleus-centered organization of behavior control. To assess whether the encoding of behavior is similarly anatomically confined, we performed simultaneous neuron recordings across twenty hypothalamic regions in freely moving animals. Here we show that distinct but anatomically distributed neuron ensembles encode the social and fear behavior classes, primarily through mixed selectivity. While behavior class-encoding ensembles were spatially distributed, individual ensembles exhibited strong localization bias. Encoding models showed that behavioral actions, but not motion-related variables, explained a large fraction of hypothalamic neuron activity variance. These results identify unexpected complexity in the hypothalamic encoding of instincts and provide a foundation for understanding the role of distributed neural representations in the expression of behaviors driven by hardwired circuits.
At various stages of the visual system, visual responses are modulated by arousal. Here, we find that in mice this modulation operates as early as in the first synapse from the retina and even in retinal axons. To measure retinal activity in the awake, intact brain, we imaged the synaptic boutons of retinal axons in the superior colliculus. Their activity depended not only on vision but also on running speed and pupil size, regardless of retinal illumination. Arousal typically reduced their visual responses and selectivity for direction and orientation. Recordings from retinal axons in the optic tract revealed that arousal modulates the firing of some retinal ganglion cells. Arousal had similar effects postsynaptically in colliculus neurons, independent of activity in the other main source of visual inputs to the colliculus, the primary visual cortex. These results indicate that arousal modulates activity at every stage of the mouse visual system.
Neural circuits connecting the cerebral cortex, the basal ganglia and the thalamus are fundamental networks for sensorimotor processing, and their dysfunction has been consistently implicated in neuropsychiatric disorders (refs. 1-9). These recursive loop circuits have been investigated in animal models and by clinical neuroimaging; however, direct functional access to developing human neurons forming these networks has been limited. Here, we use human pluripotent stem cells to reconstruct an in vitro cortico-striatal-thalamic-cortical circuit by creating a four-part loop assembloid. More specifically, we generate regionalized neural organoids that resemble the key elements of the cortico-striatal-thalamic-cortical circuit, and functionally integrate them into loop assembloids using custom 3D-printed biocompatible wells. Volumetric and mesoscale calcium imaging, as well as extracellular recordings from individual parts of these assembloids, reveal the emergence of synchronized patterns of neuronal activity. In addition, a multi-step rabies retrograde tracing approach demonstrates the formation of neuronal connectivity across the network in loop assembloids. Lastly, we apply this system to study heterozygous loss of ASH1L, a gene associated with autism spectrum disorder and Tourette syndrome, and discover aberrant synchronized activity in disease model assembloids. Taken together, this human multi-cellular platform will facilitate functional investigations of the cortico-striatal-thalamic-cortical circuit in the context of early human development and in disease conditions.
As we move through the world, we see the same visual scenes from different perspectives. Although we experience perspective deformations, our perception of a scene remains stable. This raises the question of which neuronal representations in visual brain areas are perspective-tuned and which are invariant. Focusing on planar rotations, we introduce a mathematical framework based on the principle of equivariance, which asserts that an image rotation results in a corresponding rotation of neuronal representations, to explain how the same representation can range from being fully tuned to fully invariant. We applied this framework to large-scale simultaneous neuronal recordings from four visual cortical areas in mice, where we found that representations are both tuned and invariant but become more invariant across higher-order areas. While common deep convolutional neural networks show similar trends in orientation-invariance across layers, they are not rotation-equivariant. We propose that equivariance is a prevalent computation of populations of biological neurons to gradually achieve invariance through structured tuning.
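The equivariance principle invoked in this abstract can be made concrete with a toy sketch (an illustrative construction of mine, not the paper's model): a representation built from four 90-degree-rotated copies of a single filter is equivariant under planar rotations by multiples of 90 degrees — rotating the image cyclically shifts the response vector — while summing the responses gives a fully rotation-invariant readout.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal((8, 8))               # arbitrary base filter
filters = [np.rot90(w, k) for k in range(4)]  # four rotated copies (C4 group)

def represent(x):
    """C4-equivariant representation: responses of the rotated filter bank."""
    return np.array([np.sum(f * x) for f in filters])

x = rng.standard_normal((8, 8))               # a toy 'image'
r = represent(x)
r_rot = represent(np.rot90(x))                # the same image, rotated 90 degrees

# Equivariance: rotating the input cyclically shifts the representation,
# i.e. r_rot[k] == r[k-1], so r_rot matches np.roll(r, 1).
shifted = np.roll(r, 1)

# Invariance: averaging (or summing) over the group removes the rotation.
inv, inv_rot = r.sum(), r_rot.sum()
```

In this discrete toy case, reading out individual responses gives a fully tuned code and pooling over the group gives a fully invariant one; the framework described above interpolates between these extremes for continuous planar rotations.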
Pretrained neural network models for biological segmentation can provide good out-of-the-box results for many image types. However, such models do not allow users to adapt the segmentation style to their specific needs and can perform suboptimally for test images that are very different from the training images. Here we introduce Cellpose 2.0, a new package that includes an ensemble of diverse pretrained models as well as a human-in-the-loop pipeline for rapid prototyping of new custom models. We show that models pretrained on the Cellpose dataset can be fine-tuned with only 500-1,000 user-annotated regions of interest (ROIs) to perform nearly as well as models trained on entire datasets with up to 200,000 ROIs. A human-in-the-loop approach further reduced the required user annotation to 100-200 ROIs, while maintaining high-quality segmentations. We provide software tools such as an annotation graphical user interface, a model zoo and a human-in-the-loop pipeline to facilitate the adoption of Cellpose 2.0.
Generalist methods for cellular segmentation have good out-of-the-box performance on a variety of image types; however, existing methods struggle for images that are degraded by noise, blurring or undersampling, all of which are common in microscopy. We focused the development of Cellpose3 on addressing these cases and here we demonstrate substantial out-of-the-box gains in segmentation and image quality for noisy, blurry and undersampled images. Unlike previous approaches that train models to restore pixel values, we trained Cellpose3 to output images that are well segmented by a generalist segmentation model, while maintaining perceptual similarity to the target images. Furthermore, we trained the restoration models on a large, varied collection of datasets, thus ensuring good generalization to user images. We provide these tools as 'one-click' buttons inside the graphical interface of Cellpose as well as in the Cellpose API.
Many biological applications require the segmentation of cell bodies, membranes and nuclei from microscopy images. Deep learning has enabled great progress on this problem, but current methods are specialized for images that have large training datasets. Here we introduce a generalist, deep learning-based segmentation algorithm called Cellpose, which can very precisely segment a wide range of image types out-of-the-box and does not require model retraining or parameter adjustments. We trained Cellpose on a new dataset of highly varied images of cells, containing over 70,000 segmented objects. To support community contributions to the training data, we developed software for manual labelling and for curation of the automated results, with optional direct upload to our data repository. Periodically retraining the model on the community-contributed data will ensure that Cellpose improves constantly.
Many biological applications require the segmentation of cell bodies, membranes and nuclei from microscopy images. Deep learning has enabled great progress on this problem, but current methods are specialized for images that have large training datasets. Here we introduce a generalist, deep learning-based segmentation method called Cellpose, which can precisely segment cells from a wide range of image types and does not require model retraining or parameter adjustments. Cellpose was trained on a new dataset of highly varied images of cells, containing over 70,000 segmented objects. We also demonstrate a three-dimensional (3D) extension of Cellpose that reuses the two-dimensional (2D) model and does not require 3D-labeled data. To support community contributions to the training data, we developed software for manual labeling and for curation of the automated results. Periodically retraining the model on the community-contributed data will ensure that Cellpose improves constantly.